Posted to common-user@hadoop.apache.org by Sean Laurent <or...@gmail.com> on 2008/11/06 07:45:32 UTC

Quickstart: only replicated to 0 nodes

So I'm new to Hadoop and I have been trying unsuccessfully to work
through the Quickstart tutorial to get a single node working in
pseudo-distributed mode. I can't seem to put data into HDFS using
release 0.18.2 under Java 1.6.0_04-b12:

$ bin/hadoop fs -put conf input
08/11/05 18:32:23 INFO dfs.DFSClient:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/slaurent/input/commons-logging.properties could only be
replicated to 0 nodes, instead of 1
...

The dfshealth jsp page reports 1 live datanode. The strange thing is
that the node is listed as "dkz216" with a URL of
"http://dkz216.neoplus.adsl.tpnet.pl:50075/browseDirectory.jsp?namenodeInfoPort=50070&dir=%2F"...
I'm not sure where that name came from.
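
Presumably it comes from a reverse DNS lookup of this machine's
public IP (the 83.24.29.216 that shows up in the logs below). A quick
way to check, assuming the standard hostname and host utilities are
available:

$ hostname            # what the machine thinks it is called
$ host 83.24.29.216   # reverse lookup of the public IP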

There are no errors in the log files other than the replication
error. However, I do see one other oddity in the datanode logfile:

---hadoop-user-datanode-server.log---
2008-11-05 18:32:28,317 INFO org.apache.hadoop.dfs.DataNode:
dnRegistration =
DatanodeRegistration(dkz216.neoplus.adsl.tpnet.pl:50010, storageID=,
infoPort=50075, ipcPort=50020)
2008-11-05 18:32:28,317 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 50020: starting
2008-11-05 18:32:28,443 INFO org.apache.hadoop.dfs.DataNode: New
storage id DS-2140500399-83.24.29.216-50010-1225931548407 is assigned
to data-node 127.0.0.1:50010
2008-11-05 18:32:28,444 INFO org.apache.hadoop.dfs.DataNode:
DatanodeRegistration(127.0.0.1:50010,
storageID=DS-2140500399-83.24.29.216-50010-1225931548407,
infoPort=50075, ipcPort=50020)In DataNode.run, data =
FSDataset{dirpath='/tmp/hadoop-slaurent/dfs/data/current'}
---hadoop-user-datanode-server.log---


Here are my config files:

---hadoop-site.xml---
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://127.0.0.1:9000/</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>127.0.0.1:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
---hadoop-site.xml---

---masters----
127.0.0.1
---masters----

---slaves---
127.0.0.1
---slaves---

I originally started with localhost everywhere but then switched to
127.0.0.1 to see if that helped. No luck. I can't seem to copy any
files to HDFS.
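
In case it helps with debugging: I've also been checking datanode
status from the command line. If I'm reading the 0.18 docs right,
this prints capacity and remaining space for each datanode:

$ bin/hadoop dfsadmin -report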

Any suggestions would be greatly appreciated!

-Sean

Re: Quickstart: only replicated to 0 nodes

Posted by Sean Laurent <or...@gmail.com>.
On Thu, Nov 6, 2008 at 12:45 AM, Sean Laurent <or...@gmail.com> wrote:
>
> So I'm new to Hadoop and I have been trying unsuccessfully to work
> through the Quickstart tutorial to get a single node working in
> pseudo-distributed mode. I can't seem to put data into HDFS using
> release 0.18.2 under Java 1.6.0_04-b12:
>
> $ bin/hadoop fs -put conf input
> 08/11/05 18:32:23 INFO dfs.DFSClient:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /user/slaurent/input/commons-logging.properties could only be
> replicated to 0 nodes, instead of 1
> ...

So I finally discovered my problems... :)

First, I didn't have an entry in /etc/hosts for my machine name.
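
For the record, the fix was a line like the following in /etc/hosts
(the IP and hostname here are placeholders for my actual ones):

---/etc/hosts---
127.0.0.1      localhost
192.168.1.10   myhost.example.com   myhost
---/etc/hosts---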

Second (and far more important), the HDFS data was getting created in
/tmp, and the partition on which /tmp resides was running out of disk
space. Once I moved the HDFS storage to a partition with enough
space, my replication problems went away.
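
Concretely: the data ends up under hadoop.tmp.dir, which defaults to
/tmp/hadoop-${user.name} (that matches the /tmp/hadoop-slaurent path
in my datanode log). Pointing it at a bigger partition in
hadoop-site.xml fixed it; the path below is just an example:

---hadoop-site.xml---
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop-${user.name}</value>
  </property>
---hadoop-site.xml---

Note that after moving it you'll probably need to re-format the
namenode (bin/hadoop namenode -format) and restart the daemons, since
the old filesystem metadata still lives under the old location.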

I have to admit that it kinda seems like a bug that Hadoop never gave
me ANY indication that I was out of disk space.

-Sean