Posted to common-user@hadoop.apache.org by Arul Ganesh <ar...@gmail.com> on 2008/11/13 21:28:33 UTC

Re: "could only be replicated to 0 nodes, instead of 1"

Hi,
If you are seeing this error in a Windows environment (Windows Server 2003,
64-bit), we ran into the same problem. The following steps got it working for us:
1) Install Cygwin and ssh.
2) Download the stable Hadoop release, hadoop-0.17.2.1.tar.gz as of
13/Nov/2008.
3) Untar it via Cygwin (tar xvfz hadoop-0.17.2.1.tar.gz). Please DO NOT use
WinZip to untar it.
4) Run the pseudo-distributed example from the Quickstart
(http://hadoop.apache.org/core/docs/current/quickstart.html); it worked for us.
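The steps above boil down to roughly the following shell session inside Cygwin. This is a sketch based on the 0.17 Quickstart, not a verbatim transcript; the JAVA_HOME path is an example you will need to adjust for your machine:

```shell
# Inside a Cygwin shell on Windows: untar with GNU tar, not WinZip
tar xvfz hadoop-0.17.2.1.tar.gz
cd hadoop-0.17.2.1

# Point Hadoop at your JDK (example path; normally set in conf/hadoop-env.sh)
export JAVA_HOME=/cygdrive/c/jdk1.6.0

# Pseudo-distributed quickstart: format HDFS, start the daemons,
# then run the bundled grep example and read its output
bin/hadoop namenode -format
bin/start-all.sh
bin/hadoop dfs -put conf input
bin/hadoop jar hadoop-0.17.2.1-examples.jar grep input output 'dfs[a-z.]+'
bin/hadoop dfs -cat output/*
bin/stop-all.sh
```

The -put step is where the "could only be replicated to 0 nodes" error would surface if the datanode is not actually up, so it doubles as a smoke test.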

Thanks
Arul and Limin
eBay Inc.,



jerrro wrote:
> 
> I am trying to install/configure hadoop on a cluster with several
> computers. I followed exactly the instructions in the hadoop website for
> configuring multiple slaves, and when I run start-all.sh I get no errors -
> both datanode and tasktracker are reported to be running (doing ps awux |
> grep hadoop on the slave nodes returns two java processes). Also, the log
> files are empty - nothing is printed there. Still, when I try to use
> bin/hadoop dfs -put,
> I get the following error:
> 
> # bin/hadoop dfs -put w.txt w.txt
> put: java.io.IOException: File /user/scohen/w4.txt could only be
> replicated to 0 nodes, instead of 1
> 
> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
> 
> I couldn't find much information about this error, but I did see somewhere
> that it might mean no datanodes are running. But as I said, start-all does
> not give any errors. Any ideas what the problem could be?
> 
> Thanks.
> 
> Jerr.
> 
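A quick way to check whether the NameNode actually sees any live datanodes (the usual cause of this error) is dfsadmin -report. A diagnostic sketch, run from the Hadoop install directory; the master hostname and port 9000 are example values from a typical hadoop-site.xml, not taken from the thread:

```shell
# Ask the NameNode how many datanodes have registered with it.
# "Datanodes available: 0" would confirm why replication to 1 node fails.
bin/hadoop dfsadmin -report

# If it reports 0 datanodes despite the java processes running, check the
# datanode logs on each slave. A common culprit is an "Incompatible
# namespaceIDs" error after re-formatting the namenode:
grep -i "namespaceID" logs/hadoop-*-datanode-*.log

# Also confirm each slave can reach the NameNode's RPC port
# (host and port here are examples; use the fs.default.name from your config):
telnet master 9000
```

If the namespaceIDs disagree, wiping the datanode's dfs.data.dir and restarting it lets it re-register with the freshly formatted namenode.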

-- 
View this message in context: http://www.nabble.com/%22could-only-be-replicated-to-0-nodes%2C-instead-of-1%22-tp14175780p20488938.html
Sent from the Hadoop lucene-users mailing list archive at Nabble.com.