Posted to common-user@hadoop.apache.org by Ted Pedersen <tp...@d.umn.edu> on 2011/03/04 23:21:54 UTC

Question on Error: could only be replicated to 0 nodes, instead of 1

Greetings all,

I get the following error at seemingly irregular intervals when I run
this command:

hadoop fs -put /scratch1/tdp/data/* input

(The data is a few hundred files of wikistats data, about 75GB in total).

11/03/04 15:55:05 WARN hdfs.DFSClient: DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/pedersen/input/pagecounts-20110129-020001 could only be replicated
to 0 nodes, instead of 1
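In case it helps with diagnosis, here is roughly what I can run to collect
more information when this happens. I'm assuming the standard dfsadmin and
fsck commands are available in our install; /scratch1 is simply where the
source data lives on the node doing the -put:

# see how much HDFS capacity is reported and how many datanodes the
# namenode actually considers live
hadoop dfsadmin -report

# check overall filesystem health and under-replicated blocks
hadoop fsck /

# check local disk space on the machine running the -put
df -h /scratch1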

I've searched around on the error message and found a lot of postings,
but they seem to be as irregular as the error itself (both in terms of
explanations and fixes):

http://www.mail-archive.com/common-user@hadoop.apache.org/msg00407.html
http://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment
http://www.phacai.com/hadoop-error-could-only-be-replicated-to-0-nodes-instead-of-1
http://permalink.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/20198

Is there a currently understood "best" explanation for this error, and a
preferred way to resolve it? We are running in fully distributed mode here...
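If datanode-side details would be useful, these are the sorts of things I
can collect and post; the log path below is a guess at the default layout
rather than something I've verified on our cluster:

# confirm the Hadoop version we're running
hadoop version

# on each slave, check that the DataNode JVM is actually up
jps

# and look for recent errors in the datanode logs
tail -100 $HADOOP_HOME/logs/hadoop-*-datanode-*.log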

Thanks!
Ted

-- 
Ted Pedersen
http://www.d.umn.edu/~tpederse