Posted to common-user@hadoop.apache.org by jerrro <je...@gmail.com> on 2007/12/05 17:59:42 UTC

"could only be replicated to 0 nodes, instead of 1"

I am trying to install/configure Hadoop on a cluster with several computers.
I followed the instructions on the Hadoop website for configuring multiple
slaves exactly, and when I run start-all.sh I get no errors - both the
datanode and the tasktracker are reported to be running (doing ps awux |
grep hadoop on the slave nodes returns two java processes). Also, the log
files are empty - nothing is printed there. Still, when I try to use
bin/hadoop dfs -put, I get the following error:

# bin/hadoop dfs -put w.txt w.txt
put: java.io.IOException: File /user/scohen/w4.txt could only be replicated
to 0 nodes, instead of 1

and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).

I couldn't find much information about this error, but I did manage to see
somewhere that it might mean there are no datanodes running. But as I said,
start-all does not give any errors. Any ideas what could be the problem?

Thanks.

Jerr.


Re: "could only be replicated to 0 nodes, instead of 1"

Posted by John Menzer <st...@gmx.net>.
I had the same error message...
Can you describe when and how this error occurs?


Jayant Durgad wrote:
> 
> I am faced with the exact same problem described here; does anybody know
> how to resolve this?
> 



Re: "could only be replicated to 0 nodes, instead of 1"

Posted by Jayant Durgad <ja...@gmail.com>.
I am faced with the exact same problem described here; does anybody know how
to resolve this?

RE: "could only be replicated to 0 nodes, instead of 1"

Posted by Hairong Kuang <ha...@yahoo-inc.com>.
Check http://namenode_host:50070/dfshealth.jsp to see whether your cluster is
out of safe mode and how many datanodes are up.

You could also check the .out/.log files under the log directory to see if
there were any errors starting the datanodes/namenode.
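
For example, roughly the same checks from the shell (a sketch only, assuming
a standard 0.x layout; the exact log file names depend on your user and host
names):

# is the namenode still in safe mode?
bin/hadoop dfsadmin -safemode get

# scan the daemon logs for startup errors
grep -i -e error -e exception logs/hadoop-*-namenode-*.log
grep -i -e error -e exception logs/hadoop-*-datanode-*.log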

Hairong 

-----Original Message-----
From: jerrro [mailto:jerrro@gmail.com] 
Sent: Wednesday, December 05, 2007 9:29 AM
To: hadoop-user@lucene.apache.org
Subject: Re: "could only be replicated to 0 nodes, instead of 1"


I did this several times, while tuning the configuration in all kinds of
ways... But still, nothing helped - even when I stop everything, reformat
and start it back again, I get this error whenever trying to use dfs
-put.


Jason Venner-2 wrote:
> 
> This happens to me when the DFS has gotten into an inconsistent state.
> 
> NOTE: you will lose all of the contents of your HDFS file system.
> 
> What I have to do is stop DFS, remove the contents of the dfs 
> directories on all the machines, run hadoop namenode -format on the 
> controller, then restart DFS.
> That consistently fixes the problem for me. This may be serious 
> overkill but it works.
> 
> NOTE: you will lose all of the contents of your HDFS file system.
> 
> jerrro wrote:
>> I am trying to install/configure Hadoop on a cluster with several 
>> computers.
>> I followed the instructions on the Hadoop website for configuring 
>> multiple slaves exactly, and when I run start-all.sh I get no 
>> errors - both the datanode and the tasktracker are reported to be 
>> running (doing ps awux | grep hadoop on the slave nodes returns two 
>> java processes). Also, the log files are empty - nothing is printed 
>> there. Still, when I try to use bin/hadoop dfs -put, I get the 
>> following error:
>>
>> # bin/hadoop dfs -put w.txt w.txt
>> put: java.io.IOException: File /user/scohen/w4.txt could only be 
>> replicated to 0 nodes, instead of 1
>>
>> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
>>
>> I couldn't find much information about this error, but I did manage 
>> to see somewhere that it might mean there are no datanodes running. 
>> But as I said, start-all does not give any errors. Any ideas what 
>> could be the problem?
>>
>> Thanks.
>>
>> Jerr.
>>   
> 
> 



Re: "could only be replicated to 0 nodes, instead of 1"

Posted by jerrro <je...@gmail.com>.
I did this several times, while tuning the configuration in all kinds of
ways... But still, nothing helped -
even when I stop everything, reformat and start it back again, I get this
error whenever trying to use dfs -put.


Jason Venner-2 wrote:
> 
> This happens to me when the DFS has gotten into an inconsistent state.
> 
> NOTE: you will lose all of the contents of your HDFS file system.
> 
> What I have to do is stop DFS, remove the contents of the dfs 
> directories on all the machines, run hadoop namenode -format on the 
> controller, then restart DFS.
> That consistently fixes the problem for me. This may be serious overkill 
> but it works.
> 
> NOTE: you will lose all of the contents of your HDFS file system.
> 
> jerrro wrote:
>> I am trying to install/configure Hadoop on a cluster with several
>> computers.
>> I followed the instructions on the Hadoop website for configuring
>> multiple slaves exactly, and when I run start-all.sh I get no errors -
>> both the datanode and the tasktracker are reported to be running (doing
>> ps awux | grep hadoop on the slave nodes returns two java processes).
>> Also, the log files are empty - nothing is printed there. Still, when I
>> try to use bin/hadoop dfs -put, I get the following error:
>>
>> # bin/hadoop dfs -put w.txt w.txt
>> put: java.io.IOException: File /user/scohen/w4.txt could only be
>> replicated to 0 nodes, instead of 1
>>
>> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
>>
>> I couldn't find much information about this error, but I did manage to
>> see somewhere that it might mean there are no datanodes running. But as
>> I said, start-all does not give any errors. Any ideas what could be the
>> problem?
>>
>> Thanks.
>>
>> Jerr.
>>   
> 
> 



Re: "could only be replicated to 0 nodes, instead of 1"

Posted by Jason Venner <ja...@attributor.com>.
This happens to me when the DFS has gotten into an inconsistent state.

NOTE: you will lose all of the contents of your HDFS file system.

What I have to do is stop DFS, remove the contents of the dfs 
directories on all the machines, run hadoop namenode -format on the 
controller, then restart DFS.
That consistently fixes the problem for me. This may be serious overkill 
but it works.

NOTE: you will lose all of the contents of your HDFS file system.
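
For what it's worth, a minimal sketch of that procedure. The dfs directory
location here is an assumption - by default it lives under hadoop.tmp.dir
(e.g. /tmp/hadoop-${USER}/dfs), but check dfs.name.dir and dfs.data.dir in
your site config before deleting anything:

# WARNING: this destroys everything stored in HDFS
bin/stop-dfs.sh
# run this on the controller and on every datanode
rm -rf /tmp/hadoop-${USER}/dfs
# re-create an empty namespace, then bring DFS back up
bin/hadoop namenode -format
bin/start-dfs.sh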

jerrro wrote:
> I am trying to install/configure hadoop on a cluster with several computers.
> I followed exactly the instructions in the hadoop website for configuring
> multiple slaves, and when I run start-all.sh I get no errors - both datanode
> and tasktracker are reported to be running (doing ps awux | grep hadoop on
> the slave nodes returns two java processes). Also, the log files are empty -
> nothing is printed there. Still, when I try to use bin/hadoop dfs -put,
> I get the following error:
>
> # bin/hadoop dfs -put w.txt w.txt
> put: java.io.IOException: File /user/scohen/w4.txt could only be replicated
> to 0 nodes, instead of 1
>
> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
>
> I couldn't find much information about this error, but I did manage to see
> somewhere that it might mean there are no datanodes running. But as I said,
> start-all does not give any errors. Any ideas what could be the problem?
>
> Thanks.
>
> Jerr.
>   

Re: "could only be replicated to 0 nodes, instead of 1"

Posted by Raghu Angadi <ra...@yahoo-inc.com>.
jerrro wrote:
> 
> I couldn't find much information about this error, but I did manage to see
> somewhere that it might mean there are no datanodes running. But as I said,
> start-all does not give any errors. Any ideas what could be the problem?

start-all returning does not mean the datanodes are OK. Did you check whether 
any datanodes are alive? You can check from http://namenode:50070/.
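
If the web UI isn't reachable, the same check can be done from the shell (a
sketch; the exact output format varies by version):

bin/hadoop dfsadmin -report

If that reports no live datanodes, nothing has registered with the namenode,
which is exactly the situation that produces this error.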

Raghu.


Re: "could only be replicated to 0 nodes, instead of 1"

Posted by Arul Ganesh <ar...@gmail.com>.
Hi,
If you are getting this in a Windows environment (2003, 64-bit), we faced
the same problem. We tried the following steps and it started working:
1) Install Cygwin and ssh.
2) Download the stable Hadoop version - hadoop-0.17.2.1.tar.gz as of
13/Nov/2008.
3) Untar it via Cygwin (tar xvfz hadoop-0.17.2.1.tar.gz). Please DO NOT use
WinZip to untar.
4) Run the pseudo-distributed operation example provided in the quickstart
(http://hadoop.apache.org/core/docs/current/quickstart.html), as sketched
below - it worked for us.
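
From memory, the pseudo-distributed run in that quickstart looks roughly
like this (the examples jar name is a guess for 0.17.2.1 - use whatever
examples jar ships in the untarred directory):

bin/hadoop namenode -format
bin/start-all.sh
bin/hadoop dfs -put conf input
bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
bin/hadoop dfs -cat 'output/*'
bin/stop-all.sh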

Thanks
Arul and Limin
eBay Inc.,



jerrro wrote:
> 
> I am trying to install/configure hadoop on a cluster with several
> computers. I followed exactly the instructions in the hadoop website for
> configuring multiple slaves, and when I run start-all.sh I get no errors -
> both datanode and tasktracker are reported to be running (doing ps awux |
> grep hadoop on the slave nodes returns two java processes). Also, the log
> files are empty - nothing is printed there. Still, when I try to use
> bin/hadoop dfs -put,
> I get the following error:
> 
> # bin/hadoop dfs -put w.txt w.txt
> put: java.io.IOException: File /user/scohen/w4.txt could only be
> replicated to 0 nodes, instead of 1
> 
> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
> 
> I couldn't find much information about this error, but I did manage to see
> somewhere that it might mean there are no datanodes running. But as I
> said, start-all does not give any errors. Any ideas what could be the problem?
> 
> Thanks.
> 
> Jerr.
> 



Re: "could only be replicated to 0 nodes, instead of 1"

Posted by Hairong Kuang <ha...@yahoo-inc.com>.
Could you please go to the DFS web UI and check how many datanodes are up and
how much available space each one has?
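
If the UI shows all nodes as live, it is also worth checking the actual free
disk on each datanode - a node with no usable space still shows up as live
but cannot accept blocks. A rough sketch, assuming the default data
directory under /tmp and passwordless ssh to the slaves:

for h in $(cat conf/slaves); do
  ssh "$h" df -h /tmp/hadoop-${USER}
done

Keep in mind that the reserved-space setting (dfs.datanode.du.reserved in
the default config, if your version has it) also reduces the space a
datanode will offer to DFS.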

Hairong


On 5/8/08 3:30 AM, "jasongs" <ja...@synthasite.com> wrote:

> 
> I get the same error when doing a put, and my cluster is running OK,
> i.e. it has capacity and all nodes are live.
> The error message is
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /test/test.txt could only be replicated to 0 nodes, instead of 1
> 	at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
> 	at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
> 	at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:585)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)
> 
> 	at org.apache.hadoop.ipc.Client.call(Client.java:512)
> 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
> 	at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> 	at java.lang.reflect.Method.invoke(Method.java:585)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> 	at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:1487)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)
> I would appreciate any help/suggestions
> 
> Thanks
> 
> 
> jerrro wrote:
>> 
>> I am trying to install/configure hadoop on a cluster with several
>> computers. I followed exactly the instructions in the hadoop website for
>> configuring multiple slaves, and when I run start-all.sh I get no errors -
>> both datanode and tasktracker are reported to be running (doing ps awux |
>> grep hadoop on the slave nodes returns two java processes). Also, the log
>> files are empty - nothing is printed there. Still, when I try to use
>> bin/hadoop dfs -put,
>> I get the following error:
>> 
>> # bin/hadoop dfs -put w.txt w.txt
>> put: java.io.IOException: File /user/scohen/w4.txt could only be
>> replicated to 0 nodes, instead of 1
>> 
>> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
>> 
>> I couldn't find much information about this error, but I did manage to see
>> somewhere that it might mean there are no datanodes running. But as I
>> said, start-all does not give any errors. Any ideas what could be the problem?
>> 
>> Thanks.
>> 
>> Jerr.
>> 


Re: "could only be replicated to 0 nodes, instead of 1"

Posted by jasongs <ja...@synthasite.com>.
I get the same error when doing a put, and my cluster is running OK,
i.e. it has capacity and all nodes are live.
The error message is
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/test/test.txt could only be replicated to 0 nodes, instead of 1
	at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
	at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
	at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:585)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

	at org.apache.hadoop.ipc.Client.call(Client.java:512)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
	at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:585)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
	at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:1487)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)
I would appreciate any help/suggestions

Thanks


jerrro wrote:
> 
> I am trying to install/configure hadoop on a cluster with several
> computers. I followed exactly the instructions in the hadoop website for
> configuring multiple slaves, and when I run start-all.sh I get no errors -
> both datanode and tasktracker are reported to be running (doing ps awux |
> grep hadoop on the slave nodes returns two java processes). Also, the log
> files are empty - nothing is printed there. Still, when I try to use
> bin/hadoop dfs -put,
> I get the following error:
> 
> # bin/hadoop dfs -put w.txt w.txt
> put: java.io.IOException: File /user/scohen/w4.txt could only be
> replicated to 0 nodes, instead of 1
> 
> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
> 
> I couldn't find much information about this error, but I did manage to see
> somewhere that it might mean there are no datanodes running. But as I
> said, start-all does not give any errors. Any ideas what could be the problem?
> 
> Thanks.
> 
> Jerr.
> 
