Posted to common-user@hadoop.apache.org by jasongs <ja...@synthasite.com> on 2008/05/08 12:30:03 UTC

Re: "could only be replicated to 0 nodes, instead of 1"

I get the same error when doing a put, even though my cluster is running
OK, i.e. it has capacity and all nodes are live. The error message is:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/test/test.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
    at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

    at org.apache.hadoop.ipc.Client.call(Client.java:512)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
    at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:1487)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)
I would appreciate any help or suggestions.

Thanks
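
A quick way to sanity-check the "all nodes are live" claim is to ask the
namenode directly. This is a minimal sketch, assuming a 0.16/0.17-era
Hadoop install run from the installation directory:

  # Prints configured and remaining DFS capacity, plus one entry per
  # datanode the namenode currently knows about; if no datanodes are
  # listed, "replicated to 0 nodes" is exactly the error you will see.
  bin/hadoop dfsadmin -report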


jerrro wrote:
> 
> I am trying to install and configure Hadoop on a cluster with several
> computers. I followed the instructions on the Hadoop website for
> configuring multiple slaves exactly, and when I run start-all.sh I get
> no errors - both the datanode and the tasktracker are reported to be
> running (doing ps awux | grep hadoop on the slave nodes returns two
> Java processes). Also, the log files are empty - nothing is printed
> there. Still, when I try to use bin/hadoop dfs -put, I get the
> following error:
> 
> # bin/hadoop dfs -put w.txt w.txt
> put: java.io.IOException: File /user/scohen/w4.txt could only be
> replicated to 0 nodes, instead of 1
> 
> and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).
> 
> I couldn't find much information about this error, but I did manage to
> see somewhere that it might mean there are no datanodes running. But as
> I said, start-all.sh does not give any errors. Any ideas what the
> problem could be?
> 
> Thanks.
> 
> Jerr.
> 
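
A datanode process can be up without ever having registered with the
namenode, which would match the symptoms above: the Java processes
exist, but the namenode knows of zero datanodes, so no node can accept
a block. One common culprit is the filesystem URI in the configuration.
Below is a minimal hadoop-site.xml sketch for a 0.16/0.17-era cluster;
"namenode-host" is a placeholder, not a value taken from this thread:

  <configuration>
    <!-- Every node must agree on this address; a wrong or unresolvable
         hostname leaves the namenode with zero registered datanodes. -->
    <property>
      <name>fs.default.name</name>
      <value>hdfs://namenode-host:9000</value>
    </property>
    <!-- Target replication; a put fails when fewer live datanodes
         exist than this value (here 1 is needed, 0 are available). -->
    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>
  </configuration>

If the configuration looks right, the datanode logs on each slave
(logs/hadoop-*-datanode-*.log) usually say why registration failed; an
"Incompatible namespaceIDs" message after reformatting the namenode is
a classic cause.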



Re: "could only be replicated to 0 nodes, instead of 1"

Posted by Hairong Kuang <ha...@yahoo-inc.com>.
Could you please go to the DFS web UI and check how many datanodes are
up and how much available space each one has?

Hairong
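
The web UI Hairong mentions is served by the namenode itself, by
default on port 50070, so for a stock 0.16/0.17-era configuration the
check looks roughly like this ("namenode-host" is again a placeholder):

  http://namenode-host:50070/dfshealth.jsp

That page lists live and dead datanodes along with the configured,
used, and remaining space on each, which answers both questions above.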


On 5/8/08 3:30 AM, "jasongs" <ja...@synthasite.com> wrote:

> 
> I get the same error when doing a put, even though my cluster is running
> OK, i.e. it has capacity and all nodes are live. The error message is:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
> /test/test.txt could only be replicated to 0 nodes, instead of 1
> [...]