Posted to mapreduce-user@hadoop.apache.org by Steve Lewis <lo...@gmail.com> on 2010/06/22 21:55:41 UTC
HDFS Errors
When I run
hadoop fs -copyFromLocal small_yeast /user/training/small_yeast
I get:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/training/small_yeast/yeast_chrXIV00000006.sam.gz could only be
replicated to 0 nodes, instead of 1
at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1267)
at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:434)
at sun.reflect.GeneratedMethodAccessor841.invoke(Unknown Source)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
...
Anyone seen this and know how to fix it?
I am on a 4-node virtual Cloudera cluster.
--
Steven M. Lewis PhD
Institute for Systems Biology
Seattle WA
Re: HDFS Errors
Posted by Steve Lewis <lo...@gmail.com>.
No, I have been using it for about two weeks and have many dozens of files, but it
may be close to full.
On Jun 22, 2010 3:14 PM, "Allen Wittenauer" <aw...@linkedin.com>
wrote:
On Jun 22, 2010, at 1:58 PM, Steve Lewis wrote:
> training@hadoop1:~$ hadoop dfsadmin -safemode ge...
OK, so you are out of safemode.
>
> training@hadoop1:~$ hadoop dfsadmin -refreshNodes
This just re-reads the list of nodes. hadoop dfsadmin -report might be more
useful.
By chance, is this the first file you've tried writing to this hdfs?
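Since a nearly-full cluster is the suspicion here, it is worth noting that a datanode whose disk drops below its reserved-space threshold stops accepting new blocks, which produces exactly this "replicated to 0 nodes" error. A minimal check, sketched on captured sample output (the data-directory path is hypothetical, and the parsing assumes the usual `df` column layout):

```shell
# On each datanode you would check the filesystem holding the data dir:
#   df -h /var/lib/hadoop/dfs/data     # hypothetical path
# Parsing sketch on a sample df line, so the logic runs anywhere:
df_line='/dev/sda1  50G  49G  1.0G  98% /'
pct=$(printf '%s\n' "$df_line" | awk '{gsub(/%/,"",$5); print $5}')
if [ "$pct" -ge 95 ]; then
  echo "filesystem ${pct}% full - datanode may refuse new blocks"
fi
```

On a healthy cluster the same information is visible cluster-wide via `hadoop dfsadmin -report` (the DFS Used / DFS Remaining lines).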
Re: HDFS Errors
Posted by Allen Wittenauer <aw...@linkedin.com>.
On Jun 22, 2010, at 1:58 PM, Steve Lewis wrote:
> training@hadoop1:~$ hadoop dfsadmin -safemode get
> Safe mode is OFF
OK, so you are out of safemode.
>
> training@hadoop1:~$ hadoop dfsadmin -refreshNodes
This just re-reads the list of nodes. hadoop dfsadmin -report might be more useful.
By chance, is this the first file you've tried writing to this hdfs?
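As a quick way to act on this suggestion, the live-datanode count can be pulled out of the report with standard text tools. This is a sketch only: the "Datanodes available: N (N total, M dead)" wording is from Hadoop 0.20-era output and may differ across versions.

```shell
# On the cluster itself you would run:
#   hadoop dfsadmin -report
# Parsing sketch on a sample report line, runnable anywhere:
report='Datanodes available: 0 (4 total, 4 dead)'
live=$(printf '%s\n' "$report" | sed -n 's/.*Datanodes available: \([0-9]*\).*/\1/p')
if [ "$live" -eq 0 ]; then
  echo "no live datanodes: writes will fail with 'replicated to 0 nodes'"
else
  echo "$live live datanode(s) reported"
fi
```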
Re: HDFS Errors
Posted by Steve Lewis <lo...@gmail.com>.
training@hadoop1:~$ hadoop dfsadmin -safemode get
Safe mode is OFF
training@hadoop1:~$ hadoop dfsadmin -refreshNodes
training@hadoop1:~$ hadoop fs -copyFromLocal small_yeast
/user/training/small_yeast
^CcopyFromLocal: Filesystem closed
with 1 file copied, then the same error.
On Tue, Jun 22, 2010 at 1:03 PM, Allen Wittenauer
<aw...@linkedin.com> wrote:
>
> On Jun 22, 2010, at 12:55 PM, Steve Lewis wrote:
> > /user/training/small_yeast/yeast_chrXIV00000006.sam.gz could only be
> replicated to 0 nodes, instead of 1
>
> ... almost always means the namenode doesn't think it has any viable
> datanodes (anymore).
>
> > Anyone seen this and know how to fix it?
> > I am on a 4-node virtual Cloudera cluster.
>
> Check the namenode UI and see if it is in safemode, how many live datanodes
> you have, etc.
--
Steven M. Lewis PhD
Institute for Systems Biology
Seattle WA
Re: HDFS Errors
Posted by Allen Wittenauer <aw...@linkedin.com>.
On Jun 22, 2010, at 12:55 PM, Steve Lewis wrote:
> /user/training/small_yeast/yeast_chrXIV00000006.sam.gz could only be replicated to 0 nodes, instead of 1
... almost always means the namenode doesn't think it has any viable datanodes (anymore).
> Anyone seen this and know how to fix it?
> I am on a 4-node virtual Cloudera cluster.
Check the namenode UI and see if it is in safemode, how many live datanodes you have, etc.