Posted to dev@accumulo.apache.org by Sean Pines <SP...@potomacfusion.com> on 2012/06/19 23:04:40 UTC

Table could only be replicated to 0 nodes, instead of 1

Hey all,

I was recently trying to import a large amount of data into Accumulo. Around 13.63M entries in, I got the error below:

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/table_info/A00000ox.rf_tmp could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1520)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:665)
        at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

        org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/table_info/A00000ox.rf_tmp could only be replicated to 0 nodes, instead of 1
               at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1520)
               at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:665)
               at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
               at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
               at java.lang.reflect.Method.invoke(Method.java:597)
               at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
               at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
               at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
               at java.security.AccessController.doPrivileged(Native Method)
               at javax.security.auth.Subject.doAs(Subject.java:396)
               at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
               at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)

               at org.apache.hadoop.ipc.Client.call(Client.java:1107)
               at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
               at $Proxy0.addBlock(Unknown Source)
               at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
               at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
               at java.lang.reflect.Method.invoke(Method.java:597)
               at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
               at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
               at $Proxy0.addBlock(Unknown Source)
               at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3553)
               at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3421)
               at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2100(DFSClient.java:2627)
               at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2822)

I'm currently testing on a single-node VM. The Accumulo Overview page says that only 46.30M of the disk has been used and that the NameNode still has 3.87G of unreplicated capacity (1.19% used). I can't say I'm terribly familiar with Hadoop, and googling this error didn't turn up anything. Has anyone run into this before and figured out what causes it?

Thanks!
Sean

Re: Table could only be replicated to 0 nodes, instead of 1

Posted by Eric Newton <er...@gmail.com>.
Note the lack of "accumulo" in any of the class names below: HDFS itself is
failing, and the error never even reaches Accumulo.
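
In concrete terms, the namenode threw that exception because it could not
find a single live datanode with room for the new block. Running
hadoop dfsadmin -report will show you what the namenode sees; if you'd
rather check from code, here is a minimal sketch against the Hadoop
1.x-era client API (class and method names assumed from that release
line, so double-check them against your version):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeCheck {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml/hdfs-site.xml from the classpath, so run
        // this with the same Hadoop conf directory the cluster uses.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (!(fs instanceof DistributedFileSystem)) {
            System.err.println("default filesystem is not HDFS: " + fs.getUri());
            return;
        }
        // Ask the namenode for every registered datanode. "replicated to
        // 0 nodes" means this list is effectively empty: no live datanode
        // had room for the block.
        DatanodeInfo[] nodes = ((DistributedFileSystem) fs).getDataNodeStats();
        System.out.println("registered datanodes: " + nodes.length);
        for (DatanodeInfo node : nodes) {
            System.out.println(node.getName()
                + " remaining=" + (node.getRemaining() / (1024 * 1024)) + "MB");
        }
    }
}

If that prints zero datanodes, or shows your one datanode with no space
remaining, the failed import is exactly what you'd expect.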

We see this a lot... so Accumulo is built to be fairly immune to HDFS
failures. Immune to data loss, that is, not immune to the failure itself.
Best to check the general health of your machines and go through all of
the datanode logs.
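
A quick way to rule Accumulo out entirely is a bare HDFS write from the
same box; it goes through the same addBlock path as the trace above, so
it should fail with the identical message whenever a datanode is down or
full. A rough sketch (the /tmp/hdfs-write-test path is just a throwaway
name for this test):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/tmp/hdfs-write-test");  // arbitrary scratch path

        // create() + write + close forces the namenode to allocate a block
        // (the addBlock call in the trace above); if no datanode can take
        // it, this fails with the same "replicated to 0 nodes" exception.
        FSDataOutputStream out = fs.create(p, true);
        out.write(new byte[64 * 1024]);
        out.close();

        System.out.println("wrote " + fs.getFileStatus(p).getLen() + " bytes OK");
        fs.delete(p, true);
    }
}

If that fails too, the datanode log (hadoop-<user>-datanode-<host>.log
under the Hadoop log directory) usually names the real cause; a full
disk, a failed volume, or the datanode process simply not running are the
common ones on a single-node VM.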

-Eric

On Tue, Jun 19, 2012 at 5:04 PM, Sean Pines <SP...@potomacfusion.com> wrote:

> Hey all,
>
> I was recently trying to import a large amount of data into Accumulo.
> Around 13.63M entries in, I got the error below:
>
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/table_info/A00000ox.rf_tmp could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1520)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:665)
>         at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>         org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /accumulo/tables/!0/table_info/A00000ox.rf_tmp could only be replicated to 0 nodes, instead of 1
>                at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1520)
>                at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:665)
>                at sun.reflect.GeneratedMethodAccessor9.invoke(Unknown Source)
>                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>                at java.lang.reflect.Method.invoke(Method.java:597)
>                at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:557)
>                at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1434)
>                at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1430)
>                at java.security.AccessController.doPrivileged(Native Method)
>                at javax.security.auth.Subject.doAs(Subject.java:396)
>                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1157)
>                at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1428)
>
>                at org.apache.hadoop.ipc.Client.call(Client.java:1107)
>                at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
>                at $Proxy0.addBlock(Unknown Source)
>                at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
>                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>                at java.lang.reflect.Method.invoke(Method.java:597)
>                at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>                at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>                at $Proxy0.addBlock(Unknown Source)
>                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3553)
>                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3421)
>                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2100(DFSClient.java:2627)
>                at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2822)
>
> I'm currently testing on a single-node VM. The Accumulo Overview page says
> that only 46.30M of the disk has been used and that the NameNode still has
> 3.87G of unreplicated capacity (1.19% used). I can't say I'm terribly
> familiar with Hadoop, and googling this error didn't turn up anything. Has
> anyone run into this before and figured out what causes it?
>
> Thanks!
> Sean
>