Posted to dev@accumulo.apache.org by David Medinets <da...@gmail.com> on 2013/05/10 19:53:29 UTC

File X could only be replicated to 0 nodes instead of 1

I tried an install of Accumulo 1.4.3 and am seeing the following message when I run
'accumulo init', without any logs being generated. Both Hadoop and ZooKeeper
seem to be running OK. Any ideas where I should look to resolve this?

2013-05-10 13:43:54,894 [hdfs.DFSClient] WARN : DataStreamer Exception:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
/user/accumulo/accumulo/tables/!0/root_tablet/00000_00000.rf could only be
replicated to 0 nodes, instead of 1
    at
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:416)
    at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1070)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at sun.proxy.$Proxy1.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:616)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at sun.proxy.$Proxy1.addBlock(Unknown Source)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
    at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)

Re: File X could only be replicated to 0 nodes instead of 1

Posted by John Vines <vi...@apache.org>.
Do you have HADOOP_HOME set to the same Hadoop that is running? Are you
sure Hadoop is running fine? Have you tried putting a file into HDFS directly?
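A quick way to test that last suggestion is a manual round-trip through HDFS. This is only a sketch: the helper name is made up here, and it assumes the `hadoop` client from the running cluster is on the PATH (Hadoop 1.x-era syntax).

```shell
#!/bin/sh
# Hypothetical HDFS smoke test: write a file, read it back, delete it.
# If the -put fails with "could only be replicated to 0 nodes", the
# problem is in HDFS itself, not in Accumulo.
hdfs_smoke_test() {
  echo "hdfs smoke test" > /tmp/hdfs-smoke.txt
  hadoop fs -put /tmp/hdfs-smoke.txt /tmp/hdfs-smoke.txt &&
    hadoop fs -cat /tmp/hdfs-smoke.txt &&
    hadoop fs -rm /tmp/hdfs-smoke.txt
}
# hdfs_smoke_test   # uncomment on a machine with a running cluster
```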


On Fri, May 10, 2013 at 1:53 PM, David Medinets <da...@gmail.com>wrote:

> I tried an install of 1.4.3 and am seeing the following message when I run
> 'accumulo init' without any logs being generated. Both hadoop and zookeeper
> seem to be running OK. Any ideas where I should look to resolve this?
>

Re: File X could only be replicated to 0 nodes instead of 1

Posted by John Vines <vi...@apache.org>.
That shouldn't do it, since init will idle until HDFS is out of safe mode.

Sent from my phone, please pardon the typos and brevity.
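For anyone who wants to rule safe mode out explicitly anyway, a sketch (command spelling per Hadoop 1.x; `dfsadmin` moved under the `hdfs` command in later versions, so adjust as needed):

```shell
#!/bin/sh
# Hypothetical helper: block until the namenode reports safe mode is OFF,
# then it is safe to run 'accumulo init' by hand.
wait_for_safemode_off() {
  while hadoop dfsadmin -safemode get | grep -q 'ON'; do
    sleep 2
  done
}
# wait_for_safemode_off && accumulo init
```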
On May 12, 2013 10:30 AM, "Josh Elser" <jo...@gmail.com> wrote:

> Looking at the last commit you made, I would've guessed it was the 'sleep
> 60' that you added after starting Hadoop.
>
> But that's just an outsider's glance :)
>
> On Sunday, May 12, 2013, David Medinets wrote:
>
> > I think ... and I am not sure about this at all .. that one of the
> accumulo
> > v1.6.0 processes was still running while I was deleting directories and
> > re-installing software. I changed how my installation process stops
> > processes - it now checks the output from 'jps' instead of relying on a
> pid
> > file. Since my install process wipes out related directories, getting the
> > processes to close cleanly is not important. So I simply 'kill -9' them.
> >
> >
> > On Sat, May 11, 2013 at 11:54 PM, John Vines <vi...@apache.org> wrote:
> >
> > > Do you mind explicitly pointing out what was wrong and how you fixed it
> > so
> > > when people search for this issue they can easily find the resolution?
> > >
> > > Sent from my phone, please pardon the typos and brevity.
> > > On May 11, 2013 11:08 PM, "David Medinets" <da...@gmail.com>
> > > wrote:
> > >
> > > > Resolution: I had some part of the installation out of order. A
> working
> > > > installation script for v1.4.3 is at
> > > > https://github.com/medined/accumulo-at-home/tree/master/1.4.3
> > > > in the v1.4.3 directory.
> > > >
> > > >
> > > > On Sat, May 11, 2013 at 11:12 AM, Eric Newton <eric.newton@gmail.com
> >
> > > > wrote:
> > > >
> > > > > Check your datanode logs... it's probably not running.
> > > > >
> > > > > -Eric
> > > > >
> > > > >
> > > > > On Fri, May 10, 2013 at 1:53 PM, David Medinets <
> > > > david.medinets@gmail.com
> > > > > >wrote:
> > > > >
> > > > > > I tried an install of 1.4.3 and am seeing the following message
> > when
> > > I
> > > > > run
> > > > > > 'accumulo init' without any logs being generated. Both hadoop and
> > > > > zookeeper
> > > > > > seem to be running OK. Any ideas where I should look to resolve
> > this?
> > > > > >

Re: File X could only be replicated to 0 nodes instead of 1

Posted by Josh Elser <jo...@gmail.com>.
Looking at the last commit you made, I would've guessed it was the 'sleep
60' that you added after starting Hadoop.

But that's just an outsider's glance :)

On Sunday, May 12, 2013, David Medinets wrote:

> I think ... and I am not sure about this at all .. that one of the accumulo
> v1.6.0 processes was still running while I was deleting directories and
> re-installing software. I changed how my installation process stops
> processes - it now checks the output from 'jps' instead of relying on a pid
> file. Since my install process wipes out related directories, getting the
> processes to close cleanly is not important. So I simply 'kill -9' them.
>
>
> On Sat, May 11, 2013 at 11:54 PM, John Vines <vi...@apache.org> wrote:
>
> > Do you mind explicitly pointing out what was wrong and how you fixed it
> so
> > when people search for this issue they can easily find the resolution?
> >
> > Sent from my phone, please pardon the typos and brevity.
> > On May 11, 2013 11:08 PM, "David Medinets" <da...@gmail.com>
> > wrote:
> >
> > > Resolution: I had some part of the installation out of order. A working
> > > installation script for v1.4.3 is at
> > > https://github.com/medined/accumulo-at-home/tree/master/1.4.3
> > > in the v1.4.3 directory.
> > >
> > >
> > > On Sat, May 11, 2013 at 11:12 AM, Eric Newton <er...@gmail.com>
> > > wrote:
> > >
> > > > Check your datanode logs... it's probably not running.
> > > >
> > > > -Eric
> > > >
> > > >
> > > > On Fri, May 10, 2013 at 1:53 PM, David Medinets <
> > > david.medinets@gmail.com
> > > > >wrote:
> > > >
> > > > > I tried an install of 1.4.3 and am seeing the following message
> when
> > I
> > > > run
> > > > > 'accumulo init' without any logs being generated. Both hadoop and
> > > > zookeeper
> > > > > seem to be running OK. Any ideas where I should look to resolve
> this?
> > > > >

Re: File X could only be replicated to 0 nodes instead of 1

Posted by David Medinets <da...@gmail.com>.
I think ... and I am not sure about this at all ... that one of the Accumulo
v1.6.0 processes was still running while I was deleting directories and
re-installing software. I changed how my installation process stops
processes - it now checks the output from 'jps' instead of relying on a pid
file. Since my install process wipes out related directories, getting the
processes to close cleanly is not important, so I simply 'kill -9' them.
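The jps-based cleanup described above can be sketched roughly like this; the helper name and the process names matched are examples only and depend on which daemons a given install actually runs:

```shell
#!/bin/sh
# Hypothetical cleanup: select PIDs out of 'jps' output instead of
# trusting stale pid files. Reads "PID Name" lines (jps format) on
# stdin and prints the PIDs of matching Java processes.
pids_to_kill() {
  awk '/NameNode|DataNode|QuorumPeerMain|Main/ {print $1}'
}
# Intended usage (destructive - only sane when the data dirs are about
# to be wiped anyway, as in the install script above):
# jps | pids_to_kill | xargs -r kill -9
```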


On Sat, May 11, 2013 at 11:54 PM, John Vines <vi...@apache.org> wrote:

> Do you mind explicitly pointing out what was wrong and how you fixed it so
> when people search for this issue they can easily find the resolution?
>
> Sent from my phone, please pardon the typos and brevity.
> On May 11, 2013 11:08 PM, "David Medinets" <da...@gmail.com>
> wrote:
>
> > Resolution: I had some part of the installation out of order. A working
> > installation script for v1.4.3 is at
> > https://github.com/medined/accumulo-at-home/tree/master/1.4.3
> > in the v1.4.3 directory.
> >
> >
> > On Sat, May 11, 2013 at 11:12 AM, Eric Newton <er...@gmail.com>
> > wrote:
> >
> > > Check your datanode logs... it's probably not running.
> > >
> > > -Eric
> > >
> > >
> > > On Fri, May 10, 2013 at 1:53 PM, David Medinets <
> > david.medinets@gmail.com
> > > >wrote:
> > >
> > > > I tried an install of 1.4.3 and am seeing the following message when
> I
> > > run
> > > > 'accumulo init' without any logs being generated. Both hadoop and
> > > zookeeper
> > > > seem to be running OK. Any ideas where I should look to resolve this?
> > > >

Re: File X could only be replicated to 0 nodes instead of 1

Posted by John Vines <vi...@apache.org>.
Do you mind explicitly pointing out what was wrong and how you fixed it so
when people search for this issue they can easily find the resolution?

Sent from my phone, please pardon the typos and brevity.
On May 11, 2013 11:08 PM, "David Medinets" <da...@gmail.com> wrote:

> Resolution: I had some part of the installation out of order. A working
> installation script for v1.4.3 is at
> https://github.com/medined/accumulo-at-home/tree/master/1.4.3
> in the v1.4.3 directory.
>
>
> On Sat, May 11, 2013 at 11:12 AM, Eric Newton <er...@gmail.com>
> wrote:
>
> > Check your datanode logs... it's probably not running.
> >
> > -Eric
> >
> >
> > On Fri, May 10, 2013 at 1:53 PM, David Medinets <
> david.medinets@gmail.com
> > >wrote:
> >
> > > I tried an install of 1.4.3 and am seeing the following message when I
> > run
> > > 'accumulo init' without any logs being generated. Both hadoop and
> > zookeeper
> > > seem to be running OK. Any ideas where I should look to resolve this?
> > >

Re: File X could only be replicated to 0 nodes instead of 1

Posted by David Medinets <da...@gmail.com>.
Resolution: I had some part of the installation out of order. A working
installation script for v1.4.3 is in the 1.4.3 directory at
https://github.com/medined/accumulo-at-home/tree/master/1.4.3


On Sat, May 11, 2013 at 11:12 AM, Eric Newton <er...@gmail.com> wrote:

> Check your datanode logs... it's probably not running.
>
> -Eric
>
>
> On Fri, May 10, 2013 at 1:53 PM, David Medinets <david.medinets@gmail.com
> >wrote:
>
> > I tried an install of 1.4.3 and am seeing the following message when I
> run
> > 'accumulo init' without any logs being generated. Both hadoop and
> zookeeper
> > seem to be running OK. Any ideas where I should look to resolve this?
> >

Re: File X could only be replicated to 0 nodes instead of 1

Posted by Eric Newton <er...@gmail.com>.
Check your datanode logs... it's probably not running.

-Eric
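One way to check that hypothesis is to ask the namenode how many datanodes it can see, then read the datanode's own log for why it died. The parsing below assumes the `hadoop dfsadmin -report` output format of Hadoop 1.x ("Datanodes available: N (...)"), which may differ in other versions, and the log path is an example:

```shell
#!/bin/sh
# Hypothetical check: print the number of live datanodes from a
# 'hadoop dfsadmin -report' dump read on stdin. A result of 0 would
# explain the "could only be replicated to 0 nodes" error exactly.
live_datanodes() {
  awk -F': ' '/Datanodes available/ {print $2+0}'
}
# hadoop dfsadmin -report | live_datanodes
# tail -n 50 "$HADOOP_HOME"/logs/hadoop-*-datanode-*.log
```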


On Fri, May 10, 2013 at 1:53 PM, David Medinets <da...@gmail.com>wrote:

> I tried an install of 1.4.3 and am seeing the following message when I run
> 'accumulo init' without any logs being generated. Both hadoop and zookeeper
> seem to be running OK. Any ideas where I should look to resolve this?
>