Posted to user@accumulo.apache.org by Supun Kamburugamuva <su...@gmail.com> on 2013/03/29 16:57:00 UTC

Error while trying to initialize

Hi All,

I'm using a trunk build, and when I try to init Accumulo it gives the
following exception.

2013-03-29 11:54:47,842 [util.NativeCodeLoader] INFO : Loaded the
native-hadoop library
2013-03-29 11:54:47,884 [hdfs.DFSClient] WARN : DataStreamer
Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException:
File /accumulo/tables/!0/root_tablet/00000_00000.rf could only be
replicated to 0 nodes, instead of 1
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
	at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

	at org.apache.hadoop.ipc.Client.call(Client.java:1070)
	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
	at $Proxy5.addBlock(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
	at $Proxy5.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3510)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3373)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2600(DFSClient.java:2589)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2829)


Thanks,
Supun..

Re: Error while trying to initialize

Posted by Supun Kamburugamuva <su...@gmail.com>.
Thank you all for the help. I was able to get the latest release working.

Supun..


-- 
Supun Kamburugamuva
Member, Apache Software Foundation; http://www.apache.org
E-mail: supun06@gmail.com;  Mobile: +1 812 369 6762
Blog: http://supunk.blogspot.com

Re: Error while trying to initialize

Posted by Eric Newton <er...@gmail.com>.
How many tablet servers (and loggers in 1.4.x) are showing up in the
monitor?

If zero, check to make sure the write-ahead log directory exists on all
slave nodes.  By default, this will be $ACCUMULO_HOME/walogs.
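
For example, a quick way to create it everywhere (a minimal sketch; it
assumes ACCUMULO_HOME is set to the same path on every node and that
conf/slaves lists your slave hosts):

$ for host in $(cat $ACCUMULO_HOME/conf/slaves); do \
    ssh $host mkdir -p $ACCUMULO_HOME/walogs; done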

-Eric



Re: Error while trying to initialize

Posted by Supun Kamburugamuva <su...@gmail.com>.
Here is my jps -lm output. It seems all of them are running. I've started
ZooKeeper in the foreground, and I can see it is running.

27457 org.apache.hadoop.hdfs.server.namenode.NameNode
8394 org.apache.accumulo.start.Main gc --address localhost
10536 sun.tools.jps.Jps -lm
8504 org.apache.accumulo.start.Main tracer --address localhost
2142 com.intellij.idea.Main
28109 org.apache.hadoop.mapred.JobTracker
27732 org.apache.hadoop.hdfs.server.datanode.DataNode
19888 org.jetbrains.idea.maven.server.RemoteMavenServer
8304 org.apache.accumulo.start.Main master --address localhost
28387 org.apache.hadoop.mapred.TaskTracker
6590 org.apache.zookeeper.server.quorum.QuorumPeerMain
/home/supun/dev/apache/zookeeper-3.4.5/bin/../conf/zoo.cfg
28019 org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
7952 org.apache.accumulo.start.Main monitor --address localhost

Supun..

-- 
Supun Kamburugamuva
Member, Apache Software Foundation; http://www.apache.org
E-mail: supun06@gmail.com;  Mobile: +1 812 369 6762
Blog: http://supunk.blogspot.com

Re: Error while trying to initialize

Posted by William Slacum <wi...@accumulo.net>.
When you do a `jps -lm`, are all of the Hadoop DFS, ZooKeeper, and
Accumulo processes running?
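
For reference, one way to filter for just those (a minimal sketch; the
class names are taken from a typical Hadoop 1.x/ZooKeeper setup):

$ jps -lm | egrep 'NameNode|DataNode|QuorumPeerMain|accumulo.start.Main'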

Re: Error while trying to initialize

Posted by Supun Kamburugamuva <su...@gmail.com>.
I'm getting the following exception while starting Accumulo.

./start-all.sh

This error is shown in monitor_supun-OptiPlex-960.debug.log. Similar
errors are shown in other logs as well.

2013-03-29 15:41:43,993 [monitor.Monitor] DEBUG:  connecting to
zookeepers localhost:2181
2013-03-29 15:41:44,018 [impl.ThriftScanner] DEBUG:  Failed to locate
tablet for table : !0 row : ~err_^@
2013-03-29 15:41:47,025 [monitor.Monitor] INFO :  Failed to obtain
problem reports
java.lang.RuntimeException:
org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:174)
        at org.apache.accumulo.server.problems.ProblemReports$3.hasNext(ProblemReports.java:241)
        at org.apache.accumulo.server.problems.ProblemReports.summarize(ProblemReports.java:299)
        at org.apache.accumulo.server.monitor.Monitor.fetchData(Monitor.java:392)
        at org.apache.accumulo.server.monitor.Monitor$2.run(Monitor.java:504)
        at org.apache.accumulo.core.util.LoggingRunnable.run(LoggingRunnable.java:34)
        at java.lang.Thread.run(Thread.java:722)
Caused by: org.apache.accumulo.core.client.impl.ThriftScanner$ScanTimedOutException
        at org.apache.accumulo.core.client.impl.ThriftScanner.scan(ThriftScanner.java:244)
        at org.apache.accumulo.core.client.impl.ScannerIterator$Reader.run(ScannerIterator.java:82)
        at org.apache.accumulo.core.client.impl.ScannerIterator.hasNext(ScannerIterator.java:164)
        ... 6 more

Thanks,
Supun..


-- 
Supun Kamburugamuva
Member, Apache Software Foundation; http://www.apache.org
E-mail: supun06@gmail.com;  Mobile: +1 812 369 6762
Blog: http://supunk.blogspot.com

Re: Error while trying to initialize

Posted by Supun Kamburugamuva <su...@gmail.com>.
Thanks Eric. It appears my datanode is not running.

Supun..

-- 
Supun Kamburugamuva
Member, Apache Software Foundation; http://www.apache.org
E-mail: supun06@gmail.com;  Mobile: +1 812 369 6762
Blog: http://supunk.blogspot.com

Re: Error while trying to initialize

Posted by Eric Newton <er...@gmail.com>.
HDFS is not up and working.  In particular, your data node(s) are not up.

You can verify this without using accumulo:

$ hadoop fs -put somefile .

You will want to check your hadoop logs for errors.
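
For example (a minimal sketch, assuming a stock Hadoop 1.x layout; the
exact log file name depends on your user and host names):

$ hadoop dfsadmin -report | grep 'Datanodes available'
$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log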

-Eric


