Posted to user@accumulo.apache.org by Geoffry Roberts <th...@gmail.com> on 2014/05/27 15:35:06 UTC

Remote connections to Accumulo

I have Accumulo set up in a virtual environment.  From within the guest
environment, I can connect with the shell, and I can connect to Zookeeper.
 But from the host environment, things are different.  I can connect to
Zookeeper just fine, but I cannot connect to Accumulo with a program
or with the shell.  The shell throws errors and the program appears to hang.

Hadoop: 2.3.0
Zookeeper: 3.4.6
Accumulo: 1.5.1
Host: OSX 10.9
Guest: Ubuntu precise 64.
Virtual Box 4.3.10

My questions:


   1. Should the shell be able to connect remotely?  Maybe I'm wrong in
   thinking it should.
   2. How should I interpret the error listed below?  I'm guessing the
   problem has to do with localhost:9000, but I'm not getting it.  Yes, the
   instance_id appears to be available in HDFS.


Thanks

Error dump:

Starting /usr/local/accumulo/bin/accumulo shell -u root

2014-05-27 09:25:51.329 java[1015:6503] Unable to load realm info from
SCDynamicStore

2014-05-27 09:25:51,411 [util.NativeCodeLoader] WARN : Unable to load
native-hadoop library for your platform... using builtin-java classes where
applicable

2014-05-27 09:25:52,216 [client.ZooKeeperInstance] ERROR: Problem reading
instance id out of hdfs at /accumulo/instance_id

java.io.IOException: Failed on local exception: java.io.IOException:
Connection reset by peer; Host Details : local host is: "abend.home/
192.168.1.7"; destination host is: "localhost":9000;

at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)

at org.apache.hadoop.ipc.Client.call(Client.java:1410)

at org.apache.hadoop.ipc.Client.call(Client.java:1359)

at
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)

at com.sun.proxy.$Proxy9.getListing(Unknown Source)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)

at
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)

at com.sun.proxy.$Proxy9.getListing(Unknown Source)

at
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:502)

at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1727)

at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1710)

at
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:646)

at
org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:98)

at
org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708)

at
org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:704)

at
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

at
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:704)

at
org.apache.accumulo.core.client.ZooKeeperInstance.getInstanceIDFromHdfs(ZooKeeperInstance.java:288)

at
org.apache.accumulo.core.util.shell.Shell.getDefaultInstance(Shell.java:402)

at org.apache.accumulo.core.util.shell.Shell.setInstance(Shell.java:394)

at org.apache.accumulo.core.util.shell.Shell.config(Shell.java:258)

at org.apache.accumulo.core.util.shell.Shell.main(Shell.java:411)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.accumulo.start.Main$1.run(Main.java:103)

at java.lang.Thread.run(Thread.java:745)

Caused by: java.io.IOException: Connection reset by peer

at sun.nio.ch.FileDispatcherImpl.read0(Native Method)

at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)

at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)

at sun.nio.ch.IOUtil.read(IOUtil.java:197)

at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)

at
org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)

at
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)

at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)

at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)

at java.io.FilterInputStream.read(FilterInputStream.java:133)

at java.io.FilterInputStream.read(FilterInputStream.java:133)

at
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:510)

at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)

at java.io.BufferedInputStream.read(BufferedInputStream.java:254)

at java.io.DataInputStream.readInt(DataInputStream.java:387)

at
org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1050)

at org.apache.hadoop.ipc.Client$Connection.run(Client.java:945)

Thread "shell" died java.lang.reflect.InvocationTargetException


-- 
There are ways and there are ways,

Geoffry Roberts

Re: Remote connections to Accumulo

Posted by Josh Elser <jo...@gmail.com>.
Oops, looks like -zi and -zh were added in 1.6.0.

You could try doing the following instead:

`accumulo shell -u root -z your_instance_name zkhost`

Where you should substitute in the proper values for 
"your_instance_name" and the hostname for your VM for "zkhost" 
(localhost might work here because ZK binds to all interfaces on a machine).

Generally though, it might be easier to start your NameNode using the 
hostname for your VM (really, so it binds to the remote interface for 
the VM instead of localhost) and then configure Accumulo to use the 
hostname as well. This should allow you to get access from outside of 
that VM without doing trickery like this.
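
A minimal sketch of that NameNode change (assuming the guest's hostname is
`vm-hostname`; substitute your own) in Hadoop's core-site.xml on the VM:

```xml
<!-- core-site.xml on the guest VM. Advertising the VM's hostname instead
     of localhost makes the NameNode listen on the externally reachable
     interface, so clients on the host can connect. "vm-hostname" is a
     placeholder for illustration. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://vm-hostname:9000</value>
</property>
```

Accumulo's configuration (e.g. instance.dfs.uri in accumulo-site.xml on
1.5) would then need to reference the same hostname, and the host machine
must be able to resolve vm-hostname to the VM's IP (e.g. via /etc/hosts).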


On 5/27/14, 3:08 PM, Geoffry Roberts wrote:
> The -zi and -zh options are unrecognized by the accumulo shell.  It
> doesn't matter the combination.
>
> *accumulo shell -u root -zi -zh*
>
> *2014-05-27 18:58:49,790 [shell.Shell] ERROR:
> org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option:
> -zi*
>
>
> Thanks.
>
>
> On Tue, May 27, 2014 at 11:40 AM, Keith Turner <keith@deenlo.com
> <ma...@deenlo.com>> wrote:
>
>     Seems like there is a problem connecting to hdfs.  Seems it's trying
>     to connect to the namenode using localhost:9000.  What namenode address
>     is configured in your hdfs config where the shell is running?
>
>     You can try using the -zi and -zh options w/ the Accumulo shell.
>     With these options, only zookeeper will be used to find accumulo
>     servers (hdfs will not be used to find the instance id).
>
>
>
>
>     On Tue, May 27, 2014 at 9:35 AM, Geoffry Roberts
>     <threadedblue@gmail.com <ma...@gmail.com>> wrote:
>
>         [original message and stack trace snipped; see the first message above]
>
>
>
>
>
> --
> There are ways and there are ways,
>
> Geoffry Roberts

Re: Remote connections to Accumulo

Posted by Geoffry Roberts <th...@gmail.com>.
The -zi and -zh options are unrecognized by the accumulo shell.  It doesn't
matter which combination I try.

*accumulo shell -u root -zi -zh*

*2014-05-27 18:58:49,790 [shell.Shell] ERROR:
org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option:
-zi*

Thanks.


On Tue, May 27, 2014 at 11:40 AM, Keith Turner <ke...@deenlo.com> wrote:

> Seems like there is a problem connecting to hdfs.  Seems it's trying to
> connect to the namenode using localhost:9000.  What namenode address is
> configured in your hdfs config where the shell is running?
>
> You can try using the -zi and -zh options w/ the Accumulo shell.  With
> these options, only zookeeper will be used to find accumulo servers (hdfs
> will not be used to find the instance id).
>
>
>
>
> On Tue, May 27, 2014 at 9:35 AM, Geoffry Roberts <th...@gmail.com> wrote:
>
>> [original message and stack trace snipped; see the first message above]
>
>


-- 
There are ways and there are ways,

Geoffry Roberts

Re: Remote connections to Accumulo

Posted by Keith Turner <ke...@deenlo.com>.
Seems like there is a problem connecting to hdfs.  Seems it's trying to
connect to the namenode using localhost:9000.  What namenode address is
configured in your hdfs config where the shell is running?

You can try using the -zi and -zh options w/ the Accumulo shell.  With
these options, only zookeeper will be used to find accumulo servers (hdfs
will not be used to find the instance id).
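
For programmatic clients, the same idea applies: here is a minimal sketch
against the Accumulo 1.5 client API, which resolves the instance through
ZooKeeper rather than HDFS.  The instance name, ZooKeeper host, and
credentials below are placeholders to adapt, and it needs a running
cluster to actually connect:

```java
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Instance;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.client.security.tokens.PasswordToken;

public class RemoteConnectCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder instance name and ZooKeeper address -- substitute
        // your own. The instance id is looked up in ZooKeeper by name,
        // so no HDFS access is needed at this step.
        Instance inst = new ZooKeeperInstance("myinstance", "vm-hostname:2181");
        Connector conn = inst.getConnector("root", new PasswordToken("secret"));
        // A cheap sanity check that the connection works.
        System.out.println(conn.tableOperations().list());
    }
}
```

Note that even with this approach, the client still has to reach the
tablet servers at the addresses they advertise in ZooKeeper, so servers
bound to localhost will remain unreachable from outside the VM.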




On Tue, May 27, 2014 at 9:35 AM, Geoffry Roberts <th...@gmail.com> wrote:

> [original message and stack trace snipped; see the first message above]
>