Posted to user@accumulo.apache.org by Christine Buss <ch...@gmx.de> on 2021/07/07 15:20:41 UTC

Hadoop ConnectException

Hi,



I am using:

Java 11

Ubuntu 20.04.2

Hadoop 3.3.1

Zookeeper 3.7.0

Accumulo 2.0.1





I followed the instructions here:

https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html

and edited etc/hadoop/hadoop-env.sh, etc/hadoop/core-site.xml, and
etc/hadoop/hdfs-site.xml accordingly.
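
For reference, the minimal pseudo-distributed values from that guide look like
this (my files follow the same pattern):

etc/hadoop/core-site.xml:

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

etc/hadoop/hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>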

'ssh localhost' works without a passphrase.
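
For anyone reproducing this, the guide sets up passphraseless ssh with:

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys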



Then I started ZooKeeper, start-dfs.sh, and start-yarn.sh:

christine@centauri:~$ ./zookeeper-3.4.9/bin/zkServer.sh start  
ZooKeeper JMX enabled by default  
Using config: /home/christine/zookeeper-3.4.9/bin/../conf/zoo.cfg  
Starting zookeeper ... STARTED  
christine@centauri:~$ ./hadoop-3.3.1/sbin/start-dfs.sh  
Starting namenodes on [localhost]  
Starting datanodes  
Starting secondary namenodes [centauri]  
centauri: Warning: Permanently added 'centauri,2003:d4:771c:3b00:7223:40a1:4c07:7c7b' (ECDSA) to the list of known hosts.
christine@centauri:~$ ./hadoop-3.3.1/sbin/start-yarn.sh  
Starting resourcemanager  
Starting nodemanagers  
christine@centauri:~$ jps  
3921 Jps  
2387 QuorumPeerMain  
3171 SecondaryNameNode  
3732 NodeManager  
2955 DataNode  
3599 ResourceManager



BUT

when running 'accumulo init' I get this error:

christine@centauri:~$ ./accumulo-2.0.1/bin/accumulo init
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
2021-07-07 15:59:05,590 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/christine/accumulo-2.0.1/conf/accumulo.properties
2021-07-07 15:59:08,460 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose set to false in hdfs-site.xml: data loss is possible on hard system reset or power loss
2021-07-07 15:59:08,461 [init.Initialize] INFO : Hadoop Filesystem is hdfs://localhost:9000
2021-07-07 15:59:08,461 [init.Initialize] INFO : Accumulo data dirs are [hdfs://localhost:8020/accumulo]
2021-07-07 15:59:08,461 [init.Initialize] INFO : Zookeeper server is localhost:2181
2021-07-07 15:59:08,461 [init.Initialize] INFO : Checking if Zookeeper is available. If this hangs, then you need to make sure zookeeper is running
2021-07-07 15:59:08,938 [init.Initialize] ERROR: Fatal exception
java.io.IOException: Failed to check if filesystem already initialized  
    at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:285)  
    at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:323)  
    at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:991)  
    at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)  
    at java.base/java.lang.Thread.run(Thread.java:829)  
Caused by: java.net.ConnectException: Call From centauri/192.168.178.30 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)  
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)  
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)  
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)  
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)  
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:828)  
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)  
    at org.apache.hadoop.ipc.Client.call(Client.java:1519)  
    at org.apache.hadoop.ipc.Client.call(Client.java:1416)  
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)  
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)  
    at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)  
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:965)  
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)  
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)  
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)  
    at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)  
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1731)  
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1752)  
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)  
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)  
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1764)  
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1760)  
    at org.apache.accumulo.server.fs.VolumeManagerImpl.exists(VolumeManagerImpl.java:254)  
    at org.apache.accumulo.server.init.Initialize.isInitialized(Initialize.java:860)  
    at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:280)  
    ... 4 more  
Caused by: java.net.ConnectException: Connection refused  
    at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)  
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)  
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)  
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:586)  
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)  
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:822)  
    at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:414)  
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1647)  
    at org.apache.hadoop.ipc.Client.call(Client.java:1463)  
    ... 28 more  
2021-07-07 15:59:08,944 [start.Main] ERROR: Thread 'init' died.  
java.lang.RuntimeException: java.io.IOException: Failed to check if filesystem already initialized
    at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:997)  
    at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)  
    at java.base/java.lang.Thread.run(Thread.java:829)  
Caused by: java.io.IOException: Failed to check if filesystem already initialized
    at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:285)  
    at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:323)  
    at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:991)  
    ... 2 more  
Caused by: java.net.ConnectException: Call From centauri/192.168.178.30 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)  
    at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)  
    at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)  
    at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)  
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:913)  
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:828)  
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1577)  
    at org.apache.hadoop.ipc.Client.call(Client.java:1519)  
    at org.apache.hadoop.ipc.Client.call(Client.java:1416)  
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:242)  
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:129)  
    at com.sun.proxy.$Proxy18.getFileInfo(Unknown Source)  
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:965)  
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)  
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)  
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)  
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)  
    at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)  
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1731)  
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1752)  
    at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)  
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)  
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1764)  
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1760)  
    at org.apache.accumulo.server.fs.VolumeManagerImpl.exists(VolumeManagerImpl.java:254)  
    at org.apache.accumulo.server.init.Initialize.isInitialized(Initialize.java:860)  
    at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:280)  
    ... 4 more  
Caused by: java.net.ConnectException: Connection refused  
    at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)  
    at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)  
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)  
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:586)  
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)  
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:822)  
    at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:414)  
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1647)  
    at org.apache.hadoop.ipc.Client.call(Client.java:1463)  
    ... 28 more





I am not able to find the mistake. I found similar questions on Stack Overflow,
but none of them solved my problem.

Thanks in advance for any idea.


Re: Re: Hadoop ConnectException

Posted by Christopher <ct...@apache.org>.
"Connection refused" could also mean that you're not connecting using
the correct bind address or you have a firewall or something blocking
the connection.

You can see which ports your services are listening on with `sudo netstat -tlnp`.
If your machine's hostname is "mymachine" and you're trying to connect
to "mymachine:8020", but the service is listening on "127.0.0.1:8020", it might
not work.
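
Note that in the init output above, the Hadoop filesystem is
hdfs://localhost:9000 while the Accumulo data dirs are
hdfs://localhost:8020/accumulo, so I'd check both ports. Something like this
(the output shown is illustrative, not from your machine):

$ sudo netstat -tlnp | grep -E ':(8020|9000)'
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      1234/java

If nothing is listening on 8020, the init client is dialing a port the
NameNode never opened, and making instance.volumes in accumulo.properties
agree with the fs.defaultFS port would be my first try.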

On systems like Fedora/RHEL that use SSSD, I've had some luck putting
"myhostname" as one of the targets for "hosts:" in /etc/nsswitch.conf,
ahead of the other options, so that the localhost name lookup prefers the
local host name rather than going to DNS. But getting the nameservice
configuration on your machine, the `hostname` output in the scripts, and
the bind address for the server in the Java code to all agree on the host
name can be tricky.
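
For example, the line can look like this (the exact module list varies by
distribution):

hosts: files myhostname dns

and you can check what your host name actually resolves to with:

$ getent hosts $(hostname)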

On Thu, Jul 8, 2021 at 10:49 AM Christine Buss <ch...@gmx.de> wrote:
>
>
> OK, the NameNode is running.
>
> christine@centauri:~$ jps
> 42753 NameNode
> 42884 DataNode
> 44438 NodeManager
> 45002 Jps
> 43116 SecondaryNameNode
> 7629 QuorumPeerMain
> 44301 ResourceManager
>
> I deleted everything and reinstalled.
> But accumulo init still gives me the connection refused error.
> Any other ideas?

Aw: Re: Hadoop ConnectException

Posted by Christine Buss <ch...@gmx.de>.
christine@centauri:~$ sudo netstat -lpten | grep java
tcp        0      0 0.0.0.0:8031            0.0.0.0:*               LISTEN      1000       115504     6551/java
tcp        0      0 0.0.0.0:8032            0.0.0.0:*               LISTEN      1000       110048     6551/java
tcp        0      0 0.0.0.0:8033            0.0.0.0:*               LISTEN      1000       110034     6551/java
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      1000       115496     6684/java
tcp        0      0 0.0.0.0:9864            0.0.0.0:*               LISTEN      1000       108156     5945/java
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      1000       106172     5811/java



So the NameNode is not the problem, right?



Aw: Re: Hadoop ConnectException

Posted by Christine Buss <ch...@gmx.de>.

OK, the NameNode is running.



christine@centauri:~$ jps  
42753 NameNode  
42884 DataNode  
44438 NodeManager  
45002 Jps  
43116 SecondaryNameNode  
7629 QuorumPeerMain  
44301 ResourceManager



I deleted everything and reinstalled.

But accumulo init still gives me the connection refused error.

Any other ideas?

Sent: Wednesday, July 7, 2021 at 5:59 PM
From: "Brian Loss" <br...@gmail.com>
To: user@accumulo.apache.org
Subject: Re: Hadoop ConnectException


Re: Hadoop ConnectException

Posted by Brian Loss <br...@gmail.com>.
Based on the jps output below, it would appear that no NameNode process is running (only SecondaryNameNode). That would mean the name node process exited for some reason. Check its logs and see if there is any useful error message there.
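
A quick way to do that from the command line (paths assume the tarball layout
used earlier in this thread, and the log file name pattern is just the Hadoop
default, so adjust for your setup):

$ tail -n 50 ./hadoop-3.3.1/logs/hadoop-$USER-namenode-*.log   # why the NameNode exited
$ ./hadoop-3.3.1/bin/hdfs dfsadmin -report                     # overall HDFS health
$ ./hadoop-3.3.1/bin/hdfs dfs -ls /                            # can the CLI reach HDFS at all?

If the log says the storage directory is not formatted, the guide's
'bin/hdfs namenode -format' step may have been missed.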

> On Jul 7, 2021, at 11:45 AM, <de...@etcoleman.com> wrote:
> 
> Did you verify that Hadoop is really up and healthy? Look at the Hadoop monitor pages and confirm that you can use the Hadoop CLI to navigate around. You may also need to update the Accumulo configuration files / env to match your configuration.
>  
> You might want to look at using https://github.com/apache/fluo-uno as a quick way to stand up an instance for testing – and that might give you additional insights.
>  
> From: Christine Buss <ch...@gmx.de> 
> Sent: Wednesday, July 7, 2021 11:21 AM
> To: user@accumulo.apache.org
> Subject: Hadoop ConnectException
>  
> Hi,
>  
> I am using:
> Java 11
> Ubuntu 20.04.2
> Hadoop 3.3.1
> Zookeeper 3.7.0
> Accumulo 2.0.1
>  
>  
> I followed the instructions here:
> https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html <https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html>
> and edited etc/hadoop/hadoop-env.sh,  etc/hadoop/core-site.xml, etc/hadoop/hdfs-site.xml accordingly.
> 'ssh localhost' works without a passphrase.
>  
> Then I started Zookeper, start-dfs.sh and start-yarn.sh:
> christine@centauri:~$ ./zookeeper-3.4.9/bin/zkServer.sh start
> ZooKeeper JMX enabled by default
> Using config: /home/christine/zookeeper-3.4.9/bin/../conf/zoo.cfg
> Starting zookeeper ... STARTED
> christine@centauri:~$ ./hadoop-3.3.1/sbin/start-dfs.sh
> Starting namenodes on [localhost]
> Starting datanodes
> Starting secondary namenodes [centauri]
> centauri: Warning: Permanently added 'centauri,2003:d4:771c:3b00:7223:40a1:4c07:7c7b' (ECDSA) to the list of known hosts.
> christine@centauri:~$ ./hadoop-3.3.1/sbin/start-yarn.sh
> Starting resourcemanager
> Starting nodemanagers
> christine@centauri:~$ jps
> 3921 Jps
> 2387 QuorumPeerMain
> 3171 SecondaryNameNode
> 3732 NodeManager
> 2955 DataNode
> 3599 ResourceManager


Aw: RE: Hadoop ConnectException

Posted by Christine Buss <ch...@gmx.de>.

Cool! I will download the Hadoop CLI now and try that.

Sorry for this stupid question, but what are the Hadoop monitor pages?



**Sent:**  Wednesday, July 7, 2021 at 5:45 PM  
**From:**  dev1@etcoleman.com  
**To:**  user@accumulo.apache.org  
**Subject:**  RE: Hadoop ConnectException

Did you verify that Hadoop is really up and healthy? Look at the Hadoop
monitor pages and confirm that you can use the Hadoop CLI to navigate around.
You may also need to update the accumulo configuration files / env to match
your configuration.



You might want to look at using <https://github.com/apache/fluo-uno> as a
quick way to stand up an instance for testing - and that might give you
additional insights.





RE: Hadoop ConnectException

Posted by de...@etcoleman.com.
Did you verify that Hadoop is really up and healthy? Look at the Hadoop monitor pages and confirm that you can use the Hadoop CLI to navigate around. You may also need to update the accumulo configuration files / env to match your configuration.
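
For example (the commands and the web UI port below are the Hadoop 3.x defaults; adjust if your configuration differs):

hdfs dfs -ls /          # basic CLI check against the running NameNode
hdfs dfsadmin -report   # summary of NameNode/DataNode health

The monitor pages are the web UIs the daemons serve; on Hadoop 3.x the NameNode UI defaults to http://localhost:9870/. Note also that the init output earlier in the thread reports the Hadoop filesystem as hdfs://localhost:9000 but the Accumulo data dirs as hdfs://localhost:8020/accumulo, so instance.volumes in accumulo.properties and fs.defaultFS in core-site.xml likely need to be brought into agreement.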

 

You might want to look at using https://github.com/apache/fluo-uno as a quick way to stand up an instance for testing – and that might give you additional insights. 
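
A minimal uno session, using the commands from the fluo-uno README, looks roughly like:

git clone https://github.com/apache/fluo-uno.git
cd fluo-uno
./bin/uno fetch accumulo   # download Accumulo, Hadoop, and ZooKeeper tarballs
./bin/uno setup accumulo   # install, configure, and start everything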

 

From: Christine Buss <ch...@gmx.de> 
Sent: Wednesday, July 7, 2021 11:21 AM
To: user@accumulo.apache.org
Subject: Hadoop ConnectException

 



Re: java.lang.AssertionError - accumulo examples

Posted by Christopher <ct...@apache.org>.
It looks like the examples were updated to work with 2.1.0-SNAPSHOT's
new Constraint SPI, which doesn't exist in 2.0.1. Try checking out
the 2.0 branch of accumulo-examples if you're trying to run them
with 2.0.
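
For example, assuming a local clone of the examples repo as in the
README:

cd accumulo-examples
git checkout 2.0
./bin/build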

On Thu, Jul 15, 2021 at 12:03 PM Christine Buss
<ch...@gmx.de> wrote:
>
>
> And I should mention,
> I get the AssertionError when I remove the folder 'constraints' in
> /accumulo-examples/src/main/java/org/apache/accumulo/examples
>
> When I keep the folder there, I get this message:
> [ERROR] COMPILATION ERROR :
> [INFO] -------------------------------------------------------------
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[31,49] package org.apache.accumulo.core.data.constraints does not exist
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[40,41] cannot find symbol
>   symbol: class Constraint
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[54,28] cannot find symbol
>   symbol:   class Environment
>   location: class org.apache.accumulo.examples.constraints.MaxMutationSize
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[35,49] package org.apache.accumulo.core.data.constraints does not exist
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[44,47] cannot find symbol
>   symbol: class Constraint
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[74,28] cannot find symbol
>   symbol:   class Environment
>   location: class org.apache.accumulo.examples.constraints.AlphaNumKeyConstraint
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[32,49] package org.apache.accumulo.core.data.constraints does not exist
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[41,48] cannot find symbol
>   symbol: class Constraint
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[60,28] cannot find symbol
>   symbol:   class Environment
>   location: class org.apache.accumulo.examples.constraints.NumericValueConstraint
> [INFO] 9 errors
> [INFO] -------------------------------------------------------------
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time:  20.033 s
> [INFO] Finished at: 2021-07-15T18:01:27+02:00
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:compile (default-compile) on project accumulo-examples: Compilation failure: Compilation failure:
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[31,49] package org.apache.accumulo.core.data.constraints does not exist
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[40,41] cannot find symbol
> [ERROR]   symbol: class Constraint
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[54,28] cannot find symbol
> [ERROR]   symbol:   class Environment
> [ERROR]   location: class org.apache.accumulo.examples.constraints.MaxMutationSize
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[35,49] package org.apache.accumulo.core.data.constraints does not exist
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[44,47] cannot find symbol
> [ERROR]   symbol: class Constraint
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[74,28] cannot find symbol
> [ERROR]   symbol:   class Environment
> [ERROR]   location: class org.apache.accumulo.examples.constraints.AlphaNumKeyConstraint
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[32,49] package org.apache.accumulo.core.data.constraints does not exist
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[41,48] cannot find symbol
> [ERROR]   symbol: class Constraint
> [ERROR] /home/christine/accumulo-examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[60,28] cannot find symbol
> [ERROR]   symbol:   class Environment
> [ERROR]   location: class org.apache.accumulo.examples.constraints.NumericValueConstraint
> [ERROR] -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions, please read the following articles:
> [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
>
>
> Does that have anything to do with the Assertion Error?
>
>
>
>
> Sent: Thursday, July 15, 2021 at 5:48 PM
> From: "Christine Buss" <ch...@gmx.de>
> To: user@accumulo.apache.org
> Subject: java.lang.AssertionError - accumulo examples
> I cloned the accumulo examples and followed the instructions in the README file:
> https://github.com/apache/accumulo-examples/blob/main/README.md
> but when I run ./bin/build I get an AssertionError:
>
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time:  22.366 s
> [INFO] Finished at: 2021-07-15T17:27:50+02:00
> [INFO] ------------------------------------------------------------------------
> ---------------------------------------------------
> constituent[0]: file:/usr/share/maven/conf/logging/
> constituent[1]: file:/usr/share/maven/lib/maven-repository-metadata-3.x.jar
> constituent[2]: file:/usr/share/maven/lib/plexus-interpolation.jar
> constituent[3]: file:/usr/share/maven/lib/maven-model-3.x.jar
> constituent[4]: file:/usr/share/maven/lib/commons-cli.jar
> constituent[5]: file:/usr/share/maven/lib/maven-resolver-impl.jar
> constituent[6]: file:/usr/share/maven/lib/wagon-provider-api.jar
> constituent[7]: file:/usr/share/maven/lib/maven-resolver-provider-3.x.jar
> constituent[8]: file:/usr/share/maven/lib/maven-resolver-util.jar
> constituent[9]: file:/usr/share/maven/lib/maven-settings-builder-3.x.jar
> constituent[10]: file:/usr/share/maven/lib/jsr250-api.jar
> constituent[11]: file:/usr/share/maven/lib/maven-builder-support-3.x.jar
> constituent[12]: file:/usr/share/maven/lib/maven-embedder-3.x.jar
> constituent[13]: file:/usr/share/maven/lib/sisu-plexus.jar
> constituent[14]: file:/usr/share/maven/lib/commons-lang3.jar
> constituent[15]: file:/usr/share/maven/lib/javax.inject.jar
> constituent[16]: file:/usr/share/maven/lib/maven-settings-3.x.jar
> constituent[17]: file:/usr/share/maven/lib/maven-resolver-connector-basic.jar
> constituent[18]: file:/usr/share/maven/lib/maven-resolver-api.jar
> constituent[19]: file:/usr/share/maven/lib/maven-resolver-transport-wagon.jar
> constituent[20]: file:/usr/share/maven/lib/aopalliance.jar
> constituent[21]: file:/usr/share/maven/lib/cdi-api.jar
> constituent[22]: file:/usr/share/maven/lib/wagon-file.jar
> constituent[23]: file:/usr/share/maven/lib/maven-model-builder-3.x.jar
> constituent[24]: file:/usr/share/maven/lib/commons-io.jar
> constituent[25]: file:/usr/share/maven/lib/wagon-http-shaded.jar
> constituent[26]: file:/usr/share/maven/lib/plexus-cipher.jar
> constituent[27]: file:/usr/share/maven/lib/maven-resolver-spi.jar
> constituent[28]: file:/usr/share/maven/lib/plexus-utils.jar
> constituent[29]: file:/usr/share/maven/lib/guava.jar
> constituent[30]: file:/usr/share/maven/lib/maven-core-3.x.jar
> constituent[31]: file:/usr/share/maven/lib/plexus-component-annotations.jar
> constituent[32]: file:/usr/share/maven/lib/guice.jar
> constituent[33]: file:/usr/share/maven/lib/sisu-inject.jar
> constituent[34]: file:/usr/share/maven/lib/maven-compat-3.x.jar
> constituent[35]: file:/usr/share/maven/lib/slf4j-api.jar
> constituent[36]: file:/usr/share/maven/lib/jansi.jar
> constituent[37]: file:/usr/share/maven/lib/jcl-over-slf4j.jar
> constituent[38]: file:/usr/share/maven/lib/plexus-sec-dispatcher.jar
> constituent[39]: file:/usr/share/maven/lib/maven-plugin-api-3.x.jar
> constituent[40]: file:/usr/share/maven/lib/maven-slf4j-provider-3.x.jar
> constituent[41]: file:/usr/share/maven/lib/maven-artifact-3.x.jar
> constituent[42]: file:/usr/share/maven/lib/maven-shared-utils.jar
> ---------------------------------------------------
> Exception in thread "main" java.lang.AssertionError
>     at jdk.compiler/com.sun.tools.javac.util.Assert.error(Assert.java:155)
>     at jdk.compiler/com.sun.tools.javac.util.Assert.check(Assert.java:46)
>     at jdk.compiler/com.sun.tools.javac.comp.Modules.enter(Modules.java:247)
>     at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.readSourceFile(JavaCompiler.java:837)
>     at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment$ImplicitCompleter.complete(JavacProcessingEnvironment.java:1530)
>     at jdk.compiler/com.sun.tools.javac.code.Symbol.complete(Symbol.java:642)
>     at jdk.compiler/com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:1326)
>     at jdk.compiler/com.sun.tools.javac.code.Type$ClassType.complete(Type.java:1140)
>     at jdk.compiler/com.sun.tools.javac.code.Type$ClassType.getTypeArguments(Type.java:1066)
>     at jdk.compiler/com.sun.tools.javac.code.Printer.visitClassType(Printer.java:237)
>     at jdk.compiler/com.sun.tools.javac.code.Printer.visitClassType(Printer.java:52)
>     at jdk.compiler/com.sun.tools.javac.code.Type$ClassType.accept(Type.java:993)
>     at jdk.compiler/com.sun.tools.javac.code.Printer.visit(Printer.java:136)
>     at jdk.compiler/com.sun.tools.javac.util.AbstractDiagnosticFormatter.formatArgument(AbstractDiagnosticFormatter.java:199)
>     at jdk.compiler/com.sun.tools.javac.util.AbstractDiagnosticFormatter.formatArguments(AbstractDiagnosticFormatter.java:167)
>     at jdk.compiler/com.sun.tools.javac.util.BasicDiagnosticFormatter.formatMessage(BasicDiagnosticFormatter.java:111)
>     at jdk.compiler/com.sun.tools.javac.util.BasicDiagnosticFormatter.formatMessage(BasicDiagnosticFormatter.java:67)
>     at jdk.compiler/com.sun.tools.javac.util.AbstractDiagnosticFormatter.formatArgument(AbstractDiagnosticFormatter.java:185)
>     at jdk.compiler/com.sun.tools.javac.util.AbstractDiagnosticFormatter.formatArguments(AbstractDiagnosticFormatter.java:167)
>     at jdk.compiler/com.sun.tools.javac.util.BasicDiagnosticFormatter.formatMessage(BasicDiagnosticFormatter.java:111)
>     at jdk.compiler/com.sun.tools.javac.util.BasicDiagnosticFormatter.formatMessage(BasicDiagnosticFormatter.java:67)
>     at jdk.compiler/com.sun.tools.javac.util.JCDiagnostic.getMessage(JCDiagnostic.java:788)
>     at jdk.compiler/com.sun.tools.javac.api.ClientCodeWrapper$DiagnosticSourceUnwrapper.getMessage(ClientCodeWrapper.java:799)
>     at org.codehaus.plexus.compiler.javac.JavaxToolsCompiler.compileInProcess(JavaxToolsCompiler.java:131)
>     at org.codehaus.plexus.compiler.javac.JavacCompiler.performCompile(JavacCompiler.java:174)
>     at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:1134)
>     at org.apache.maven.plugin.compiler.TestCompilerMojo.execute(TestCompilerMojo.java:180)
>     at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)
>     at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)
>     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)
>     at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)
>     at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)
>     at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
>     at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)
>     at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)
>     at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)
>     at org.apache.maven.cli.MavenCli.execute(MavenCli.java:957)
>     at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:289)
>     at org.apache.maven.cli.MavenCli.main(MavenCli.java:193)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)
>     at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)
>
> I am not able to get rid of it.
> I am using Java 11.
>
> christine@centauri:~/accumulo-examples$ mvn --version |grep -i java
> Java version: 11.0.11, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64
> christine@centauri:~/accumulo-examples$ java -version
> openjdk version "11.0.11" 2021-04-20
> OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)
> OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.20.04, mixed mode, sharing)
>
>
>
> I am finally able to run accumulo 2.0.1. Not sure what the problem was, but after I ran bin/hdfs namenode -format
> and accumulo init once again, it works.
>
> Thanks in advance for any help. I promise this is my last silly question. I searched Google and Stack Overflow but I was not able to solve this.
>
>

Aw: RE: java.lang.AssertionError - accumulo examples

Posted by Christine Buss <ch...@gmx.de>.

Thanks Mark and Christopher, yes, the branch was the problem. It works now!



**Sent:**  Thursday, July 15, 2021 at 6:09 PM  
**From:**  "Owens, Mark" <jm...@evoforge.org>  
**To:**  "user@accumulo.apache.org" <us...@accumulo.apache.org>  
**Subject:**  RE: java.lang.AssertionError - accumulo examples

Are you running the examples against the 1.10.x or 2.x version of
Accumulo? The examples repo has been updated to run against the 2.x versions
of Accumulo. The 'main' branch is set up to run against the current 2.1.x
branch that is in development, while the '2.0' branch should run against the
2.0.x version of Accumulo.





**From:** Christine Buss  <ch...@gmx.de>  
**Sent:** Thursday, July 15, 2021 12:03 PM  
**To:** user@accumulo.apache.org  
**Subject:** Aw: java.lang.AssertionError - accumulo examples





And I should mention,

I get the AssertionError when I remove the folder 'constraints' in

/accumulo-examples/src/main/java/org/apache/accumulo/examples



When I keep folder there, then I get this message:

ERROR] COMPILATION ERROR :  
[INFO] -------------------------------------------------------------  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[31,49]
package org.apache.accumulo.core.data.constraints does not exist  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[40,41]
cannot find symbol  
  symbol: class Constraint  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[54,28]
cannot find symbol  
  symbol:   class Environment  
  location: class org.apache.accumulo.examples.constraints.MaxMutationSize  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[35,49]
package org.apache.accumulo.core.data.constraints does not exist  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[44,47]
cannot find symbol  
  symbol: class Constraint  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[74,28]
cannot find symbol  
  symbol:   class Environment  
  location: class
org.apache.accumulo.examples.constraints.AlphaNumKeyConstraint  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[32,49]
package org.apache.accumulo.core.data.constraints does not exist  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[41,48]
cannot find symbol  
  symbol: class Constraint  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[60,28]
cannot find symbol  
  symbol:   class Environment  
  location: class
org.apache.accumulo.examples.constraints.NumericValueConstraint  
[INFO] 9 errors  
[INFO] -------------------------------------------------------------  
[INFO]
------------------------------------------------------------------------  
[INFO] BUILD FAILURE  
[INFO]
------------------------------------------------------------------------  
[INFO] Total time:  20.033 s  
[INFO] Finished at: 2021-07-15T18:01:27+02:00  
[INFO]
------------------------------------------------------------------------  
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-
plugin:3.8.1:compile (default-compile) on project accumulo-examples:
Compilation failure: Compilation failure:  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[31,49]
package org.apache.accumulo.core.data.constraints does not exist  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[40,41]
cannot find symbol  
[ERROR]   symbol: class Constraint  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/MaxMutationSize.java:[54,28]
cannot find symbol  
[ERROR]   symbol:   class Environment  
[ERROR]   location: class
org.apache.accumulo.examples.constraints.MaxMutationSize  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[35,49]
package org.apache.accumulo.core.data.constraints does not exist  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[44,47]
cannot find symbol  
[ERROR]   symbol: class Constraint  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/AlphaNumKeyConstraint.java:[74,28]
cannot find symbol  
[ERROR]   symbol:   class Environment  
[ERROR]   location: class
org.apache.accumulo.examples.constraints.AlphaNumKeyConstraint  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[32,49]
package org.apache.accumulo.core.data.constraints does not exist  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[41,48]
cannot find symbol  
[ERROR]   symbol: class Constraint  
[ERROR] /home/christine/accumulo-
examples/src/main/java/org/apache/accumulo/examples/constraints/NumericValueConstraint.java:[60,28]
cannot find symbol  
[ERROR]   symbol:   class Environment  
[ERROR]   location: class
org.apache.accumulo.examples.constraints.NumericValueConstraint  
[ERROR] -> [Help 1]  
[ERROR]  
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.  
[ERROR] Re-run Maven using the -X switch to enable full debug logging.  
[ERROR]  
[ERROR] For more information about the errors and possible solutions, please
read the following articles:  
[ERROR] [Help 1] [
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException](http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException)





Does that have anything to do with the Assertion Error?









**Gesendet:**  Donnerstag, 15. Juli 2021 um 17:48 Uhr  
**Von:**  "Christine Buss"
<[christine.buss223@gmx.de](mailto:christine.buss223@gmx.de)>  
**An:**  [user@accumulo.apache.org](mailto:user@accumulo.apache.org)  
**Betreff:**  java.lang.AssertionError - accumulo examples

I cloned the accumulo examples, and followed the instructions in the READMe
file:

<https://github.com/apache/accumulo-examples/blob/main/README.md>

but when I run  ./bin/build I get an AssertionError;



[INFO] BUILD FAILURE  
[INFO]
------------------------------------------------------------------------  
[INFO] Total time:  22.366 s  
[INFO] Finished at: 2021-07-15T17:27:50+02:00  
[INFO]
------------------------------------------------------------------------  
\---------------------------------------------------  
constituent[0]: file:/usr/share/maven/conf/logging/  
constituent[1]: file:/usr/share/maven/lib/maven-repository-metadata-3.x.jar  
constituent[2]: file:/usr/share/maven/lib/plexus-interpolation.jar  
constituent[3]: file:/usr/share/maven/lib/maven-model-3.x.jar  
constituent[4]: file:/usr/share/maven/lib/commons-cli.jar  
constituent[5]: file:/usr/share/maven/lib/maven-resolver-impl.jar  
constituent[6]: file:/usr/share/maven/lib/wagon-provider-api.jar  
constituent[7]: file:/usr/share/maven/lib/maven-resolver-provider-3.x.jar  
constituent[8]: file:/usr/share/maven/lib/maven-resolver-util.jar  
constituent[9]: file:/usr/share/maven/lib/maven-settings-builder-3.x.jar  
constituent[10]: file:/usr/share/maven/lib/jsr250-api.jar  
constituent[11]: file:/usr/share/maven/lib/maven-builder-support-3.x.jar  
constituent[12]: file:/usr/share/maven/lib/maven-embedder-3.x.jar  
constituent[13]: file:/usr/share/maven/lib/sisu-plexus.jar  
constituent[14]: file:/usr/share/maven/lib/commons-lang3.jar  
constituent[15]: file:/usr/share/maven/lib/javax.inject.jar  
constituent[16]: file:/usr/share/maven/lib/maven-settings-3.x.jar  
constituent[17]: file:/usr/share/maven/lib/maven-resolver-connector-basic.jar  
constituent[18]: file:/usr/share/maven/lib/maven-resolver-api.jar  
constituent[19]: file:/usr/share/maven/lib/maven-resolver-transport-wagon.jar  
constituent[20]: file:/usr/share/maven/lib/aopalliance.jar  
constituent[21]: file:/usr/share/maven/lib/cdi-api.jar  
constituent[22]: file:/usr/share/maven/lib/wagon-file.jar  
constituent[23]: file:/usr/share/maven/lib/maven-model-builder-3.x.jar  
constituent[24]: file:/usr/share/maven/lib/commons-io.jar  
constituent[25]: file:/usr/share/maven/lib/wagon-http-shaded.jar  
constituent[26]: file:/usr/share/maven/lib/plexus-cipher.jar  
constituent[27]: file:/usr/share/maven/lib/maven-resolver-spi.jar  
constituent[28]: file:/usr/share/maven/lib/plexus-utils.jar  
constituent[29]: file:/usr/share/maven/lib/guava.jar  
constituent[30]: file:/usr/share/maven/lib/maven-core-3.x.jar  
constituent[31]: file:/usr/share/maven/lib/plexus-component-annotations.jar  
constituent[32]: file:/usr/share/maven/lib/guice.jar  
constituent[33]: file:/usr/share/maven/lib/sisu-inject.jar  
constituent[34]: file:/usr/share/maven/lib/maven-compat-3.x.jar  
constituent[35]: file:/usr/share/maven/lib/slf4j-api.jar  
constituent[36]: file:/usr/share/maven/lib/jansi.jar  
constituent[37]: file:/usr/share/maven/lib/jcl-over-slf4j.jar  
constituent[38]: file:/usr/share/maven/lib/plexus-sec-dispatcher.jar  
constituent[39]: file:/usr/share/maven/lib/maven-plugin-api-3.x.jar  
constituent[40]: file:/usr/share/maven/lib/maven-slf4j-provider-3.x.jar  
constituent[41]: file:/usr/share/maven/lib/maven-artifact-3.x.jar  
constituent[42]: file:/usr/share/maven/lib/maven-shared-utils.jar  
---------------------------------------------------  
Exception in thread "main" java.lang.AssertionError  
    at jdk.compiler/com.sun.tools.javac.util.Assert.error(Assert.java:155)  
    at jdk.compiler/com.sun.tools.javac.util.Assert.check(Assert.java:46)  
    at jdk.compiler/com.sun.tools.javac.comp.Modules.enter(Modules.java:247)  
    at jdk.compiler/com.sun.tools.javac.main.JavaCompiler.readSourceFile(JavaCompiler.java:837)  
    at jdk.compiler/com.sun.tools.javac.processing.JavacProcessingEnvironment$ImplicitCompleter.complete(JavacProcessingEnvironment.java:1530)  
    at jdk.compiler/com.sun.tools.javac.code.Symbol.complete(Symbol.java:642)  
    at jdk.compiler/com.sun.tools.javac.code.Symbol$ClassSymbol.complete(Symbol.java:1326)  
    at jdk.compiler/com.sun.tools.javac.code.Type$ClassType.complete(Type.java:1140)  
    at jdk.compiler/com.sun.tools.javac.code.Type$ClassType.getTypeArguments(Type.java:1066)  
    at jdk.compiler/com.sun.tools.javac.code.Printer.visitClassType(Printer.java:237)  
    at jdk.compiler/com.sun.tools.javac.code.Printer.visitClassType(Printer.java:52)  
    at jdk.compiler/com.sun.tools.javac.code.Type$ClassType.accept(Type.java:993)  
    at jdk.compiler/com.sun.tools.javac.code.Printer.visit(Printer.java:136)  
    at jdk.compiler/com.sun.tools.javac.util.AbstractDiagnosticFormatter.formatArgument(AbstractDiagnosticFormatter.java:199)  
    at jdk.compiler/com.sun.tools.javac.util.AbstractDiagnosticFormatter.formatArguments(AbstractDiagnosticFormatter.java:167)  
    at jdk.compiler/com.sun.tools.javac.util.BasicDiagnosticFormatter.formatMessage(BasicDiagnosticFormatter.java:111)  
    at jdk.compiler/com.sun.tools.javac.util.BasicDiagnosticFormatter.formatMessage(BasicDiagnosticFormatter.java:67)  
    at jdk.compiler/com.sun.tools.javac.util.AbstractDiagnosticFormatter.formatArgument(AbstractDiagnosticFormatter.java:185)  
    at jdk.compiler/com.sun.tools.javac.util.AbstractDiagnosticFormatter.formatArguments(AbstractDiagnosticFormatter.java:167)  
    at jdk.compiler/com.sun.tools.javac.util.BasicDiagnosticFormatter.formatMessage(BasicDiagnosticFormatter.java:111)  
    at jdk.compiler/com.sun.tools.javac.util.BasicDiagnosticFormatter.formatMessage(BasicDiagnosticFormatter.java:67)  
    at jdk.compiler/com.sun.tools.javac.util.JCDiagnostic.getMessage(JCDiagnostic.java:788)  
    at jdk.compiler/com.sun.tools.javac.api.ClientCodeWrapper$DiagnosticSourceUnwrapper.getMessage(ClientCodeWrapper.java:799)  
    at org.codehaus.plexus.compiler.javac.JavaxToolsCompiler.compileInProcess(JavaxToolsCompiler.java:131)  
    at org.codehaus.plexus.compiler.javac.JavacCompiler.performCompile(JavacCompiler.java:174)  
    at org.apache.maven.plugin.compiler.AbstractCompilerMojo.execute(AbstractCompilerMojo.java:1134)  
    at org.apache.maven.plugin.compiler.TestCompilerMojo.execute(TestCompilerMojo.java:180)  
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:137)  
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:210)  
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:156)  
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:148)  
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117)  
    at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:81)  
    at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:56)  
    at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)  
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:305)  
    at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:192)  
    at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:105)  
    at org.apache.maven.cli.MavenCli.execute(MavenCli.java:957)  
    at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:289)  
    at org.apache.maven.cli.MavenCli.main(MavenCli.java:193)  
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)  
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)  
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)  
    at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:282)  
    at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:225)  
    at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:406)  
    at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:347)



I am not able to get rid of it.

I am using Java 11.



christine@centauri:~/accumulo-examples$ mvn --version |grep -i java  
Java version: 11.0.11, vendor: Ubuntu, runtime: /usr/lib/jvm/java-11-openjdk-amd64  
christine@centauri:~/accumulo-examples$ java -version  
openjdk version "11.0.11" 2021-04-20  
OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.20.04)  
OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.20.04, mixed mode, sharing)







I am finally able to run Accumulo 2.0.1. Not sure what the problem was, but
after I ran bin/hdfs namenode -format and accumulo init once again, it works.
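A minimal sketch of that recovery sequence, assuming the Hadoop 3.3.1 and Accumulo 2.0.1 layouts used in this thread (note: reformatting the namenode erases everything previously stored in HDFS):

~/hadoop-3.3.1/sbin/stop-dfs.sh            # stop HDFS before reformatting
~/hadoop-3.3.1/bin/hdfs namenode -format   # wipes HDFS, including any old /accumulo directory
~/hadoop-3.3.1/sbin/start-dfs.sh           # bring HDFS back up
~/accumulo-2.0.1/bin/accumulo init         # initialize Accumulo on the fresh filesystem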



Thanks in advance for any help. I promise this is my last silly question. I
searched Google and Stack Overflow but I was not able to solve this.






RE: java.lang.AssertionError - accumulo examples

Posted by "Owens, Mark" <jm...@evoforge.org>.
Are you running the examples against a 1.10.x or a 2.x version of Accumulo? The examples repo has been updated to run against the 2.x versions of Accumulo. The ‘main’ branch is set up to run against the current 2.1.x branch that is in development, while the ‘2.0’ branch should run against the 2.0.x version of Accumulo.
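A minimal sketch of acting on that advice for a 2.0.1 install, using the branch names described above:

cd ~/accumulo-examples
git fetch origin
git checkout 2.0   # examples branch targeting Accumulo 2.0.x; 'main' targets 2.1.x
./bin/build        # rebuild against the matching client API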











RE: Re: Re: Re: Hadoop ConnectException

Posted by de...@etcoleman.com.
Use jps -m to check which processes you have running.

 

Check the accumulo logs – are there any with *.err that have a size > 0?  The .err files will be created on an unexpected exit.  The other debug logs will provide a clearer picture of what is happening.

 

Tail the master debug log and a tserver debug log – are they showing exceptions being thrown?
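An illustrative version of those checks, assuming the default log directory and file naming under a 2.0.1 tarball install:

jps -m                                               # list Java processes with their arguments
find ~/accumulo-2.0.1/logs -name '*.err' -size +0c   # non-empty .err files mean a process died unexpectedly
tail -f ~/accumulo-2.0.1/logs/master*.log ~/accumulo-2.0.1/logs/tserver*.log   # watch for repeated exceptions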

 

From: Christine Buss <ch...@gmx.de> 
Sent: Saturday, July 10, 2021 9:57 AM
To: user@accumulo.apache.org
Subject: Aw: Re: Re: Re: Hadoop ConnectException

 

 

OK, so in the file 'accumulo.properties' I changed

## Sets location in HDFS where Accumulo will store data
instance.volumes=hdfs://localhost:8020/accumulo

 

to

 

## Sets location in HDFS where Accumulo will store data
instance.volumes=hdfs://localhost:9000/accumulo
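For reference, the host:port in instance.volumes has to match the fs.defaultFS that Hadoop was configured with (hdfs://localhost:9000 in this setup); an illustrative cross-check, assuming the property name and value sit on adjacent lines of core-site.xml:

grep -A1 'fs.defaultFS' ~/hadoop-3.3.1/etc/hadoop/core-site.xml
# expect the next line to be: <value>hdfs://localhost:9000</value>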

 

 

Then I was able to run 'accumulo init' and 'accumulo-cluster start'.

But when I run 'accumulo shell -u root' it hangs:

 

 

christine@centauri:~/accumulo-2.0.1/bin$ ./accumulo shell -u root
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Loading configuration from /home/christine/accumulo-2.0.1/conf/accumulo-client.properties
Password: *********

Shell - Apache Accumulo Interactive Shell
-
- version: 2.0.1
- instance name: accumulotest
- instance id: 5d8c404a-c741-48b3-b7a4-adaf19cc1499
-
- type 'help' for a list of available commands
-
2021-07-10 15:39:17,328 [clientImpl.ServerClient] WARN : There are no tablet servers: check that zookeeper and accumulo are running.
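The warning line above is the key symptom: no tablet servers are registered. A couple of illustrative checks (the grep pattern assumes the tserver name appears in the jps -m output on a 2.0.x install):

jps -m | grep -i tserver                      # is any TabletServer JVM running?
~/accumulo-2.0.1/bin/accumulo-cluster start   # (re)start whichever cluster processes are down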

 

 

 

 

 

Sent: Saturday, 10 July 2021 at 14:43
From: "Christine Buss" <christine.buss223@gmx.de>
To: user@accumulo.apache.org
Subject: Aw: Re: Re: Re: Hadoop ConnectException

sorry found it:

The ‘accumulo-cluster’ command was created to manage Accumulo on a cluster and replaces ‘start-all.sh’ and ‘stop-all.sh’
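So, per that note, the 2.x equivalents of the old 1.x scripts are:

accumulo-cluster start   # replaces bin/start-all.sh
accumulo-cluster stop    # replaces bin/stop-all.sh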

  

  

Sent: Saturday, 10 July 2021 at 12:14
From: "Christine Buss" <christine.buss223@gmx.de>
To: user@accumulo.apache.org
Subject: Aw: Re: Re: Re: Hadoop ConnectException

I am still trying to run Accumulo 2.0.1.

Question: what do you use in 2.0.1 instead of ./bin/start-all.sh?

  

  

Sent: Friday, 9 July 2021 at 17:15
From: "Christopher" <ctubbsii@apache.org>
To: "accumulo-user" <user@accumulo.apache.org>
Subject: Re: Re: Re: Hadoop ConnectException

Oh, so you weren't able to get 2.0.1 working? That's unfortunate. If
you try 2.0.1 again and are able to figure out how to get past the
issue you were having, feel free to let us know what you did
differently.

On Fri, Jul 9, 2021 at 10:56 AM Christine Buss <christine.buss223@gmx.de> wrote:
>
>
> yes of course!
> I deleted accumulo 2.0.1 and installed accumulo 1.10.1.
> Then edited the conf/ files. I think I didn't do that right before.
> And then it worked.
>
> Sent: Friday, 9 July 2021 at 16:30
> From: "Christopher" <ctubbsii@apache.org>
> To: "accumulo-user" <user@accumulo.apache.org>
> Subject: Re: Re: Hadoop ConnectException
> Glad to hear you got it working! Can you share what your solution was in case it helps others?
>
> On Fri, Jul 9, 2021, 10:20 Christine Buss <christine.buss223@gmx.de> wrote:
>>
>>
>> It works!! Thanks a lot to everyone!
>> I worked through all your hints and suggestions.
>>
>> Sent: Thursday, 8 July 2021 at 18:18
>> From: "Ed Coleman" <edcoleman@apache.org>
>> To: user@accumulo.apache.org
>> Subject: Re: Hadoop ConnectException
>>
>> According to the Hadoop getting started guide (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html) the resource manager runs at: http://localhost:8088/
>>
>> Can you run hadoop commands like:
>> > hadoop fs -ls /accumulo (or whatever you've decided on as the destination for files)
>>
>> Did you check that accumulo-env.sh and other configuration files have been set up for your environment?
>>
>>
>> On 2021/07/07 15:20:41, Christine Buss <christine.buss223@gmx.de> wrote:
>> > Hi,
>> >
>> >
>> >
>> > I am using:
>> >
>> > Java 11
>> >
>> > Ubuntu 20.04.2
>> >
>> > Hadoop 3.3.1
>> >
>> > Zookeeper 3.7.0
>> >
>> > Accumulo 2.0.1
>> >
>> >
>> >
>> >
>> >
>> > I followed the instructions here:
>> >
>> > https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-
>> > common/SingleCluster.html
>> >
>> > and edited `etc/hadoop/hadoop-env.sh`, etc/hadoop/core-site.xml,
>> > etc/hadoop/hdfs-site.xml accordingly.
>> >
>> > 'ssh localhost' works without a passphrase.
>> >
>> >
>> >
>> > Then I started Zookeper, start-dfs.sh and start-yarn.sh:
>> >
>> > christine@centauri:~$ ./zookeeper-3.4.9/bin/zkServer.sh start
>> > ZooKeeper JMX enabled by default
>> > Using config: /home/christine/zookeeper-3.4.9/bin/../conf/zoo.cfg
>> > Starting zookeeper ... STARTED
>> > christine@centauri:~$ ./hadoop-3.3.1/sbin/start-dfs.sh
>> > Starting namenodes on [localhost]
>> > Starting datanodes
>> > Starting secondary namenodes [centauri]
>> > centauri: Warning: Permanently added
>> > 'centauri,2003:d4:771c:3b00:7223:40a1:4c07:7c7b' (ECDSA) to the list of known
>> > hosts.
>> > christine@centauri:~$ ./hadoop-3.3.1/sbin/start-yarn.sh
>> > Starting resourcemanager
>> > Starting nodemanagers
>> > christine@centauri:~$ jps
>> > 3921 Jps
>> > 2387 QuorumPeerMain
>> > 3171 SecondaryNameNode
>> > 3732 NodeManager
>> > 2955 DataNode
>> > 3599 ResourceManager
>> >
>> >
>> >
>> > BUT
>> >
>> > when running 'accumulo init' I get this Error:
>> >
>> > hristine@centauri:~$ ./accumulo-2.0.1/bin/accumulo init
>> > OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in
>> > version 9.0 and will likely be removed in a future release.
>> > 2021-07-07 15:59:05,590 [conf.SiteConfiguration] INFO : Found Accumulo
>> > configuration on classpath at
>> > /home/christine/accumulo-2.0.1/conf/accumulo.properties
>> > 2021-07-07 15:59:08,460 [fs.VolumeManagerImpl] WARN : dfs.datanode.synconclose
>> > set to false in hdfs-site.xml: data loss is possible on hard system reset or
>> > power loss
>> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Hadoop Filesystem is
>> > hdfs://localhost:9000
>> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Accumulo data dirs are
>> > [hdfs://localhost:8020/accumulo]
>> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Zookeeper server is
>> > localhost:2181
>> > 2021-07-07 15:59:08,461 [init.Initialize] INFO : Checking if Zookeeper is
>> > available. If this hangs, then you need to make sure zookeeper is running
>> > 2021-07-07 15:59:08,938 [init.Initialize] ERROR: Fatal exception
>> > java.io.IOException: Failed to check if filesystem already initialized
>> > at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:285)
>> > at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:323)
>> > at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:991)
>> > at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
>> > at java.base/java.lang.Thread.run(Thread.java:829)
>> > [snip: remainder of quoted stack trace and original post]


Aw: Re: Re: Re: Hadoop ConnectException

Posted by Christine Buss <ch...@gmx.de>.

OK, so in the file 'accumulo.properties' I changed

## Sets location in HDFS where Accumulo will store data  
instance.volumes=hdfs://localhost:8020/accumulo



to



## Sets location in HDFS where Accumulo will store data  
instance.volumes=hdfs://localhost:9000/accumulo





Then I was able to run 'accumulo init' and 'accumulo-cluster start'.
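
In other words, instance.volumes now points at the port the NameNode actually listens on (the fs.defaultFS value from the single-node setup guide). For anyone else hitting this, a quick way to compare the two settings (a sketch assuming my paths; adjust to yours):

# Port the NameNode listens on, from Hadoop's config
grep -A1 'fs.defaultFS' ~/hadoop-3.3.1/etc/hadoop/core-site.xml

# Volume Accumulo writes to -- host and port must match the value above
grep 'instance.volumes' ~/accumulo-2.0.1/conf/accumulo.properties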

But when I run 'accumulo shell -u root' it hangs:





christine@centauri:~/accumulo-2.0.1/bin$ ./accumulo shell -u root  
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in
version 9.0 and will likely be removed in a future release.  
Loading configuration from /home/christine/accumulo-2.0.1/conf/accumulo-client.properties
Password: *********

Shell - Apache Accumulo Interactive Shell  
-  
- version: 2.0.1  
- instance name: accumulotest  
- instance id: 5d8c404a-c741-48b3-b7a4-adaf19cc1499  
-  
- type 'help' for a list of available commands  
-  
2021-07-10 15:39:17,328 [clientImpl.ServerClient] WARN : There are no tablet
servers: check that zookeeper and accumulo are running.
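
(That warning means the shell could not find any tablet servers registered, so it waits. Two quick checks, assuming the standard layout under ~/accumulo-2.0.1; as far as I know a tablet server is launched through org.apache.accumulo.start.Main, so it should show up in jps -m as "Main tserver":)

jps -m | grep tserver         # a running tablet server should appear here
ls ~/accumulo-2.0.1/logs/     # if none is running, the tserver log files here usually say why it exited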













Aw: Re: Re: Re: Hadoop ConnectException

Posted by Christine Buss <ch...@gmx.de>.
Sorry, found it:

The 'accumulo-cluster' command was created to manage Accumulo on a cluster and
replaces 'start-all.sh' and 'stop-all.sh'.
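
For completeness, the basic usage as I understand it from the 2.0 release notes (a sketch, not verified line by line; 'create-config' should write the per-host files the script reads):

./accumulo-2.0.1/bin/accumulo-cluster create-config   # writes host files (tservers, masters, ...) under conf/
./accumulo-2.0.1/bin/accumulo-cluster start           # replaces the old start-all.sh
./accumulo-2.0.1/bin/accumulo-cluster stop            # replaces the old stop-all.sh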







Aw: Re: Re: Re: Hadoop ConnectException

Posted by Christine Buss <ch...@gmx.de>.
I am still trying to run Accumulo 2.0.1.

Question: what do you use in 2.0.1 instead of `./bin/start-all.sh`?







Re: Re: Re: Hadoop ConnectException

Posted by Christopher <ct...@apache.org>.
Oh, so you weren't able to get 2.0.1 working? That's unfortunate. If
you try 2.0.1 again and are able to figure out how to get past the
issue you were having, feel free to let us know what you did
differently.

>> > at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:965)
>> > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> > at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>> > at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
>> > at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
>> > at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
>> > at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>> > at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
>> > at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
>> > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1731)
>> > at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1752)
>> > at org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1749)
>> > at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>> > at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1764)
>> > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1760)
>> > at org.apache.accumulo.server.fs.VolumeManagerImpl.exists(VolumeManagerImpl.java:254)
>> > at org.apache.accumulo.server.init.Initialize.isInitialized(Initialize.java:860)
>> > at org.apache.accumulo.server.init.Initialize.checkInit(Initialize.java:280)
>> > ... 4 more
>> > Caused by: java.net.ConnectException: Connection refused
>> > at java.base/sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>> > at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:777)
>> > at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
>> > at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:586)
>> > at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:701)
>> > at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:822)
>> > at org.apache.hadoop.ipc.Client$Connection.access$3800(Client.java:414)
>> > at org.apache.hadoop.ipc.Client.getConnection(Client.java:1647)
>> > at org.apache.hadoop.ipc.Client.call(Client.java:1463)
>> > ... 28 more
>> >
>> >
>> >
>> >
>> >
>> > I am not able to find the mistake. I found similar questions on Stackoverflow,
>> > but none of them solved my problem.
>> >
>> > Thanks in advance for any idea.
>> >
>> >

Aw: Re: Re: Hadoop ConnectException

Posted by Christine Buss <ch...@gmx.de>.

Yes, of course!

I deleted Accumulo 2.0.1 and installed Accumulo 1.10.1.

Then I edited the conf/ files; I don't think I had configured them correctly before.

And then it worked.
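For anyone who finds this thread later: the init log quoted in the original post shows the mismatch that produces this ConnectException. Hadoop's filesystem was hdfs://localhost:9000 (the value the single-node guide puts in core-site.xml), while Accumulo resolved its data dirs to hdfs://localhost:8020/accumulo, a port nothing was listening on. Below is a minimal sketch of the two settings that have to agree; it assumes the culprit was the volume setting (instance.volumes), and the port values are simply the ones from this thread. On Accumulo 1.10 the property lives in conf/accumulo-site.xml; on 2.x it is a key in conf/accumulo.properties.

<!-- etc/hadoop/core-site.xml -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value>
</property>

<!-- conf/accumulo-site.xml (Accumulo 1.10): must point at the same host:port as fs.defaultFS -->
<property>
  <name>instance.volumes</name>
  <value>hdfs://localhost:9000/accumulo</value>
</property>

With the URIs aligned, 'accumulo init' should get past the "Failed to check if filesystem already initialized" check.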



**Sent:**  Friday, 09 July 2021 at 16:30  
**From:**  "Christopher" <ct...@apache.org>  
**To:**  "accumulo-user" <us...@accumulo.apache.org>  
**Subject:**  Re: Re: Hadoop ConnectException

Glad to hear you got it working! Can you share what your solution was in case
it helps others?



On Fri, Jul 9, 2021, 10:20 Christine Buss
<[christine.buss223@gmx.de](mailto:christine.buss223@gmx.de)> wrote:

>
> It works!! Thanks a lot to everyone!
>
> I worked through all your hints and suggestions.
>



Re: Re: Hadoop ConnectException

Posted by Christopher <ct...@apache.org>.
Glad to hear you got it working! Can you share what your solution was in
case it helps others?

On Fri, Jul 9, 2021, 10:20 Christine Buss <ch...@gmx.de> wrote:

>
> It works!! Thanks a lot to everyone!
> I worked through all your hints and suggestions.
>

Aw: Re: Hadoop ConnectException

Posted by Christine Buss <ch...@gmx.de>.

It works!! Thanks a lot to everyone!

I worked through all your hints and suggestions.



**Sent:**  Thursday, 08 July 2021 at 18:18  
**From:**  "Ed Coleman" <ed...@apache.org>  
**To:**  user@accumulo.apache.org  
**Subject:**  Re: Hadoop ConnectException

  
According to the Hadoop getting started guide
(<https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html>)
the resource manager runs at: <http://localhost:8088/>

Can you run hadoop commands like:
> hadoop fs -ls /accumulo (or whatever you've decided on as the destination for files)

Did you check that accumulo-env.sh and other configuration files have been set up for your environment?
  
  


Re: Hadoop ConnectException

Posted by Ed Coleman <ed...@apache.org>.
According to the Hadoop getting started guide (https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html) the resource manager runs at: http://localhost:8088/

Can you run hadoop commands like:
> hadoop fs -ls /accumulo  (or whatever you've decided on as the destination for files)

Did you check that accumulo-env.sh and other configuration files have been set up for your environment?
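A sketch of how to check both of those from a shell, using standard Hadoop CLI commands (/accumulo is just the data dir from this thread):

$ hdfs getconf -confKey fs.defaultFS   # prints the filesystem URI the Hadoop configs advertise
$ jps                                  # a NameNode entry (not just SecondaryNameNode) should be listed
$ hadoop fs -ls /accumulo              # fails with ConnectException if nothing listens at that URI

If the first command prints a different host:port than the one named in the ConnectException, the client side (here, Accumulo's volume configuration) is pointing at the wrong address.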

 
On 2021/07/07 15:20:41, Christine Buss <ch...@gmx.de> wrote: 
> Hi,
>
> [...]
>
> 2021-07-07 15:59:08,461 [init.Initialize] INFO : Hadoop Filesystem is
> hdfs://localhost:9000
> 2021-07-07 15:59:08,461 [init.Initialize] INFO : Accumulo data dirs are
> [hdfs://localhost:8020/accumulo]
>
> [...]
>
> Caused by: java.net.ConnectException: Call From centauri/192.168.178.30 to
> localhost:8020 failed on connection exception: java.net.ConnectException:
> Connection refused; For more details see:
> http://wiki.apache.org/hadoop/ConnectionRefused
>
> [...]
>
> I am not able to find the mistake. I found similar questions on Stackoverflow,
> but none of them solved my problem.
>
> Thanks in advance for any idea.