Posted to user@accumulo.apache.org by "Samudrala, Ranganath [USA] via user" <us...@accumulo.apache.org> on 2023/01/10 14:21:29 UTC

accumulo init error in K8S

Hello,
I am trying to configure Accumulo in K8S using a Helm chart. Hadoop and Zookeeper are up and running in the same K8S namespace.
accumulo.properties is as below:

  instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
  general.custom.volume.preferred.default=accumulo
  instance.zookeeper.host=accumulo-zookeeper
  # instance.secret=DEFAULT
  general.volume.chooser=org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
  general.custom.volume.preferred.logger=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
  trace.user=tracer
  trace.password=tracer
  instance.secret=accumulo
  tserver.cache.data.size=15M
  tserver.cache.index.size=40M
  tserver.memory.maps.max=128M
  tserver.memory.maps.native.enabled=true
  tserver.sort.buffer.size=50M
  tserver.total.mutation.queue.max=16M
  tserver.walog.max.size=128M

accumulo-client.properties is as below:

 auth.type=password
 auth.principal=root
 auth.token=root
 instance.name=accumulo
 # For Accumulo >=2.0.0
 instance.zookeepers=accumulo-zookeeper
 instance.zookeeper.host=accumulo-zookeeper

When I run 'accumulo init --add-volumes', I see the error below. What is wrong with the setup?

java.lang.RuntimeException: None of the configured paths are initialized.
        at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119)
        at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449)
        at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543)
        at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
        at java.base/java.lang.Thread.run(Thread.java:829)
2023-01-09T21:22:13,530 [init] [org.apache.accumulo.start.Main] ERROR: Thread 'init' died.
java.lang.RuntimeException: None of the configured paths are initialized.
        at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
        at java.lang.Thread.run(Thread.java:829) ~[?:?]
Thread 'init' died.

I have attached the complete log.


Re: accumulo init error in K8S

Posted by "Samudrala, Ranganath [USA] via user" <us...@accumulo.apache.org>.
When I take a thread dump, I see this thread, which appears to be hung (a possible workaround sketch follows the stack below). Has anybody reported an issue related to SecureRandom?

"init" #17 prio=5 os_prio=0 cpu=137.55ms elapsed=56.86s tid=0x0000558cd1870800 nid=0x34 runnable  [0x00007fbc59aa7000]
   java.lang.Thread.State: RUNNABLE
        at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
        at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
        - locked <0x00000000f88f2928> (a sun.security.pkcs11.Secmod)
        at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
        at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
        at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
        at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
        at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
        at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
        at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
        at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
        at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
        at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
        - locked <0x00000000f889aac0> (a sun.security.jca.ProviderConfig)
        at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
        at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
        at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
        at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
        at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
        at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
        at org.apache.accumulo.core.util.Retry.<clinit>(Retry.java:49)
        at org.apache.accumulo.core.fate.zookeeper.ZooReader.<clinit>(ZooReader.java:46)
        at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:532)
        at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
        at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401af040.run(Unknown Source)
        at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
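
If the hang really is in SunPKCS11/NSS provider loading during SecureRandom initialization, two mitigations commonly suggested for containers (an assumption on my part, neither confirmed in this thread) are pointing the JVM at the non-blocking entropy device, or disabling the PKCS11 provider entry in $JAVA_HOME/conf/security/java.security. A sketch of the first option:

  # assumption: /dev/./urandom is the spelling the JDK honors for a non-blocking source
  export JAVA_TOOL_OPTIONS="-Djava.security.egd=file:/dev/./urandom"
  accumulo init --add-volumes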


Re: [External] Re: accumulo init error in K8S

Posted by "Samudrala, Ranganath [USA] via user" <us...@accumulo.apache.org>.
Hello,
Thanks for the feedback. I changed instance.volumes from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo to hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo, and I have made progress since.

I also ran 'accumulo init --upload-accumulo-props' first before running the manager and other services.
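
For the record, the sequence that worked for me looks roughly like this (a sketch; the volume URI is specific to my Helm deployment, and I am assuming a fresh instance):

  instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo

and then:

  accumulo init
  accumulo init --upload-accumulo-props
  accumulo tserver    (start tservers first, then manager and the rest)
  accumulo manager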

thanks
Ranga

________________________________
From: Ed Coleman <ed...@apache.org>
Sent: Tuesday, January 10, 2023 6:27 PM
To: user@accumulo.apache.org <us...@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

I suspect that your HDFS config / specification may need to be adjusted, but I am not sure.

"accumulo init" should create the necessary accumulo paths - but with your config you may need to manually create the /accumulo/data0 directory (if that's what it is) so that init can create the accumulo root from there.

About the configs in ZooKeeper:

In 2.1 the configuration is stored on the config node. You could use a ZooKeeper client stat command and see that it has a non-zero data length. If you do a zkCli get, it returns a binary array (the values are compressed). There is a utility to dump the config:

accumulo zoo-info-viewer --print-props

The viewer also has other options that may help.

accumulo zoo-info-viewer --print-instances  (should display the instance ids in ZooKeeper)

accumulo zoo-info-viewer --instanceName    (should be able to find your instance id in hdfs)

accumulo zoo-info-viewer --instanceId  (if you have the instance id)
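
For example, with the stock ZooKeeper CLI (a sketch; the /accumulo/<instance-id>/config znode path is my reading of the description above, so check the actual layout with ls first):

  zkCli.sh -server accumulo-zookeeper:2181
  ls /accumulo
  stat /accumulo/<instance-id>/config
  get /accumulo/<instance-id>/config     (returns the compressed binary blob)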

On 2023/01/10 22:17:13 "Samudrala, Ranganath [USA] via user" wrote:
> instance.volumes: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
>
> So, does it make sense that I am expecting the instance_id folder underneath the /accumulo/data0/accumulo folder?
>
> Thanks
> Ranga
>
> From: Ed Coleman <ed...@apache.org>
> Date: Tuesday, January 10, 2023 at 12:03 PM
> To: user@accumulo.apache.org <us...@accumulo.apache.org>
> Subject: Re: [External] Re: accumulo init error in K8S
> Can you use manager instead of master? It has been renamed to manager, but maybe we missed some old references.
>
> After you run accumulo init, what is in hadoop?
>
> > hadoop fs -ls -R /accumulo
> drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/instance_id
> -rw-r--r--   3 x x         0 2023-01-10 16:49 /accumulo/instance_id/bdcdd3d8-7623-4882-aae7-357a9db2efd4
> drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/tables
> ...
> drwx------   - x x         0 2023-01-10 16:49 /accumulo/version
> drwx------   - x x         0 2023-01-10 16:49 /accumulo/version/10
>
> Running
>
> > accumulo tserver
>
> accumulo tserver
> 2023-01-10T16:53:26,858 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
> 2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Version 2.1.1-SNAPSHOT
> 2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:53:27,816 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
> 2023-01-10T16:53:27,931 [server.ServerContext] INFO : tserver starting
> 2023-01-10T16:53:27,931 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:53:27,933 [server.ServerContext] INFO : Data Version 10
>
> When starting a manager / master - are you seeing:
>
> 2023-01-10T16:57:26,125 [balancer.TableLoadBalancer] INFO : Loaded class org.apache.accumulo.core.spi.balancer.SimpleLoadBalancer for table +r
> 2023-01-10T16:57:26,126 [balancer.SimpleLoadBalancer] WARN : Not balancing because we don't have any tservers.
>
> tservers should be started first, before the other management processes.
>
> The initial manager start-up should look like:
>
> > accumulo manager
> 2023-01-10T16:56:43,649 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
> 2023-01-10T16:56:44,581 [manager.Manager] INFO : Version 2.1.1-SNAPSHOT
> 2023-01-10T16:56:44,582 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:56:44,627 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
> 2023-01-10T16:56:44,742 [server.ServerContext] INFO : manager starting
> 2023-01-10T16:56:44,742 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:56:44,745 [server.ServerContext] INFO : Data Version 10
> 2023-01-10T16:56:44,745 [server.ServerContext] INFO : Attempting to talk to zookeeper
> 2023-01-10T16:56:44,746 [server.ServerContext] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
> 2023-01-10T16:56:44,761 [server.ServerContext] INFO : Connected to HDFS
>
> And then key things to look for after the config dump:
>
> 2023-01-10T16:56:44,802 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:56:44,825 [manager.Manager] INFO : SASL is not enabled, delegation tokens will not be available
> 2023-01-10T16:56:44,872 [metrics.MetricsUtil] INFO : Metric producer ThriftMetrics initialize
> 2023-01-10T16:56:44,888 [manager.Manager] INFO : Started Manager client service at ip-10-113-15-42.evoforge.org:9999
> 2023-01-10T16:56:44,890 [manager.Manager] INFO : trying to get manager lock
> 2023-01-10T16:56:44,900 [manager.EventCoordinator] INFO : State changed from INITIAL to HAVE_LOCK
>
>
>
>
> On 2023/01/10 16:22:11 "Samudrala, Ranganath [USA] via user" wrote:
> > Yes, I am trying to set up Accumulo v2.1.0 with Hadoop v3.3.4.
> > ________________________________
> > From: Samudrala, Ranganath [USA] <Sa...@bah.com>
> > Sent: Tuesday, January 10, 2023 11:21 AM
> > To: user@accumulo.apache.org <us...@accumulo.apache.org>
> > Subject: Re: [External] Re: accumulo init error in K8S
> >
> > I am starting these services manually, one at a time. For example, after 'accumulo init' completed, I ran 'accumulo master' and I get this error:
> >
> > bash-5.1$ accumulo master
> > 2023-01-10T15:53:30,143 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
> > 2023-01-10T15:53:30,207 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Create 2nd tier ClassLoader using URLs: []
> > 2023-01-10T15:53:30,372 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
> > 2023-01-10T15:53:30,379 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
> > 2023-01-10T15:53:30,560 [master] [org.apache.accumulo.core.conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
> > 2023-01-10T15:53:30,736 [master] [org.apache.hadoop.util.Shell] DEBUG: setsid exited with exit code 0
> > 2023-01-10T15:53:30,780 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
> > 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
> > 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
> > 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
> > 2023-01-10T15:53:30,785 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
> > 2023-01-10T15:53:30,789 [master] [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] DEBUG: UgiMetrics, User and group related metrics
> > 2023-01-10T15:53:30,808 [master] [org.apache.hadoop.security.SecurityUtil] DEBUG: Setting hadoop.security.token.service.use_ip to true
> > 2023-01-10T15:53:30,827 [master] [org.apache.hadoop.security.Groups] DEBUG: Creating new Groups object
> > 2023-01-10T15:53:30,829 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Trying to load the custom-built native-hadoop library...
> > 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Loaded the native-hadoop library
> > 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMapping] DEBUG: Using JniBasedUnixGroupsMapping for Group resolution
> > 2023-01-10T15:53:30,831 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> > 2023-01-10T15:53:30,854 [master] [org.apache.hadoop.security.Groups] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> > 2023-01-10T15:53:30,869 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Hadoop login
> > 2023-01-10T15:53:30,870 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: hadoop login commit
> > 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Using user: "accumulo" with name: accumulo
> > 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: User entry: "accumulo"
> > 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: UGI loginUser: accumulo (auth:SIMPLE)
> > 2023-01-10T15:53:30,872 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > 2023-01-10T15:53:30,873 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
> > 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Loading filesystems
> > 2023-01-10T15:53:30,887 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,892 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,894 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,896 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,897 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,905 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,912 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,913 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
> > 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking for FS supporting hdfs
> > 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: looking for configuration option fs.hdfs.impl
> > 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking in service filesystems for implementation class
> > 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.use.legacy.blockreader.local = false
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.read.shortcircuit = false
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.domain.socket.data.traffic = false
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.domain.socket.path =
> > 2023-01-10T15:53:30,980 [master] [org.apache.hadoop.hdfs.DFSClient] DEBUG: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
> > 2023-01-10T15:53:30,990 [master] [org.apache.hadoop.io.retry.RetryUtils] DEBUG: multipleLinearRandomRetry = null
> > 2023-01-10T15:53:31,011 [master] [org.apache.hadoop.ipc.Server] DEBUG: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
> > ...
> > ... LONG PAUSE HERE - ALMOST 10 minutes
> > ...
> > 2023-01-10T16:03:52,316 [master] [org.apache.hadoop.ipc.Client] DEBUG: getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,679 [client DomainSocketWatcher] [org.apache.hadoop.net.unix.DomainSocketWatcher] DEBUG: org.apache.hadoop.net.unix.DomainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
> > 2023-01-10T16:03:52,686 [master] [org.apache.hadoop.util.PerformanceAdvisory] DEBUG: Both short-circuit local reads and UNIX domain socket are disabled.
> > 2023-01-10T16:03:52,694 [master] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil] DEBUG: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
> > 2023-01-10T16:03:52,697 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 10:21.822s
> > 2023-01-10T16:03:52,718 [master] [org.apache.accumulo.core.conf.ConfigurationTypeHelper] DEBUG: Loaded class : org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> > 2023-01-10T16:03:52,761 [master] [org.apache.hadoop.ipc.Client] DEBUG: The ping interval is 60000 ms.
> > 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> > 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> > 2023-01-10T16:03:52,787 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having connections 1
> > 2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
> > 2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
> > 2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
> > 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > Thread 'master' died.
> > java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> >         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218)
> >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102)
> >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
> >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
> >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
> >         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
> >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> >         at java.base/java.lang.Thread.run(Thread.java:829)
> > 2023-01-10T16:03:52,808 [master] [org.apache.accumulo.start.Main] ERROR: Thread 'master' died.
> > java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> >         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> > 2023-01-10T16:03:52,812 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
> > 2023-01-10T16:03:52,814 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,815 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
> > 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
> > 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
> > 2023-01-10T16:03:52,820 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.010 seconds; Timeouts: 0
> > 2023-01-10T16:03:52,843 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.
> >
> > ________________________________
> > From: Ed Coleman <ed...@apache.org>
> > Sent: Tuesday, January 10, 2023 11:17 AM
> > To: user@accumulo.apache.org <us...@accumulo.apache.org>
> > Subject: Re: [External] Re: accumulo init error in K8S
> >
> > Running init does not start the Accumulo services. Are the manager and tserver processes running?
> >
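> > If they are not, they can be started by hand, one per shell (a sketch assuming the standard 2.1 service keywords; your chart may wrap these differently):
> >
> >   accumulo tserver
> >   accumulo manager
> >   accumulo gc
> >   accumulo monitor
> >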
> > I may have missed it, but what version are you trying to use?  2.1?
> >
> > A quick look at the documentation at https://accumulo.apache.org/docs/2.x/administration/in-depth-install#migrating-accumulo-from-non-ha-namenode-to-ha-namenode suggests that add-volumes may not be required if your initial configuration is correct.
> >
> > At this point, logs may help more than stack traces.
> >
> > Ed C
> >
> > On 2023/01/10 16:01:49 "Samudrala, Ranganath [USA] via user" wrote:
> > > Yes, I ran it just now. I had debug enabled, so the prompt for the instance name was hidden; I had to enter a few CRs to see it. Once the prompts for instance name and password were answered, I could see entries for the Accumulo config in ZooKeeper.
> > >
> > > Should I run 'accumulo init --add-volumes' now?
> > >
> > > If I run 'accumulo master', it seems to be hung in this thread:
> > >
> > > "master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s tid=0x000056488630b800 nid=0x90 runnable  [0x00007f5d63753000]
> > >    java.lang.Thread.State: RUNNABLE
> > >         at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
> > >         at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
> > >         - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
> > >         at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
> > >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
> > >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> > >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> > >         at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> > >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
> > >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
> > >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> > >         at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
> > >         at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
> > >         - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
> > >         at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
> > >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
> > >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
> > >         at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
> > >         at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
> > >         at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
> > >         at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
> > >         at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
> > >         at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
> > >         at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
> > >         at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
> > >         - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
> > >         at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
> > >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
> > >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
> > >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
> > >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
> > >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
> > >         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
> > >         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
> > >         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
> > >         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
> > >         at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
> > >         at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
> > >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
> > >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
> > >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
> > >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
> > >         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
> > >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
> > >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> > >         at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown Source)
> > >         at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
> > >
> > >
> > >
> > > I will wait and see if there is more log output.
> > >
> > > Thanks
> > > Ranga
> > >
> > > ________________________________
> > > From: Ed Coleman <ed...@apache.org>
> > > Sent: Tuesday, January 10, 2023 10:16 AM
> > > To: user@accumulo.apache.org <us...@accumulo.apache.org>
> > > Subject: [External] Re: accumulo init error in K8S
> > >
> > > Have you tried running accumulo init without --add-volumes? From your attached log, it looks like it cannot find a valid instance id:
> > >
> > > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > > Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > > unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > >
> > >
> > > On 2023/01/10 14:21:29 "Samudrala, Ranganath [USA] via user" wrote:
> > >
> >
>

Re: [External] Re: accumulo init error in K8S

Posted by Ed Coleman <ed...@apache.org>.
I suspect that your hdfs config / specification may need adjusted, but not sure.

"accumulo init" should create the necessary accumulo paths - but with your config you may need to manually create the /accumulo/data0 directory (if that's what it is) so that init can create the accumulo root from there.

About the configs in ZooKeeper:

In 2.1 the configuration is stored on the config node.  You could use a ZooKeeper client stat command and see that it has a non-zero data length.  If you do a zkCli get, it returns a binary array (the values are compressed).  There is a utility to dump the config

accumulo zoo-info-viewer --print-props

The viewer also has other options that may help.

accumulo -zoo-info-viewer  --print-instances  (should display the instance ids in ZooKeeper)

accumulo zoo-info-viewer --instanceName    (should be able to find your instance id in hdfs)

accumulo zoo-info-viewer --instanceId  (if you have the instance id)

On 2023/01/10 22:17:13 "Samudrala, Ranganath [USA] via user" wrote:
> Instance.volumes: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> 
> So, does it make sense that I am expecting instance_id folder underneath /accumulo/data0/accumulo folder.
> 
> Thanks
> Ranga
> 
> From: Ed Coleman <ed...@apache.org>
> Date: Tuesday, January 10, 2023 at 12:03 PM
> To: user@accumulo.apache.org <us...@accumulo.apache.org>
> Subject: Re: [External] Re: accumulo init error in K8S
> Can you use manager instead of master - it has been renamed to manager, but maybe we missed some old references.
> 
> After you run accumulo init, what is in hadoop?
> 
> > hadoop fs -ls -R /accumulo
> drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/instance_id
> -rw-r--r--   3 x x         0 2023-01-10 16:49 /accumulo/instance_id/bdcdd3d8-7623-4882-aae7-357a9db2efd4
> drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/tables
> ...
> drwx------   - x x         0 2023-01-10 16:49 /accumulo/version
> drwx------   - x x         0 2023-01-10 16:49 /accumulo/version/10
> 
> Running
> 
> > accumulo tserver
> 
> accumulo tserver
> 2023-01-10T16:53:26,858 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
> 2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Version 2.1.1-SNAPSHOT
> 2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:53:27,816 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
> 2023-01-10T16:53:27,931 [server.ServerContext] INFO : tserver starting
> 2023-01-10T16:53:27,931 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:53:27,933 [server.ServerContext] INFO : Data Version 10
> 
> When starting a manager / master - are you seeing:
> 
> 2023-01-10T16:57:26,125 [balancer.TableLoadBalancer] INFO : Loaded class org.apache.accumulo.core.spi.balancer.SimpleLoadBalancer for table +r
> 2023-01-10T16:57:26,126 [balancer.SimpleLoadBalancer] WARN : Not balancing because we don't have any tservers.
> 
> tservers should be started first, before the other management processes.
> 
> The initial manager start-up should look like:
> 
> > accumulo manager
> 2023-01-10T16:56:43,649 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
> 2023-01-10T16:56:44,581 [manager.Manager] INFO : Version 2.1.1-SNAPSHOT
> 2023-01-10T16:56:44,582 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:56:44,627 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
> 2023-01-10T16:56:44,742 [server.ServerContext] INFO : manager starting
> 2023-01-10T16:56:44,742 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:56:44,745 [server.ServerContext] INFO : Data Version 10
> 2023-01-10T16:56:44,745 [server.ServerContext] INFO : Attempting to talk to zookeeper
> 2023-01-10T16:56:44,746 [server.ServerContext] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
> 2023-01-10T16:56:44,761 [server.ServerContext] INFO : Connected to HDFS
> 
> And then key things to look for after the config dump:
> 
> 023-01-10T16:56:44,802 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
> 2023-01-10T16:56:44,825 [manager.Manager] INFO : SASL is not enabled, delegation tokens will not be available
> 2023-01-10T16:56:44,872 [metrics.MetricsUtil] INFO : Metric producer ThriftMetrics initialize
> 2023-01-10T16:56:44,888 [manager.Manager] INFO : Started Manager client service at ip-10-113-15-42.evoforge.org:9999
> 2023-01-10T16:56:44,890 [manager.Manager] INFO : trying to get manager lock
> 2023-01-10T16:56:44,900 [manager.EventCoordinator] INFO : State changed from INITIAL to HAVE_LOCK
> 
> 
> 
> 
> On 2023/01/10 16:22:11 "Samudrala, Ranganath [USA] via user" wrote:
> > Yes, I am trying to set Accumulo v2.1.0 with Hadoop v3.3.4.
> > ________________________________
> > From: Samudrala, Ranganath [USA] <Sa...@bah.com>
> > Sent: Tuesday, January 10, 2023 11:21 AM
> > To: user@accumulo.apache.org <us...@accumulo.apache.org>
> > Subject: Re: [External] Re: accumulo init error in K8S
> >
> > I am starting these services manually, one at a time. For example, after 'accumulo init' completed, I ran 'accumulo master' I get this error:
> >
> > bash-5.1$ accumulo master
> > 2023-01-10T15:53:30,143 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
> > 2023-01-10T15:53:30,207 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Create 2nd tier ClassLoader using URLs: []
> > 2023-01-10T15:53:30,372 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
> > 2023-01-10T15:53:30,379 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
> > 2023-01-10T15:53:30,560 [master] [org.apache.accumulo.core.conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
> > 2023-01-10T15:53:30,736 [master] [org.apache.hadoop.util.Shell] DEBUG: setsid exited with exit code 0
> > 2023-01-10T15:53:30,780 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
> > 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
> > 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
> > 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
> > 2023-01-10T15:53:30,785 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
> > 2023-01-10T15:53:30,789 [master] [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] DEBUG: UgiMetrics, User and group related metrics
> > 2023-01-10T15:53:30,808 [master] [org.apache.hadoop.security.SecurityUtil] DEBUG: Setting hadoop.security.token.service.use_ip to true
> > 2023-01-10T15:53:30,827 [master] [org.apache.hadoop.security.Groups] DEBUG: Creating new Groups object
> > 2023-01-10T15:53:30,829 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Trying to load the custom-built native-hadoop library...
> > 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Loaded the native-hadoop library
> > 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMapping] DEBUG: Using JniBasedUnixGroupsMapping for Group resolution
> > 2023-01-10T15:53:30,831 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> > 2023-01-10T15:53:30,854 [master] [org.apache.hadoop.security.Groups] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> > 2023-01-10T15:53:30,869 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Hadoop login
> > 2023-01-10T15:53:30,870 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: hadoop login commit
> > 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Using user: "accumulo" with name: accumulo
> > 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: User entry: "accumulo"
> > 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: UGI loginUser: accumulo (auth:SIMPLE)
> > 2023-01-10T15:53:30,872 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > 2023-01-10T15:53:30,873 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
> > 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Loading filesystems
> > 2023-01-10T15:53:30,887 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,892 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,894 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,896 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,897 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,905 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,912 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,913 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> > 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
> > 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking for FS supporting hdfs
> > 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: looking for configuration option fs.hdfs.impl
> > 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking in service filesystems for implementation class
> > 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.use.legacy.blockreader.local = false
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.read.shortcircuit = false
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.domain.socket.data.traffic = false
> > 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.domain.socket.path =
> > 2023-01-10T15:53:30,980 [master] [org.apache.hadoop.hdfs.DFSClient] DEBUG: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
> > 2023-01-10T15:53:30,990 [master] [org.apache.hadoop.io.retry.RetryUtils] DEBUG: multipleLinearRandomRetry = null
> > 2023-01-10T15:53:31,011 [master] [org.apache.hadoop.ipc.Server] DEBUG: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
> > ...
> > ... LONG PAUSE HERE - ALMOST 10 minutes
> > ...
> > 2023-01-10T16:03:52,316 [master] [org.apache.hadoop.ipc.Client] DEBUG: getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,679 [client DomainSocketWatcher] [org.apache.hadoop.net.unix.DomainSocketWatcher] DEBUG: org.apache.hadoop.net.unix.DomainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
> > 2023-01-10T16:03:52,686 [master] [org.apache.hadoop.util.PerformanceAdvisory] DEBUG: Both short-circuit local reads and UNIX domain socket are disabled.
> > 2023-01-10T16:03:52,694 [master] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil] DEBUG: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
> > 2023-01-10T16:03:52,697 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 10:21.822s
> > 2023-01-10T16:03:52,718 [master] [org.apache.accumulo.core.conf.ConfigurationTypeHelper] DEBUG: Loaded class : org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> > 2023-01-10T16:03:52,761 [master] [org.apache.hadoop.ipc.Client] DEBUG: The ping interval is 60000 ms.
> > 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> > 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> > 2023-01-10T16:03:52,787 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having connections 1
> > 2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
> > 2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
> > 2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
> > 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > 2023-01-10T16:03:52,808 [master] [org.apache.accumulo.start.Main] ERROR: Thread 'master' died.
> > java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> >         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> > 2023-01-10T16:03:52,812 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
> > 2023-01-10T16:03:52,814 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,815 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
> > 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
> > 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
> > 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
> > 2023-01-10T16:03:52,820 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.010 seconds; Timeouts: 0
> > 2023-01-10T16:03:52,843 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.
> >
> > ________________________________
> > From: Ed Coleman <ed...@apache.org>
> > Sent: Tuesday, January 10, 2023 11:17 AM
> > To: user@accumulo.apache.org <us...@accumulo.apache.org>
> > Subject: Re: [External] Re: accumulo init error in K8S
> >
> > Running init does not start the Accumulo services. Are the manager and tserver processes running?
> >
> > I may have missed it, but what version are you trying to use?  2.1?
> >
> > From a quick look at the documentation at https://accumulo.apache.org/docs/2.x/administration/in-depth-install#migrating-accumulo-from-non-ha-namenode-to-ha-namenode, I would assume that add-volumes may not be required if your initial configuration is correct.
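> > 
> > For reference, the add-volumes flow is roughly this (a sketch only; the namenode hosts below are placeholders, and the new volume is appended to instance.volumes before re-running init):
> > 
> >   # accumulo.properties: keep the existing volume and append the new one
> >   instance.volumes=hdfs://nn-old:8020/accumulo,hdfs://nn-new:8020/accumulo
> > 
> >   accumulo init --add-volumes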
> >
> > At this point, logs may help more than stack traces.
> >
> > Ed C
> >
> > On 2023/01/10 16:01:49 "Samudrala, Ranganath [USA] via user" wrote:
> > > Yes, I ran it just now. I had debug enabled, so the prompt for the instance name was hidden; I had to enter a few CRs to see it. Once the prompts for the instance name and password were answered, I could see entries for the Accumulo config in ZooKeeper.
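> > >
> > > As a sanity check (assuming the standard ZooKeeper CLI is on hand), those entries should be visible with something like 'zkCli.sh -server accumulo-zookeeper ls /accumulo'; the instance name should show up under /accumulo/instances.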
> > >
> > > Should I run 'accumulo init --add-volumes' now?
> > >
> > > If I run 'accumulo master', it seems to be hung in this thread:
> > >
> > > "master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s tid=0x000056488630b800 nid=0x90 runnable  [0x00007f5d63753000]
> > >    java.lang.Thread.State: RUNNABLE
> > >         at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
> > >         at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
> > >         - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
> > >         at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
> > >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
> > >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> > >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> > >         at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> > >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
> > >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
> > >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> > >         at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
> > >         at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
> > >         - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
> > >         at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
> > >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
> > >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
> > >         at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
> > >         at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
> > >         at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
> > >         at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
> > >         at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
> > >         at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
> > >         at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
> > >         at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
> > >         - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
> > >         at org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
> > >         at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
> > >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
> > >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
> > >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
> > >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
> > >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
> > >         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
> > >         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
> > >         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
> > >         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
> > >         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
> > >         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
> > >         at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
> > >         at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
> > >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
> > >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
> > >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
> > >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
> > >         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
> > >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
> > >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> > >         at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown Source)
> > >         at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
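> > >
> > > A side note on the SecureRandom/PKCS11 frames above: a long stall in Secmod.nssInitialize is commonly an entropy problem inside containers. One mitigation sometimes tried (an assumption on my part, not a confirmed fix for this hang) is:
> > >
> > >   # inside the pod: check how much entropy the kernel has available
> > >   cat /proc/sys/kernel/random/entropy_avail
> > >
> > >   # conf/accumulo-env.sh: point the JVM at the non-blocking random source
> > >   JAVA_OPTS=('-Djava.security.egd=file:/dev/./urandom' "${JAVA_OPTS[@]}")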
> > >
> > >
> > >
> > > I will wait and see when there is more log output.
> > >
> > > Thanks
> > > Ranga
> > >
> > > ________________________________
> > > From: Ed Coleman <ed...@apache.org>
> > > Sent: Tuesday, January 10, 2023 10:16 AM
> > > To: user@accumulo.apache.org <us...@accumulo.apache.org>
> > > Subject: [External] Re: accumulo init error in K8S
> > >
> > > Have you tried running accumulo init without the --add-volumes? From your attached log, it looks like it cannot find a valid instance id:
> > >
> > > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > > Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > > unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > >
> > >
> > > On 2023/01/10 14:21:29 "Samudrala, Ranganath [USA] via user" wrote:
> > > > Hello,
> > > > I am trying to configure Accumulo in K8S using Helm chart. Hadoop and Zookeeper are up and running in the same K8S namespace.
> > > > accumulo.properties is as below:
> > > >
> > > >   instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > > >   general.custom.volume.preferred.default=accumulo
> > > >   instance.zookeeper.host=accumulo-zookeeper
> > > >   # instance.secret=DEFAULT
> > > >   general.volume.chooser=org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> > > >   general.custom.volume.preferred.logger=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > > >   trace.user=tracer
> > > >   trace.password=tracer
> > > >   instance.secret=accumulo
> > > >   tserver.cache.data.size=15M
> > > >   tserver.cache.index.size=40M
> > > >   tserver.memory.maps.max=128M
> > > >   tserver.memory.maps.native.enabled=true
> > > >   tserver.sort.buffer.size=50M
> > > >   tserver.total.mutation.queue.max=16M
> > > >   tserver.walog.max.size=128M
> > > >
> > > > accumulo-client.properties is as below:
> > > >
> > > >  auth.type=password
> > > >  auth.principal=root
> > > >  auth.token=root
> > > >  instance.name=accumulo
> > > >  # For Accumulo >=2.0.0
> > > >  instance.zookeepers=accumulo-zookeeper
> > > >  instance.zookeeper.host=accumulo-zookeeper
> > > >
> > > > When I run 'accumulo init --add-volumes', I see an error as below and what is wrong with the setup?
> > > >
> > > > java.lang.RuntimeException: None of the configured paths are initialized.
> > > >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119)
> > > >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449)
> > > >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543)
> > > >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> > > >         at java.base/java.lang.Thread.run(Thread.java:829)
> > > > 2023-01-09T21:22:13,530 [init] [org.apache.accumulo.start.Main] ERROR: Thread 'init' died.
> > > > java.lang.RuntimeException: None of the configured paths are initialized.
> > > >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> > > >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> > > >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> > > >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> > > >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> > > > Thread 'init' died.
> > > >
> > > > I have attached complete log:
> > > >
> > > >
> > >
> >
> 

Re: [External] Re: accumulo init error in K8S

Posted by "Samudrala, Ranganath [USA] via user" <us...@accumulo.apache.org>.
instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo

So, does it make sense that I should expect the instance_id folder underneath the /accumulo/data0/accumulo folder?

Thanks
Ranga

From: Ed Coleman <ed...@apache.org>
Date: Tuesday, January 10, 2023 at 12:03 PM
To: user@accumulo.apache.org <us...@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S
Can you use manager instead of master - it has been renamed to manager, but maybe we missed some old references.

After you run accumulo init, what is in HDFS?

> hadoop fs -ls -R /accumulo
drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/instance_id
-rw-r--r--   3 x x         0 2023-01-10 16:49 /accumulo/instance_id/bdcdd3d8-7623-4882-aae7-357a9db2efd4
drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/tables
...
drwx------   - x x         0 2023-01-10 16:49 /accumulo/version
drwx------   - x x         0 2023-01-10 16:49 /accumulo/version/10
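
(With instance.volumes pointing at /accumulo/data0/accumulo as in this K8S setup, the equivalent check would presumably be 'hadoop fs -ls -R /accumulo/data0/accumulo', looking for the same instance_id and version entries.)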

Running

> accumulo tserver

accumulo tserver
2023-01-10T16:53:26,858 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Version 2.1.1-SNAPSHOT
2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:53:27,816 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
2023-01-10T16:53:27,931 [server.ServerContext] INFO : tserver starting
2023-01-10T16:53:27,931 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:53:27,933 [server.ServerContext] INFO : Data Version 10

When starting a manager / master - are you seeing:

2023-01-10T16:57:26,125 [balancer.TableLoadBalancer] INFO : Loaded class org.apache.accumulo.core.spi.balancer.SimpleLoadBalancer for table +r
2023-01-10T16:57:26,126 [balancer.SimpleLoadBalancer] WARN : Not balancing because we don't have any tservers.

tservers should be started first, before the other management processes.

The initial manager start-up should look like:

> accumulo manager
2023-01-10T16:56:43,649 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
2023-01-10T16:56:44,581 [manager.Manager] INFO : Version 2.1.1-SNAPSHOT
2023-01-10T16:56:44,582 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:56:44,627 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
2023-01-10T16:56:44,742 [server.ServerContext] INFO : manager starting
2023-01-10T16:56:44,742 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:56:44,745 [server.ServerContext] INFO : Data Version 10
2023-01-10T16:56:44,745 [server.ServerContext] INFO : Attempting to talk to zookeeper
2023-01-10T16:56:44,746 [server.ServerContext] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
2023-01-10T16:56:44,761 [server.ServerContext] INFO : Connected to HDFS

And then key things to look for after the config dump:

023-01-10T16:56:44,802 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:56:44,825 [manager.Manager] INFO : SASL is not enabled, delegation tokens will not be available
2023-01-10T16:56:44,872 [metrics.MetricsUtil] INFO : Metric producer ThriftMetrics initialize
2023-01-10T16:56:44,888 [manager.Manager] INFO : Started Manager client service at ip-10-113-15-42.evoforge.org:9999
2023-01-10T16:56:44,890 [manager.Manager] INFO : trying to get manager lock
2023-01-10T16:56:44,900 [manager.EventCoordinator] INFO : State changed from INITIAL to HAVE_LOCK




On 2023/01/10 16:22:11 "Samudrala, Ranganath [USA] via user" wrote:
> Yes, I am trying to set Accumulo v2.1.0 with Hadoop v3.3.4.
> ________________________________
> From: Samudrala, Ranganath [USA] <Sa...@bah.com>
> Sent: Tuesday, January 10, 2023 11:21 AM
> To: user@accumulo.apache.org <us...@accumulo.apache.org>
> Subject: Re: [External] Re: accumulo init error in K8S
>
> I am starting these services manually, one at a time. For example, after 'accumulo init' completed, I ran 'accumulo master' I get this error:
>
> bash-5.1$ accumulo master
> 2023-01-10T15:53:30,143 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Using Accumulo configuration at /opt/acc
> umulo/conf/accumulo.properties
> Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
> 2023-01-10T15:53:30,207 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Create 2nd tier ClassLoader using URLs:
> []
> Create 2nd tier ClassLoader using URLs: []
> 2023-01-10T15:53:30,372 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for Scheduled Future
>  Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
> Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
> 2023-01-10T15:53:30,379 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for zoo_change_updat
> e with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
> Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
> 2023-01-10T15:53:30,560 [master] [org.apache.accumulo.core.conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /op
> t/accumulo/conf/accumulo.properties
> Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
> 2023-01-10T15:53:30,736 [master] [org.apache.hadoop.util.Shell] DEBUG: setsid exited with exit code 0
> setsid exited with exit code 0
> 2023-01-10T15:53:30,780 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.Mutabl
> eRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(a
> lways=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
> field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org
> .apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"G
> etGroups"})
> 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.Mutabl
> eRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metri
> c(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latenc
> y (milliseconds)"})
> field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @
> org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value=
> {"Rate of failed kerberos logins and latency (milliseconds)"})
> 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.Mutabl
> eRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metri
> c(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and la
> tency (milliseconds)"})
> field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @
> org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value=
> {"Rate of successful kerberos logins and latency (milliseconds)"})
> 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.li
> b.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.a
> nnotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since las
> t successful login"})
> field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures wi
> th annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=
> DEFAULT, value={"Renewal failures since last successful login"})
> 2023-01-10T15:53:30,785 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.li
> b.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metr
> ics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures sin
> ce startup"})
> field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTo
> tal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10,
>  type=DEFAULT, value={"Renewal failures since startup"})
> 2023-01-10T15:53:30,789 [master] [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] DEBUG: UgiMetrics, User and group related metrics
> UgiMetrics, User and group related metrics
> 2023-01-10T15:53:30,808 [master] [org.apache.hadoop.security.SecurityUtil] DEBUG: Setting hadoop.security.token.service.use_ip to true
> Setting hadoop.security.token.service.use_ip to true
> 2023-01-10T15:53:30,827 [master] [org.apache.hadoop.security.Groups] DEBUG:  Creating new Groups object
>  Creating new Groups object
> 2023-01-10T15:53:30,829 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Trying to load the custom-built native-hadoop library...
> Trying to load the custom-built native-hadoop library...
> 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Loaded the native-hadoop library
> Loaded the native-hadoop library
> 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMapping] DEBUG: Using JniBasedUnixGroupsMapping for Group r
> esolution
> Using JniBasedUnixGroupsMapping for Group resolution
> 2023-01-10T15:53:30,831 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] DEBUG: Group mapping impl=org.apache.h
> adoop.security.JniBasedUnixGroupsMapping
> Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> 2023-01-10T15:53:30,854 [master] [org.apache.hadoop.security.Groups] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGrou
> psMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> 2023-01-10T15:53:30,869 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Hadoop login
> Hadoop login
> 2023-01-10T15:53:30,870 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: hadoop login commit
> hadoop login commit
> 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Using user: "accumulo" with name: accumulo
> Using user: "accumulo" with name: accumulo
> 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: User entry: "accumulo"
> User entry: "accumulo"
> 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: UGI loginUser: accumulo (auth:SIMPLE)
> UGI loginUser: accumulo (auth:SIMPLE)
> 2023-01-10T15:53:30,872 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-n
> amenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> 2023-01-10T15:53:30,873 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.
> accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
> Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
> 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-h
> dfs-namenodes:8020/accumulo/data0/accumulo
> Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Loading filesystems
> Loading filesystems
> 2023-01-10T15:53:30,887 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/h
> adoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,892 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem fro
> m /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,894 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hado
> op/share/hadoop/client/hadoop-client-api-3.3.4.jar
> har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,896 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /o
> pt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,897 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from
> /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,905 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem fro
> m /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,912 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem
> from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,913 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSyste
> m from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem
> from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
> s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
> 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking for FS supporting hdfs
> Looking for FS supporting hdfs
> 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: looking for configuration option fs.hdfs.impl
> looking for configuration option fs.hdfs.impl
> 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking in service filesystems for implementation class
> Looking in service filesystems for implementation class
> 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSyste
> m
> FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.use.legacy.blockreader.local = false
> dfs.client.use.legacy.blockreader.local = false
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.read.shortcircuit = false
> dfs.client.read.shortcircuit = false
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.domain.socket.data.traffic = false
> dfs.client.domain.socket.data.traffic = false
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.domain.socket.path =
> dfs.domain.socket.path =
> 2023-01-10T15:53:30,980 [master] [org.apache.hadoop.hdfs.DFSClient] DEBUG: Sets dfs.client.block.write.replace-datanode-on-failure.min-rep
> lication to 0
> Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
> 2023-01-10T15:53:30,990 [master] [org.apache.hadoop.io.retry.RetryUtils] DEBUG: multipleLinearRandomRetry = null
> multipleLinearRandomRetry = null
> 2023-01-10T15:53:31,011 [master] [org.apache.hadoop.ipc.Server] DEBUG: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apach
> e.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
> rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apac
> he.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
> ...
> ... LONG PAUSE HERE - ALMOST 10 minutes
> ...
> 2023-01-10T16:03:52,316 [master] [org.apache.hadoop.ipc.Client] DEBUG: getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e74
> 2
> getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,679 [client DomainSocketWatcher] [org.apache.hadoop.net.unix.DomainSocketWatcher] DEBUG: org.apache.hadoop.net.unix.Do
> mainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
> org.apache.hadoop.net.unix.DomainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
> 2023-01-10T16:03:52,686 [master] [org.apache.hadoop.util.PerformanceAdvisory] DEBUG: Both short-circuit local reads and UNIX domain socket
>  are disabled.
> Both short-circuit local reads and UNIX domain socket are disabled.
> 2023-01-10T16:03:52,694 [master] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil] DEBUG: DataTransferProtocol not
> using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
> DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
> 2023-01-10T16:03:52,697 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-nameno
> des:8020/accumulo/data0/accumulo: duration 10:21.822s
> Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 10:21.822s
> 2023-01-10T16:03:52,718 [master] [org.apache.accumulo.core.conf.ConfigurationTypeHelper] DEBUG: Loaded class : org.apache.accumulo.core.sp
> i.fs.PreferredVolumeChooser
> Loaded class : org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> 2023-01-10T16:03:52,761 [master] [org.apache.hadoop.ipc.Client] DEBUG: The ping interval is 60000 ms.
> The ping interval is 60000 ms.
> 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> 2023-01-10T16:03:52,787 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having connections 1
> IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having connections 1
> 2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
> IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
> 2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
> IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
> 2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
> Call: getListing took 72ms
> 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> Thread 'master' died.
> java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
>         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218)
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102)
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2023-01-10T16:03:52,808 [master] [org.apache.accumulo.start.Main] ERROR: Thread 'master' died.
> java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
>         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
>         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> Thread 'master' died.
> java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
>         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
>         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> 2023-01-10T16:03:52,812 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
> FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
> 2023-01-10T16:03:52,814 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,815 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
> stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
> Stopping client
> 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
> IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
> 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
> IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
> 2023-01-10T16:03:52,820 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.010 seconds; Timeouts: 0
> Completed shutdown in 0.010 seconds; Timeouts: 0
> 2023-01-10T16:03:52,843 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.
> ShutdownHookManager completed shutdown.
>
> ________________________________
> From: Ed Coleman <ed...@apache.org>
> Sent: Tuesday, January 10, 2023 11:17 AM
> To: user@accumulo.apache.org <us...@accumulo.apache.org>
> Subject: Re: [External] Re: accumulo init error in K8S
>
> Running init does not start the Accumulo services.  Are the manager and the tserver processes running?
>
> I may have missed it, but what version are you trying to use?  2.1?
>
> A quick look at the documentation at https://accumulo.apache.org/docs/2.x/administration/in-depth-install#migrating-accumulo-from-non-ha-namenode-to-ha-namenode suggests that add-volumes may not be required if your initial configuration is correct.
>
> At this point, logs may help more than stack traces.
>
> Ed C
>
> On 2023/01/10 16:01:49 "Samudrala, Ranganath [USA] via user" wrote:
> > Yes, I ran it just now. I had debug enabled, so the prompt for instance name was hidden. I had to enter a few CRs to see the prompt. Once the prompts for instance name and password were answered, I can see entries for the accumulo config in the zookeeper.
> >
> > Should I run 'accumulo init --add-volumes' now?
> >
> > If I run 'accumulo master', it seems to be hung in this thread:
> >
> > "master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s tid=0x000056488630b800 nid=0x90 runnable  [0x00007f5d63753000]
> >    java.lang.Thread.State: RUNNABLE
> >         at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
> >         at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
> >         - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
> >         at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
> >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
> >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> >         at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
> >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
> >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> >         at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
> >         at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
> >         - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
> >         at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
> >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
> >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
> >         at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
> >         at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
> >         at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
> >         at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
> >         at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
> >         at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
> >         at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
> >         at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
> >         - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
> >         at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
> >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
> >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
> >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
> >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
> >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
> >         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
> >         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
> >         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
> >         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
> >         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
> >         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
> >         at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
> >         at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
> >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
> >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
> >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
> >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
> >         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
> >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> >         at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown Source)
> >         at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
> >
> >
> >
> > I will wait and see when there is more log output.
> >
> > Thanks
> > Ranga
> >
> > ________________________________
> > From: Ed Coleman <ed...@apache.org>
> > Sent: Tuesday, January 10, 2023 10:16 AM
> > To: user@accumulo.apache.org <us...@accumulo.apache.org>
> > Subject: [External] Re: accumulo init error in K8S
> >
> > Have you tried running accumulo init without the --add-volumes?  From your attached log it looks like it cannot find a valid instance id
> >
> > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> >
> >
> > On 2023/01/10 14:21:29 "Samudrala, Ranganath [USA] via user" wrote:
> > > Hello,
> > > I am trying to configure Accumulo in K8S using Helm chart. Hadoop and Zookeeper are up and running in the same K8S namespace.
> > > accumulo.properties is as below:
> > >
> > >   instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > >   general.custom.volume.preferred.default=accumulo
> > >   instance.zookeeper.host=accumulo-zookeeper
> > >   # instance.secret=DEFAULT
> > >   general.volume.chooser=org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> > >   general.custom.volume.preferred.logger=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> > >   trace.user=tracer
> > >   trace.password=tracer
> > >   instance.secret=accumulo
> > >   tserver.cache.data.size=15M
> > >   tserver.cache.index.size=40M
> > >   tserver.memory.maps.max=128M
> > >   tserver.memory.maps.native.enabled=true
> > >   tserver.sort.buffer.size=50M
> > >   tserver.total.mutation.queue.max=16M
> > >   tserver.walog.max.size=128M
> > >
> > > accumulo-client.properties is as below:
> > >
> > >  auth.type=password
> > >  auth.principal=root
> > >  auth.token=root
> > >  instance.name=accumulo
> > >  # For Accumulo >=2.0.0
> > >  instance.zookeepers=accumulo-zookeeper
> > >  instance.zookeeper.host=accumulo-zookeeper
> > >
> > > When I run 'accumulo init --add-volumes', I see an error as below and what is wrong with the setup?
> > >
> > > java.lang.RuntimeException: None of the configured paths are initialized.
> > >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119)
> > >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449)
> > >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543)
> > >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> > >         at java.base/java.lang.Thread.run(Thread.java:829)
> > > 2023-01-09T21:22:13,530 [init] [org.apache.accumulo.start.Main] ERROR: Thread 'init' died.
> > > java.lang.RuntimeException: None of the configured paths are initialized.
> > >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> > >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> > >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> > >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> > >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> > > Thread 'init' died.
> > >
> > > I have attached complete log:
> > >
> > >
> >
>

Re: [External] Re: accumulo init error in K8S

Posted by "Samudrala, Ranganath [USA] via user" <us...@accumulo.apache.org>.
1.  I do not see config entries in ZooKeeper, even if I invoke 'accumulo init --upload-accumulo-props'.
2.  I do not see any folder created in HDFS.
3.  Even if I create /accumulo in HDFS using the command 'hdfs dfs -mkdir /accumulo', nothing is created under that folder.

====== Do we need to pre-create the accumulo folder in HDFS? ======

$ hdfs dfs -ls /accumulo
2023-01-10 21:05:23,794 DEBUG [main] [org.apache.hadoop.hdfs.client.impl.DfsClientConf]: dfs.client.use.legacy.blockreader.local = false
2023-01-10 21:05:23,795 DEBUG [main] [org.apache.hadoop.hdfs.client.impl.DfsClientConf]: dfs.client.read.shortcircuit = false
2023-01-10 21:05:23,795 DEBUG [main] [org.apache.hadoop.hdfs.client.impl.DfsClientConf]: dfs.client.domain.socket.data.traffic = false
2023-01-10 21:05:23,795 DEBUG [main] [org.apache.hadoop.hdfs.client.impl.DfsClientConf]: dfs.domain.socket.path =
2023-01-10 21:05:23,810 DEBUG [main] [org.apache.hadoop.hdfs.DFSClient]: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
2023-01-10 21:05:24,405 DEBUG [main] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil]: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
ls: `/accumulo': No such file or directory
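
For reference, a minimal sketch of pre-creating the volume path before running init (this assumes the Accumulo services run as the OS user 'accumulo' and that the path matches instance.volumes; adjust the ownership to whatever your images actually use):

$ hdfs dfs -mkdir -p /accumulo/data0/accumulo
$ hdfs dfs -chown -R accumulo /accumulo
$ hdfs dfs -ls -d /accumulo/data0/accumulo

Normally 'accumulo init' creates the instance_id, tables, and version directories itself, so pre-creating mainly matters when the accumulo user is not allowed to create /accumulo at the HDFS root.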

ZooKeeper has a config folder, but it is empty:
[zk: localhost:2181(CONNECTED) 2] ls /
[accumulo, zookeeper]
[zk: localhost:2181(CONNECTED) 3] ls /accumulo
[a308c828-53dc-4f6f-a0d8-e177f37920df]
[zk: localhost:2181(CONNECTED) 4] ls /accumulo/a308c828-53dc-4f6f-a0d8-e177f37920df
[config]
[zk: localhost:2181(CONNECTED) 5] ls /accumulo/a308c828-53dc-4f6f-a0d8-e177f37920df/config
Insufficient permission : /accumulo/a308c828-53dc-4f6f-a0d8-e177f37920df/config
[zk: localhost:2181(CONNECTED) 6] addauth digest accumulo:accumulo
[zk: localhost:2181(CONNECTED) 7] ls /accumulo/a308c828-53dc-4f6f-a0d8-e177f37920df/config
[]
[zk: localhost:2181(CONNECTED) 8] ls /accumulo/a308c828-53dc-4f6f-a0d8-e177f37920df/config
[]
[zk: localhost:2181(CONNECTED) 9]
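
As a cross-check (assuming the standard ZooKeeper layout, where /accumulo/instances/<name> maps an instance name to its instance id), the following should show whether init registered the instance name:

[zk: localhost:2181(CONNECTED) 0] ls /accumulo/instances
[zk: localhost:2181(CONNECTED) 1] get /accumulo/instances/accumulo

If init completed, the id returned should match the znode seen above, i.e. a308c828-53dc-4f6f-a0d8-e177f37920df.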

Starting the tserver causes it to exit soon after:

$ accumulo tserver
.
.

2023-01-10T21:13:29,735 [tserver] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 81ms
Call: getListing took 81ms
2023-01-10T21:13:29,739 [tserver] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
2023-01-10T21:13:29,739 [tserver] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
Thread 'tserver' died.
java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
                at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218)
                at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102)
                at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
                at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
                at org.apache.accumulo.tserver.TabletServer.<init>(TabletServer.java:249)
                at org.apache.accumulo.tserver.TabletServer.main(TabletServer.java:243)
                at org.apache.accumulo.tserver.TServerExecutable.execute(TServerExecutable.java:45)
                at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
                at java.base/java.lang.Thread.run(Thread.java:829)
2023-01-10T21:13:29,741 [tserver] [org.apache.accumulo.start.Main] ERROR: Thread 'tserver' died.
java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
                at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
                at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
                at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
                at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
                at org.apache.accumulo.tserver.TabletServer.<init>(TabletServer.java:249) ~[accumulo-tserver-2.1.0.jar:2.1.0]
                at org.apache.accumulo.tserver.TabletServer.main(TabletServer.java:243) ~[accumulo-tserver-2.1.0.jar:2.1.0]
                at org.apache.accumulo.tserver.TServerExecutable.execute(TServerExecutable.java:45) ~[accumulo-tserver-2.1.0.jar:2.1.0]
                at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
                at java.lang.Thread.run(Thread.java:829) ~[?:?]
Thread 'tserver' died.
java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
                at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
                at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
                at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
                at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
                at org.apache.accumulo.tserver.TabletServer.<init>(TabletServer.java:249) ~[accumulo-tserver-2.1.0.jar:2.1.0]
                at org.apache.accumulo.tserver.TabletServer.main(TabletServer.java:243) ~[accumulo-tserver-2.1.0.jar:2.1.0]
                at org.apache.accumulo.tserver.TServerExecutable.execute(TServerExecutable.java:45) ~[accumulo-tserver-2.1.0.jar:2.1.0]
                at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
                at java.lang.Thread.run(Thread.java:829) ~[?:?]
2023-01-10T21:13:29,745 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 1bb6253
FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 1bb6253
2023-01-10T21:13:29,747 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-46aef4cb0e6148e5b10e953995d97169
stopping client from cache: Client-46aef4cb0e6148e5b10e953995d97169
2023-01-10T21:13:29,747 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-46aef4cb0e6148e5b10e953995d97169
removing client from cache: Client-46aef4cb0e6148e5b10e953995d97169
2023-01-10T21:13:29,748 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-46aef4cb0e6148e5b10e953995d97169
stopping actual client because no more references remain: Client-46aef4cb0e6148e5b10e953995d97169
2023-01-10T21:13:29,748 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
Stopping client
2023-01-10T21:13:29,749 [IPC Client (892990358) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.121:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (892990358) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.121:8020 from accumulo: closed
IPC Client (892990358) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.121:8020 from accumulo: closed
2023-01-10T21:13:29,749 [IPC Client (892990358) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.121:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (892990358) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.121:8020 from accumulo: stopped, remaining connections 0
IPC Client (892990358) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.121:8020 from accumulo: stopped, remaining connections 0
2023-01-10T21:13:29,751 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.009 seconds; Timeouts: 0
Completed shutdown in 0.009 seconds; Timeouts: 0
2023-01-10T21:13:29,774 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.
ShutdownHookManager completed shutdown.
bash-5.1
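
An aside on the ~10 minute pauses and the sun.security.pkcs11.Secmod.nssInitialize frames in the thread dumps in this thread: one workaround that is sometimes suggested, shown here only as a sketch, is to keep the JVM from loading the NSS-backed PKCS11 provider by commenting out its entry in $JAVA_HOME/conf/security/java.security:

# the provider number <N> varies by JDK build; this line is illustrative only
#security.provider.<N>=SunPKCS11

Whether this is appropriate depends on the JDK build inside the container image, so treat it as an experiment rather than a fix.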
From: Ed Coleman <ed...@apache.org>
Date: Tuesday, January 10, 2023 at 12:03 PM
To: user@accumulo.apache.org <us...@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S
Can you use manager instead of master - it has been renamed to manager, but maybe we missed some old references.

After you run accumulo init, what is in hadoop?

> hadoop fs -ls -R /accumulo
drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/instance_id
-rw-r--r--   3 x x         0 2023-01-10 16:49 /accumulo/instance_id/bdcdd3d8-7623-4882-aae7-357a9db2efd4
drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/tables
...
drwx------   - x x         0 2023-01-10 16:49 /accumulo/version
drwx------   - x x         0 2023-01-10 16:49 /accumulo/version/10

Running

> accumulo tserver

accumulo tserver
2023-01-10T16:53:26,858 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Version 2.1.1-SNAPSHOT
2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:53:27,816 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
2023-01-10T16:53:27,931 [server.ServerContext] INFO : tserver starting
2023-01-10T16:53:27,931 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:53:27,933 [server.ServerContext] INFO : Data Version 10

When starting a manager / master - are you seeing:

2023-01-10T16:57:26,125 [balancer.TableLoadBalancer] INFO : Loaded class org.apache.accumulo.core.spi.balancer.SimpleLoadBalancer for table +r
2023-01-10T16:57:26,126 [balancer.SimpleLoadBalancer] WARN : Not balancing because we don't have any tservers.

tservers should be started first, before the other management processes.
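
As a sketch, assuming the standard 2.1 service keywords, a minimal manual start order on one node would be:

> accumulo tserver &
> accumulo manager &
> accumulo gc &
> accumulo monitor &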

The initial manager start-up should look like:

> accumulo manager
2023-01-10T16:56:43,649 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
2023-01-10T16:56:44,581 [manager.Manager] INFO : Version 2.1.1-SNAPSHOT
2023-01-10T16:56:44,582 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:56:44,627 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
2023-01-10T16:56:44,742 [server.ServerContext] INFO : manager starting
2023-01-10T16:56:44,742 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:56:44,745 [server.ServerContext] INFO : Data Version 10
2023-01-10T16:56:44,745 [server.ServerContext] INFO : Attempting to talk to zookeeper
2023-01-10T16:56:44,746 [server.ServerContext] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
2023-01-10T16:56:44,761 [server.ServerContext] INFO : Connected to HDFS

And then key things to look for after the config dump:

2023-01-10T16:56:44,802 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:56:44,825 [manager.Manager] INFO : SASL is not enabled, delegation tokens will not be available
2023-01-10T16:56:44,872 [metrics.MetricsUtil] INFO : Metric producer ThriftMetrics initialize
2023-01-10T16:56:44,888 [manager.Manager] INFO : Started Manager client service at ip-10-113-15-42.evoforge.org:9999
2023-01-10T16:56:44,890 [manager.Manager] INFO : trying to get manager lock
2023-01-10T16:56:44,900 [manager.EventCoordinator] INFO : State changed from INITIAL to HAVE_LOCK




On 2023/01/10 16:22:11 "Samudrala, Ranganath [USA] via user" wrote:
> Yes, I am trying to set Accumulo v2.1.0 with Hadoop v3.3.4.
> ________________________________
> From: Samudrala, Ranganath [USA] <Sa...@bah.com>
> Sent: Tuesday, January 10, 2023 11:21 AM
> To: user@accumulo.apache.org <us...@accumulo.apache.org>
> Subject: Re: [External] Re: accumulo init error in K8S
>
> I am starting these services manually, one at a time. For example, after 'accumulo init' completed, I ran 'accumulo master' and I get this error:
>
> bash-5.1$ accumulo master
> 2023-01-10T15:53:30,143 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
> Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
> 2023-01-10T15:53:30,207 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Create 2nd tier ClassLoader using URLs: []
> Create 2nd tier ClassLoader using URLs: []
> 2023-01-10T15:53:30,372 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
> Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
> 2023-01-10T15:53:30,379 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
> Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
> 2023-01-10T15:53:30,560 [master] [org.apache.accumulo.core.conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
> Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
> 2023-01-10T15:53:30,736 [master] [org.apache.hadoop.util.Shell] DEBUG: setsid exited with exit code 0
> setsid exited with exit code 0
> 2023-01-10T15:53:30,780 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
> field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
> 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
> field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
> 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
> field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
> 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
> field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
> 2023-01-10T15:53:30,785 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
> field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
> 2023-01-10T15:53:30,789 [master] [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] DEBUG: UgiMetrics, User and group related metrics
> UgiMetrics, User and group related metrics
> 2023-01-10T15:53:30,808 [master] [org.apache.hadoop.security.SecurityUtil] DEBUG: Setting hadoop.security.token.service.use_ip to true
> Setting hadoop.security.token.service.use_ip to true
> 2023-01-10T15:53:30,827 [master] [org.apache.hadoop.security.Groups] DEBUG:  Creating new Groups object
>  Creating new Groups object
> 2023-01-10T15:53:30,829 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Trying to load the custom-built native-hadoop library...
> Trying to load the custom-built native-hadoop library...
> 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Loaded the native-hadoop library
> Loaded the native-hadoop library
> 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMapping] DEBUG: Using JniBasedUnixGroupsMapping for Group resolution
> Using JniBasedUnixGroupsMapping for Group resolution
> 2023-01-10T15:53:30,831 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> 2023-01-10T15:53:30,854 [master] [org.apache.hadoop.security.Groups] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> 2023-01-10T15:53:30,869 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Hadoop login
> Hadoop login
> 2023-01-10T15:53:30,870 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: hadoop login commit
> hadoop login commit
> 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Using user: "accumulo" with name: accumulo
> Using user: "accumulo" with name: accumulo
> 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: User entry: "accumulo"
> User entry: "accumulo"
> 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: UGI loginUser: accumulo (auth:SIMPLE)
> UGI loginUser: accumulo (auth:SIMPLE)
> 2023-01-10T15:53:30,872 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> 2023-01-10T15:53:30,873 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
> Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
> 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Loading filesystems
> Loading filesystems
> 2023-01-10T15:53:30,887 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,892 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,894 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,896 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,897 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,905 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,912 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,913 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
> s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
> 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking for FS supporting hdfs
> Looking for FS supporting hdfs
> 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: looking for configuration option fs.hdfs.impl
> looking for configuration option fs.hdfs.impl
> 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking in service filesystems for implementation class
> Looking in service filesystems for implementation class
> 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
> FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.use.legacy.blockreader.local = false
> dfs.client.use.legacy.blockreader.local = false
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.read.shortcircuit = false
> dfs.client.read.shortcircuit = false
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.domain.socket.data.traffic = false
> dfs.client.domain.socket.data.traffic = false
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.domain.socket.path =
> dfs.domain.socket.path =
> 2023-01-10T15:53:30,980 [master] [org.apache.hadoop.hdfs.DFSClient] DEBUG: Sets dfs.client.block.write.replace-datanode-on-failure.min-rep
> lication to 0
> Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
> 2023-01-10T15:53:30,990 [master] [org.apache.hadoop.io.retry.RetryUtils] DEBUG: multipleLinearRandomRetry = null
> multipleLinearRandomRetry = null
> 2023-01-10T15:53:31,011 [master] [org.apache.hadoop.ipc.Server] DEBUG: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apach
> e.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
> rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apac
> he.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
> ...
> ... LONG PAUSE HERE - ALMOST 10 minutes
> ...
> 2023-01-10T16:03:52,316 [master] [org.apache.hadoop.ipc.Client] DEBUG: getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e74
> 2
> getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,679 [client DomainSocketWatcher] [org.apache.hadoop.net.unix.DomainSocketWatcher] DEBUG: org.apache.hadoop.net.unix.Do
> mainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
> org.apache.hadoop.net.unix.DomainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
> 2023-01-10T16:03:52,686 [master] [org.apache.hadoop.util.PerformanceAdvisory] DEBUG: Both short-circuit local reads and UNIX domain socket
>  are disabled.
> Both short-circuit local reads and UNIX domain socket are disabled.
> 2023-01-10T16:03:52,694 [master] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil] DEBUG: DataTransferProtocol not
> using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
> DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
> 2023-01-10T16:03:52,697 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-nameno
> des:8020/accumulo/data0/accumulo: duration 10:21.822s
> Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 10:21.822s
> 2023-01-10T16:03:52,718 [master] [org.apache.accumulo.core.conf.ConfigurationTypeHelper] DEBUG: Loaded class : org.apache.accumulo.core.sp
> i.fs.PreferredVolumeChooser
> Loaded class : org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> 2023-01-10T16:03:52,761 [master] [org.apache.hadoop.ipc.Client] DEBUG: The ping interval is 60000 ms.
> The ping interval is 60000 ms.
> 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.4
> 2.15.98:8020
> Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenode
> s/10.42.15.98:8020
> Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> 2023-01-10T16:03:52,787 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accum
> ulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.
> 98:8020 from accumulo: starting, having connections 1
> IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having con
> nections 1
> 2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accum
> ulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getLi
> sting
> IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache
> .hadoop.hdfs.protocol.ClientProtocol.getListing
> 2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accum
> ulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.
> 98:8020 from accumulo got value #0
> IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
> 2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
> Call: getListing took 72ms
> 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs
> -namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs
> -namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> Thread 'master' died.
> java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8
> 020/accumulo/data0/accumulo/instance_id
>         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218)
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102)
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2023-01-10T16:03:52,808 [master] [org.apache.accumulo.start.Main] ERROR: Thread 'master' died.
> java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8
> 020/accumulo/data0/accumulo/instance_id
>         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.
> 0]
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
>         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> Thread 'master' died.
> java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8
> 020/accumulo/data0/accumulo/instance_id
>         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.
> 0]
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
>         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> 2023-01-10T16:03:52,812 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
> 2023-01-10T16:03:52,814 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,815 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
> 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
> 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
> 2023-01-10T16:03:52,820 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.010 seconds; Timeouts: 0
> 2023-01-10T16:03:52,843 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.
>
> ________________________________
> From: Ed Coleman <ed...@apache.org>
> Sent: Tuesday, January 10, 2023 11:17 AM
> To: user@accumulo.apache.org <us...@accumulo.apache.org>
> Subject: Re: [External] Re: accumulo init error in K8S
>
> Running init does not start the Accumulo services.  Are the manager and tserver processes running?
>
> I may have missed it, but what version are you trying to use?  2.1?
>
> A quick look at the documentation at https://accumulo.apache.org/docs/2.x/administration/in-depth-install#migrating-accumulo-from-non-ha-namenode-to-ha-namenode suggests that add-volumes may not be required if your initial configuration is correct.
>
> At this point, logs may help more than stack traces.
>
> Ed C
>
> On 2023/01/10 16:01:49 "Samudrala, Ranganath [USA] via user" wrote:
> > Yes, I ran it just now. I had debug enabled, so the prompt for instance name was hidden. I had to enter a few CRs to see the prompt. Once the prompts for instance name and password were answered, I can see entries for the accumulo config in the zookeeper.
> >
> > Should I run 'accumulo init --add-volumes' now?
> >
> > If I run 'accumulo master', it seems to be hung up in this thread:
> >
> > "master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s tid=0x000056488630b800 nid=0x90 runnable  [0x00007f5d63753000]
> >    java.lang.Thread.State: RUNNABLE
> >         at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
> >         at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
> >         - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
> >         at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
> >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
> >         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> >         at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
> >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
> >         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
> >         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
> >         at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
> >         at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
> >         - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
> >         at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
> >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
> >         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
> >         at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
> >         at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
> >         at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
> >         at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
> >         at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
> >         at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
> >         at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
> >         at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
> >         - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
> >         at org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
> >         at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
> >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
> >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
> >         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
> >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
> >         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
> >         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
> >         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
> >         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
> >         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
> >         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
> >         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
> >         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
> >         at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
> >         at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
> >         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
> >         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
> >         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
> >         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
> >         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
> >         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> >         at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown Source)
> >         at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
> >
> >
> >
> > I will wait and see when there is more log output.
> >
> > Thanks
> > Ranga
> >
> > ________________________________
> > From: Ed Coleman <ed...@apache.org>
> > Sent: Tuesday, January 10, 2023 10:16 AM
> > To: user@accumulo.apache.org <us...@accumulo.apache.org>
> > Subject: [External] Re: accumulo init error in K8S
> >
> > Have you tried running accumulo init without the --add-volumes?  From your attached log it looks like it cannot find a valid instance id:
> >
> > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> > unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> >
> >

Re: [External] Re: accumulo init error in K8S

Posted by Ed Coleman <ed...@apache.org>.
Can you use 'manager' instead of 'master'? It has been renamed to manager, but maybe we missed some old references.
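
For example, with 2.1 the current keyword is:

> accumulo manager

The old 'accumulo master' form still routes to the same code (your stack traces go through MasterExecutable), but manager is the name going forward.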

After you run accumulo init, what is in hadoop?

> hadoop fs -ls -R /accumulo
drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/instance_id
-rw-r--r--   3 x x         0 2023-01-10 16:49 /accumulo/instance_id/bdcdd3d8-7623-4882-aae7-357a9db2efd4
drwxr-xr-x   - x x         0 2023-01-10 16:49 /accumulo/tables
...
drwx------   - x x         0 2023-01-10 16:49 /accumulo/version
drwx------   - x x         0 2023-01-10 16:49 /accumulo/version/10

Running

> accumulo tserver

2023-01-10T16:53:26,858 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Version 2.1.1-SNAPSHOT
2023-01-10T16:53:27,766 [tserver.TabletServer] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:53:27,816 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
2023-01-10T16:53:27,931 [server.ServerContext] INFO : tserver starting
2023-01-10T16:53:27,931 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:53:27,933 [server.ServerContext] INFO : Data Version 10

When starting a manager / master - are you seeing:

2023-01-10T16:57:26,125 [balancer.TableLoadBalancer] INFO : Loaded class org.apache.accumulo.core.spi.balancer.SimpleLoadBalancer for table +r
2023-01-10T16:57:26,126 [balancer.SimpleLoadBalancer] WARN : Not balancing because we don't have any tservers.

tservers should be started first, before the other management processes.
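
Assuming one service per shell (or pod) and that init has already completed, a minimal manual start order would be something like:

> accumulo tserver    # one or more tablet servers first
> accumulo manager    # then the manager
> accumulo gc         # then the remaining services (gc, monitor, ...)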

The initial manager start-up should look like:

> accumulo manager
2023-01-10T16:56:43,649 [conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /home/etcolem/workspace/fluo-uno/install/accumulo-2.1.1-SNAPSHOT/conf/accumulo.properties
2023-01-10T16:56:44,581 [manager.Manager] INFO : Version 2.1.1-SNAPSHOT
2023-01-10T16:56:44,582 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:56:44,627 [metrics.MetricsUtil] INFO : Metric producer PropStoreMetrics initialize
2023-01-10T16:56:44,742 [server.ServerContext] INFO : manager starting
2023-01-10T16:56:44,742 [server.ServerContext] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:56:44,745 [server.ServerContext] INFO : Data Version 10
2023-01-10T16:56:44,745 [server.ServerContext] INFO : Attempting to talk to zookeeper
2023-01-10T16:56:44,746 [server.ServerContext] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
2023-01-10T16:56:44,761 [server.ServerContext] INFO : Connected to HDFS

And then key things to look for after the config dump:

2023-01-10T16:56:44,802 [manager.Manager] INFO : Instance bdcdd3d8-7623-4882-aae7-357a9db2efd4
2023-01-10T16:56:44,825 [manager.Manager] INFO : SASL is not enabled, delegation tokens will not be available
2023-01-10T16:56:44,872 [metrics.MetricsUtil] INFO : Metric producer ThriftMetrics initialize
2023-01-10T16:56:44,888 [manager.Manager] INFO : Started Manager client service at ip-10-113-15-42.evoforge.org:9999
2023-01-10T16:56:44,890 [manager.Manager] INFO : trying to get manager lock
2023-01-10T16:56:44,900 [manager.EventCoordinator] INFO : State changed from INITIAL to HAVE_LOCK




On 2023/01/10 16:22:11 "Samudrala, Ranganath [USA] via user" wrote:
> > Yes, I am trying to set up Accumulo v2.1.0 with Hadoop v3.3.4.
> ________________________________
> From: Samudrala, Ranganath [USA] <Sa...@bah.com>
> Sent: Tuesday, January 10, 2023 11:21 AM
> To: user@accumulo.apache.org <us...@accumulo.apache.org>
> Subject: Re: [External] Re: accumulo init error in K8S
> 
> I am starting these services manually, one at a time. For example, after 'accumulo init' completed, I ran 'accumulo master' and I get this error:
> 
> bash-5.1$ accumulo master
> 2023-01-10T15:53:30,143 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
> 2023-01-10T15:53:30,207 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Create 2nd tier ClassLoader using URLs: []
> 2023-01-10T15:53:30,372 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
> 2023-01-10T15:53:30,379 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
> 2023-01-10T15:53:30,560 [master] [org.apache.accumulo.core.conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
> 2023-01-10T15:53:30,736 [master] [org.apache.hadoop.util.Shell] DEBUG: setsid exited with exit code 0
> 2023-01-10T15:53:30,780 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
> 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
> 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
> 2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
> 2023-01-10T15:53:30,785 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
> 2023-01-10T15:53:30,789 [master] [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] DEBUG: UgiMetrics, User and group related metrics
> 2023-01-10T15:53:30,808 [master] [org.apache.hadoop.security.SecurityUtil] DEBUG: Setting hadoop.security.token.service.use_ip to true
> 2023-01-10T15:53:30,827 [master] [org.apache.hadoop.security.Groups] DEBUG:  Creating new Groups object
> 2023-01-10T15:53:30,829 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Trying to load the custom-built native-hadoop library...
> 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Loaded the native-hadoop library
> 2023-01-10T15:53:30,830 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMapping] DEBUG: Using JniBasedUnixGroupsMapping for Group resolution
> 2023-01-10T15:53:30,831 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> 2023-01-10T15:53:30,854 [master] [org.apache.hadoop.security.Groups] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
> 2023-01-10T15:53:30,869 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Hadoop login
> 2023-01-10T15:53:30,870 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: hadoop login commit
> 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Using user: "accumulo" with name: accumulo
> 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: User entry: "accumulo"
> 2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: UGI loginUser: accumulo (auth:SIMPLE)
> 2023-01-10T15:53:30,872 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> 2023-01-10T15:53:30,873 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
> 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> 2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Loading filesystems
> 2023-01-10T15:53:30,887 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,892 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,894 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,896 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,897 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,905 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,912 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,913 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
> 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
> 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking for FS supporting hdfs
> 2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: looking for configuration option fs.hdfs.impl
> 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking in service filesystems for implementation class
> 2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.use.legacy.blockreader.local = false
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.read.shortcircuit = false
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.domain.socket.data.traffic = false
> 2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.domain.socket.path =
> 2023-01-10T15:53:30,980 [master] [org.apache.hadoop.hdfs.DFSClient] DEBUG: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
> 2023-01-10T15:53:30,990 [master] [org.apache.hadoop.io.retry.RetryUtils] DEBUG: multipleLinearRandomRetry = null
> 2023-01-10T15:53:31,011 [master] [org.apache.hadoop.ipc.Server] DEBUG: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
> ...
> ... LONG PAUSE HERE - ALMOST 10 minutes
> ...
> 2023-01-10T16:03:52,316 [master] [org.apache.hadoop.ipc.Client] DEBUG: getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,679 [client DomainSocketWatcher] [org.apache.hadoop.net.unix.DomainSocketWatcher] DEBUG: org.apache.hadoop.net.unix.DomainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
> 2023-01-10T16:03:52,686 [master] [org.apache.hadoop.util.PerformanceAdvisory] DEBUG: Both short-circuit local reads and UNIX domain socket are disabled.
> 2023-01-10T16:03:52,694 [master] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil] DEBUG: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
> 2023-01-10T16:03:52,697 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 10:21.822s
> 2023-01-10T16:03:52,718 [master] [org.apache.accumulo.core.conf.ConfigurationTypeHelper] DEBUG: Loaded class : org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> 2023-01-10T16:03:52,761 [master] [org.apache.hadoop.ipc.Client] DEBUG: The ping interval is 60000 ms.
> 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> 2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
> 2023-01-10T16:03:52,787 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having connections 1
> 2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
> 2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
> 2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
> 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> 2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> Thread 'master' died.
> java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
>         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218)
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102)
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
>         at java.base/java.lang.Thread.run(Thread.java:829)
> 2023-01-10T16:03:52,808 [master] [org.apache.accumulo.start.Main] ERROR: Thread 'master' died.
> java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
>         at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
>         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> 2023-01-10T16:03:52,812 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
> 2023-01-10T16:03:52,814 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,815 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
> 2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
> 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
> 2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
> 2023-01-10T16:03:52,820 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.010 seconds; Timeouts: 0
> 2023-01-10T16:03:52,843 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.

Re: [External] Re: accumulo init error in K8S

Posted by "Samudrala, Ranganath [USA] via user" <us...@accumulo.apache.org>.
In the log for master, I see the below. What is the meaning of "got value #0"? Did the getListing request succeed? If it did, did the logic receive only a "#0"? What is the equivalent HDFS command? Is it 'hdfs dfs -ls /'? Why do I not see "instance_id" in HDFS? What should I do for "instance_id" to be created by Accumulo in HDFS?

2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
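
For reference, a direct way to check the same path outside of Accumulo, assuming the instance.volumes value from earlier in this thread, is:

 hdfs dfs -ls -R hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo

If init completed against this volume, this should show an instance_id directory containing a single file named with the instance UUID, like the listing Ed posted. Note that the "#0" in the log is the IPC call id (request #0, response to call #0), not the contents of the listing.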

________________________________
From: Samudrala, Ranganath [USA] via user <us...@accumulo.apache.org>
Sent: Tuesday, January 10, 2023 11:22 AM
To: user@accumulo.apache.org <us...@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

Yes, I am trying to set Accumulo v2.1.0 with Hadoop v3.3.4.
________________________________
From: Samudrala, Ranganath [USA] <Sa...@bah.com>
Sent: Tuesday, January 10, 2023 11:21 AM
To: user@accumulo.apache.org <us...@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

I am starting these services manually, one at a time. For example, after 'accumulo init' completed, I ran 'accumulo master' I get this error:

bash-5.1$ accumulo master
2023-01-10T15:53:30,143 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
2023-01-10T15:53:30,207 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Create 2nd tier ClassLoader using URLs: []
2023-01-10T15:53:30,372 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
2023-01-10T15:53:30,379 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
2023-01-10T15:53:30,560 [master] [org.apache.accumulo.core.conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
2023-01-10T15:53:30,736 [master] [org.apache.hadoop.util.Shell] DEBUG: setsid exited with exit code 0
2023-01-10T15:53:30,780 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
2023-01-10T15:53:30,785 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
2023-01-10T15:53:30,789 [master] [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] DEBUG: UgiMetrics, User and group related metrics
2023-01-10T15:53:30,808 [master] [org.apache.hadoop.security.SecurityUtil] DEBUG: Setting hadoop.security.token.service.use_ip to true
2023-01-10T15:53:30,827 [master] [org.apache.hadoop.security.Groups] DEBUG:  Creating new Groups object
2023-01-10T15:53:30,829 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Trying to load the custom-built native-hadoop library...
2023-01-10T15:53:30,830 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Loaded the native-hadoop library
2023-01-10T15:53:30,830 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMapping] DEBUG: Using JniBasedUnixGroupsMapping for Group resolution
2023-01-10T15:53:30,831 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
2023-01-10T15:53:30,854 [master] [org.apache.hadoop.security.Groups] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
2023-01-10T15:53:30,869 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Hadoop login
2023-01-10T15:53:30,870 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: hadoop login commit
2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Using user: "accumulo" with name: accumulo
2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: User entry: "accumulo"
2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: UGI loginUser: accumulo (auth:SIMPLE)
2023-01-10T15:53:30,872 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
2023-01-10T15:53:30,873 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Loading filesystems
2023-01-10T15:53:30,887 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,892 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,894 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,896 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,897 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,905 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,912 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,913 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking for FS supporting hdfs
2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: looking for configuration option fs.hdfs.impl
2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking in service filesystems for implementation class
2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.use.legacy.blockreader.local = false
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.read.shortcircuit = false
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.domain.socket.data.traffic = false
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.domain.socket.path =
2023-01-10T15:53:30,980 [master] [org.apache.hadoop.hdfs.DFSClient] DEBUG: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
2023-01-10T15:53:30,990 [master] [org.apache.hadoop.io.retry.RetryUtils] DEBUG: multipleLinearRandomRetry = null
2023-01-10T15:53:31,011 [master] [org.apache.hadoop.ipc.Server] DEBUG: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
...
... LONG PAUSE HERE - ALMOST 10 minutes
...
2023-01-10T16:03:52,316 [master] [org.apache.hadoop.ipc.Client] DEBUG: getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,679 [client DomainSocketWatcher] [org.apache.hadoop.net.unix.DomainSocketWatcher] DEBUG: org.apache.hadoop.net.unix.DomainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
2023-01-10T16:03:52,686 [master] [org.apache.hadoop.util.PerformanceAdvisory] DEBUG: Both short-circuit local reads and UNIX domain socket are disabled.
2023-01-10T16:03:52,694 [master] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil] DEBUG: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
2023-01-10T16:03:52,697 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 10:21.822s
2023-01-10T16:03:52,718 [master] [org.apache.accumulo.core.conf.ConfigurationTypeHelper] DEBUG: Loaded class : org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
2023-01-10T16:03:52,761 [master] [org.apache.hadoop.ipc.Client] DEBUG: The ping interval is 60000 ms.
2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
2023-01-10T16:03:52,787 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having connections 1
2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
2023-01-10T16:03:52,808 [master] [org.apache.accumulo.start.Main] ERROR: Thread 'master' died.
java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
        at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
        at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
        at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
        at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
        at java.lang.Thread.run(Thread.java:829) ~[?:?]
2023-01-10T16:03:52,812 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
2023-01-10T16:03:52,814 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,815 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
2023-01-10T16:03:52,820 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.010 seconds; Timeouts: 0
2023-01-10T16:03:52,843 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.
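
The failure is the same one that init reported: the manager cannot find instance_id under the volume configured in instance.volumes, which means init never wrote to that HDFS path (or wrote to a different one). As a minimal sanity check from inside the pod -- assuming the hdfs client is on the PATH and reads the same core-site.xml the Accumulo processes use -- you can list the volume directly:

  # Does the configured volume exist, and did init create instance_id under it?
  hdfs dfs -ls hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
  hdfs dfs -ls hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id

  # If nothing is there, run a plain init first; the earlier "None of the
  # configured paths are initialized" error suggests --add-volumes only
  # extends an instance that already has at least one initialized volume.
  accumulo init

If instance_id is present but the manager still cannot read it, compare instance.volumes in accumulo.properties character by character against the path init actually populated.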

________________________________
From: Ed Coleman <ed...@apache.org>
Sent: Tuesday, January 10, 2023 11:17 AM
To: user@accumulo.apache.org <us...@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

Running init does not start the Accumulo services.  Are the manager and the tserver processes running?

I may have missed it, but what version are you trying to use?  2.1?

From a quick look at the documentation at https://urldefense.com/v3/__https://accumulo.apache.org/docs/2.x/administration/in-depth-install*migrating-accumulo-from-non-ha-namenode-to-ha-namenode__;Iw!!May37g!P1_tpmliRYyFnNtR6-s3nURqK4U5zUTzKAvHLKk5QroP1DMNZWY_K0IkBcM8HbzEqjfMAyEJNps2l_nqThZf3TWV$  I would assume that add-volumes may not be required if your initial configuration is correct.

At this point, logs may help more than stack traces.

Ed C

On 2023/01/10 16:01:49 "Samudrala, Ranganath [USA] via user" wrote:
> Yes, I ran it just now. I had debug enabled, so the prompt for the instance name was hidden, and I had to enter a few CRs to see it. Once the prompts for the instance name and password were answered, I could see entries for the Accumulo config in ZooKeeper.
>
> Should I run 'accumulo init --add-volumes' now?
>
> If I run 'accumulo master', it seems to be hung up in this thread:
>
> "master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s tid=0x000056488630b800 nid=0x90 runnable  [0x00007f5d63753000]
>    java.lang.Thread.State: RUNNABLE
>         at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
>         at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
>         - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
>         at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
>         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
>         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
>         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
>         at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
>         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
>         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
>         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
>         at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
>         at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
>         - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
>         at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
>         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
>         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
>         at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
>         at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
>         at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
>         at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
>         at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
>         at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
>         at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
>         at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
>         - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
>         at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
>         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
>         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
>         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
>         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
>         at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
>         at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
>         at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown Source)
>         at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
>
>
>
> I will wait and see if there is more log output.
>
> Thanks
> Ranga
>
> ________________________________
> From: Ed Coleman <ed...@apache.org>
> Sent: Tuesday, January 10, 2023 10:16 AM
> To: user@accumulo.apache.org <us...@accumulo.apache.org>
> Subject: [External] Re: accumulo init error in K8S
>
> Have you tried running accumulo init without the --add-volumes?  From your attached log, it looks like it cannot find a valid instance id:
>
> 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
>
>
> On 2023/01/10 14:21:29 "Samudrala, Ranganath [USA] via user" wrote:
> > Hello,
> > I am trying to configure Accumulo in K8S using Helm chart. Hadoop and Zookeeper are up and running in the same K8S namespace.
> > accumulo.properties is as below:
> >
> >   instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> >   general.custom.volume.preferred.default=accumulo
> >   instance.zookeeper.host=accumulo-zookeeper
> >   # instance.secret=DEFAULT
> >   general.volume.chooser=org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> >   general.custom.volume.preferred.logger=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> >   trace.user=tracer
> >   trace.password=tracer
> >   instance.secret=accumulo
> >   tserver.cache.data.size=15M
> >   tserver.cache.index.size=40M
> >   tserver.memory.maps.max=128M
> >   tserver.memory.maps.native.enabled=true
> >   tserver.sort.buffer.size=50M
> >   tserver.total.mutation.queue.max=16M
> >   tserver.walog.max.size=128M
> >
> > accumulo-client.properties is as below:
> >
> >  auth.type=password
> >  auth.principal=root
> >  auth.token=root
> >  instance.name=accumulo
> >  # For Accumulo >=2.0.0
> >  instance.zookeepers=accumulo-zookeeper
> >  instance.zookeeper.host=accumulo-zookeeper
> >
> > When I run 'accumulo init --add-volumes', I see an error as below and what is wrong with the setup?
> >
> > java.lang.RuntimeException: None of the configured paths are initialized.
> >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119)
> >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449)
> >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543)
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> >         at java.base/java.lang.Thread.run(Thread.java:829)
> > 2023-01-09T21:22:13,530 [init] [org.apache.accumulo.start.Main] ERROR: Thread 'init' died.
> > java.lang.RuntimeException: None of the configured paths are initialized.
> >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> > Thread 'init' died.
> >
> > I have attached complete log:
> >
> >
>
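
The roughly ten-minute pause in the log above lines up with the quoted thread dump: the JVM is blocked in SunPKCS11/NSS initialization while seeding the default SecureRandom that UUID.randomUUID() uses to build the Hadoop IPC client ID. A commonly cited workaround in containers -- an assumption here, not something confirmed in this thread -- is to point the JVM at the non-blocking entropy source, for example via accumulo-env.sh (assuming the image uses the stock 2.1 accumulo-env.sh, which assembles a JAVA_OPTS array):

  # Hypothetical accumulo-env.sh tweak: read /dev/urandom so SecureRandom
  # seeding cannot stall PKCS11/NSS provider initialization at startup.
  JAVA_OPTS=("${JAVA_OPTS[@]}" '-Djava.security.egd=file:/dev/./urandom')

The heavier-handed alternative is to disable or reorder the SunPKCS11 provider in the JDK's java.security file.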

Re: [External] Re: accumulo init error in K8S

Posted by "Samudrala, Ranganath [USA] via user" <us...@accumulo.apache.org>.
Yes, I am trying to set up Accumulo v2.1.0 with Hadoop v3.3.4.
________________________________
From: Samudrala, Ranganath [USA] <Sa...@bah.com>
Sent: Tuesday, January 10, 2023 11:21 AM
To: user@accumulo.apache.org <us...@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

[quoted 'accumulo master' log trimmed -- it repeats, verbatim, the log shown in the previous message above]

________________________________
From: Ed Coleman <ed...@apache.org>
Sent: Tuesday, January 10, 2023 11:17 AM
To: user@accumulo.apache.org <us...@accumulo.apache.org>
Subject: Re: [External] Re: accumulo init error in K8S

Running init does not start the Accumulo services.  Are the manager and the tserver processes running?

I may have missed it, but what version are you trying to use?  2.1?

From a quick look at the documentation at https://urldefense.com/v3/__https://accumulo.apache.org/docs/2.x/administration/in-depth-install*migrating-accumulo-from-non-ha-namenode-to-ha-namenode__;Iw!!May37g!P1_tpmliRYyFnNtR6-s3nURqK4U5zUTzKAvHLKk5QroP1DMNZWY_K0IkBcM8HbzEqjfMAyEJNps2l_nqThZf3TWV$  I would assume that add-volumes may not be required if your initial configuration is correct.

At this point, logs may help more than stack traces.

Ed C

On 2023/01/10 16:01:49 "Samudrala, Ranganath [USA] via user" wrote:
> Yes, I ran it just now. I had debug enabled, so the prompt for instance name was hidden. I had to enter a few CRs to see the prompt. Once the prompts for instance name and password were answered, I can see entries for the accumulo config in the zookeeper.
>
> Should I run 'accumulo init --add-volumes' now?
>
> If I run 'accumulo master' and it seems to be hung up the thread:
>
> "master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s tid=0x000056488630b800 nid=0x90 runnable  [0x00007f5d63753000]
>    java.lang.Thread.State: RUNNABLE
>         at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
>         at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
>         - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
>         at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
>         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
>         at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
>         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
>         at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
>         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
>         at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
>         at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
>         at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
>         at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
>         - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
>         at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
>         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
>         at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
>         at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
>         at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
>         at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
>         at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
>         at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
>         at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
>         at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
>         at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
>         - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
>         at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
>         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
>         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
>         at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
>         at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
>         at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
>         at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
>         at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
>         at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
>         at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
>         at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
>         at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
>         at org.apache.accumulo.manager.Manager.main(Manager.java:408)
>         at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
>         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
>         at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown Source)
>         at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
>
>
>
> I will wait and see when there is more log output.
>
> Thanks
> Ranga
>
> ________________________________
> From: Ed Coleman <ed...@apache.org>
> Sent: Tuesday, January 10, 2023 10:16 AM
> To: user@accumulo.apache.org <us...@accumulo.apache.org>
> Subject: [External] Re: accumulo init error in K8S
>
> Have you tried running accumulo init without the --add-volumes?  From your attached log it looks like it cannot find a valid instance id
>
> 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> 2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
> unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
>
>
> On 2023/01/10 14:21:29 "Samudrala, Ranganath [USA] via user" wrote:
> > Hello,
> > I am trying to configure Accumulo in K8S using Helm chart. Hadoop and Zookeeper are up and running in the same K8S namespace.
> > accumulo.properties is as below:
> >
> >   instance.volumes=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> >   general.custom.volume.preferred.default=accumulo
> >   instance.zookeeper.host=accumulo-zookeeper
> >   # instance.secret=DEFAULT
> >   general.volume.chooser=org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
> >   general.custom.volume.preferred.logger=hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
> >   trace.user=tracer
> >   trace.password=tracer
> >   instance.secret=accumulo
> >   tserver.cache.data.size=15M
> >   tserver.cache.index.size=40M
> >   tserver.memory.maps.max=128M
> >   tserver.memory.maps.native.enabled=true
> >   tserver.sort.buffer.size=50M
> >   tserver.total.mutation.queue.max=16M
> >   tserver.walog.max.size=128M
> >
> > accumulo-client.properties is as below:
> >
> >  auth.type=password
> >  auth.principal=root
> >  auth.token=root
> >  instance.name=accumulo
> >  # For Accumulo >=2.0.0
> >  instance.zookeepers=accumulo-zookeeper
> >  instance.zookeeper.host=accumulo-zookeeper
> >
> > When I run 'accumulo init --add-volumes', I see an error as below and what is wrong with the setup?
> >
> > java.lang.RuntimeException: None of the configured paths are initialized.
> >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119)
> >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449)
> >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543)
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
> >         at java.base/java.lang.Thread.run(Thread.java:829)
> > 2023-01-09T21:22:13,530 [init] [org.apache.accumulo.start.Main] ERROR: Thread 'init' died.
> > java.lang.RuntimeException: None of the configured paths are initialized.
> >         at org.apache.accumulo.server.ServerDirs.checkBaseUris(ServerDirs.java:119) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.init.Initialize.addVolumes(Initialize.java:449) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:543) ~[accumulo-server-base-2.1.0.jar:2.1.0]
> >         at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
> >         at java.lang.Thread.run(Thread.java:829) ~[?:?]
> > Thread 'init' died.
> >
> > I have attached complete log:
> >
> >
>

Re: [External] Re: accumulo init error in K8S

Posted by "Samudrala, Ranganath [USA] via user" <us...@accumulo.apache.org>.
I am starting these services manually, one at a time. For example, after 'accumulo init' completed, I ran 'accumulo master' and got this error:

bash-5.1$ accumulo master
2023-01-10T15:53:30,143 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Using Accumulo configuration at /opt/accumulo/conf/accumulo.properties
2023-01-10T15:53:30,207 [main] [org.apache.accumulo.start.classloader.AccumuloClassLoader] DEBUG: Create 2nd tier ClassLoader using URLs: []
2023-01-10T15:53:30,372 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for Scheduled Future Checker with 1 core threads and 1 max threads 180000 MILLISECONDS timeout
2023-01-10T15:53:30,379 [main] [org.apache.accumulo.core.util.threads.ThreadPools] DEBUG: Creating ThreadPoolExecutor for zoo_change_update with 2 core threads and 2 max threads 180000 MILLISECONDS timeout
2023-01-10T15:53:30,560 [master] [org.apache.accumulo.core.conf.SiteConfiguration] INFO : Found Accumulo configuration on classpath at /opt/accumulo/conf/accumulo.properties
2023-01-10T15:53:30,736 [master] [org.apache.hadoop.util.Shell] DEBUG: setsid exited with exit code 0
2023-01-10T15:53:30,780 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"GetGroups"})
2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of failed kerberos logins and latency (milliseconds)"})
2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Rate of successful kerberos logins and latency (milliseconds)"})
2023-01-10T15:53:30,784 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeInt org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailures with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since last successful login"})
2023-01-10T15:53:30,785 [master] [org.apache.hadoop.metrics2.lib.MutableMetricsFactory] DEBUG: field private org.apache.hadoop.metrics2.lib.MutableGaugeLong org.apache.hadoop.security.UserGroupInformation$UgiMetrics.renewalFailuresTotal with annotation @org.apache.hadoop.metrics2.annotation.Metric(always=false, sampleName="Ops", valueName="Time", about="", interval=10, type=DEFAULT, value={"Renewal failures since startup"})
2023-01-10T15:53:30,789 [master] [org.apache.hadoop.metrics2.impl.MetricsSystemImpl] DEBUG: UgiMetrics, User and group related metrics
2023-01-10T15:53:30,808 [master] [org.apache.hadoop.security.SecurityUtil] DEBUG: Setting hadoop.security.token.service.use_ip to true
2023-01-10T15:53:30,827 [master] [org.apache.hadoop.security.Groups] DEBUG:  Creating new Groups object
2023-01-10T15:53:30,829 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Trying to load the custom-built native-hadoop library...
2023-01-10T15:53:30,830 [master] [org.apache.hadoop.util.NativeCodeLoader] DEBUG: Loaded the native-hadoop library
2023-01-10T15:53:30,830 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMapping] DEBUG: Using JniBasedUnixGroupsMapping for Group resolution
2023-01-10T15:53:30,831 [master] [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
2023-01-10T15:53:30,854 [master] [org.apache.hadoop.security.Groups] DEBUG: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
2023-01-10T15:53:30,869 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Hadoop login
2023-01-10T15:53:30,870 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: hadoop login commit
2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: Using user: "accumulo" with name: accumulo
2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: User entry: "accumulo"
2023-01-10T15:53:30,871 [master] [org.apache.hadoop.security.UserGroupInformation] DEBUG: UGI loginUser: accumulo (auth:SIMPLE)
2023-01-10T15:53:30,872 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
2023-01-10T15:53:30,873 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Acquiring creator semaphore for hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 0:00.000s
2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Starting: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo
2023-01-10T15:53:30,875 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Loading filesystems
2023-01-10T15:53:30,887 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: file:// = class org.apache.hadoop.fs.LocalFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,892 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: viewfs:// = class org.apache.hadoop.fs.viewfs.ViewFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,894 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: har:// = class org.apache.hadoop.fs.HarFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,896 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: http:// = class org.apache.hadoop.fs.http.HttpFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,897 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: https:// = class org.apache.hadoop.fs.http.HttpsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,905 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: hdfs:// = class org.apache.hadoop.hdfs.DistributedFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,912 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: webhdfs:// = class org.apache.hadoop.hdfs.web.WebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,913 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: swebhdfs:// = class org.apache.hadoop.hdfs.web.SWebHdfsFileSystem from /opt/hadoop/share/hadoop/client/hadoop-client-api-3.3.4.jar
2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: s3n:// = class org.apache.hadoop.fs.s3native.NativeS3FileSystem from /opt/hadoop/share/hadoop/hdfs/hadoop-aws-3.3.4.jar
2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking for FS supporting hdfs
2023-01-10T15:53:30,916 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: looking for configuration option fs.hdfs.impl
2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Looking in service filesystems for implementation class
2023-01-10T15:53:30,939 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: FS for hdfs is class org.apache.hadoop.hdfs.DistributedFileSystem
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.use.legacy.blockreader.local = false
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.read.shortcircuit = false
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.client.domain.socket.data.traffic = false
2023-01-10T15:53:30,969 [master] [org.apache.hadoop.hdfs.client.impl.DfsClientConf] DEBUG: dfs.domain.socket.path =
2023-01-10T15:53:30,980 [master] [org.apache.hadoop.hdfs.DFSClient] DEBUG: Sets dfs.client.block.write.replace-datanode-on-failure.min-replication to 0
2023-01-10T15:53:30,990 [master] [org.apache.hadoop.io.retry.RetryUtils] DEBUG: multipleLinearRandomRetry = null
2023-01-10T15:53:31,011 [master] [org.apache.hadoop.ipc.Server] DEBUG: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine2$RpcProtobufRequest, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker@3ca2c798
...
... LONG PAUSE HERE - ALMOST 10 minutes
...
2023-01-10T16:03:52,316 [master] [org.apache.hadoop.ipc.Client] DEBUG: getting client out of cache: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,679 [client DomainSocketWatcher] [org.apache.hadoop.net.unix.DomainSocketWatcher] DEBUG: org.apache.hadoop.net.unix.DomainSocketWatcher$2@35d323c6: starting with interruptCheckPeriodMs = 60000
2023-01-10T16:03:52,686 [master] [org.apache.hadoop.util.PerformanceAdvisory] DEBUG: Both short-circuit local reads and UNIX domain socket are disabled.
2023-01-10T16:03:52,694 [master] [org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil] DEBUG: DataTransferProtocol not using SaslPropertiesResolver, no QOP found in configuration for dfs.data.transfer.protection
2023-01-10T16:03:52,697 [master] [org.apache.hadoop.fs.FileSystem] DEBUG: Creating FS hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo: duration 10:21.822s
2023-01-10T16:03:52,718 [master] [org.apache.accumulo.core.conf.ConfigurationTypeHelper] DEBUG: Loaded class : org.apache.accumulo.core.spi.fs.PreferredVolumeChooser
2023-01-10T16:03:52,761 [master] [org.apache.hadoop.ipc.Client] DEBUG: The ping interval is 60000 ms.
2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Connecting to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
2023-01-10T16:03:52,763 [master] [org.apache.hadoop.ipc.Client] DEBUG: Setup connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020
2023-01-10T16:03:52,787 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: starting, having connections 1
2023-01-10T16:03:52,791 [IPC Parameter Sending Thread #0] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo sending #0 org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing
2023-01-10T16:03:52,801 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo got value #0
2023-01-10T16:03:52,801 [master] [org.apache.hadoop.ipc.ProtobufRpcEngine2] DEBUG: Call: getListing took 72ms
2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
2023-01-10T16:03:52,804 [master] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
2023-01-10T16:03:52,808 [master] [org.apache.accumulo.start.Main] ERROR: Thread 'master' died.
java.lang.RuntimeException: Accumulo not initialized, there is no instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
        at org.apache.accumulo.server.fs.VolumeManager.getInstanceIDFromHdfs(VolumeManager.java:218) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:102) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47) ~[accumulo-server-base-2.1.0.jar:2.1.0]
        at org.apache.accumulo.manager.Manager.<init>(Manager.java:414) ~[accumulo-manager-2.1.0.jar:2.1.0]
        at org.apache.accumulo.manager.Manager.main(Manager.java:408) ~[accumulo-manager-2.1.0.jar:2.1.0]
        at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46) ~[accumulo-manager-2.1.0.jar:2.1.0]
        at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122) ~[accumulo-start-2.1.0.jar:2.1.0]
        at java.lang.Thread.run(Thread.java:829) ~[?:?]
2023-01-10T16:03:52,812 [shutdown-hook-0] [org.apache.hadoop.fs.FileSystem] DEBUG: FileSystem.close() by method: org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1518)); Key: (accumulo (auth:SIMPLE))@hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; URI: hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020; Object Identity Hash: 50257de5
2023-01-10T16:03:52,814 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,815 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: removing client from cache: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: stopping actual client because no more references remain: Client-5197ff3375714e029d5cdcb1ac53e742
2023-01-10T16:03:52,816 [shutdown-hook-0] [org.apache.hadoop.ipc.Client] DEBUG: Stopping client
2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: closed
2023-01-10T16:03:52,820 [IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo] [org.apache.hadoop.ipc.Client] DEBUG: IPC Client (585906429) connection to accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes/10.42.15.98:8020 from accumulo: stopped, remaining connections 0
2023-01-10T16:03:52,820 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: Completed shutdown in 0.010 seconds; Timeouts: 0
2023-01-10T16:03:52,843 [Thread-5] [org.apache.hadoop.util.ShutdownHookManager] DEBUG: ShutdownHookManager completed shutdown.
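
For completeness, this is roughly how I can check whether init actually wrote anything under the configured volume (a sketch: it assumes the hdfs CLI is on the PATH in the pod and reuses the instance.volumes path from my earlier message):

  # List the expected instance id directory; if this is missing, init never
  # initialized this volume (or wrote to a different location).
  hdfs dfs -ls hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id

  # Show what does exist under the volume root, for comparison.
  hdfs dfs -ls -R hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo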

Re: [External] Re: accumulo init error in K8S

Posted by Ed Coleman <ed...@apache.org>.
Running init does not start the Accumulo services.  Are the manager and tserver processes running?

I may have missed it, but what version are you trying to use?  2.1?

From a quick look at the documentation at https://accumulo.apache.org/docs/2.x/administration/in-depth-install#migrating-accumulo-from-non-ha-namenode-to-ha-namenode, I would assume that add-volumes is not required if your initial configuration is correct.
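
In other words, something like this order (just a sketch of what that doc page implies, assuming instance.volumes is already set to its final value):

  # Initialize a brand-new instance once; this writes the instance id and
  # directory layout into every volume listed in instance.volumes.
  accumulo init

  # Only later, after appending an additional volume to instance.volumes,
  # initialize the newly added volume(s) on the existing instance.
  accumulo init --add-volumes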

At this point, logs may help more than stack traces.

Ed C


Re: [External] Re: accumulo init error in K8S

Posted by "Samudrala, Ranganath [USA] via user" <us...@accumulo.apache.org>.
Yes, I ran it just now. I had debug enabled, so the prompt for the instance name was hidden; I had to enter a few CRs to see it. Once the prompts for instance name and password were answered, I can see entries for the Accumulo config in ZooKeeper.
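
For reference, this is roughly how I listed them (assumes the ZooKeeper service name from my configs and the default client port):

  # One-shot listing of the instance data that 'accumulo init' registered.
  zkCli.sh -server accumulo-zookeeper:2181 ls /accumulo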

Should I run 'accumulo init --add-volumes' now?

If I run 'accumulo master', it seems to be hung up in this thread:

"master" #17 prio=5 os_prio=0 cpu=572.10ms elapsed=146.84s tid=0x000056488630b800 nid=0x90 runnable  [0x00007f5d63753000]
   java.lang.Thread.State: RUNNABLE
        at sun.security.pkcs11.Secmod.nssInitialize(jdk.crypto.cryptoki@11.0.17/Native Method)
        at sun.security.pkcs11.Secmod.initialize(jdk.crypto.cryptoki@11.0.17/Secmod.java:239)
        - locked <0x00000000ffd4eb18> (a sun.security.pkcs11.Secmod)
        at sun.security.pkcs11.SunPKCS11.<init>(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:243)
        at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:143)
        at sun.security.pkcs11.SunPKCS11$1.run(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
        at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
        at sun.security.pkcs11.SunPKCS11.configure(jdk.crypto.cryptoki@11.0.17/SunPKCS11.java:140)
        at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:251)
        at sun.security.jca.ProviderConfig$3.run(java.base@11.0.17/ProviderConfig.java:242)
        at java.security.AccessController.doPrivileged(java.base@11.0.17/Native Method)
        at sun.security.jca.ProviderConfig.doLoadProvider(java.base@11.0.17/ProviderConfig.java:242)
        at sun.security.jca.ProviderConfig.getProvider(java.base@11.0.17/ProviderConfig.java:222)
        - locked <0x00000000ffff9560> (a sun.security.jca.ProviderConfig)
        at sun.security.jca.ProviderList.getProvider(java.base@11.0.17/ProviderList.java:266)
        at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:156)
        at sun.security.jca.ProviderList$3.get(java.base@11.0.17/ProviderList.java:151)
        at java.util.AbstractList$Itr.next(java.base@11.0.17/AbstractList.java:371)
        at java.security.SecureRandom.getDefaultPRNG(java.base@11.0.17/SecureRandom.java:264)
        at java.security.SecureRandom.<init>(java.base@11.0.17/SecureRandom.java:219)
        at java.util.UUID$Holder.<clinit>(java.base@11.0.17/UUID.java:101)
        at java.util.UUID.randomUUID(java.base@11.0.17/UUID.java:147)
        at org.apache.hadoop.ipc.ClientId.getClientId(ClientId.java:42)
        at org.apache.hadoop.ipc.Client.<init>(Client.java:1367)
        at org.apache.hadoop.ipc.ClientCache.getClient(ClientCache.java:59)
        - locked <0x00000000fffc3458> (a org.apache.hadoop.ipc.ClientCache)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:158)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.<init>(ProtobufRpcEngine2.java:145)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2.getProxy(ProtobufRpcEngine2.java:111)
        at org.apache.hadoop.ipc.RPC.getProtocolProxy(RPC.java:629)
        at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithAlignmentContext(NameNodeProxiesClient.java:365)
        at org.apache.hadoop.hdfs.NameNodeProxiesClient.createNonHAProxyWithClientProtocol(NameNodeProxiesClient.java:343)
        at org.apache.hadoop.hdfs.NameNodeProxiesClient.createProxyWithClientProtocol(NameNodeProxiesClient.java:135)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:374)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:308)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initDFSClient(DistributedFileSystem.java:202)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:187)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3469)
        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
        at org.apache.accumulo.core.volume.VolumeImpl.<init>(VolumeImpl.java:45)
        at org.apache.accumulo.server.fs.VolumeManagerImpl.get(VolumeManagerImpl.java:371)
        at org.apache.accumulo.server.ServerInfo.<init>(ServerInfo.java:96)
        at org.apache.accumulo.server.ServerContext.<init>(ServerContext.java:106)
        at org.apache.accumulo.server.AbstractServer.<init>(AbstractServer.java:47)
        at org.apache.accumulo.manager.Manager.<init>(Manager.java:414)
        at org.apache.accumulo.manager.Manager.main(Manager.java:408)
        at org.apache.accumulo.manager.MasterExecutable.execute(MasterExecutable.java:46)
        at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:122)
        at org.apache.accumulo.start.Main$$Lambda$145/0x00000008401a5040.run(Unknown Source)
        at java.lang.Thread.run(java.base@11.0.17/Thread.java:829)
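
The stack shows the JVM blocked in SunPKCS11/NSS initialization while seeding the default SecureRandom (reached from UUID.randomUUID in the Hadoop IPC client). If the hang is entropy starvation in the container (an assumption on my part, not confirmed), one workaround I may try is pointing the JVM at the non-blocking /dev/urandom via accumulo-env.sh (assuming the stock file, where JAVA_OPTS is a bash array):

  # append to JAVA_OPTS in accumulo-env.sh (stock layout assumed):
  # /dev/./urandom keeps SecureRandom seeding from blocking during
  # PKCS11/NSS provider initialization
  JAVA_OPTS=("${JAVA_OPTS[@]}" '-Djava.security.egd=file:/dev/./urandom')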



I will wait and see whether there is more log output.

Thanks
Ranga

________________________________
From: Ed Coleman <ed...@apache.org>
Sent: Tuesday, January 10, 2023 10:16 AM
To: user@accumulo.apache.org <us...@accumulo.apache.org>
Subject: [External] Re: accumulo init error in K8S

Have you tried running accumulo init without the --add-volumes? From your attached log, it looks like it cannot find a valid instance id:

2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id



Re: accumulo init error in K8S

Posted by Ed Coleman <ed...@apache.org>.
Have you tried running accumulo init without the --add-volumes? From your attached log, it looks like it cannot find a valid instance id:

2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] DEBUG: Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
Trying to read instance id from hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
2023-01-09T21:22:13,522 [init] [org.apache.accumulo.server.fs.VolumeManager] ERROR: unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
unable to obtain instance id at hdfs://accumulo-hdfs-namenode-0.accumulo-hdfs-namenodes:8020/accumulo/data0/accumulo/instance_id
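
If the instance was never initialized, --add-volumes has nothing to attach to; a plain init has to create the instance (and its instance_id under instance.volumes) first. Roughly, with the exact prompts varying:

 # first-time setup: writes instance_id and the directory layout under instance.volumes
 accumulo init

 # only after appending another URI to instance.volumes, register it with:
 accumulo init --add-volumes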

