Posted to user@hadoop.apache.org by Ravi Prakash <ra...@apache.org> on 2017/12/04 17:23:45 UTC

Re: YARN Auth issue

Hi Yaniv!

The project looks interesting. Good luck! Sorry about the late reply.

The "HADOOP_HOME or hadoop.home.dir are not set." message indicates that the
environment variable has either not been set or that the directory it points
to doesn't contain the requisite Hadoop binaries. You can connect a debugger
to the NodeManager, set a breakpoint here:
https://github.com/apache/hadoop/blob/37ca4169508c3003dbe9044fefd37eb8cd8c0503/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L172
and see what kind of bash scripts are written to launch the container. I'm
sorry, I am not familiar with the sandboxes. I'm guessing, though, that they
might not be relevant here, because eventually Hadoop code does run (as the
later stack trace shows). If you export
YARN_NODEMANAGER_OPTS='-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1045'
you should be able to connect to the NodeManager with a debugger on port 1045.
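As a minimal sketch, the setup above comes down to exporting one option
string before the NodeManager starts. How YARN_NODEMANAGER_OPTS reaches the
NodeManager JVM (e.g. via yarn-env.sh) and the restart command vary by
distribution, so those parts are assumptions, not something from this thread:

```shell
# JDWP options as suggested above; suspend=n means the NodeManager
# starts normally and simply listens for a debugger on port 1045.
DEBUG_OPTS='-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=1045'
export YARN_NODEMANAGER_OPTS="$DEBUG_OPTS"
echo "$YARN_NODEMANAGER_OPTS"
# After restarting the NodeManager with this in its environment,
# attach from another terminal, for example:
#   jdb -attach 1045
```

With suspend=n the NodeManager does not block waiting for a debugger, so it
is safe to leave this enabled on a sandbox.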

The other exception seems more relevant. Each service in Hadoop has a
SecretManager and stores tokens of a specific type; e.g. the HDFS NameNode
stores tokens of the HDFS_DELEGATION_TOKEN type. Maybe the token offered by
the client to the service is not the right type? I have often changed the
bash scripts that launch containers and inserted the debugger string
"-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1052"
(another port, and suspend set to y) into the java command that launches the
container. You can then step through the code of the ApplicationMaster.
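A hedged sketch of that launch-script edit. The real script is generated by
the NodeManager under its local dirs (e.g. under usercache/<user>/appcache/);
the stand-in path, the java command line, and the sed pattern below are
assumptions for illustration only:

```shell
# JDWP options with suspend=y: the container JVM blocks at startup
# until a debugger attaches on port 1052.
DEBUG_OPTS='-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1052'

# Stand-in for the generated container launch script (hypothetical content):
cat > /tmp/launch_container.sh <<'EOF'
exec /usr/java/default/bin/java -Xmx512m my.app.ApplicationMaster
EOF

# Splice the debug options in right after the java executable:
sed -i "s|bin/java |bin/java $DEBUG_OPTS |" /tmp/launch_container.sh
cat /tmp/launch_container.sh
```

Because suspend=y makes the JVM wait before running main(), you can attach a
debugger and step through the ApplicationMaster from its first instruction.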

I hope this helps.

Cheers
Ravi




On Wed, Nov 29, 2017 at 3:15 AM, Yaniv Rodenski <ya...@shinto.io> wrote:

> Hi All,
>
> I'm from Apache Amaterasu (Incubating) and we have been adding YARN
> support recently (we currently support Mesos only).
>
> In our application master, we are seeing strange behavior: the
> call to rmClient.registerApplicationMaster(appMasterHostname, 0, "") is
> blocking indefinitely. We've enabled debug logging for the application
> master, and the log (see below) indicates an authentication issue, but
> we base our code on the distributed shell example and seem to do all
> the needed token parsing etc. However, one thing we have noticed is that in
> our dev environments UserGroupInformation.isSecurityEnabled is false. We've
> tried running this on both the Hortonworks sandbox (Docker) and the
> Cloudera quickstart (VM), and the results are the same.
>
> Any ideas what could be causing the errors below?
>
> ApplicationMaster start
> DEBUG [main] (Shell.java:344) - Failed to detect a valid hadoop home
> directory
> java.io.IOException: HADOOP_HOME or hadoop.home.dir are not set.
> at org.apache.hadoop.util.Shell.checkHadoopHome(Shell.java:326)
> at org.apache.hadoop.util.Shell.<clinit>(Shell.java:351)
> at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
> at org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(
> YarnConfiguration.java:648)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster.
> execute(ApplicationMaster.scala:96)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster$.
> delayedEndpoint$org$apache$amaterasu$leader$yarn$ApplicationMaster$1(
> ApplicationMaster.scala:199)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster$
> delayedInit$body.apply(ApplicationMaster.scala:196)
> at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> at scala.runtime.AbstractFunction0.apply$mcV$
> sp(AbstractFunction0.scala:12)
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at scala.collection.generic.TraversableForwarder$class.
> foreach(TraversableForwarder.scala:35)
> at scala.App$class.main(App.scala:76)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster$.main(
> ApplicationMaster.scala:196)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster.main(
> ApplicationMaster.scala)
> DEBUG [main] (Shell.java:420) - setsid exited with exit code 0
> DEBUG [main] (MutableMetricsFactory.java:42) - field
> org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.
> UserGroupInformation$UgiMetrics.loginSuccess with annotation
> @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate
> of successful kerberos logins and latency (milliseconds)], about=,
> type=DEFAULT, always=false, sampleName=Ops)
> DEBUG [main] (MutableMetricsFactory.java:42) - field
> org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.
> UserGroupInformation$UgiMetrics.loginFailure with annotation
> @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate
> of failed kerberos logins and latency (milliseconds)], about=,
> type=DEFAULT, always=false, sampleName=Ops)
> DEBUG [main] (MutableMetricsFactory.java:42) - field
> org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.
> UserGroupInformation$UgiMetrics.getGroups with annotation
> @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time,
> value=[GetGroups], about=, type=DEFAULT, always=false, sampleName=Ops)
> DEBUG [main] (MetricsSystemImpl.java:232) - UgiMetrics, User and group
> related metrics
> DEBUG [main] (Groups.java:301) -  Creating new Groups object
> DEBUG [main] (NativeCodeLoader.java:46) - Trying to load the custom-built
> native-hadoop library...
> DEBUG [main] (NativeCodeLoader.java:50) - Loaded the native-hadoop library
> DEBUG [main] (JniBasedUnixGroupsMapping.java:50) - Using
> JniBasedUnixGroupsMapping for Group resolution
> DEBUG [main] (JniBasedUnixGroupsMappingWithFallback.java:45) - Group
> mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMapping
> DEBUG [main] (Groups.java:112) - Group mapping impl=org.apache.hadoop.
> security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000;
> warningDeltaMs=5000
> DEBUG [main] (UserGroupInformation.java:222) - hadoop login
> DEBUG [main] (UserGroupInformation.java:157) - hadoop login commit
> DEBUG [main] (UserGroupInformation.java:187) - using local
> user:UnixPrincipal: yarn
> DEBUG [main] (UserGroupInformation.java:193) - Using user:
> "UnixPrincipal: yarn" with name yarn
> DEBUG [main] (UserGroupInformation.java:203) - User entry: "yarn"
> DEBUG [main] (UserGroupInformation.java:827) - UGI loginUser:yarn
> (auth:SIMPLE)
> DEBUG [main] (DFSClient.java:457) - dfs.client.use.legacy.blockreader.local
> = false
> DEBUG [main] (DFSClient.java:460) - dfs.client.read.shortcircuit = true
> DEBUG [main] (DFSClient.java:463) - dfs.client.domain.socket.data.traffic
> = false
> DEBUG [main] (DFSClient.java:466) - dfs.domain.socket.path =
> /var/run/hadoop-hdfs/dn._PORT
> DEBUG [main] (RetryUtils.java:75) - multipleLinearRandomRetry = null
> DEBUG [main] (Server.java:234) - rpcKind=RPC_PROTOCOL_BUFFER,
> rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper,
> rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$
> ProtoBufRpcInvoker@3074195d
> DEBUG [main] (ClientCache.java:63) - getting client out of cache:
> org.apache.hadoop.ipc.Client@40996815
> DEBUG [client DomainSocketWatcher] (DomainSocketWatcher.java:455) -
> org.apache.hadoop.net.unix.DomainSocketWatcher$2@1d60e6d8: starting with
> interruptCheckPeriodMs = 60000
> DEBUG [main] (DomainSocketFactory.java:120) - The short-circuit local
> reads feature is enabled.
> DEBUG [main] (Client.java:434) - The ping interval is 60000 ms.
> DEBUG [main] (Client.java:704) - Connecting to quickstart.cloudera/
> 10.0.2.15:8020
> DEBUG [main] (UserGroupInformation.java:1722) - PrivilegedAction as:yarn
> (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.
> setupIOstreams(Client.java:725)
> DEBUG [main] (SaslRpcClient.java:457) - Sending sasl message state:
> NEGOTIATE
>
> DEBUG [main] (SaslRpcClient.java:389) - Received SASL message state:
> NEGOTIATE
> auths {
>   method: "TOKEN"
>   mechanism: "DIGEST-MD5"
>   protocol: ""
>   serverId: "default"
>   challenge: "realm=\"default\",nonce=\"H9UunAhL6SueSIrO6hWHhHk7V8Yq8h
> dABcTXddrj\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
> }
> auths {
>   method: "SIMPLE"
>   mechanism: ""
> }
>
> DEBUG [main] (SaslRpcClient.java:264) - Get token info proto:interface
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB
> info:@org.apache.hadoop.security.token.TokenInfo(value=class
> org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenSelector)
> DEBUG [main] (SaslRpcClient.java:176) - Use SIMPLE authentication for
> protocol ClientNamenodeProtocolPB
> DEBUG [main] (SaslRpcClient.java:457) - Sending sasl message state:
> INITIATE
> auths {
>   method: "SIMPLE"
>   mechanism: ""
> }
>
> DEBUG [IPC Client (1585910593) connection to quickstart.cloudera/
> 10.0.2.15:8020 from yarn] (Client.java:974) - IPC Client (1585910593)
> connection to quickstart.cloudera/10.0.2.15:8020 from yarn: starting,
> having connections 1
> DEBUG [IPC Parameter Sending Thread #0] (Client.java:1037) - IPC Client
> (1585910593) connection to quickstart.cloudera/10.0.2.15:8020 from yarn
> sending #0
> DEBUG [IPC Client (1585910593) connection to quickstart.cloudera/
> 10.0.2.15:8020 from yarn] (Client.java:1094) - IPC Client (1585910593)
> connection to quickstart.cloudera/10.0.2.15:8020 from yarn got value #0
> DEBUG [main] (ProtobufRpcEngine.java:250) - Call: getFileInfo took 361ms
> Started execute
> DEBUG [main] (AbstractService.java:452) - Service:
> org.apache.hadoop.yarn.client.api.async.NMClientAsync entered state INITED
>  INFO [main] (NMClientAsyncImpl.java:107) - Upper bound of the thread pool
> size is 500
> DEBUG [main] (AbstractService.java:452) - Service:
> org.apache.hadoop.yarn.client.api.impl.NMClientImpl entered state INITED
>  INFO [main] (ContainerManagementProtocolProxy.java:81) -
> yarn.client.max-cached-nodemanagers-proxies : 0
> DEBUG [main] (YarnRPC.java:59) - Creating YarnRPC for
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
> DEBUG [main] (AbstractService.java:197) - Service
> org.apache.hadoop.yarn.client.api.impl.NMClientImpl is started
> DEBUG [main] (AbstractService.java:197) - Service
> org.apache.hadoop.yarn.client.api.async.NMClientAsync is started
> DEBUG [main] (AbstractService.java:452) - Service:
> org.apache.hadoop.yarn.client.api.async.AMRMClientAsync entered state
> INITED
> DEBUG [main] (AbstractService.java:452) - Service:
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl entered state INITED
>  INFO [main] (RMProxy.java:98) - Connecting to ResourceManager at /
> 0.0.0.0:8030
> DEBUG [main] (UserGroupInformation.java:1722) - PrivilegedAction as:yarn
> (auth:SIMPLE) from:org.apache.hadoop.yarn.client.RMProxy.getProxy(
> RMProxy.java:136)
> DEBUG [main] (YarnRPC.java:59) - Creating YarnRPC for
> org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
> DEBUG [main] (HadoopYarnProtoRPC.java:47) - Creating a HadoopYarnProtoRpc
> proxy for protocol interface org.apache.hadoop.yarn.api.
> ApplicationMasterProtocol
> DEBUG [main] (ClientCache.java:63) - getting client out of cache:
> org.apache.hadoop.ipc.Client@40996815
> DEBUG [main] (AbstractService.java:197) - Service
> org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl is started
> DEBUG [main] (AbstractService.java:197) - Service
> org.apache.hadoop.yarn.client.api.async.AMRMClientAsync is started
> Registering application
> DEBUG [main] (Client.java:434) - The ping interval is 60000 ms.
> DEBUG [main] (Client.java:704) - Connecting to /0.0.0.0:8030
> DEBUG [main] (UserGroupInformation.java:1722) - PrivilegedAction as:yarn
> (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.
> setupIOstreams(Client.java:725)
> DEBUG [main] (SaslRpcClient.java:457) - Sending sasl message state:
> NEGOTIATE
>
> DEBUG [main] (SaslRpcClient.java:389) - Received SASL message state:
> NEGOTIATE
> auths {
>   method: "TOKEN"
>   mechanism: "DIGEST-MD5"
>   protocol: ""
>   serverId: "default"
>   challenge: "realm=\"default\",nonce=\"PWgJLNJ4RtktjdEfJHMHHGysStGOSY
> kOO4PheWYZ\",qop=\"auth\",charset=utf-8,algorithm=md5-sess"
> }
>
> DEBUG [main] (SaslRpcClient.java:264) - Get token info proto:interface
> org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB info:null
> DEBUG [main] (UserGroupInformation.java:1702) - PrivilegedActionException
> as:yarn (auth:SIMPLE) cause:org.apache.hadoop.security.AccessControlException:
> Client cannot authenticate via:[TOKEN]
> DEBUG [main] (UserGroupInformation.java:1722) - PrivilegedAction as:yarn
> (auth:SIMPLE) from:org.apache.hadoop.ipc.Client$Connection.
> handleSaslConnectionFailure(Client.java:650)
>  WARN [main] (Client.java:682) - Exception encountered while connecting to
> the server : org.apache.hadoop.security.AccessControlException: Client
> cannot authenticate via:[TOKEN]
> DEBUG [main] (UserGroupInformation.java:1702) - PrivilegedActionException
> as:yarn (auth:SIMPLE) cause:java.io.IOException: org.apache.hadoop.security.AccessControlException:
> Client cannot authenticate via:[TOKEN]
> DEBUG [main] (Client.java:1188) - closing ipc connection to /0.0.0.0:8030:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN]
> java.io.IOException: org.apache.hadoop.security.AccessControlException:
> Client cannot authenticate via:[TOKEN]
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(
> Client.java:650)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:737)
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
> at org.apache.hadoop.ipc.Client.call(Client.java:1451)
> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.
> invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy17.registerApplicationMaster(Unknown Source)
> at org.apache.hadoop.yarn.api.impl.pb.client.
> ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(
> ApplicationMasterProtocolPBClientImpl.java:107)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(
> RetryInvocationHandler.java:191)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(
> RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy18.registerApplicationMaster(Unknown Source)
> at org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.
> registerApplicationMaster(AMRMClientImpl.java:223)
> at org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.
> registerApplicationMaster(AMRMClientImpl.java:214)
> at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl.
> registerApplicationMaster(AMRMClientAsyncImpl.java:139)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster.
> execute(ApplicationMaster.scala:139)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster$.
> delayedEndpoint$org$apache$amaterasu$leader$yarn$ApplicationMaster$1(
> ApplicationMaster.scala:199)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster$
> delayedInit$body.apply(ApplicationMaster.scala:196)
> at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> at scala.runtime.AbstractFunction0.apply$mcV$
> sp(AbstractFunction0.scala:12)
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at scala.collection.generic.TraversableForwarder$class.
> foreach(TraversableForwarder.scala:35)
> at scala.App$class.main(App.scala:76)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster$.main(
> ApplicationMaster.scala:196)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster.main(
> ApplicationMaster.scala)
> Caused by: org.apache.hadoop.security.AccessControlException: Client
> cannot authenticate via:[TOKEN]
> at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(
> SaslRpcClient.java:172)
> at org.apache.hadoop.security.SaslRpcClient.saslConnect(
> SaslRpcClient.java:396)
> at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.
> java:560)
> at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:375)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:729)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:725)
> ... 29 more
> DEBUG [main] (Client.java:1197) - IPC Client (1585910593) connection to /
> 0.0.0.0:8030 from yarn: closed
> DEBUG [IPC Client (1585910593) connection to quickstart.cloudera/
> 10.0.2.15:8020 from yarn] (Client.java:1197) - IPC Client (1585910593)
> connection to quickstart.cloudera/10.0.2.15:8020 from yarn: closed
> DEBUG [IPC Client (1585910593) connection to quickstart.cloudera/
> 10.0.2.15:8020 from yarn] (Client.java:992) - IPC Client (1585910593)
> connection to quickstart.cloudera/10.0.2.15:8020 from yarn: stopped,
> remaining connections 0
> DEBUG [main] (RetryInvocationHandler.java:147) - Exception while invoking
> registerApplicationMaster of class ApplicationMasterProtocolPBClientImpl
> over null. Retrying after sleeping for 30000ms.
> java.io.IOException: Failed on local exception: java.io.IOException:
> org.apache.hadoop.security.AccessControlException: Client cannot
> authenticate via:[TOKEN]; Host Details : local host is:
> "quickstart.cloudera/10.0.2.15"; destination host is: "0.0.0.0":8030;
> at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
> at org.apache.hadoop.ipc.Client.call(Client.java:1479)
> at org.apache.hadoop.ipc.Client.call(Client.java:1412)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.
> invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy17.registerApplicationMaster(Unknown Source)
> at org.apache.hadoop.yarn.api.impl.pb.client.
> ApplicationMasterProtocolPBClientImpl.registerApplicationMaster(
> ApplicationMasterProtocolPBClientImpl.java:107)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(
> RetryInvocationHandler.java:191)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(
> RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy18.registerApplicationMaster(Unknown Source)
> at org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.
> registerApplicationMaster(AMRMClientImpl.java:223)
> at org.apache.hadoop.yarn.client.api.impl.AMRMClientImpl.
> registerApplicationMaster(AMRMClientImpl.java:214)
> at org.apache.hadoop.yarn.client.api.async.impl.AMRMClientAsyncImpl.
> registerApplicationMaster(AMRMClientAsyncImpl.java:139)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster.
> execute(ApplicationMaster.scala:139)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster$.
> delayedEndpoint$org$apache$amaterasu$leader$yarn$ApplicationMaster$1(
> ApplicationMaster.scala:199)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster$
> delayedInit$body.apply(ApplicationMaster.scala:196)
> at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
> at scala.runtime.AbstractFunction0.apply$mcV$
> sp(AbstractFunction0.scala:12)
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> at scala.App$$anonfun$main$1.apply(App.scala:76)
> at scala.collection.immutable.List.foreach(List.scala:392)
> at scala.collection.generic.TraversableForwarder$class.
> foreach(TraversableForwarder.scala:35)
> at scala.App$class.main(App.scala:76)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster$.main(
> ApplicationMaster.scala:196)
> at org.apache.amaterasu.leader.yarn.ApplicationMaster.main(
> ApplicationMaster.scala)
> Caused by: java.io.IOException: org.apache.hadoop.security.AccessControlException:
> Client cannot authenticate via:[TOKEN]
> at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:687)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(
> Client.java:650)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:737)
> at org.apache.hadoop.ipc.Client$Connection.access$2900(Client.java:375)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1528)
> at org.apache.hadoop.ipc.Client.call(Client.java:1451)
> ... 26 more
> Caused by: org.apache.hadoop.security.AccessControlException: Client
> cannot authenticate via:[TOKEN]
> at org.apache.hadoop.security.SaslRpcClient.selectSaslClient(
> SaslRpcClient.java:172)
> at org.apache.hadoop.security.SaslRpcClient.saslConnect(
> SaslRpcClient.java:396)
> at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.
> java:560)
> at org.apache.hadoop.ipc.Client$Connection.access$1900(Client.java:375)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:729)
> at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(
> UserGroupInformation.java:1698)
> at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:725)
> ... 29 more
>
> Thanks in advance,
> --
> Yaniv Rodenski
>