Posted to mapreduce-issues@hadoop.apache.org by "Hadoop QA (JIRA)" <ji...@apache.org> on 2015/09/19 02:49:04 UTC

[jira] [Commented] (MAPREDUCE-6484) Yarn Client uses local address instead of RM address as token renewer in a secure cluster when RM HA is enabled.

    [ https://issues.apache.org/jira/browse/MAPREDUCE-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14876756#comment-14876756 ] 

Hadoop QA commented on MAPREDUCE-6484:
--------------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 57s | Pre-patch trunk compilation is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any @author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to include 1 new or modified test files. |
| {color:green}+1{color} | javac |   9m 37s | There were no new javac warning messages. |
| {color:green}+1{color} | javadoc |  12m 16s | There were no new javadoc warning messages. |
| {color:green}+1{color} | release audit |   0m 29s | The applied patch does not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 55s | The applied patch generated  1 new checkstyle issues (total was 9, now 10). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that end in whitespace. |
| {color:green}+1{color} | install |   1m 47s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 46s | The patch built with eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 37s | The patch does not introduce any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | mapreduce tests |   2m 22s | Tests passed in hadoop-mapreduce-client-core. |
| | |  48m 50s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12761227/YARN-4187.000.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 94dec5a |
| checkstyle |  https://builds.apache.org/job/PreCommit-YARN-Build/9216/artifact/patchprocess/diffcheckstylehadoop-mapreduce-client-core.txt |
| hadoop-mapreduce-client-core test log | https://builds.apache.org/job/PreCommit-YARN-Build/9216/artifact/patchprocess/testrun_hadoop-mapreduce-client-core.txt |
| Test Results | https://builds.apache.org/job/PreCommit-YARN-Build/9216/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | https://builds.apache.org/job/PreCommit-YARN-Build/9216/console |


This message was automatically generated.

> Yarn Client uses local address instead of RM address as token renewer in a secure cluster when RM HA is enabled.
> ----------------------------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-6484
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6484
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: client, security
>            Reporter: zhihai xu
>            Assignee: zhihai xu
>         Attachments: YARN-4187.000.patch
>
>
> Yarn Client uses the local address instead of the RM address as the token renewer in a secure cluster when RM HA is enabled. This causes HDFS token renewal to fail for renewer "nobody" when the {{hadoop.security.auth_to_local}} rules exclude the client address used in the HDFS {{DelegationTokenIdentifier}}.
> The local address is returned because, when HA is enabled, "yarn.resourcemanager.address" may not be set. If {{HOSTNAME_PATTERN}} ("_HOST") is used in "yarn.resourcemanager.principal", the default address "0.0.0.0:8032" is used, and based on the following code in SecurityUtil.java, the local address is substituted for "0.0.0.0".
> {code}
>   private static String replacePattern(String[] components, String hostname)
>       throws IOException {
>     String fqdn = hostname;
>     if (fqdn == null || fqdn.isEmpty() || fqdn.equals("0.0.0.0")) {
>       fqdn = getLocalHostName();
>     }
>     return components[0] + "/" + fqdn.toLowerCase(Locale.US) + "@" + components[2];
>   }
>   static String getLocalHostName() throws UnknownHostException {
>     return InetAddress.getLocalHost().getCanonicalHostName();
>   }
>   public static String getServerPrincipal(String principalConfig,
>       InetAddress addr) throws IOException {
>     String[] components = getComponents(principalConfig);
>     if (components == null || components.length != 3
>         || !components[1].equals(HOSTNAME_PATTERN)) {
>       return principalConfig;
>     } else {
>       if (addr == null) {
>         throw new IOException("Can't replace " + HOSTNAME_PATTERN
>             + " pattern since client address is null");
>       }
>       return replacePattern(components, addr.getCanonicalHostName());
>     }
>   }
> {code}
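> For illustration, here is a minimal, hypothetical snippet (not part of the attached patch; the principal value is made up) that reproduces the substitution using the String overload of {{getServerPrincipal}}:
> {code}
> import org.apache.hadoop.security.SecurityUtil;
>
> public class RenewerSubstitutionDemo {
>   public static void main(String[] args) throws Exception {
>     // With RM HA enabled and yarn.resourcemanager.address unset, the client
>     // falls back to the default "0.0.0.0:8032", so the host passed in is "0.0.0.0".
>     String principal = SecurityUtil.getServerPrincipal(
>         "rm/_HOST@EXAMPLE.COM", "0.0.0.0");
>     // replacePattern() treats "0.0.0.0" as "no host" and substitutes the local
>     // canonical hostname, e.g. "rm/client-host.example.com@EXAMPLE.COM".
>     System.out.println(principal);
>   }
> }
> {code}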
> The following is the exception that causes the job to fail:
> {code}
> 15/09/12 16:27:24 WARN security.UserGroupInformation: PriviledgedActionException as:test@EXAMPLE.COM (auth:KERBEROS) cause:java.io.IOException: Failed to run job : yarn tries to renew a token with renewer nobody
> at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:464)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewDelegationToken(FSNamesystem.java:7109)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewDelegationToken(NameNodeRpcServer.java:512)
> at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.renewDelegationToken(AuthorizationProviderProxyClientProtocol.java:648)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:975)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> java.io.IOException: Failed to run job : yarn tries to renew a token with renewer nobody
> at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.renewToken(AbstractDelegationTokenSecretManager.java:464)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renewDelegationToken(FSNamesystem.java:7109)
> at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.renewDelegationToken(NameNodeRpcServer.java:512)
> at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.renewDelegationToken(AuthorizationProviderProxyClientProtocol.java:648)
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.renewDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:975)
> at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:587)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
> at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:300)
> at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:438)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
> at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
> at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
> at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1313)
> at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
> at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:145)
> at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> {code}
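> A hedged sketch of one possible direction (an illustration only, not the attached YARN-4187.000.patch): when HA is enabled, resolve the hostname of a configured RM instance from the per-instance keys before substituting {{_HOST}}, so the renewer principal names a real ResourceManager rather than the submitting host.
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.security.SecurityUtil;
> import org.apache.hadoop.yarn.conf.YarnConfiguration;
>
> public class HaAwareRenewerSketch {
>   static String renewerPrincipal(Configuration conf) throws Exception {
>     String host;
>     if (conf.getBoolean(YarnConfiguration.RM_HA_ENABLED, false)) {
>       // Assumes yarn.resourcemanager.ha.rm-ids and
>       // yarn.resourcemanager.hostname.<rm-id> are set; a real client would
>       // handle missing values and consider every configured RM id.
>       String firstRmId = conf.getStrings(YarnConfiguration.RM_HA_IDS)[0];
>       host = conf.get(YarnConfiguration.RM_HOSTNAME + "." + firstRmId);
>     } else {
>       host = conf.getSocketAddr(YarnConfiguration.RM_ADDRESS,
>           YarnConfiguration.DEFAULT_RM_ADDRESS,
>           YarnConfiguration.DEFAULT_RM_PORT).getHostName();
>     }
>     // _HOST in yarn.resourcemanager.principal now expands to an RM host,
>     // so the NameNode maps the renewer to the RM's short name, not "nobody".
>     return SecurityUtil.getServerPrincipal(
>         conf.get(YarnConfiguration.RM_PRINCIPAL), host);
>   }
> }
> {code}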



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)