Posted to issues@hive.apache.org by "linwukang (JIRA)" <ji...@apache.org> on 2018/04/24 06:45:00 UTC

[jira] [Commented] (HIVE-15767) Hive On Spark is not working on secure clusters from Oozie

    [ https://issues.apache.org/jira/browse/HIVE-15767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16449392#comment-16449392 ] 

linwukang commented on HIVE-15767:
----------------------------------

Hi [~gezapeti], after applying this patch, I find that Hive On Spark works with YARN and all tasks finish successfully, but another error is thrown at the end of the process:

 
{code:java}
2018-04-24T14:28:46,409 INFO [116dbf89-2982-407d-9b64-4206b3bbe105 main] lockmgr.DbTxnManager: Stopped heartbeat for query: flowagent_20180424142839_be68e2b9-aca9-4023-89f8-6a18d53dd0c5
2018-04-24T14:28:46,409 INFO [116dbf89-2982-407d-9b64-4206b3bbe105 main] lockmgr.DbLockManager: releaseLocks: [lockid:438 queryId=flowagent_20180424142839_be68e2b9-aca9-4023-89f8-6a18d53dd0c5 txnid:0]
2018-04-24T14:28:46,422 ERROR [116dbf89-2982-407d-9b64-4206b3bbe105 main] CliDriver: Failed with exception java.io.IOException:org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:6635)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:563)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:988)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1727)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)

java.io.IOException: org.apache.hadoop.ipc.RemoteException(java.io.IOException): Delegation Token can be issued only with kerberos or web authentication
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getDelegationToken(FSNamesystem.java:6635)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getDelegationToken(NameNodeRpcServer.java:563)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getDelegationToken(ClientNamenodeProtocolServerSideTranslatorPB.java:988)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1727)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2045)

at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:521)
at org.apache.hadoop.hive.ql.exec.FetchOperator.pushRow(FetchOperator.java:428)
at org.apache.hadoop.hive.ql.exec.FetchTask.fetch(FetchTask.java:147)
at org.apache.hadoop.hive.ql.Driver.getResults(Driver.java:2208)
at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:253)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:474)
at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:490)
at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:793)
{code}
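
The stack trace shows the failure happens in FetchOperator.getNextRow(), i.e. after the Spark tasks have already finished and the CLI driver is fetching results. At that point the client asks the NameNode for a fresh HDFS delegation token, but (as the RemoteException says) the NameNode issues delegation tokens only to callers authenticated via Kerberos or web authentication; inside an Oozie launcher the process is typically authenticated with a delegation token itself, so the request is rejected. A minimal diagnostic sketch of that condition, using the standard Hadoop UserGroupInformation API (the "yarn" renewer below is just a placeholder, not taken from this issue):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public class TokenAuthCheck {
    public static void main(String[] args) throws Exception {
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        // The NameNode issues delegation tokens only to callers that
        // authenticated via Kerberos (or web authentication). Inside an
        // Oozie launcher the process is usually TOKEN-authenticated, so
        // a late getDelegationToken() call fails exactly as in the log.
        System.out.println("auth method: " + ugi.getAuthenticationMethod());

        if (ugi.getAuthenticationMethod()
                == UserGroupInformation.AuthenticationMethod.KERBEROS) {
            // Only in this case is it safe to request fresh tokens.
            Credentials creds = new Credentials();
            FileSystem.get(new Configuration())
                      .addDelegationTokens("yarn", creds); // renewer is a placeholder
        }
    }
}
{code}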

> Hive On Spark is not working on secure clusters from Oozie
> ----------------------------------------------------------
>
>                 Key: HIVE-15767
>                 URL: https://issues.apache.org/jira/browse/HIVE-15767
>             Project: Hive
>          Issue Type: Bug
>          Components: Spark
>    Affects Versions: 1.2.1, 2.1.1
>            Reporter: Peter Cseh
>            Assignee: Peter Cseh
>            Priority: Major
>             Fix For: 3.0.0
>
>         Attachments: HIVE-15767-001.patch, HIVE-15767-002.patch, HIVE-15767.1.patch
>
>
> When a HiveAction is launched from Oozie with Hive On Spark enabled, we're getting errors:
> {noformat}
> Caused by: java.io.IOException: Exception reading file:/yarn/nm/usercache/yshi/appcache/application_1485271416004_0022/container_1485271416004_0022_01_000002/container_tokens
>         at org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:188)
>         at org.apache.hadoop.mapreduce.security.TokenCache.mergeBinaryTokens(TokenCache.java:155)
> {noformat}
> This is caused by passing the {{mapreduce.job.credentials.binary}} property to the Spark configuration in RemoteHiveSparkClient.
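
The root cause quoted above also suggests the shape of the remedy: don't forward the Oozie-injected {{mapreduce.job.credentials.binary}} property (which points at a token file that exists only inside the launcher container) to the remote Spark driver. A minimal sketch of that idea, assuming a copied HiveConf; the helper name withoutLauncherCredentials is hypothetical and this is not the actual patch:
{code:java}
import org.apache.hadoop.hive.conf.HiveConf;

public class StripCredentialsProperty {
    // Hypothetical helper, illustrating the idea only: copy the Hive
    // configuration and drop the pointer to the launcher container's
    // token file before handing the configuration to the Spark driver.
    static HiveConf withoutLauncherCredentials(HiveConf hiveConf) {
        HiveConf sparkSideConf = new HiveConf(hiveConf);
        // The file this property points at is readable only inside the
        // Oozie launcher container; Spark cannot re-read it elsewhere.
        sparkSideConf.unset("mapreduce.job.credentials.binary");
        return sparkSideConf;
    }
}
{code}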


