Posted to issues@hive.apache.org by "Peter Vary (JIRA)" <ji...@apache.org> on 2017/02/17 09:55:41 UTC

[jira] [Commented] (HIVE-15963) When the token renewal period is short MapRedLocalTask might fail with "token (...) is expired"

    [ https://issues.apache.org/jira/browse/HIVE-15963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15871566#comment-15871566 ] 

Peter Vary commented on HIVE-15963:
-----------------------------------

A similar solution might work here as well.

> When the token renewal period is short MapRedLocalTask might fail with "token (...) is expired"
> -----------------------------------------------------------------------------------------------
>
>                 Key: HIVE-15963
>                 URL: https://issues.apache.org/jira/browse/HIVE-15963
>             Project: Hive
>          Issue Type: Bug
>          Components: HiveServer2
>    Affects Versions: 1.1.0, 2.1.0, 2.2.0, 2.1.1
>            Reporter: Peter Vary
>            Assignee: Peter Vary
>            Priority: Minor
>
> When the {{hadoop.kms.authentication.delegation-token.renew-interval.sec}} configuration is set to a low value and the MapRedLocalTask runs for longer than this interval, the MapRedLocalTask might fail with the following exception:
> {code}
> 2017-01-13 14:36:01,213 ERROR [main]: mr.MapredLocalTask (MapredLocalTask.java:executeInProcess(387)) - Hive Runtime Error: Map local work failed
> java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt owner=hive, renewer=hive, realUser=, issueDate=1484346896791, maxDate=1484951696791, sequenceNumber=1021852, masterKeyId=58) is expired
>         at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:508)
>         at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.startForward(MapredLocalTask.java:435)
>         at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.startForward(MapredLocalTask.java:410)
>         at org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask.executeInProcess(MapredLocalTask.java:376)
>         at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.main(ExecDriver.java:735)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
>         at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> Caused by: org.apache.hadoop.security.authentication.client.AuthenticationException: org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt owner=hive, renewer=hive, realUser=, issueDate=1484346896791, maxDate=1484951696791, sequenceNumber=1021852, masterKeyId=58) is expired
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>         at org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:627)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:585)
>         at org.apache.hadoop.crypto.key.kms.KMSClientProvider.decryptEncryptedKey(KMSClientProvider.java:852)
>         at org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:388)
>         at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1440)
>         at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1510)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:328)
>         at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:322)
>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>         at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:322)
>         at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:783)
>         at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:407)
>         at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:386)
>         at parquet.hadoop.ParquetFileReader.readFooter(ParquetFileReader.java:372)
>         at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.getSplit(ParquetRecordReaderWrapper.java:252)
>         at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:95)
>         at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.<init>(ParquetRecordReaderWrapper.java:81)
>         at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:72)
>         at org.apache.hadoop.hive.ql.exec.FetchOperator$FetchInputFormatSplit.getRecordReader(FetchOperator.java:674)
>         at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:324)
>         at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:446)
> {code}
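For reference, the renewal interval in question is a KMS server-side setting. A minimal sketch of how it might look in kms-site.xml (the property name is taken from the description above; the exact values are illustrative, and 86400 seconds is, to my knowledge, the usual Hadoop default):

{code}
<!-- kms-site.xml: delegation token renewal interval, in seconds.
     A low value (e.g. 60 instead of the usual 86400) means a
     MapRedLocalTask running longer than the interval can outlive
     its kms-dt token and fail with "token (...) is expired". -->
<property>
  <name>hadoop.kms.authentication.delegation-token.renew-interval.sec</name>
  <value>60</value>
</property>
{code}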



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)