Posted to issues@flume.apache.org by "Prashanth (Jira)" <ji...@apache.org> on 2022/07/25 06:24:00 UTC

[jira] [Created] (FLUME-3431) Hive sink Kerberos issue - No valid credentials provided

Prashanth created FLUME-3431:
--------------------------------

             Summary: Hive sink Kerberos issue - No valid credentials provided
                 Key: FLUME-3431
                 URL: https://issues.apache.org/jira/browse/FLUME-3431
             Project: Flume
          Issue Type: Question
          Components: Sinks+Sources
    Affects Versions: 1.10.0
            Reporter: Prashanth


I am unable to test the Hive sink with Flume on a kerberized cluster; see the exception below. I also noticed that, unlike the HDFS and HBase sinks, the Hive sink has no documented option in the Flume user guide for providing a keytab/principal. How do we use the Hive sink on a kerberized cluster?
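For illustration, a minimal Hive sink configuration of the kind in question might look like the sketch below (the exception itself follows it). The agent/sink names and the metastore/database/table values are placeholders, and the two kerberos keys at the end are hypothetical: they are modeled on the HDFS sink's hdfs.kerberosPrincipal/hdfs.kerberosKeytab and are exactly what the user guide does not document for the Hive sink.

{code}
# Hypothetical names and values throughout.
a1.sinks = k1
a1.sinks.k1.type = hive
a1.sinks.k1.channel = c1
a1.sinks.k1.hive.metastore = thrift://metastore-host.example.com:9083
a1.sinks.k1.hive.database = logsdb
a1.sinks.k1.hive.table = weblogs
a1.sinks.k1.serializer = DELIMITED

# What a keytab/principal pair might look like if the Hive sink accepted
# the same style of options as the HDFS sink. These two keys are
# hypothetical; the user guide lists no such options:
a1.sinks.k1.hive.kerberosPrincipal = flume/agent-host.example.com@EXAMPLE.COM
a1.sinks.k1.hive.kerberosKeytab = /etc/security/keytabs/flume.keytab
{code}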
 
{code:java}
2022-07-21T07:39:29,035 ERROR [hive-sink1-call-runner-0] org.apache.thrift.transport.TSaslTransport - SASL negotiation failure
  javax.security.sasl.SaslException: GSS initiate failed
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) ~[?:1.8.0_331]
    at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:96) ~[libthrift-0.14.1.jar:0.14.1]
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:236) [libthrift-0.14.1.jar:0.14.1]
    at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:39) [libthrift-0.14.1.jar:0.14.1]
    at org.apache.hadoop.hive.metastore.security.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:51) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.security.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:48) [hive-exec-3.1.2.jar:3.1.2]
    at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_331]
    at javax.security.auth.Subject.doAs(Subject.java:422) [?:1.8.0_331]
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) [hadoop-common-3.1.2.odh.1.0.dfc6dd56066.jar:?]
    at org.apache.hadoop.hive.metastore.security.TUGIAssumingTransport.open(TUGIAssumingTransport.java:48) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:531) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:225) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.common.HiveClientCache$CacheableHiveMetaStoreClient.<init>(HiveClientCache.java:409) [hive-hcatalog-core-3.1.2.jar:3.1.2]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:1.8.0_331]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [?:1.8.0_331]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [?:1.8.0_331]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) [?:1.8.0_331]
    at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:133) [hive-exec-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:297) [hive-hcatalog-core-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.common.HiveClientCache$5.call(HiveClientCache.java:292) [hive-hcatalog-core-3.1.2.jar:3.1.2]
    at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767) [guava-11.0.2.jar:?]
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568) [guava-11.0.2.jar:?]
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350) [guava-11.0.2.jar:?]
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313) [guava-11.0.2.jar:?]
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228) [guava-11.0.2.jar:?]
    at com.google.common.cache.LocalCache.get(LocalCache.java:3965) [guava-11.0.2.jar:?]
    at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764) [guava-11.0.2.jar:?]
    at org.apache.hive.hcatalog.common.HiveClientCache.getOrCreate(HiveClientCache.java:292) [hive-hcatalog-core-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.common.HiveClientCache.get(HiveClientCache.java:267) [hive-hcatalog-core-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.common.HCatUtil.getHiveMetastoreClient(HCatUtil.java:569) [hive-hcatalog-core-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.getMetaStoreClient(HiveEndPoint.java:529) [hive-hcatalog-streaming-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:327) [hive-hcatalog-streaming-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint$ConnectionImpl.<init>(HiveEndPoint.java:294) [hive-hcatalog-streaming-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnectionImpl(HiveEndPoint.java:229) [hive-hcatalog-streaming-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:206) [hive-hcatalog-streaming-3.1.2.jar:3.1.2]
    at org.apache.hive.hcatalog.streaming.HiveEndPoint.newConnection(HiveEndPoint.java:117) [hive-hcatalog-streaming-3.1.2.jar:3.1.2]
    at org.apache.flume.sink.hive.HiveWriter$8.call(HiveWriter.java:379) [flume-hive-sink-1.10.0.jar:1.10.0]
    at org.apache.flume.sink.hive.HiveWriter$8.call(HiveWriter.java:376) [flume-hive-sink-1.10.0.jar:1.10.0]
    at org.apache.flume.sink.hive.HiveWriter$11.call(HiveWriter.java:428) [flume-hive-sink-1.10.0.jar:1.10.0]
    at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_331]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_331]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_331]
    at java.lang.Thread.run(Thread.java:750) [?:1.8.0_331]
  Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Server not found in Kerberos database (7) - LOOKING_UP_SERVER)
    at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:772) ~[?:1.8.0_331]
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248) ~[?:1.8.0_331]
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) ~[?:1.8.0_331]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192) ~[?:1.8.0_331]
    ... 45 more
{code}
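For context on the root cause line: "Server not found in Kerberos database (7) - LOOKING_UP_SERVER" means the KDC could not find the service principal the client asked for. For the metastore client this is typically hive/<metastore-host>@REALM, with the host taken from the hive.metastore URI, so the usual suspects are an IP address or a short/alias hostname in that URI instead of the FQDN the principal was registered under. A rough check, with hypothetical hostnames and realm, is sketched below.

{code}
# Run after kinit as the Flume principal. Hostnames/realm are hypothetical.

# 1. Confirm the host in hive.metastore resolves to the FQDN the
#    service principal was created for:
nslookup metastore-host.example.com

# 2. Request a service ticket for the principal the client would derive;
#    a missing principal reproduces the LOOKING_UP_SERVER failure here:
kvno hive/metastore-host.example.com@EXAMPLE.COM
{code}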



