Posted to issues@nifi.apache.org by "Julius Kovacs (Jira)" <ji...@apache.org> on 2021/01/14 11:23:00 UTC

[jira] [Comment Edited] (NIFI-7831) KeytabCredentialsService not working with HBase Clients

    [ https://issues.apache.org/jira/browse/NIFI-7831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17264714#comment-17264714 ] 

Julius Kovacs edited comment on NIFI-7831 at 1/14/21, 11:22 AM:
----------------------------------------------------------------

I think this issue is not limited to HBase clients, as we are facing the same problem with the KerberosCredentialService while using HDFS processors (list, put, delete, etc.). After 24 hours the flow gets stuck with an error message saying that a valid Kerberos TGT was not found. (NiFi version 1.12.1)

I built version 1.13.0 from source and back-ported only the Kerberos Credential Service .nar file, but unfortunately the same thing happens.

We had to update from version 1.10 to 1.12.1 because of a handful of problems we were facing with NiFi Registry (all of them remedied by upgrading to 1.12.1), so downgrading would be rather painful. For now we are disabling/enabling the credential service manually every day while we look for a different solution - any and all advice is appreciated.
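In case it helps anyone doing the same, below is a rough sketch of how that daily disable/enable cycle could be driven through the NiFi REST API instead of the UI. Everything in it is illustrative: the base URL and controller service id are placeholders, it assumes the /controller-services/{id}/run-status endpoint available in recent 1.x releases, it skips authentication, and any referencing processors would need to be stopped before disabling.

{code:java}
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Rough sketch only (Java 11+). Host, port and service id are placeholders, and
// authentication (e.g. a bearer token on a secured NiFi) is omitted for brevity.
public class ToggleCredentialService {

    private static final String NIFI_API = "http://nifi-host:8080/nifi-api";       // placeholder
    private static final String SERVICE_ID = "replace-with-controller-service-id"; // placeholder

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        setRunStatus(http, "DISABLED");
        Thread.sleep(10_000); // crude pause so the service can finish disabling
        setRunStatus(http, "ENABLED");
    }

    private static void setRunStatus(HttpClient http, String state) throws Exception {
        // Fetch the current entity so the revision version can be echoed back,
        // which the run-status update requires for optimistic locking.
        HttpResponse<String> entity = http.send(
                HttpRequest.newBuilder(URI.create(NIFI_API + "/controller-services/" + SERVICE_ID))
                        .GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Crude extraction of revision.version; a real script should use a JSON parser.
        Matcher m = Pattern.compile("\"version\"\\s*:\\s*(\\d+)").matcher(entity.body());
        if (!m.find()) {
            throw new IllegalStateException("Could not find revision version in: " + entity.body());
        }

        String body = "{\"revision\":{\"version\":" + m.group(1) + "},\"state\":\"" + state + "\"}";
        HttpResponse<String> response = http.send(
                HttpRequest.newBuilder(URI.create(NIFI_API + "/controller-services/" + SERVICE_ID + "/run-status"))
                        .header("Content-Type", "application/json")
                        .PUT(HttpRequest.BodyPublishers.ofString(body))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(state + " -> HTTP " + response.statusCode());
    }
}
{code}

Run from cron shortly before the 24-hour mark this avoids depending on someone being at the UI, but it obviously does not address the underlying relogin problem.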

EDIT: I now realize my mistake: the Kerberos Credential Service is not the culprit here. I will try back-porting those components to version 1.12.1 and see if that fixes the issue.


was (Author: matagyula):
I think this issue is not limited to HBase clients, as we are facing the same problem with the KerberosCredentialService while using HDFS processors (list, put, delete, etc.). After 24 hours the flow gets stuck with an error message saying that a valid Kerberos TGT was not found. (NiFi version 1.12.1)

I built version 1.13.0 from source and back-ported only the Kerberos Credential Service .nar file, but unfortunately the same thing happens.

We had to update from version 1.10 to 1.12.1 because of a handful of problems we were facing with NiFi Registry (all of them remedied by upgrading to 1.12.1), so downgrading would be rather painful. For now we are disabling/enabling the credential service manually every day while we look for a different solution - any and all advice is appreciated.

> KeytabCredentialsService not working with HBase Clients
> -------------------------------------------------------
>
>                 Key: NIFI-7831
>                 URL: https://issues.apache.org/jira/browse/NIFI-7831
>             Project: Apache NiFi
>          Issue Type: Bug
>    Affects Versions: 1.12.0
>            Reporter: Manuel Navarro
>            Assignee: Tamas Palfy
>            Priority: Major
>             Fix For: 1.13.0
>
>
> The HBase client (both 1.x and 2.x) is not able to renew its ticket after expiration with a KeytabCredentialsService configured (same behaviour with the principal and password configured directly in the controller service). The same KeytabCredentialsService works ok with Hive and HBase clients configured in the same NiFi cluster. 
> Note that the same configuration works ok in version 1.11 (the error started to appear after upgrading from 1.11 to 1.12). 
> After 24 hours (the ticket renewal period in our case), the following error appears using HBase_2_ClientServices + HBase_2_ClientMapCacheService: 
> {code:java}
> 2020-09-17 09:00:27,014 ERROR [Relogin service.Chore.1] org.apache.hadoop.hbase.AuthUtil Got exception while trying to refresh credentials: loginUserFromKeyTab must be done first
> java.io.IOException: loginUserFromKeyTab must be done first
>     at org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1194)
>     at org.apache.hadoop.security.UserGroupInformation.checkTGTAndReloginFromKeytab(UserGroupInformation.java:1125)
>     at org.apache.hadoop.hbase.AuthUtil$1.chore(AuthUtil.java:206)
>     at org.apache.hadoop.hbase.ScheduledChore.run(ScheduledChore.java:186)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
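> For context on the "loginUserFromKeyTab must be done first" message: Hadoop's UserGroupInformation only allows reloginFromKeytab()/checkTGTAndReloginFromKeytab() on a login that was originally created from a keytab. A minimal sketch of the pattern those methods expect (the principal and keytab path here are placeholders, not from this setup):
> {code:java}
> import org.apache.hadoop.security.UserGroupInformation;
> 
> public class KeytabReloginSketch {
>     public static void main(String[] args) throws Exception {
>         String principal = "nifi/host@EXAMPLE.COM";          // placeholder principal
>         String keytab = "/etc/security/keytabs/nifi.keytab"; // placeholder path
> 
>         // The relogin methods only work on a UGI created from a keytab; otherwise
>         // Hadoop throws "loginUserFromKeyTab must be done first", as in the trace above.
>         UserGroupInformation.loginUserFromKeytab(principal, keytab);
> 
>         UserGroupInformation ugi = UserGroupInformation.getLoginUser();
>         // Safe to call periodically; it only re-logs in when the TGT is close to expiry.
>         ugi.checkTGTAndReloginFromKeytab();
>     }
> }
> {code}
> So the chore failing with that message suggests that the login it sees at relogin time is not the keytab-based one created by the controller service.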
>  
> With HBase_1_1_2_ClientServices + HBase_1_1_2_ClientMapCacheService, the following error appears: 
>  
> {code:java}
> 2020-09-22 12:18:37,184 WARN [hconnection-0x55d9d8d1-shared--pool3-t769] o.a.hadoop.hbase.ipc.AbstractRpcClient Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
> 2020-09-22 12:18:37,197 ERROR [hconnection-0x55d9d8d1-shared--pool3-t769] o.a.hadoop.hbase.ipc.AbstractRpcClient SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
> javax.security.sasl.SaslException: GSS initiate failed
>     at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
>     at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:612)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:157)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:738)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:735)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:735)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:897)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:866)
>     at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1208)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
>     at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:328)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.multi(ClientProtos.java:32879)
>     at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:128)
>     at org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:53)
>     at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
>     at org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:723)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
> Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
> {code}
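> Since this trace ends in "Failed to find any Kerberos tgt" and suggests running kinit, one way to rule out the keytab itself is a plain JAAS login with it outside NiFi. A minimal sketch (principal and keytab path are placeholders):
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> import javax.security.auth.login.AppConfigurationEntry;
> import javax.security.auth.login.Configuration;
> import javax.security.auth.login.LoginContext;
> 
> public class KeytabLoginCheck {
>     public static void main(String[] args) throws Exception {
>         String principal = "nifi/host@EXAMPLE.COM";          // placeholder
>         String keytab = "/etc/security/keytabs/nifi.keytab"; // placeholder
> 
>         Map<String, String> options = new HashMap<>();
>         options.put("useKeyTab", "true");
>         options.put("keyTab", keytab);
>         options.put("principal", principal);
>         options.put("storeKey", "true");
>         options.put("doNotPrompt", "true");
> 
>         Configuration config = new Configuration() {
>             @Override
>             public AppConfigurationEntry[] getAppConfigurationEntry(String name) {
>                 return new AppConfigurationEntry[] {
>                         new AppConfigurationEntry(
>                                 "com.sun.security.auth.module.Krb5LoginModule",
>                                 AppConfigurationEntry.LoginModuleControlFlag.REQUIRED,
>                                 options)
>                 };
>             }
>         };
> 
>         // If this throws a LoginException, the keytab or principal themselves are broken;
>         // if it succeeds every time, a TGT can be obtained and the failure is in relogin.
>         LoginContext context = new LoginContext("keytab-check", null, null, config);
>         context.login();
>         System.out.println("Login OK, subject: " + context.getSubject());
>         context.logout();
>     }
> }
> {code}
> If that login keeps working after 24 hours, the keytab is fine and the problem is in how the client service re-logs in once the first ticket lifetime ends, which matches the behaviour described above.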
>  
> Environment: Apache NiFi 1.12, RHEL 7.7, openjdk version "1.8.0_222-ea"
> Regards!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)