Posted to solr-user@lucene.apache.org by Andrew Bumstead <an...@bigdatapartnership.com> on 2016/01/08 17:51:54 UTC

Kerberos ticket not renewing when storing index on Kerberized HDFS

Hello,

I have Solr Cloud configured to store its index files on a Kerberized HDFS
(I followed the documentation at
https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS) and
have been able to index some documents, with the files written to HDFS as
expected. However, some time after starting, Solr becomes unable to connect
to HDFS because it no longer has a valid Kerberos TGT. The timing is
consistent with my default Kerberos ticket lifetime of 24 hours, so it
appears that Solr is not renewing its Kerberos ticket upon expiry.
Restarting Solr resolves the issue for another 24 hours.

Is there any configuration I can add to make Solr renew its ticket
automatically, or is this an issue with Solr?
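
For reference, here is the kind of explicit re-login I would expect an
application to need, sketched against Hadoop's public UserGroupInformation
API. The class name and the one-hour interval are mine and purely
illustrative; as far as I know Solr does not expose anything like this as
configuration.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabReloginSketch {
    public static void main(String[] args) throws Exception {
        // Mirror the Kerberos settings from the HdfsDirectoryFactory config.
        Configuration conf = new Configuration();
        conf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
            "solr/sandbox.hortonworks.com@HORTONWORKS.COM",
            "/etc/solr/conf/solr.keytab");

        // Re-check the TGT well inside the 24-hour ticket lifetime.
        // checkTGTAndReloginFromKeytab() is a no-op while the ticket is
        // still fresh, and re-logins from the keytab as expiry approaches.
        ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                try {
                    UserGroupInformation.getLoginUser()
                        .checkTGTAndReloginFromKeytab();
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }, 1, 1, TimeUnit.HOURS);
    }
}

A cron job running kinit from the same keytab at a similar interval should
achieve the same effect from outside the JVM.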

The following is the stack trace I am getting in Solr.

java.io.IOException: Failed on local exception: java.io.IOException: Couldn't setup connection for solr/sandbox.hortonworks.com@HORTONWORKS.COM to sandbox.hortonworks.com/10.0.2.15:8020; Host Details : local host is: "sandbox.hortonworks.com/10.0.2.15"; destination host is: "sandbox.hortonworks.com":8020;
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772)
        at org.apache.hadoop.ipc.Client.call(Client.java:1472)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy10.renewLease(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.renewLease(ClientNamenodeProtocolTranslatorPB.java:571)
        at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy11.renewLease(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:879)
        at org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:417)
        at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:442)
        at org.apache.hadoop.hdfs.LeaseRenewer.access$700(LeaseRenewer.java:71)
        at org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:298)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Couldn't setup connection for solr/sandbox.hortonworks.com@HORTONWORKS.COM to sandbox.hortonworks.com/10.0.2.15:8020
        at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:672)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:643)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:730)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
        at org.apache.hadoop.ipc.Client.call(Client.java:1438)
        ... 16 more
Caused by: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
        at org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:413)
        at org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:553)
        at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:368)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:722)
        at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:718)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:717)
        ... 19 more
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
        ... 28 more


This is my collection configuration.

<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
    <str name="solr.hdfs.home">hdfs://sandbox.hortonworks.com/user/solr</str>
    <str name="solr.hdfs.confdir">/usr/hdp/current/hadoop-client/conf</str>
    <bool name="solr.hdfs.blockcache.enabled">true</bool>
    <int name="solr.hdfs.blockcache.slab.count">1</int>
    <bool name="solr.hdfs.blockcache.direct.memory.allocation">false</bool>
    <int name="solr.hdfs.blockcache.blocksperbank">16384</int>
    <bool name="solr.hdfs.blockcache.read.enabled">true</bool>
    <bool name="solr.hdfs.blockcache.write.enabled">false</bool>
    <bool name="solr.hdfs.nrtcachingdirectory.enable">true</bool>
    <int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb">16</int>
    <int name="solr.hdfs.nrtcachingdirectory.maxcachedmb">192</int>
    <bool name="solr.hdfs.security.kerberos.enabled">true</bool>
    <str name="solr.hdfs.security.kerberos.keytabfile">/etc/solr/conf/solr.keytab</str>
    <str name="solr.hdfs.security.kerberos.principal">solr/sandbox.hortonworks.com@HORTONWORKS.COM</str>
</directoryFactory>

Thanks,

Andrew Bumstead


Re: Kerberos ticket not renewing when storing index on Kerberized HDFS

Posted by Andrew Bumstead <an...@bigdatapartnership.com>.
Thanks Ishan, I've raised a JIRA for it.


Re: Kerberos ticket not renewing when storing index on Kerberized HDFS

Posted by Ishan Chattopadhyaya <ic...@gmail.com>.
I'm not sure how reliably renewals are handled for Kerberized HDFS, but
here's my 10-15 minute analysis.
It looks to me like the auto-renewal thread is never spawned for this kind
of login [0]: that thread relies on kinit, and it is only started for kinit
(ticket cache) logins, not keytab logins.
I'm also not sure whether a login configuration with renewTGT is sufficient
(it seems to be passed in by default, unless a JAAS config with
renewTGT=false is explicitly supplied). Per the last comments from Devaraj
and Owen on [1], kinit-based logins have worked more reliably.

If you can rule out any setup issues, I suggest you file a JIRA; someone who
has worked on the HdfsDirectoryFactory would be able to suggest something
better.
Thanks,
Ishan

[0] - http://grepcode.com/file/repo1.maven.org/maven2/org.apache.hadoop/hadoop-common/2.7.1/org/apache/hadoop/security/UserGroupInformation.java#UserGroupInformation.spawnAutoRenewalThreadForUserCreds%28%29

[1] - https://issues.apache.org/jira/browse/HADOOP-6656
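
To make [0] concrete, here is a condensed paraphrase of that method's
guard, from memory of the 2.7.1 source; it is simplified and the comments
are mine, so check the link for the real thing:

// Paraphrase of UserGroupInformation.spawnAutoRenewalThreadForUserCreds(),
// hadoop-common 2.7.1 (see [0]); simplified, comments mine.
private void spawnAutoRenewalThreadForUserCreds() {
    if (!isSecurityEnabled()
        || user.getAuthenticationMethod() != AuthenticationMethod.KERBEROS
        || isKeytab) {
        return; // keytab logins, like Solr's, never get a renewal thread
    }
    // Otherwise spawn a daemon thread that sleeps until the ticket's
    // refresh time and then renews it by running the external
    // "kinit -R" command.
}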
