Posted to issues@spark.apache.org by "Tao Wang (JIRA)" <ji...@apache.org> on 2015/06/27 04:44:04 UTC

[jira] [Created] (SPARK-8676) After TGT expires, Thrift Server gets "No valid credentials" exception

Tao Wang created SPARK-8676:
-------------------------------

             Summary: After TGT expires, Thrift Server gets "No valid credentials" exception
                 Key: SPARK-8676
                 URL: https://issues.apache.org/jira/browse/SPARK-8676
             Project: Spark
          Issue Type: Bug
          Components: SQL
            Reporter: Tao Wang


I ran the Thrift Server on secure Hadoop, with "spark.yarn.keytab" and "spark.yarn.principal" configured in spark-defaults.conf, and "hive.server2.authentication.kerberos.principal" and "hive.server2.authentication.kerberos.keytab" configured in hive-site.xml. After the ticket (TGT) obtained from Kerberos with that principal expires, the server throws the exception below:

{code}
2015-06-26 19:16:32,411 | WARN  | [LeaseRenewer:spark@hacluster, clients=[DFSClient_NONMAPREDUCE_-809515000_1], created at java.lang.Throwable: TRACE
	at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.<init>(LeaseRenewer.java:206)
	at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.<init>(LeaseRenewer.java:75)
	at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$Factory.get(LeaseRenewer.java:147)
	at org.apache.hadoop.hdfs.client.impl.LeaseRenewer$Factory.access$100(LeaseRenewer.java:94)
	at org.apache.hadoop.hdfs.client.impl.LeaseRenewer.getInstance(LeaseRenewer.java:84)
	at org.apache.hadoop.hdfs.DFSClient.getLeaseRenewer(DFSClient.java:480)
	at org.apache.hadoop.hdfs.DFSClient.beginFileLease(DFSClient.java:486)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1361)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1288)
	at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:486)
	at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:482)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:482)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:388)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:365)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
	at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
	at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:232)
	at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:288)
	at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:559)
	at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:115)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:58)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:141)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:497)
	at org.apache.spark.sql.hive.thriftserver.SparkSQLEnv$.init(SparkSQLEnv.scala:51)
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2$.main(HiveThriftServer2.scala:73)
	at org.apache.spark.sql.hive.thriftserver.HiveThriftServer2.main(HiveThriftServer2.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:666)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:172)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:195)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:114)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
] | Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] | org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:693)
{code}
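
For reference, the configuration described above amounts to something like the following. The principal names and keytab paths are placeholders for illustration, not the reporter's actual values:

{code}
# spark-defaults.conf (placeholder values)
spark.yarn.keytab     /etc/security/keytabs/spark.keytab
spark.yarn.principal  spark/_HOST@EXAMPLE.COM
{code}

{code}
<!-- hive-site.xml (placeholder values) -->
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/etc/security/keytabs/hive.keytab</value>
</property>
{code}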


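The trace shows the HDFS client (here the LeaseRenewer) failing its SASL/GSSAPI handshake once the initial TGT has lapsed, because the long-running server never re-logs in from the keytab. As a minimal sketch of the kind of renewal a long-lived service needs (not the actual fix committed for this issue; the principal, keytab path, and interval are placeholder assumptions), Hadoop's UserGroupInformation API can be used like this:

{code}
import java.util.concurrent.{Executors, TimeUnit}
import org.apache.hadoop.security.UserGroupInformation

// Hypothetical sketch: keep a long-running service's Kerberos credentials fresh.
object KeytabRelogin {
  def main(args: Array[String]): Unit = {
    val principal = "spark/_HOST@EXAMPLE.COM"            // placeholder
    val keytab    = "/etc/security/keytabs/spark.keytab" // placeholder

    // Log in from the keytab instead of relying on a ticket cache that expires.
    UserGroupInformation.loginUserFromKeytab(principal, keytab)

    // Periodically re-check the TGT and re-login before it expires, so that
    // long-lived RPC clients (e.g. the LeaseRenewer above) keep valid credentials.
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    scheduler.scheduleAtFixedRate(new Runnable {
      override def run(): Unit =
        UserGroupInformation.getLoginUser.checkTGTAndReloginFromKeytab()
    }, 1, 1, TimeUnit.HOURS)
  }
}
{code}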

