Posted to issues@spark.apache.org by "Partha Pratim Ghosh (JIRA)" <ji...@apache.org> on 2016/06/29 13:27:57 UTC

[jira] [Created] (SPARK-16298) spark.yarn.principal not working

Partha Pratim Ghosh created SPARK-16298:
-------------------------------------------

             Summary: spark.yarn.principal not working
                 Key: SPARK-16298
                 URL: https://issues.apache.org/jira/browse/SPARK-16298
             Project: Spark
          Issue Type: Bug
            Reporter: Partha Pratim Ghosh


I am creating a Spark configuration with spark.yarn.principal and spark.yarn.keytab set. However, Spark is not authenticating to the underlying HDFS with that principal and keytab; instead it appears to pick up credentials from the local Kerberos ticket cache. Without HDFS authentication, setting spark.yarn.principal and spark.yarn.keytab does not seem useful.

Sample code - 

SparkConf conf = new SparkConf()
        .setMaster("yarn-client")
        .setAppName("spark-test")
        .set("spark.repl.class.uri", classServerUri);
conf.set("spark.yarn.principal", principal);
conf.set("spark.yarn.keytab", keytab);
conf.setSparkHome(sparkBasePath);

if (execUri != null) {
    conf.set("spark.executor.uri", execUri);
}

conf.set("spark.executor.memory", "8g");
conf.set("spark.scheduler.mode", "FAIR");
SparkContext sparkContext = new SparkContext(conf);

Please advise how this can be achieved.
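As a possible workaround (a sketch only, not verified against this setup): logging in explicitly through Hadoop's UserGroupInformation API before constructing the SparkContext may cause HDFS clients to use the keytab rather than the ticket cache. The principal and keytab variables below are assumed to be the same values passed to the SparkConf above.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLogin {

    /**
     * Sketch of a workaround: perform an explicit Kerberos login from the
     * keytab before any SparkContext (and hence any HDFS client) is created,
     * so the driver does not fall back to the local ticket cache.
     *
     * principal and keytab are assumed to be the same values set on the
     * SparkConf via spark.yarn.principal and spark.yarn.keytab.
     */
    public static void loginFromKeytab(String principal, String keytab) throws IOException {
        Configuration hadoopConf = new Configuration();
        hadoopConf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(hadoopConf);

        // Logs in the current process as the given principal using the keytab.
        UserGroupInformation.loginUserFromKeytab(principal, keytab);
    }
}
```

Call KeytabLogin.loginFromKeytab(principal, keytab) before "new SparkContext(conf)"; since this requires a reachable KDC and a valid keytab, it cannot be exercised outside a Kerberized cluster.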



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
