Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:33:39 UTC

[jira] [Resolved] (SPARK-16298) spark.yarn.principal not working

     [ https://issues.apache.org/jira/browse/SPARK-16298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-16298.
----------------------------------
    Resolution: Incomplete

> spark.yarn.principal not working
> --------------------------------
>
>                 Key: SPARK-16298
>                 URL: https://issues.apache.org/jira/browse/SPARK-16298
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Partha Pratim Ghosh
>            Priority: Major
>              Labels: bulk-closed
>
> I am creating a Spark configuration with spark.yarn.principal and spark.yarn.keytab set. However, Spark does not authenticate to the underlying HDFS with that principal and keytab; instead it appears to pick up credentials from the ticket cache. Without this behaviour, spark.yarn.principal and spark.yarn.keytab do not seem very useful.
> Sample code:
> SparkConf conf = new SparkConf()
>         .setMaster("yarn-client")
>         .setAppName("spark-test")
>         .set("spark.repl.class.uri", classServerUri);
> conf.set("spark.yarn.principal", principal);
> conf.set("spark.yarn.keytab", keytab);
> conf.setSparkHome(sparkBasePath);
>
> if (execUri != null) {
>     conf.set("spark.executor.uri", execUri);
> }
> conf.set("spark.executor.memory", "8g");
> conf.set("spark.scheduler.mode", "FAIR");
>
> SparkContext sparkContext = new SparkContext(conf);
> Please advise how this can be achieved.
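[Editorial note: a common workaround for this class of problem is to log in to Kerberos explicitly through Hadoop's UserGroupInformation API before constructing the SparkContext, so that HDFS calls use the keytab identity rather than the ticket cache. A minimal sketch follows; it assumes hadoop-common is on the classpath and a reachable KDC, and the principal and keytab path shown are hypothetical placeholders, not values from the report above.]

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginSketch {
    public static void main(String[] args) throws Exception {
        // Tell the Hadoop client libraries to use Kerberos authentication.
        Configuration hadoopConf = new Configuration();
        hadoopConf.set("hadoop.security.authentication", "kerberos");
        UserGroupInformation.setConfiguration(hadoopConf);

        // Log in from the keytab so subsequent HDFS access uses this
        // identity instead of whatever happens to be in the ticket cache.
        UserGroupInformation.loginUserFromKeytab(
                "user@EXAMPLE.COM",            // hypothetical principal
                "/etc/security/user.keytab");  // hypothetical keytab path

        // ... build the SparkConf and create the SparkContext here ...
    }
}
```

This does not change how Spark itself handles spark.yarn.principal/spark.yarn.keytab; it only ensures the driver-side Hadoop client is logged in with the intended identity.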



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org