Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2016/09/29 15:10:21 UTC

[jira] [Resolved] (SPARK-17722) YarnScheduler: Initial job has not accepted any resources

     [ https://issues.apache.org/jira/browse/SPARK-17722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-17722.
-------------------------------
    Resolution: Not A Problem

> YarnScheduler: Initial job has not accepted any resources
> ---------------------------------------------------------
>
>                 Key: SPARK-17722
>                 URL: https://issues.apache.org/jira/browse/SPARK-17722
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Partha Pratim Ghosh
>
> Connected to Spark in yarn-client mode from a Java application in Eclipse. When a job is submitted, the driver repeatedly logs the following - 
> YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources. The request reaches the Hadoop cluster scheduler and the job appears in the Spark UI, but the UI shows that no task has been assigned to it.
> The same code runs successfully via spark-submit, after removing the following lines - 
> System.setProperty("java.security.krb5.conf", "C:\\xxx\\krb5.conf");
>
> org.apache.hadoop.conf.Configuration conf =
> 	new org.apache.hadoop.conf.Configuration();
> conf.set("hadoop.security.authentication", "kerberos");
> UserGroupInformation.setConfiguration(conf);
> The full code is the following - 
> import org.apache.hadoop.security.UserGroupInformation;
> import org.apache.spark.SparkConf;
> import org.apache.spark.api.java.JavaRDD;
> import org.apache.spark.api.java.JavaSparkContext;
> import org.apache.spark.sql.DataFrame;
> import org.apache.spark.sql.SQLContext;
> public class TestConnectivity {
>
> 	public static void main(String[] args) {
> 		System.setProperty("java.security.krb5.conf", "C:\\xxx\\krb5.conf");
>
> 		org.apache.hadoop.conf.Configuration conf =
> 			new org.apache.hadoop.conf.Configuration();
> 		conf.set("hadoop.security.authentication", "kerberos");
> 		UserGroupInformation.setConfiguration(conf);
>
> 		SparkConf config = new SparkConf().setAppName("Test Spark");
> 		config.setMaster("yarn-client");
> 		config.set("spark.dynamicAllocation.enabled", "false");
> 		config.set("spark.executor.memory", "2g");
> 		config.set("spark.executor.instances", "1");
> 		config.set("spark.executor.cores", "2");
> 		//config.set("spark.driver.memory", "2g");
> 		//config.set("spark.driver.cores", "1");
> 		/*config.set("spark.executor.am.memory", "2g");
> 		config.set("spark.executor.am.cores", "2");*/
> 		config.set("spark.cores.max", "4");
> 		config.set("yarn.nodemanager.resource.cpu-vcores", "4");
> 		config.set("spark.yarn.queue", "root.root");
> 		/*config.set("spark.deploy.defaultCores", "2");
> 		config.set("spark.task.cpus", "2");*/
> 		config.set("spark.yarn.jar", "file:/C:/xxx/spark-assembly_2.10-1.6.0-cdh5.7.1.jar");
>
> 		JavaSparkContext sc = new JavaSparkContext(config);
> 		SQLContext sqlcontext = new SQLContext(sc);
> 		JavaRDD<String> logData = sc.textFile("sparkexamples/Employee.json").cache();
> 		DataFrame df = sqlcontext.jsonRDD(logData);
>
> 		df.show();
> 		df.printSchema();
>
> 		//UserGroupInformation.setConfiguration(conf);
> 	}
> }
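
A "Not A Problem" resolution for this message usually means YARN simply could not grant the requested containers (queue capacity, vcore limits, or yarn.scheduler.maximum-allocation-mb). As a rough sanity check, note that Spark 1.6 on YARN requests each executor container at spark.executor.memory plus an overhead of max(384 MB, 10% of the executor memory). The sketch below compares that request against a hypothetical cluster limit (the 2048 MB maximum is an assumed value; check your own yarn.scheduler.maximum-allocation-mb):

```java
public class ResourceCheck {
    // Spark 1.6 on YARN asks for spark.executor.memory plus an off-heap
    // overhead of max(384 MB, 10% of the executor memory) per container.
    public static long containerMb(long executorMb) {
        return executorMb + Math.max(384L, (long) (executorMb * 0.10));
    }

    public static void main(String[] args) {
        long executorMb = 2048; // spark.executor.memory=2g, as in the report
        long yarnMaxMb = 2048;  // hypothetical yarn.scheduler.maximum-allocation-mb

        long needed = containerMb(executorMb);
        System.out.println("Each executor container needs " + needed
                + " MB; YARN allows " + yarnMaxMb + " MB");
        System.out.println(needed <= yarnMaxMb
                ? "Containers fit; look at queue capacity and vcore limits instead"
                : "Containers can never be allocated; the job waits forever");
    }
}
```

If the request can never be satisfied, the application stays pending and the driver logs exactly the "Initial job has not accepted any resources" warning above.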



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org