Posted to issues@spark.apache.org by "Jack Jiang (JIRA)" <ji...@apache.org> on 2016/07/21 02:18:20 UTC

[jira] [Created] (SPARK-16659) use Maven project to submit spark application via yarn-client

Jack Jiang created SPARK-16659:
----------------------------------

             Summary: use Maven project to submit spark application via yarn-client
                 Key: SPARK-16659
                 URL: https://issues.apache.org/jira/browse/SPARK-16659
             Project: Spark
          Issue Type: Question
            Reporter: Jack Jiang


I want to use Spark SQL to execute Hive SQL in my Maven project. Here is the main code:
		import org.apache.spark.SparkConf;
		import org.apache.spark.SparkContext;
		import org.apache.spark.sql.hive.HiveContext;

		// Point Hadoop at the local winutils directory (client runs on Windows)
		System.setProperty("hadoop.home.dir",
				"D:\\hadoop-common-2.2.0-bin-master");
		SparkConf sparkConf = new SparkConf()
				.setAppName("test").setMaster("yarn-client");
		// .set("hive.metastore.uris", "thrift://172.30.115.59:9083");
		SparkContext ctx = new SparkContext(sparkConf);
		// ctx.addJar("lib/hive-hbase-handler-0.14.0.2.2.6.0-2800.jar");
		HiveContext sqlContext = new HiveContext(ctx);
		// List the tables visible through the Hive metastore
		String[] tables = sqlContext.tableNames();
		for (String tablename : tables) {
			System.out.println("tablename : " + tablename);
		}
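What I ultimately want to do through the HiveContext is run a plain Hive query, roughly like the sketch below (the table name is only a placeholder, not a real table in my metastore):
		// Placeholder query just to illustrate the intended usage
		org.apache.spark.sql.DataFrame result =
				sqlContext.sql("SELECT * FROM my_hive_table LIMIT 10");
		result.show();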
When I run it, I get this error:
10:16:17,496  INFO Client:59 - 
	 client token: N/A
	 diagnostics: Application application_1468409747983_0280 failed 2 times due to AM Container for appattempt_1468409747983_0280_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://hadoop003.icccuat.com:8088/proxy/application_1468409747983_0280/Then, click on links to logs of each attempt.
Diagnostics: File file:/C:/Users/uatxj990267/AppData/Local/Temp/spark-8874c486-893d-4ac3-a088-48e4cdb484e1/__spark_conf__9007071161920501082.zip does not exist
java.io.FileNotFoundException: File file:/C:/Users/uatxj990267/AppData/Local/Temp/spark-8874c486-893d-4ac3-a088-48e4cdb484e1/__spark_conf__9007071161920501082.zip does not exist
	at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:608)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:821)
	at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:598)
	at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:414)
	at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
	at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
	at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
	at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

Failing this attempt. Failing the application.
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1469067373412
	 final status: FAILED
	 tracking URL: http://hadoop003.icccuat.com:8088/cluster/app/application_1468409747983_0280
	 user: uatxj990267
10:16:17,496 ERROR SparkContext:96 - Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:123)
	at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:63)
	at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:523)
	at com.huateng.test.SparkSqlDemo.main(SparkSqlDemo.java:33)
But when I change setMaster("yarn-client") in the code above to setMaster("local[2]"), it runs fine. What is wrong with the yarn-client setup? Can anyone help me?
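That is, with only this one change the same application works:
		// Same code as above, only the master URL changed; this runs successfully
		SparkConf sparkConf = new SparkConf()
				.setAppName("test").setMaster("local[2]");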


