Posted to user@spark.apache.org by 另一片天 <95...@qq.com> on 2016/06/22 06:10:20 UTC

Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2   /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
Warning: Local jar /user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar does not exist, skipping.
java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

I get this error at once.
------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Date: Wednesday, June 22, 2016, 2:04 PM
To: "另一片天" <95...@qq.com>
Cc: "user" <us...@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher



How about supplying the jar directly in spark-submit -

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 512m \
--num-executors 2 \
--executor-memory 512m \
--executor-cores 2 \
/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar


On Wed, Jun 22, 2016 at 3:59 PM, 另一片天 <95...@qq.com> wrote:
I configured this parameter in spark-defaults.conf:
spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar


Then I ran ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2 10:




Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher  

But if I don't configure that parameter, there is no error. Why? Is that parameter only for avoiding uploading the resource file (jar package)?

Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by Yash Sharma <ya...@gmail.com>.
I meant: try having the full path in the spark-submit command -

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 512m \
--num-executors 2 \
--executor-memory 512m \
--executor-cores 2 \
hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar




Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by 另一片天 <95...@qq.com>.
shihj@master:~/workspace/hadoop-2.6.4$ bin/hadoop fs -ls hdfs://master:9000/user/shihj/spark_lib
Found 1 items
-rw-r--r--   3 shihj supergroup  118955968 2016-06-22 10:24 hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar
shihj@master:~/workspace/hadoop-2.6.4$ 

So the jar can be found on all nodes.




------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Date: Wednesday, June 22, 2016, 2:18 PM
To: "Saisai Shao" <sa...@gmail.com>
Cc: "另一片天" <95...@qq.com>; "user" <us...@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher



Try providing the jar with the hdfs prefix. It's probably just because it's not able to find the jar on all nodes.

hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar



Is it able to run in local mode?


On Wed, Jun 22, 2016 at 4:14 PM, Saisai Shao <sa...@gmail.com> wrote:
spark.yarn.jar (default: none) - The location of the Spark jar file, in case overriding the default location is desired. By default, Spark on YARN will use a Spark jar installed locally, but the Spark jar can also be in a world-readable location on HDFS. This allows YARN to cache it on nodes so that it doesn't need to be distributed each time an application runs. To point to a jar on HDFS, for example, set this configuration to hdfs:///some/path.


spark.yarn.jar is used for the Spark runtime system jar, which is the Spark assembly jar, not the application jar (the examples assembly jar). So in your case you uploaded the examples assembly jar into HDFS; the Spark system classes are not packed in it, so ExecutorLauncher cannot be found.


Thanks
Saisai
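
For example, a minimal sketch of the fix, assuming the standard spark-1.6.1-bin-hadoop2.6 layout and the HDFS paths already used in this thread:

# Upload the Spark assembly jar (not the examples jar) to HDFS
bin/hadoop fs -put /usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar hdfs://master:9000/user/shihj/spark_lib/

# Then point spark.yarn.jar at it in spark-defaults.conf
spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-assembly-1.6.1-hadoop2.6.0.jar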



Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by Yash Sharma <ya...@gmail.com>.
Ok, we moved to the next level :)

Could you share more info on the error? You could get the logs with this command -

yarn logs -applicationId application_1466568126079_0006
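
For this command to return anything, log aggregation has to be enabled on the cluster; a minimal sketch of the relevant yarn-site.xml property (assuming defaults otherwise):

<property>
  <name>yarn.log-aggregation-enable</name>
  <value>true</value>
</property>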



Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by 另一片天 <95...@qq.com>.
shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$ ./bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master yarn-cluster \
> --driver-memory 512m \
> --num-executors 2 \
> --executor-memory 512m \
> --executor-cores 2 \
> hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar
16/06/22 14:36:10 INFO RMProxy: Connecting to ResourceManager at master/192.168.20.137:8032
16/06/22 14:36:10 INFO Client: Requesting a new application from cluster with 2 NodeManagers
16/06/22 14:36:10 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
16/06/22 14:36:10 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/06/22 14:36:10 INFO Client: Setting up container launch context for our AM
16/06/22 14:36:10 INFO Client: Setting up the launch environment for our AM container
16/06/22 14:36:10 INFO Client: Preparing resources for our AM container
Java HotSpot(TM) Server VM warning: You have loaded library /tmp/libnetty-transport-native-epoll3453573359049032130.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
16/06/22 14:36:11 INFO Client: Source and destination file systems are the same. Not copying hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar
16/06/22 14:36:11 WARN Client: Resource hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar added multiple times to distributed cache.
16/06/22 14:36:11 INFO Client: Uploading resource file:/tmp/spark-cf23c5a3-d3fb-4f98-9cd2-bbf268766bbc/__spark_conf__7248368026523433025.zip -> hdfs://master:9000/user/shihj/.sparkStaging/application_1466568126079_0006/__spark_conf__7248368026523433025.zip
16/06/22 14:36:13 INFO SecurityManager: Changing view acls to: shihj
16/06/22 14:36:13 INFO SecurityManager: Changing modify acls to: shihj
16/06/22 14:36:13 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(shihj); users with modify permissions: Set(shihj)
16/06/22 14:36:13 INFO Client: Submitting application 6 to ResourceManager
16/06/22 14:36:13 INFO YarnClientImpl: Submitted application application_1466568126079_0006
16/06/22 14:36:14 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:14 INFO Client: 
	 client token: N/A
	 diagnostics: N/A
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1466577373576
	 final status: UNDEFINED
	 tracking URL: http://master:8088/proxy/application_1466568126079_0006/
	 user: shihj
16/06/22 14:36:15 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:16 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:17 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:18 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:19 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:20 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:21 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:22 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:23 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:24 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:25 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:26 INFO Client: Application report for application_1466568126079_0006 (state: ACCEPTED)
16/06/22 14:36:27 INFO Client: Application report for application_1466568126079_0006 (state: FAILED)
16/06/22 14:36:27 INFO Client: 
	 client token: N/A
	 diagnostics: Application application_1466568126079_0006 failed 2 times due to AM Container for appattempt_1466568126079_0006_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://master:8088/proxy/application_1466568126079_0006/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466568126079_0006_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)




Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
	 ApplicationMaster host: N/A
	 ApplicationMaster RPC port: -1
	 queue: default
	 start time: 1466577373576
	 final status: FAILED
	 tracking URL: http://master:8088/cluster/app/application_1466568126079_0006
	 user: shihj
16/06/22 14:36:27 INFO Client: Deleting staging directory .sparkStaging/application_1466568126079_0006
Exception in thread "main" org.apache.spark.SparkException: Application application_1466568126079_0006 finished with failed status
	at org.apache.spark.deploy.yarn.Client.run(Client.scala:1034)
	at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1081)
	at org.apache.spark.deploy.yarn.Client.main(Client.scala)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/06/22 14:36:27 INFO ShutdownHookManager: Shutdown hook called
16/06/22 14:36:27 INFO ShutdownHookManager: Deleting directory /tmp/spark-cf23c5a3-d3fb-4f98-9cd2-bbf268766bbc







------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Date: Wednesday, June 22, 2016, 2:34 PM
To: "另一片天" <95...@qq.com>
Cc: "Saisai Shao" <sa...@gmail.com>; "user" <us...@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher



Try with: --master yarn-cluster

On Wed, Jun 22, 2016 at 4:30 PM, 另一片天 <95...@qq.com> wrote:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2   hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
Warning: Skip remote jar hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar.
java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)







------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Date: Wednesday, June 22, 2016, 2:28 PM
To: "另一片天" <95...@qq.com>
Cc: "Saisai Shao" <sa...@gmail.com>; "user" <us...@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher





Or better, try the master as yarn-cluster:

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-cluster \
--driver-memory 512m \
--num-executors 2 \
--executor-memory 512m \
--executor-cores 2 \
hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar



On Wed, Jun 22, 2016 at 4:27 PM, 另一片天 <95...@qq.com> wrote:
Is it able to run in local mode?

What do you mean? Standalone mode?





Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by 另一片天 <95...@qq.com>.
Thanks, everyone, for your patient help. I changed the parameter

spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar

to

spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-assembly-1.6.1-hadoop2.6.0.jar

and then it runs well.

Thanks again, everyone.
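
A quick sanity check of which jar actually contains the launcher class (illustrative paths, assuming the standard binary-distribution layout):

jar tf lib/spark-assembly-1.6.1-hadoop2.6.0.jar | grep ExecutorLauncher   # lists org/apache/spark/deploy/yarn/ExecutorLauncher.class
jar tf lib/spark-examples-1.6.1-hadoop2.6.0.jar | grep ExecutorLauncher   # prints nothing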






------------------ Original Message ------------------
From: "另一片天" <95...@qq.com>
Date: Wednesday, June 22, 2016, 3:10 PM
To: "Yash Sharma" <ya...@gmail.com>
Cc: "Saisai Shao" <sa...@gmail.com>; "user" <us...@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher



Yes, it runs well:








shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$ ./bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master local[4] \
> lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
16/06/22 15:08:14 INFO SparkContext: Running Spark version 1.6.1
16/06/22 15:08:14 WARN SparkConf: 
SPARK_WORKER_INSTANCES was detected (set to '1').
This is deprecated in Spark 1.0+.


Please instead use:
 - ./spark-submit with --num-executors to specify the number of executors
 - Or set SPARK_EXECUTOR_INSTANCES
 - spark.executor.instances to configure the number of instances in the spark config.
        
16/06/22 15:08:15 INFO SecurityManager: Changing view acls to: shihj
16/06/22 15:08:15 INFO SecurityManager: Changing modify acls to: shihj
16/06/22 15:08:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(shihj); users with modify permissions: Set(shihj)
16/06/22 15:08:16 INFO Utils: Successfully started service 'sparkDriver' on port 43865.
16/06/22 15:08:16 INFO Slf4jLogger: Slf4jLogger started
16/06/22 15:08:16 INFO Remoting: Starting remoting
16/06/22 15:08:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.20.137:39308]
16/06/22 15:08:17 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 39308.
16/06/22 15:08:17 INFO SparkEnv: Registering MapOutputTracker
16/06/22 15:08:17 INFO SparkEnv: Registering BlockManagerMaster
16/06/22 15:08:17 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-3195b7f2-126d-4734-a681-6ec00727352a
16/06/22 15:08:17 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/06/22 15:08:17 INFO SparkEnv: Registering OutputCommitCoordinator
16/06/22 15:08:18 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/06/22 15:08:18 INFO SparkUI: Started SparkUI at http://192.168.20.137:4040
16/06/22 15:08:18 INFO HttpFileServer: HTTP File server directory is /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/httpd-961023ad-cc05-4e3e-b648-19581093df11
16/06/22 15:08:18 INFO HttpServer: Starting HTTP Server
16/06/22 15:08:18 INFO Utils: Successfully started service 'HTTP file server' on port 49924.
16/06/22 15:08:22 INFO SparkContext: Added JAR file:/usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar at http://192.168.20.137:49924/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1466579302122
16/06/22 15:08:22 INFO Executor: Starting executor ID driver on host localhost
16/06/22 15:08:22 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33520.
16/06/22 15:08:22 INFO NettyBlockTransferService: Server created on 33520
16/06/22 15:08:22 INFO BlockManagerMaster: Trying to register BlockManager
16/06/22 15:08:22 INFO BlockManagerMasterEndpoint: Registering block manager localhost:33520 with 511.1 MB RAM, BlockManagerId(driver, localhost, 33520)
16/06/22 15:08:22 INFO BlockManagerMaster: Registered BlockManager
16/06/22 15:08:23 INFO SparkContext: Starting job: reduce at SparkPi.scala:36
16/06/22 15:08:23 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 10 output partitions
16/06/22 15:08:23 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:36)
16/06/22 15:08:23 INFO DAGScheduler: Parents of final stage: List()
16/06/22 15:08:23 INFO DAGScheduler: Missing parents: List()
16/06/22 15:08:23 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
16/06/22 15:08:23 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
16/06/22 15:08:23 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1904.0 B, free 1904.0 B)
16/06/22 15:08:24 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1216.0 B, free 3.0 KB)
16/06/22 15:08:24 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:33520 (size: 1216.0 B, free: 511.1 MB)
16/06/22 15:08:24 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/06/22 15:08:24 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32)
16/06/22 15:08:24 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
16/06/22 15:08:24 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, partition 2,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, partition 3,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
16/06/22 15:08:24 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/06/22 15:08:24 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
16/06/22 15:08:24 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/06/22 15:08:24 INFO Executor: Fetching http://192.168.20.137:49924/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1466579302122
16/06/22 15:08:24 INFO Utils: Fetching http://192.168.20.137:49924/jars/spark-examples-1.6.1-hadoop2.6.0.jar to /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/userFiles-eff08530-4501-44c4-a6e8-9ac7709b2732/fetchFileTemp6247932809883110092.tmp
16/06/22 15:08:29 INFO Executor: Adding file:/tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/userFiles-eff08530-4501-44c4-a6e8-9ac7709b2732/spark-examples-1.6.1-hadoop2.6.0.jar to class loader
16/06/22 15:08:29 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 2.0 in stage 0.0 (TID 2). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 3.0 in stage 0.0 (TID 3). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, localhost, partition 4,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, localhost, partition 5,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 4.0 in stage 0.0 (TID 4)
16/06/22 15:08:29 INFO Executor: Running task 5.0 in stage 0.0 (TID 5)
16/06/22 15:08:29 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, localhost, partition 6,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 6.0 in stage 0.0 (TID 6)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 5690 ms on localhost (1/10)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 5652 ms on localhost (2/10)
16/06/22 15:08:29 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, localhost, partition 7,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 7.0 in stage 0.0 (TID 7)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 5688 ms on localhost (3/10)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 5695 ms on localhost (4/10)
16/06/22 15:08:29 INFO Executor: Finished task 4.0 in stage 0.0 (TID 4). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, localhost, partition 8,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 165 ms on localhost (5/10)
16/06/22 15:08:29 INFO Executor: Running task 8.0 in stage 0.0 (TID 8)
16/06/22 15:08:29 INFO Executor: Finished task 5.0 in stage 0.0 (TID 5). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 7.0 in stage 0.0 (TID 7). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, localhost, partition 9,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 9.0 in stage 0.0 (TID 9)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 184 ms on localhost (6/10)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 99 ms on localhost (7/10)
16/06/22 15:08:29 INFO Executor: Finished task 6.0 in stage 0.0 (TID 6). 1031 bytes result sent to driver
16/06/22 15:08:30 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 190 ms on localhost (8/10)
16/06/22 15:08:30 INFO Executor: Finished task 9.0 in stage 0.0 (TID 9). 1031 bytes result sent to driver
16/06/22 15:08:30 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 155 ms on localhost (9/10)
16/06/22 15:08:30 INFO Executor: Finished task 8.0 in stage 0.0 (TID 8). 1031 bytes result sent to driver
16/06/22 15:08:30 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 217 ms on localhost (10/10)
16/06/22 15:08:30 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 6.038 s
16/06/22 15:08:30 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/06/22 15:08:30 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 6.624021 s
Pi is roughly 3.142184
16/06/22 15:08:30 INFO SparkUI: Stopped Spark web UI at http://192.168.20.137:4040
16/06/22 15:08:30 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/06/22 15:08:30 INFO MemoryStore: MemoryStore cleared
16/06/22 15:08:30 INFO BlockManager: BlockManager stopped
16/06/22 15:08:30 INFO BlockManagerMaster: BlockManagerMaster stopped
16/06/22 15:08:30 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/06/22 15:08:31 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/06/22 15:08:31 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/06/22 15:08:31 INFO SparkContext: Successfully stopped SparkContext
16/06/22 15:08:31 INFO ShutdownHookManager: Shutdown hook called
16/06/22 15:08:31 INFO ShutdownHookManager: Deleting directory /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080
16/06/22 15:08:31 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/06/22 15:08:31 INFO ShutdownHookManager: Deleting directory /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/httpd-961023ad-cc05-4e3e-b648-19581093df11
shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$ 







------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Date: Wednesday, June 22, 2016, 3:06 PM
To: "另一片天" <95...@qq.com>
Cc: "Saisai Shao" <sa...@gmail.com>; "user" <us...@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher



I cannot get a lot of info from these logs, but it surely seems like a YARN setup issue. Did you try local mode to check if it works -

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master local[4] \
spark-examples-1.6.1-hadoop2.6.0.jar 10

Note - the jar is a local one 


On Wed, Jun 22, 2016 at 4:50 PM, 另一片天 <95...@qq.com> wrote:
 Application application_1466568126079_0006 failed 2 times due to AM Container for appattempt_1466568126079_0006_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://master:8088/proxy/application_1466568126079_0006/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466568126079_0006_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application. 



But the command gets an error:


shihj@master:~/workspace/hadoop-2.6.4$ yarn logs -applicationId application_1466568126079_0006
Usage: yarn [options]


yarn: error: no such option: -a
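
That usage message does not look like Hadoop's yarn script, so a different yarn executable is probably first on the PATH. A sketch of the likely fix, assuming the hadoop-2.6.4 installation shown above, is to call the Hadoop binary explicitly:

bin/yarn logs -applicationId application_1466568126079_0006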







------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Date: Wednesday, June 22, 2016, 2:46 PM
To: "另一片天" <95...@qq.com>
Cc: "Saisai Shao" <sa...@gmail.com>; "user" <us...@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher





Are you able to run anything else on the cluster? I suspect it's YARN that's not able to run the class. If you could just share the logs on pastebin, we could confirm that.

On Wed, Jun 22, 2016 at 4:43 PM, 另一片天 <95...@qq.com> wrote:
I want to avoid uploading the resource file (especially the jar package) each time, because it is very big and the application waits too long. Is there a good method?
That is why I configured that parameter, but it did not have the effect I wanted.




------------------ 原始邮件 ------------------
发件人: "Yash Sharma";<ya...@gmail.com>;
发送时间: 2016年6月22日(星期三) 下午2:34
收件人: "另一片天"<95...@qq.com>; 
抄送: "Saisai Shao"<sa...@gmail.com>; "user"<us...@spark.apache.org>; 
主题: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher





Try with : --master yarn-cluster

On Wed, Jun 22, 2016 at 4:30 PM, 另一片天 <95...@qq.com> wrote:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2   hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
Warning: Skip remote jar hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar.
java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)







------------------ 原始邮件 ------------------
发件人: "Yash Sharma";<ya...@gmail.com>;
发送时间: 2016年6月22日(星期三) 下午2:28
收件人: "另一片天"<95...@qq.com>; 
抄送: "Saisai Shao"<sa...@gmail.com>; "user"<us...@spark.apache.org>; 
主题: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher





Or better , try the master as yarn-cluster,

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-cluster \
--driver-memory 512m \
--num-executors 2 \
--executor-memory 512m \
--executor-cores 2 \
hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar



On Wed, Jun 22, 2016 at 4:27 PM, 另一片天 <95...@qq.com> wrote:
Is it able to run on local mode ?


what mean?? standalone mode ?




------------------ 原始邮件 ------------------
发件人: "Yash Sharma";<ya...@gmail.com>;
发送时间: 2016年6月22日(星期三) 下午2:18
收件人: "Saisai Shao"<sa...@gmail.com>; 
抄送: "另一片天"<95...@qq.com>; "user"<us...@spark.apache.org>; 
主题: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher





Try providing the jar with the hdfs prefix. Its probably just because its not able to find the jar on all nodes.

hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar



Is it able to run on local mode ?


On Wed, Jun 22, 2016 at 4:14 PM, Saisai Shao <sa...@gmail.com> wrote:
spark.yarn.jar(none)The location of the Spark jar file, in case overriding the default location is desired. By default, Spark on YARN will use a Spark jar installed locally, but the Spark jar can also be in a world-readable location on HDFS. This allows YARN to cache it on nodes so that it doesn't need to be distributed each time an application runs. To point to a jar on HDFS, for example, set this configuration to hdfs:///some/path.


spark.yarn.jar is used for the Spark run-time system jar, which is the Spark assembly jar, not the application jar (the examples assembly jar). So in your case you uploaded the examples assembly jar into HDFS; the Spark system classes are not packed in it, so ExecutorLauncher cannot be found.
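
A rough sketch of the fix (assuming the stock Spark 1.6.1 binary distribution, where the assembly jar ships under lib/; adjust names and paths to your install):

# upload the Spark *assembly* jar, not the examples jar
bin/hadoop fs -put /usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar hdfs://master:9000/user/shihj/spark_lib/

# spark-defaults.conf
spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-assembly-1.6.1-hadoop2.6.0.jar

# then pass the examples jar to spark-submit as a local path
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2 lib/spark-examples-1.6.1-hadoop2.6.0.jar 10

You can check which jar actually contains the launcher with something like:

jar tf lib/spark-assembly-1.6.1-hadoop2.6.0.jar | grep ExecutorLauncher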


Thanks
Saisai



Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by 另一片天 <95...@qq.com>.
Yes, it runs well:

shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$ ./bin/spark-submit \
> --class org.apache.spark.examples.SparkPi \
> --master local[4] \
> lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
16/06/22 15:08:14 INFO SparkContext: Running Spark version 1.6.1
16/06/22 15:08:14 WARN SparkConf: 
SPARK_WORKER_INSTANCES was detected (set to '1').
This is deprecated in Spark 1.0+.


Please instead use:
 - ./spark-submit with --num-executors to specify the number of executors
 - Or set SPARK_EXECUTOR_INSTANCES
 - spark.executor.instances to configure the number of instances in the spark config.
        
16/06/22 15:08:15 INFO SecurityManager: Changing view acls to: shihj
16/06/22 15:08:15 INFO SecurityManager: Changing modify acls to: shihj
16/06/22 15:08:15 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(shihj); users with modify permissions: Set(shihj)
16/06/22 15:08:16 INFO Utils: Successfully started service 'sparkDriver' on port 43865.
16/06/22 15:08:16 INFO Slf4jLogger: Slf4jLogger started
16/06/22 15:08:16 INFO Remoting: Starting remoting
16/06/22 15:08:17 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.20.137:39308]
16/06/22 15:08:17 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 39308.
16/06/22 15:08:17 INFO SparkEnv: Registering MapOutputTracker
16/06/22 15:08:17 INFO SparkEnv: Registering BlockManagerMaster
16/06/22 15:08:17 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-3195b7f2-126d-4734-a681-6ec00727352a
16/06/22 15:08:17 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
16/06/22 15:08:17 INFO SparkEnv: Registering OutputCommitCoordinator
16/06/22 15:08:18 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/06/22 15:08:18 INFO SparkUI: Started SparkUI at http://192.168.20.137:4040
16/06/22 15:08:18 INFO HttpFileServer: HTTP File server directory is /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/httpd-961023ad-cc05-4e3e-b648-19581093df11
16/06/22 15:08:18 INFO HttpServer: Starting HTTP Server
16/06/22 15:08:18 INFO Utils: Successfully started service 'HTTP file server' on port 49924.
16/06/22 15:08:22 INFO SparkContext: Added JAR file:/usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar at http://192.168.20.137:49924/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1466579302122
16/06/22 15:08:22 INFO Executor: Starting executor ID driver on host localhost
16/06/22 15:08:22 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 33520.
16/06/22 15:08:22 INFO NettyBlockTransferService: Server created on 33520
16/06/22 15:08:22 INFO BlockManagerMaster: Trying to register BlockManager
16/06/22 15:08:22 INFO BlockManagerMasterEndpoint: Registering block manager localhost:33520 with 511.1 MB RAM, BlockManagerId(driver, localhost, 33520)
16/06/22 15:08:22 INFO BlockManagerMaster: Registered BlockManager
16/06/22 15:08:23 INFO SparkContext: Starting job: reduce at SparkPi.scala:36
16/06/22 15:08:23 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:36) with 10 output partitions
16/06/22 15:08:23 INFO DAGScheduler: Final stage: ResultStage 0 (reduce at SparkPi.scala:36)
16/06/22 15:08:23 INFO DAGScheduler: Parents of final stage: List()
16/06/22 15:08:23 INFO DAGScheduler: Missing parents: List()
16/06/22 15:08:23 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32), which has no missing parents
16/06/22 15:08:23 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
16/06/22 15:08:23 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1904.0 B, free 1904.0 B)
16/06/22 15:08:24 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1216.0 B, free 3.0 KB)
16/06/22 15:08:24 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:33520 (size: 1216.0 B, free: 511.1 MB)
16/06/22 15:08:24 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/06/22 15:08:24 INFO DAGScheduler: Submitting 10 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at map at SparkPi.scala:32)
16/06/22 15:08:24 INFO TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
16/06/22 15:08:24 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, partition 2,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, partition 3,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:24 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
16/06/22 15:08:24 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/06/22 15:08:24 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
16/06/22 15:08:24 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
16/06/22 15:08:24 INFO Executor: Fetching http://192.168.20.137:49924/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1466579302122
16/06/22 15:08:24 INFO Utils: Fetching http://192.168.20.137:49924/jars/spark-examples-1.6.1-hadoop2.6.0.jar to /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/userFiles-eff08530-4501-44c4-a6e8-9ac7709b2732/fetchFileTemp6247932809883110092.tmp
16/06/22 15:08:29 INFO Executor: Adding file:/tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/userFiles-eff08530-4501-44c4-a6e8-9ac7709b2732/spark-examples-1.6.1-hadoop2.6.0.jar to class loader
16/06/22 15:08:29 INFO Executor: Finished task 0.0 in stage 0.0 (TID 0). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 2.0 in stage 0.0 (TID 2). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 3.0 in stage 0.0 (TID 3). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 1.0 in stage 0.0 (TID 1). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, localhost, partition 4,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, localhost, partition 5,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 4.0 in stage 0.0 (TID 4)
16/06/22 15:08:29 INFO Executor: Running task 5.0 in stage 0.0 (TID 5)
16/06/22 15:08:29 INFO TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, localhost, partition 6,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 6.0 in stage 0.0 (TID 6)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 5690 ms on localhost (1/10)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 5652 ms on localhost (2/10)
16/06/22 15:08:29 INFO TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, localhost, partition 7,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 7.0 in stage 0.0 (TID 7)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 5688 ms on localhost (3/10)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 5695 ms on localhost (4/10)
16/06/22 15:08:29 INFO Executor: Finished task 4.0 in stage 0.0 (TID 4). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, localhost, partition 8,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 165 ms on localhost (5/10)
16/06/22 15:08:29 INFO Executor: Running task 8.0 in stage 0.0 (TID 8)
16/06/22 15:08:29 INFO Executor: Finished task 5.0 in stage 0.0 (TID 5). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO Executor: Finished task 7.0 in stage 0.0 (TID 7). 1031 bytes result sent to driver
16/06/22 15:08:29 INFO TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, localhost, partition 9,PROCESS_LOCAL, 2157 bytes)
16/06/22 15:08:29 INFO Executor: Running task 9.0 in stage 0.0 (TID 9)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 184 ms on localhost (6/10)
16/06/22 15:08:29 INFO TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 99 ms on localhost (7/10)
16/06/22 15:08:29 INFO Executor: Finished task 6.0 in stage 0.0 (TID 6). 1031 bytes result sent to driver
16/06/22 15:08:30 INFO TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 190 ms on localhost (8/10)
16/06/22 15:08:30 INFO Executor: Finished task 9.0 in stage 0.0 (TID 9). 1031 bytes result sent to driver
16/06/22 15:08:30 INFO TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 155 ms on localhost (9/10)
16/06/22 15:08:30 INFO Executor: Finished task 8.0 in stage 0.0 (TID 8). 1031 bytes result sent to driver
16/06/22 15:08:30 INFO TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 217 ms on localhost (10/10)
16/06/22 15:08:30 INFO DAGScheduler: ResultStage 0 (reduce at SparkPi.scala:36) finished in 6.038 s
16/06/22 15:08:30 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/06/22 15:08:30 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:36, took 6.624021 s
Pi is roughly 3.142184
16/06/22 15:08:30 INFO SparkUI: Stopped Spark web UI at http://192.168.20.137:4040
16/06/22 15:08:30 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/06/22 15:08:30 INFO MemoryStore: MemoryStore cleared
16/06/22 15:08:30 INFO BlockManager: BlockManager stopped
16/06/22 15:08:30 INFO BlockManagerMaster: BlockManagerMaster stopped
16/06/22 15:08:30 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/06/22 15:08:31 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/06/22 15:08:31 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/06/22 15:08:31 INFO SparkContext: Successfully stopped SparkContext
16/06/22 15:08:31 INFO ShutdownHookManager: Shutdown hook called
16/06/22 15:08:31 INFO ShutdownHookManager: Deleting directory /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080
16/06/22 15:08:31 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/06/22 15:08:31 INFO ShutdownHookManager: Deleting directory /tmp/spark-c91a579d-1a18-4f75-ae05-137d9a286080/httpd-961023ad-cc05-4e3e-b648-19581093df11
shihj@master:/usr/local/spark/spark-1.6.1-bin-hadoop2.6$ 







------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Sent: Wednesday, June 22, 2016, 3:06 PM
To: "另一片天" <95...@qq.com>
Cc: "Saisai Shao" <sa...@gmail.com>; "user" <us...@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher



I cannot get a lot of info from these logs, but it surely seems like a YARN setup issue. Did you try local mode to check if it works:

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master local[4] \
spark-examples-1.6.1-hadoop2.6.0.jar 10

Note: the jar is a local one.
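
Also, that yarn logs failure below ("no such option: -a") looks like a different yarn executable on your PATH being picked up, not Hadoop's. Assuming the usual layout under your Hadoop home, try the full path (log aggregation needs to be enabled for this to return anything, if I remember right):

~/workspace/hadoop-2.6.4/bin/yarn logs -applicationId application_1466568126079_0006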


On Wed, Jun 22, 2016 at 4:50 PM, 另一片天 <95...@qq.com> wrote:
 Application application_1466568126079_0006 failed 2 times due to AM Container for appattempt_1466568126079_0006_000002 exited with exitCode: 1
For more detailed output, check application tracking page:http://master:8088/proxy/application_1466568126079_0006/Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1466568126079_0006_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
at org.apache.hadoop.util.Shell.run(Shell.java:455)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application. 



but the command gets an error:


shihj@master:~/workspace/hadoop-2.6.4$ yarn logs -applicationId application_1466568126079_0006
Usage: yarn [options]


yarn: error: no such option: -a







------------------ Original Message ------------------
From: "Yash Sharma" <ya...@gmail.com>
Sent: Wednesday, June 22, 2016, 2:46 PM
To: "另一片天" <95...@qq.com>
Cc: "Saisai Shao" <sa...@gmail.com>; "user" <us...@spark.apache.org>
Subject: Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher





Are you able to run anything else on the cluster? I suspect it's YARN that's not able to run the class. If you could just share the logs in a pastebin we could confirm that.

Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by Yash Sharma <ya...@gmail.com>.
Try with : --master yarn-cluster


Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by 另一片天 <95...@qq.com>.
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2   hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar 10
Warning: Skip remote jar hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar.
java.lang.ClassNotFoundException: org.apache.spark.examples.SparkPi
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.spark.util.Utils$.classForName(Utils.scala:174)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:689)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
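
The warning arises because, in yarn-client mode, the driver runs on the submitting machine and spark-submit only adds local jars to the driver's classpath; an hdfs:// application jar is skipped, so the driver cannot load SparkPi. A sketch of a submit that keeps the examples jar on the local filesystem instead (the lib/ path is an assumption based on the standard spark-1.6.1-bin-hadoop2.6 layout):

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client \
  --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2 \
  /usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar 10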








Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by Yash Sharma <ya...@gmail.com>.
Or better, try the master as yarn-cluster:

./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-cluster \
--driver-memory 512m \
--num-executors 2 \
--executor-memory 512m \
--executor-cores 2 \
hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar
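
In yarn-cluster mode the driver itself runs inside the YARN ApplicationMaster, so the hdfs:// application jar is localized by YARN rather than loaded by a local driver. If the job still fails there, the driver output lands in the YARN container logs rather than the submitting console; a sketch, where <application_id> is the id printed by spark-submit:

yarn logs -applicationId <application_id>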


Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by 另一片天 <95...@qq.com>.
Is it able to run in local mode?

What do you mean? Standalone mode?
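
Local mode here means running Spark in a single JVM with --master local[N], without YARN or the standalone cluster manager; it is a quick way to check that the jar and class resolve at all. A sketch, assuming the examples jar from the local 1.6.1 distribution:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master local[2] \
  /usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar 10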





Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by Yash Sharma <ya...@gmail.com>.
Try providing the jar with the hdfs:// prefix. It's probably just because it's
not able to find the jar on all nodes.

hdfs://master:9000/user/shihj/spark_lib/spark-examples-1.6.1-hadoop2.6.0.jar

Is it able to run in local mode?


Re: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher

Posted by Saisai Shao <sa...@gmail.com>.
spark.yarn.jar (default: none): The location of the Spark jar file, in case
overriding the default location is desired. By default, Spark on YARN will
use a Spark jar installed locally, but the Spark jar can also be in a
world-readable location on HDFS. This allows YARN to cache it on nodes so
that it doesn't need to be distributed each time an application runs. To
point to a jar on HDFS, for example, set this configuration to
hdfs:///some/path.

spark.yarn.jar is used for the Spark run-time system jar, i.e. the Spark
assembly jar, not the application jar (the examples assembly jar). So in your
case you uploaded the examples jar to HDFS and pointed spark.yarn.jar at it;
the Spark system classes are not packed inside that jar, so ExecutorLauncher
cannot be found.

Thanks
Saisai
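
A minimal sketch of the setup Saisai describes, with the assembly jar (not the examples jar) on HDFS; the local paths are assumptions based on the standard spark-1.6.1-bin-hadoop2.6 layout:

# Upload the Spark assembly jar to a world-readable HDFS location:
hadoop fs -put /usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar \
    hdfs://master:9000/user/shihj/spark_lib/

# In spark-defaults.conf, point spark.yarn.jar at the assembly jar:
spark.yarn.jar hdfs://master:9000/user/shihj/spark_lib/spark-assembly-1.6.1-hadoop2.6.0.jar

# Submit with the examples jar as the application jar, as before:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client \
  --driver-memory 512m --num-executors 2 --executor-memory 512m --executor-cores 2 \
  /usr/local/spark/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar 10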
