Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2014/11/25 13:09:12 UTC

[jira] [Resolved] (SPARK-2290) Do not send SPARK_HOME from driver to executors

     [ https://issues.apache.org/jira/browse/SPARK-2290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-2290.
------------------------------
    Resolution: Duplicate

According to the PR discussion, this duplicates SPARK-2454, which was resolved in https://github.com/apache/spark/pull/1734.

> Do not send SPARK_HOME from driver to executors
> -----------------------------------------------
>
>                 Key: SPARK-2290
>                 URL: https://issues.apache.org/jira/browse/SPARK-2290
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>            Reporter: YanTang Zhai
>            Assignee: Patrick Wendell
>
> The client path is /data/home/spark/test/spark-1.0.0, while the worker deploy path is /data/home/spark/spark-1.0.0, which differs from the client path. An application is then launched with ./bin/spark-submit --class JobTaskJoin --master spark://172.25.38.244:7077 --executor-memory 128M ../jobtaskjoin_2.10-1.0.0.jar. However, the application fails because an exception occurs:
> java.io.IOException: Cannot run program "/data/home/spark/test/spark-1.0.0-bin-0.20.2-cdh3u3/bin/compute-classpath.sh" (in directory "."): error=2, No such file or directory
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
>         at org.apache.spark.util.Utils$.executeAndGetOutput(Utils.scala:759)
>         at org.apache.spark.deploy.worker.CommandUtils$.buildJavaOpts(CommandUtils.scala:72)
>         at org.apache.spark.deploy.worker.CommandUtils$.buildCommandSeq(CommandUtils.scala:37)
>         at org.apache.spark.deploy.worker.ExecutorRunner.getCommandSeq(ExecutorRunner.scala:109)
>         at org.apache.spark.deploy.worker.ExecutorRunner.fetchAndRunExecutor(ExecutorRunner.scala:124)
>         at org.apache.spark.deploy.worker.ExecutorRunner$$anon$1.run(ExecutorRunner.scala:58)
> Caused by: java.io.IOException: error=2, No such file or directory
>         at java.lang.UNIXProcess.forkAndExec(Native Method)
>         at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
>         at java.lang.ProcessImpl.start(ProcessImpl.java:130)
>         at java.lang.ProcessBuilder.start(ProcessBuilder.java:1021)
>         ... 6 more
> Therefore, I think the worker should not use appDesc.sparkHome when handling LaunchExecutor; instead, the worker should use its own sparkHome directly.
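
A minimal Scala sketch of what the reporter proposes, for illustration only: the surrounding class, case classes, and all names except appDesc.sparkHome and LaunchExecutor are assumptions here, not the actual org.apache.spark.deploy.worker.Worker source.

    // Illustrative sketch, not the real Spark worker. It shows the proposed
    // behavior: when handling LaunchExecutor, ignore the driver-supplied
    // sparkHome and resolve everything against the worker's own deploy path.

    case class ApplicationDescription(name: String, sparkHome: Option[String])
    case class LaunchExecutor(appDesc: ApplicationDescription, execId: Int)

    class Worker(workerSparkHome: String) {
      def receive(msg: Any): Unit = msg match {
        case LaunchExecutor(appDesc, execId) =>
          // Failing behavior: trust the driver's path, which breaks when the
          // driver and worker install Spark under different directories.
          //   val execHome = appDesc.sparkHome.getOrElse(workerSparkHome)
          // Proposed: always use the worker's own installation directory.
          val execHome = workerSparkHome
          println(s"Launching executor $execId via $execHome/bin/compute-classpath.sh")
        case _ => ()
      }
    }

    object Demo extends App {
      // The driver sends a client-side path that does not exist on the worker;
      // with the proposed change, the worker's own path wins.
      val worker = new Worker("/data/home/spark/spark-1.0.0")
      worker.receive(LaunchExecutor(
        ApplicationDescription("JobTaskJoin", Some("/data/home/spark/test/spark-1.0.0")), 0))
    }

Either way, the point is that the executor launch path should not depend on the client's directory layout.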



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org