Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/01/25 00:09:34 UTC

[jira] [Resolved] (SPARK-4831) Current directory always on classpath with spark-submit

     [ https://issues.apache.org/jira/browse/SPARK-4831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-4831.
------------------------------
       Resolution: Fixed
    Fix Version/s: 1.3.0
         Assignee: Daniel Darabos

Looks like this was merged in https://github.com/apache/spark/commit/7cb3f54793124c527d62906c565aba2c3544e422

> Current directory always on classpath with spark-submit
> -------------------------------------------------------
>
>                 Key: SPARK-4831
>                 URL: https://issues.apache.org/jira/browse/SPARK-4831
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 1.1.1, 1.2.0
>            Reporter: Daniel Darabos
>            Assignee: Daniel Darabos
>            Priority: Minor
>             Fix For: 1.3.0
>
>
> We had a situation where we launched an application with spark-submit and a file (play.plugins) was on the classpath twice, causing problems (plugins were registered twice). When we investigated how it got onto the classpath twice, we found it was present in one of our jars and also in the current working directory. But the copy in the current working directory should not have been on the classpath: we never asked spark-submit to put the current directory there.
> I think this is caused by a line in [compute-classpath.sh|https://github.com/apache/spark/blob/v1.2.0-rc2/bin/compute-classpath.sh#L28]:
> {code}
> CLASSPATH="$SPARK_CLASSPATH:$SPARK_SUBMIT_CLASSPATH"
> {code}
> Now if SPARK_CLASSPATH is empty, this prepends an empty entry to the classpath, and the JVM interprets an empty classpath entry as the current working directory.
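> For illustration (the jar path and main class here are hypothetical), the same effect is visible when invoking the JVM directly:
> {code}
> # With SPARK_CLASSPATH unset, the assignment above expands to:
> #   CLASSPATH=":/path/to/spark-assembly.jar"
> # The empty entry before the ':' is resolved to the current directory,
> # so a stray ./play.plugins is found alongside the copy inside the jar.
> java -cp ":/path/to/app.jar" com.example.Main
> {code}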
> We tried setting SPARK_CLASSPATH to a bogus value, but that is [not allowed|https://github.com/apache/spark/blob/v1.2.0-rc2/core/src/main/scala/org/apache/spark/SparkConf.scala#L312].
> What is the right solution? Only add SPARK_CLASSPATH when it is non-empty? I think I can send a pull request for that. Thanks!
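> Something like this sketch, perhaps:
> {code}
> # Only prepend SPARK_CLASSPATH when it is actually set, so no empty
> # entry (and thus no implicit current directory) lands on the classpath.
> if [ -n "$SPARK_CLASSPATH" ]; then
>   CLASSPATH="$SPARK_CLASSPATH:$SPARK_SUBMIT_CLASSPATH"
> else
>   CLASSPATH="$SPARK_SUBMIT_CLASSPATH"
> fi
> {code}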


