Posted to issues@spark.apache.org by "Andrew Or (JIRA)" <ji...@apache.org> on 2014/09/19 01:08:33 UTC

[jira] [Reopened] (SPARK-3560) In yarn-cluster mode, the same jars are distributed through multiple mechanisms.

     [ https://issues.apache.org/jira/browse/SPARK-3560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Or reopened SPARK-3560:
------------------------------
      Assignee: Min Shen

Reopening just to reassign. Closing right afterwards, please disregard.

> In yarn-cluster mode, the same jars are distributed through multiple mechanisms.
> --------------------------------------------------------------------------------
>
>                 Key: SPARK-3560
>                 URL: https://issues.apache.org/jira/browse/SPARK-3560
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.1.0
>            Reporter: Sandy Ryza
>            Assignee: Min Shen
>            Priority: Critical
>             Fix For: 1.1.1, 1.2.0
>
>
> In yarn-cluster mode, jars given to spark-submit's --jars argument should be distributed to executors through the distributed cache, not through fetching.
> Currently, Spark distributes the jars both ways, which can cause executor errors when the fetch tries to overwrite the distributed-cache symlinks without write permission.
> It looks like this was introduced by SPARK-2260, which sets spark.jars in yarn-cluster mode.  Setting spark.jars is necessary for standalone cluster deploy mode, but harmful for yarn-cluster deploy mode.
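
A minimal Scala sketch of the idea behind the fix, under assumed names (Args and sparkConfEntries are hypothetical stand-ins, not the real SparkSubmit API): only propagate --jars into spark.jars for cluster managers that rely on executor-side fetching, and skip it for yarn-cluster mode, where the YARN distributed cache already ships the jars.

    // Minimal sketch, not the actual SparkSubmit code: decide whether --jars
    // should also be exposed through spark.jars, or left to the YARN
    // distributed cache alone.
    object JarPropagationSketch {
      // Hypothetical stand-in for the parsed spark-submit arguments.
      case class Args(master: String, deployMode: String, jars: Seq[String])

      def sparkConfEntries(args: Args): Map[String, String] = {
        val isYarnCluster = args.master.startsWith("yarn") && args.deployMode == "cluster"
        if (isYarnCluster || args.jars.isEmpty) {
          // YARN cluster mode: the jars are already shipped via the
          // distributed cache, so setting spark.jars would make executors
          // fetch them a second time and trip over the existing symlinks.
          Map.empty
        } else {
          // Standalone cluster deploy mode still needs spark.jars so the
          // driver and executors can fetch the jars themselves.
          Map("spark.jars" -> args.jars.mkString(","))
        }
      }
    }

With spark.jars left unset under YARN, an invocation such as spark-submit --master yarn-cluster --jars a.jar,b.jar (flags as in Spark 1.1) would distribute the jars only once, through the distributed cache.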



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org