Posted to issues@spark.apache.org by "Jon Morra (JIRA)" <ji...@apache.org> on 2016/09/19 18:16:21 UTC

[jira] [Commented] (SPARK-5377) Dynamically add jar into Spark Driver's classpath.

    [ https://issues.apache.org/jira/browse/SPARK-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15504217#comment-15504217 ] 

Jon Morra commented on SPARK-5377:
----------------------------------

I would like to revisit this issue as well. Some of our jobs require on-the-fly compilation (such as scoring an H2O model) and won't work in cluster mode without this.
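The usual driver-side workaround is to reflectively add the jar's URL to the driver's class loader. A minimal sketch in Scala, assuming Java 8 (where the system class loader is a URLClassLoader) and an illustrative jar path:

    import java.io.File
    import java.net.{URL, URLClassLoader}

    // Illustrative path to a jar produced by on-the-fly compilation
    val jarUrl: URL = new File("/tmp/h2o-model.jar").toURI.toURL

    // On Java 8 the system class loader is a URLClassLoader, so its
    // protected addURL method can be reached via reflection. This hack
    // stops working on Java 9+, where the system class loader is no
    // longer a URLClassLoader.
    val loader = ClassLoader.getSystemClassLoader.asInstanceOf[URLClassLoader]
    val addUrl = classOf[URLClassLoader].getDeclaredMethod("addURL", classOf[URL])
    addUrl.setAccessible(true)
    addUrl.invoke(loader, jarUrl)

This is brittle, and in cluster mode the jar also has to be present on whatever machine the driver happens to be scheduled on, which is part of why a supported API would help.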

> Dynamically add jar into Spark Driver's classpath.
> --------------------------------------------------
>
>                 Key: SPARK-5377
>                 URL: https://issues.apache.org/jira/browse/SPARK-5377
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.2.0
>            Reporter: Chengxiang Li
>
> Spark supports dynamically adding a jar to the executor classpath through SparkContext::addJar(), but it does not support dynamically adding a jar to the driver classpath. In most cases (if not all), a user dynamically adds a jar with SparkContext::addJar() because classes from that jar will be referenced in an upcoming Spark job, which means those classes need to be loaded on the Spark driver side as well, e.g. during serialization. I think it makes sense to add an API that adds a jar to the driver classpath, or to just make SparkContext::addJar() do this. HIVE-9410 is a real case from Hive on Spark.
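> A minimal sketch of the gap (the jar name and path below are illustrative, not from the issue):
>
>     // assuming a running SparkContext sc
>     sc.addJar("hdfs:///jars/udfs.jar")  // shipped to executors only
>     // Executors can now resolve classes from udfs.jar, but the
>     // driver's classpath is unchanged, so driver-side work that
>     // references those classes, e.g. deserialization, fails with
>     // ClassNotFoundException.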



