Posted to dev@hive.apache.org by "Chengxiang Li (JIRA)" <ji...@apache.org> on 2014/07/10 09:52:04 UTC

[jira] [Work started] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]

     [ https://issues.apache.org/jira/browse/HIVE-7371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Work on HIVE-7371 started by Chengxiang Li.

> Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
> -----------------------------------------------------------------------------
>
>                 Key: HIVE-7371
>                 URL: https://issues.apache.org/jira/browse/HIVE-7371
>             Project: Hive
>          Issue Type: Task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Chengxiang Li
>
> Currently, the Spark client ships all Hive JARs, including Hive's own dependencies, to the Spark cluster when a query is executed by Spark. This is inefficient and can cause library conflicts. Ideally, only a minimum set of JARs needs to be shipped, and this task is to identify that set.
> We should learn from the current MR setup, where I assume only the hive-exec JAR is shipped to the MR cluster.
> We also need to ensure that user-supplied JARs are shipped to the Spark cluster, in a fashion similar to what MR does.
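
For illustration only (this is not the actual HIVE-7371 implementation): a minimal Java sketch of the idea, assuming Spark's standard addJar API. The JAR paths, application name, and user-JAR list are hypothetical; in practice they would come from the Hive installation and the user's ADD JAR commands. An equivalent approach is to list the same JARs in the spark.jars configuration property before the context is created.

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    import java.util.Arrays;
    import java.util.List;

    public class MinimalJarShipping {
        public static void main(String[] args) {
            // Hypothetical paths: in practice these would be resolved from
            // the Hive installation and from the user's ADD JAR commands.
            String hiveExecJar = "/opt/hive/lib/hive-exec.jar";
            List<String> userJars = Arrays.asList("/tmp/my-udf.jar");

            SparkConf conf = new SparkConf().setAppName("Hive on Spark");
            JavaSparkContext sc = new JavaSparkContext(conf);

            // Ship only hive-exec rather than the entire Hive classpath.
            sc.addJar(hiveExecJar);

            // Ship user-supplied JARs the same way, mirroring how MR places
            // them on the distributed cache.
            for (String jar : userJars) {
                sc.addJar(jar);
            }

            sc.stop();
        }
    }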



--
This message was sent by Atlassian JIRA
(v6.2#6252)