Posted to dev@hive.apache.org by "Xuefu Zhang (JIRA)" <ji...@apache.org> on 2014/07/09 17:02:04 UTC

[jira] [Created] (HIVE-7371) Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]

Xuefu Zhang created HIVE-7371:
---------------------------------

             Summary: Identify a minimum set of JARs needed to ship to Spark cluster [Spark Branch]
                 Key: HIVE-7371
                 URL: https://issues.apache.org/jira/browse/HIVE-7371
             Project: Hive
          Issue Type: Task
          Components: Spark
            Reporter: Xuefu Zhang


Currently, the Spark client ships all Hive JARs, including Hive's own dependencies, to the Spark cluster when a query is executed on Spark. This is inefficient and can cause library conflicts on the cluster. Ideally, only a minimal set of JARs should be shipped. This task is to identify that set.
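For illustration only (not Hive's actual code), here is a minimal sketch of how a Spark client controls which JARs reach the cluster via the standard spark.jars property; the app name, local master, and JAR path are hypothetical:

{code:java}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class MinimalJarShipping {
  public static void main(String[] args) {
    // Ship an explicit, minimal list of JARs instead of the full Hive
    // classpath. The path below is a hypothetical example.
    SparkConf conf = new SparkConf()
        .setAppName("hive-on-spark-example")
        .setMaster("local[*]")  // local master just for demonstration
        .set("spark.jars", "/opt/hive/lib/hive-exec.jar");

    JavaSparkContext sc = new JavaSparkContext(conf);

    // ... run jobs; every JAR listed above is distributed to executors,
    // so keeping the list small reduces startup cost and conflict risk.

    sc.stop();
  }
}
{code}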

We should learn from the current MR path, where I assume only the hive-exec JAR is shipped to the MR cluster.
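One way to identify the JAR that provides a given class at runtime, which could feed into building that minimal list, is to inspect the class's code source. A sketch, assuming hive-exec is on the classpath (org.apache.hadoop.hive.ql.exec.Utilities is just an example class from hive-exec):

{code:java}
import java.net.URL;
import java.security.CodeSource;

public class JarLocator {
  /** Returns the filesystem path of the JAR providing the given class, or null. */
  static String jarOf(Class<?> clazz) {
    CodeSource src = clazz.getProtectionDomain().getCodeSource();
    if (src == null) {
      return null;  // e.g., classes loaded by the bootstrap class loader
    }
    URL location = src.getLocation();
    return location.getPath();
  }

  public static void main(String[] args) throws ClassNotFoundException {
    // Locate the JAR backing a core execution class so that only this
    // JAR (hive-exec, in the MR case) needs to be shipped.
    Class<?> execClass = Class.forName("org.apache.hadoop.hive.ql.exec.Utilities");
    System.out.println(jarOf(execClass));
  }
}
{code}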

We also need to ensure that user-supplied JARs are shipped to the Spark cluster, in a fashion similar to how MR handles them.
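If the Spark client has a JavaSparkContext available, forwarding session-added JARs could look roughly like the following. This is a hypothetical helper, not Hive's actual session resource handling:

{code:java}
import java.util.List;
import org.apache.spark.api.java.JavaSparkContext;

public class UserJarShipper {
  /**
   * Ships user-supplied JARs (e.g., registered via ADD JAR) to the cluster.
   * Hypothetical sketch; Hive's real session handling may differ.
   */
  static void shipUserJars(JavaSparkContext sc, List<String> userJars) {
    for (String jar : userJars) {
      // Each JAR is distributed to executors for tasks submitted afterwards.
      sc.addJar(jar);
    }
  }
}
{code}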


