Posted to dev@hive.apache.org by "Chengxiang Li (JIRA)" <ji...@apache.org> on 2014/11/06 14:15:34 UTC

[jira] [Commented] (HIVE-8548) Integrate with remote Spark context after HIVE-8528 [Spark Branch]

    [ https://issues.apache.org/jira/browse/HIVE-8548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14200155#comment-14200155 ] 

Chengxiang Li commented on HIVE-8548:
-------------------------------------

[~xuefuz], if we set spark.master to local, Hive users connect to HiveServer2, which uses a local Spark context to submit jobs with a separate session for each user, so we may still hit the multiple-Spark-context issue. Therefore HiveServer2 should only use the remote Spark context, while the CLI may use either a local or a remote Spark context; we could add a parameter to configure this, with the local Spark context as the default. What do you think about it?
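The proposed selection logic could be sketched as follows. This is a minimal illustration only; the property name `hive.spark.use.remote.context` and the class names are hypothetical, since the comment merely proposes adding such a parameter and does not name it:

```java
import java.util.Properties;

// Hypothetical sketch of the context-selection rule proposed above.
// Names here are illustrative, not actual Hive APIs.
public class SparkContextSelector {

    // Hypothetical config key; the comment only proposes that such a
    // parameter be added, with the local context as the CLI default.
    static final String USE_REMOTE_KEY = "hive.spark.use.remote.context";

    enum ContextKind { LOCAL, REMOTE }

    static ContextKind select(Properties conf, boolean isHiveServer2) {
        // HiveServer2 runs a separate session per user in one process;
        // multiple local SparkContexts there would collide, so it must
        // always use the remote Spark context.
        if (isHiveServer2) {
            return ContextKind.REMOTE;
        }
        // CLI: honor the parameter, defaulting to a local context.
        boolean useRemote = Boolean.parseBoolean(
                conf.getProperty(USE_REMOTE_KEY, "false"));
        return useRemote ? ContextKind.REMOTE : ContextKind.LOCAL;
    }
}
```

Under this sketch, HiveServer2 ignores the parameter entirely, while a CLI user can opt into the remote context by setting the property to true.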

> Integrate with remote Spark context after HIVE-8528 [Spark Branch]
> ------------------------------------------------------------------
>
>                 Key: HIVE-8548
>                 URL: https://issues.apache.org/jira/browse/HIVE-8548
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Chengxiang Li
>
> With HIVE-8528, HiveServer2 should use a remote Spark context to submit jobs, monitor progress, etc. This is necessary if Hive runs on a standalone cluster, YARN, or Mesos. If Hive runs with spark.master=local, we should continue using SparkContext in the current way.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)