Posted to issues@hive.apache.org by "Lefty Leverenz (JIRA)" <ji...@apache.org> on 2017/09/05 06:42:00 UTC

[jira] [Commented] (HIVE-11363) Prewarm Hive on Spark containers [Spark Branch]

    [ https://issues.apache.org/jira/browse/HIVE-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153161#comment-16153161 ] 

Lefty Leverenz commented on HIVE-11363:
---------------------------------------

The wiki has been updated, so I removed the TODOC-SPARK and TODOC1.3 labels.

> Prewarm Hive on Spark containers [Spark Branch]
> -----------------------------------------------
>
>                 Key: HIVE-11363
>                 URL: https://issues.apache.org/jira/browse/HIVE-11363
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>    Affects Versions: 1.1.0
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>             Fix For: spark-branch, 1.3.0, 2.0.0
>
>         Attachments: HIVE-11363.1-spark.patch, HIVE-11363.2-spark.patch, HIVE-11363.3-spark.patch, HIVE-11363.4-spark.patch, HIVE-11363.5-spark.patch
>
>
> When a Hive job is launched by Oozie, a Hive session is created and the job script is executed; the session is closed when the job completes. Thus, a Hive session is not shared among Hive jobs, either within an Oozie workflow or across workflows. Since the parallelism of a Hive job executed on Spark depends on the executors available at the time, such jobs suffer the executor ramp-up overhead. The idea here is to wait briefly so that enough executors come up before the job is executed.
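
For reference, the prewarm behavior described above is controlled by configuration properties documented in the Hive wiki. A minimal sketch, assuming the property names hive.prewarm.enabled and hive.prewarm.numcontainers and using illustrative values (set them in hive-site.xml or per session):

    -- Illustrative only: enable container prewarm for Hive on Spark
    set hive.prewarm.enabled=true;         -- wait for executors before running the first job
    set hive.prewarm.numcontainers=10;     -- number of executors to wait for (value is an example)

With these set, the Hive session waits for the requested number of Spark executors (up to an internal timeout) before submitting the first job, trading a small startup delay for better parallelism on short-lived Oozie-launched sessions.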



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)