Posted to dev@hive.apache.org by "Xuefu Zhang (JIRA)" <ji...@apache.org> on 2014/11/26 03:10:12 UTC

[jira] [Comment Edited] (HIVE-8957) Remote spark context needs to clean up itself in case of connection timeout [Spark Branch]

    [ https://issues.apache.org/jira/browse/HIVE-8957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14225525#comment-14225525 ] 

Xuefu Zhang edited comment on HIVE-8957 at 11/26/14 2:09 AM:
-------------------------------------------------------------

-[~mvalleavila]- [~vanzin], could you advise what the right thing to do here is? Calling stop() actually makes the Hive process hang. Thanks.


was (Author: xuefuz):
[~mvalleavila], could you advise what the right thing to do here is? Calling stop() actually makes the Hive process hang. Thanks.

> Remote spark context needs to clean up itself in case of connection timeout [Spark Branch]
> ------------------------------------------------------------------------------------------
>
>                 Key: HIVE-8957
>                 URL: https://issues.apache.org/jira/browse/HIVE-8957
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Xuefu Zhang
>            Assignee: Xuefu Zhang
>         Attachments: HIVE-8957.1-spark.patch
>
>
> In the current SparkClient implementation (class SparkClientImpl), the constructor performs some initialization and then waits for the remote driver to connect. In case of a timeout, it simply throws an exception without cleaning up after itself. That cleanup is necessary to release system resources (e.g. the launched driver process and any threads waiting on the connection).
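
The cleanup-on-timeout pattern described in the issue can be sketched in Java as below. This is a minimal illustration, not Hive's actual code: the class and member names (SparkClientSketch, waitForDriver, cleanup, driverProcess) are hypothetical stand-ins for whatever SparkClientImpl really uses.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

class SparkClientSketch {
    // Observable flag so the cleanup can be demonstrated; illustrative only.
    static volatile boolean cleanedUp = false;

    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private Process driverProcess; // stands in for the launched remote driver process

    SparkClientSketch(long timeoutMs) throws Exception {
        // Initialization: start waiting for the remote driver to call back.
        Future<String> connection = executor.submit(this::waitForDriver);
        try {
            connection.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // The fix the issue asks for: release resources before
            // propagating the failure, instead of leaking the executor
            // thread and the child process.
            cleanup();
            throw e;
        }
    }

    private String waitForDriver() throws InterruptedException {
        // Placeholder: block until the driver connects (never, in this sketch).
        Thread.sleep(Long.MAX_VALUE);
        return "connected";
    }

    private void cleanup() {
        if (driverProcess != null) {
            driverProcess.destroy(); // kill the launched driver, if any
        }
        executor.shutdownNow(); // interrupt the waiting task, free the thread
        cleanedUp = true;
    }
}
```

Constructing the client with a short timeout then throws TimeoutException, but only after the executor is shut down and the child process (if any) is destroyed, so no threads or OS resources outlive the failed constructor.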



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)