Posted to issues@hive.apache.org by "Sahil Takiar (JIRA)" <ji...@apache.org> on 2018/10/24 15:45:00 UTC

[jira] [Comment Edited] (HIVE-20790) SparkSession should be able to close a session while it is being opened

    [ https://issues.apache.org/jira/browse/HIVE-20790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16662444#comment-16662444 ] 

Sahil Takiar edited comment on HIVE-20790 at 10/24/18 3:44 PM:
---------------------------------------------------------------

Yet another edge case to consider: guarding against multiple threads calling {{open()}} in parallel. HIVE-20737 fixed this issue by re-using {{closeLock()}} to synchronize calls to the {{open()}} method. However, as I stated in the RB comments for HIVE-20737, we should use a separate lock to guard against this behavior.

We should add some unit tests for this scenario as well.
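To make the suggestion concrete, here is a minimal sketch of what a lock dedicated to {{open()}} might look like. The class and field names ({{SparkSessionSketch}}, {{openLock}}, {{isOpen}}, etc.) are illustrative assumptions only and are not the actual {{SparkSessionImpl}} members; the real setup logic is elided.

{code:java}
// Illustrative sketch only: a lock dedicated to open(), separate from the
// lock used by close(). All names are hypothetical and do not come from
// the real SparkSessionImpl.
public class SparkSessionSketch {

  private final Object openLock = new Object();   // serializes concurrent open() calls
  private final Object closeLock = new Object();  // serializes concurrent close() calls

  private volatile boolean isOpen = false;

  public void open() {
    synchronized (openLock) {
      if (isOpen) {
        return; // another thread already completed the open
      }
      // ... expensive setup would happen here (e.g. launching the remote driver) ...
      isOpen = true;
    }
  }

  public void close() {
    // Not synchronized on openLock, so a close request is not forced to
    // wait on the same monitor as an in-flight open().
    synchronized (closeLock) {
      // ... tear down any resources that were created ...
      isOpen = false;
    }
  }
}
{code}

With separate locks, parallel {{open()}} calls are serialized against each other without making {{close()}} contend for the same monitor, which is the direction this ticket is heading in.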


> SparkSession should be able to close a session while it is being opened
> -----------------------------------------------------------------------
>
>                 Key: HIVE-20790
>                 URL: https://issues.apache.org/jira/browse/HIVE-20790
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Sahil Takiar
>            Assignee: Antal Sinkovits
>            Priority: Major
>
> In HIVE-14162 we added locks to {{SparkSessionImpl}} to support scenarios where we want to close the session due to a timeout. However, the locks remove the ability to close a session while it is being opened. Closing during open is important because it allows cancelling a session while it is still being set up, which can be useful on busy clusters where there may not be enough YARN containers available to set up the Spark Remote Driver.
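As a rough, hypothetical illustration of the behavior the description asks for, the sketch below has {{open()}} check a cancellation flag between setup steps instead of holding a lock that {{close()}} would also need, so a close request issued mid-setup can take effect. The class, field, and step names are invented for the example and do not reflect the actual Hive implementation.

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical example: open() polls a cancellation flag between setup
// steps so that close() can cancel an in-flight open without blocking on it.
public class CancellableSessionSketch {

  private final AtomicBoolean cancelled = new AtomicBoolean(false);

  public void open() throws InterruptedException {
    // Step 1: e.g. request YARN containers for the remote driver (simulated).
    Thread.sleep(100);
    if (cancelled.get()) {
      cleanup();
      return; // abort setup; close() has been requested
    }
    // Step 2: e.g. connect to the remote driver (simulated).
    Thread.sleep(100);
    if (cancelled.get()) {
      cleanup();
    }
  }

  public void close() {
    // Signals cancellation without waiting for open() to finish.
    // Coordination of the actual teardown is omitted in this sketch.
    cancelled.set(true);
  }

  private void cleanup() {
    // Release any partially-acquired resources here.
  }
}
{code}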


