Posted to issues@hive.apache.org by "Rui Li (JIRA)" <ji...@apache.org> on 2016/03/14 13:25:33 UTC

[jira] [Commented] (HIVE-12650) Spark-submit is killed when Hive times out. Killing spark-submit doesn't cancel AM request. When AM is finally launched, it tries to connect back to Hive and gets refused.

    [ https://issues.apache.org/jira/browse/HIVE-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15193174#comment-15193174 ] 

Rui Li commented on HIVE-12650:
-------------------------------

The timeout is necessary in case the RSC crashes due to some errors. But the issue here shows that the timeout can also fire because the RSC is just waiting for resources from a busy cluster. I think we need a way to distinguish these two scenarios and not time out in the latter case.
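
A hedged sketch of how the two cases might be told apart (an illustration only, not what this JIRA implements): before declaring the RSC dead, Hive could ask YARN for the application's state. NEW/SUBMITTED/ACCEPTED means the AM is still queued on a busy cluster, while FAILED or KILLED points to a real crash. The appId parameter is an assumption here; Hive would first need to learn the application id, e.g. from spark-submit's output.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.ApplicationId;
    import org.apache.hadoop.yarn.api.records.YarnApplicationState;
    import org.apache.hadoop.yarn.client.api.YarnClient;

    // Sketch: decide whether an RSC connect timeout means "crashed"
    // or merely "still waiting for cluster resources".
    public class RscTimeoutCheck {
      static boolean stillWaitingForResources(ApplicationId appId) throws Exception {
        YarnClient yarn = YarnClient.createYarnClient();
        yarn.init(new Configuration());
        yarn.start();
        try {
          YarnApplicationState state =
              yarn.getApplicationReport(appId).getYarnApplicationState();
          // NEW/SUBMITTED/ACCEPTED: the AM has not launched yet, so the
          // cluster is likely just busy -- extend the timeout instead of
          // killing spark-submit (which does not cancel the AM request).
          return state == YarnApplicationState.NEW
              || state == YarnApplicationState.SUBMITTED
              || state == YarnApplicationState.ACCEPTED;
        } finally {
          yarn.stop();
        }
      }
    }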

> Spark-submit is killed when Hive times out. Killing spark-submit doesn't cancel AM request. When AM is finally launched, it tries to connect back to Hive and gets refused.
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HIVE-12650
>                 URL: https://issues.apache.org/jira/browse/HIVE-12650
>             Project: Hive
>          Issue Type: Bug
>    Affects Versions: 1.1.1, 1.2.1
>            Reporter: JoneZhang
>            Assignee: Xuefu Zhang
>
> I think hive.spark.client.server.connect.timeout should be set greater than spark.yarn.am.waitTime. The default value for spark.yarn.am.waitTime is 100s, while the default value for hive.spark.client.server.connect.timeout is only 90s, so Hive can give up before the AM has even had a chance to start. We can increase it to a larger value such as 120s.
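
As a hedged illustration of the mitigation suggested in the description above (the 120s value comes from there; the property accepts a time suffix, and 90000ms is the shipped default), the timeout could be raised in hive-site.xml:

    <!-- hive-site.xml: give the RSC longer than spark.yarn.am.waitTime (100s) -->
    <property>
      <name>hive.spark.client.server.connect.timeout</name>
      <value>120000ms</value>
    </property>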



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)