Posted to issues@hive.apache.org by "duandingrui (JIRA)" <ji...@apache.org> on 2016/12/01 13:18:58 UTC
[jira] [Commented] (HIVE-9847) Hive should not allow additional attempts when RSC fails [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-9847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15711955#comment-15711955 ]
duandingrui commented on HIVE-9847:
-----------------------------------
Could you tell me why yarn.resourcemanager.am.max-attempts is set to 1? Thank you.
> Hive should not allow additional attempts when RSC fails [Spark Branch]
> -----------------------------------------------------------------------
>
> Key: HIVE-9847
> URL: https://issues.apache.org/jira/browse/HIVE-9847
> Project: Hive
> Issue Type: Bug
> Components: Spark
> Reporter: Jimmy Xiang
> Assignee: Jimmy Xiang
> Priority: Trivial
> Fix For: 1.2.0
>
> Attachments: HIVE-9847.1-spark.patch, HIVE-9847.2-spark.patch
>
>
> In yarn-cluster mode, if the RSC fails the first time, YARN will restart it. HoS should set "yarn.resourcemanager.am.max-attempts" to 1 to disallow such restarts when submitting Spark jobs to YARN in cluster mode.
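
As one illustration of the effect described above (a sketch only, since the actual patch contents are not shown here): Spark exposes a per-application property, spark.yarn.maxAppAttempts, which caps the number of application-master attempts for a single job. YARN takes the minimum of this value and the cluster-wide yarn.resourcemanager.am.max-attempts, so a submitter can disable restarts for its own job without touching cluster configuration. The application jar and class below are hypothetical.

```shell
# Sketch: submit a Spark job in yarn-cluster mode with AM restarts disabled.
# spark.yarn.maxAppAttempts=1 means a failed AM (e.g. a failed RSC) is not retried;
# YARN applies min(spark.yarn.maxAppAttempts, yarn.resourcemanager.am.max-attempts).
# The jar path and main class are placeholders for illustration.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.yarn.maxAppAttempts=1 \
  --class org.example.MyApp \
  /path/to/my-app.jar
```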
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)