Posted to issues@spark.apache.org by "Marcelo Vanzin (JIRA)" <ji...@apache.org> on 2016/06/10 00:32:21 UTC

[jira] [Resolved] (SPARK-12447) Only update AM's internal state when executor is successfully launched by NM

     [ https://issues.apache.org/jira/browse/SPARK-12447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Marcelo Vanzin resolved SPARK-12447.
------------------------------------
       Resolution: Fixed
         Assignee: Saisai Shao  (was: Apache Spark)
    Fix Version/s: 2.0.0

> Only update AM's internal state when executor is successfully launched by NM
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-12447
>                 URL: https://issues.apache.org/jira/browse/SPARK-12447
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.6.0
>            Reporter: Saisai Shao
>            Assignee: Saisai Shao
>             Fix For: 2.0.0
>
>
> Currently {{YarnAllocator}} updates its internal state (such as {{numExecutorsRunning}}) after a container is allocated but before the executor has successfully launched.
> The launch can fail when the Spark configuration is wrong (for example, the spark_shuffle aux-service is not configured on the NM), or when the NM is lost while {{NMClient}} is communicating with it.
> In the current implementation the state is updated even when the executor fails to launch, which leaves the AM with an incorrect view of the cluster. In addition, the lingering container is only released after a timeout, which wastes resources.
> So we should update the state only after the executor is successfully launched; otherwise we should release the container as soon as possible, so the launch fails fast and can be retried (see the sketch below).
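> A minimal sketch of the intended flow, assuming simplified stand-ins for {{YarnAllocator}}'s internals ({{ExecutorLauncher}}, {{nmClient}}, {{amClient}}, and {{numExecutorsRunning}} here are illustrative names, not the exact code of the patch):
> {code:scala}
> import java.util.concurrent.atomic.AtomicInteger
>
> import org.apache.hadoop.yarn.api.records.{Container, ContainerLaunchContext}
> import org.apache.hadoop.yarn.client.api.{AMRMClient, NMClient}
>
> class ExecutorLauncher(
>     nmClient: NMClient,
>     amClient: AMRMClient[AMRMClient.ContainerRequest]) {
>
>   // Executors are counted only after the NM confirms the launch.
>   val numExecutorsRunning = new AtomicInteger(0)
>
>   def launchExecutor(container: Container, ctx: ContainerLaunchContext): Unit = {
>     try {
>       // Ask the NM to start the executor container.
>       nmClient.startContainer(container, ctx)
>       // Update AM-side state only once the launch has succeeded.
>       numExecutorsRunning.incrementAndGet()
>     } catch {
>       case _: Exception =>
>         // Fail fast: give the container back to the RM immediately
>         // instead of letting it linger until a timeout expires, so
>         // a replacement executor can be requested right away.
>         amClient.releaseAssignedContainer(container.getId)
>     }
>   }
> }
> {code}
> The key point is that no bookkeeping happens until {{startContainer}} returns without throwing; on failure the container is released at once rather than after the NM expires it.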



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org