Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2016/09/27 19:07:20 UTC

[jira] [Assigned] (SPARK-17667) Make locking fine grained in YarnAllocator#enqueueGetLossReasonRequest

     [ https://issues.apache.org/jira/browse/SPARK-17667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-17667:
------------------------------------

    Assignee: Apache Spark

> Make locking fine grained in YarnAllocator#enqueueGetLossReasonRequest
> ----------------------------------------------------------------------
>
>                 Key: SPARK-17667
>                 URL: https://issues.apache.org/jira/browse/SPARK-17667
>             Project: Spark
>          Issue Type: Bug
>          Components: YARN
>    Affects Versions: 1.6.2, 2.0.0
>            Reporter: Ashwin Shankar
>            Assignee: Apache Spark
>
> Following up on the discussion in SPARK-15725, one of the reasons for the AM hanging with dynamic allocation (DA) is the way locking is done in YarnAllocator. We noticed that when executors go down during the shrink phase of DA, the AM gets locked up. On taking a thread dump, we see threads trying to get the executor loss reason via YarnAllocator#enqueueGetLossReasonRequest, and they are all BLOCKED waiting for the lock acquired by the allocate call. This gets worse when the number of executors going down is in the thousands, and I've seen the AM hang on the order of minutes. This jira is created to make the locking a little more fine grained by remembering the executors that were killed via the AM, and then serving the GetExecutorLossReason requests with that information.
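> A minimal sketch of the idea, not the actual YarnAllocator code: keep the IDs of executors the AM itself released in a thread-safe set, so a GetExecutorLossReason-style request can be answered from that set without waiting on the coarse lock held during allocate(). The class and method names here (LossReasonTracker, markReleased, lossReason) are hypothetical and only illustrate the approach.
> {code:scala}
> import java.util.Collections
> import java.util.concurrent.ConcurrentHashMap
>
> class LossReasonTracker {
>   // Executor IDs the AM intentionally released (e.g. during the DA shrink phase).
>   private val releasedByAm =
>     Collections.newSetFromMap(new ConcurrentHashMap[String, java.lang.Boolean]())
>
>   // Record an executor the AM is killing, before YARN reports it as lost.
>   def markReleased(executorId: String): Unit = releasedByAm.add(executorId)
>
>   // Answer a loss-reason request without blocking on the allocator lock:
>   // if the AM released the executor itself, the reason is already known.
>   def lossReason(executorId: String): Option[String] =
>     if (releasedByAm.contains(executorId)) Some("Executor released by the AM")
>     else None // unknown here; fall back to the slower path behind allocate()
> }
> {code}
> With something like this, only the genuinely unknown cases need to wait for the allocate call to finish, so thousands of known shrink-phase removals no longer pile up as BLOCKED threads.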



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org