Posted to issues@spark.apache.org by "Ryan Williams (JIRA)" <ji...@apache.org> on 2015/10/23 17:28:27 UTC

[jira] [Updated] (SPARK-11285) Infinite TaskCommitDenied loop

     [ https://issues.apache.org/jira/browse/SPARK-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan Williams updated SPARK-11285:
----------------------------------
    Description: 
I've seen several apps enter this failing state in the last couple of days. I've gathered all the documentation I can about two of them:

* [application_1444948191538_0051|https://www.dropbox.com/sh/ku9btpsbwrizx9y/AAAXIY0VhMqFabJBCtTVYxtma?dl=0]
* [application_1444948191538_0116|https://www.dropbox.com/home/spark/application_1444948191538_0116]

Both were run on Spark 1.5.0 in yarn-client mode with dynamic allocation of executors.

In application_1444948191538_0051, partitions 5808 and 6109 in stage-attempt 1.0 failed 7948 and 7921 times, respectively, before I killed the app. In both cases, the first two attempts failed due to {{ExecutorLostFailure}}s, and the remaining ~7900 attempts all failed due to {{TaskCommitDenied}}s, over ~6hrs at a rate of about one attempt every ~4s. See the last several thousand lines of [application_1444948191538_0051/driver|https://www.dropbox.com/s/f3zghuzuxobyzem/driver?dl=0].

In application_1444948191538_0116, partition 10593 in stage-attempt 6.0 failed its first attempt due to an {{ExecutorLostFailure}}, and then failed 219 subsequent attempts over ~22mins due to {{TaskCommitDenied}}s before I killed the app. Again, [the driver logs|https://www.dropbox.com/s/ay1398p017qp712/driver?dl=0] enumerate each attempt.

My guess is that the {{OutputCommitCoordinator}} is getting stuck because of the early failed attempts: it seems to keep denying the commit for these partitions, so every subsequent retry fails with {{TaskCommitDenied}}.
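
To make the suspected failure mode concrete, here is a minimal sketch (the class and method names are hypothetical, not Spark's actual internals): if a coordinator hands the commit reservation for a partition to the first attempt that asks, and only releases it on an explicit failure notification, then an attempt whose executor is lost without such a notification holds the reservation forever and every retry is denied.

{code:scala}
// Hypothetical sketch, not Spark's real OutputCommitCoordinator: the first
// attempt to ask becomes the authorized committer for a partition, and the
// reservation is only released by an explicit failure notification. If that
// notification never arrives (e.g. the executor was lost), every later
// attempt for the same partition is denied, i.e. a TaskCommitDenied loop.
import scala.collection.mutable

class NaiveCommitCoordinator {
  // (stage, partition) -> attempt number currently allowed to commit
  private val authorized = mutable.Map.empty[(Int, Int), Int]

  def canCommit(stage: Int, partition: Int, attempt: Int): Boolean = synchronized {
    authorized.get((stage, partition)) match {
      case None            => authorized((stage, partition)) = attempt; true
      case Some(`attempt`) => true   // the reserved attempt asking again
      case Some(_)         => false  // another attempt holds the reservation
    }
  }

  // Only an explicit failure notification frees the reservation; if it is
  // never delivered for a lost executor, the partition stays stuck.
  def attemptFailed(stage: Int, partition: Int, attempt: Int): Unit = synchronized {
    if (authorized.get((stage, partition)) == Some(attempt)) {
      authorized.remove((stage, partition))
    }
  }
}
{code}

If the real coordinator behaves anything like this when the early lost attempts aren't reported back to it, that would match the pattern above: a couple of {{ExecutorLostFailure}}s followed by an endless stream of {{TaskCommitDenied}}s.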

I'm trying to re-run some of these jobs on a 1.5.1 release and will let you know if I repro it there as well.



> Infinite TaskCommitDenied loop
> ------------------------------
>
>                 Key: SPARK-11285
>                 URL: https://issues.apache.org/jira/browse/SPARK-11285
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.5.0
>            Reporter: Ryan Williams
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org