Posted to issues@spark.apache.org by "Ryan Williams (JIRA)" <ji...@apache.org> on 2015/10/23 17:28:27 UTC

[jira] [Commented] (SPARK-11285) Infinite TaskCommitDenied loop

    [ https://issues.apache.org/jira/browse/SPARK-11285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14971161#comment-14971161 ] 

Ryan Williams commented on SPARK-11285:
---------------------------------------

A quick note on the structure of the per-application log directories linked above:

The directories are produced by [this script|https://github.com/hammerlab/yarn-logs-helpers/blob/master/yarn-container-logs], which I use to split aggregated YARN logs back out into per-container files. Each directory contains the following (a quick usage example follows the list):
* {{events.json}}: the event-log file
* {{driver}}: the driver's stdout; a symlink to {{drivers/0}}.
* {{app_master}}: the ApplicationMaster container's output; a symlink to {{app_masters/0}}.
* {{containers}}: directory containing the output of every executor container.
* {{eids}}: directory with one symlink per executor ID, pointing at the relevant file in {{containers}}.
* {{tids}}: directory with one symlink per task ID, pointing at the container-output file for that task.
* {{hosts}}: one directory per host, containing symlinks to the containers that ran on that host.
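
For example, here's roughly how I navigate one of these directories once it's downloaded locally (the paths and IDs below are illustrative placeholders, not values from the apps above):

{code}
# driver log (same file as drivers/0):
less application_<id>/driver

# output of a given executor, via the eids/ symlinks:
less application_<id>/eids/<executor-id>

# output of the container that ran a given task ID:
less application_<id>/tids/<task-id>

# all containers that ran on a particular host:
ls application_<id>/hosts/<hostname>/
{code}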

> Infinite TaskCommitDenied loop
> ------------------------------
>
>                 Key: SPARK-11285
>                 URL: https://issues.apache.org/jira/browse/SPARK-11285
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.5.0
>            Reporter: Ryan Williams
>
> I've seen several apps enter this failing state in the last couple of days. I've gathered all the documentation I can about two of them, [application_1444948191538_0051|https://www.dropbox.com/sh/ku9btpsbwrizx9y/AAAXIY0VhMqFabJBCtTVYxtma?dl=0] and [application_1444948191538_0116|https://www.dropbox.com/home/spark/application_1444948191538_0116]. Both were run on Spark 1.5.0 in yarn-client mode with dynamic allocation of executors.
> In application_1444948191538_0051, partitions 5808 and 6109 in stage-attempt 1.0 failed 7948 and 7921 times, respectively, before I killed the app. In both cases, the first two attempts failed due to {{ExecutorLostFailure}}s, and the remaining ~7900 attempts all failed due to {{TaskCommitDenied}}s, over ~6hrs at a rate of roughly one attempt every ~4s. See the last several thousand lines of [application_1444948191538_0051/driver|https://www.dropbox.com/s/f3zghuzuxobyzem/driver?dl=0].
> In application_1444948191538_0116, partition 10593 in stage-attempt 6.0 failed its first attempt due to an {{ExecutorLostFailure}}, then failed 219 subsequent attempts over ~22mins due to {{TaskCommitDenied}}s before I killed the app. Again, [the driver logs|https://www.dropbox.com/s/ay1398p017qp712/driver?dl=0] enumerate each attempt.
> I'm guessing that the OutputCommitCoordinator is getting stuck due to early failed attempts?
> I'm trying to re-run some of these jobs on a 1.5.1 release and will let you know if I repro it there as well.
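
For anyone digging through the driver logs above: here's a rough way to tally the {{TaskCommitDenied}} failures per (stage, partition). It assumes the usual TaskSetManager wording ("Lost task <index>.<attempt> in stage <stage>.<stage-attempt> ...: TaskCommitDenied ..."), so adjust the patterns if your log lines differ:

{code}
# count TaskCommitDenied task failures per (stage, partition) in a driver log,
# most-frequent first -- the regexes assume the log wording described above
grep TaskCommitDenied driver \
  | grep -o 'task [0-9]*\.[0-9]* in stage [0-9.]*' \
  | sed 's/task \([0-9]*\)\.[0-9]* in stage \(.*\)/stage \2, partition \1/' \
  | sort | uniq -c | sort -rn | head
{code}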



