Posted to issues@spark.apache.org by "Bang Xiao (JIRA)" <ji...@apache.org> on 2018/01/29 08:27:00 UTC
[jira] [Created] (SPARK-23252) When NodeManager and CoarseGrainedExecutorBackend processes are killed, the job will be blocked
Bang Xiao created SPARK-23252:
---------------------------------
Summary: When NodeManager and CoarseGrainedExecutorBackend processes are killed, the job will be blocked
Key: SPARK-23252
URL: https://issues.apache.org/jira/browse/SPARK-23252
Project: Spark
Issue Type: Bug
Components: Spark Core
Affects Versions: 2.2.0
Reporter: Bang Xiao
This happens when 'spark.dynamicAllocation.enabled' is set to 'true'. We use YARN as our resource manager.
1. spark-submit the "JavaWordCount" application in yarn-client mode.
2. Kill the NodeManager and CoarseGrainedExecutorBackend processes on one node while the job is in stage 0 (see the sketch after these steps).
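For reference, a minimal reproduction sketch of the two steps above. The jar path, input path, and PIDs are placeholders; enabling the external shuffle service is an assumption here, since dynamic allocation on YARN normally requires it:

    # Step 1: submit the example job in yarn-client mode with dynamic allocation on.
    spark-submit \
      --class org.apache.spark.examples.JavaWordCount \
      --master yarn \
      --deploy-mode client \
      --conf spark.dynamicAllocation.enabled=true \
      --conf spark.shuffle.service.enabled=true \
      /path/to/spark-examples.jar /path/to/input.txt

    # Step 2: on one worker node, while the job is in stage 0, kill the
    # NodeManager and all CoarseGrainedExecutorBackend processes at once.
    # (jps lists running JVM processes; the PIDs below are placeholders.)
    jps | grep -E 'NodeManager|CoarseGrainedExecutorBackend'
    kill -9 <NodeManager-pid> <CoarseGrainedExecutorBackend-pids>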
If we kill only the CoarseGrainedExecutorBackend processes on that node, the TaskSetManager marks the failed tasks as pending and resubmits them. But if the NodeManager and CoarseGrainedExecutorBackend processes are killed simultaneously, the whole job is blocked.