Posted to issues@spark.apache.org by "Nikita Gorbachevski (Jira)" <ji...@apache.org> on 2019/08/22 09:23:00 UTC
[jira] [Comment Edited] (SPARK-22876) spark.yarn.am.attemptFailuresValidityInterval does not work correctly
[ https://issues.apache.org/jira/browse/SPARK-22876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16913181#comment-16913181 ]
Nikita Gorbachevski edited comment on SPARK-22876 at 8/22/19 9:22 AM:
----------------------------------------------------------------------
Hi [~praveentallapudi], these options still work in cases where YARN kills the driver forcefully and the shutdown hook is not invoked, e.g. an OOM or a node manager failure. For the other cases I implemented the same feature programmatically, by running the SparkContext in a separate thread and stopping/starting it on non-fatal exceptions from the main thread.
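A minimal sketch of such a restart loop (an assumed shape, not the actual code behind this comment; RestartingDriver and runJob are illustrative names):

    import java.util.concurrent.atomic.AtomicReference
    import org.apache.spark.{SparkConf, SparkContext}
    import scala.util.control.NonFatal

    object RestartingDriver {
      def main(args: Array[String]): Unit = {
        var done = false
        while (!done) {
          val sc = new SparkContext(new SparkConf().setAppName("restarting-driver"))
          val failure = new AtomicReference[Throwable]()
          // run the job on a separate thread so the main thread can supervise
          val worker = new Thread(() => runJob(sc))
          worker.setUncaughtExceptionHandler((_, e) => failure.set(e))
          worker.start()
          worker.join()
          try {
            failure.get() match {
              case null =>
                done = true // job finished cleanly, leave the loop
              case NonFatal(e) =>
                // non-fatal failure: fall through and start a fresh context
                e.printStackTrace()
              case fatal =>
                throw fatal // fatal errors still end the attempt
            }
          } finally {
            sc.stop() // always stop the old context before looping or exiting
          }
        }
      }

      private def runJob(sc: SparkContext): Unit = {
        // hypothetical stand-in for the real driver logic
        sc.parallelize(1 to 100).count()
      }
    }

The point of this shape is that sc.stop() followed by a fresh SparkContext replaces a full YARN attempt restart, so non-fatal application errors never increment the AM attempt counter.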
> spark.yarn.am.attemptFailuresValidityInterval does not work correctly
> ---------------------------------------------------------------------
>
> Key: SPARK-22876
> URL: https://issues.apache.org/jira/browse/SPARK-22876
> Project: Spark
> Issue Type: Bug
> Components: YARN
> Affects Versions: 2.2.0
> Environment: hadoop version 2.7.3
> Reporter: Jinhan Zhong
> Priority: Minor
> Labels: bulk-closed
>
> I assume we can use spark.yarn.maxAppAttempts together with spark.yarn.am.attemptFailuresValidityInterval to keep a long-running application from stopping after an acceptable number of failures.
> But after testing, I found that the application always stops after failing n times (n is the minimum of spark.yarn.maxAppAttempts and yarn.resourcemanager.am.max-attempts from the client's yarn-site.xml).
> For example, the following setup should allow the application master to fail 20 times (see the config sketch after the list):
> * spark.yarn.am.attemptFailuresValidityInterval=1s
> * spark.yarn.maxAppAttempts=20
> * yarn client: yarn.resourcemanager.am.max-attempts=20
> * yarn resource manager: yarn.resourcemanager.am.max-attempts=3
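> A minimal sketch of how the Spark-side settings above could be supplied (illustrative only, not from the original report; in cluster mode these must reach the YARN client at submit time, e.g. via spark-submit --conf, and yarn.resourcemanager.am.max-attempts lives in yarn-site.xml, not in SparkConf):
>
>     import org.apache.spark.SparkConf
>
>     // hypothetical app name; the two attempt-related keys are the
>     // Spark confs discussed in this issue
>     val conf = new SparkConf()
>       .setAppName("attempt-validity-test")
>       .set("spark.yarn.maxAppAttempts", "20")
>       .set("spark.yarn.am.attemptFailuresValidityInterval", "1s")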
> And after checking the source code in ApplicationMaster.scala (https://github.com/apache/spark/blob/master/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala#L293),
> I found a shutdown hook that checks the attempt id against maxAppAttempts: if attempt id >= maxAppAttempts, it tries to unregister the application, and the application finishes.
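> A rough paraphrase of that shutdown-hook check (names approximate, not a verbatim copy of ApplicationMaster.scala):
>
>     // unregistering ends the application for good, so only do it when
>     // the run succeeded or this was the last allowed attempt
>     val isLastAttempt = appAttemptId.getAttemptId() >= maxAppAttempts
>     if (!unregistered && (finalStatus == FinalApplicationStatus.SUCCEEDED || isLastAttempt)) {
>       unregister(finalStatus, finalMsg)
>       cleanupStagingDir()
>     }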
> Is this expected design or a bug?