Posted to reviews@spark.apache.org by "tgravescs (via GitHub)" <gi...@apache.org> on 2023/08/21 13:26:27 UTC

[GitHub] [spark] tgravescs commented on pull request #42570: [SPARK-22876][YARN] Respect YARN AM failure validity interval

tgravescs commented on PR #42570:
URL: https://github.com/apache/spark/pull/42570#issuecomment-1686328788

   so to clarify the issue here, the original PR that added this validityInterval config (https://github.com/apache/spark/pull/8857/files) just seems to call the YARN setAttemptFailuresValidityInterval(). So you are saying that config works on the YARN side, but on the Spark side we think it's the last attempt when it really isn't. That makes sense.
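   For context, here is a minimal sketch of why the two sides can disagree once setAttemptFailuresValidityInterval() is in play. This is NOT the actual Spark or YARN source; FailedAttempt, yarnThinksLastAttempt, and sparkThinksLastAttempt are made-up names for illustration. The idea: YARN only counts AM failures that happened inside the validity interval, while a check based purely on the current attempt id counts every prior failure.
   
   ```scala
   object ValidityIntervalSketch {
   
     // Hypothetical record of one failed AM attempt.
     final case class FailedAttempt(attemptId: Int, failedAtMillis: Long)
   
     // Roughly what YARN does: a failure now only exhausts the app if the
     // prior failures still inside the validity window, plus this attempt,
     // reach maxAppAttempts.
     def yarnThinksLastAttempt(priorFailures: Seq[FailedAttempt],
                               maxAppAttempts: Int,
                               validityIntervalMs: Long,
                               nowMs: Long): Boolean = {
       val recent = priorFailures.count(f => nowMs - f.failedAtMillis <= validityIntervalMs)
       recent + 1 >= maxAppAttempts
     }
   
     // The naive attempt-id check: ignores the validity interval entirely.
     def sparkThinksLastAttempt(currentAttemptId: Int, maxAppAttempts: Int): Boolean =
       currentAttemptId >= maxAppAttempts
   
     def main(args: Array[String]): Unit = {
       val maxAppAttempts = 2
       val validityIntervalMs = 60L * 60 * 1000 // 1 hour
       val now = System.currentTimeMillis()
   
       // Attempt 1 failed two hours ago, outside the validity interval,
       // so YARN no longer counts it against maxAppAttempts.
       val failures = Seq(FailedAttempt(1, now - 2 * 60 * 60 * 1000L))
   
       // We are now on attempt 2 of 2.
       println(sparkThinksLastAttempt(2, maxAppAttempts))                                 // true
       println(yarnThinksLastAttempt(failures, maxAppAttempts, validityIntervalMs, now))  // false
     }
   }
   ```
   
   So with the interval configured, Spark's attempt-id arithmetic can say "last attempt" while YARN would happily schedule another one, which matches the first point above.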
   
   I'm not sure I understand your second point above, though (Spark thinks the application will retry but YARN thinks it was the last attempt). How does that happen with this config? What is the scenario there?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org