Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/11/14 16:56:57 UTC

[GitHub] [spark] pan3793 commented on a diff in pull request #38622: [SPARK-39601][YARN] AllocationFailure should not be treated as exitCausedByApp when driver is shutting down

pan3793 commented on code in PR #38622:
URL: https://github.com/apache/spark/pull/38622#discussion_r1021822652


##########
resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala:
##########
@@ -815,6 +815,7 @@ private[spark] class ApplicationMaster(
       case Shutdown(code) =>
         exitCode = code
         shutdown = true
+        allocator.setShutdown(true)

Review Comment:
   > it looks like this Shutdown message is only sent in Client mode
   
   Yea, I just noticed that. Do you think it's a good idea to send `Shutdown` in cluster mode as well, or do you have any other suggestions? cc @AngersZhuuuu as you are the author of that code.
   
   > the log message in the description has YarnClusterSchedulerBackend which makes me think this is cluster mode.
   
   You are right; the job I reported failed in cluster mode, and I think both YARN client and cluster modes have this issue.
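   
   To illustrate the idea, here is a minimal, self-contained sketch (the object and method names below are illustrative stand-ins, not the actual `YarnAllocator` internals): the allocator keeps a volatile flag that the AM flips on `Shutdown`, and the exit-cause classification consults that flag so an allocation failure during shutdown is not counted as `exitCausedByApp`.
   
   ```scala
   // Hedged sketch only; names are hypothetical stand-ins for the real
   // allocator code touched by this PR.
   object AllocatorShutdownSketch {
     // Flipped by the ApplicationMaster when it receives Shutdown(code),
     // mirroring the allocator.setShutdown(true) call in this diff.
     @volatile private var shutdown = false
   
     def setShutdown(value: Boolean): Unit = shutdown = value
   
     // An allocation failure only counts as "caused by the app" while the
     // driver is still running; during shutdown it is expected churn.
     def exitCausedByApp(isAllocationFailure: Boolean): Boolean =
       if (isAllocationFailure) !shutdown else true
   
     def main(args: Array[String]): Unit = {
       println(exitCausedByApp(isAllocationFailure = true)) // true: app blamed
       setShutdown(true)
       println(exitCausedByApp(isAllocationFailure = true)) // false: ignored
     }
   }
   ```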


