Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/11/15 12:25:30 UTC

[GitHub] [spark] Ngone51 commented on a diff in pull request #38441: [SPARK-40979][CORE] Keep removed executor info due to decommission

Ngone51 commented on code in PR #38441:
URL: https://github.com/apache/spark/pull/38441#discussion_r1022725414


##########
core/src/main/scala/org/apache/spark/internal/config/package.scala:
##########
@@ -2024,6 +2024,16 @@ package object config {
       .stringConf
       .createOptional
 
+  private[spark] val SCHEDULER_MAX_RETAINED_REMOVED_EXECUTORS =
+    ConfigBuilder("spark.scheduler.maxRetainedRemovedExecutors")
+      .internal()
+      .doc("Max number of removed executors by decommission to retain. This affects " +
+        "whether fetch failure caused by removed decommissioned executors could be ignored " +
+        "when spark.stage.ignoreDecommissionFetchFailure is enabled.")

Review Comment:
   ```suggestion
           s"when ${STAGE_IGNORE_DECOMMISSION_FETCH_FAILURE.key} is enabled.")
   ```
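For illustration, a minimal sketch of the entry with the suggestion applied. Interpolating the referenced entry's key keeps the doc string in sync if the key is ever renamed; it assumes `STAGE_IGNORE_DECOMMISSION_FETCH_FAILURE` is defined earlier in the same package object, and the entry's concrete type and default value are not visible in the hunk above, so the ones below are hypothetical:

```scala
  private[spark] val SCHEDULER_MAX_RETAINED_REMOVED_EXECUTORS =
    ConfigBuilder("spark.scheduler.maxRetainedRemovedExecutors")
      .internal()
      // Reference the other config entry's key instead of hard-coding
      // "spark.stage.ignoreDecommissionFetchFailure" in the doc string.
      .doc("Max number of removed executors by decommission to retain. This affects " +
        "whether fetch failure caused by removed decommissioned executors could be ignored " +
        s"when ${STAGE_IGNORE_DECOMMISSION_FETCH_FAILURE.key} is enabled.")
      .intConf                   // hypothetical: the actual type is defined in the PR
      .createWithDefault(1000)   // hypothetical default, for illustration only
```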



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

