Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/06/30 13:55:46 UTC

[GitHub] [spark] tgravescs commented on a change in pull request #28924: [SPARK-32091][CORE] Ignore timeout error when remove blocks on the lost executor

tgravescs commented on a change in pull request #28924:
URL: https://github.com/apache/spark/pull/28924#discussion_r447684033



##########
File path: core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedClusterMessage.scala
##########
@@ -132,4 +132,6 @@ private[spark] object CoarseGrainedClusterMessages {
   // Used internally by executors to shut themselves down.
   case object Shutdown extends CoarseGrainedClusterMessage
 
+  // The message to check whether the executor is alive or not.

Review comment:
       might be nice just to clarify by saying it checks whether the scheduler thinks the executor is alive or not
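
For illustration only, a minimal self-contained sketch of how the new message and its comment could read once the suggested wording is applied. The message name IsExecutorAlive and its executorId field are assumptions made for the example, not taken from the diff shown above.

    // Stand-in for the real trait in org.apache.spark.scheduler.cluster (illustrative only).
    sealed trait CoarseGrainedClusterMessage extends Serializable

    object CoarseGrainedClusterMessages {
      // Asks the driver whether the scheduler still thinks the given
      // executor is alive (message name and field are assumptions).
      case class IsExecutorAlive(executorId: String) extends CoarseGrainedClusterMessage
    }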

##########
File path: core/src/main/scala/org/apache/spark/util/RpcUtils.scala
##########
@@ -54,6 +56,13 @@ private[spark] object RpcUtils {
     RpcTimeout(conf, Seq(RPC_LOOKUP_TIMEOUT.key, NETWORK_TIMEOUT.key), "120s")
   }
 
+  /**
+   * Infinite timeout is used internally, so there's no actual timeout property controls it.

Review comment:
       maybe say "timeout configuration property that controls it".
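
For context, a hedged sketch of what an internally used infinite RPC timeout might look like, using a simplified stand-in for Spark's RpcTimeout class; the constant name INFINITE_TIMEOUT and the field names here are illustrative, not taken from the diff above.

    import scala.concurrent.duration._

    // Simplified stand-in for org.apache.spark.rpc.RpcTimeout (illustrative only).
    case class RpcTimeout(duration: FiniteDuration, timeoutProp: String)

    object RpcUtilsSketch {
      // Effectively infinite timeout used internally; there is no timeout
      // configuration property that controls it, so the property name here
      // is just a placeholder.
      val INFINITE_TIMEOUT: RpcTimeout = RpcTimeout(Long.MaxValue.nanos, "infinite")
    }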




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


