Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/11/14 21:09:35 UTC

[GitHub] [spark] Victsm commented on a change in pull request #30312: [WIP][SPARK-32917][SHUFFLE][CORE][test-maven][test-hadoop2.7] Adds support for executors to push shuffle blocks after successful map task completion

Victsm commented on a change in pull request #30312:
URL: https://github.com/apache/spark/pull/30312#discussion_r523463471



##########
File path: common/network-common/src/main/java/org/apache/spark/network/client/TransportClientFactory.java
##########
@@ -254,7 +254,7 @@ TransportClient createClient(InetSocketAddress address)
       // Disable Nagle's Algorithm since we don't want packets to wait
       .option(ChannelOption.TCP_NODELAY, true)
       .option(ChannelOption.SO_KEEPALIVE, true)
-      .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, conf.connectionTimeoutMs())
+      .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, conf.connectionCreationTimeoutMs())

Review comment:
       The purpose is mainly to separate these two timeouts into two separate configs so we can tune them independently of each other.
   If you think it would be better to keep the default behavior, we can change the default value for this config back to 120s.
   However, as @otterc mentioned, a default of 120s for the connection establishment timeout is unnecessarily high.
   We want to quickly identify a `bad` node, but give a `busy` node sufficient time to respond with shuffle data.
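   To make the distinction concrete, here is a minimal, self-contained Netty sketch (not the actual TransportClientFactory/TransportConf change in this PR; the class name, the timeout values, and the IdleStateHandler wiring are illustrative assumptions) showing how the connection-establishment timeout can be bounded independently of the timeout applied to an already-established connection:

    import io.netty.bootstrap.Bootstrap;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.ChannelOption;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioSocketChannel;
    import io.netty.handler.timeout.IdleStateHandler;
    import java.util.concurrent.TimeUnit;

    public class TwoTimeoutClientSketch {
      // Illustrative values: fail fast when a node cannot even accept a connection...
      private static final int CONNECTION_CREATION_TIMEOUT_MS = 30_000;
      // ...but give a busy node ample time to respond with shuffle data.
      private static final int CONNECTION_IDLE_TIMEOUT_MS = 120_000;

      public static Bootstrap buildBootstrap(NioEventLoopGroup group) {
        return new Bootstrap()
          .group(group)
          .channel(NioSocketChannel.class)
          // Disable Nagle's Algorithm, mirroring the snippet under review.
          .option(ChannelOption.TCP_NODELAY, true)
          .option(ChannelOption.SO_KEEPALIVE, true)
          // Bounds only how long we wait for the TCP connection to be established.
          .option(ChannelOption.CONNECT_TIMEOUT_MILLIS, CONNECTION_CREATION_TIMEOUT_MS)
          .handler(new ChannelInitializer<SocketChannel>() {
            @Override
            protected void initChannel(SocketChannel ch) {
              // Bounds how long an established connection may sit idle before it is
              // treated as dead; this is the knob that should stay generous.
              ch.pipeline().addLast(new IdleStateHandler(
                  0, 0, CONNECTION_IDLE_TIMEOUT_MS, TimeUnit.MILLISECONDS));
            }
          });
      }
    }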




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org