Posted to issues@uniffle.apache.org by GitBox <gi...@apache.org> on 2022/08/05 09:49:03 UTC

[GitHub] [incubator-uniffle] zuston commented on a diff in pull request #128: [Improvement] Avoid starting unused threads in spark driver

zuston commented on code in PR #128:
URL: https://github.com/apache/incubator-uniffle/pull/128#discussion_r938650067


##########
client-spark/spark3/src/main/java/org/apache/spark/shuffle/RssShuffleManager.java:
##########
@@ -179,18 +179,20 @@ public RssShuffleManager(SparkConf conf, boolean isDriver) {
     LOG.info("Disable external shuffle service in RssShuffleManager.");
     taskToSuccessBlockIds = Maps.newConcurrentMap();
     taskToFailedBlockIds = Maps.newConcurrentMap();
-    // for non-driver executor, start a thread for sending shuffle data to shuffle server
-    LOG.info("RSS data send thread is starting");
-    eventLoop = defaultEventLoop;
-    eventLoop.start();
-    int poolSize = sparkConf.get(RssSparkConfig.RSS_CLIENT_SEND_THREAD_POOL_SIZE);
-    int keepAliveTime = sparkConf.get(RssSparkConfig.RSS_CLIENT_SEND_THREAD_POOL_KEEPALIVE);
-    threadPoolExecutor = new ThreadPoolExecutor(poolSize, poolSize * 2, keepAliveTime, TimeUnit.SECONDS,
-        Queues.newLinkedBlockingQueue(Integer.MAX_VALUE),
-        ThreadUtils.getThreadFactory("SendData-%d"));
+
     if (isDriver) {
       heartBeatScheduledExecutorService = Executors.newSingleThreadScheduledExecutor(
           ThreadUtils.getThreadFactory("rss-heartbeat-%d"));
+    } else {
+      // for non-driver executor, start a thread for sending shuffle data to shuffle server
+      LOG.info("RSS data send thread is starting");
+      eventLoop = defaultEventLoop;
+      eventLoop.start();
+      int poolSize = sparkConf.get(RssSparkConfig.RSS_CLIENT_SEND_THREAD_POOL_SIZE);
+      int keepAliveTime = sparkConf.get(RssSparkConfig.RSS_CLIENT_SEND_THREAD_POOL_KEEPALIVE);
+      threadPoolExecutor = new ThreadPoolExecutor(poolSize, poolSize * 2, keepAliveTime, TimeUnit.SECONDS,

Review Comment:
   OK. I will add.
   
   > threadPoolExecutor may be null; when you stop, Spark will throw an NPE.
   
   Could you give more info on why it would be null?
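   
   To illustrate the concern, here is a minimal, hypothetical sketch (class and field names are illustrative, not the actual RssShuffleManager code) of the pattern the diff introduces: the send pool is created only on executors, so stop() needs null checks to avoid an NPE on the driver side.
   
   import java.util.concurrent.Executors;
   import java.util.concurrent.LinkedBlockingQueue;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.ThreadPoolExecutor;
   import java.util.concurrent.TimeUnit;
   
   // Hypothetical sketch of the conditional-initialization pattern; not the real RssShuffleManager.
   public class ShuffleManagerSketch {
   
     private ScheduledExecutorService heartBeatScheduledExecutorService;
     private ThreadPoolExecutor threadPoolExecutor;
   
     public ShuffleManagerSketch(boolean isDriver, int poolSize, long keepAliveSeconds) {
       if (isDriver) {
         // The driver only needs the heartbeat scheduler.
         heartBeatScheduledExecutorService = Executors.newSingleThreadScheduledExecutor();
       } else {
         // Only executors create the data-send pool, so on the driver it stays null.
         threadPoolExecutor = new ThreadPoolExecutor(poolSize, poolSize * 2,
             keepAliveSeconds, TimeUnit.SECONDS, new LinkedBlockingQueue<>(Integer.MAX_VALUE));
       }
     }
   
     public void stop() {
       // Null checks keep stop() safe on both the driver and executor paths.
       if (heartBeatScheduledExecutorService != null) {
         heartBeatScheduledExecutorService.shutdownNow();
       }
       if (threadPoolExecutor != null) {
         threadPoolExecutor.shutdownNow();
       }
     }
   
     public static void main(String[] args) {
       new ShuffleManagerSketch(true, 4, 60L).stop();   // driver path: pool stays null, no NPE
       new ShuffleManagerSketch(false, 4, 60L).stop();  // executor path: pool created and shut down
     }
   }
   
   In the sketch, the driver path deliberately leaves threadPoolExecutor null; guarding the shutdown calls is what keeps stop() safe for both roles.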
-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@uniffle.apache.org
For additional commands, e-mail: issues-help@uniffle.apache.org