Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/11/11 11:14:07 UTC

[GitHub] [spark] hvanhovell commented on a diff in pull request #38613: [SPARK-41005][CONNECT][PYTHON][FOLLOW-UP] Fetch/send partitions in parallel for Arrow based collect

hvanhovell commented on code in PR #38613:
URL: https://github.com/apache/spark/pull/38613#discussion_r1020124708


##########
connector/connect/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectStreamHandler.scala:
##########
@@ -144,36 +144,10 @@ class SparkConnectStreamHandler(responseObserver: StreamObserver[Response]) exte
             .toArrowBatchIterator(iter, schema, maxRecordsPerBatch, timeZoneId)
         }
 
-        val signal = new Object
-        val partitions = collection.mutable.Map.empty[Int, Array[Batch]]
-
-        val processPartition = (iter: Iterator[Batch]) => iter.toArray
-
         // This callback is executed by the DAGScheduler thread.
-        // After fetching a partition, it inserts the partition into the Map, and then
-        // wakes up the main thread.
-        val resultHandler = (partitionId: Int, partition: Array[Batch]) => {
-          signal.synchronized {
-            partitions(partitionId) = partition
-            signal.notify()
-          }
-          ()
-        }
-
-        spark.sparkContext.runJob(batches, processPartition, resultHandler)
-
-        // The main thread will wait until the 0-th partition is available,
-        // then send it to the client and wait for the next partition.
-        var currentPartitionId = 0
-        while (currentPartitionId < numPartitions) {
-          val partition = signal.synchronized {
-            while (!partitions.contains(currentPartitionId)) {
-              signal.wait()
-            }
-            partitions.remove(currentPartitionId).get
-          }
-
-          partition.foreach { case (bytes, count) =>
+        def writeBatches(arrowBatches: Array[Batch]): Unit = {

Review Comment:
   The reason I suggested using locks and writing the results from the main thread is exactly what this comment is trying to convey. You don't want these operations to happen inside the DAGScheduler thread: if you keep it blocked on something that isn't scheduling related, you stall all other scheduling. This is particularly bad in an environment where multiple users might be running code at the same time.
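   For context, below is a minimal, self-contained sketch of the handoff pattern the removed lines implement, with simulated worker threads standing in for the DAGScheduler (the object name `HandoffSketch` and the thread simulation are illustrative, not part of the PR). The result callback only stores the partition and signals, so the scheduler thread is never blocked; the in-order waiting and writing stay on the main thread.
   
   ```scala
   import scala.collection.mutable
   
   object HandoffSketch {
     // Mirrors the PR's Batch shape: serialized Arrow bytes plus a row count.
     type Batch = (Array[Byte], Long)
   
     def main(args: Array[String]): Unit = {
       val numPartitions = 4
       val signal = new Object
       val partitions = mutable.Map.empty[Int, Array[Batch]]
   
       // Runs on the (simulated) scheduler thread: store and signal, nothing else.
       val resultHandler = (partitionId: Int, partition: Array[Batch]) => {
         signal.synchronized {
           partitions(partitionId) = partition
           signal.notifyAll()
         }
       }
   
       // Simulate partitions completing out of order on scheduler-driven threads.
       (0 until numPartitions).reverse.foreach { id =>
         new Thread(() => resultHandler(id, Array((Array.emptyByteArray, 0L)))).start()
       }
   
       // Main thread: block *here*, not in the callback, and emit partitions in order.
       var currentPartitionId = 0
       while (currentPartitionId < numPartitions) {
         val partition = signal.synchronized {
           while (!partitions.contains(currentPartitionId)) signal.wait()
           partitions.remove(currentPartitionId).get
         }
         println(s"writing partition $currentPartitionId with ${partition.length} batch(es)")
         currentPartitionId += 1
       }
     }
   }
   ```
   
   Because the callback returns immediately after `notifyAll()`, a slow client can never stall the scheduler; any back-pressure is absorbed by the main thread instead.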



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

