Posted to reviews@spark.apache.org by "heyihong (via GitHub)" <gi...@apache.org> on 2023/08/03 13:20:31 UTC

[GitHub] [spark] heyihong commented on a diff in pull request #42321: [SPARK-44657][CONNECT] Fix incorrect limit handling in ArrowBatchWithSchemaIterator and config parsing of CONNECT_GRPC_ARROW_MAX_BATCH_SIZE

heyihong commented on code in PR #42321:
URL: https://github.com/apache/spark/pull/42321#discussion_r1283195331


##########
connector/connect/server/src/main/scala/org/apache/spark/sql/connect/execution/SparkConnectPlanExecution.scala:
##########
@@ -100,7 +100,8 @@ private[execution] class SparkConnectPlanExecution(executeHolder: ExecuteHolder)
     val maxRecordsPerBatch = spark.sessionState.conf.arrowMaxRecordsPerBatch
     val timeZoneId = spark.sessionState.conf.sessionLocalTimeZone
     // Conservatively sets it 70% because the size is not accurate but estimated.
-    val maxBatchSize = (SparkEnv.get.conf.get(CONNECT_GRPC_ARROW_MAX_BATCH_SIZE) * 0.7).toLong
+    val maxBatchSize =

Review Comment:
   Should we add some tests to prevent future regressions?
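   For regression coverage, a minimal ScalaTest sketch along these lines could work. Note that `ArrowBatchSizing.conservativeLimit` below is a hypothetical stand-in for the 70% scaling shown in the diff above, not the helper actually touched by this PR; a real test would exercise `CONNECT_GRPC_ARROW_MAX_BATCH_SIZE` and `ArrowBatchWithSchemaIterator` directly.

   ```scala
   // Sketch only: hypothetical regression test for the conservative batch-size limit.
   import org.scalatest.funsuite.AnyFunSuite

   object ArrowBatchSizing {
     // Mirrors the idea in SparkConnectPlanExecution: scale the configured maximum
     // down to 70% because Arrow batch sizes are estimated, not exact.
     def conservativeLimit(configuredMaxBytes: Long): Long =
       (configuredMaxBytes * 0.7).toLong
   }

   class ArrowBatchSizingSuite extends AnyFunSuite {
     test("conservative limit stays below the configured maximum") {
       val configured = 4L * 1024 * 1024 // e.g. a 4 MiB max batch size
       val limit = ArrowBatchSizing.conservativeLimit(configured)
       assert(limit == (configured * 0.7).toLong)
       assert(limit < configured)
     }

     test("limit is still positive for small but sane configs") {
       assert(ArrowBatchSizing.conservativeLimit(1024L) > 0)
     }
   }
   ```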



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org