Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/11/11 03:44:18 UTC

[GitHub] [spark] zhengruifeng opened a new pull request, #38612: [SPARK-41108][CONNECT] Control the max size of arrow batch

zhengruifeng opened a new pull request, #38612:
URL: https://github.com/apache/spark/pull/38612

   ### What changes were proposed in this pull request?
   
   Control the max size of an Arrow batch: track an estimated batch size while writing rows, and cut the batch once the estimate reaches the configured maximum.
   
   
   ### Why are the changes needed?
   
   As per the suggestion in https://github.com/apache/spark/pull/38468#discussion_r1018951362.
   
   
   ### Does this PR introduce _any_ user-facing change?
   No
   
   
   ### How was this patch tested?
   Existing tests.
   


[GitHub] [spark] zhengruifeng commented on a diff in pull request #38612: [SPARK-41108][CONNECT] Control the max size of arrow batch

Posted by GitBox <gi...@apache.org>.
zhengruifeng commented on code in PR #38612:
URL: https://github.com/apache/spark/pull/38612#discussion_r1019843494


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##########
@@ -161,17 +166,23 @@ private[sql] object ArrowConverters extends Logging {
         val writeChannel = new WriteChannel(Channels.newChannel(out))
 
         var rowCount = 0L
+        var estimatedBatchSize = arrowSchemaSize
         Utils.tryWithSafeFinally {
-          while (rowIter.hasNext && (maxRecordsPerBatch <= 0 || rowCount < maxRecordsPerBatch)) {
+          // Always write the schema.
+          MessageSerializer.serialize(writeChannel, arrowSchema)
+
+          // Always write the first row.
+          while (rowIter.hasNext && (rowCount == 0 || estimatedBatchSize < maxBatchSize)) {
             val row = rowIter.next()
             arrowWriter.write(row)
+            estimatedBatchSize += row.asInstanceOf[UnsafeRow].getSizeInBytes

Review Comment:
   Will update to use `maxBatchSize * 0.7`.
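For illustration, a sketch of how that follow-up might land in the loop from the diff above; the exact form here is an assumption, and only the `0.7` factor comes from the review suggestion:

```
// Hypothetical follow-up: leave ~30% headroom, since the UnsafeRow-based
// estimate can undershoot the size of the Arrow batch actually produced.
val sizeLimit = (maxBatchSize * 0.7).toLong
while (rowIter.hasNext && (rowCount == 0 || estimatedBatchSize < sizeLimit)) {
  // ... write the row and update estimatedBatchSize as before ...
}
```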



[GitHub] [spark] HyukjinKwon closed pull request #38612: [SPARK-41108][CONNECT] Control the max size of arrow batch

Posted by GitBox <gi...@apache.org>.
HyukjinKwon closed pull request #38612: [SPARK-41108][CONNECT] Control the max size of arrow batch
URL: https://github.com/apache/spark/pull/38612


[GitHub] [spark] HyukjinKwon commented on a diff in pull request #38612: [SPARK-41108][CONNECT] Control the max size of arrow batch

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #38612:
URL: https://github.com/apache/spark/pull/38612#discussion_r1019838326


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##########
@@ -161,17 +166,23 @@ private[sql] object ArrowConverters extends Logging {
         val writeChannel = new WriteChannel(Channels.newChannel(out))
 
         var rowCount = 0L
+        var estimatedBatchSize = arrowSchemaSize
         Utils.tryWithSafeFinally {
-          while (rowIter.hasNext && (maxRecordsPerBatch <= 0 || rowCount < maxRecordsPerBatch)) {
+          // Always write the schema.
+          MessageSerializer.serialize(writeChannel, arrowSchema)
+
+          // Always write the first row.
+          while (rowIter.hasNext && (rowCount == 0 || estimatedBatchSize < maxBatchSize)) {
             val row = rowIter.next()
             arrowWriter.write(row)
+            estimatedBatchSize += row.asInstanceOf[UnsafeRow].getSizeInBytes

Review Comment:
   The size of the message should be based on Arrow, but we can only know the size of the batch once the Arrow batch has actually been created.
   
   So I am fine with the current approach. I do believe that an `UnsafeRow` is generally larger than the corresponding Arrow batch.
   
   One nit: we should probably cut batches at a size lower than `maxBatchSize` to be conservative, for example `maxBatchSize * 0.7`.
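A minimal standalone sketch of the size-capped batching idea with that conservative factor applied; this is a simplified illustration, not the actual `ArrowConverters` code, and `takeBatch` and its `sizeOf` callback are assumed names:

```
import scala.collection.mutable.ArrayBuffer

// Collect rows until the running size estimate reaches a conservative
// fraction of the limit. Always emit at least one row per batch.
def takeBatch[T](rows: Iterator[T], maxBatchSize: Long, sizeOf: T => Long): Seq[T] = {
  val sizeLimit = (maxBatchSize * 0.7).toLong
  val batch = ArrayBuffer.empty[T]
  var estimated = 0L
  while (rows.hasNext && (batch.isEmpty || estimated < sizeLimit)) {
    val row = rows.next()
    batch += row
    estimated += sizeOf(row)
  }
  batch.toSeq
}
```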



[GitHub] [spark] HyukjinKwon commented on pull request #38612: [SPARK-41108][CONNECT] Control the max size of arrow batch

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on PR #38612:
URL: https://github.com/apache/spark/pull/38612#issuecomment-1311251418

   Merged to master.


[GitHub] [spark] HyukjinKwon commented on pull request #38612: [SPARK-41108][CONNECT] Control the max size of arrow batch

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on PR #38612:
URL: https://github.com/apache/spark/pull/38612#issuecomment-1311251340

   Let me merge this and refactor it out; I am actually working on that.


[GitHub] [spark] zhengruifeng commented on a diff in pull request #38612: [SPARK-41108][CONNECT] Control the max size of arrow batch

Posted by GitBox <gi...@apache.org>.
zhengruifeng commented on code in PR #38612:
URL: https://github.com/apache/spark/pull/38612#discussion_r1019806473


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/arrow/ArrowConverters.scala:
##########
@@ -161,17 +166,23 @@ private[sql] object ArrowConverters extends Logging {
         val writeChannel = new WriteChannel(Channels.newChannel(out))
 
         var rowCount = 0L
+        var estimatedBatchSize = arrowSchemaSize
         Utils.tryWithSafeFinally {
-          while (rowIter.hasNext && (maxRecordsPerBatch <= 0 || rowCount < maxRecordsPerBatch)) {
+          // Always write the schema.
+          MessageSerializer.serialize(writeChannel, arrowSchema)
+
+          // Always write the first row.
+          while (rowIter.hasNext && (rowCount == 0 || estimatedBatchSize < maxBatchSize)) {
             val row = rowIter.next()
             arrowWriter.write(row)
+            estimatedBatchSize += row.asInstanceOf[UnsafeRow].getSizeInBytes

Review Comment:
   Refer to how the size is computed in [BroadcastExchange](https://github.com/apache/spark/blob/e17d8ecabcad6e84428752b977120ff355a4007a/sql/core/src/main/scala/org/apache/spark/sql/execution/exchange/BroadcastExchangeExec.scala#L150-L158).
   
   But I am not 100% sure; should I use this instead?
   ```
   row match {
     // Exact serialized size when the row is an UnsafeRow.
     case unsafe: UnsafeRow => estimatedBatchSize += unsafe.getSizeInBytes
     // Heuristic object-graph estimate for other InternalRow implementations.
     case _ => estimatedBatchSize += SizeEstimator.estimate(row)
   }
   ```
   
   cc @HyukjinKwon 
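For reference, a self-contained sketch of that fallback as a helper; the helper itself is hypothetical, while `SizeEstimator` is Spark's `org.apache.spark.util.SizeEstimator`, which walks the object graph reflectively and is considerably more expensive than reading `UnsafeRow.getSizeInBytes`:

```
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.UnsafeRow
import org.apache.spark.util.SizeEstimator

// Hypothetical helper: exact serialized size for UnsafeRow, heuristic
// object-graph estimate for any other InternalRow implementation.
def estimatedRowSize(row: InternalRow): Long = row match {
  case unsafe: UnsafeRow => unsafe.getSizeInBytes.toLong
  case other => SizeEstimator.estimate(other)
}
```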


