Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2020/07/11 00:32:00 UTC

[jira] [Updated] (SPARK-23258) Should not split Arrow record batches based on row count

     [ https://issues.apache.org/jira/browse/SPARK-23258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-23258:
---------------------------------
    Labels:   (was: bulk-closed)

> Should not split Arrow record batches based on row count
> --------------------------------------------------------
>
>                 Key: SPARK-23258
>                 URL: https://issues.apache.org/jira/browse/SPARK-23258
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.3.0
>            Reporter: Bryan Cutler
>            Assignee: Bryan Cutler
>            Priority: Major
>
> Currently, when executing a scalar {{pandas_udf}} or using {{toPandas()}}, the Arrow record batches are split once the record count reaches a maximum value, which is configured with "spark.sql.execution.arrow.maxRecordsPerBatch".  This is not ideal because the number of columns is not taken into account; if there are many columns, a single batch can still be large enough to cause OOMs.  An alternative approach could be to look at the size of the Arrow buffers being used and cap it at a certain size.
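For context, a minimal PySpark sketch (not from the original report; the app name, data, and UDF are illustrative, and "spark.sql.execution.arrow.enabled" is the Spark 2.3-era flag for enabling Arrow transfer) showing where the row-count-only cap applies:

{code:python}
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf, PandasUDFType

spark = (SparkSession.builder
         .appName("arrow-batch-demo")  # illustrative name
         .config("spark.sql.execution.arrow.enabled", "true")
         # Cap each Arrow batch at 10,000 rows. This is the only knob:
         # the number (and width) of columns is not considered, which is
         # the problem described in this issue.
         .config("spark.sql.execution.arrow.maxRecordsPerBatch", "10000")
         .getOrCreate())

@pandas_udf("long", PandasUDFType.SCALAR)
def plus_one(v):
    # v is a pandas.Series holding one Arrow batch's worth of rows
    return v + 1

df = spark.range(1000000)
# Each Arrow batch sent to the Python worker holds at most 10,000 rows,
# regardless of how many bytes those rows actually occupy.
result = df.select(plus_one(df["id"]).alias("out"))
pdf = result.toPandas()
{code}

Under the alternative proposed above, the split point would instead be driven by the accumulated Arrow buffer size, so wide rows would yield fewer rows per batch.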



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org