Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2019/08/27 06:46:00 UTC

[jira] [Comment Edited] (SPARK-28881) toPandas with Arrow returns an empty DataFrame when the result size exceeds `spark.driver.maxResultSize`

    [ https://issues.apache.org/jira/browse/SPARK-28881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916440#comment-16916440 ] 

Hyukjin Kwon edited comment on SPARK-28881 at 8/27/19 6:45 AM:
---------------------------------------------------------------

This scenario is a problem in branch-2.4, where it silently returns an empty DataFrame.
It is fixed in master as part of SPARK-27992.

{code}
./bin/pyspark --conf spark.driver.maxResultSize=1m
spark.conf.set("spark.sql.execution.arrow.enabled", True)
spark.range(10000000).toPandas()
{code}

{code}
Empty DataFrame
Columns: [id]
Index: []
{code}
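A back-of-envelope estimate (my arithmetic, not from the ticket) shows why this repro exceeds the limit: {{spark.range(10000000)}} produces 10 million 64-bit longs, so the collected result is at least ~80 MB of raw data before any Arrow framing overhead, far above {{spark.driver.maxResultSize=1m}}:

```python
# Rough size estimate (assumption, not from the ticket): 10 million rows of
# 64-bit longs, compared against the configured 1 MiB driver result limit.
ROWS = 10_000_000
BYTES_PER_LONG = 8
MAX_RESULT_SIZE = 1 * 1024 * 1024  # "1m" = 1 MiB

raw_bytes = ROWS * BYTES_PER_LONG
print(raw_bytes)                       # 80000000
print(raw_bytes > MAX_RESULT_SIZE)     # True: collect must be aborted
```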

It can also return partial results:

{code}
./bin/pyspark --conf spark.driver.maxResultSize=1m
spark.conf.set("spark.sql.execution.arrow.enabled", True)
spark.range(0, 330000, 1, 100).toPandas()
{code}

{code}
...
75897  75897
75898  75898
75899  75899

[75900 rows x 1 columns]
{code}
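Since branch-2.4 raises no error in this case, a caller has to detect the truncation manually. A minimal sketch of such a guard (a hypothetical helper, not part of Spark's API), comparing an expected row count obtained separately (e.g. from {{df.count()}}) against the length of the returned pandas DataFrame:

```python
# Hypothetical guard (not part of Spark's API): detect a silently truncated
# toPandas() result by comparing the returned pandas length against an
# expected row count obtained separately, e.g. via df.count().

def check_topandas_complete(expected_rows: int, returned_rows: int) -> None:
    """Raise if toPandas() silently dropped rows (the SPARK-28881 symptom)."""
    if returned_rows != expected_rows:
        raise RuntimeError(
            f"toPandas() returned {returned_rows} of {expected_rows} rows; "
            "the result may have been truncated because it exceeded "
            "spark.driver.maxResultSize"
        )

# The partial-result repro above (75900 of 330000 rows) would trip the guard:
# check_topandas_complete(330000, 75900)  -> RuntimeError
check_topandas_complete(100, 100)  # complete result: no error
```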



> toPandas with Arrow returns an empty DataFrame when the result size exceeds `spark.driver.maxResultSize`
> --------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-28881
>                 URL: https://issues.apache.org/jira/browse/SPARK-28881
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 2.4.0, 2.4.1, 2.4.2, 2.4.3
>            Reporter: Hyukjin Kwon
>            Priority: Major
>
> {code}
> ./bin/pyspark --conf spark.driver.maxResultSize=1m
> spark.conf.set("spark.sql.execution.arrow.enabled",True)
> spark.range(10000000).toPandas()
> {code}
> The code above returns an empty DataFrame.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org