Posted to issues@spark.apache.org by "Derek M Miller (JIRA)" <ji...@apache.org> on 2017/11/22 20:41:00 UTC

[jira] [Created] (SPARK-22584) dataframe write partitionBy out of disk/java heap issues

Derek M Miller created SPARK-22584:
--------------------------------------

             Summary: dataframe write partitionBy out of disk/java heap issues
                 Key: SPARK-22584
                 URL: https://issues.apache.org/jira/browse/SPARK-22584
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.2.0
            Reporter: Derek M Miller


I have been seeing some issues with partitionBy for the DataFrame writer. I currently have a file that is 6 MB, just for testing, and it has around 1487 rows and 21 columns. There is nothing out of the ordinary with the columns; each is either a DoubleType or a StringType. partitionBy is called with two columns of verified low cardinality: one column has 30 unique values and the other has 2.

```scala
import org.apache.spark.sql.SaveMode

df
  .write.partitionBy("first", "second")
  .mode(SaveMode.Overwrite)
  .parquet(s"$location$example/$corrId/")
```

When running this example on Amazon EMR with 5 r4.xlarge instances (30 GB of memory each), I am getting a Java heap out-of-memory error. I have maximizeResourceAllocation set, and verified it on the instances. I have even set it to false and explicitly set the driver and executor memory to 16g, but still hit the same issue. Occasionally I get an error about disk space instead, and the job seems to work if I use an r3.xlarge (which has a local SSD), but it seems strange that 6 MB of data would need to spill to disk.
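For reference, a minimal sketch of how the explicit memory settings might be applied (the app name and exact values here are illustrative, not from the report); spark.driver.memory normally has to be supplied at launch time rather than from application code:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative only: executor memory can be set when building the session,
// but driver memory usually must be passed at submit time
// (e.g. spark-submit --driver-memory 16g), since the driver JVM is already
// running by the time application code executes.
val spark = SparkSession.builder()
  .appName("partitionBy-oom-test") // hypothetical app name
  .config("spark.executor.memory", "16g")
  .getOrCreate()
```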

The problem mainly seems to be centered around partitioning by two or more columns versus one. If I partition by either column alone, I have no problems. It's also worth noting that the values of each partition column are evenly distributed.
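One mitigation commonly suggested for this kind of dynamic-partition write (not something tried in the report above) is to repartition by the partition columns before writing, so each task writes to only a few output partitions instead of holding a Parquet writer, and its in-memory row-group buffer, open for many partition values at once. A minimal sketch, reusing the names from the snippet above:

```scala
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.col

// Shuffle rows so that all rows sharing the same ("first", "second") values
// land in the same task, limiting how many partition directories each task
// writes concurrently.
df.repartition(col("first"), col("second"))
  .write.partitionBy("first", "second")
  .mode(SaveMode.Overwrite)
  .parquet(s"$location$example/$corrId/")
```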



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
