Posted to issues@spark.apache.org by "Nathan Howell (JIRA)" <ji...@apache.org> on 2014/08/06 09:29:11 UTC

[jira] [Created] (SPARK-2876) RDD.partitionBy loads entire partition into memory

Nathan Howell created SPARK-2876:
------------------------------------

             Summary: RDD.partitionBy loads entire partition into memory
                 Key: SPARK-2876
                 URL: https://issues.apache.org/jira/browse/SPARK-2876
             Project: Spark
          Issue Type: Bug
          Components: PySpark
    Affects Versions: 1.0.1
            Reporter: Nathan Howell


{{RDD.partitionBy}} fails with an OOM in the PySpark daemon process when given a relatively large dataset. The use of {{BatchedSerializer(UNLIMITED_BATCH_SIZE)}} is suspect; most other RDD methods use {{self._jrdd_deserializer}}.

{code}
y = x.keyBy(...)
z = y.partitionBy(512) # fails
z = y.repartition(512) # succeeds
{code}
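For illustration only (this is a standalone sketch, not Spark's actual serializer code): an unlimited batch size forces the entire partition to be materialized before pickling, whereas a bounded batch size keeps memory use proportional to the batch, not the partition. The function and variable names below are hypothetical.

{code}
import pickle

def serialize_unlimited(records):
    # One giant batch: every record must be held in memory at once,
    # so the footprint grows with partition size (the OOM pattern).
    yield pickle.dumps(list(records))

def serialize_batched(records, batch_size=1024):
    # Fixed-size batches: at most batch_size records are buffered
    # at a time, keeping the worker's memory footprint bounded.
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            yield pickle.dumps(batch)
            batch = []
    if batch:
        yield pickle.dumps(batch)

# 10,000 records stream through in 10 bounded chunks (9 full + 1 partial)
records = ((i, i * i) for i in range(10_000))
chunks = list(serialize_batched(records, batch_size=1024))
print(len(chunks))  # 10
{code}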



--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org