Posted to issues@spark.apache.org by "Lv, Qi (JIRA)" <ji...@apache.org> on 2014/11/27 09:15:12 UTC

[jira] [Commented] (SPARK-4315) PySpark pickling of pyspark.sql.Row objects is extremely inefficient

    [ https://issues.apache.org/jira/browse/SPARK-4315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14227376#comment-14227376 ] 

Lv, Qi commented on SPARK-4315:
-------------------------------

I'm interested in this issue, but I can't reproduce your problem. 
I constructed a very simple workload according to your description, like this:

from pyspark.sql import SQLContext
from pyspark import SparkContext

sc = SparkContext(appName="test")
sqlContext = SQLContext(sc)

# 1M rows across 12 partitions, with ~1000 distinct names so groupBy has work to do
lines = sc.parallelize(range(1000000), 12)
people = lines.map(lambda x: {"name": str(x % 1000), "age": x})

schemaPeople = sqlContext.inferSchema(people)
schemaPeople.registerAsTable("people")

# group the Row objects by a field, as in the report
grouped = schemaPeople.groupBy(lambda x: x.name)
grouped.collect()

I tested this against spark-1.1 (2f9b2bd) and spark-master (0fe54cff).
It finished in 3-4 seconds on both Spark versions.
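
If it helps to compare numbers, a plain wall-clock measurement around the collect() call is enough, e.g.:

import time

start = time.time()
grouped.collect()
print("collect() took %.1f seconds" % (time.time() - start))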

After disabling _restore_object's cache (by adding "return _create_cls(dataType)(obj)" as its first line), it becomes noticeably slow; I waited several minutes and then gave up.
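
To be explicit about the change, the pattern is roughly the following (a simplified sketch from memory, not the exact Spark 1.1 source; only _restore_object and _create_cls are names taken from pyspark/sql.py, the cache dict here is illustrative):

_cached_cls = {}  # illustrative: module-level cache of generated Row classes, keyed by schema

def _restore_object(dataType, obj):
    """Restore a Row object during unpickling."""
    return _create_cls(dataType)(obj)  # the added line: always rebuild the class, bypassing the cache

    # original cached path, unreachable once the line above is in place:
    cls = _cached_cls.get(dataType)
    if cls is None:
        cls = _create_cls(dataType)
        _cached_cls[dataType] = cls
    return cls(obj)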

Could you please give me more detailed information?
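
For example, a self-contained snippet along these lines would be ideal (this is only a hypothetical sketch of the workload you describe: the file name and field are made up, and jsonFile is just one way to get Row objects through SQLContext):

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="repro")
sqlContext = SQLContext(sc)

# load the data through SQLContext so the RDD really contains pyspark.sql.Row objects
people = sqlContext.jsonFile("people.json")   # hypothetical input file

# the slow operation from the report: group the Row objects by a field
grouped = people.groupBy(lambda row: row.name)
grouped.collect()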

> PySpark pickling of pyspark.sql.Row objects is extremely inefficient
> --------------------------------------------------------------------
>
>                 Key: SPARK-4315
>                 URL: https://issues.apache.org/jira/browse/SPARK-4315
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.1.0
>         Environment: Ubuntu, Python 2.7, Spark 1.1.0
>            Reporter: Adam Davison
>
> I'm working with an RDD of pyspark.sql.Row objects, created by reading a file with SQLContext in a local PySpark context.
> Operations on the RDD, such as data.groupBy(lambda x: x.field_name), are extremely slow (more than 10x slower than an equivalent Scala/Spark implementation). I expected it to be somewhat slower, but the difference was so large that I did some digging.
> Luckily it's fairly easy to add profiling to the Python workers. I see that the vast majority of time is spent in:
> spark-1.1.0-bin-cdh4/python/pyspark/sql.py:757(_restore_object)
> It seems that this line attempts to accelerate pickling of Rows with the use of a cache. Some debugging reveals that this cache becomes quite big (100s of entries). Disabling the cache by adding:
> return _create_cls(dataType)(obj)
> as the first line of _restore_object made my query run 5x faster, implying that the caching is not providing the intended acceleration...


