Posted to issues@spark.apache.org by "Stuart (JIRA)" <ji...@apache.org> on 2017/09/12 16:06:01 UTC
[jira] [Updated] (SPARK-21985) PySpark PairDeserializer is broken for double-zipped RDDs
[ https://issues.apache.org/jira/browse/SPARK-21985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Stuart updated SPARK-21985:
---------------------------
Description:
PySpark fails to deserialize double-zipped RDDs. For example, the following code used to work in Spark 2.0.2:
{{
>>> a = sc.parallelize('aaa')
>>> b = sc.parallelize('bbb')
>>> c = sc.parallelize('ccc')
>>> a_bc = a.zip( b.zip(c) )
>>> a_bc.collect()
[('a', ('b', 'c')), ('a', ('b', 'c')), ('a', ('b', 'c'))]
}}
But in Spark >=2.1.0, it fails:
{{
>>> a_bc.collect()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/magnetic/workspace/spark-2.2.0-bin-hadoop2.7/python/pyspark/rdd.py", line 810, in collect
    return list(_load_from_socket(port, self._jrdd_deserializer))
  File "/magnetic/workspace/spark-2.2.0-bin-hadoop2.7/python/pyspark/serializers.py", line 329, in _load_stream_without_unbatching
    if len(key_batch) != len(val_batch):
TypeError: object of type 'itertools.izip' has no len()
}}
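The mechanism behind the traceback can be reproduced outside Spark: when a zipped RDD is zipped again, the inner PairDeserializer hands the outer one a bare iterator as its batch, and iterators do not support {{len()}}. A minimal illustration, using Python 3's built-in `zip` as the analog of Python 2's `itertools.izip`:

```python
# zip() returns an iterator, which (like itertools.izip in Python 2)
# has no length, so any len() check on such a batch raises TypeError.
batch = zip('abc', 'def')

try:
    len(batch)
except TypeError as e:
    print(e)  # object of type 'zip' has no len()
```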
As the traceback shows, the error is triggered by [a check in the PairDeserializer class|https://github.com/apache/spark/blob/master/python/pyspark/serializers.py#L346-L348]:
{{
if len(key_batch) != len(val_batch):
    raise ValueError("Can not deserialize PairRDD with different number of items"
                     " in batches: (%d, %d)" % (len(key_batch), len(val_batch)))
}}
If that check is removed, then the example above works without error. Can the check simply be removed?
> PySpark PairDeserializer is broken for double-zipped RDDs
> ---------------------------------------------------------
>
> Key: SPARK-21985
> URL: https://issues.apache.org/jira/browse/SPARK-21985
> Project: Spark
> Issue Type: Bug
> Components: PySpark
> Affects Versions: 2.1.0, 2.1.1, 2.2.0
> Reporter: Stuart
> Labels: bug
>
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)