Posted to issues@spark.apache.org by "Diana Carroll (JIRA)" <ji...@apache.org> on 2015/08/11 16:38:45 UTC
[jira] [Created] (SPARK-9821) pyspark reduceByKey should allow a custom partitioner
Diana Carroll created SPARK-9821:
------------------------------------
Summary: pyspark reduceByKey should allow a custom partitioner
Key: SPARK-9821
URL: https://issues.apache.org/jira/browse/SPARK-9821
Project: Spark
Issue Type: Bug
Components: PySpark
Affects Versions: 1.3.0
Reporter: Diana Carroll
In Scala, I can supply a custom partitioner to reduceByKey (and other aggregation/repartitioning methods like aggregateByKey and combineByKey), but as far as I can tell from the PySpark API, there's no way to do the same in Python.
Here's an example of my code in Scala:
{code}weblogs.map(s => (getFileType(s), 1)).reduceByKey(new FileTypePartitioner(), _ + _){code}
But I can't figure out how to do the same in Python. The closest I can get is to call partitionBy before reduceByKey like so:
{code}weblogs.map(lambda s: (getFileType(s), 1)).partitionBy(3, hash_filetype).reduceByKey(lambda v1, v2: v1 + v2).collect(){code}
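(For context, getFileType and hash_filetype are simple helpers from my code; roughly like this, with the exact key-extraction logic being beside the point:)
{code}
def getFileType(line):
    # hypothetical sketch: pull the requested file's extension out of a weblog line
    return line.rsplit('.', 1)[-1]

def hash_filetype(filetype):
    # return an int for each key; partitionBy applies % numPartitions itself
    return hash(filetype)
{code}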
But that defeats the purpose, because I'm shuffling twice instead of once, so my performance is worse instead of better.
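What I'd like is for reduceByKey (and the other byKey methods) to accept a partition function directly, something along these lines (the extra partition-function argument is purely hypothetical here):
{code}
weblogs.map(lambda s: (getFileType(s), 1)) \
       .reduceByKey(lambda v1, v2: v1 + v2, 3, hash_filetype) \
       .collect()
{code}
That way the aggregation and the custom partitioning would happen in a single shuffle, matching the Scala behavior.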