Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2019/05/21 04:22:11 UTC
[jira] [Updated] (SPARK-18078) Add option to customize zipPartitions task preferred locations
[ https://issues.apache.org/jira/browse/SPARK-18078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hyukjin Kwon updated SPARK-18078:
---------------------------------
Labels: bulk-closed (was: )
> Add option to customize zipPartitions task preferred locations
> --------------------------------------------------------------
>
> Key: SPARK-18078
> URL: https://issues.apache.org/jira/browse/SPARK-18078
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Reporter: Weichen Xu
> Priority: Minor
> Labels: bulk-closed
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> The task preferred-locations strategy of `RDD.zipPartitions` uses the intersection of the zipped partitions' preferred locations; if the intersection is empty, it falls back to the union of those locations.
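The default strategy described above can be sketched as follows (a paraphrase for illustration, not Spark's actual internal code; the function name is hypothetical):

```scala
// Sketch of the default zipPartitions preferred-locations strategy:
// intersect all parents' locations; if that is empty, use their union.
def defaultZipLocations(prefs: Seq[Seq[String]]): Seq[String] = {
  val exact = prefs.reduce((x, y) => x.intersect(y))
  if (exact.nonEmpty) exact else prefs.flatten.distinct
}
```

For example, parents located on Seq("host1", "host2") and Seq("host2", "host3") yield Seq("host2"); fully disjoint locations yield the union of all hosts.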
> But in some special cases it is useful to customize the task preferred locations for better performance. A typical case is the *LinopMatrixAdjoint* operator in spark-tfocs: a distributed matrix (DMatrix) multiplied by a distributed vector (DVector) uses `RDD.zipPartitions` (the DMatrix and DVector RDDs must be partitioned in the same way beforehand).
> https://github.com/databricks/spark-tfocs/blob/master/src/main/scala/org/apache/spark/mllib/optimization/tfocs/fs/dvector/vector/LinopMatrixAdjoint.scala
> Usually, the `DMatrix` RDD will be much larger than the `DVector` RDD, so we would like the zipPartitions task to always run at the `DMatrix` partition's location; that gives better data locality than the default preferred-location strategy.
> I think it makes sense to add an option for this.
>
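Absent such an option, one possible workaround is a thin wrapper RDD that forwards partitions and computation to the zipped RDD but reports a chosen parent's locations. This is a hypothetical sketch, not part of the Spark API: `PreferLargeParentRDD`, `matrix`, `vector`, and `multiplyPartitions` are all illustrative names.

```scala
import scala.reflect.ClassTag
import org.apache.spark.{Partition, TaskContext}
import org.apache.spark.rdd.RDD

// Hypothetical wrapper (illustrative, not a Spark API): keeps the zipped
// RDD's partitions and data, but reports the preferred locations of a
// parent of our choosing (e.g. the large DMatrix RDD).
class PreferLargeParentRDD[T: ClassTag](
    zipped: RDD[T],                 // result of large.zipPartitions(small)(f)
    preferred: Int => Seq[String])  // partition index -> preferred hosts
  extends RDD[T](zipped) {          // one-to-one dependency on `zipped`

  override def getPartitions: Array[Partition] = zipped.partitions

  override def compute(split: Partition, context: TaskContext): Iterator[T] =
    zipped.iterator(split, context)

  // Bias scheduling toward the chosen parent's partition locations.
  override protected def getPreferredLocations(split: Partition): Seq[String] =
    preferred(split.index)
}

// Usage sketch, assuming matrix/vector are co-partitioned RDDs:
// val zipped = matrix.zipPartitions(vector, preservesPartitioning = true)(multiplyPartitions)
// val biased = new PreferLargeParentRDD(zipped,
//   i => matrix.preferredLocations(matrix.partitions(i)))
```

Note the wrapper only changes scheduling hints; the computation and lineage are unchanged, since it delegates `compute` to the zipped RDD via `iterator`, which also preserves any caching.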
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org