Posted to issues@spark.apache.org by "Ankur Dave (JIRA)" <ji...@apache.org> on 2015/01/24 04:37:34 UTC

[jira] [Resolved] (SPARK-5351) Can't zip RDDs with unequal numbers of partitions in ReplicatedVertexView.upgrade()

     [ https://issues.apache.org/jira/browse/SPARK-5351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ankur Dave resolved SPARK-5351.
-------------------------------
       Resolution: Fixed
    Fix Version/s: 1.2.1
                   1.3.0

Issue resolved by pull request 4136
https://github.com/apache/spark/pull/4136

> Can't zip RDDs with unequal numbers of partitions in ReplicatedVertexView.upgrade()
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-5351
>                 URL: https://issues.apache.org/jira/browse/SPARK-5351
>             Project: Spark
>          Issue Type: Bug
>          Components: GraphX
>            Reporter: Takeshi Yamamuro
>             Fix For: 1.3.0, 1.2.1
>
>
> If the value of 'spark.default.parallelism' does not match the number of partitions in the EdgePartition of EdgeRDDImpl,
> the following error occurs at ReplicatedVertexView.scala:72:
>
> object GraphTest extends Logging {
>   def run[VD: ClassTag, ED: ClassTag](graph: Graph[VD, ED]): VertexRDD[Int] = {
>     graph.aggregateMessages[Int](
>       ctx => {
>         ctx.sendToSrc(1)
>         ctx.sendToDst(2)
>       },
>       _ + _)
>   }
> }
>
> val g = GraphLoader.edgeListFile(sc, "graph.txt")
> val rdd = GraphTest.run(g)
> java.lang.IllegalArgumentException: Can't zip RDDs with unequal numbers of partitions
> 	at org.apache.spark.rdd.ZippedPartitionsBaseRDD.getPartitions(ZippedPartitionsRDD.scala:57)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:206)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:204)
> 	at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:206)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:204)
> 	at org.apache.spark.ShuffleDependency.<init>(Dependency.scala:82)
> 	at org.apache.spark.rdd.ShuffledRDD.getDependencies(ShuffledRDD.scala:80)
> 	at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:193)
> 	at org.apache.spark.rdd.RDD$$anonfun$dependencies$2.apply(RDD.scala:191)
>     ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org