Posted to issues@spark.apache.org by "Manu Zhang (Jira)" <ji...@apache.org> on 2020/08/31 06:52:00 UTC

[jira] [Created] (SPARK-32753) Deduplicating and repartitioning the same column create duplicate rows with AQE

Manu Zhang created SPARK-32753:
----------------------------------

             Summary: Deduplicating and repartitioning the same column create duplicate rows with AQE
                 Key: SPARK-32753
                 URL: https://issues.apache.org/jira/browse/SPARK-32753
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 3.0.0
            Reporter: Manu Zhang


To reproduce (note that AQE is disabled by default in Spark 3.0.0, so it must be turned on first):

spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.range(10).union(spark.range(10)).createOrReplaceTempView("v1")
val df = spark.sql("select id from v1 group by id distribute by id")
println(df.collect().toArray.mkString(","))
println(df.queryExecution.executedPlan)

// With AQE
[4],[0],[3],[2],[1],[7],[6],[8],[5],[9],[4],[0],[3],[2],[1],[7],[6],[8],[5],[9]

AdaptiveSparkPlan(isFinalPlan=true)
+- CustomShuffleReader local
   +- ShuffleQueryStage 0
      +- Exchange hashpartitioning(id#183L, 10), true
         +- *(3) HashAggregate(keys=[id#183L], functions=[], output=[id#183L])
            +- Union
               :- *(1) Range (0, 10, step=1, splits=2)
               +- *(2) Range (0, 10, step=1, splits=2)

// Without AQE
[4],[7],[0],[6],[8],[3],[2],[5],[1],[9]

*(4) HashAggregate(keys=[id#206L], functions=[], output=[id#206L])
+- Exchange hashpartitioning(id#206L, 10), true
   +- *(3) HashAggregate(keys=[id#206L], functions=[], output=[id#206L])
      +- Union
         :- *(1) Range (0, 10, step=1, splits=2)
         +- *(2) Range (0, 10, step=1, splits=2)
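
Reading the two plans side by side: without AQE, the final *(4) HashAggregate above the Exchange merges the partially deduplicated partitions, while in the AQE plan that final aggregate is missing and the shuffle is read by a local CustomShuffleReader, so rows deduplicated only within individual map partitions are never merged, which matches the duplicated output above. Until a fix lands, a possible workaround is to run the affected query with AQE off. The sketch below is only a workaround sketch, not the fix for this ticket; it assumes the v1 temp view from the reproduction and uses only the real spark.sql.adaptive.enabled config:

// Hedged workaround sketch: disable AQE for the affected query.
// Assumes the v1 temp view created in the reproduction above.
val previous = spark.conf.get("spark.sql.adaptive.enabled")
spark.conf.set("spark.sql.adaptive.enabled", "false")
try {
  val deduped = spark.sql("select id from v1 group by id distribute by id")
  println(deduped.collect().toArray.mkString(","))  // expect 10 distinct ids
} finally {
  spark.conf.set("spark.sql.adaptive.enabled", previous)  // restore the previous setting
}

The equivalent DataFrame form (df.distinct() followed by a repartition on the same column) should compile to the same group-by-plus-distribute plan shape, so it is presumably affected in the same way.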



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org