Posted to issues@spark.apache.org by "Dongjoon Hyun (JIRA)" <ji...@apache.org> on 2016/06/27 17:35:51 UTC

[jira] [Updated] (SPARK-16052) Improve `CollapseRepartition` optimizer for Repartition/RepartitionBy

     [ https://issues.apache.org/jira/browse/SPARK-16052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun updated SPARK-16052:
----------------------------------
    Description: 
This issue improves `CollapseRepartition` to optimize adjacent combinations of **Repartition** and **RepartitionBy**: when one (re)partitioning operation sits directly on top of another, only the outermost one determines the final partitioning, so the inner exchange is redundant and can be removed. This issue also adds a test suite for this optimizer.

**Before**
{code}
scala> spark.range(10).repartition(1, $"id").repartition(1, $"id").explain
== Physical Plan ==
Exchange hashpartitioning(id#0L, 1)
+- Exchange hashpartitioning(id#0L, 1)
   +- *Range (0, 10, splits=8)

scala> spark.range(10).repartition(1, $"id").repartition($"id").explain
== Physical Plan ==
Exchange hashpartitioning(id#6L, 200)
+- Exchange hashpartitioning(id#6L, 1)
   +- *Range (0, 10, splits=8)
{code}

**After**
{code}
scala> spark.range(10).repartition(1, $"id").repartition(1, $"id").explain
== Physical Plan ==
Exchange hashpartitioning(id#0L, 1)
+- *Range (0, 10, splits=8)

scala> spark.range(10).repartition(1, $"id").repartition($"id").explain
== Physical Plan ==
Exchange hashpartitioning(id#6L, 200)
+- *Range (0, 10, splits=8)
{code}
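
For illustration only, a minimal sketch of the collapsing idea over a toy plan tree is shown below. The types and the `collapse` function are simplified stand-ins, not the actual Catalyst plan nodes or the real `CollapseRepartition` rule.
{code}
// Sketch: when one (re)partitioning node sits directly on top of another,
// only the outer one matters, so the inner one can be dropped.
// Plan, Scan, Repartition, RepartitionBy, and collapse are illustrative
// stand-ins, not Spark's Catalyst classes.
sealed trait Plan
case class Scan(name: String) extends Plan
case class Repartition(numPartitions: Int, child: Plan) extends Plan
case class RepartitionBy(exprs: Seq[String], numPartitions: Int, child: Plan) extends Plan

def collapse(plan: Plan): Plan = plan match {
  // Outer Repartition over any inner (re)partitioning: keep only the outer one.
  case Repartition(n, Repartition(_, child))           => collapse(Repartition(n, child))
  case Repartition(n, RepartitionBy(_, _, child))      => collapse(Repartition(n, child))
  // Outer RepartitionBy over any inner (re)partitioning: keep only the outer one.
  case RepartitionBy(e, n, Repartition(_, child))      => collapse(RepartitionBy(e, n, child))
  case RepartitionBy(e, n, RepartitionBy(_, _, child)) => collapse(RepartitionBy(e, n, child))
  case other                                           => other
}

// Mirrors the first example above:
//   collapse(RepartitionBy(Seq("id"), 1, RepartitionBy(Seq("id"), 1, Scan("range"))))
//   ==> RepartitionBy(Seq("id"), 1, Scan("range"))
{code}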

  was:
This issue adds a new optimizer, `CollapseRepartitionBy`.

**Before**
{code}
scala> spark.range(10).repartition(1, $"id").repartition(1, $"id").explain
== Physical Plan ==
Exchange hashpartitioning(id#0L, 1)
+- Exchange hashpartitioning(id#0L, 1)
   +- *Range (0, 10, splits=8)
{code}

**After**
{code}
scala> spark.range(10).repartition(1, $"id").repartition(1, $"id").explain
== Physical Plan ==
Exchange hashpartitioning(id#0L, 1)
+- *Range (0, 10, splits=8)
{code}

    Component/s:     (was: Optimizer)
                 SQL
        Summary: Improve `CollapseRepartition` optimizer for Repartition/RepartitionBy  (was: Add CollapseRepartitionBy optimizer)

> Improve `CollapseRepartition` optimizer for Repartition/RepartitionBy
> ---------------------------------------------------------------------
>
>                 Key: SPARK-16052
>                 URL: https://issues.apache.org/jira/browse/SPARK-16052
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Dongjoon Hyun
>
> This issue improves `CollapseRepartition` to optimize adjacent combinations of **Repartition** and **RepartitionBy**: when one (re)partitioning operation sits directly on top of another, only the outermost one determines the final partitioning, so the inner exchange is redundant and can be removed. This issue also adds a test suite for this optimizer.
> **Before**
> {code}
> scala> spark.range(10).repartition(1, $"id").repartition(1, $"id").explain
> == Physical Plan ==
> Exchange hashpartitioning(id#0L, 1)
> +- Exchange hashpartitioning(id#0L, 1)
>    +- *Range (0, 10, splits=8)
> scala> spark.range(10).repartition(1, $"id").repartition($"id").explain
> == Physical Plan ==
> Exchange hashpartitioning(id#6L, 200)
> +- Exchange hashpartitioning(id#6L, 1)
>    +- *Range (0, 10, splits=8)
> {code}
> **After**
> {code}
> scala> spark.range(10).repartition(1, $"id").repartition(1, $"id").explain
> == Physical Plan ==
> Exchange hashpartitioning(id#0L, 1)
> +- *Range (0, 10, splits=8)
> scala> spark.range(10).repartition(1, $"id").repartition($"id").explain
> == Physical Plan ==
> Exchange hashpartitioning(id#6L, 200)
> +- *Range (0, 10, splits=8)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org