Posted to issues@spark.apache.org by "Takeshi Yamamuro (JIRA)" <ji...@apache.org> on 2016/06/21 08:11:58 UTC

[jira] [Created] (SPARK-16094) Support HashAggregateExec for non-partial aggregates

Takeshi Yamamuro created SPARK-16094:
----------------------------------------

             Summary: Support HashAggregateExec for non-partial aggregates
                 Key: SPARK-16094
                 URL: https://issues.apache.org/jira/browse/SPARK-16094
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 1.6.1
            Reporter: Takeshi Yamamuro


Spark currently cannot use `HashAggregateExec` for non-partial aggregates because `Collect` (`CollectSet`/`CollectList`) uses a single shared buffer internally. Since `SortAggregateExec` is expensive for larger data, we would be better off fixing this.

This ticket is intended to change plans from
{code}
SortAggregate(key=[key#3077], functions=[collect_set(value#3078, 0, 0)], output=[key#3077,collect_set(value)#3088])
+- *Sort [key#3077 ASC], false, 0
   +- Exchange hashpartitioning(key#3077, 5)
      +- Scan ExistingRDD[key#3077,value#3078]
{code}

into

{code}
HashAggregate(keys=[key#3077], functions=[collect_set(value#3078, 0, 0)], output=[key#3077, collect_set(value)#3088])
+- Exchange hashpartitioning(key#3077, 5)
   +- Scan ExistingRDD[key#3077,value#3078]
{code}
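To illustrate the buffer issue behind this change: hash-based aggregation keeps one independent buffer per grouping key, so an aggregate whose implementation relies on a single shared buffer (as `Collect` does today) cannot participate. The following is a minimal, self-contained sketch of per-key collect_set-style aggregation, not Spark's actual implementation; all names are illustrative.

```scala
import scala.collection.mutable

// Sketch of hash-style collect_set: one mutable buffer per grouping key.
// HashAggregateExec requires per-group buffers like this, whereas
// sort-based aggregation can reuse a single buffer because rows for
// each group arrive contiguously after the sort.
def hashCollectSet[K, V](rows: Seq[(K, V)]): Map[K, Set[V]] = {
  val buffers = mutable.Map.empty[K, mutable.Set[V]]
  for ((k, v) <- rows)
    buffers.getOrElseUpdate(k, mutable.Set.empty[V]) += v
  buffers.map { case (k, s) => (k, s.toSet) }.toMap
}
```

For example, `hashCollectSet(Seq(("a", 1), ("a", 1), ("b", 2)))` yields `Map("a" -> Set(1), "b" -> Set(2))`, with duplicates collapsed per key. Fixing `Collect` to allocate its buffer per group is what lets the planner pick `HashAggregate` and drop the `Sort` from the plan above.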




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org