Posted to issues@spark.apache.org by "F. H. (Jira)" <ji...@apache.org> on 2022/01/14 15:51:00 UTC

[jira] [Commented] (SPARK-18591) Replace hash-based aggregates with sort-based ones if inputs already sorted

    [ https://issues.apache.org/jira/browse/SPARK-18591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17476233#comment-17476233 ] 

F. H. commented on SPARK-18591:
-------------------------------

I just ran into the same behavior. Even if I put ".sort(*group_keys)" directly before the groupBy, Spark still hash-repartitions the data. Is there a solution to this?
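
For reference, a minimal spark-shell sketch (not from the original thread; the table name `t` and the bucket count 8 are arbitrary placeholders) showing that a pre-sort does not remove the shuffle, and that bucketing on the grouping key is one way to avoid it:

{code}
// The pre-sort does not change the aggregation plan: explain() still shows
// a HashAggregate with an Exchange hashpartitioning(key, ...) beneath it.
val df = spark.range(1000).selectExpr("id AS key", "id % 10 AS value")
df.sort("key").groupBy("key").count().explain()

// Persisting the data bucketed (and sorted) by the grouping key lets the
// aggregate reuse that partitioning, so no Exchange is planned. Note the
// aggregate itself is still hash-based, which is what this ticket is about.
df.write.bucketBy(8, "key").sortBy("key").saveAsTable("t")
spark.table("t").groupBy("key").count().explain()
{code}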

> Replace hash-based aggregates with sort-based ones if inputs already sorted
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-18591
>                 URL: https://issues.apache.org/jira/browse/SPARK-18591
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 2.0.2
>            Reporter: Takeshi Yamamuro
>            Priority: Major
>              Labels: bulk-closed
>
> Spark currently uses sort-based aggregates only under limited conditions: the cases where it cannot use partial aggregates and hash-based ones.
> However, if the input ordering already satisfies the requirements of the sort-based aggregates, the sort-based ones appear to be faster than the hash-based ones.
> {code}
> ./bin/spark-shell --conf spark.sql.shuffle.partitions=1
> val df = spark.range(10000000).selectExpr("id AS key", "id % 10 AS value").sort($"key").cache
> def timer[R](block: => R): R = {
>   val t0 = System.nanoTime()
>   val result = block
>   val t1 = System.nanoTime()
>   println("Elapsed time: " + ((t1 - t0) / 1000000000.0) + "s")
>   result
> }
> timer {
>   df.groupBy("key").count().count
> }
> // codegen'd hash aggregate
> Elapsed time: 7.116962977s
> // non-codegen'd sort aggregate
> Elapsed time: 3.088816662s
> {code}
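> One way (sketched here as an aside, not part of the original benchmark) to confirm which physical aggregate ran, and whether whole-stage codegen applied, is to inspect the plan; operators prefixed with "*" in the explain() output are codegen'd:
> {code}
> // Show the chosen physical aggregate (HashAggregate vs. SortAggregate).
> df.groupBy("key").count().explain()
>
> // Dump the generated Java code itself for closer inspection.
> import org.apache.spark.sql.execution.debug._
> df.groupBy("key").count().debugCodegen()
> {code}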
> If codegen'd sort-based aggregates are supported (SPARK-16844), the performance gap seems to get even bigger:
> {code}
> // codegen'd sort aggregate
> Elapsed time: 1.645234684s
> {code} 
> Therefore, it'd be better to use sort-based ones in this case.


