Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2019/05/07 22:55:00 UTC

[jira] [Created] (SPARK-27653) Add max_by() / min_by() SQL aggregate functions

Josh Rosen created SPARK-27653:
----------------------------------

             Summary: Add max_by() / min_by() SQL aggregate functions
                 Key: SPARK-27653
                 URL: https://issues.apache.org/jira/browse/SPARK-27653
             Project: Spark
          Issue Type: New Feature
          Components: SQL
    Affects Versions: 3.0.0
            Reporter: Josh Rosen


It would be useful if Spark SQL supported the {{max_by()}} SQL aggregate function. Quoting from the [Presto docs|https://prestodb.github.io/docs/current/functions/aggregate.html#max_by]:
{quote}max_by(x, y) → [same as x]
 Returns the value of x associated with the maximum value of y over all input values.
{quote}
{{min_by}} works similarly.
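
For illustration, against a hypothetical {{sales(region, employee, amount)}} table, the proposed functions would let a query pick the employee associated with the largest (or smallest) single sale per region. This is only a sketch of the intended usage, not a final Spark syntax:
{code:sql}
-- Hypothetical table: sales(region STRING, employee STRING, amount DOUBLE)
-- For each region, return the employee associated with the max / min amount.
SELECT
  region,
  max_by(employee, amount) AS top_seller,
  min_by(employee, amount) AS bottom_seller
FROM sales
GROUP BY region;
{code}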

Technically I can emulate this behavior using window functions, but the resulting syntax is much more verbose and less intuitive than {{max_by}} / {{min_by}}.
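
For comparison, here is one way the same result could be expressed today with a window function, against the same hypothetical {{sales}} table:
{code:sql}
-- Emulating max_by(employee, amount) per region with row_number():
SELECT region, employee AS top_seller
FROM (
  SELECT
    region,
    employee,
    row_number() OVER (PARTITION BY region ORDER BY amount DESC) AS rn
  FROM sales
) ranked
WHERE rn = 1;
{code}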



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org