Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/10/23 02:04:42 UTC

[GitHub] [spark] TomokoKomiyama edited a comment on issue #26192: [SPARK-29535][SQL] ADD some aggregate functions for Column in RelationalGroupedDataset.scala

URL: https://github.com/apache/spark/pull/26192#issuecomment-545224163
 
 
   @HyukjinKwon 
   We can use `agg`, but I think it would be easier for users if these aggregation functions accepted Column arguments in the same way they accept String ones:
   `df.groupBy("_c0").max("_c1")`
   `df.groupBy("_c0").max($"_c1")`
   The other functions accept String parameters only for legacy reasons, and those overloads will be removed someday, right?
   For that time, I think it would be better if these aggregation functions (max, min, ...) could also accept Column-typed parameters.
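   To illustrate the shape of the proposal, here is a minimal, self-contained Scala sketch (not the real Spark API; `Column` and `GroupedData` below are stand-ins) showing a String-based `max` alongside the proposed Column-based overload delegating to it:

   ```scala
   // Hypothetical stand-in for Spark's Column; only the name is modeled here.
   case class Column(name: String)

   // Hypothetical stand-in for RelationalGroupedDataset, holding per-column values.
   class GroupedData(rows: Map[String, Seq[Int]]) {
     // Existing String-based signature, mirroring df.groupBy(...).max("_c1")
     def max(colName: String): Int = rows(colName).max
     // Proposed Column-based overload, mirroring df.groupBy(...).max($"_c1");
     // it simply delegates to the String version.
     def max(col: Column): Int = max(col.name)
   }

   val grouped = new GroupedData(Map("_c1" -> Seq(3, 7, 5)))
   println(grouped.max("_c1"))         // String overload -> 7
   println(grouped.max(Column("_c1"))) // Column overload, same result -> 7
   ```

   The point of the overload is only ergonomics: both calls resolve to the same aggregation, so users can stay in the Column world without switching to `agg`.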

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org