Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/07/26 19:02:04 UTC

[jira] [Commented] (SPARK-9361) Refactor new aggregation code to reduce the times of checking compatibility

    [ https://issues.apache.org/jira/browse/SPARK-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14642027#comment-14642027 ] 

Apache Spark commented on SPARK-9361:
-------------------------------------

User 'viirya' has created a pull request for this issue:
https://github.com/apache/spark/pull/7677

> Refactor new aggregation code to reduce the times of checking compatibility
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-9361
>                 URL: https://issues.apache.org/jira/browse/SPARK-9361
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>            Reporter: Liang-Chi Hsieh
>
> Currently, we call aggregate.Utils.tryConvert in many places to check if the logical.Aggregate can be run with the new aggregation code. But it looks like aggregate.Utils.tryConvert is expensive to run. We should call tryConvert only once, keep its value in logical.Aggregate, and reuse it.
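
The gist of the proposed change, as a minimal self-contained Scala sketch (the class and method names below are illustrative stand-ins, not Spark's actual Catalyst types or the real Utils.tryConvert): cache the result of the expensive conversion check on the aggregate node itself, so every later compatibility check reuses the stored value instead of recomputing it.

    object CachingSketch {
      // Stand-in for aggregate.Utils.tryConvert: an expensive check that
      // either returns a converted plan or None.
      def tryConvert(plan: Aggregate): Option[Aggregate] = {
        println("running expensive conversion check...")
        Some(plan) // pretend the plan is convertible
      }

      // Illustrative stand-in for the logical.Aggregate plan node.
      case class Aggregate(name: String) {
        // lazy val: tryConvert runs at most once, on first access;
        // every later caller reuses the cached result.
        lazy val newAggregation: Option[Aggregate] = tryConvert(this)
      }

      def main(args: Array[String]): Unit = {
        val agg = Aggregate("q1")
        agg.newAggregation // triggers the check once
        agg.newAggregation // reuses the cached result, no second check
      }
    }

Running the sketch prints the "expensive conversion check" message only once, which is the effect the refactoring aims for: one conversion attempt per aggregate node rather than one per call site.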



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org