Posted to reviews@spark.apache.org by cloud-fan <gi...@git.apache.org> on 2018/10/23 11:52:34 UTC
[GitHub] spark issue #22144: [SPARK-24935][SQL] : Problem with Executing Hive UDF's f...
Github user cloud-fan commented on the issue:
https://github.com/apache/spark/pull/22144
My feeling is that Hive compatibility is not that important to Spark at this point. *ALL* aggregate functions in Spark (including Spark UDAFs) support partial aggregation, but now we would need to complicate the aggregation framework to support non-partial-able aggregate functions, only for a few Hive UDAFs.
Unless there is a simple workaround, or we can justify that Spark needs non-partial-able aggregate functions, IMO it's not worth introducing this feature.
BTW this PR doesn't even have a test, so I'm not sure whether a simple workaround is possible here.
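
For context, the partial-aggregation contract the comment refers to can be sketched in plain Java. This is an illustrative mock of the shape of such an interface (it is not Spark's actual `Aggregator`/`UserDefinedAggregateFunction` API, and the names here are invented for the sketch): each partition folds its rows into a local buffer, the partial buffers are merged across partitions, and only then is the final result produced. A non-partial-able Hive UDAF is one that cannot supply the `merge` step.

```java
// Illustrative sketch of the partial-aggregation contract (NOT Spark's
// real API): per-partition update, cross-partition merge, final finish.
interface PartialAgg<IN, BUF, OUT> {
    BUF zero();                     // empty aggregation buffer
    BUF update(BUF buf, IN in);     // fold one input row into a local buffer
    BUF merge(BUF b1, BUF b2);      // combine partial buffers; this is the
                                    // step a non-partial-able UDAF lacks
    OUT finish(BUF buf);            // produce the final result
}

// Buffer for an average: running (sum, count).
final class AvgBuf {
    final double sum;
    final long count;
    AvgBuf(double sum, long count) { this.sum = sum; this.count = count; }
}

// Average is partial-able because (sum, count) buffers merge cleanly.
class Avg implements PartialAgg<Double, AvgBuf, Double> {
    public AvgBuf zero() { return new AvgBuf(0.0, 0L); }
    public AvgBuf update(AvgBuf b, Double in) { return new AvgBuf(b.sum + in, b.count + 1); }
    public AvgBuf merge(AvgBuf a, AvgBuf b) { return new AvgBuf(a.sum + b.sum, a.count + b.count); }
    public Double finish(AvgBuf b) { return b.sum / b.count; }
}

public class PartialAggDemo {
    public static void main(String[] args) {
        Avg avg = new Avg();
        // Simulate two partitions aggregated independently, then merged --
        // the pattern every Spark aggregate function currently supports.
        AvgBuf p1 = avg.zero();
        for (double v : new double[]{1, 2, 3}) p1 = avg.update(p1, v);
        AvgBuf p2 = avg.zero();
        for (double v : new double[]{4, 5}) p2 = avg.update(p2, v);
        System.out.println(avg.finish(avg.merge(p1, p2))); // prints 3.0
    }
}
```

Supporting a function without `merge` would force the planner to ship all rows for a group to a single node before aggregating, which is the framework complication the comment objects to.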
---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org