Posted to github@arrow.apache.org by "alamb (via GitHub)" <gi...@apache.org> on 2023/05/01 13:08:27 UTC

[GitHub] [arrow-datafusion] alamb commented on a diff in pull request #6166: feat: make threshold for using scalar update in aggregate configurable

alamb commented on code in PR #6166:
URL: https://github.com/apache/arrow-datafusion/pull/6166#discussion_r1181560644


##########
datafusion/common/src/config.rs:
##########
@@ -260,6 +263,23 @@ config_namespace! {
     }
 }
 
+config_namespace! {
+    /// Options related to aggregate execution
+    pub struct AggregateOptions {
+        /// Specifies the threshold for using `ScalarValue`s to update
+        /// accumulators during high-cardinality aggregations for each input batch.
+        ///
+        /// The aggregation is considered high-cardinality if the number of affected groups
+        /// is greater than or equal to `batch_size / scalar_update_factor`. In such cases,
+        /// `ScalarValue`s are utilized for updating accumulators, rather than the default
+        /// batch-slice approach. This can lead to performance improvements.
+        ///
+        /// By adjusting the `scalar_update_factor`, you can balance the trade-off between

Review Comment:
   💯  for the text helping users understand the tradeoff
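
   For readers following the quoted doc comment, the check it describes reduces to a single
   comparison per input batch. The sketch below is illustrative only: the function and parameter
   names are hypothetical, not DataFusion's actual implementation, and the values
   batch_size = 8192 and scalar_update_factor = 10 are assumed defaults for the example.

   // Illustrative sketch (not DataFusion's code) of the threshold the
   // `scalar_update_factor` option controls, per the quoted doc comment.

   /// Returns true when the aggregation is considered high-cardinality for this
   /// batch, i.e. when per-group `ScalarValue` updates should be used instead of
   /// the default batch-slice approach.
   fn use_scalar_update(
       batch_size: usize,
       scalar_update_factor: usize,
       num_affected_groups: usize,
   ) -> bool {
       // High cardinality: affected groups >= batch_size / scalar_update_factor
       num_affected_groups >= batch_size / scalar_update_factor
   }

   fn main() {
       // With the assumed values batch_size = 8192 and scalar_update_factor = 10,
       // the threshold works out to 819 affected groups per batch.
       assert!(use_scalar_update(8192, 10, 1000));  // above threshold: scalar updates
       assert!(!use_scalar_update(8192, 10, 100));  // below threshold: batch-slice path
       println!("threshold = {} groups", 8192 / 10);
   }

   Lowering scalar_update_factor raises the group-count threshold, so the scalar path is chosen
   less often; raising it has the opposite effect, which is the trade-off the doc comment goes on
   to describe.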



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: github-unsubscribe@arrow.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org