Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/09/28 00:35:56 UTC

[GitHub] [iceberg] huaxingao commented on a diff in pull request #5872: push down min/max/count to iceberg

huaxingao commented on code in PR #5872:
URL: https://github.com/apache/iceberg/pull/5872#discussion_r981836950


##########
core/src/main/java/org/apache/iceberg/TableProperties.java:
##########
@@ -349,4 +349,7 @@ private TableProperties() {}
 
   public static final String UPSERT_ENABLED = "write.upsert.enabled";
   public static final boolean UPSERT_ENABLED_DEFAULT = false;
+
+  public static final String AGGREGATE_PUSHDOWN_ENABLED = "aggregate.pushdown.enabled";
+  public static final String AGGREGATE_PUSHDOWN_ENABLED_DEFAULT = "false";

Review Comment:
   Thanks for your comment!
   
   I actually thought about this when I wrote the code. The aggregate push down decision is made inside `SparkScanBuilder`. I was debating whether to build the aggregate row inside `SparkScanBuilder` or `SparkLocalScan`. It seemed more natural to build the aggregate row in `SparkLocalScan`, so I put it there. However, if I move this to `SparkScanBuilder`, then when building the aggregate row from statistics I can fall back to a normal scan if the statistics are not available. I will change to that approach.
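   To illustrate the fallback idea discussed above, here is a minimal, self-contained sketch (class and method names are hypothetical, not Iceberg's or Spark's actual API): the scan builder only accepts the pushdown when every requested column has a statistic, and otherwise signals the caller to fall back to scanning the data files.
   
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   // Hypothetical sketch of the fallback decision: build a MAX aggregate row
   // from column statistics if possible, otherwise return null so the caller
   // falls back to a regular scan. Names here are illustrative only.
   public class AggregatePushDownSketch {
   
     // column name -> known upper bound from file statistics; a missing entry
     // models a column for which statistics were not collected
     static Long maxFromStats(Map<String, Long> upperBounds, String column) {
       return upperBounds.get(column); // null means "no statistic available"
     }
   
     // Only report success (and build the aggregate row) when every requested
     // column has a statistic; otherwise return null to trigger the fallback.
     static Map<String, Long> tryBuildMaxRow(Map<String, Long> upperBounds, String... columns) {
       Map<String, Long> row = new HashMap<>();
       for (String col : columns) {
         Long max = maxFromStats(upperBounds, col);
         if (max == null) {
           return null; // statistics missing: do not push down
         }
         row.put(col, max);
       }
       return row;
     }
   }
   ```
   
   Doing this check in the scan builder (rather than after the scan is constructed) is what makes the fallback possible: the builder can simply decline the pushdown and let the engine plan an ordinary scan.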



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

