Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2021/12/13 13:40:40 UTC

[GitHub] [iceberg] RussellSpitzer commented on a change in pull request #3724: [Spark][Core]: Expire data files that read 0 pieces of data during rewriting

RussellSpitzer commented on a change in pull request #3724:
URL: https://github.com/apache/iceberg/pull/3724#discussion_r767765801



##########
File path: spark/v3.2/spark/src/test/java/org/apache/iceberg/spark/actions/TestRewriteDataFilesAction.java
##########
@@ -1055,12 +1265,11 @@ protected Table createTable(int files) {
     return table;
   }
 
-  protected Table createTablePartitioned(int partitions, int files) {
+  protected Table createTablePartitioned(int partitions, int files, Map<String, String> options) {

Review comment:
      Instead of adding a parameter and changing all of the existing usages, it may be cleaner to add an overload that does the work:
   ```java
   protected Table createTablePartitioned(int partitions, int files, Map<String, String> options) {
     // ... all of the existing table-creation code, now applying the options map ...
   }
   ```
   
   And keep a version that simply delegates to it:
   ```java
    protected Table createTablePartitioned(int partitions, int files) {
        return createTablePartitioned(partitions, files, Maps.newHashMap());
    }
   ```
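   
   For context, a minimal compilable sketch of the overload-plus-delegation pattern being suggested. The class wrapper and the `Table` stand-in below are hypothetical additions for illustration only; the real test class returns org.apache.iceberg.Table and typically uses the relocated Guava `Maps.newHashMap()` helper:
   ```java
   import java.util.HashMap;
   import java.util.Map;
   
   class CreateTableSketch {
     static final class Table {}  // hypothetical stand-in for org.apache.iceberg.Table
   
     // Full version: performs the table creation and honors the supplied options.
     protected Table createTablePartitioned(int partitions, int files, Map<String, String> options) {
       Table table = new Table();
       // ... existing table-creation code would go here, applying the options map ...
       return table;
     }
   
     // Thin overload: existing call sites such as createTablePartitioned(4, 2)
     // keep compiling unchanged and receive an empty options map.
     protected Table createTablePartitioned(int partitions, int files) {
       return createTablePartitioned(partitions, files, new HashMap<>());
     }
   }
   ```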




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


