Posted to issues@iceberg.apache.org by "szehon-ho (via GitHub)" <gi...@apache.org> on 2023/04/11 00:36:22 UTC

[GitHub] [iceberg] szehon-ho commented on a diff in pull request #7175: Core, Spark 3.3: Add FileRewriter API

szehon-ho commented on code in PR #7175:
URL: https://github.com/apache/iceberg/pull/7175#discussion_r1160954691


##########
core/src/main/java/org/apache/iceberg/actions/FileRewriter.java:
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+
+/**
+ * A class for rewriting content files.
+ *
+ * @param <T> the Java type of tasks to read content files
+ * @param <F> the Java type of content files
+ */
+public interface FileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>> {
+  /** Returns a description for this rewriter. */
+  default String description() {
+    return getClass().getName();
+  }
+
+  /**
+   * Returns a set of supported options for this rewriter. This is an allowed-list and any options
+   * not specified here will be rejected at runtime.
+   *
+   * @return returns a set of supported options

Review Comment:
   Nit: extra "returns"
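
The allow-list contract described in the quoted javadoc (options not in `validOptions()` are rejected at runtime) could be sketched roughly like this — class and method names here are illustrative stand-ins, not the actual Iceberg implementation:

```java
import java.util.Map;
import java.util.Set;

public class OptionValidationSketch {
  // stand-in for FileRewriter#validOptions()
  static Set<String> validOptions() {
    return Set.of("target-file-size-bytes", "min-file-size-bytes");
  }

  // stand-in for the runtime check performed when the rewriter is configured
  static void validate(Map<String, String> options) {
    for (String key : options.keySet()) {
      if (!validOptions().contains(key)) {
        throw new IllegalArgumentException("Unsupported option: " + key);
      }
    }
  }

  public static void main(String[] args) {
    validate(Map.of("target-file-size-bytes", "536870912")); // accepted
    try {
      validate(Map.of("unknown-option", "1")); // not in the allow-list
      throw new AssertionError("expected rejection");
    } catch (IllegalArgumentException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```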



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedDataRewriter.java:
##########
@@ -0,0 +1,110 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import java.util.stream.Collectors;
+import org.apache.iceberg.DataFile;
+import org.apache.iceberg.FileScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.TableProperties;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Iterables;
+import org.apache.iceberg.util.PropertyUtil;
+
+public abstract class SizeBasedDataRewriter extends SizeBasedFileRewriter<FileScanTask, DataFile> {
+
+  /**
+   * The minimum number of deletes that needs to be associated with a data file for it to be
+   * considered for rewriting. If a data file has this number of deletes or more, it will be
+   * rewritten regardless of its file size determined by {@link #MIN_FILE_SIZE_BYTES} and {@link
+   * #MAX_FILE_SIZE_BYTES}. If a file group contains a file that satisfies this condition, the file
+   * group will be rewritten regardless of the number of files in the file group determined by
+   * {@link #MIN_INPUT_FILES}
+   *
+   * <p>Defaults to Integer.MAX_VALUE, which means this feature is not enabled by default.
+   */
+  public static final String DELETE_FILE_THRESHOLD = "delete-file-threshold";
+
+  public static final int DELETE_FILE_THRESHOLD_DEFAULT = Integer.MAX_VALUE;
+
+  private int deleteFileThreshold;
+
+  protected SizeBasedDataRewriter(Table table) {
+    super(table);
+  }
+
+  @Override
+  public Set<String> validOptions() {
+    return ImmutableSet.<String>builder()
+        .addAll(super.validOptions())
+        .add(DELETE_FILE_THRESHOLD)
+        .build();
+  }
+
+  @Override
+  public void init(Map<String, String> options) {
+    super.init(options);
+    this.deleteFileThreshold = deleteFileThreshold(options);
+  }
+
+  @Override
+  protected Iterable<FileScanTask> doSelectFiles(Iterable<FileScanTask> tasks) {
+    return Iterables.filter(tasks, task -> hasSuboptimalSize(task) || hasTooManyDeletes(task));
+  }
+
+  private boolean hasTooManyDeletes(FileScanTask task) {

Review Comment:
   Remove 'has' so it's just 'tooManyDeletes'?



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.math.LongMath;
+import org.apache.iceberg.util.BinPacking;
+import org.apache.iceberg.util.PropertyUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A file rewriter that determines which files to rewrite based on their size.
+ *
+ * <p>If files are smaller than the {@link #MIN_FILE_SIZE_BYTES} threshold or larger than the {@link
+ * #MAX_FILE_SIZE_BYTES} threshold, they are considered targets for being rewritten.
+ *
+ * <p>Once selected, files are grouped based on the {@link BinPacking bin-packing algorithm} into
+ * groups of no more than {@link #MAX_FILE_GROUP_SIZE_BYTES}. Groups will be actually rewritten if
+ * they contain more than {@link #MIN_INPUT_FILES} or if they would produce at least one file of
+ * {@link #TARGET_FILE_SIZE_BYTES}.
+ *
+ * <p>Note that implementations may add extra conditions for selecting files or filtering groups.
+ */
+abstract class SizeBasedFileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>>
+    implements FileRewriter<T, F> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(SizeBasedFileRewriter.class);
+
+  /** The target output file size that this file rewriter will attempt to generate. */
+  public static final String TARGET_FILE_SIZE_BYTES = "target-file-size-bytes";
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be

Review Comment:
   The word 'adjusts' seems strange here (the file itself is not changed?).
   
   Also, 'functions independently' seems unclear. Can we clarify, e.g.:
   
   Any file with size under this threshold will be re-written, regardless of ...
   
   Also, one thought: since we say regardless of "MAX_FILE_SIZE_BYTES", does it make sense to just say "regardless of any other criteria"? There is also the question of whether we need to mention tooManyDeletes as well.
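
For what it's worth, the file-level selection rule under discussion could be restated as a small standalone sketch — a file is selected when it falls outside the [min, max] size band OR carries too many deletes, each check independent of the others. Thresholds and names here are illustrative (the delete check comes from SizeBasedDataRewriter), not the actual code:

```java
public class SelectionSketch {
  static final long MIN_FILE_SIZE = 384L * 1024 * 1024;   // 75% of a 512 MB target
  static final long MAX_FILE_SIZE = 922L * 1024 * 1024;   // ~180% of a 512 MB target
  static final int DELETE_FILE_THRESHOLD = 10;            // illustrative, not the default

  static boolean shouldRewrite(long fileSizeBytes, int deleteCount) {
    // outside the size band -> selected, regardless of deletes
    boolean suboptimalSize = fileSizeBytes < MIN_FILE_SIZE || fileSizeBytes > MAX_FILE_SIZE;
    // too many deletes -> selected, regardless of size
    boolean tooManyDeletes = deleteCount >= DELETE_FILE_THRESHOLD;
    return suboptimalSize || tooManyDeletes;
  }

  public static void main(String[] args) {
    System.out.println(shouldRewrite(100L * 1024 * 1024, 0));  // small file -> true
    System.out.println(shouldRewrite(512L * 1024 * 1024, 0));  // in-band file -> false
    System.out.println(shouldRewrite(512L * 1024 * 1024, 12)); // many deletes -> true
  }
}
```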



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be
+   * considered for rewriting. This functions independently of {@link #MAX_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 75% of the target file size.
+   */
+  public static final String MIN_FILE_SIZE_BYTES = "min-file-size-bytes";
+
+  public static final double MIN_FILE_SIZE_DEFAULT_RATIO = 0.75;
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files larger than this value will be
+   * considered for rewriting. This functions independently of {@link #MIN_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 180% of the target file size.
+   */
+  public static final String MAX_FILE_SIZE_BYTES = "max-file-size-bytes";
+
+  public static final double MAX_FILE_SIZE_DEFAULT_RATIO = 1.80;
+
+  /**
+   * The minimum number of files that need to be in a file group for it to be considered for
+   * compaction if the total size of that group is less than the target file size. This can also be

Review Comment:
   Is it true:
   
   ```
     private boolean shouldRewrite(List<FileScanTask> group) {
       return hasEnoughInputFiles(group)
           || hasEnoughData(group)
           || hasTooMuchData(group)
           || anyTaskHasTooManyDeletes(group);
   ```
   
   Shouldn't it be "any file group exceeding this number of files will be rewritten regardless of other criteria"
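
The group-level OR of criteria quoted above can be checked with a small sketch (constants are illustrative; the delete check is omitted): a group of at least MIN_INPUT_FILES files is rewritten even when its total size is tiny, which supports the "regardless of other criteria" reading:

```java
public class GroupCriteriaSketch {
  static final int MIN_INPUT_FILES = 5;
  static final long TARGET_FILE_SIZE = 512L * 1024 * 1024;
  static final long MAX_FILE_SIZE = 922L * 1024 * 1024;

  static boolean shouldRewrite(int numFiles, long groupSizeBytes) {
    // enough files -> rewrite, even if the group is small
    boolean enoughInputFiles = numFiles > 1 && numFiles >= MIN_INPUT_FILES;
    // enough data to produce at least one target-size file
    boolean enoughData = numFiles > 1 && groupSizeBytes > TARGET_FILE_SIZE;
    // a single oversized file still qualifies
    boolean tooMuchData = groupSizeBytes > MAX_FILE_SIZE;
    return enoughInputFiles || enoughData || tooMuchData;
  }

  public static void main(String[] args) {
    long mb = 1024L * 1024;
    System.out.println(shouldRewrite(5, 50 * mb));   // 5 tiny files -> true
    System.out.println(shouldRewrite(2, 100 * mb));  // too few files, too little data -> false
    System.out.println(shouldRewrite(1, 1024 * mb)); // one oversized file -> true
  }
}
```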



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+  /**
+   * The minimum number of files that need to be in a file group for it to be considered for
+   * compaction if the total size of that group is less than the target file size. This can also be
+   * thought of as the maximum number of wrongly sized files that could remain in a partition after
+   * rewriting.
+   */
+  public static final String MIN_INPUT_FILES = "min-input-files";
+
+  public static final int MIN_INPUT_FILES_DEFAULT = 5;
+
+  /** Overrides other options and forces rewriting of all files. */
+  public static final String REWRITE_ALL = "rewrite-all";
+
+  public static final boolean REWRITE_ALL_DEFAULT = false;
+
+  /**
+   * The entire rewrite operation is broken down into pieces based on partitioning and within
+   * partitions based on size into groups. These subunits of the rewrite are referred to as file
+   * groups. This option controls the largest amount of data that should be rewritten in a single
+   * group. It helps with breaking down the rewriting of very large partitions which may not be
+   * rewritable otherwise due to the resource constraints of the cluster. For example, a sort-based
+   * rewrite may not scale to TB sized partitions, those partitions need to be worked on in small
+   * subsections to avoid exhaustion of resources.
+   *
+   * <p>When grouping files, the file rewriter will use this value to limit the files which will be
+   * included in a single file group. A group will be processed by a single framework "action". For
+   * example, in Spark this means that each group would be rewritten in its own Spark job. A group
+   * will never contain files for multiple output partitions.
+   */
+  public static final String MAX_FILE_GROUP_SIZE_BYTES = "max-file-group-size-bytes";
+
+  public static final long MAX_FILE_GROUP_SIZE_BYTES_DEFAULT = 100L * 1024 * 1024 * 1024; // 100 GB
+
+  private final Table table;
+  private long targetFileSize;
+  private long minFileSize;
+  private long maxFileSize;
+  private int minInputFiles;
+  private boolean rewriteAll;
+  private long maxGroupSize;
+
+  protected SizeBasedFileRewriter(Table table) {
+    this.table = table;
+  }
+
+  protected abstract long defaultTargetFileSize();
+
+  protected abstract Iterable<T> doSelectFiles(Iterable<T> tasks);
+
+  protected abstract List<List<T>> filterFileGroups(List<List<T>> groups);
+
+  protected Table table() {
+    return table;
+  }
+
+  @Override
+  public Set<String> validOptions() {
+    return ImmutableSet.of(
+        TARGET_FILE_SIZE_BYTES,
+        MIN_FILE_SIZE_BYTES,
+        MAX_FILE_SIZE_BYTES,
+        MIN_INPUT_FILES,
+        REWRITE_ALL,
+        MAX_FILE_GROUP_SIZE_BYTES);
+  }
+
+  @Override
+  public void init(Map<String, String> options) {
+    Map<String, Long> sizeThresholds = sizeThresholds(options);
+    this.targetFileSize = sizeThresholds.get(TARGET_FILE_SIZE_BYTES);
+    this.minFileSize = sizeThresholds.get(MIN_FILE_SIZE_BYTES);
+    this.maxFileSize = sizeThresholds.get(MAX_FILE_SIZE_BYTES);
+
+    this.minInputFiles = minInputFiles(options);
+    this.rewriteAll = rewriteAll(options);
+    this.maxGroupSize = maxGroupSize(options);
+
+    if (rewriteAll) {
+      LOG.info("Configured to rewrite all provided files in table {}", table.name());
+    }
+  }
+
+  @Override
+  public Iterable<T> selectFiles(Iterable<T> tasks) {
+    return rewriteAll ? tasks : doSelectFiles(tasks);
+  }
+
+  protected boolean hasSuboptimalSize(T task) {
+    return task.length() < minFileSize || task.length() > maxFileSize;
+  }
+
+  @Override
+  public Iterable<List<T>> planFileGroups(Iterable<T> tasks) {
+    BinPacking.ListPacker<T> packer = new BinPacking.ListPacker<>(maxGroupSize, 1, false);
+    List<List<T>> groups = packer.pack(tasks, ContentScanTask::length);
+    return rewriteAll ? groups : filterFileGroups(groups);
+  }
+
+  protected boolean hasEnoughInputFiles(List<T> group) {
+    return group.size() > 1 && group.size() >= minInputFiles;
+  }
+
+  protected boolean hasEnoughData(List<T> group) {
+    return group.size() > 1 && inputSize(group) > targetFileSize;
+  }
+
+  protected boolean hasTooMuchData(List<T> group) {
+    return inputSize(group) > maxFileSize;
+  }
+
+  protected long inputSize(List<T> group) {
+    return group.stream().mapToLong(ContentScanTask::length).sum();
+  }
+
+  /**
+   * Determines the preferable number of output files when rewriting a particular file group.
+   *
+   * <p>If the rewriter is handling 10.1 GB of data with a target file size of 1 GB, it could
+   * produce 11 files, one of which would only have 0.1 GB. This would most likely be less
+   * preferable to 10 files with 1.01 GB each. So this method decides whether to round up or round
+   * down based on what the estimated average file size will be if the remainder (0.1 GB) is
+   * distributed amongst other files. If the new average file size is no more than 10% greater than
+   * the target file size, then this method will round down when determining the number of output
+   * files. Otherwise, the remainder will be written into a separate file.
+   *
+   * @param inputSize a total input size for a file group
+   * @return the number of files this rewriter should create
+   */
+  protected long numOutputFiles(long inputSize) {
+    if (inputSize < targetFileSize) {
+      return 1;
+    }
+
+    long numFilesWithRemainder = LongMath.divide(inputSize, targetFileSize, RoundingMode.CEILING);
+    long numFilesWithoutRemainder = LongMath.divide(inputSize, targetFileSize, RoundingMode.FLOOR);
+    long avgFileSizeWithoutRemainder = inputSize / numFilesWithoutRemainder;
+
+    if (LongMath.mod(inputSize, targetFileSize) > minFileSize) {
+      // the remainder file is of a valid size for this rewrite so keep it
+      return numFilesWithRemainder;
+
+    } else if (avgFileSizeWithoutRemainder < Math.min(1.1 * targetFileSize, writeMaxFileSize())) {
+      // if the remainder is distributed amongst other files,
+      // the average file size will be no more than 10% bigger than the target file size
+      // so round down and distribute remainder amongst other files
+      return numFilesWithoutRemainder;
+
+    } else {
+      // keep the remainder file as it is not OK to distribute it amongst other files
+      return numFilesWithRemainder;
+    }
+  }
+
+  /**
+   * Estimates a larger max target file size than the target size used in task creation to avoid
+   * tasks which are predicted to have a certain size, but exceed that target size when serde is
+   * complete creating tiny remainder files.

Review Comment:
   Hard to read, a comma may help:
   
   "when serde is complete, creating tiny remainder files"
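
Separately, the numOutputFiles rounding logic quoted in this hunk can be verified with a worked sketch using the javadoc's own 10.1 GB / 1 GB example. Constants are illustrative, and the real implementation also caps by writeMaxFileSize(), which this sketch omits:

```java
public class NumOutputFilesSketch {
  static final long GB = 1024L * 1024 * 1024;
  static final long TARGET = 1 * GB;
  static final long MIN = (long) (0.75 * TARGET); // 75% of target, as in the defaults

  static long numOutputFiles(long inputSize) {
    if (inputSize < TARGET) {
      return 1;
    }
    long withRemainder = (inputSize + TARGET - 1) / TARGET; // ceiling division
    long withoutRemainder = inputSize / TARGET;             // floor division
    long avgWithoutRemainder = inputSize / withoutRemainder;

    if (inputSize % TARGET > MIN) {
      return withRemainder;      // the remainder file is big enough to keep
    } else if (avgWithoutRemainder < 1.1 * TARGET) {
      return withoutRemainder;   // distribute the small remainder amongst other files
    } else {
      return withRemainder;
    }
  }

  public static void main(String[] args) {
    // 10.1 GB with a 1 GB target: the 0.1 GB remainder is distributed -> 10 files
    System.out.println(numOutputFiles((long) (10.1 * GB)));
    // 10.8 GB: the 0.8 GB remainder exceeds MIN, so it stays its own file -> 11 files
    System.out.println(numOutputFiles(10 * GB + (long) (0.8 * GB)));
  }
}
```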



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.math.LongMath;
+import org.apache.iceberg.util.BinPacking;
+import org.apache.iceberg.util.PropertyUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A file rewriter that determines which files to rewrite based on their size.
+ *
+ * <p>If files are smaller than the {@link #MIN_FILE_SIZE_BYTES} threshold or larger than the {@link
+ * #MAX_FILE_SIZE_BYTES} threshold, they are considered targets for being rewritten.
+ *
+ * <p>Once selected, files are grouped based on the {@link BinPacking bin-packing algorithm} into
+ * groups of no more than {@link #MAX_FILE_GROUP_SIZE_BYTES}. Groups will be actually rewritten if
+ * they contain more than {@link #MIN_INPUT_FILES} or if they would produce at least one file of
+ * {@link #TARGET_FILE_SIZE_BYTES}.
+ *
+ * <p>Note that implementations may add extra conditions for selecting files or filtering groups.
+ */
+abstract class SizeBasedFileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>>
+    implements FileRewriter<T, F> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(SizeBasedFileRewriter.class);
+
+  /** The target output file size that this file rewriter will attempt to generate. */
+  public static final String TARGET_FILE_SIZE_BYTES = "target-file-size-bytes";
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be
+   * considered for rewriting. This functions independently of {@link #MAX_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 75% of the target file size.
+   */
+  public static final String MIN_FILE_SIZE_BYTES = "min-file-size-bytes";
+
+  public static final double MIN_FILE_SIZE_DEFAULT_RATIO = 0.75;
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files larger than this value will be
+   * considered for rewriting. This functions independently of {@link #MIN_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 180% of the target file size.
+   */
+  public static final String MAX_FILE_SIZE_BYTES = "max-file-size-bytes";
+
+  public static final double MAX_FILE_SIZE_DEFAULT_RATIO = 1.80;
+
+  /**
+   * The minimum number of files that need to be in a file group for it to be considered for
+   * compaction if the total size of that group is less than the target file size. This can also be
+   * thought of as the maximum number of wrongly sized files that could remain in a partition after
+   * rewriting.
+   */
+  public static final String MIN_INPUT_FILES = "min-input-files";
+
+  public static final int MIN_INPUT_FILES_DEFAULT = 5;
+
+  /** Overrides other options and forces rewriting of all files. */
+  public static final String REWRITE_ALL = "rewrite-all";
+
+  public static final boolean REWRITE_ALL_DEFAULT = false;
+
+  /**
+   * The entire rewrite operation is broken down into pieces based on partitioning and within
+   * partitions based on size into groups. These subunits of the rewrite are referred to as file
+   * groups. This option controls the largest amount of data that should be rewritten in a single
+   * group. It helps with breaking down the rewriting of very large partitions which may not be
+   * rewritable otherwise due to the resource constraints of the cluster. For example, a sort-based
+   * rewrite may not scale to TB sized partitions, those partitions need to be worked on in small
+   * subsections to avoid exhaustion of resources.
+   *
+   * <p>When grouping files, the file rewriter will use this value to limit the files which will be
+   * included in a single file group. A group will be processed by a single framework "action". For
+   * example, in Spark this means that each group would be rewritten in its own Spark job. A group
+   * will never contain files for multiple output partitions.
+   */
+  public static final String MAX_FILE_GROUP_SIZE_BYTES = "max-file-group-size-bytes";
+
+  public static final long MAX_FILE_GROUP_SIZE_BYTES_DEFAULT = 100L * 1024 * 1024 * 1024; // 100 GB
+
+  private final Table table;
+  private long targetFileSize;
+  private long minFileSize;
+  private long maxFileSize;
+  private int minInputFiles;
+  private boolean rewriteAll;
+  private long maxGroupSize;
+
+  protected SizeBasedFileRewriter(Table table) {
+    this.table = table;
+  }
+
+  protected abstract long defaultTargetFileSize();
+
+  protected abstract Iterable<T> doSelectFiles(Iterable<T> tasks);
+
+  protected abstract List<List<T>> filterFileGroups(List<List<T>> groups);
+
+  protected Table table() {
+    return table;
+  }
+
+  @Override
+  public Set<String> validOptions() {
+    return ImmutableSet.of(
+        TARGET_FILE_SIZE_BYTES,
+        MIN_FILE_SIZE_BYTES,
+        MAX_FILE_SIZE_BYTES,
+        MIN_INPUT_FILES,
+        REWRITE_ALL,
+        MAX_FILE_GROUP_SIZE_BYTES);
+  }
+
+  @Override
+  public void init(Map<String, String> options) {
+    Map<String, Long> sizeThresholds = sizeThresholds(options);
+    this.targetFileSize = sizeThresholds.get(TARGET_FILE_SIZE_BYTES);
+    this.minFileSize = sizeThresholds.get(MIN_FILE_SIZE_BYTES);
+    this.maxFileSize = sizeThresholds.get(MAX_FILE_SIZE_BYTES);
+
+    this.minInputFiles = minInputFiles(options);
+    this.rewriteAll = rewriteAll(options);
+    this.maxGroupSize = maxGroupSize(options);
+
+    if (rewriteAll) {
+      LOG.info("Configured to rewrite all provided files in table {}", table.name());
+    }
+  }
+
+  @Override
+  public Iterable<T> selectFiles(Iterable<T> tasks) {
+    return rewriteAll ? tasks : doSelectFiles(tasks);
+  }
+
+  protected boolean hasSuboptimalSize(T task) {
+    return task.length() < minFileSize || task.length() > maxFileSize;
+  }
+
+  @Override
+  public Iterable<List<T>> planFileGroups(Iterable<T> tasks) {
+    BinPacking.ListPacker<T> packer = new BinPacking.ListPacker<>(maxGroupSize, 1, false);
+    List<List<T>> groups = packer.pack(tasks, ContentScanTask::length);
+    return rewriteAll ? groups : filterFileGroups(groups);
+  }
+
+  protected boolean hasEnoughInputFiles(List<T> group) {
+    return group.size() > 1 && group.size() >= minInputFiles;
+  }
+
+  protected boolean hasEnoughData(List<T> group) {
+    return group.size() > 1 && inputSize(group) > targetFileSize;
+  }
+
+  protected boolean hasTooMuchData(List<T> group) {
+    return inputSize(group) > maxFileSize;
+  }
+
+  protected long inputSize(List<T> group) {
+    return group.stream().mapToLong(ContentScanTask::length).sum();
+  }
+
+  /**
+   * Determines the preferable number of output files when rewriting a particular file group.
+   *
+   * <p>If the rewriter is handling 10.1 GB of data with a target file size of 1 GB, it could
+   * produce 11 files, one of which would only have 0.1 GB. This would most likely be less
+   * preferable to 10 files with 1.01 GB each. So this method decides whether to round up or round
+   * down based on what the estimated average file size will be if the remainder (0.1 GB) is
+   * distributed amongst other files. If the new average file size is no more than 10% greater than
+   * the target file size, then this method will round down when determining the number of output
+   * files. Otherwise, the remainder will be written into a separate file.
+   *
+   * @param inputSize a total input size for a file group
+   * @return the number of files this rewriter should create
+   */
+  protected long numOutputFiles(long inputSize) {
+    if (inputSize < targetFileSize) {
+      return 1;
+    }
+
+    long numFilesWithRemainder = LongMath.divide(inputSize, targetFileSize, RoundingMode.CEILING);
+    long numFilesWithoutRemainder = LongMath.divide(inputSize, targetFileSize, RoundingMode.FLOOR);
+    long avgFileSizeWithoutRemainder = inputSize / numFilesWithoutRemainder;
+
+    if (LongMath.mod(inputSize, targetFileSize) > minFileSize) {
+      // the remainder file is of a valid size for this rewrite so keep it
+      return numFilesWithRemainder;
+
+    } else if (avgFileSizeWithoutRemainder < Math.min(1.1 * targetFileSize, writeMaxFileSize())) {
+    // if the remainder is distributed amongst other files,
+      // the average file size will be no more than 10% bigger than the target file size
+      // so round down and distribute remainder amongst other files
+      return numFilesWithoutRemainder;
+
+    } else {
+      // keep the remainder file as it is not OK to distribute it amongst other files
+      return numFilesWithRemainder;
+    }
+  }
+
+  /**
+   * Estimates a larger max target file size than the target size used in task creation to avoid
+   * tasks which are predicted to have a certain size, but exceed that target size when serde is
+   * complete, creating tiny remainder files.
+   *
+   * <p>While we create tasks that should all be smaller than our target size, there is a chance
+   * that the actual data will end up being larger than our target size due to various factors of
+   * compression, serialization and other factors outside our control. If this occurs, instead of
+   * making a single file that is close in size to our target, we would end up producing one file of
+   * the target size, and then a small extra file with the remaining data. For example, if our
+   * target is 512 MB, we may generate a rewrite task that should be 500 MB. When we write the data
+   * we may find we actually have to write out 530 MB. If we use the target size while writing we

Review Comment:
   nit: comma before we
   
   "while writing, we..."
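For concreteness, the rounding heuristic documented on `numOutputFiles` in the hunk above can be sketched as a standalone method. This is a hypothetical, simplified rendering: the 1 GB target, the 75% min-size ratio, and the `WRITE_MAX` stand-in for `writeMaxFileSize()` are illustrative assumptions, not the PR's actual configuration wiring.

```java
// Hypothetical sketch of the numOutputFiles rounding heuristic.
// Constants are illustrative stand-ins for the rewriter's configured values.
public class NumOutputFilesSketch {
  static final long TARGET = 1024L * 1024 * 1024;       // 1 GB target file size
  static final long MIN = (long) (TARGET * 0.75);       // 75% default min-size ratio
  static final long WRITE_MAX = (long) (TARGET * 1.1);  // stand-in for writeMaxFileSize()

  static long numOutputFiles(long inputSize) {
    if (inputSize < TARGET) {
      return 1;
    }
    long withRemainder = (inputSize + TARGET - 1) / TARGET;  // ceiling division
    long withoutRemainder = inputSize / TARGET;              // floor division
    long avgWithoutRemainder = inputSize / withoutRemainder;

    if (inputSize % TARGET > MIN) {
      // the remainder file is of a valid size for this rewrite, so keep it
      return withRemainder;
    } else if (avgWithoutRemainder < Math.min(1.1 * TARGET, WRITE_MAX)) {
      // spreading the remainder keeps files within ~10% of target, so round down
      return withoutRemainder;
    } else {
      // distributing the remainder would make files too large, so keep it separate
      return withRemainder;
    }
  }
}
```

With a 1 GB target, 10.1 GB of input yields 10 output files (the 0.1 GB remainder is spread out), while 1.9 GB yields 2, since the 0.9 GB remainder exceeds the minimum file size.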



##########
core/src/main/java/org/apache/iceberg/actions/FileRewriter.java:
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+
+/**
+ * A class for rewriting content files.
+ *
+ * @param <T> the Java type of tasks to read content files
+ * @param <F> the Java type of content files
+ */
+public interface FileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>> {
+  /** Returns a description for this rewriter. */

Review Comment:
   Nit: newline before this class



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.math.LongMath;
+import org.apache.iceberg.util.BinPacking;
+import org.apache.iceberg.util.PropertyUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A file rewriter that determines which files to rewrite based on their size.
+ *
+ * <p>If files are smaller than the {@link #MIN_FILE_SIZE_BYTES} threshold or larger than the {@link
+ * #MAX_FILE_SIZE_BYTES} threshold, they are considered targets for being rewritten.
+ *
+ * <p>Once selected, files are grouped based on the {@link BinPacking bin-packing algorithm} into
+ * groups of no more than {@link #MAX_FILE_GROUP_SIZE_BYTES}. Groups will be actually rewritten if
+ * they contain more than {@link #MIN_INPUT_FILES} or if they would produce at least one file of
+ * {@link #TARGET_FILE_SIZE_BYTES}.
+ *
+ * <p>Note that implementations may add extra conditions for selecting files or filtering groups.
+ */
+abstract class SizeBasedFileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>>
+    implements FileRewriter<T, F> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(SizeBasedFileRewriter.class);
+
+  /** The target output file size that this file rewriter will attempt to generate. */
+  public static final String TARGET_FILE_SIZE_BYTES = "target-file-size-bytes";
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be
+   * considered for rewriting. This functions independently of {@link #MAX_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 75% of the target file size.
+   */
+  public static final String MIN_FILE_SIZE_BYTES = "min-file-size-bytes";
+
+  public static final double MIN_FILE_SIZE_DEFAULT_RATIO = 0.75;
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files larger than this value will be
+   * considered for rewriting. This functions independently of {@link #MIN_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 180% of the target file size.
+   */
+  public static final String MAX_FILE_SIZE_BYTES = "max-file-size-bytes";
+
+  public static final double MAX_FILE_SIZE_DEFAULT_RATIO = 1.80;
+
+  /**
+   * The minimum number of files that need to be in a file group for it to be considered for
+   * compaction if the total size of that group is less than the target file size. This can also be
+   * thought of as the maximum number of wrongly sized files that could remain in a partition after
+   * rewriting.
+   */
+  public static final String MIN_INPUT_FILES = "min-input-files";
+
+  public static final int MIN_INPUT_FILES_DEFAULT = 5;
+
+  /** Overrides other options and forces rewriting of all files. */
+  public static final String REWRITE_ALL = "rewrite-all";
+
+  public static final boolean REWRITE_ALL_DEFAULT = false;
+
+  /**
+   * The entire rewrite operation is broken down into pieces based on partitioning and within

Review Comment:
   Also it's easier to read with a comma:
   
   The entire rewrite operation is broken down into pieces based on partitioning, and size-based groups within a partition.



##########
core/src/main/java/org/apache/iceberg/actions/FileRewriter.java:
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+
+/**
+ * A class for rewriting content files.
+ *
+ * @param <T> the Java type of tasks to read content files
+ * @param <F> the Java type of content files
+ */
+public interface FileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>> {
+  /** Returns a description for this rewriter. */
+  default String description() {
+    return getClass().getName();
+  }
+
+  /**
+   * Returns a set of supported options for this rewriter. This is an allowed-list and any options
+   * not specified here will be rejected at runtime.
+   *
+   * @return returns a set of supported options
+   */
+  Set<String> validOptions();
+
+  /**
+   * Initializes this rewriter using provided options.
+   *
+   * @param options options to initialize this rewriter
+   */
+  void init(Map<String, String> options);
+
+  /**
+   * Selects files which this rewriter believes are valid targets to be rewritten.
+   *
+   * @param tasks an iterable of scan task for files in a partition
+   * @return the iterable containing only scan task for files to be rewritten
+   */
+  Iterable<T> selectFiles(Iterable<T> tasks);
+
+  /**
+   * Groups scan tasks into lists which will be processed in a single executable unit. Each group
+   * will end up being rewritten as an independent set of changes. This creates the jobs which will
+   * eventually be run by the underlying action.
+   *
+   * @param tasks an iterable of scan tasks for files to be rewritten
+   * @return the iterable of lists of scan tasks for files which will be processed together
+   */
+  Iterable<List<T>> planFileGroups(Iterable<T> tasks);

Review Comment:
   
   
   Optional: do you think 'plan' is necessary, or can we call it 'groupFiles'
   
   Also, it wasn't immediately clear that not all tasks need to be in a returned group.  I think we can document it.  Another option, going with this approach (having this interface define methods that filter/aggregate files): would it make sense to make selectGroups into a separate API for clarity?
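To make the lifecycle concrete: a toy, hypothetical rendering of the `selectFiles` -> `planFileGroups` contract, using raw `long` file sizes in place of `ContentScanTask` and a greedy first-fit pass in place of `BinPacking.ListPacker`. All thresholds here are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified illustration of the FileRewriter selection and grouping flow.
// Thresholds are invented; real rewriters derive them from table options.
public class RewritePlanSketch {
  static final long MIN_FILE_SIZE = 384L;   // files below this are rewrite candidates
  static final long MAX_FILE_SIZE = 922L;   // files above this are rewrite candidates
  static final long MAX_GROUP_SIZE = 2048L; // cap on total bytes per file group

  // selectFiles: keep only files outside the [min, max] size window
  static List<Long> selectFiles(List<Long> fileSizes) {
    List<Long> selected = new ArrayList<>();
    for (long size : fileSizes) {
      if (size < MIN_FILE_SIZE || size > MAX_FILE_SIZE) {
        selected.add(size);
      }
    }
    return selected;
  }

  // planFileGroups: greedy first-fit packing, a simplification of BinPacking.ListPacker
  static List<List<Long>> planFileGroups(List<Long> selected) {
    List<List<Long>> groups = new ArrayList<>();
    List<Long> current = new ArrayList<>();
    long currentSize = 0;
    for (long size : selected) {
      if (currentSize + size > MAX_GROUP_SIZE && !current.isEmpty()) {
        groups.add(current);
        current = new ArrayList<>();
        currentSize = 0;
      }
      current.add(size);
      currentSize += size;
    }
    if (!current.isEmpty()) {
      groups.add(current);
    }
    return groups;
  }
}
```

Note that a mid-sized file (e.g. 500 bytes with these thresholds) is dropped by `selectFiles` and never reaches grouping, which is the "not all tasks end up in a group" behavior the comment above asks to document.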
   
   



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.math.LongMath;
+import org.apache.iceberg.util.BinPacking;
+import org.apache.iceberg.util.PropertyUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A file rewriter that determines which files to rewrite based on their size.
+ *
+ * <p>If files are smaller than the {@link #MIN_FILE_SIZE_BYTES} threshold or larger than the {@link
+ * #MAX_FILE_SIZE_BYTES} threshold, they are considered targets for being rewritten.
+ *
+ * <p>Once selected, files are grouped based on the {@link BinPacking bin-packing algorithm} into
+ * groups of no more than {@link #MAX_FILE_GROUP_SIZE_BYTES}. Groups will be actually rewritten if
+ * they contain more than {@link #MIN_INPUT_FILES} or if they would produce at least one file of
+ * {@link #TARGET_FILE_SIZE_BYTES}.
+ *
+ * <p>Note that implementations may add extra conditions for selecting files or filtering groups.
+ */
+abstract class SizeBasedFileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>>
+    implements FileRewriter<T, F> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(SizeBasedFileRewriter.class);
+
+  /** The target output file size that this file rewriter will attempt to generate. */
+  public static final String TARGET_FILE_SIZE_BYTES = "target-file-size-bytes";
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be
+   * considered for rewriting. This functions independently of {@link #MAX_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 75% of the target file size.
+   */
+  public static final String MIN_FILE_SIZE_BYTES = "min-file-size-bytes";
+
+  public static final double MIN_FILE_SIZE_DEFAULT_RATIO = 0.75;
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files larger than this value will be

Review Comment:
   Same comment as above



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.math.LongMath;
+import org.apache.iceberg.util.BinPacking;
+import org.apache.iceberg.util.PropertyUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A file rewriter that determines which files to rewrite based on their size.
+ *
+ * <p>If files are smaller than the {@link #MIN_FILE_SIZE_BYTES} threshold or larger than the {@link
+ * #MAX_FILE_SIZE_BYTES} threshold, they are considered targets for being rewritten.
+ *
+ * <p>Once selected, files are grouped based on the {@link BinPacking bin-packing algorithm} into
+ * groups of no more than {@link #MAX_FILE_GROUP_SIZE_BYTES}. Groups will be actually rewritten if
+ * they contain more than {@link #MIN_INPUT_FILES} or if they would produce at least one file of
+ * {@link #TARGET_FILE_SIZE_BYTES}.
+ *
+ * <p>Note that implementations may add extra conditions for selecting files or filtering groups.
+ */
+abstract class SizeBasedFileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>>
+    implements FileRewriter<T, F> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(SizeBasedFileRewriter.class);
+
+  /** The target output file size that this file rewriter will attempt to generate. */
+  public static final String TARGET_FILE_SIZE_BYTES = "target-file-size-bytes";
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be
+   * considered for rewriting. This functions independently of {@link #MAX_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 75% of the target file size.
+   */
+  public static final String MIN_FILE_SIZE_BYTES = "min-file-size-bytes";
+
+  public static final double MIN_FILE_SIZE_DEFAULT_RATIO = 0.75;
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files larger than this value will be
+   * considered for rewriting. This functions independently of {@link #MIN_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 180% of the target file size.
+   */
+  public static final String MAX_FILE_SIZE_BYTES = "max-file-size-bytes";
+
+  public static final double MAX_FILE_SIZE_DEFAULT_RATIO = 1.80;
+
+  /**
+   * The minimum number of files that need to be in a file group for it to be considered for
+   * compaction if the total size of that group is less than the target file size. This can also be
+   * thought of as the maximum number of wrongly sized files that could remain in a partition after
+   * rewriting.
+   */
+  public static final String MIN_INPUT_FILES = "min-input-files";
+
+  public static final int MIN_INPUT_FILES_DEFAULT = 5;
+
+  /** Overrides other options and forces rewriting of all files. */
+  public static final String REWRITE_ALL = "rewrite-all";
+
+  public static final boolean REWRITE_ALL_DEFAULT = false;
+
+  /**
+   * The entire rewrite operation is broken down into pieces based on partitioning and within

Review Comment:
   This definition/context of 'file groups' is good, but shouldn't it be higher (maybe at the class level)?  Some options higher up, like MIN_INPUT_FILES, already talk about file groups and miss this context.  (Up to "... referred to as file groups".)



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.math.LongMath;
+import org.apache.iceberg.util.BinPacking;
+import org.apache.iceberg.util.PropertyUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A file rewriter that determines which files to rewrite based on their size.
+ *
+ * <p>If files are smaller than the {@link #MIN_FILE_SIZE_BYTES} threshold or larger than the {@link
+ * #MAX_FILE_SIZE_BYTES} threshold, they are considered targets for being rewritten.
+ *
+ * <p>Once selected, files are grouped based on the {@link BinPacking bin-packing algorithm} into
+ * groups of no more than {@link #MAX_FILE_GROUP_SIZE_BYTES}. Groups will be actually rewritten if
+ * they contain more than {@link #MIN_INPUT_FILES} or if they would produce at least one file of
+ * {@link #TARGET_FILE_SIZE_BYTES}.
+ *
+ * <p>Note that implementations may add extra conditions for selecting files or filtering groups.
+ */
+abstract class SizeBasedFileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>>
+    implements FileRewriter<T, F> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(SizeBasedFileRewriter.class);
+
+  /** The target output file size that this file rewriter will attempt to generate. */
+  public static final String TARGET_FILE_SIZE_BYTES = "target-file-size-bytes";
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be
+   * considered for rewriting. This functions independently of {@link #MAX_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 75% of the target file size.
+   */
+  public static final String MIN_FILE_SIZE_BYTES = "min-file-size-bytes";
+
+  public static final double MIN_FILE_SIZE_DEFAULT_RATIO = 0.75;
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files larger than this value will be
+   * considered for rewriting. This functions independently of {@link #MIN_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 180% of the target file size.
+   */
+  public static final String MAX_FILE_SIZE_BYTES = "max-file-size-bytes";
+
+  public static final double MAX_FILE_SIZE_DEFAULT_RATIO = 1.80;
+
+  /**
+   * The minimum number of files that need to be in a file group for it to be considered for
+   * compaction if the total size of that group is less than the target file size. This can also be
+   * thought of as the maximum number of wrongly sized files that could remain in a partition after
+   * rewriting.
+   */
+  public static final String MIN_INPUT_FILES = "min-input-files";
+
+  public static final int MIN_INPUT_FILES_DEFAULT = 5;
+
+  /** Overrides other options and forces rewriting of all files. */
+  public static final String REWRITE_ALL = "rewrite-all";
+
+  public static final boolean REWRITE_ALL_DEFAULT = false;
+
+  /**
+   * The entire rewrite operation is broken down into pieces based on partitioning and within
+   * partitions based on size into groups. These subunits of the rewrite are referred to as file
+   * groups. This option controls the largest amount of data that should be rewritten in a single
+   * group. It helps with breaking down the rewriting of very large partitions which may not be
+   * rewritable otherwise due to the resource constraints of the cluster. For example, a sort-based
+   * rewrite may not scale to TB-sized partitions; those partitions need to be worked on in small
+   * subsections to avoid exhaustion of resources.
+   *
+   * <p>When grouping files, the file rewriter will use this value to limit the files which will be

Review Comment:
   Same, I feel this context is more useful at the class level.
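As a worked illustration of the ratio defaults discussed in this hunk (min defaults to 75% of the target, max to 180%), a hypothetical resolution of the size options might look like the following. The helper name and plain-map handling are assumptions for illustration, not the PR's `PropertyUtil`-based code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of deriving size thresholds from an options map:
// explicit option values win; otherwise ratios of the target size apply.
public class SizeThresholdsSketch {
  static Map<String, Long> sizeThresholds(Map<String, String> options, long defaultTarget) {
    long target = Long.parseLong(
        options.getOrDefault("target-file-size-bytes", Long.toString(defaultTarget)));
    long min = Long.parseLong(
        options.getOrDefault("min-file-size-bytes", Long.toString((long) (target * 0.75))));
    long max = Long.parseLong(
        options.getOrDefault("max-file-size-bytes", Long.toString((long) (target * 1.80))));

    Map<String, Long> thresholds = new HashMap<>();
    thresholds.put("target-file-size-bytes", target);
    thresholds.put("min-file-size-bytes", min);
    thresholds.put("max-file-size-bytes", max);
    return thresholds;
  }
}
```

For example, an empty options map with a 1000-byte default target resolves to min 750 and max 1800.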



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.math.LongMath;
+import org.apache.iceberg.util.BinPacking;
+import org.apache.iceberg.util.PropertyUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A file rewriter that determines which files to rewrite based on their size.
+ *
+ * <p>If files are smaller than the {@link #MIN_FILE_SIZE_BYTES} threshold or larger than the {@link
+ * #MAX_FILE_SIZE_BYTES} threshold, they are considered targets for being rewritten.
+ *
+ * <p>Once selected, files are grouped based on the {@link BinPacking bin-packing algorithm} into
+ * groups of no more than {@link #MAX_FILE_GROUP_SIZE_BYTES}. Groups will be actually rewritten if
+ * they contain more than {@link #MIN_INPUT_FILES} or if they would produce at least one file of
+ * {@link #TARGET_FILE_SIZE_BYTES}.
+ *
+ * <p>Note that implementations may add extra conditions for selecting files or filtering groups.
+ */
+abstract class SizeBasedFileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>>
+    implements FileRewriter<T, F> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(SizeBasedFileRewriter.class);
+
+  /** The target output file size that this file rewriter will attempt to generate. */
+  public static final String TARGET_FILE_SIZE_BYTES = "target-file-size-bytes";
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be
+   * considered for rewriting. This functions independently of {@link #MAX_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 75% of the target file size.
+   */
+  public static final String MIN_FILE_SIZE_BYTES = "min-file-size-bytes";
+
+  public static final double MIN_FILE_SIZE_DEFAULT_RATIO = 0.75;
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files larger than this value will be
+   * considered for rewriting. This functions independently of {@link #MIN_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 180% of the target file size.
+   */
+  public static final String MAX_FILE_SIZE_BYTES = "max-file-size-bytes";
+
+  public static final double MAX_FILE_SIZE_DEFAULT_RATIO = 1.80;
+
+  /**
+   * The minimum number of files that need to be in a file group for it to be considered for
+   * compaction if the total size of that group is less than the target file size. This can also be
+   * thought of as the maximum number of wrongly sized files that could remain in a partition after
+   * rewriting.
+   */
+  public static final String MIN_INPUT_FILES = "min-input-files";
+
+  public static final int MIN_INPUT_FILES_DEFAULT = 5;
+
+  /** Overrides other options and forces rewriting of all files. */
+  public static final String REWRITE_ALL = "rewrite-all";
+
+  public static final boolean REWRITE_ALL_DEFAULT = false;
+
+  /**
+   * The entire rewrite operation is broken down into pieces based on partitioning and within
+   * partitions based on size into groups. These subunits of the rewrite are referred to as file
+   * groups. This option controls the largest amount of data that should be rewritten in a single
+   * group. It helps with breaking down the rewriting of very large partitions which may not be
+   * rewritable otherwise due to the resource constraints of the cluster. For example, a sort-based
+   * rewrite may not scale to TB-sized partitions; those partitions need to be worked on in small
+   * subsections to avoid exhaustion of resources.
+   *
+   * <p>When grouping files, the file rewriter will use this value to limit the files which will be
+   * included in a single file group. A group will be processed by a single framework "action". For
+   * example, in Spark this means that each group would be rewritten in its own Spark job. A group
+   * will never contain files for multiple output partitions.
+   */
+  public static final String MAX_FILE_GROUP_SIZE_BYTES = "max-file-group-size-bytes";
+
+  public static final long MAX_FILE_GROUP_SIZE_BYTES_DEFAULT = 100L * 1024 * 1024 * 1024; // 100 GB
+
+  private final Table table;
+  private long targetFileSize;
+  private long minFileSize;
+  private long maxFileSize;
+  private int minInputFiles;
+  private boolean rewriteAll;
+  private long maxGroupSize;
+
+  protected SizeBasedFileRewriter(Table table) {
+    this.table = table;
+  }
+
+  protected abstract long defaultTargetFileSize();
+
+  protected abstract Iterable<T> doSelectFiles(Iterable<T> tasks);
+
+  protected abstract List<List<T>> filterFileGroups(List<List<T>> groups);
+
+  protected Table table() {
+    return table;
+  }
+
+  @Override
+  public Set<String> validOptions() {
+    return ImmutableSet.of(
+        TARGET_FILE_SIZE_BYTES,
+        MIN_FILE_SIZE_BYTES,
+        MAX_FILE_SIZE_BYTES,
+        MIN_INPUT_FILES,
+        REWRITE_ALL,
+        MAX_FILE_GROUP_SIZE_BYTES);
+  }
+
+  @Override
+  public void init(Map<String, String> options) {
+    Map<String, Long> sizeThresholds = sizeThresholds(options);
+    this.targetFileSize = sizeThresholds.get(TARGET_FILE_SIZE_BYTES);
+    this.minFileSize = sizeThresholds.get(MIN_FILE_SIZE_BYTES);
+    this.maxFileSize = sizeThresholds.get(MAX_FILE_SIZE_BYTES);
+
+    this.minInputFiles = minInputFiles(options);
+    this.rewriteAll = rewriteAll(options);
+    this.maxGroupSize = maxGroupSize(options);
+
+    if (rewriteAll) {
+      LOG.info("Configured to rewrite all provided files in table {}", table.name());
+    }
+  }
+
+  @Override
+  public Iterable<T> selectFiles(Iterable<T> tasks) {
+    return rewriteAll ? tasks : doSelectFiles(tasks);
+  }
+
+  protected boolean hasSuboptimalSize(T task) {
+    return task.length() < minFileSize || task.length() > maxFileSize;
+  }
+
+  @Override
+  public Iterable<List<T>> planFileGroups(Iterable<T> tasks) {
+    BinPacking.ListPacker<T> packer = new BinPacking.ListPacker<>(maxGroupSize, 1, false);
+    List<List<T>> groups = packer.pack(tasks, ContentScanTask::length);
+    return rewriteAll ? groups : filterFileGroups(groups);
+  }
+
+  protected boolean hasEnoughInputFiles(List<T> group) {
+    return group.size() > 1 && group.size() >= minInputFiles;
+  }
+
+  protected boolean hasEnoughData(List<T> group) {
+    return group.size() > 1 && inputSize(group) > targetFileSize;
+  }
+
+  protected boolean hasTooMuchData(List<T> group) {
+    return inputSize(group) > maxFileSize;
+  }
+
+  protected long inputSize(List<T> group) {
+    return group.stream().mapToLong(ContentScanTask::length).sum();
+  }
+
+  /**
+   * Determines the preferable number of output files when rewriting a particular file group.
+   *
+   * <p>If the rewriter is handling 10.1 GB of data with a target file size of 1 GB, it could
+   * produce 11 files, one of which would only have 0.1 GB. This would most likely be less
+   * preferable to 10 files with 1.01 GB each. So this method decides whether to round up or round
+   * down based on what the estimated average file size will be if the remainder (0.1 GB) is
+   * distributed amongst other files. If the new average file size is no more than 10% greater than
+   * the target file size, then this method will round down when determining the number of output
+   * files. Otherwise, the remainder will be written into a separate file.
+   *
+   * @param inputSize a total input size for a file group
+   * @return the number of files this rewriter should create
+   */
+  protected long numOutputFiles(long inputSize) {
+    if (inputSize < targetFileSize) {
+      return 1;
+    }
+
+    long numFilesWithRemainder = LongMath.divide(inputSize, targetFileSize, RoundingMode.CEILING);
+    long numFilesWithoutRemainder = LongMath.divide(inputSize, targetFileSize, RoundingMode.FLOOR);
+    long avgFileSizeWithoutRemainder = inputSize / numFilesWithoutRemainder;
+
+    if (LongMath.mod(inputSize, targetFileSize) > minFileSize) {
+      // the remainder file is of a valid size for this rewrite so keep it
+      return numFilesWithRemainder;
+
+    } else if (avgFileSizeWithoutRemainder < Math.min(1.1 * targetFileSize, writeMaxFileSize())) {
+      // if the remainder is distributed amongst other files,
+      // the average file size will be no more than 10% bigger than the target file size
+      // so round down and distribute remainder amongst other files
+      return numFilesWithoutRemainder;
+
+    } else {
+      // keep the remainder file as it is not OK to distribute it amongst other files
+      return numFilesWithRemainder;
+    }
+  }
+
+  /**
+   * Estimates a larger max target file size than the target size used in task creation to avoid
+   * tasks which are predicted to have a certain size, but exceed that target size when serde is
+   * complete creating tiny remainder files.
+   *
+   * <p>While we create tasks that should all be smaller than our target size, there is a chance
+   * that the actual data will end up being larger than our target size due to various factors of
+   * compression, serialization and other factors outside our control. If this occurs, instead of
+   * making a single file that is close in size to our target, we would end up producing one file of
+   * the target size, and then a small extra file with the remaining data. For example, if our

Review Comment:
   Suggest putting "For example" in a new paragraph.



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.math.LongMath;
+import org.apache.iceberg.util.BinPacking;
+import org.apache.iceberg.util.PropertyUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A file rewriter that determines which files to rewrite based on their size.
+ *
+ * <p>If files are smaller than the {@link #MIN_FILE_SIZE_BYTES} threshold or larger than the {@link
+ * #MAX_FILE_SIZE_BYTES} threshold, they are considered targets for being rewritten.
+ *
+ * <p>Once selected, files are grouped based on the {@link BinPacking bin-packing algorithm} into
+ * groups of no more than {@link #MAX_FILE_GROUP_SIZE_BYTES}. Groups will be actually rewritten if
+ * they contain more than {@link #MIN_INPUT_FILES} or if they would produce at least one file of
+ * {@link #TARGET_FILE_SIZE_BYTES}.
+ *
+ * <p>Note that implementations may add extra conditions for selecting files or filtering groups.
+ */
+abstract class SizeBasedFileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>>
+    implements FileRewriter<T, F> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(SizeBasedFileRewriter.class);
+
+  /** The target output file size that this file rewriter will attempt to generate. */
+  public static final String TARGET_FILE_SIZE_BYTES = "target-file-size-bytes";
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be
+   * considered for rewriting. This functions independently of {@link #MAX_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 75% of the target file size.
+   */
+  public static final String MIN_FILE_SIZE_BYTES = "min-file-size-bytes";
+
+  public static final double MIN_FILE_SIZE_DEFAULT_RATIO = 0.75;
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files larger than this value will be
+   * considered for rewriting. This functions independently of {@link #MIN_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 180% of the target file size.
+   */
+  public static final String MAX_FILE_SIZE_BYTES = "max-file-size-bytes";
+
+  public static final double MAX_FILE_SIZE_DEFAULT_RATIO = 1.80;
+
+  /**
+   * The minimum number of files that need to be in a file group for it to be considered for
+   * compaction if the total size of that group is less than the target file size. This can also be
+   * thought of as the maximum number of wrongly sized files that could remain in a partition after
+   * rewriting.
+   */
+  public static final String MIN_INPUT_FILES = "min-input-files";
+
+  public static final int MIN_INPUT_FILES_DEFAULT = 5;
+
+  /** Overrides other options and forces rewriting of all files. */
+  public static final String REWRITE_ALL = "rewrite-all";
+
+  public static final boolean REWRITE_ALL_DEFAULT = false;
+
+  /**
+   * The entire rewrite operation is broken down into pieces based on partitioning and within
+   * partitions based on size into groups. These subunits of the rewrite are referred to as file
+   * groups. This option controls the largest amount of data that should be rewritten in a single
+   * group. It helps with breaking down the rewriting of very large partitions which may not be
+   * rewritable otherwise due to the resource constraints of the cluster. For example, a sort-based
+   * rewrite may not scale to TB sized partitions, those partitions need to be worked on in small

Review Comment:
   Nit: missing and
   
   ``` TB-sized partitions, and those partitions```



##########
core/src/main/java/org/apache/iceberg/actions/FileRewriter.java:
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+
+/**
+ * A class for rewriting content files.
+ *
+ * @param <T> the Java type of tasks to read content files
+ * @param <F> the Java type of content files
+ */
+public interface FileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>> {
+  /** Returns a description for this rewriter. */
+  default String description() {
+    return getClass().getName();
+  }
+
+  /**
+   * Returns a set of supported options for this rewriter. This is an allowed-list and any options
+   * not specified here will be rejected at runtime.
+   *
+   * @return returns a set of supported options

Review Comment:
   Is the return annotation redundant? (compared with the other Javadoc comments)



##########
core/src/main/java/org/apache/iceberg/actions/SizeBasedFileRewriter.java:
##########
@@ -0,0 +1,320 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.actions;
+
+import java.math.RoundingMode;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ContentScanTask;
+import org.apache.iceberg.Table;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.ImmutableSet;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.math.LongMath;
+import org.apache.iceberg.util.BinPacking;
+import org.apache.iceberg.util.PropertyUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * A file rewriter that determines which files to rewrite based on their size.
+ *
+ * <p>If files are smaller than the {@link #MIN_FILE_SIZE_BYTES} threshold or larger than the {@link
+ * #MAX_FILE_SIZE_BYTES} threshold, they are considered targets for being rewritten.
+ *
+ * <p>Once selected, files are grouped based on the {@link BinPacking bin-packing algorithm} into
+ * groups of no more than {@link #MAX_FILE_GROUP_SIZE_BYTES}. Groups will be actually rewritten if
+ * they contain more than {@link #MIN_INPUT_FILES} or if they would produce at least one file of
+ * {@link #TARGET_FILE_SIZE_BYTES}.
+ *
+ * <p>Note that implementations may add extra conditions for selecting files or filtering groups.
+ */
+abstract class SizeBasedFileRewriter<T extends ContentScanTask<F>, F extends ContentFile<F>>
+    implements FileRewriter<T, F> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(SizeBasedFileRewriter.class);
+
+  /** The target output file size that this file rewriter will attempt to generate. */
+  public static final String TARGET_FILE_SIZE_BYTES = "target-file-size-bytes";
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files smaller than this value will be
+   * considered for rewriting. This functions independently of {@link #MAX_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 75% of the target file size.
+   */
+  public static final String MIN_FILE_SIZE_BYTES = "min-file-size-bytes";
+
+  public static final double MIN_FILE_SIZE_DEFAULT_RATIO = 0.75;
+
+  /**
+   * Adjusts files which will be considered for rewriting. Files larger than this value will be
+   * considered for rewriting. This functions independently of {@link #MIN_FILE_SIZE_BYTES}.
+   *
+   * <p>Defaults to 180% of the target file size.
+   */
+  public static final String MAX_FILE_SIZE_BYTES = "max-file-size-bytes";
+
+  public static final double MAX_FILE_SIZE_DEFAULT_RATIO = 1.80;
+
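The default min/max thresholds above are derived as ratios of the configured target file size. A minimal sketch of that derivation (the ratio constants mirror `MIN_FILE_SIZE_DEFAULT_RATIO` and `MAX_FILE_SIZE_DEFAULT_RATIO`; the class and method names here are illustrative, not part of the PR):

```java
// Illustrative derivation of the default size thresholds from a target size.
public class ThresholdDefaults {
  // Files below 75% or above 180% of the target are rewrite candidates by default.
  static long defaultMinFileSize(long targetFileSize) {
    return (long) (0.75 * targetFileSize);
  }

  static long defaultMaxFileSize(long targetFileSize) {
    return (long) (1.80 * targetFileSize);
  }

  public static void main(String[] args) {
    long target = 512L * 1024 * 1024; // 512 MB target
    System.out.println(defaultMinFileSize(target)); // 402653184 (384 MB)
    System.out.println(defaultMaxFileSize(target)); // 966367641 (~921.6 MB)
  }
}
```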
+  /**
+   * The minimum number of files that need to be in a file group for it to be considered for
+   * compaction if the total size of that group is less than the target file size. This can also be
+   * thought of as the maximum number of wrongly sized files that could remain in a partition after
+   * rewriting.
+   */
+  public static final String MIN_INPUT_FILES = "min-input-files";
+
+  public static final int MIN_INPUT_FILES_DEFAULT = 5;
+
+  /** Overrides other options and forces rewriting of all files. */
+  public static final String REWRITE_ALL = "rewrite-all";
+
+  public static final boolean REWRITE_ALL_DEFAULT = false;
+
+  /**
+   * The entire rewrite operation is broken down into pieces based on partitioning and within
+   * partitions based on size into groups. These subunits of the rewrite are referred to as file
+   * groups. This option controls the largest amount of data that should be rewritten in a single
+   * group. It helps with breaking down the rewriting of very large partitions which may not be
+   * rewritable otherwise due to the resource constraints of the cluster. For example, a sort-based
+   * rewrite may not scale to TB sized partitions, those partitions need to be worked on in small
+   * subsections to avoid exhaustion of resources.
+   *
+   * <p>When grouping files, the file rewriter will use this value to limit the files which will be
+   * included in a single file group. A group will be processed by a single framework "action". For
+   * example, in Spark this means that each group would be rewritten in its own Spark job. A group
+   * will never contain files for multiple output partitions.
+   */
+  public static final String MAX_FILE_GROUP_SIZE_BYTES = "max-file-group-size-bytes";
+
+  public static final long MAX_FILE_GROUP_SIZE_BYTES_DEFAULT = 100L * 1024 * 1024 * 1024; // 100 GB
+
+  private final Table table;
+  private long targetFileSize;
+  private long minFileSize;
+  private long maxFileSize;
+  private int minInputFiles;
+  private boolean rewriteAll;
+  private long maxGroupSize;
+
+  protected SizeBasedFileRewriter(Table table) {
+    this.table = table;
+  }
+
+  protected abstract long defaultTargetFileSize();
+
+  protected abstract Iterable<T> doSelectFiles(Iterable<T> tasks);
+
+  protected abstract List<List<T>> filterFileGroups(List<List<T>> groups);
+
+  protected Table table() {
+    return table;
+  }
+
+  @Override
+  public Set<String> validOptions() {
+    return ImmutableSet.of(
+        TARGET_FILE_SIZE_BYTES,
+        MIN_FILE_SIZE_BYTES,
+        MAX_FILE_SIZE_BYTES,
+        MIN_INPUT_FILES,
+        REWRITE_ALL,
+        MAX_FILE_GROUP_SIZE_BYTES);
+  }
+
+  @Override
+  public void init(Map<String, String> options) {
+    Map<String, Long> sizeThresholds = sizeThresholds(options);
+    this.targetFileSize = sizeThresholds.get(TARGET_FILE_SIZE_BYTES);
+    this.minFileSize = sizeThresholds.get(MIN_FILE_SIZE_BYTES);
+    this.maxFileSize = sizeThresholds.get(MAX_FILE_SIZE_BYTES);
+
+    this.minInputFiles = minInputFiles(options);
+    this.rewriteAll = rewriteAll(options);
+    this.maxGroupSize = maxGroupSize(options);
+
+    if (rewriteAll) {
+      LOG.info("Configured to rewrite all provided files in table {}", table.name());
+    }
+  }
+
+  @Override
+  public Iterable<T> selectFiles(Iterable<T> tasks) {
+    return rewriteAll ? tasks : doSelectFiles(tasks);
+  }
+
+  protected boolean hasSuboptimalSize(T task) {
+    return task.length() < minFileSize || task.length() > maxFileSize;
+  }
+
+  @Override
+  public Iterable<List<T>> planFileGroups(Iterable<T> tasks) {
+    BinPacking.ListPacker<T> packer = new BinPacking.ListPacker<>(maxGroupSize, 1, false);
+    List<List<T>> groups = packer.pack(tasks, ContentScanTask::length);
+    return rewriteAll ? groups : filterFileGroups(groups);
+  }
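The grouping step above caps each file group at `maxGroupSize` bytes. A rough sketch of that behavior, using a simplified sequential packer rather than Iceberg's lookback-based `BinPacking.ListPacker` (so the resulting groups may differ from what the real packer produces):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of planFileGroups: split a sequence of file lengths into
// groups whose total size stays under maxGroupSize. This packs sequentially,
// unlike Iceberg's BinPacking, which can look back across open bins.
public class GroupingSketch {
  static List<List<Long>> planGroups(List<Long> fileLengths, long maxGroupSize) {
    List<List<Long>> groups = new ArrayList<>();
    List<Long> current = new ArrayList<>();
    long currentSize = 0;
    for (long len : fileLengths) {
      if (!current.isEmpty() && currentSize + len > maxGroupSize) {
        groups.add(current);
        current = new ArrayList<>();
        currentSize = 0;
      }
      current.add(len);
      currentSize += len;
    }
    if (!current.isEmpty()) {
      groups.add(current);
    }
    return groups;
  }

  public static void main(String[] args) {
    // four 60-unit files with a 100-unit group cap: no two files fit together
    List<List<Long>> groups = planGroups(List.of(60L, 60L, 60L, 60L), 100L);
    System.out.println(groups.size()); // 4
  }
}
```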
+
+  protected boolean hasEnoughInputFiles(List<T> group) {
+    return group.size() > 1 && group.size() >= minInputFiles;
+  }
+
+  protected boolean hasEnoughData(List<T> group) {
+    return group.size() > 1 && inputSize(group) > targetFileSize;
+  }
+
+  protected boolean hasTooMuchData(List<T> group) {
+    return inputSize(group) > maxFileSize;
+  }
+
+  protected long inputSize(List<T> group) {
+    return group.stream().mapToLong(ContentScanTask::length).sum();
+  }
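The three predicates above (`hasEnoughInputFiles`, `hasEnoughData`, `hasTooMuchData`) are the building blocks subclasses use in `filterFileGroups`. How they are combined is implementation-specific; the sketch below assumes a plain disjunction for illustration, with toy thresholds matching the documented defaults:

```java
import java.util.List;

// Sketch of group-level filtering: a group is worth rewriting if it has enough
// files, enough total data to fill a target-size file, or too much data.
// The OR-combination and the thresholds are illustrative assumptions.
public class GroupFilterSketch {
  static final long TARGET = 1024;       // target-file-size-bytes (toy units)
  static final long MAX = 1843;          // max-file-size-bytes (~180% of target)
  static final int MIN_INPUT_FILES = 5;  // min-input-files default

  static long inputSize(List<Long> group) {
    return group.stream().mapToLong(Long::longValue).sum();
  }

  static boolean shouldRewrite(List<Long> group) {
    boolean enoughFiles = group.size() > 1 && group.size() >= MIN_INPUT_FILES;
    boolean enoughData = group.size() > 1 && inputSize(group) > TARGET;
    boolean tooMuchData = inputSize(group) > MAX;
    return enoughFiles || enoughData || tooMuchData;
  }

  public static void main(String[] args) {
    System.out.println(shouldRewrite(List.of(100L, 100L)));              // false: two small files
    System.out.println(shouldRewrite(List.of(700L, 700L)));              // true: exceeds target together
    System.out.println(shouldRewrite(List.of(10L, 10L, 10L, 10L, 10L))); // true: reaches min-input-files
  }
}
```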
+
+  /**
+   * Determines the preferable number of output files when rewriting a particular file group.
+   *
+   * <p>If the rewriter is handling 10.1 GB of data with a target file size of 1 GB, it could
+   * produce 11 files, one of which would only have 0.1 GB. This would most likely be less
+   * preferable to 10 files with 1.01 GB each. So this method decides whether to round up or round
+   * down based on what the estimated average file size will be if the remainder (0.1 GB) is
+   * distributed amongst other files. If the new average file size is no more than 10% greater than
+   * the target file size, then this method will round down when determining the number of output
+   * files. Otherwise, the remainder will be written into a separate file.
+   *
+   * @param inputSize a total input size for a file group
+   * @return the number of files this rewriter should create
+   */
+  protected long numOutputFiles(long inputSize) {
+    if (inputSize < targetFileSize) {
+      return 1;
+    }
+
+    long numFilesWithRemainder = LongMath.divide(inputSize, targetFileSize, RoundingMode.CEILING);
+    long numFilesWithoutRemainder = LongMath.divide(inputSize, targetFileSize, RoundingMode.FLOOR);
+    long avgFileSizeWithoutRemainder = inputSize / numFilesWithoutRemainder;
+
+    if (LongMath.mod(inputSize, targetFileSize) > minFileSize) {
+      // the remainder file is of a valid size for this rewrite so keep it
+      return numFilesWithRemainder;
+
+    } else if (avgFileSizeWithoutRemainder < Math.min(1.1 * targetFileSize, writeMaxFileSize())) {
+      // if the remainder is distributed amongst other files,
+      // the average file size will be no more than 10% bigger than the target file size
+      // so round down and distribute remainder amongst other files
+      return numFilesWithoutRemainder;
+
+    } else {
+      // keep the remainder file as it is not OK to distribute it amongst other files
+      return numFilesWithRemainder;
+    }
+  }
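The rounding decision in `numOutputFiles` can be reproduced standalone with plain arithmetic in place of Guava's `LongMath`. The thresholds below are assumed values for illustration (1 GB target, 75% minimum, and a fixed stand-in for `writeMaxFileSize()`); they are not taken from the PR:

```java
// Standalone recomputation of the numOutputFiles rounding logic.
public class OutputFileCount {
  static final long GB = 1024L * 1024 * 1024;
  static final long TARGET = GB;                     // assumed target file size
  static final long MIN = (long) (0.75 * GB);        // assumed min file size
  static final long WRITE_MAX = (long) (1.1 * GB);   // stand-in for writeMaxFileSize()

  static long numOutputFiles(long inputSize) {
    if (inputSize < TARGET) {
      return 1;
    }
    long withRemainder = (inputSize + TARGET - 1) / TARGET; // ceiling division
    long withoutRemainder = inputSize / TARGET;             // floor division
    long avgWithoutRemainder = inputSize / withoutRemainder;

    if (inputSize % TARGET > MIN) {
      return withRemainder;        // remainder file is big enough to keep
    } else if (avgWithoutRemainder < Math.min(1.1 * TARGET, WRITE_MAX)) {
      return withoutRemainder;     // distribute the remainder across the other files
    } else {
      return withRemainder;        // distributing would push files past the limit
    }
  }

  public static void main(String[] args) {
    // 10.1 GB with a 1 GB target: the 0.1 GB remainder is below the 0.75 GB
    // minimum, and a 1.01 GB average stays within bounds, so round down to 10.
    System.out.println(numOutputFiles((long) (10.1 * GB))); // 10
    // 10.9 GB: the 0.9 GB remainder is a valid file on its own, so keep 11.
    System.out.println(numOutputFiles((long) (10.9 * GB))); // 11
  }
}
```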
+
+  /**
+   * Estimates a larger max target file size than the target size used in task creation to avoid
+   * tasks which are predicted to have a certain size, but exceed that target size when serde is
+   * complete creating tiny remainder files.

Review Comment:
   Also, I realize this explanation is just repeated in the paragraph below.  Can't this be simpler and just be:
   
   
   "Estimates a larger max target file size than the target size used in task creation to avoid creating tiny remainder files."



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org