Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2020/08/04 01:45:18 UTC

[GitHub] [iceberg] rdblue opened a new pull request #1288: Update scan planning with DeleteFiles in each task

rdblue opened a new pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288


   This adds `DeleteFileIndex` to scan delete manifests and index delete files, updates `ManifestGroup` to use the index when planning tasks, and adds delete files to `FileScanTask`.
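
   For context, a minimal sketch of how a planner might use the new index, based on the builder API introduced in this PR. The `deleteManifests`, `dataEntries`, `rowFilter`, and `specId` variables here are illustrative, not taken from the patch:

       // build an index over the snapshot's delete manifests
       DeleteFileIndex deletes = DeleteFileIndex.builderFor(table.io(), deleteManifests)
           .specsById(table.specs())
           .filterData(rowFilter)
           .caseSensitive(true)
           .build();

       // during planning, look up the delete files that apply to each data file
       for (ManifestEntry<DataFile> entry : dataEntries) {
         DeleteFile[] matchingDeletes = deletes.forEntry(specId, entry);
         // attach matchingDeletes to the FileScanTask created for entry.file()
       }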


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org


[GitHub] [iceberg] rdblue commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
rdblue commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r465914567



##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+package org.apache.iceberg;
+
+import com.github.benmanes.caffeine.cache.Caffeine;
+import com.github.benmanes.caffeine.cache.LoadingCache;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Comparator;
+import java.util.List;
+import java.util.Map;
+import java.util.Queue;
+import java.util.Set;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.ExecutorService;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;
+import org.apache.iceberg.exceptions.RuntimeIOException;
+import org.apache.iceberg.expressions.Expression;
+import org.apache.iceberg.expressions.Expressions;
+import org.apache.iceberg.expressions.ManifestEvaluator;
+import org.apache.iceberg.expressions.Projections;
+import org.apache.iceberg.io.CloseableIterable;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.relocated.com.google.common.base.Preconditions;
+import org.apache.iceberg.relocated.com.google.common.collect.Iterables;
+import org.apache.iceberg.relocated.com.google.common.collect.ListMultimap;
+import org.apache.iceberg.relocated.com.google.common.collect.Lists;
+import org.apache.iceberg.relocated.com.google.common.collect.Maps;
+import org.apache.iceberg.relocated.com.google.common.collect.Multimaps;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.apache.iceberg.util.Pair;
+import org.apache.iceberg.util.StructLikeWrapper;
+import org.apache.iceberg.util.Tasks;
+
+/**
+ * An index of {@link DeleteFile delete files} by sequence number.
+ * <p>
+ * Use {@link #builderFor(FileIO, Iterable)} to construct an index, and {@link #forDataFile(int, long, DataFile)} or
+ * {@link #forEntry(int, ManifestEntry)} to get the delete files to apply to a given data file.
+ */
+class DeleteFileIndex {
+  private static final DeleteFile[] NO_DELETE_FILES = new DeleteFile[0];
+
+  private final long[] globalSeqs;
+  private final DeleteFile[] globalDeletes;
+  private final Map<Pair<Integer, StructLikeWrapper>, Pair<long[], DeleteFile[]>> sortedDeletesByPartition;
+  private final ThreadLocal<StructLikeWrapper> lookupWrapper = ThreadLocal.withInitial(
+      () -> StructLikeWrapper.wrap(null));
+
+  DeleteFileIndex(long[] globalSeqs, DeleteFile[] globalDeletes,
+                  Map<Pair<Integer, StructLikeWrapper>, Pair<long[], DeleteFile[]>> sortedDeletesByPartition) {
+    this.globalSeqs = globalSeqs;
+    this.globalDeletes = globalDeletes;
+    this.sortedDeletesByPartition = sortedDeletesByPartition;
+  }
+
+  DeleteFile[] forEntry(int specId, ManifestEntry<DataFile> entry) {
+    return forDataFile(specId, entry.sequenceNumber(), entry.file());
+  }
+
+  DeleteFile[] forDataFile(int specId, long sequenceNumber, DataFile file) {
+    Pair<long[], DeleteFile[]> partitionDeletes = sortedDeletesByPartition
+        .get(Pair.of(specId, lookupWrapper.get().set(file.partition())));
+
+    if (partitionDeletes == null) {
+      return limitBySequenceNumber(sequenceNumber, globalSeqs, globalDeletes);
+    } else if (globalDeletes == null) {
+      return limitBySequenceNumber(sequenceNumber, partitionDeletes.first(), partitionDeletes.second());
+    } else {
+      return Stream.concat(
+          Stream.of(limitBySequenceNumber(sequenceNumber, globalSeqs, globalDeletes)),
+          Stream.of(limitBySequenceNumber(sequenceNumber, partitionDeletes.first(), partitionDeletes.second()))
+      ).toArray(DeleteFile[]::new);
+    }
+  }
+
+  private static DeleteFile[] limitBySequenceNumber(long sequenceNumber, long[] seqs, DeleteFile[] files) {
+    if (files == null) {
+      return NO_DELETE_FILES;
+    }
+
+    int pos = Arrays.binarySearch(seqs, sequenceNumber);
+    int start;
+    if (pos < 0) {
+      // the sequence number was not found, where it would be inserted is -(pos + 1)
+      start = -(pos + 1);
+    } else {
+      // the sequence number was found, but may not be the first
+      // find the first delete file with the given sequence number by decrementing the position
+      start = pos;
+      while (start > 0 && seqs[start - 1] >= sequenceNumber) {
+        start -= 1;
+      }
+    }
+
+    return Arrays.copyOfRange(files, start, files.length);
+  }
+
+  static Builder builderFor(FileIO io, Iterable<ManifestFile> deleteManifests) {
+    return new Builder(io, Sets.newHashSet(deleteManifests));
+  }
+
+  static class Builder {
+    private final FileIO io;
+    private final Set<ManifestFile> deleteManifests;
+    private Map<Integer, PartitionSpec> specsById;
+    private Expression dataFilter;
+    private Expression partitionFilter;
+    private boolean caseSensitive;
+    private ExecutorService executorService;
+
+    Builder(FileIO io, Set<ManifestFile> deleteManifests) {
+      this.io = io;
+      this.deleteManifests = Sets.newHashSet(deleteManifests);
+      this.specsById = null;
+      this.dataFilter = Expressions.alwaysTrue();
+      this.partitionFilter = Expressions.alwaysTrue();
+      this.caseSensitive = true;
+      this.executorService = null;
+    }
+
+    Builder specsById(Map<Integer, PartitionSpec> newSpecsById) {
+      this.specsById = newSpecsById;
+      return this;
+    }
+
+    Builder filterData(Expression newDataFilter) {
+      this.dataFilter = Expressions.and(dataFilter, newDataFilter);
+      return this;
+    }
+
+    Builder filterPartitions(Expression newPartitionFilter) {
+      this.partitionFilter = Expressions.and(partitionFilter, newPartitionFilter);
+      return this;
+    }
+
+    Builder caseSensitive(boolean newCaseSensitive) {
+      this.caseSensitive = newCaseSensitive;
+      return this;
+    }
+
+    Builder planWith(ExecutorService newExecutorService) {
+      this.executorService = newExecutorService;
+      return this;
+    }
+
+    DeleteFileIndex build() {
+      // read all of the matching delete manifests in parallel and accumulate the matching files in a queue
+      Queue<Pair<Integer, ManifestEntry<DeleteFile>>> deleteEntries = new ConcurrentLinkedQueue<>();
+      Tasks.foreach(deleteManifestReaders())
+          .stopOnFailure().throwFailureWhenFinished()
+          .executeWith(executorService)
+          .run(specIdAndReader -> {
+            try (CloseableIterable<ManifestEntry<DeleteFile>> reader = specIdAndReader.second()) {
+              for (ManifestEntry<DeleteFile> entry : reader) {
+                // copy with stats for better filtering against data file stats
+                deleteEntries.add(Pair.of(specIdAndReader.first(), entry.copy()));
+              }
+            } catch (IOException e) {
+              throw new RuntimeIOException("Failed to close", e);
+            }
+          });
+
+      // build a map from (specId, partition) to delete file entries
+      ListMultimap<Pair<Integer, StructLikeWrapper>, ManifestEntry<DeleteFile>> deleteFilesByPartition =
+          Multimaps.newListMultimap(Maps.newHashMap(), Lists::newArrayList);
+      for (Pair<Integer, ManifestEntry<DeleteFile>> specIdAndEntry : deleteEntries) {
+        int specId = specIdAndEntry.first();
+        ManifestEntry<DeleteFile> entry = specIdAndEntry.second();
+        deleteFilesByPartition.put(Pair.of(specId, StructLikeWrapper.wrap(entry.file().partition())), entry);
+      }
+
+      // sort the entries in each map value by sequence number and split into sequence numbers and delete files lists
+      Map<Pair<Integer, StructLikeWrapper>, Pair<long[], DeleteFile[]>> sortedDeletesByPartition = Maps.newHashMap();
+      // also, separate out equality deletes in an unpartitioned spec that should be applied globally
+      long[] globalApplySeqs = null;
+      DeleteFile[] globalDeletes = null;
+      for (Pair<Integer, StructLikeWrapper> partition : deleteFilesByPartition.keySet()) {
+        if (specsById.get(partition.first()).isUnpartitioned()) {
+          Preconditions.checkState(globalDeletes == null, "Detected multiple partition specs with no partitions");
+
+          List<Pair<Long, DeleteFile>> eqFilesSortedBySeq = deleteFilesByPartition.get(partition).stream()
+              .filter(entry -> entry.file().content() == FileContent.EQUALITY_DELETES)
+              .map(entry ->
+                  // a delete file is indexed by the sequence number it should be applied to
+                  Pair.of(entry.sequenceNumber() - 1, entry.file()))
+              .sorted(Comparator.comparingLong(Pair::first))
+              .collect(Collectors.toList());
+
+          globalApplySeqs = eqFilesSortedBySeq.stream().mapToLong(Pair::first).toArray();
+          globalDeletes = eqFilesSortedBySeq.stream().map(Pair::second).toArray(DeleteFile[]::new);
+
+          List<Pair<Long, DeleteFile>> posFilesSortedBySeq = deleteFilesByPartition.get(partition).stream()
+              .filter(entry -> entry.file().content() == FileContent.POSITION_DELETES)
+              .map(entry -> Pair.of(entry.sequenceNumber(), entry.file()))
+              .sorted(Comparator.comparingLong(Pair::first))
+              .collect(Collectors.toList());
+
+          long[] seqs = posFilesSortedBySeq.stream().mapToLong(Pair::first).toArray();
+          DeleteFile[] files = posFilesSortedBySeq.stream().map(Pair::second).toArray(DeleteFile[]::new);
+
+          sortedDeletesByPartition.put(partition, Pair.of(seqs, files));
+
+        } else {
+          List<Pair<Long, DeleteFile>> filesSortedBySeq = deleteFilesByPartition.get(partition).stream()
+              .map(entry -> {
+                // a delete file is indexed by the sequence number it should be applied to
+                long applySeq = entry.sequenceNumber() -
+                    (entry.file().content() == FileContent.EQUALITY_DELETES ? 1 : 0);

Review comment:
       Exactly.
   
   If you want to replace a row with an equality delete, then the delete must apply to data before the snapshot is committed. Otherwise, you'd delete the row that you're writing as a replacement because the delete is not targeted to a file/position within a partition.
   
   We can support same-sequence-number deletes for positional deletes because they target specific rows in files. Supporting this is also how we can delete within a checkpoint in streaming use cases, when we encounter the same record twice.
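
   A worked example of the rule above, with illustrative sequence numbers:

       data file D        committed at sequence 3
       equality delete E  committed at sequence 3  -> indexed at 3 - 1 = 2, so E does not apply to D
       position delete P  committed at sequence 3  -> indexed at 3, so P applies to D

   So a row written and equality-deleted in the same commit survives, while a positional delete in the same commit can still drop specific rows from D.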





[GitHub] [iceberg] aokolnychyi commented on pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
aokolnychyi commented on pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#issuecomment-669483202


   Great work, @rdblue! Nice to get this in.



[GitHub] [iceberg] rdblue commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
rdblue commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r464751445



##########
File path: core/src/main/java/org/apache/iceberg/GenericManifestEntry.java
##########
@@ -45,6 +45,7 @@ private GenericManifestEntry(GenericManifestEntry<F> toCopy, boolean fullCopy) {
     this.schema = toCopy.schema;
     this.status = toCopy.status;
     this.snapshotId = toCopy.snapshotId;
+    this.sequenceNumber = toCopy.sequenceNumber;

Review comment:
       This was missing and causing tests to fail because manifest entries are copied.
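
   This matters for the new index because `DeleteFileIndex.Builder.build()` calls `entry.copy()` on every delete entry it reads. A simplified sketch of the failure mode before this fix (the values are illustrative):

       ManifestEntry<DeleteFile> entry = ...;          // read with sequenceNumber = 5
       ManifestEntry<DeleteFile> copied = entry.copy();
       // before this fix, the copy lost its sequence number, so the index
       // sorted and applied the delete file at the wrong sequence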





[GitHub] [iceberg] aokolnychyi commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
aokolnychyi commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r465865976



##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
[... duplicate quote of DeleteFileIndex.java trimmed to the lines under review; the full file is quoted above ...]
+
+  DeleteFile[] forDataFile(int specId, long sequenceNumber, DataFile file) {
+    Pair<long[], DeleteFile[]> partitionDeletes = sortedDeletesByPartition
+        .get(Pair.of(specId, lookupWrapper.get().set(file.partition())));

Review comment:
       nit: would it make sense to split this into two lines?
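
   For example, the split might look like this (a sketch of the nit, not the committed code):

       Pair<Integer, StructLikeWrapper> key = Pair.of(specId, lookupWrapper.get().set(file.partition()));
       Pair<long[], DeleteFile[]> partitionDeletes = sortedDeletesByPartition.get(key);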

##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
[... duplicate quote of DeleteFileIndex.java trimmed to the lines under review; the full file is quoted above ...]
+      for (Pair<Integer, StructLikeWrapper> partition : deleteFilesByPartition.keySet()) {
+        if (specsById.get(partition.first()).isUnpartitioned()) {
+          Preconditions.checkState(globalDeletes == null, "Detected multiple partition specs with no partitions");
+
+          List<Pair<Long, DeleteFile>> eqFilesSortedBySeq = deleteFilesByPartition.get(partition).stream()

Review comment:
       We iterate through delete files twice because we don't anticipate too many global delete files?
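
   For reference, a single pass could split the entries by content type up front, e.g. with `Collectors.partitioningBy` (a sketch; probably not worth the complexity if global delete files stay rare):

       Map<Boolean, List<ManifestEntry<DeleteFile>>> byContent = deleteFilesByPartition.get(partition).stream()
           .collect(Collectors.partitioningBy(
               entry -> entry.file().content() == FileContent.EQUALITY_DELETES));
       List<ManifestEntry<DeleteFile>> eqEntries = byContent.get(true);   // equality deletes
       List<ManifestEntry<DeleteFile>> posEntries = byContent.get(false); // everything else, i.e. position deletes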

##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
[... duplicate quote of DeleteFileIndex.java trimmed to the lines under review; the full file is quoted above ...]
+  private final long[] globalSeqs;
+  private final DeleteFile[] globalDeletes;
+  private final Map<Pair<Integer, StructLikeWrapper>, Pair<long[], DeleteFile[]>> sortedDeletesByPartition;
+  private final ThreadLocal<StructLikeWrapper> lookupWrapper = ThreadLocal.withInitial(

Review comment:
       Do we need `ThreadLocal` as we plan jobs using multiple threads?
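
   For context on the question: the wrapper is mutable and reused to avoid allocating a new `StructLikeWrapper` per lookup, so with a single shared instance two planning threads would race on `set(...)`. The per-thread reuse pattern looks like:

       // each thread mutates its own wrapper in place; set(...) returns the wrapper
       StructLikeWrapper wrapper = lookupWrapper.get().set(file.partition());
       Pair<Integer, StructLikeWrapper> key = Pair.of(specId, wrapper);
       // the key is only valid until this thread's next set(...) call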

##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
[... duplicate quote of DeleteFileIndex.java trimmed to the lines under review; the full file is quoted above ...]
+        } else {
+          List<Pair<Long, DeleteFile>> filesSortedBySeq = deleteFilesByPartition.get(partition).stream()
+              .map(entry -> {
+                // a delete file is indexed by the sequence number it should be applied to
+                long applySeq = entry.sequenceNumber() -
+                    (entry.file().content() == FileContent.EQUALITY_DELETES ? 1 : 0);

Review comment:
       Why do we treat positional and equality deletes differently? Is it because the equality delete should not delete data in the snapshot it was added?

##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
[... duplicate quote of DeleteFileIndex.java trimmed to the lines under review; the full file is quoted above ...]
+  private final long[] globalSeqs;
+  private final DeleteFile[] globalDeletes;
+  private final Map<Pair<Integer, StructLikeWrapper>, Pair<long[], DeleteFile[]>> sortedDeletesByPartition;
+  private final ThreadLocal<StructLikeWrapper> lookupWrapper = ThreadLocal.withInitial(

Review comment:
       Why can't we just construct `Pair` in `forDataFile` using `file.partition()`?
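
   One likely reason (the reply is not quoted here, so this is an inference): the map keys were inserted as `StructLikeWrapper.wrap(partition)`, and the wrapper supplies the value-based `equals`/`hashCode` that a raw `StructLike` is not guaranteed to have, so a lookup has to wrap the partition the same way:

       sortedDeletesByPartition.get(Pair.of(specId, file.partition()));                         // would miss the key
       sortedDeletesByPartition.get(Pair.of(specId, StructLikeWrapper.wrap(file.partition()))); // matches, but allocates per lookup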

##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
[... duplicate quote of DeleteFileIndex.java trimmed to the lines under review; the full file is quoted above ...]
+  private static DeleteFile[] limitBySequenceNumber(long sequenceNumber, long[] seqs, DeleteFile[] files) {
+    if (files == null) {
+      return NO_DELETE_FILES;
+    }
+
+    int pos = Arrays.binarySearch(seqs, sequenceNumber);
+    int start;
+    if (pos < 0) {
+      // the sequence number was not found, where it would be inserted is -(pos + 1)
+      start = -(pos + 1);

Review comment:
       Does this cover cases when all delete files or none match? I think `Arrays.copyOfRange` will throw an exception if `from > original.length`.
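
   For what it's worth, the boundaries look safe: `Arrays.binarySearch` returns `-(insertionPoint) - 1` on a miss, the insertion point is at most `seqs.length`, and `Arrays.copyOfRange` only throws when `from > original.length`, so `from == files.length` yields an empty array. With illustrative values:

       long[] seqs = {2L, 4L, 6L};
       int pos = Arrays.binarySearch(seqs, 7L);   // no match, above all entries: returns -4
       int start = -(pos + 1);                    // 3, which equals seqs.length
       // Arrays.copyOfRange(files, 3, 3) on a 3-element array returns an empty array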

+              .filter(entry -> entry.file().content() == FileContent.POSITION_DELETES)
+              .map(entry -> Pair.of(entry.sequenceNumber(), entry.file()))
+              .sorted(Comparator.comparingLong(Pair::first))
+              .collect(Collectors.toList());
+
+          long[] seqs = posFilesSortedBySeq.stream().mapToLong(Pair::first).toArray();
+          DeleteFile[] files = posFilesSortedBySeq.stream().map(Pair::second).toArray(DeleteFile[]::new);
+
+          sortedDeletesByPartition.put(partition, Pair.of(seqs, files));
+
+        } else {
+          List<Pair<Long, DeleteFile>> filesSortedBySeq = deleteFilesByPartition.get(partition).stream()
+              .map(entry -> {
+                // a delete file is indexed by the sequence number it should be applied to
+                long applySeq = entry.sequenceNumber() -
+                    (entry.file().content() == FileContent.EQUALITY_DELETES ? 1 : 0);
+                return Pair.of(applySeq, entry.file());
+              })
+              .sorted(Comparator.comparingLong(Pair::first))
+              .collect(Collectors.toList());
+
+          long[] seqs = filesSortedBySeq.stream().mapToLong(Pair::first).toArray();
+          DeleteFile[] files = filesSortedBySeq.stream().map(Pair::second).toArray(DeleteFile[]::new);
+
+          sortedDeletesByPartition.put(partition, Pair.of(seqs, files));
+        }
+      }
+
+      return new DeleteFileIndex(globalApplySeqs, globalDeletes, sortedDeletesByPartition);
+    }
+
+    private Iterable<Pair<Integer, CloseableIterable<ManifestEntry<DeleteFile>>>> deleteManifestReaders() {
+      LoadingCache<Integer, ManifestEvaluator> evalCache = specsById == null ? null :
+          Caffeine.newBuilder().build(specId -> {
+            PartitionSpec spec = specsById.get(specId);
+            return ManifestEvaluator.forPartitionFilter(
+                Expressions.and(partitionFilter, Projections.inclusive(spec, caseSensitive).project(dataFilter)),
+                spec, caseSensitive);
+          });
+
+      Iterable<ManifestFile> matchingManifests = evalCache == null ? deleteManifests :
+          Iterables.filter(deleteManifests, manifest ->
+              manifest.content() == ManifestContent.DELETES &&
+                  (manifest.hasAddedFiles() || manifest.hasDeletedFiles()) &&
+                  evalCache.get(manifest.partitionSpecId()).eval(manifest));
+
+      return Iterables.transform(
+          matchingManifests,
+          manifest -> Pair.of(
+              manifest.partitionSpecId(),
+              ManifestFiles.readDeleteManifest(manifest, io, specsById)

Review comment:
       To confirm: we will use partition predicates to prune delete manifests, and data predicates to filter out delete files, similar to what we have for data files?






[GitHub] [iceberg] prodeezy commented on pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
prodeezy commented on pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#issuecomment-668392255


   Thanks @rdblue, will take a look.




[GitHub] [iceberg] rdblue commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
rdblue commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r465910507



##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
+class DeleteFileIndex {
+  private static final DeleteFile[] NO_DELETE_FILES = new DeleteFile[0];
+
+  private final long[] globalSeqs;
+  private final DeleteFile[] globalDeletes;
+  private final Map<Pair<Integer, StructLikeWrapper>, Pair<long[], DeleteFile[]>> sortedDeletesByPartition;
+  private final ThreadLocal<StructLikeWrapper> lookupWrapper = ThreadLocal.withInitial(

Review comment:
       We need to wrap `file.partition()` in an object that implements `hashCode` and `equals` consistently, or else `PartitionKey` and `Record` might get compared and not be considered equal.
   
   All of the partitions in `sortedDeletesByPartition` are already wrapped, but we need a wrapper for the lookups. Because `forEntry` is usually called from a thread pool that is scanning manifests in parallel, we can't use the same wrapper. That's why we use a thread-local one.
   
   Ideally, we'd reuse the `Pair` as well, but it doesn't support that.
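   
   For illustration, here is a minimal sketch of the reusable-wrapper pattern described above. `PartitionWrapper` is a hypothetical stand-in for `StructLikeWrapper`, reduced to the `hashCode`/`equals` behavior that matters for map lookups; the thread-local keeps one mutable wrapper per planner thread so lookups do not allocate and do not race.
   
       import java.util.Map;
       import java.util.Objects;
   
       // hypothetical stand-in for StructLikeWrapper: consistent equals/hashCode
       // over the wrapped value, plus set() so one instance can be reused
       class PartitionWrapper {
         private Object value;
   
         static PartitionWrapper wrap(Object value) {
           return new PartitionWrapper().set(value);
         }
   
         PartitionWrapper set(Object newValue) {
           this.value = newValue;
           return this;
         }
   
         @Override
         public boolean equals(Object other) {
           return other instanceof PartitionWrapper &&
               Objects.equals(value, ((PartitionWrapper) other).value);
         }
   
         @Override
         public int hashCode() {
           return Objects.hashCode(value);
         }
       }
   
       class PartitionIndex {
         private final Map<PartitionWrapper, String> byPartition;
         // one wrapper per thread: lookup() may be called concurrently from a
         // planning pool, so a single shared mutable wrapper would race
         private final ThreadLocal<PartitionWrapper> lookupWrapper =
             ThreadLocal.withInitial(() -> PartitionWrapper.wrap(null));
   
         PartitionIndex(Map<PartitionWrapper, String> byPartition) {
           this.byPartition = byPartition;
         }
   
         String lookup(Object partition) {
           // reuse this thread's wrapper instead of allocating one per lookup
           return byPartition.get(lookupWrapper.get().set(partition));
         }
       }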






[GitHub] [iceberg] prodeezy commented on pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
prodeezy commented on pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#issuecomment-679161954


   Thanks again for this, @rdblue. As you mentioned during the sync, the global soft-delete support in the delete file index can be used to model any soft-delete mechanism that systems have had to build externally. cc @fbocse @mehtaashish23




[GitHub] [iceberg] prodeezy commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
prodeezy commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r465096971



##########
File path: core/src/main/java/org/apache/iceberg/BaseFileScanTask.java
##########
@@ -31,14 +31,17 @@
 
 class BaseFileScanTask implements FileScanTask {
   private final DataFile file;
+  private final DeleteFile[] deletes;
   private final String schemaString;
   private final String specString;
   private final ResidualEvaluator residuals;
 
   private transient PartitionSpec spec = null;
 
-  BaseFileScanTask(DataFile file, String schemaString, String specString, ResidualEvaluator residuals) {
+  BaseFileScanTask(DataFile file, DeleteFile[] deletes, String schemaString, String specString,

Review comment:
       This is mostly for my understanding: is `DeleteFile[] deletes` now a mandatory constructor param for file scan tasks? If not, from a v1/v2 compatibility standpoint, would it make sense to add an overloaded constructor?






[GitHub] [iceberg] rdblue commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
rdblue commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r465915484



##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
+    private Iterable<Pair<Integer, CloseableIterable<ManifestEntry<DeleteFile>>>> deleteManifestReaders() {
+      LoadingCache<Integer, ManifestEvaluator> evalCache = specsById == null ? null :
+          Caffeine.newBuilder().build(specId -> {
+            PartitionSpec spec = specsById.get(specId);
+            return ManifestEvaluator.forPartitionFilter(
+                Expressions.and(partitionFilter, Projections.inclusive(spec, caseSensitive).project(dataFilter)),
+                spec, caseSensitive);
+          });
+
+      Iterable<ManifestFile> matchingManifests = evalCache == null ? deleteManifests :
+          Iterables.filter(deleteManifests, manifest ->
+              manifest.content() == ManifestContent.DELETES &&
+                  (manifest.hasAddedFiles() || manifest.hasDeletedFiles()) &&
+                  evalCache.get(manifest.partitionSpecId()).eval(manifest));
+
+      return Iterables.transform(
+          matchingManifests,
+          manifest -> Pair.of(
+              manifest.partitionSpecId(),
+              ManifestFiles.readDeleteManifest(manifest, io, specsById)

Review comment:
       Yes, that's why the same partition and data filters are passed to match delete files here, and why we use the same eval cache to filter delete manifests.
   
   We can add more filtering eventually. For example, once we know the stats columns for a positional delete file, we can check that each data file actually falls within the stats range of the file name column. But we'll add that later, since the delete file readers and writers aren't quite done yet.
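   
   As a reference for how these filters come together, a hedged usage sketch of the builder follows. `DeleteFileIndex` is package-private, so this only compiles inside `org.apache.iceberg`; the helper name and its parameters are illustrative, not part of the PR.
   
       package org.apache.iceberg;  // required: DeleteFileIndex is package-private
   
       import java.util.Map;
       import java.util.concurrent.ExecutorService;
       import org.apache.iceberg.expressions.Expression;
       import org.apache.iceberg.io.FileIO;
   
       class DeleteIndexUsage {
         // hypothetical helper showing how a scan could wire in the same filters
         static DeleteFileIndex buildDeleteIndex(FileIO io, Iterable<ManifestFile> deleteManifests,
                                                 Map<Integer, PartitionSpec> specsById,
                                                 Expression rowFilter, ExecutorService pool) {
           return DeleteFileIndex.builderFor(io, deleteManifests)
               .specsById(specsById)   // needed to build a ManifestEvaluator per spec
               .filterData(rowFilter)  // projected to an inclusive partition predicate
               .caseSensitive(true)
               .planWith(pool)         // delete manifests are read in parallel
               .build();
         }
       }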






[GitHub] [iceberg] rdblue commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
rdblue commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r465910768



##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
+  DeleteFile[] forDataFile(int specId, long sequenceNumber, DataFile file) {
+    Pair<long[], DeleteFile[]> partitionDeletes = sortedDeletesByPartition
+        .get(Pair.of(specId, lookupWrapper.get().set(file.partition())));

Review comment:
       Will do.






[GitHub] [iceberg] rdblue commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
rdblue commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r465913076



##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
+  private static DeleteFile[] limitBySequenceNumber(long sequenceNumber, long[] seqs, DeleteFile[] files) {
+    if (files == null) {
+      return NO_DELETE_FILES;
+    }
+
+    int pos = Arrays.binarySearch(seqs, sequenceNumber);
+    int start;
+    if (pos < 0) {
+      // the sequence number was not found, where it would be inserted is -(pos + 1)
+      start = -(pos + 1);

Review comment:
       Yes, and the tests validate these cases.
   
   If the sequence number is not found and is less than all of the sequence numbers in the array, then the insert position is 0 and the `pos` returned is `-(0 + 1) = -1`. Converting back to a start position gives `-(-1 + 1) = 0`, so we copy the entire array. Similarly, if the sequence number is greater than all of the numbers in the array, the computed start position is `length`, which results in `copyOfRange(files, length, length)` and produces a 0-length array.
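   
   A small standalone check of these edge cases, using plain `long` values in place of delete files (the array must be sorted, as it is after the builder runs):
   
       import java.util.Arrays;
   
       class BinarySearchEdgeCases {
         public static void main(String[] args) {
           long[] seqs = {3, 5, 5, 8};
   
           // not found, below all values: pos = -(0 + 1) = -1, so start = -(-1 + 1) = 0
           System.out.println(Arrays.binarySearch(seqs, 1));  // -1 -> keep the whole array
   
           // not found, above all values: pos = -(4 + 1) = -5, so start = 4 = seqs.length
           System.out.println(Arrays.binarySearch(seqs, 9));  // -5 -> keep nothing
   
           // found, but binarySearch gives no guarantee which duplicate it returns,
           // so walk back to the first entry with this sequence number
           int pos = Arrays.binarySearch(seqs, 5);
           int start = pos;
           while (start > 0 && seqs[start - 1] >= 5) {
             start -= 1;
           }
           System.out.println(start);  // 1, the first index holding 5
   
           // copyOfRange(length, length) is legal and yields an empty array
           System.out.println(Arrays.copyOfRange(seqs, seqs.length, seqs.length).length);  // 0
         }
       }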






[GitHub] [iceberg] rdblue commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
rdblue commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r465913200



##########
File path: core/src/main/java/org/apache/iceberg/DeleteFileIndex.java
##########
@@ -0,0 +1,270 @@
+      // also, separate out equality deletes in an unpartitioned spec that should be applied globally
+      long[] globalApplySeqs = null;
+      DeleteFile[] globalDeletes = null;
+      for (Pair<Integer, StructLikeWrapper> partition : deleteFilesByPartition.keySet()) {
+        if (specsById.get(partition.first()).isUnpartitioned()) {
+          Preconditions.checkState(globalDeletes == null, "Detected multiple partition specs with no partitions");
+
+          List<Pair<Long, DeleteFile>> eqFilesSortedBySeq = deleteFilesByPartition.get(partition).stream()

Review comment:
       That's right.






[GitHub] [iceberg] aokolnychyi merged pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
aokolnychyi merged pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288


   




[GitHub] [iceberg] rdblue commented on pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
rdblue commented on pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#issuecomment-668329803


   @rymurr, @prodeezy, @aokolnychyi, this adds delete files to scan planning for row-level deletes.
   
   I still need to add more tests, but I wanted to get something up so others can start looking at it.




[GitHub] [iceberg] rdblue commented on a change in pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
rdblue commented on a change in pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#discussion_r465221224



##########
File path: core/src/main/java/org/apache/iceberg/BaseFileScanTask.java
##########
@@ -31,14 +31,17 @@
 
 class BaseFileScanTask implements FileScanTask {
   private final DataFile file;
+  private final DeleteFile[] deletes;
   private final String schemaString;
   private final String specString;
   private final ResidualEvaluator residuals;
 
   private transient PartitionSpec spec = null;
 
-  BaseFileScanTask(DataFile file, String schemaString, String specString, ResidualEvaluator residuals) {
+  BaseFileScanTask(DataFile file, DeleteFile[] deletes, String schemaString, String specString,

Review comment:
       This implementation and constructor are internal and package-private. The public API is the `FileScanTask` interface. I went ahead and updated all of the places where we use this, so we don't need an overloaded constructor. I think it is better this way: the parameter is always explicit, so we can't accidentally ignore delete files anywhere.
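   
   For clarity, a hedged sketch of what a call site looks like after this change. The variable names are illustrative, and since `BaseFileScanTask` is package-private, real callers live in `org.apache.iceberg`:
   
       // passing no deletes must now be spelled out explicitly at the call site
       DeleteFile[] noDeletes = new DeleteFile[0];
       FileScanTask task = new BaseFileScanTask(
           dataFile,      // DataFile for this split
           noDeletes,     // explicit, so delete files are never silently dropped
           schemaString,  // serialized schema
           specString,    // serialized partition spec
           residuals);    // ResidualEvaluator for this task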






[GitHub] [iceberg] prodeezy edited a comment on pull request #1288: Update scan planning with DeleteFiles in each task

Posted by GitBox <gi...@apache.org>.
prodeezy edited a comment on pull request #1288:
URL: https://github.com/apache/iceberg/pull/1288#issuecomment-679161954


   Thanks again for this, @rdblue. As you mentioned during the sync, the file index can be used to model any soft-delete mechanism that systems have had to build externally. cc @fbocse @mehtaashish23

