Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/07/28 17:53:33 UTC

[GitHub] [iceberg] dungdm93 opened a new pull request, #5375: Chores: using bulk delete AMAP

dungdm93 opened a new pull request, #5375:
URL: https://github.com/apache/iceberg/pull/5375

   Use batch delete (a.k.a. bulk delete) to remove multiple files at once if the `FileIO` implements `SupportsBulkOperations`.
   #4012


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org


[GitHub] [iceberg] dungdm93 commented on pull request #5375: Chores: using bulk delete if it's possible

Posted by GitBox <gi...@apache.org>.
dungdm93 commented on PR #5375:
URL: https://github.com/apache/iceberg/pull/5375#issuecomment-1202890150

   @szehon-ho I'm also thinking about **retry** and **service pool** mechanisms for bulkDelete.
   There are two options:
   * The first option is to let the caller split the files to delete into chunks and pass each chunk to the bulkDelete API, the same way as a normal delete.
       ```java
       Iterable<List<String>> deleteFileChunked = ...;
       Tasks.foreach(deleteFileChunked)
           .retry(3)
           .executeWith(service)
           .run(io::deleteFiles);
       ```
      The drawbacks of this approach are:
         * the chunking is computed twice: once in the caller and once in the `FileIO` (as in the `S3FileIO` implementation)
         * if a request fails, there is no way to retry only the failed files
   * The second option is to have the bulkDelete API return a `TaskLike` object, then let the caller decide how it should run (number of retries, executor service, error handler, etc.)
       ```java
       io.deleteFiles(files) // returns a TaskLike object
         .retry(3)
         .executeWith(service)
         .onFailure(...)
         .execute(); // now the task will be executed
       ```
   
   The second option introduces a breaking change in the bulkDelete API. However, I still prefer it because the API isn't used anywhere yet.
   WDYT?
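
A minimal, standalone sketch of what the second option's deferred-execution object might look like. The names here (`DeleteTask`, the `Predicate`-based deleter) are illustrative assumptions, not the PR's actual API; the point is that execution is deferred until `execute()` and that only failed files are retried:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Hypothetical sketch of option 2: deleteFiles(...) would return a deferred
// task object; nothing runs until execute() is called.
class DeleteTask {
  private final List<String> files;
  private final Predicate<String> deleter; // true = file deleted successfully
  private int retries = 0;
  private Consumer<String> onFailure = file -> {};

  DeleteTask(List<String> files, Predicate<String> deleter) {
    this.files = files;
    this.deleter = deleter;
  }

  DeleteTask retry(int n) {
    this.retries = n;
    return this;
  }

  DeleteTask onFailure(Consumer<String> handler) {
    this.onFailure = handler;
    return this;
  }

  // Unlike retrying the whole batch, only the files that failed are retried.
  List<String> execute() {
    List<String> pending = new ArrayList<>(files);
    for (int attempt = 0; attempt <= retries && !pending.isEmpty(); attempt++) {
      List<String> failed = new ArrayList<>();
      for (String file : pending) {
        if (!deleter.test(file)) {
          failed.add(file);
        }
      }
      pending = failed;
    }
    pending.forEach(onFailure);
    return pending; // files still undeleted after all retries
  }
}
```

With a shape like this, the caller keeps control over retry count, failure handling, and (in a fuller version) the executor service, which is the customizability being argued for above.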




[GitHub] [iceberg] szehon-ho commented on a diff in pull request #5375: Chores: using bulk delete if it's possible

Posted by GitBox <gi...@apache.org>.
szehon-ho commented on code in PR #5375:
URL: https://github.com/apache/iceberg/pull/5375#discussion_r936072230


##########
core/src/main/java/org/apache/iceberg/util/FileIOUtil.java:
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.util;
+
+import java.util.concurrent.ExecutorService;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ManifestFile;
+import org.apache.iceberg.io.BulkDeletionFailureException;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.io.SupportsBulkOperations;
+import org.apache.iceberg.relocated.com.google.common.collect.Iterables;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class FileIOUtil {
+  private static final Logger LOG = LoggerFactory.getLogger(FileIOUtil.class);
+
+  public static BulkDeleter bulkDeleteManifests(FileIO io, Iterable<ManifestFile> files) {
+    return bulkDelete(io, Iterables.transform(files, ManifestFile::path));
+  }
+
+  public static <C extends ContentFile<?>> BulkDeleter bulkDeleteFiles(
+      FileIO io, Iterable<C> files) {
+    return bulkDelete(io, Iterables.transform(files, file -> file.path().toString()));
+  }
+
+  public static BulkDeleter bulkDelete(FileIO io, Iterable<String> files) {
+    return new BulkDeleter(io, files);
+  }
+
+  public static BulkDeleter bulkDelete(FileIO io, String file) {
+    return new BulkDeleter(io, Sets.newHashSet(file));
+  }
+
+  public static class BulkDeleter {
+    private final FileIO io;
+    private final Iterable<String> files;
+    private String name = "files";
+    private ExecutorService service = null;
+
+    private BulkDeleter(FileIO io, Iterable<String> files) {
+      this.io = io;
+      this.files = files;
+    }
+
+    public BulkDeleter name(String newName) {
+      this.name = newName;
+      return this;
+    }
+
+    public BulkDeleter executeWith(ExecutorService svc) {
+      this.service = svc;
+      return this;
+    }
+
+    public void execute() {
+      if (io instanceof SupportsBulkOperations) {
+        try {
+          SupportsBulkOperations bulkIO = (SupportsBulkOperations) io;
+          bulkIO.deleteFiles(files);
+        } catch (BulkDeletionFailureException e) {
+          LOG.warn("Failed to delete {} {}", e.numberFailedObjects(), name);
+        } catch (Exception e) {
+          // ignore
+        }
+      } else {
+        Tasks.foreach(files)
+            .noRetry()
+            .executeWith(service)
+            .suppressFailureWhenFinished()
+            .onFailure((file, exc) -> LOG.warn("Delete failed for {}: {}", name, file, exc))

Review Comment:
   Never mind about this.





[GitHub] [iceberg] szehon-ho commented on pull request #5375: Chores: using bulk delete if it's possible

Posted by GitBox <gi...@apache.org>.
szehon-ho commented on PR #5375:
URL: https://github.com/apache/iceberg/pull/5375#issuecomment-1203269802

   I think we need to coordinate with @amogh-jahagirdar, who is also working on this in #5373. There seem to be a few PRs trying to use the bulk delete; let's see if we can push the retry logic down into the FileIO.




[GitHub] [iceberg] dramaticlly commented on pull request #5375: Chores: using bulk delete AMAP

Posted by GitBox <gi...@apache.org>.
dramaticlly commented on PR #5375:
URL: https://github.com/apache/iceberg/pull/5375#issuecomment-1201849380

   Also want to share a similar change for expire-snapshots to use bulk deletion: https://github.com/apache/iceberg/pull/5412




[GitHub] [iceberg] amogh-jahagirdar commented on pull request #5375: Chores: using bulk delete if it's possible

Posted by GitBox <gi...@apache.org>.
amogh-jahagirdar commented on PR #5375:
URL: https://github.com/apache/iceberg/pull/5375#issuecomment-1203433139

   @dungdm93 Not sure about comparing transactions and record writers, but I think I understand what you're getting at. Another view on this (and I think this is what you are thinking; let me know if it's not) is that procedures should be in control of their retry strategy, failure handling, and so on, not the FileIO implementation, because procedures/actions may want different configurations at different times.
   
   My only doubt is that building a public util which wraps the existing Tasks framework mainly for batch deletion seems heavy-handed at this point in time. My feeling is that we shouldn't add public util classes or public APIs of any kind unless we are really sure they will get used, which is why I put the logic in the FileIO implementation for simplicity; that seems more easily reversible in case folks want more configuration of retry behavior for different procedures. If we know that we want this customizability up front, then I think this makes sense.
   
   Would like to get the community's thoughts! @dungdm93 @szehon-ho @danielcweeks @rdblue 




[GitHub] [iceberg] szehon-ho commented on pull request #5375: Chores: using bulk delete if it's possible

Posted by GitBox <gi...@apache.org>.
szehon-ho commented on PR #5375:
URL: https://github.com/apache/iceberg/pull/5375#issuecomment-1204248363

   > I don't think we should put retry logic into bulk delete, because different clients may want different configs. For example, BaseTransaction uses noRetry while HiveIcebergRecordWriter uses [retry(3)](https://github.com/apache/iceberg/blob/263441752393834c384a04d861cda1b8cb136a63/mr/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergRecordWriter.java#L101)
   
   > @dungdm93 Not sure about comparing transactions and record writers, but I think I understand what you're getting at. Another view on this (and I think this is what you are thinking; let me know if it's not) is that procedures should be in control of their retry strategy, failure handling, and so on, not the FileIO implementation, because procedures/actions may want different configurations at different times.
   
   To me, if we have decided to push the batching logic down into S3FileIO.deleteFiles(), which seems to be the case, I'm not sure I see another way than for S3FileIO to handle the retry logic. The caller cannot retry the whole thing in bulk, because some batches may have succeeded while others failed, right? If that's the case, the only thing we can do is have callers pass different retry options to S3FileIO, to preserve their original intent. For example, expireSnapshot has retry(3) while the ones here have no retry, like you mentioned. Or maybe I'm not being imaginative enough.
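
If retry does move into the FileIO, one way to avoid re-deleting batches that already succeeded is to retry at batch granularity inside deleteFiles. The sketch below is a standalone illustration, not S3FileIO's actual code; `deleteBatch`, the batch size, and the return type are all assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: retry lives inside the bulk-delete implementation,
// at batch granularity, so batches that succeed are never re-sent.
class BatchedDeleter {
  private final Predicate<List<String>> deleteBatch; // true = batch succeeded
  private final int batchSize;
  private final int maxRetries;

  BatchedDeleter(Predicate<List<String>> deleteBatch, int batchSize, int maxRetries) {
    this.deleteBatch = deleteBatch;
    this.batchSize = batchSize;
    this.maxRetries = maxRetries;
  }

  // Returns the batches that still failed after all retries.
  List<List<String>> deleteFiles(List<String> files) {
    List<List<String>> failed = new ArrayList<>();
    for (int i = 0; i < files.size(); i += batchSize) {
      List<String> batch = files.subList(i, Math.min(i + batchSize, files.size()));
      boolean ok = false;
      for (int attempt = 0; attempt <= maxRetries && !ok; attempt++) {
        ok = deleteBatch.test(batch);
      }
      if (!ok) {
        failed.add(new ArrayList<>(batch)); // copy: subList is a view
      }
    }
    return failed;
  }
}
```

Under this shape, callers could still express intent (e.g. expireSnapshot's retry(3) vs. no retry here) by passing a maxRetries option through to the FileIO, which is the coordination question raised above.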




[GitHub] [iceberg] dungdm93 commented on pull request #5375: Chores: using bulk delete if it's possible

Posted by GitBox <gi...@apache.org>.
dungdm93 commented on PR #5375:
URL: https://github.com/apache/iceberg/pull/5375#issuecomment-1203422405

   I don't think we should put retry logic into bulk delete, because different clients may want different configs. For example, `BaseTransaction` uses `noRetry` while `HiveIcebergRecordWriter` uses [`retry(3)`](https://github.com/apache/iceberg/blob/263441752393834c384a04d861cda1b8cb136a63/mr/src/main/java/org/apache/iceberg/mr/hive/HiveIcebergRecordWriter.java#L101)




[GitHub] [iceberg] szehon-ho commented on a diff in pull request #5375: Chores: using bulk delete AMAP

Posted by GitBox <gi...@apache.org>.
szehon-ho commented on code in PR #5375:
URL: https://github.com/apache/iceberg/pull/5375#discussion_r934783633


##########
core/src/main/java/org/apache/iceberg/util/FileIOUtil.java:
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.util;
+
+import java.util.concurrent.ExecutorService;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ManifestFile;
+import org.apache.iceberg.io.BulkDeletionFailureException;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.io.SupportsBulkOperations;
+import org.apache.iceberg.relocated.com.google.common.collect.Iterables;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class FileIOUtil {
+  private static final Logger LOG = LoggerFactory.getLogger(FileIOUtil.class);
+
+  public static BulkDeleter bulkDeleteManifests(FileIO io, Iterable<ManifestFile> files) {
+    return bulkDelete(io, Iterables.transform(files, ManifestFile::path));
+  }
+
+  public static <C extends ContentFile<?>> BulkDeleter bulkDeleteFiles(
+      FileIO io, Iterable<C> files) {
+    return bulkDelete(io, Iterables.transform(files, file -> file.path().toString()));
+  }
+
+  public static BulkDeleter bulkDelete(FileIO io, Iterable<String> files) {
+    return new BulkDeleter(io, files);
+  }
+
+  public static BulkDeleter bulkDelete(FileIO io, String file) {
+    return new BulkDeleter(io, Sets.newHashSet(file));
+  }
+
+  public static class BulkDeleter {
+    private final FileIO io;
+    private final Iterable<String> files;
+    private String name = "files";
+    private ExecutorService service = null;
+
+    private BulkDeleter(FileIO io, Iterable<String> files) {
+      this.io = io;
+      this.files = files;
+    }
+
+    public BulkDeleter name(String newName) {
+      this.name = newName;
+      return this;
+    }
+
+    public BulkDeleter executeWith(ExecutorService svc) {
+      this.service = svc;
+      return this;
+    }
+
+    public void execute() {
+      if (io instanceof SupportsBulkOperations) {
+        try {
+          SupportsBulkOperations bulkIO = (SupportsBulkOperations) io;
+          bulkIO.deleteFiles(files);

Review Comment:
   Should we use the configured 'service' pool?



##########
spark/v3.3/spark/src/main/java/org/apache/iceberg/spark/SparkTableUtil.java:
##########
@@ -678,11 +678,9 @@ public static List<SparkPartition> filterPartitions(
   }
 
   private static void deleteManifests(FileIO io, List<ManifestFile> manifests) {
-    Tasks.foreach(manifests)
+    FileIOUtil.bulkDeleteManifests(io, manifests)

Review Comment:
   Looks like we don't pass a name here. Thinking about it, if a name is always recommended, why not make it a mandatory parameter of the bulkDelete API?



##########
core/src/main/java/org/apache/iceberg/util/FileIOUtil.java:
##########
@@ -0,0 +1,93 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.iceberg.util;
+
+import java.util.concurrent.ExecutorService;
+import org.apache.iceberg.ContentFile;
+import org.apache.iceberg.ManifestFile;
+import org.apache.iceberg.io.BulkDeletionFailureException;
+import org.apache.iceberg.io.FileIO;
+import org.apache.iceberg.io.SupportsBulkOperations;
+import org.apache.iceberg.relocated.com.google.common.collect.Iterables;
+import org.apache.iceberg.relocated.com.google.common.collect.Sets;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+public class FileIOUtil {
+  private static final Logger LOG = LoggerFactory.getLogger(FileIOUtil.class);
+
+  public static BulkDeleter bulkDeleteManifests(FileIO io, Iterable<ManifestFile> files) {
+    return bulkDelete(io, Iterables.transform(files, ManifestFile::path));
+  }
+
+  public static <C extends ContentFile<?>> BulkDeleter bulkDeleteFiles(
+      FileIO io, Iterable<C> files) {
+    return bulkDelete(io, Iterables.transform(files, file -> file.path().toString()));
+  }
+
+  public static BulkDeleter bulkDelete(FileIO io, Iterable<String> files) {
+    return new BulkDeleter(io, files);
+  }
+
+  public static BulkDeleter bulkDelete(FileIO io, String file) {
+    return new BulkDeleter(io, Sets.newHashSet(file));
+  }
+
+  public static class BulkDeleter {
+    private final FileIO io;
+    private final Iterable<String> files;
+    private String name = "files";
+    private ExecutorService service = null;
+
+    private BulkDeleter(FileIO io, Iterable<String> files) {
+      this.io = io;
+      this.files = files;
+    }
+
+    public BulkDeleter name(String newName) {
+      this.name = newName;
+      return this;
+    }
+
+    public BulkDeleter executeWith(ExecutorService svc) {
+      this.service = svc;
+      return this;
+    }
+
+    public void execute() {
+      if (io instanceof SupportsBulkOperations) {
+        try {
+          SupportsBulkOperations bulkIO = (SupportsBulkOperations) io;
+          bulkIO.deleteFiles(files);
+        } catch (BulkDeletionFailureException e) {
+          LOG.warn("Failed to delete {} {}", e.numberFailedObjects(), name);
+        } catch (Exception e) {
+          // ignore
+        }
+      } else {
+        Tasks.foreach(files)
+            .noRetry()
+            .executeWith(service)
+            .suppressFailureWhenFinished()
+            .onFailure((file, exc) -> LOG.warn("Delete failed for {}: {}", name, file, exc))

Review Comment:
   Are we missing a parameter?


