Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/05/10 08:05:01 UTC

[GitHub] [iceberg] singhpk234 commented on a diff in pull request #4738: Spark: add delete info to rewrite job description

singhpk234 commented on code in PR #4738:
URL: https://github.com/apache/iceberg/pull/4738#discussion_r868939809


##########
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/actions/BaseRewriteDataFilesSparkAction.java:
##########
@@ -414,14 +414,14 @@ void validateAndInitOptions() {
   private String jobDesc(RewriteFileGroup group, RewriteExecutionContext ctx) {
     StructLike partition = group.info().partition();
     if (partition.size() > 0) {
-      return String.format("Rewriting %d files (%s, file group %d/%d, %s (%d/%d)) in %s",
-          group.rewrittenFiles().size(),
+      return String.format("Rewriting %d files %d eq deletes %d pos deletes (%s, file group %d/%d, %s (%d/%d)) in %s",
+          group.rewrittenFiles().size(), group.rewrittenEqDeletes(), group.rewrittenPosDeletes(),

Review Comment:
   [minor] should we add commas between the counts? Also, the second and third arguments both call `rewrittenEqDeletes()`; the third should be `rewrittenPosDeletes()`:
   ```suggestion
         return String.format("Rewriting %d files, %d eq deletes, %d pos deletes (%s, file group %d/%d, %s (%d/%d)) in %s",
             group.rewrittenFiles().size(), group.rewrittenEqDeletes(), group.rewrittenPosDeletes(),
   ```
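
   For illustration, here is a hypothetical, stand-alone rendering of the suggested format string. All counts and identifiers below are made up for the example; none of them come from the PR:

   ```java
   public class JobDescDemo {
     public static void main(String[] args) {
       // Same format string as the suggestion above, filled with sample values.
       String desc = String.format(
           "Rewriting %d files, %d eq deletes, %d pos deletes (%s, file group %d/%d, %s (%d/%d)) in %s",
           5, 2, 3, "BINPACK", 1, 4, "partition=a", 1, 2, "db.table");
       System.out.println(desc);
       // Rewriting 5 files, 2 eq deletes, 3 pos deletes (BINPACK, file group 1/4, partition=a (1/2)) in db.table
     }
   }
   ```

   Without the commas, the counts run together ("5 files 2 eq deletes 3 pos deletes"), which is harder to scan in the Spark UI.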



##########
core/src/main/java/org/apache/iceberg/actions/RewriteFileGroup.java:
##########
@@ -60,6 +61,18 @@ public Set<DataFile> rewrittenFiles() {
     return fileScans().stream().map(FileScanTask::file).collect(Collectors.toSet());
   }
 
+  public int rewrittenEqDeletes() {
+    return (int) fileScans().stream().flatMap(f -> f.deletes().stream())
+        .filter(d -> d.content().equals(FileContent.EQUALITY_DELETES))
+        .count();
+  }
+
+  public int rewrittenPosDeletes() {
+    return (int) fileScans().stream().flatMap(f -> f.deletes().stream())
+        .filter(d -> d.content().equals(FileContent.POSITION_DELETES))
+        .count();
+  }

Review Comment:
   [question] A single delete file can be associated with multiple data files; should we de-duplicate before counting? Your thoughts?
   For example:
   1. FSTask1 -> DataFile1, PosDeleteFile1
   2. FSTask2 -> DataFile2, PosDeleteFile1
   
   Here we would report 2 rewritten position deletes, but only one position delete file (PosDeleteFile1) was actually rewritten.
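
   One way to sketch the de-duplicated count. This is a hypothetical stand-alone example: `DeleteFile` and `FileScanTask` below are minimal stand-ins for Iceberg's interfaces, not the real API.

   ```java
   import java.util.List;

   public class DistinctDeleteCount {
     // Minimal stand-ins for Iceberg's DeleteFile/FileScanTask, for illustration only.
     record DeleteFile(String path, String content) {}
     record FileScanTask(List<DeleteFile> deletes) {}

     // Count each position delete file once, even when it is shared by several scan tasks.
     static long distinctPosDeletes(List<FileScanTask> tasks) {
       return tasks.stream()
           .flatMap(t -> t.deletes().stream())
           .filter(d -> d.content().equals("POSITION_DELETES"))
           .map(DeleteFile::path)  // de-duplicate by file path
           .distinct()
           .count();
     }

     public static void main(String[] args) {
       DeleteFile shared = new DeleteFile("pos-delete-1.parquet", "POSITION_DELETES");
       List<FileScanTask> tasks = List.of(
           new FileScanTask(List.of(shared)),   // FSTask1 -> DataFile1, PosDeleteFile1
           new FileScanTask(List.of(shared)));  // FSTask2 -> DataFile2, PosDeleteFile1
       System.out.println(distinctPosDeletes(tasks)); // prints 1, not 2
     }
   }
   ```

   The naive `flatMap(...).count()` in the diff would return 2 for the scenario above, because the shared delete file is visited once per scan task.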



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: issues-help@iceberg.apache.org