Posted to issues@iceberg.apache.org by GitBox <gi...@apache.org> on 2022/07/29 23:23:39 UTC

[GitHub] [iceberg] aokolnychyi commented on a diff in pull request #3457: Spark: Improve performance of expire snapshot by not double-scanning retained Snapshots

aokolnychyi commented on code in PR #3457:
URL: https://github.com/apache/iceberg/pull/3457#discussion_r933682957


##########
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/actions/ExpireSnapshotsSparkAction.java:
##########
@@ -222,13 +228,34 @@ private ExpireSnapshots.Result doExecute() {
     }
   }
 
+  /**
+   * Builds a dataset of reachable files from the given table metadata
+   *
+   * @param metadata table metadata
+   * @return a dataset of files: schema(file_path, file_type)
+   */
   private Dataset<Row> buildValidFileDF(TableMetadata metadata) {
     Table staticTable = newStaticTable(metadata, table.io());
     return buildValidContentFileWithTypeDF(staticTable)
         .union(withFileType(buildManifestFileDF(staticTable), MANIFEST))
         .union(withFileType(buildManifestListDF(staticTable), MANIFEST_LIST));
   }
 
+  /**
+   * Builds a dataset of reachable files from the given table metadata, with a snapshot filter
+   *
+   * @param metadata table metadata
+   * @param snapshotsToExclude IDs of snapshots whose reachable files will be filtered out
+   * @return a dataset of files: schema(file_path, file_type)
+   */
+  private Dataset<Row> buildFilteredValidDataDF(
+      TableMetadata metadata, Set<Long> snapshotsToExclude) {
+    Table staticTable = newStaticTable(metadata, table.io());
+    return buildValidContentFileWithTypeDF(staticTable, snapshotsToExclude)

Review Comment:
   Two questions.
   
   - Will we scan `ALL_MANIFESTS` twice? Once for content files and once for manifests?
   - Will we open extra manifests when building the reachability set for expired snapshots? I don't think the current implementation removes manifests that are still in the table from the expired set.
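   
   For what it's worth, here is a minimal standalone sketch of the double-scan concern (hypothetical names, not the PR's code; `readContentFiles` stands in for the flatMap that opens each manifest). Both branches of the union start from the same `ALL_MANIFESTS` dataset, so Spark scans the metadata table once per branch unless it is materialized and reused:
   
   ```java
   import org.apache.spark.sql.Dataset;
   import org.apache.spark.sql.Row;
   import org.apache.spark.storage.StorageLevel;
   
   class DoubleScanSketch {
     // Both results derive from the same ALL_MANIFESTS metadata table.
     Dataset<Row> buildValidFileDF(Dataset<Row> allManifests) {
       Dataset<Row> contentFiles = readContentFiles(allManifests); // scan #1
       Dataset<Row> manifestPaths = allManifests.select("path");   // scan #2
       return contentFiles.union(manifestPaths);
     }
   
     // One way to avoid the second scan: persist the manifests once and reuse them.
     Dataset<Row> buildValidFileDFReused(Dataset<Row> allManifests) {
       Dataset<Row> cached = allManifests.persist(StorageLevel.MEMORY_AND_DISK());
       return readContentFiles(cached).union(cached.select("path"));
     }
   
     // Hypothetical stand-in: opens each manifest and returns its file paths.
     private Dataset<Row> readContentFiles(Dataset<Row> manifests) {
       throw new UnsupportedOperationException("sketch only");
     }
   }
   ```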



##########
spark/v3.2/spark/src/main/java/org/apache/iceberg/spark/actions/ExpireSnapshotsSparkAction.java:
##########
@@ -171,11 +173,15 @@ public Dataset<Row> expire() {
 
       expireSnapshots.commit();
 
-      // fetch metadata after expiration
-      Dataset<Row> validFiles = buildValidFileDF(ops.refresh());
+      TableMetadata updatedTable = ops.refresh();
+      Set<Long> retainedSnapshots =
+          updatedTable.snapshots().stream().map(Snapshot::snapshotId).collect(Collectors.toSet());
+      Dataset<Row> validFiles = buildValidFileDF(updatedTable);
+      Dataset<Row> deleteCandidateFiles =

Review Comment:
   Am I correct that we are trying to build the reachability set of expired snapshots? Would it be easier to write this logic in terms of expired snapshots instead of retained snapshots? Right now, we pass snapshots to ignore, which takes a bit of time to wrap your head around. Would it be easier if we passed snapshots we are looking for?
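   
   As a hedged sketch of that direction (class and method names are hypothetical, not from the PR): capture the metadata before and after the expire commit, diff the snapshot lists to get the expired IDs, and pass those to the reachability scan instead of the retained set:
   
   ```java
   import java.util.HashSet;
   import java.util.Set;
   import java.util.stream.Collectors;
   import org.apache.iceberg.Snapshot;
   import org.apache.iceberg.TableMetadata;
   
   class ExpiredIdsSketch {
     // Snapshot IDs present before the expire commit but gone afterwards.
     static Set<Long> expiredSnapshotIds(TableMetadata before, TableMetadata after) {
       Set<Long> expired = before.snapshots().stream()
           .map(Snapshot::snapshotId)
           .collect(Collectors.toCollection(HashSet::new));
       after.snapshots().stream().map(Snapshot::snapshotId).forEach(expired::remove);
       return expired;
     }
   }
   ```
   
   Passing the expired IDs would make the intent explicit: the scan targets exactly the snapshots being expired, rather than everything except the retained ones.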



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@iceberg.apache.org
For additional commands, e-mail: issues-help@iceberg.apache.org