Posted to commits@hudi.apache.org by GitBox <gi...@apache.org> on 2020/08/06 08:59:53 UTC

[GitHub] [hudi] vinothchandar commented on a change in pull request #1858: [HUDI-1014] Adding Upgrade and downgrade infra for smooth transitioning from list based rollback to marker based rollback

vinothchandar commented on a change in pull request #1858:
URL: https://github.com/apache/hudi/pull/1858#discussion_r464714682



##########
File path: hudi-client/src/main/java/org/apache/hudi/client/AbstractHoodieWriteClient.java
##########
@@ -186,10 +188,14 @@ public HoodieMetrics getMetrics() {
    * Get HoodieTable and init {@link Timer.Context}.
    *
    * @param operationType write operation type
+   * @param instantTime current inflight instant time
    * @return HoodieTable
    */
-  protected HoodieTable getTableAndInitCtx(WriteOperationType operationType) {
+  protected HoodieTable getTableAndInitCtx(WriteOperationType operationType, String instantTime) {
     HoodieTableMetaClient metaClient = createMetaClient(true);
+    if (config.shouldRollbackUsingMarkers()) {

Review comment:
      we should do this regardless of whether rollback using markers is on/off
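      (For illustration, a minimal sketch of the restructuring this comment asks for, assuming the
      block guarded by config.shouldRollbackUsingMarkers() is the new upgrade/downgrade hook on the
      inflight instant; the helper names below are hypothetical, not the actual PR code.)

          protected HoodieTable getTableAndInitCtx(WriteOperationType operationType, String instantTime) {
            HoodieTableMetaClient metaClient = createMetaClient(true);
            // Per the review: run this step unconditionally, for both marker-based and
            // listing-based rollback, rather than only when markers are enabled.
            runUpgradeOrDowngradeHook(metaClient, instantTime);          // hypothetical helper standing in for the guarded block
            return initTableAndTimerContext(metaClient, operationType);  // hypothetical helper for the remainder of the method
          }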

##########
File path: hudi-client/src/main/java/org/apache/hudi/table/action/rollback/RollbackUtils.java
##########
@@ -63,4 +84,156 @@ static HoodieRollbackStat mergeRollbackStat(HoodieRollbackStat stat1, HoodieRoll
     return new HoodieRollbackStat(stat1.getPartitionPath(), successDeleteFiles, failedDeleteFiles, commandBlocksCount);
   }
 
+  /**

Review comment:
       is this just moving code in bulk?

##########
File path: hudi-cli/src/main/java/org/apache/hudi/cli/commands/SparkMain.java
##########
@@ -329,9 +341,34 @@ private static int deleteSavepoint(JavaSparkContext jsc, String savepointTime, S
     }
   }
 
+  /**
+   * Upgrade or downgrade hoodie table.
+   * @param jsc instance of {@link JavaSparkContext} to use.
+   * @param basePath base path of the dataset.
+   * @param toVersion version to which upgrade/downgrade to be done.
+   * @return 0 if success, else -1.
+   * @throws Exception
+   */
+  protected static int upgradeOrDowngradeHoodieDataset(JavaSparkContext jsc, String basePath, String toVersion) throws Exception {
+    HoodieWriteConfig config = getWriteConfig(basePath);
+    HoodieTableMetaClient metaClient = ClientUtils.createMetaClient(jsc.hadoopConfiguration(), config, false);
+    try {
+      UpgradeDowngradeUtil.doUpgradeOrDowngrade(metaClient, HoodieTableVersion.valueOf(toVersion), config, jsc, null);

Review comment:
       rename: `UpgradeDowngradeUtil.migrate(..)`
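      (For illustration, a sketch of how the call site might read with the suggested rename applied.
      The completion of the try/catch is guessed, since the diff is truncated at the review anchor,
      and the use of SparkMain's LOG field is an assumption; this is not the actual PR code.)

          protected static int upgradeOrDowngradeHoodieDataset(JavaSparkContext jsc, String basePath, String toVersion) throws Exception {
            HoodieWriteConfig config = getWriteConfig(basePath);
            HoodieTableMetaClient metaClient = ClientUtils.createMetaClient(jsc.hadoopConfiguration(), config, false);
            try {
              // Renamed per the suggestion: doUpgradeOrDowngrade(..) -> migrate(..)
              UpgradeDowngradeUtil.migrate(metaClient, HoodieTableVersion.valueOf(toVersion), config, jsc, null);
              return 0;
            } catch (Exception e) {
              LOG.warn("Failed to upgrade/downgrade table at " + basePath, e);
              return -1;
            }
          }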

##########
File path: hudi-common/src/test/java/org/apache/hudi/common/testutils/HoodieTestUtils.java
##########
@@ -279,6 +287,23 @@ public static String createDataFile(String basePath, String partitionPath, Strin
     return fileID;
   }
 
+  public static void createMarkerFile(String basePath, String partitionPath, String instantTime, String dataFileName) throws IOException {

Review comment:
      we should keep these to just HoodieClientTestUtils, since markers are just an artifact of the client
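      (For illustration, a minimal sketch of what such a helper could look like if it lived in
      HoodieClientTestUtils instead; the marker directory layout and the file-name suffix are
      assumptions, not the actual PR code.)

          import java.io.IOException;
          import java.nio.file.Files;
          import java.nio.file.Path;
          import java.nio.file.Paths;

          public static void createMarkerFile(String basePath, String partitionPath, String instantTime,
                                              String dataFileName) throws IOException {
            // Assumed layout: markers live under <basePath>/.hoodie/.temp/<instantTime>/<partitionPath>/
            Path markerDir = Paths.get(basePath, ".hoodie", ".temp", instantTime, partitionPath);
            Files.createDirectories(markerDir);
            // An empty marker file whose name encodes the data file it corresponds to
            // ("<dataFileName>.marker.CREATE" is an assumed naming convention).
            Path markerFile = markerDir.resolve(dataFileName + ".marker.CREATE");
            if (!Files.exists(markerFile)) {
              Files.createFile(markerFile);
            }
          }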




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org