Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2020/06/19 02:20:39 UTC

[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #943: HDDS-3615. Call cleanup on tables only when double buffer has transactions related to tables.

bharatviswa504 commented on a change in pull request #943:
URL: https://github.com/apache/hadoop-ozone/pull/943#discussion_r442596111



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerDoubleBuffer.java
##########
@@ -286,30 +301,45 @@ private void flushTransactions() {
     }
   }
 
-  private void cleanupCache(List<Long> lastRatisTransactionIndex) {
-    // As now only volume and bucket transactions are handled only called
-    // cleanupCache on bucketTable.
-    // TODO: After supporting all write operations we need to call
-    //  cleanupCache on the tables only when buffer has entries for that table.
-    omMetadataManager.getBucketTable().cleanupCache(lastRatisTransactionIndex);
-    omMetadataManager.getVolumeTable().cleanupCache(lastRatisTransactionIndex);
-    omMetadataManager.getUserTable().cleanupCache(lastRatisTransactionIndex);
-
-    //TODO: Optimization we can do here is for key transactions we can only
-    // cleanup cache when it is key commit transaction. In this way all
-    // intermediate transactions for a key will be read from in-memory cache.
-    omMetadataManager.getOpenKeyTable().cleanupCache(lastRatisTransactionIndex);
-    omMetadataManager.getKeyTable().cleanupCache(lastRatisTransactionIndex);
-    omMetadataManager.getDeletedTable().cleanupCache(lastRatisTransactionIndex);
-    omMetadataManager.getS3Table().cleanupCache(lastRatisTransactionIndex);
-    omMetadataManager.getMultipartInfoTable().cleanupCache(
-        lastRatisTransactionIndex);
-    omMetadataManager.getS3SecretTable().cleanupCache(
-        lastRatisTransactionIndex);
-    omMetadataManager.getDelegationTokenTable().cleanupCache(
-        lastRatisTransactionIndex);
-    omMetadataManager.getPrefixTable().cleanupCache(lastRatisTransactionIndex);
+  /**
+   * Set cleanup epoch for the DoubleBufferEntry.
+   * @param entry
+   * @param cleanupEpochs
+   */
+  private void setCleanupEpoch(DoubleBufferEntry entry, Map<String,
+      List<Long>> cleanupEpochs) {
+    // Add epochs depending on operated tables. In this way
+    // cleanup will be called only when required.
+
+    // As bucket and volume table is full cache add cleanup
+    // epochs only when request is delete to cleanup deleted
+    // entries.
+
+    String opName =
+        entry.getResponse().getOMResponse().getCmdType().name();
+
+    if (opName.toLowerCase().contains(VOLUME) ||
+        opName.toLowerCase().contains(BUCKET)) {
+      if (DeleteBucket.name().equals(opName)
+          || DeleteVolume.name().equals(opName)) {
+        entry.getResponse().operatedTables().forEach(
+            table -> cleanupEpochs.get(table)
+                .add(entry.getTrxLogIndex()));
+      }
+    } else {
+      entry.getResponse().operatedTables().forEach(

Review comment:
       When the operation is of type volume/bucket, the entry should be added only for DeleteVolume/DeleteBucket. Since the volume/bucket tables are full caches, their cache should be cleaned up only for delete operations.
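
The logic the review asks for can be sketched in isolation. This is a minimal, hedged illustration only: the class, table names, and op names below are simplified string stand-ins, not the real `OzoneManagerDoubleBuffer`, `OMResponse`, or `DoubleBufferEntry` types from the PR.

```java
import java.util.*;

// Simplified stand-in for the cleanup-epoch bookkeeping discussed above.
// Tables and op names are plain strings; the real code reads them from
// entry.getResponse().getOMResponse().getCmdType() and operatedTables().
public class CleanupEpochSketch {

    // Volume/bucket tables keep a full cache: every entry stays cached,
    // so cleanup is only needed when a delete removes an entry.
    static final Set<String> FULL_CACHE_TABLES =
        Set.of("volumeTable", "bucketTable");
    static final Set<String> DELETE_OPS =
        Set.of("DeleteVolume", "DeleteBucket");

    // Record the transaction's index as a cleanup epoch for each table it
    // touched, skipping full-cache tables unless the op was a delete.
    static void setCleanupEpoch(String opName, long trxIndex,
            List<String> operatedTables,
            Map<String, List<Long>> cleanupEpochs) {
        boolean isDelete = DELETE_OPS.contains(opName);
        for (String table : operatedTables) {
            if (FULL_CACHE_TABLES.contains(table) && !isDelete) {
                continue; // full cache: nothing stale to evict on create/update
            }
            cleanupEpochs
                .computeIfAbsent(table, k -> new ArrayList<>())
                .add(trxIndex);
        }
    }

    public static void main(String[] args) {
        Map<String, List<Long>> epochs = new HashMap<>();
        setCleanupEpoch("CreateBucket", 1L, List.of("bucketTable"), epochs);
        setCleanupEpoch("DeleteBucket", 2L, List.of("bucketTable"), epochs);
        setCleanupEpoch("CreateKey", 3L,
            List.of("keyTable", "openKeyTable"), epochs);
        // bucketTable only records the delete epoch; keyTable records all.
        System.out.println(epochs.get("bucketTable"));
        System.out.println(epochs.get("keyTable"));
    }
}
```

With this shape, a CreateBucket transaction leaves no cleanup work for the full-cache bucket table, while every key-table transaction is recorded, which is exactly the per-table behavior the reviewer is requesting.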

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org