Posted to issues@ozone.apache.org by GitBox <gi...@apache.org> on 2020/09/17 18:27:20 UTC

[GitHub] [hadoop-ozone] errose28 opened a new pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

errose28 opened a new pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435


   ## What changes were proposed in this pull request?
   
   Implement the OM request and response for moving keys from the open key table to the deleted table. These will be used under parent Jira HDDS-4120 to implement the open key cleanup service.
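
   The move described above can be sketched with plain maps standing in for the RocksDB-backed tables (class and method names here are hypothetical stand-ins, not the real Ozone types):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   public class OpenKeyMoveDemo {

     // Within one logical operation, remove the key from the open key table
     // and record it in the deleted table. In the real code this happens as
     // part of an uncommitted batch operation.
     static void moveToDeletedTable(Map<String, String> openKeyTable,
         Map<String, String> deletedTable, String keyName) {
       String keyInfo = openKeyTable.remove(keyName);
       if (keyInfo != null) {
         deletedTable.put(keyName, keyInfo);
       }
     }

     public static void main(String[] args) {
       Map<String, String> open = new HashMap<>();
       Map<String, String> deleted = new HashMap<>();
       open.put("/vol/bucket/key1", "keyInfo");
       moveToDeletedTable(open, deleted, "/vol/bucket/key1");
       assert open.isEmpty();
       assert deleted.containsKey("/vol/bucket/key1");
     }
   }
   ```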
   
   ## What is the link to the Apache JIRA
   
   HDDS-4122
   
   ## How was this patch tested?
   
   Unit tests were added for the new OMRequest and OMResponse classes.
   
   ## Notes
   
   Leaving as draft while I incorporate HDDS-4053 into the OM request and response.
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
bharatviswa504 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r501938720



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##########
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+        .OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Base class for responses that need to move keys from an arbitrary table to
+ * the deleted table.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+      @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+    super(omResponse);
+    this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+    super(omResponse);
+    checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void addDeletionToBatch(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo) throws IOException {
+
+    addDeletionToBatch(omMetadataManager, batchOperation, fromTable, keyName,
+        omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void addDeletionToBatch(

Review comment:
       Minor: Can we merge these two functions.
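
   One way the two overloads could be merged, as a sketch with simplified stand-in types (KeyInfo and DeletionBatch are hypothetical, not the real Ozone classes): drop the two-argument form and have callers pass the update ID explicitly.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class MergeOverloadsDemo {

     static class KeyInfo {
       private final long updateID;
       KeyInfo(long updateID) { this.updateID = updateID; }
       long getUpdateID() { return updateID; }
     }

     static class DeletionBatch {
       final List<String> ops = new ArrayList<>();

       // Single merged method: callers that previously relied on the short
       // overload now pass keyInfo.getUpdateID() at the call site.
       void addDeletion(String keyName, KeyInfo keyInfo, long trxnLogIndex) {
         ops.add(keyName + "@" + trxnLogIndex);
       }
     }

     public static void main(String[] args) {
       DeletionBatch batch = new DeletionBatch();
       KeyInfo info = new KeyInfo(42L);
       // What used to be the two-argument overload:
       batch.addDeletion("/vol/bucket/key1", info, info.getUpdateID());
       assert batch.ops.get(0).equals("/vol/bucket/key1@42");
     }
   }
   ```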

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##########
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+        .OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Base class for responses that need to move keys from an arbitrary table to
+ * the deleted table.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+      @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+    super(omResponse);
+    this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+    super(omResponse);
+    checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void addDeletionToBatch(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo) throws IOException {
+
+    addDeletionToBatch(omMetadataManager, batchOperation, fromTable, keyName,
+        omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void addDeletionToBatch(

Review comment:
       Previously we used updateID to detect replay of a transaction. Now we are no longer using updateID.
   But from my understanding, updateID should have been set to the transaction index even before.
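
   The replay check being discussed can be illustrated with a minimal stand-in (this class is hypothetical, not the real Ozone implementation): a transaction is treated as a replay when the key's updateID is already at or past the incoming log index.

   ```java
   public class ReplayCheckDemo {

     // Returns true when the incoming transaction has already been applied
     // to this key and should be skipped.
     static boolean isReplay(long keyUpdateID, long trxnLogIndex) {
       return keyUpdateID >= trxnLogIndex;
     }

     public static void main(String[] args) {
       assert isReplay(10L, 10L);   // same index: already applied
       assert isReplay(12L, 10L);   // key is ahead of the log: replay
       assert !isReplay(9L, 10L);   // new transaction: apply it
     }
   }
   ```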
   






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
bharatviswa504 commented on pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#issuecomment-702871382


   I will take a look at it today.
   Thanks, @avijayanhwx for tagging.




[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
bharatviswa504 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r499683786



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);

Review comment:
       Not exactly. What I mean here is that by the time we process request1, we add the same object to the double buffer, and if another thread processing request2 updates it, that will update the DB state as well (technically this should happen after adding the response to the double buffer).
   
   The cache is for holding in-flight updates that are not yet committed to the DB; I see no issue with that, this is by design.
   
   Divergence 1 should not exist if volume locks are held.






[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
avijayanhwx commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r497488345



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
##########
@@ -553,17 +554,43 @@ protected boolean checkDirectoryAlreadyExists(String volumeName,
   }
 
   /**
-   * Return volume info for the specified volume.
+   * Return volume info for the specified volume. If the volume does not
+   * exist, returns {@code null}.
    * @param omMetadataManager
    * @param volume
    * @return OmVolumeArgs
    * @throws IOException
    */
   protected OmVolumeArgs getVolumeInfo(OMMetadataManager omMetadataManager,
       String volume) {
-    return omMetadataManager.getVolumeTable().getCacheValue(
-        new CacheKey<>(omMetadataManager.getVolumeKey(volume)))
-        .getCacheValue();
+
+    OmVolumeArgs volumeArgs = null;
+
+    CacheValue<OmVolumeArgs> value =
+        omMetadataManager.getVolumeTable().getCacheValue(
+        new CacheKey<>(omMetadataManager.getVolumeKey(volume)));
+
+    if (value != null) {
+      volumeArgs = value.getCacheValue();
+    }
+
+    return volumeArgs;
+  }
+
+  /**
+   * @return the number of bytes used by blocks pointed to by {@code omKeyInfo}.
+   */
+  protected static long sumBlockLengths(OmKeyInfo omKeyInfo) {
+    long bytesUsed = 0;
+    int keyFactor = omKeyInfo.getFactor().getNumber();
+    OmKeyLocationInfoGroup keyLocationGroup =
+        omKeyInfo.getLatestVersionLocations();

Review comment:
       Good point to update the used bytes while doing this cleanup. I am wondering what this would mean with multiple key version support in the future. We do not seem to store the "version" of the current open key.
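
   The byte accounting referred to here can be sketched as: used bytes equal the sum of block lengths in the latest key version multiplied by the replication factor. The types below are simplified stand-ins, not the real OmKeyInfo/OmKeyLocationInfoGroup classes.

   ```java
   import java.util.Arrays;
   import java.util.List;

   public class BlockLengthDemo {

     // Sum the block lengths of the latest key version, then scale by the
     // replication factor, since each block is stored that many times
     // across datanodes.
     static long sumBlockLengths(List<Long> latestVersionBlockLengths,
         int replicationFactor) {
       long total = 0;
       for (long len : latestVersionBlockLengths) {
         total += len;
       }
       return total * replicationFactor;
     }

     public static void main(String[] args) {
       // Two 128-byte blocks with 3-way replication.
       assert sumBlockLengths(Arrays.asList(128L, 128L), 3) == 768L;
     }
   }
   ```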






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
bharatviswa504 merged pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435


   





[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500389769



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);

Review comment:
       Thanks for the explanation @bharatviswa504. I now see that *divergence 2* in the above example poses an issue in the event of an OM crash between steps 5 and 6: the byte usage update would be applied twice in the DB after OM restart. Volume byte usage updates will be removed from the open key requests and responses. Since this is really a larger problem with all requests/responses operating this way under HDDS-541, we can add the update once a solution is developed for all requests/responses.






[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r499579108



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);
+            subtractUsedBytes(volumeArgs, deleted.values());
+          }
+        }
+      }
+
+      omClientResponse = new OMOpenKeysDeleteResponse(omResponse.build(),
+          deletedOpenKeys, ozoneManager.isRatisEnabled(),
+          modifiedVolumes.values());
+
+      result = Result.SUCCESS;
+    } catch (IOException ex) {
+      result = Result.FAILURE;
+      exception = ex;
+      omClientResponse =
+          new OMKeyDeleteResponse(createErrorOMResponse(omResponse, exception));

Review comment:
       Will fix.






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
bharatviswa504 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r502165343



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##########
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+        .OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Base class for responses that need to move keys from an arbitrary table to
+ * the deleted table.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+      @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+    super(omResponse);
+    this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+    super(omResponse);
+    checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void addDeletionToBatch(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo) throws IOException {
+
+    addDeletionToBatch(omMetadataManager, batchOperation, fromTable, keyName,
+        omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void addDeletionToBatch(

Review comment:
       Previously we used updateID to detect replay of transactions; we no longer use updateID for that purpose.
   But from my understanding, updateID should have been set to the transactionIndex even before this change.
   
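   A simplified sketch of the replay check the comment refers to (hypothetical names, not the real OmKeyInfo/OmUtils API): a transaction was skipped when its Ratis log index was not greater than the updateID already stamped on the key:

```java
public class UpdateIdReplaySketch {

    // The updateID stamped on a key records the log index of the last
    // transaction that modified it.
    static boolean isReplay(long storedUpdateID, long trxnLogIndex) {
        return trxnLogIndex <= storedUpdateID;
    }

    // Returns the key's updateID after processing the transaction:
    // unchanged on replay, advanced to the new index otherwise.
    static long apply(long storedUpdateID, long trxnLogIndex) {
        if (isReplay(storedUpdateID, trxnLogIndex)) {
            return storedUpdateID;   // skip: already applied
        }
        return trxnLogIndex;         // apply and stamp the new updateID
    }

    public static void main(String[] args) {
        long updateID = 5L;
        updateID = apply(updateID, 6L);  // new transaction: applied
        updateID = apply(updateID, 6L);  // replayed transaction: skipped
        System.out.println(updateID);
    }
}
```

   The sketch only shows the comparison pattern; the actual OM replay handling has since been removed in favor of other mechanisms, as the comment notes.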






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
bharatviswa504 commented on pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#issuecomment-707853993


   Thank You @errose28 for the contribution and @avijayanhwx for the review




[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500389769



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);

Review comment:
       Thanks for the explanation @bharatviswa504. I now see that *divergence 2* in the above example poses an issue in the event of an OM crash happening between steps 5 and 6. This will cause the byte usage update to be applied twice in the DB after OM restart. Volume byte usage updates will be removed from the open key requests and responses. Since this is really a larger problem with all requests/responses operating in this way under HDDS-541, we can add the byte usage updates when a solution is developed for all requests/responses as part of HDDS-4308.






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
bharatviswa504 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r499686249



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);

Review comment:
       See the HDDS-2344 Jira (https://issues.apache.org/jira/browse/HDDS-2344). Here it is just the value being updated, so it might not trigger a ConcurrentModificationException, but the Jira can provide some context.
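   For context on that discussion, a plain-HashMap sketch (not Ozone's actual TableCache) of why mutating the returned volume-args reference already acts as a cache update:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheReferenceSketch {

    static class VolumeArgs {
        long bytesUsed;
        VolumeArgs(long bytesUsed) { this.bytesUsed = bytesUsed; }
    }

    // get() hands back a reference to the cached object, not a copy, so
    // decrementing through that reference changes what the cache holds;
    // a later put() of the same reference is effectively a no-op, and a
    // concurrent reader can observe the mutation before any flush.
    static long demo() {
        Map<String, VolumeArgs> cache = new HashMap<>();
        cache.put("vol1", new VolumeArgs(100L));

        VolumeArgs ref = cache.get("vol1");
        ref.bytesUsed -= 30L;               // mutates the cached object in place

        return cache.get("vol1").bytesUsed; // already reflects the change
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

   The class and field names here are invented for illustration; the point is only the reference-sharing behavior the Jira discusses.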






[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500391837



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {

Review comment:
       Volume byte usage updates will be removed from the open keys delete request and response classes. See [this comment](https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500389769).




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r501976342



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##########
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+        .OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Base class for responses that need to move keys from an arbitrary table to
+ * the deleted table.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+      @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+    super(omResponse);
+    this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+    super(omResponse);
+    checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void addDeletionToBatch(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo) throws IOException {
+
+    addDeletionToBatch(omMetadataManager, batchOperation, fromTable, keyName,
+        omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void addDeletionToBatch(

Review comment:
       This is actually related to a mistake I made in OMKeysDeleteResponse. The original implementation used one trxnLogIndex for all the keys. All other calls to this method use the updateID of the provided keyInfo as the trxnLogIndex. If the way I am doing it currently is acceptable (OMKeysDeleteResponse uses the updateID of each key as its trxnLogIndex instead of one value for all deleted keys), then I can remove the overload. If not, I can fix OMKeysDeleteResponse to call the overload, giving it behavior identical to its original implementation.






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
bharatviswa504 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r498972861



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()

Review comment:
       Instead of streams, use a for loop to compute this, as request execution is in the hot code path.
   I have seen a few recent JIRAs that replaced streams and improved performance.
   Can we use a good old for loop here?
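   The loop-based count the reviewer is asking for could look like the sketch below. This is illustrative only: `OpenKeyBucket` here is a minimal stand-in for the generated protobuf message, modeling just the `getKeysCount()` accessor that appears in the quoted diff.

```java
import java.util.Arrays;
import java.util.List;

public class OpenKeyCountSketch {

  // Minimal stand-in for the generated OpenKeyBucket protobuf message;
  // only getKeysCount() from the quoted diff is modeled here.
  static final class OpenKeyBucket {
    private final int keysCount;

    OpenKeyBucket(int keysCount) {
      this.keysCount = keysCount;
    }

    int getKeysCount() {
      return keysCount;
    }
  }

  // The reviewer's suggestion: a plain loop instead of
  // submittedOpenKeyBucket.stream().mapToLong(OpenKeyBucket::getKeysCount).sum(),
  // avoiding stream overhead on the request hot path.
  static long countSubmittedOpenKeys(List<OpenKeyBucket> buckets) {
    long numSubmittedOpenKeys = 0;
    for (OpenKeyBucket bucket : buckets) {
      numSubmittedOpenKeys += bucket.getKeysCount();
    }
    return numSubmittedOpenKeys;
  }

  public static void main(String[] args) {
    List<OpenKeyBucket> buckets = Arrays.asList(
        new OpenKeyBucket(3), new OpenKeyBucket(5));
    System.out.println(countSubmittedOpenKeys(buckets)); // prints 8
  }
}
```

   The same replacement would apply to the stream in `subtractUsedBytes` flagged later in this review.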

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {

Review comment:
       Looks like we need a volume lock here, as we are updating the bytesUsed.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);
+            subtractUsedBytes(volumeArgs, deleted.values());
+          }
+        }
+      }
+
+      omClientResponse = new OMOpenKeysDeleteResponse(omResponse.build(),
+          deletedOpenKeys, ozoneManager.isRatisEnabled(),
+          modifiedVolumes.values());
+
+      result = Result.SUCCESS;
+    } catch (IOException ex) {
+      result = Result.FAILURE;
+      exception = ex;
+      omClientResponse =
+          new OMKeyDeleteResponse(createErrorOMResponse(omResponse, exception));
+    } finally {
+      addResponseToDoubleBuffer(trxnLogIndex, omClientResponse,
+              omDoubleBufferHelper);
+    }
+
+    processResults(omMetrics, numSubmittedOpenKeys, deletedOpenKeys.size(),
+        deleteOpenKeysRequest, result);
+
+    return omClientResponse;
+  }
+
+  private void processResults(OMMetrics omMetrics, long numSubmittedOpenKeys,
+      long numDeletedOpenKeys,
+      OzoneManagerProtocolProtos.DeleteOpenKeysRequest request, Result result) {
+
+    switch (result) {
+    case SUCCESS:
+      LOG.debug("Deleted {} open keys out of {} submitted keys.",
+          numDeletedOpenKeys, numSubmittedOpenKeys);
+      break;
+    case FAILURE:
+      omMetrics.incNumOpenKeyDeleteRequestFails();
+      LOG.error("Failure occurred while trying to delete {} submitted open " +
+              "keys.", numSubmittedOpenKeys);
+      break;
+    default:
+      LOG.error("Unrecognized result for OMOpenKeysDeleteRequest: {}",
+          request);
+    }
+  }
+
+  private Map<String, OmKeyInfo> updateOpenKeyTableCache(
+      OzoneManager ozoneManager, long trxnLogIndex, OpenKeyBucket keysPerBucket)
+      throws IOException {
+
+    Map<String, OmKeyInfo> deletedKeys = new HashMap<>();
+
+    boolean acquiredLock = false;
+    String volumeName = keysPerBucket.getVolumeName();
+    String bucketName = keysPerBucket.getBucketName();
+    OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+    try {
+      acquiredLock = omMetadataManager.getLock().acquireWriteLock(BUCKET_LOCK,
+              volumeName, bucketName);
+
+      for (OpenKey key: keysPerBucket.getKeysList()) {
+        String fullKeyName = omMetadataManager.getOpenKey(volumeName,
+                bucketName, key.getName(), key.getClientID());
+
+        // If an open key is no longer present in the table, it was committed
+        // and should not be deleted.
+        OmKeyInfo omKeyInfo =
+            omMetadataManager.getOpenKeyTable().get(fullKeyName);
+        if (omKeyInfo != null) {
+          // Set the UpdateID to current transactionLogIndex
+          omKeyInfo.setUpdateID(trxnLogIndex, ozoneManager.isRatisEnabled());
+          deletedKeys.put(fullKeyName, omKeyInfo);
+
+          // Update table cache.
+          omMetadataManager.getOpenKeyTable().addCacheEntry(
+                  new CacheKey<>(fullKeyName),
+                  new CacheValue<>(Optional.absent(), trxnLogIndex));
+
+          ozoneManager.getMetrics().incNumOpenKeysDeleted();
+          LOG.debug("Open key {} deleted.", fullKeyName);
+
+          // No need to add cache entries to delete table. As delete table will
+          // be used by DeleteKeyService only, not used for any client response
+          // validation, so we don't need to add to cache.
+        } else {
+          LOG.debug("Key {} was not deleted, as it was not " +
+                  "found in the open key table.", fullKeyName);
+        }
+      }
+    } finally {
+      if (acquiredLock) {
+        omMetadataManager.getLock().releaseWriteLock(BUCKET_LOCK, volumeName,
+                bucketName);
+      }
+    }
+
+    return deletedKeys;
+  }
+
+  /**
+   * Subtracts all bytes used by the blocks pointed to by {@code keyInfos}
+   * from {@code volumeArgs}.
+   */
+  private void subtractUsedBytes(OmVolumeArgs volumeArgs,
+      Collection<OmKeyInfo> keyInfos) {
+
+    long quotaReleased = keyInfos.stream()

Review comment:
       Same here, avoid using a stream here.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);

Review comment:
       Also, getting the cached object here will cause issues.
   If the double buffer flush has not yet flushed this to the DB, and another thread uses the same volumeArgs reference and updates it, we will write an inconsistent state to the DB.
   
   So getVolumeInfo should use the Table#get API.
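   The aliasing hazard the reviewer describes can be shown with a minimal stand-in: handing out the cached reference means a caller's mutation becomes visible to everyone immediately, while reading a copy (analogous to Table#get deserializing a fresh object from the DB) isolates the mutation until an explicit write-back. `VolumeArgs` and `copy()` below are hypothetical stand-ins, not the real `OmVolumeArgs` API.

```java
import java.util.HashMap;
import java.util.Map;

public class CacheAliasSketch {

  // Minimal stand-in for OmVolumeArgs; only the bytesUsed counter is modeled.
  static final class VolumeArgs {
    long bytesUsed;

    VolumeArgs(long bytesUsed) {
      this.bytesUsed = bytesUsed;
    }

    VolumeArgs copy() {
      return new VolumeArgs(bytesUsed);
    }
  }

  public static void main(String[] args) {
    Map<String, VolumeArgs> cache = new HashMap<>();
    cache.put("vol1", new VolumeArgs(100L));

    // Hazard: mutating the shared cached reference changes the cache
    // before any double-buffer flush happens.
    VolumeArgs aliased = cache.get("vol1");
    aliased.bytesUsed -= 40L;
    System.out.println(cache.get("vol1").bytesUsed); // prints 60

    // Working on a copy leaves the shared state untouched until an
    // explicit write-back, which is what reading via Table#get gives you.
    VolumeArgs isolated = cache.get("vol1").copy();
    isolated.bytesUsed -= 40L;
    System.out.println(cache.get("vol1").bytesUsed); // still prints 60
  }
}
```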
   

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);
+            subtractUsedBytes(volumeArgs, deleted.values());
+          }
+        }
+      }
+
+      omClientResponse = new OMOpenKeysDeleteResponse(omResponse.build(),
+          deletedOpenKeys, ozoneManager.isRatisEnabled(),
+          modifiedVolumes.values());
+
+      result = Result.SUCCESS;
+    } catch (IOException ex) {
+      result = Result.FAILURE;
+      exception = ex;
+      omClientResponse =
+          new OMKeyDeleteResponse(createErrorOMResponse(omResponse, exception));

Review comment:
       OMKeyDeleteResponse -> OMOpenKeysDeleteResponse

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##########
@@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+        .OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Response for DeleteKey request.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+      @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+    super(omResponse);
+    this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+    super(omResponse);
+    checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void deleteFromTable(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo) throws IOException {
+
+    deleteFromTable(omMetadataManager, batchOperation, fromTable, keyName,
+        omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void deleteFromTable(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo, long trxnLogIndex) throws IOException {
+
+    // For OmResponse with failure, this should do nothing. This method is
+    // not called in failure scenario in OM code.
+    fromTable.deleteWithBatch(batchOperation, keyName);
+
+    // If Key is not empty add this to delete table.
+    if (!isKeyEmpty(omKeyInfo)) {
+      // If a deleted key is put in the table where a key with the same
+      // name already exists, then the old deleted key information would be
+      // lost. To avoid this, first check if a key with same name exists.
+      // deletedTable in OM Metadata stores <KeyName, RepeatedOMKeyInfo>.
+      // The RepeatedOmKeyInfo is the structure that allows us to store a
+      // list of OmKeyInfo that can be tied to same key name. For a keyName
+      // if RepeatedOMKeyInfo structure is null, we create a new instance,
+      // if it is not null, then we simply add to the list and store this
+      // instance in deletedTable.
+      RepeatedOmKeyInfo repeatedOmKeyInfo =
+          omMetadataManager.getDeletedTable().get(keyName);
+      repeatedOmKeyInfo = OmUtils.prepareKeyForDelete(
+          omKeyInfo, repeatedOmKeyInfo, trxnLogIndex,
+          isRatisEnabled);
+      omMetadataManager.getDeletedTable().putWithBatch(
+          batchOperation, keyName, repeatedOmKeyInfo);
+    }
+  }
+
+  protected void addVolumeArgsToBatch(OMMetadataManager metadataManager,
+      BatchOperation batch, OmVolumeArgs volumeArgs) throws IOException {
+
+    Table<String, OmVolumeArgs> volumeTable = metadataManager.getVolumeTable();
+    String volumeKey = metadataManager.getVolumeKey(volumeArgs.getVolume());
+    volumeTable.putWithBatch(batch, volumeKey, volumeArgs);
+  }
+
+  @Override
+  public abstract void addToDBBatch(OMMetadataManager omMetadataManager,
+        BatchOperation batchOperation) throws IOException;
+
+  /**
+   * Check if the key is empty or not. Key will be empty if it does not have
+   * blocks.
+   *
+   * @param keyInfo
+   * @return if empty true, else false.
+   */
+  private boolean isKeyEmpty(@Nullable OmKeyInfo keyInfo) {

Review comment:
       Looks some of the logic is common for OMKeyDeleteResponse and OMOpenKeysDeleteResponse like isKeyEmpty and deleteFromTable can be used from OMKeyDeleteResponse.
   Can we consolidate them and use this AbstractOMKeyDeleteResponse as base class for both of them
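
   A rough sketch of what that consolidation could look like, with simplified stand-in types rather than the real Ozone classes: the shared helpers live once in the abstract base, and the concrete responses differ only in which source table they operate on.

```java
import java.util.HashMap;
import java.util.Map;

public class ConsolidationSketch {

  // Simplified stand-in for OmKeyInfo: only the block count matters here.
  static class KeyInfo {
    final int blockCount;
    KeyInfo(int blockCount) { this.blockCount = blockCount; }
  }

  // Shared base class: owns isKeyEmpty and the "remove from source table,
  // record in deleted table unless empty" logic exactly once.
  abstract static class AbstractKeyDeleteResponse {
    protected boolean isKeyEmpty(KeyInfo key) {
      return key == null || key.blockCount == 0;
    }

    protected void moveToDeletedTable(Map<String, KeyInfo> fromTable,
        Map<String, KeyInfo> deletedTable, String keyName, KeyInfo key) {
      fromTable.remove(keyName);
      if (!isKeyEmpty(key)) {
        deletedTable.put(keyName, key);
      }
    }
  }

  // Both concrete responses reuse the base; one feeds it the key table,
  // the other the open key table.
  static class KeyDeleteResponse extends AbstractKeyDeleteResponse { }
  static class OpenKeysDeleteResponse extends AbstractKeyDeleteResponse { }
}
```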




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r501976342



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##########
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+        .OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Base class for responses that need to move keys from an arbitrary table to
+ * the deleted table.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+      @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+    super(omResponse);
+    this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+    super(omResponse);
+    checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void addDeletionToBatch(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo) throws IOException {
+
+    addDeletionToBatch(omMetadataManager, batchOperation, fromTable, keyName,
+        omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void addDeletionToBatch(

Review comment:
       This is actually related to a mistake I made in OMKeysDeleteResponse. The original implementation used one trxnLogIndex for all the keys, while all other calls to this method use the updateID of the provided keyInfo as the trxnLogIndex. If the way I am doing it currently (OMKeysDeleteResponse uses the updateID of each key as its trxnLogIndex instead of one value for all deleted keys) is acceptable, then I can remove the overload. If not, I can fix OMKeysDeleteResponse to call the overload, giving it behavior identical to its original implementation.

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##########
@@ -0,0 +1,143 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+        .OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Base class for responses that need to move keys from an arbitrary table to
+ * the deleted table.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+      @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+    super(omResponse);
+    this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+    super(omResponse);
+    checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void addDeletionToBatch(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo) throws IOException {
+
+    addDeletionToBatch(omMetadataManager, batchOperation, fromTable, keyName,
+        omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void addDeletionToBatch(

Review comment:
       Got it. A closer look at OMKeysDeleteRequest shows this was happening anyway. Instead of setting the key info's update ID to the trxnLogIndex and submitting the key info to the response, it was just passing the trxnLogIndex separately alongside the key info. I will update OMKeysDeleteRequest/Response to be consistent with the other request/responses in how they do this, and remove the overload of this method.
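
   A minimal sketch of the convention described above (stand-in types, not the real OmKeyInfo): the request stamps the transaction log index onto each key as its updateID, and the response reads it back, so no separate trxnLogIndex parameter or overload is needed.

```java
public class UpdateIdSketch {

  // Simplified stand-in for OmKeyInfo's updateID field.
  static class KeyInfo {
    private long updateID;
    void setUpdateID(long id) { this.updateID = id; }
    long getUpdateID() { return updateID; }
  }

  // Request side: stamp every key being deleted with the transaction log
  // index before handing it to the response.
  static void stampKeys(Iterable<KeyInfo> deletedKeys, long trxnLogIndex) {
    for (KeyInfo k : deletedKeys) {
      k.setUpdateID(trxnLogIndex);
    }
  }

  // Response side: recover the index from the key itself, so a single
  // deleteFromTable signature suffices for every caller.
  static long indexFor(KeyInfo key) {
    return key.getUpdateID();
  }
}
```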






[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r499585384



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {

Review comment:
       I think we are safe because the bytes-used value is stored in a thread-safe LongAdder internally. See [the original PR where this was introduced](https://github.com/apache/hadoop-ozone/pull/1296#discussion_r485570651). If there is still an issue with this approach, then most of the request classes will need to be modified after HDDS-4053. We should discuss further, as this is really an issue with the design already introduced in master for HDDS-4053 rather than with this PR.
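
   A quick standalone check of the LongAdder claim (toy numbers, not Ozone code): many threads decrementing one LongAdder concurrently lose no updates, since each add is atomic and sum() after all threads join is exact.

```java
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {

  // Start from `start` bytes used and have `threadCount` threads each apply
  // `perThread` single-byte decrements concurrently, as key deletes would.
  static long runDecrements(long start, int threadCount, int perThread)
      throws InterruptedException {
    LongAdder bytesUsed = new LongAdder();
    bytesUsed.add(start);

    Thread[] threads = new Thread[threadCount];
    for (int i = 0; i < threadCount; i++) {
      threads[i] = new Thread(() -> {
        for (int j = 0; j < perThread; j++) {
          bytesUsed.add(-1L);  // each add is atomic; no updates are lost
        }
      });
      threads[i].start();
    }
    for (Thread t : threads) {
      t.join();
    }
    return bytesUsed.sum();
  }

  public static void main(String[] args) throws InterruptedException {
    // 1_000_000 - 8 * 10_000 = 920_000
    System.out.println(runDecrements(1_000_000L, 8, 10_000));
  }
}
```

   Note this only shows the counter itself is safe under concurrent updates; it says nothing about the counter staying consistent with the key tables, which is the interleaving question below.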

##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);

Review comment:
       Just to clarify, is this the execution you are talking about?
   
   1. Request1 deletes key1 from volume1 in cache.
   2. Request2 deletes key2 from volume1 in cache.
   3. Request1 sets cached VolumeArgs object volArgs.bytesUsed -= key1.bytesUsed.
       - *divergence 1*: The cache shows key1 and key2 as deleted, but cache byte usage only reflects key1's deletion.
   4. Request2 sets cached VolumeArgs object volArgs.bytesUsed -= key2.bytesUsed.
       - At this point, byte usage in the cache is consistent with the keys it shows as deleted.
   5. Response1 is processed, committing volArgs and the deletion of key1 to the DB.
       - *divergence 2*: the DB shows only key1 deleted, but volume byte usage has been set as if both key1 and key2 were deleted.
   6. Response2 is processed, committing volArgs to the DB again, and committing the deletion of key2 to the DB.
       - Now the keys deleted and bytes used align in the DB.
   
   IIRC the entire volume table is stored in memory and only persisted to the DB to save state; reads of volume metadata only happen from the in-memory cache. In this case, *divergence 2* will never be detected by callers, since it only exists at the DB level. *divergence 1* may exist briefly and be detected by callers. Again, this is really an issue with all requests modified in HDDS-4053, not just this PR. We should discuss whether the slight inconsistency warrants taking a whole-volume lock on every request that modifies byte usage.
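The interleaving above can be reproduced with a toy simulation. All types and names below are hypothetical stand-ins, not the real OmVolumeArgs, cache, or double-buffer code; the point is only that two requests mutating one shared cached object lets request2's decrement leak into the state committed for request1, so replaying request2 after a crash applies it twice:

```java
import java.util.HashMap;
import java.util.Map;

public class SharedCacheHazard {
    // Hypothetical stand-in for the cached OmVolumeArgs object.
    static class VolumeArgs {
        long bytesUsed;
        VolumeArgs(long bytesUsed) { this.bytesUsed = bytesUsed; }
    }

    // Returns {bytesCommittedByResponse1, bytesAfterReplayingRequest2}.
    static long[] simulate() {
        VolumeArgs cached = new VolumeArgs(300L); // key1=100, key2=200 still counted
        Map<String, Long> db = new HashMap<>();
        db.put("volume1", 300L);

        cached.bytesUsed -= 100; // step 3: request1 deletes key1
        cached.bytesUsed -= 200; // step 4: request2 deletes key2 (same object!)

        // Step 5: response1 flushes the shared object to the DB;
        // key2's decrement leaks in even though key2 is not yet committed.
        long committedByResponse1 = cached.bytesUsed; // 0, not 200
        db.put("volume1", committedByResponse1);

        // If the OM crashes here, the DB shows only key1 deleted but
        // bytesUsed already reflects both deletions. Replaying request2
        // on restart subtracts key2's size a second time.
        long afterReplay = db.get("volume1") - 200;
        return new long[] { committedByResponse1, afterReplay };
    }

    public static void main(String[] args) {
        long[] r = simulate();
        System.out.println(r[0] + " " + r[1]);
    }
}
```

Running the simulation shows the double-applied decrement (byte usage goes negative), which is the divergence-2 hazard discussed above.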




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: ozone-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: ozone-issues-help@hadoop.apache.org


[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500393541



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##########
@@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+        .OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Response for DeleteKey request.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+      @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+    super(omResponse);
+    this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+    super(omResponse);
+    checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void deleteFromTable(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo) throws IOException {
+
+    deleteFromTable(omMetadataManager, batchOperation, fromTable, keyName,
+        omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void deleteFromTable(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo, long trxnLogIndex) throws IOException {
+
+    // For OmResponse with failure, this should do nothing. This method is
+    // not called in failure scenario in OM code.
+    fromTable.deleteWithBatch(batchOperation, keyName);
+
+    // If Key is not empty add this to delete table.
+    if (!isKeyEmpty(omKeyInfo)) {
+      // If a deleted key is put in the table where a key with the same
+      // name already exists, then the old deleted key information would be
+      // lost. To avoid this, first check if a key with same name exists.
+      // deletedTable in OM Metadata stores <KeyName, RepeatedOMKeyInfo>.
+      // The RepeatedOmKeyInfo is the structure that allows us to store a
+      // list of OmKeyInfo that can be tied to same key name. For a keyName
+      // if RepeatedOMKeyInfo structure is null, we create a new instance,
+      // if it is not null, then we simply add to the list and store this
+      // instance in deletedTable.
+      RepeatedOmKeyInfo repeatedOmKeyInfo =
+          omMetadataManager.getDeletedTable().get(keyName);
+      repeatedOmKeyInfo = OmUtils.prepareKeyForDelete(
+          omKeyInfo, repeatedOmKeyInfo, trxnLogIndex,
+          isRatisEnabled);
+      omMetadataManager.getDeletedTable().putWithBatch(
+          batchOperation, keyName, repeatedOmKeyInfo);
+    }
+  }
+
+  protected void addVolumeArgsToBatch(OMMetadataManager metadataManager,
+      BatchOperation batch, OmVolumeArgs volumeArgs) throws IOException {
+
+    Table<String, OmVolumeArgs> volumeTable = metadataManager.getVolumeTable();
+    String volumeKey = metadataManager.getVolumeKey(volumeArgs.getVolume());
+    volumeTable.putWithBatch(batch, volumeKey, volumeArgs);
+  }
+
+  @Override
+  public abstract void addToDBBatch(OMMetadataManager omMetadataManager,
+        BatchOperation batchOperation) throws IOException;
+
+  /**
+   * Check if the key is empty or not. Key will be empty if it does not have
+   * blocks.
+   *
+   * @param keyInfo
+   * @return if empty true, else false.
+   */
+  private boolean isKeyEmpty(@Nullable OmKeyInfo keyInfo) {

Review comment:
       Looks like development on the key(s) delete request and response classes has taken a break. I have refactored them to use these shared methods now.
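The deletedTable read-modify-write described in this hunk boils down to an append-on-collision map update: if an entry already exists for the key name, append to its list instead of overwriting, so earlier deleted-key info is never lost. A minimal sketch with hypothetical stand-in types (not the real OmKeyInfo/RepeatedOmKeyInfo classes):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DeletedTableMerge {
    // Hypothetical stand-in for RepeatedOmKeyInfo: holds every deleted
    // version that shared one key name.
    static class RepeatedKeyInfo {
        final List<Long> updateIds = new ArrayList<>();
    }

    // Hypothetical stand-in for the deletedTable.
    static final Map<String, RepeatedKeyInfo> deletedTable = new HashMap<>();

    // Mirrors the quoted logic: fetch the existing entry (create one if
    // absent), append the newly deleted version, and store it back.
    // Returns how many versions are now recorded under this key name.
    static int markDeleted(String keyName, long updateId) {
        RepeatedKeyInfo repeated =
            deletedTable.computeIfAbsent(keyName, k -> new RepeatedKeyInfo());
        repeated.updateIds.add(updateId);
        return repeated.updateIds.size();
    }

    public static void main(String[] args) {
        System.out.println(markDeleted("/vol/bucket/key", 1L)); // first delete
        System.out.println(markDeleted("/vol/bucket/key", 2L)); // same name again
    }
}
```

Deleting the same key name twice yields two recorded versions rather than one overwritten entry, which is exactly why the quoted code checks the deletedTable before putting.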






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
bharatviswa504 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r499683786



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);

Review comment:
       Not exactly. What I mean is that by the time we process request1, we have added the same object to the double buffer; if another thread processing request2 updates that object before the flush, request2's change can leak into the DB state committed for request1 (technically that update should only happen after the response has been added to the double buffer).
   
   The cache is for holding in-flight updates that are not yet committed to the DB; I see no issue with that, it is by design.
   
   Divergence 1 should not exist if volume locks are held.
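One way to rule out the shared-reference problem under a volume lock is copy-on-write: each request publishes a fresh immutable copy to the cache and hands its own snapshot to the response, so a later request cannot mutate what this request's response flushes to the DB. This is only a sketch of the idea; the lock and args types below are hypothetical stand-ins, not the actual OzoneManagerLock/OmVolumeArgs API:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockedVolumeUpdate {
    // Immutable stand-in for OmVolumeArgs; every update makes a new copy.
    static class VolumeArgs {
        final long bytesUsed;
        VolumeArgs(long bytesUsed) { this.bytesUsed = bytesUsed; }
    }

    private final ReentrantLock volumeLock = new ReentrantLock();
    private VolumeArgs cached = new VolumeArgs(300L);

    // Under the volume lock: compute the new args as a fresh object,
    // publish it to the cache, and return that snapshot to the caller.
    // No other request can later mutate the snapshot this response commits.
    VolumeArgs applyDeletion(long deletedBytes) {
        volumeLock.lock();
        try {
            VolumeArgs updated = new VolumeArgs(cached.bytesUsed - deletedBytes);
            cached = updated;
            return updated;
        } finally {
            volumeLock.unlock();
        }
    }

    public static void main(String[] args) {
        LockedVolumeUpdate v = new LockedVolumeUpdate();
        System.out.println(v.applyDeletion(100L).bytesUsed);
        System.out.println(v.applyDeletion(200L).bytesUsed);
    }
}
```

With this shape, each response commits exactly the state that existed when its request ran, at the cost of taking the volume lock on every byte-usage update, which is the trade-off being debated above.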






[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r499578543



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/AbstractOMKeyDeleteResponse.java
##########
@@ -0,0 +1,150 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.response.key;
+
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.helpers.RepeatedOmKeyInfo;
+import org.apache.hadoop.ozone.om.response.CleanupTableInfo;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+        .OMResponse;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+
+import java.io.IOException;
+import javax.annotation.Nullable;
+import javax.annotation.Nonnull;
+
+import static org.apache.hadoop.ozone.om.OmMetadataManagerImpl.DELETED_TABLE;
+
+/**
+ * Response for DeleteKey request.
+ */
+@CleanupTableInfo(cleanupTables = {DELETED_TABLE})
+public abstract class AbstractOMKeyDeleteResponse extends OMClientResponse {
+
+  private boolean isRatisEnabled;
+
+  public AbstractOMKeyDeleteResponse(
+      @Nonnull OMResponse omResponse, boolean isRatisEnabled) {
+
+    super(omResponse);
+    this.isRatisEnabled = isRatisEnabled;
+  }
+
+  /**
+   * For when the request is not successful.
+   * For a successful request, the other constructor should be used.
+   */
+  public AbstractOMKeyDeleteResponse(@Nonnull OMResponse omResponse) {
+    super(omResponse);
+    checkStatusNotOK();
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   * The log transaction index used will be retrieved by calling
+   * {@link OmKeyInfo#getUpdateID} on {@code omKeyInfo}.
+   */
+  protected void deleteFromTable(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo) throws IOException {
+
+    deleteFromTable(omMetadataManager, batchOperation, fromTable, keyName,
+        omKeyInfo, omKeyInfo.getUpdateID());
+  }
+
+  /**
+   * Adds the operation of deleting the {@code keyName omKeyInfo} pair from
+   * {@code fromTable} to the batch operation {@code batchOperation}. The
+   * batch operation is not committed, so no changes are persisted to disk.
+   */
+  protected void deleteFromTable(
+      OMMetadataManager omMetadataManager,
+      BatchOperation batchOperation,
+      Table<String, ?> fromTable,
+      String keyName,
+      OmKeyInfo omKeyInfo, long trxnLogIndex) throws IOException {
+
+    // For OmResponse with failure, this should do nothing. This method is
+    // not called in failure scenario in OM code.
+    fromTable.deleteWithBatch(batchOperation, keyName);
+
+    // If Key is not empty add this to delete table.
+    if (!isKeyEmpty(omKeyInfo)) {
+      // If a deleted key is put in the table where a key with the same
+      // name already exists, then the old deleted key information would be
+      // lost. To avoid this, first check if a key with same name exists.
+      // deletedTable in OM Metadata stores <KeyName, RepeatedOMKeyInfo>.
+      // The RepeatedOmKeyInfo is the structure that allows us to store a
+      // list of OmKeyInfo that can be tied to same key name. For a keyName
+      // if RepeatedOMKeyInfo structure is null, we create a new instance,
+      // if it is not null, then we simply add to the list and store this
+      // instance in deletedTable.
+      RepeatedOmKeyInfo repeatedOmKeyInfo =
+          omMetadataManager.getDeletedTable().get(keyName);
+      repeatedOmKeyInfo = OmUtils.prepareKeyForDelete(
+          omKeyInfo, repeatedOmKeyInfo, trxnLogIndex,
+          isRatisEnabled);
+      omMetadataManager.getDeletedTable().putWithBatch(
+          batchOperation, keyName, repeatedOmKeyInfo);
+    }
+  }
+
+  protected void addVolumeArgsToBatch(OMMetadataManager metadataManager,
+      BatchOperation batch, OmVolumeArgs volumeArgs) throws IOException {
+
+    Table<String, OmVolumeArgs> volumeTable = metadataManager.getVolumeTable();
+    String volumeKey = metadataManager.getVolumeKey(volumeArgs.getVolume());
+    volumeTable.putWithBatch(batch, volumeKey, volumeArgs);
+  }
+
+  @Override
+  public abstract void addToDBBatch(OMMetadataManager omMetadataManager,
+        BatchOperation batchOperation) throws IOException;
+
+  /**
+   * Check if the key is empty or not. Key will be empty if it does not have
+   * blocks.
+   *
+   * @param keyInfo
+   * @return if empty true, else false.
+   */
+  private boolean isKeyEmpty(@Nullable OmKeyInfo keyInfo) {

Review comment:
       Yes, the idea of creating this abstract class was to eventually consolidate the duplicate code between OMOpenKeysDeleteResponse, OMKeyDeleteResponse, and OMKeysDeleteResponse. I had originally refactored the other classes as well to use this code, but since HDDS-451 (quota support) is moving along at a brisk pace, I could not keep up with the merge conflicts as the other response classes kept changing, and decided it was better to do this in a later PR.






[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

Posted by GitBox <gi...@apache.org>.
errose28 commented on a change in pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435#discussion_r500389769



##########
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMOpenKeysDeleteRequest.java
##########
@@ -0,0 +1,233 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * <p>
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * <p>
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.response.key;
+
+import com.google.common.base.Optional;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmVolumeArgs;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.key.OMKeyRequest;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKeyBucket;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OpenKey;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.Collection;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+
+import static org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+
+/**
+ * Handles requests to move open keys from the open key table to the delete
+ * table. Modifies the open key table cache only, and no underlying databases.
+ * The delete table cache does not need to be modified since it is not used
+ * for client response validation.
+ */
+public class OMOpenKeysDeleteRequest extends OMKeyRequest {
+
+  private static final Logger LOG =
+          LoggerFactory.getLogger(OMOpenKeysDeleteRequest.class);
+
+  public OMOpenKeysDeleteRequest(OMRequest omRequest) {
+    super(omRequest);
+  }
+
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+      long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+    OMMetrics omMetrics = ozoneManager.getMetrics();
+    omMetrics.incNumOpenKeyDeleteRequests();
+
+    OzoneManagerProtocolProtos.DeleteOpenKeysRequest deleteOpenKeysRequest =
+            getOmRequest().getDeleteOpenKeysRequest();
+
+    List<OpenKeyBucket> submittedOpenKeyBucket =
+            deleteOpenKeysRequest.getOpenKeysPerBucketList();
+
+    long numSubmittedOpenKeys = submittedOpenKeyBucket.stream()
+        .mapToLong(OpenKeyBucket::getKeysCount).sum();
+
+    LOG.debug("{} open keys submitted for deletion.", numSubmittedOpenKeys);
+    omMetrics.incNumOpenKeysSubmittedForDeletion(numSubmittedOpenKeys);
+
+    OzoneManagerProtocolProtos.OMResponse.Builder omResponse =
+            OmResponseUtil.getOMResponseBuilder(getOmRequest());
+
+    IOException exception = null;
+    OMClientResponse omClientResponse = null;
+    Result result = null;
+    Map<String, OmKeyInfo> deletedOpenKeys = new HashMap<>();
+
+    try {
+      // Open keys are grouped by bucket, but there may be multiple buckets
+      // per volume. This maps volume name to volume args to track
+      // all volume updates for this request.
+      Map<String, OmVolumeArgs> modifiedVolumes = new HashMap<>();
+      OMMetadataManager metadataManager = ozoneManager.getMetadataManager();
+
+      for (OpenKeyBucket openKeyBucket: submittedOpenKeyBucket) {
+        // For each bucket where keys will be deleted from,
+        // get its bucket lock and update the cache accordingly.
+        Map<String, OmKeyInfo> deleted = updateOpenKeyTableCache(ozoneManager,
+            trxnLogIndex, openKeyBucket);
+
+        deletedOpenKeys.putAll(deleted);
+
+        // If open keys were deleted from this bucket and its volume still
+        // exists, update the volume's byte usage in the cache.
+        if (!deleted.isEmpty()) {
+          String volumeName = openKeyBucket.getVolumeName();
+          // Returns volume args from the cache if the volume is present,
+          // null otherwise.
+          OmVolumeArgs volumeArgs = getVolumeInfo(metadataManager, volumeName);
+
+          // If this volume still exists, decrement bytes used based on open
+          // keys deleted.
+          // The volume args object being updated is a reference from the
+          // cache, so this serves as a cache update.
+          if (volumeArgs != null) {
+            // If we already encountered the volume, it was a reference to
+            // the same object from the cache, so this will update it.
+            modifiedVolumes.put(volumeName, volumeArgs);

Review comment:
       Thanks for the explanation @bharatviswa504. I now see that *divergence 2* in the above example poses an issue in the event of an OM crash happening between steps 5 and 6. This will cause the byte usage update to be applied twice in the DB after OM restart. Volume byte usage updates will be removed from the open key requests and responses. Since this is really a larger problem with all requests/responses operating in this way under HDDS-541, we can fix the issue when a solution is developed for all requests/responses.



