Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/02/22 09:45:13 UTC

[GitHub] [hadoop] jojochuang commented on a change in pull request #3941: HDFS-15382. Split one FsDatasetImpl lock to block pool grain locks.

jojochuang commented on a change in pull request #3941:
URL: https://github.com/apache/hadoop/pull/3941#discussion_r811729617



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
##########
@@ -3230,7 +3235,9 @@ void transferReplicaForPipelineRecovery(final ExtendedBlock b,
     final BlockConstructionStage stage;
 
     //get replica information
-    try(AutoCloseableLock lock = data.acquireDatasetReadLock()) {
+
+    try(AutoCloseableLock lock = dataSetLockManager.writeLock(

Review comment:
      This was a read lock before. Any idea why it is made a write lock now?
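
      If only read access is needed here, the read-lock form of the new API
      would look roughly like this (a sketch; it assumes dataSetLockManager
      exposes readLock(LockLevel, String) mirroring writeLock, the same
      signature ReplicaMap's lockManager uses below):

          // Sketch: read-lock variant of the new block-pool grain lock.
          try (AutoCloseDataSetLock lock = dataSetLockManager.readLock(
              LockLevel.BLOCK_POOl, b.getBlockPoolId())) {
            // get replica information under the block-pool read lock
          }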

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
##########
@@ -504,15 +503,13 @@ private void createWorkPlan(NodePlan plan) throws DiskBalancerException {
     Map<String, String> storageIDToVolBasePathMap = new HashMap<>();
     FsDatasetSpi.FsVolumeReferences references;
     try {
-      try(AutoCloseableLock lock = this.dataset.acquireDatasetReadLock()) {
-        references = this.dataset.getFsVolumeReferences();
-        for (int ndx = 0; ndx < references.size(); ndx++) {
-          FsVolumeSpi vol = references.get(ndx);
-          storageIDToVolBasePathMap.put(vol.getStorageID(),
-              vol.getBaseURI().getPath());
-        }
-        references.close();
+      references = this.dataset.getFsVolumeReferences();
+      for (int ndx = 0; ndx < references.size(); ndx++) {
+        FsVolumeSpi vol = references.get(ndx);
+        storageIDToVolBasePathMap.put(vol.getStorageID(),
+            vol.getBaseURI().getPath());
       }
+      references.close();

Review comment:
      It would be better to instantiate the references object in a
      try-with-resources block to ensure it is closed properly even when an
      exception is thrown.
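
      Something like the following sketch (assuming FsVolumeReferences
      implements Closeable, which try-with-resources requires):

          try (FsDatasetSpi.FsVolumeReferences references =
                   this.dataset.getFsVolumeReferences()) {
            // Same loop as before; the references are now released even
            // if getStorageID()/getBaseURI() throws mid-loop.
            for (int ndx = 0; ndx < references.size(); ndx++) {
              FsVolumeSpi vol = references.get(ndx);
              storageIDToVolBasePathMap.put(vol.getStorageID(),
                  vol.getBaseURI().getPath());
            }
          }

      The standalone references declaration above the try can then be
      dropped.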

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java
##########
@@ -117,7 +121,7 @@ ReplicaInfo get(String bpid, long blockId) {
   ReplicaInfo add(String bpid, ReplicaInfo replicaInfo) {
     checkBlockPool(bpid);
     checkBlock(replicaInfo);
-    try (AutoCloseableLock l = writeLock.acquire()) {
+    try (AutoCloseDataSetLock l = lockManager.readLock(LockLevel.BLOCK_POOl, bpid)) {

Review comment:
      Why is a read lock used here? LightWeightResizableGSet is not
      thread-safe, and add() mutates it.
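
      If the mutation needs exclusive access, the write-lock form would be
      (a sketch; it assumes lockManager exposes writeLock(LockLevel, String)
      just like dataSetLockManager in DataNode.java above):

          // Sketch: exclusive per-block-pool lock around the GSet mutation.
          try (AutoCloseDataSetLock l = lockManager.writeLock(
              LockLevel.BLOCK_POOl, bpid)) {
            // unchanged body: insert replicaInfo into the per-pool GSet
          }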

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
##########
@@ -602,7 +563,7 @@ public void removeVolumes(
         new ArrayList<>(storageLocsToRemove);
     Map<String, List<ReplicaInfo>> blkToInvalidate = new HashMap<>();
     List<String> storageToRemove = new ArrayList<>();
-    try (AutoCloseableLock lock = datasetWriteLock.acquire()) {
+    synchronized (this) {

Review comment:
      How about making the entire method synchronized instead?
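
      That is, roughly (a sketch; the parameter list is elided here, and
      note this widens the critical section to the whole method body):

          // Hypothetical method-level form of the same intrinsic lock.
          public synchronized void removeVolumes(
              /* same parameters as before */) {
            // ... body as-is, minus the inner synchronized (this) block
          }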




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org