Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/11/01 02:34:12 UTC

[GitHub] [hadoop] ZanderXu commented on a diff in pull request #5033: HDFS-16804. AddVolume contains a race condition with shutdown block pool

ZanderXu commented on code in PR #5033:
URL: https://github.com/apache/hadoop/pull/5033#discussion_r1010020543


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/ReplicaMap.java:
##########
@@ -166,25 +167,24 @@ void addAll(ReplicaMap other) {
   /**
    * Merge all entries from the given replica map into the local replica map.
    */
-  void mergeAll(ReplicaMap other) {
+  void mergeAll(ReplicaMap other) throws IOException {
     Set<String> bplist = other.map.keySet();
     for (String bp : bplist) {
       checkBlockPool(bp);
       try (AutoCloseDataSetLock l = lockManager.writeLock(LockLevel.BLOCK_POOl, bp)) {
         LightWeightResizableGSet<Block, ReplicaInfo> replicaInfos = other.map.get(bp);
         LightWeightResizableGSet<Block, ReplicaInfo> curSet = map.get(bp);
+        if (curSet == null) {
+          // Can't find the block pool id in the replicaMap. Maybe it has been removed.

Review Comment:
   @DaveTeng0 Thanks for your review.
   
   > is it possible we can't find block pool id from the map, but the block pool is not removed yet?
   
   I think that's impossible, but if you find such a case, please share it.
   
   1. The `mergeAll` method is only called from the `activateVolume` method, which holds a `synchronized` lock (see the sketch below).
   2. The block pool is only removed from the global `ReplicaMap` by the `shutdownBlockPool` method, which also runs under a `synchronized` lock that was added in this PR.
   3. The `addBlockPool` method always initializes an empty `LightWeightResizableGSet` in the global `ReplicaMap` for the block pool.
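   
   To make the reasoning above concrete, here is a minimal, self-contained sketch of the three points. It is not the real code: `BlockPoolMapSketch`, the plain `HashMap`/`HashSet` of `Long` block ids, and the `synchronized` methods are stand-ins for `FsDatasetImpl`/`ReplicaMap`, `LightWeightResizableGSet<Block, ReplicaInfo>`, and the dataset lock manager; the `IOException` thrown on a missing pool is an assumption consistent with the new `throws IOException` signature in the diff.
   
   ```java
   import java.io.IOException;
   import java.util.HashMap;
   import java.util.HashSet;
   import java.util.Map;
   import java.util.Set;
   
   /**
    * Simplified model of the interaction described above. Names and types are
    * illustrative only; the real code uses ReplicaMap, LightWeightResizableGSet
    * and the DataNode's dataset lock manager.
    */
   public class BlockPoolMapSketch {
     // Stands in for the global ReplicaMap: block pool id -> replicas.
     private final Map<String, Set<Long>> map = new HashMap<>();
   
     // Point 3: addBlockPool always seeds an empty set for the pool.
     synchronized void addBlockPool(String bpid) {
       map.computeIfAbsent(bpid, k -> new HashSet<>());
     }
   
     // Point 1: mergeAll is only reached from activateVolume, which holds the
     // same lock, so it cannot interleave with shutdownBlockPool.
     synchronized void activateVolume(Map<String, Set<Long>> volumeReplicas)
         throws IOException {
       mergeAll(volumeReplicas);
     }
   
     // Point 2: the pool is only removed here, also under the lock (the
     // synchronized keyword models the lock added by the PR).
     synchronized void shutdownBlockPool(String bpid) {
       map.remove(bpid);
     }
   
     private void mergeAll(Map<String, Set<Long>> other) throws IOException {
       for (Map.Entry<String, Set<Long>> e : other.entrySet()) {
         Set<Long> curSet = map.get(e.getKey());
         if (curSet == null) {
           // Mirrors the branch in the diff: the pool id is missing, which can
           // only mean it was shut down before this volume was activated.
           throw new IOException("Block pool " + e.getKey() + " has been removed.");
         }
         curSet.addAll(e.getValue());
       }
     }
   }
   ```
   
   Because `activateVolume` and `shutdownBlockPool` serialize on the same lock, `mergeAll` can only observe a missing pool if the shutdown completed before the volume activation started, which is why the `curSet == null` branch can safely treat the pool as removed.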



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

