Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/10/28 15:47:22 UTC

[GitHub] [hadoop] tomscut commented on pull request #4945: HDFS-16785. DataNode hold BP write lock to scan disk

tomscut commented on PR #4945:
URL: https://github.com/apache/hadoop/pull/4945#issuecomment-1295157989

   > ```
   > final FsVolumeImpl fsVolume =
   >     createFsVolume(sd.getStorageUuid(), sd, location);
   > // no need to add lock
   > final ReplicaMap tempVolumeMap = new ReplicaMap();
   > ArrayList<IOException> exceptions = Lists.newArrayList();
   >
   > for (final NamespaceInfo nsInfo : nsInfos) {
   >   String bpid = nsInfo.getBlockPoolID();
   >   try (AutoCloseDataSetLock l = lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) {
   >     fsVolume.addBlockPool(bpid, this.conf, this.timer);
   >     fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker);
   >   } catch (IOException e) {
   >     LOG.warn("Caught exception when adding " + fsVolume +
   >         ". Will throw later.", e);
   >     exceptions.add(e);
   >   }
   > }
   > ```
   > 
   > The `fsVolume` here is a local temporary variable that has not yet been added to `volumes`, and the add/remove block pool operations only touch volumes already in `volumes`, so there is no conflict. Therefore the `BlockPoolSlice` lock is not needed here.
   > 
   > @Hexiaoqiao Sir, could you check it again?
   
   I agree with @ZanderXu here. +1 from my side.
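
   For illustration only, the thread-confinement argument quoted above can be sketched as follows. This is not Hadoop code: the class and method names (`TempMapSketch`, `scanDisk`, `publish`) are hypothetical stand-ins for `FsDatasetImpl`'s temporary replica map and its later publication into the shared `volumes` structure.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Sketch of the thread-confinement argument: a map built into a local
 * temporary needs no lock while it is being filled, because no other
 * thread can reach it; only publication into the shared map locks.
 */
public class TempMapSketch {
    private final Map<String, String> volumes = new HashMap<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /** Slow per-block-pool scan into a thread-confined temporary map. */
    static Map<String, String> scanDisk(String[] bpids) {
        Map<String, String> tempVolumeMap = new HashMap<>(); // local: lock-free
        for (String bpid : bpids) {
            tempVolumeMap.put(bpid, "replicas-of-" + bpid); // stands in for the disk scan
        }
        return tempVolumeMap;
    }

    /** Publishing the result is the only step that needs the write lock. */
    void publish(Map<String, String> tempVolumeMap) {
        lock.writeLock().lock();
        try {
            volumes.putAll(tempVolumeMap);
        } finally {
            lock.writeLock().unlock();
        }
    }

    int volumeCount() {
        lock.readLock().lock();
        try {
            return volumes.size();
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        TempMapSketch ds = new TempMapSketch();
        ds.publish(scanDisk(new String[] {"BP-1", "BP-2"}));
        System.out.println(ds.volumeCount()); // prints 2
    }
}
```

   The point is that holding the `BLOCK_POOl` write lock during the slow scan would only serialize work on an object nobody else can see yet; taking it at publication time is sufficient.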


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

