Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/10/10 12:44:27 UTC

[GitHub] [hadoop] Hexiaoqiao commented on pull request #4945: HDFS-16785. DataNode hold BP write lock to scan disk

Hexiaoqiao commented on PR #4945:
URL: https://github.com/apache/hadoop/pull/4945#issuecomment-1273259888

   I totally agree that we should not hold the lock across IO operations, especially a scan of the whole disk; that would be a terrible disaster even during a volume refresh. Of course this does not apply while the DataNode instance is restarting.
   Back to this case: I think the point is that we should hold the block pool lock (probably the write lock here) only while getting/setting the `BlockPoolSlice`, rather than taking one coarse-grained lock.
   So should we split the following segment and hold the lock only for the `BlockPoolSlice` operations, leaving the other logic outside any lock? I think that would be acceptable, since any conflicts or other exceptions would affect only the one volume that is being added.
   ```
         try (AutoCloseDataSetLock l = lockManager.writeLock(LockLevel.BLOCK_POOl, bpid)) {
           fsVolume.addBlockPool(bpid, this.conf, this.timer);
           fsVolume.getVolumeMap(bpid, tempVolumeMap, ramDiskReplicaTracker);
         }
   ```
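   To illustrate the split being proposed, here is a minimal self-contained sketch (not the actual Hadoop `FsDatasetImpl` API; `VolumeAddSketch`, `scanVolume`, and `replicaMap` are hypothetical stand-ins): the expensive disk scan runs with no lock held, and the block-pool write lock is taken only to publish the result into the shared map.
   ```java
   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;
   import java.util.concurrent.locks.ReentrantReadWriteLock;

   // Simplified sketch of the lock-splitting idea: scan outside the lock,
   // install the scan result under the block-pool write lock.
   class VolumeAddSketch {
       private final ReentrantReadWriteLock bpLock = new ReentrantReadWriteLock();
       private final Map<String, List<String>> replicaMap = new HashMap<>();

       // Hypothetical stand-in for the expensive per-volume disk scan
       // (in Hadoop this is roughly fsVolume.getVolumeMap(...)).
       private List<String> scanVolume(String bpid) {
           List<String> replicas = new ArrayList<>();
           replicas.add(bpid + "-blk_1");
           replicas.add(bpid + "-blk_2");
           return replicas;
       }

       public void addBlockPool(String bpid) {
           // 1. Do the IO-heavy scan with no lock held, so other
           //    operations on the block pool are not blocked.
           List<String> scanned = scanVolume(bpid);
           // 2. Hold the write lock only long enough to set the result.
           bpLock.writeLock().lock();
           try {
               replicaMap.put(bpid, scanned);
           } finally {
               bpLock.writeLock().unlock();
           }
       }

       public int replicaCount(String bpid) {
           bpLock.readLock().lock();
           try {
               List<String> r = replicaMap.get(bpid);
               return r == null ? 0 : r.size();
           } finally {
               bpLock.readLock().unlock();
           }
       }
   }
   ```
   If the scan fails, only the volume being added is affected, and the shared map was never touched, which is why the narrower lock scope should be safe here.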
   WDYT? cc @MingXiangLi @ZanderXu 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: common-issues-help@hadoop.apache.org