Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/12/04 13:59:37 UTC

[GitHub] [hadoop] Hexiaoqiao commented on pull request #5170: HDFS-16855. Remove the redundant write lock in addBlockPool.

Hexiaoqiao commented on PR #5170:
URL: https://github.com/apache/hadoop/pull/5170#issuecomment-1336418882

   > > Now that this case only happens when addBlockPool() is invoked and CachingGetSpaceUsed#used < 0, I have an idea: is it possible to add a switch so that the lock is not taken when ReplicaCachingGetSpaceUsed#init() runs for the first time, and is taken at all other times?
   > 
   > This makes sense to me; getting the replicas' space-usage information does not need strong consistency. @Hexiaoqiao any suggestion?
   
   Thanks for the detailed discussion. +1, it looks good to me.
   BTW, I tried to dig up the PR that fixes this bug but could not find one. The fix only exists in our internal branch, which does not refresh space used at the init stage. The refresh of the used value runs in a completely asynchronous thread (in CachingGetSpaceUsed), so it cannot deadlock when the DataNode instance restarts. Thanks.
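
   A minimal sketch of the "switch" idea discussed above, assuming it lands
   inside ReplicaCachingGetSpaceUsed (the class shape, the firstRefresh field
   and the helper method names below are illustrative, not the actual Hadoop
   code): skip taking the dataset lock on the very first refresh driven from
   init(), because addBlockPool() already holds the write lock at that point,
   and take the lock for all later periodic refreshes on the asynchronous
   CachingGetSpaceUsed thread.

       // Illustrative sketch only -- names are hypothetical, not the real
       // ReplicaCachingGetSpaceUsed implementation.
       import java.util.concurrent.atomic.AtomicBoolean;

       public class ReplicaCachingGetSpaceUsedSketch {
         // True until the first refresh (triggered from init()) has run.
         private final AtomicBoolean firstRefresh = new AtomicBoolean(true);

         protected void refresh() {
           if (firstRefresh.compareAndSet(true, false)) {
             // First refresh: addBlockPool() already holds the dataset write
             // lock, so do not try to acquire it again here.
             computeUsedWithoutLock();
           } else {
             // Later periodic refreshes run on the asynchronous
             // CachingGetSpaceUsed thread and take the lock so they see a
             // consistent view of the replicas.
             computeUsedUnderLock();
           }
         }

         private void computeUsedWithoutLock() {
           // sum replica sizes without the dataset lock
         }

         private void computeUsedUnderLock() {
           // acquire the dataset lock, sum replica sizes, release the lock
         }
       }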


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org

