Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2021/08/12 06:28:08 UTC

[GitHub] [hadoop] virajjasani commented on pull request #3296: HDFS-16163. Avoid locking entire blockPinningFailures map

virajjasani commented on pull request #3296:
URL: https://github.com/apache/hadoop/pull/3296#issuecomment-897382966


   Thanks for taking a look @ferhui. Yes, this is a perf optimization. I came across it while looking into an unrelated issue in the Mover, for which I was comparing the diffs between Hadoop 2.10 and the latest 3.3 release. That original issue is still under investigation, but while going through the differences I came across HDFS-11164 and realized that, just to add or update a single key->value pair, we lock the entire map, so I thought of fixing this. I tested the change locally for sanity and correctness, but unfortunately I don't have perf results because it was a simple test.
   
   The other way to look at this is simplicity: unless we are updating multiple entries in a single batch, we don't need to lock the entire map; for a single-entry update, we can instead use the fine-grained utilities that ConcurrentHashMap provides.
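   
   For illustration, a minimal sketch of the two patterns being contrasted. The field name, key/value types, and method names below are hypothetical stand-ins, not the actual code in Dispatcher.java; the point is only the coarse-grained synchronized block versus ConcurrentHashMap.compute().
   
       import java.util.Set;
       import java.util.concurrent.ConcurrentHashMap;
       import java.util.concurrent.ConcurrentMap;
   
       public class BlockPinningFailuresSketch {
   
         // Illustrative map from block ID to datanodes where pinning failed.
         private final ConcurrentMap<Long, Set<String>> blockPinningFailures =
             new ConcurrentHashMap<>();
   
         // Coarse-grained: every update serializes behind one lock on the map,
         // even though only a single key is being touched.
         void recordFailureCoarse(long blockId, String datanode) {
           synchronized (blockPinningFailures) {
             Set<String> nodes = blockPinningFailures.get(blockId);
             if (nodes == null) {
               nodes = ConcurrentHashMap.newKeySet();
               blockPinningFailures.put(blockId, nodes);
             }
             nodes.add(datanode);
           }
         }
   
         // Fine-grained: compute() locks only the bin holding this key, so
         // updates for different blocks do not contend with each other.
         void recordFailureFineGrained(long blockId, String datanode) {
           blockPinningFailures.compute(blockId, (id, nodes) -> {
             if (nodes == null) {
               nodes = ConcurrentHashMap.newKeySet();
             }
             nodes.add(datanode);
             return nodes;
           });
         }
       }
   
   The atomicity of the single-entry update is preserved either way; only the scope of the lock changes.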


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


