Posted to issues@hbase.apache.org by GitBox <gi...@apache.org> on 2021/06/14 22:43:39 UTC

[GitHub] [hbase] bharathv commented on pull request #3382: HBASE-25998: Redo synchronization in SyncFuture

bharathv commented on pull request #3382:
URL: https://github.com/apache/hbase/pull/3382#issuecomment-861041039


   > In general, I do not think using ReentrantLock makes much difference in performance compared to synchronized.
   > Maybe the reason is that we now create a ReentrantLock every time, so there are some side effects such as biased locking?
   
   Agree, it's not just synchronized vs j.u.concurrent. It looks like synchronized is a teeny bit slower than the concurrent implementations under contention (based on benchmarks, [example](http://david-soroko.blogspot.com/2016/02/synchronized-vs-reentrantlock.html)), but in general there shouldn't be a substantial difference, especially with modern JVM versions.
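
   To make the comparison concrete, here is a minimal, self-contained sketch (illustrative only, not code from this PR; the class and method names are made up) of the two locking styles being compared. Each counter is guarded by its own lock, and under contention the throughput difference between the two on a modern JVM is usually small:

   ```java
   import java.util.concurrent.locks.ReentrantLock;

   public class LockStyles {

       private long monitorCount;    // guarded by the intrinsic monitor on 'this'
       private long reentrantCount;  // guarded by 'lock'
       private final ReentrantLock lock = new ReentrantLock();

       // Intrinsic monitor: the JVM handles blocking and wakeup for us.
       public synchronized long incrementWithMonitor() {
           return ++monitorCount;
       }

       // Explicit j.u.c lock: same mutual exclusion, but unlocking is our responsibility.
       public long incrementWithReentrantLock() {
           lock.lock();
           try {
               return ++reentrantCount;
           } finally {
               lock.unlock();
           }
       }

       public static void main(String[] args) throws InterruptedException {
           LockStyles s = new LockStyles();
           Runnable work = () -> {
               for (int i = 0; i < 1_000_000; i++) {
                   s.incrementWithMonitor();
                   s.incrementWithReentrantLock();
               }
           };
           Thread t1 = new Thread(work), t2 = new Thread(work);
           t1.start(); t2.start();
           t1.join(); t2.join();
           // Both counters end up at 2000000; join() gives the needed visibility.
           System.out.println(s.monitorCount + " / " + s.reentrantCount);
       }
   }
   ```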
   
   I don't think it is biased locking: synchronized inherently supports biased locking (which, interestingly, is [being removed](https://openjdk.java.net/jeps/374)), so that is not a differentiator.
   
   I've looked at the assembly of the JIT-ed code for this function, but the difference was not obvious to me (I'm not an expert at reading x86 assembly, so I may have missed something). Just looking at the flame graphs with and without the change, I can see less contention around the locks, for a few reasons: there is no busy wait but a CV wait instead (so there is no constant monitor contention in a loop until the waiter is woken up), there is no contention around certain fields like txid and forcesync, and there is probably some help from the reentrant lock being faster than object monitors.
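
   For reference, here is a simplified, hypothetical sketch of the two wait styles described above. The class and field names are invented for illustration; this is not the actual SyncFuture code, which also tracks state such as txid and forceSync:

   ```java
   import java.util.concurrent.locks.Condition;
   import java.util.concurrent.locks.ReentrantLock;

   public class WaitStyles {

       // "Before" style: waiters poll a flag with a timed wait, so every timeout
       // expiry re-acquires the monitor and re-checks state, adding contention.
       static class PollingWaiter {
           private boolean done;

           public synchronized void awaitDone() throws InterruptedException {
               while (!done) {
                   wait(1000); // wakes up periodically even when nothing has changed
               }
           }

           public synchronized void markDone() {
               done = true;
               notifyAll();
           }
       }

       // "After" style: waiters park on a condition variable and stay parked
       // until signalled, so there is no periodic monitor re-acquisition.
       static class ConditionWaiter {
           private final ReentrantLock lock = new ReentrantLock();
           private final Condition doneCondition = lock.newCondition();
           private boolean done;

           public void awaitDone() throws InterruptedException {
               lock.lock();
               try {
                   while (!done) {
                       doneCondition.await();
                   }
               } finally {
                   lock.unlock();
               }
           }

           public void markDone() {
               lock.lock();
               try {
                   done = true;
                   doneCondition.signalAll();
               } finally {
                   lock.unlock();
               }
           }
       }
   }
   ```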


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org