Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2020/06/29 02:42:13 UTC

[GitHub] [spark] LantaoJin edited a comment on pull request #28938: [SPARK-32118][SQL] Use fine-grained read write lock for each database in HiveExternalCatalog

LantaoJin edited a comment on pull request #28938:
URL: https://github.com/apache/spark/pull/28938#issuecomment-650868622


   > Hi, @LantaoJin . Since this PR aims to use multiple locks instead of a global one, could you add some explanation briefly about how this PR is deadlock-free? Thanks.
   
   Sure. The multiple reentrant read-write locks are partitioned by database: only operations on the same database share a lock. I checked all operations; no operation, together with its callers, ever needs to hold two write locks. Since the locks are reentrant, it is fine that `renamePartitions`, which calls `alterPartitions`, re-enters the same write lock.
   Besides, this patch has been running in our production for a long time (over a year since we moved from the object lock to a read-write lock, and over five months since we split that into one lock per database). First, we replaced the single object lock with a single read-write lock because contention on the old lock was very heavy. After moving to the read-write lock, things improved a lot, but we still found that heavy holding of the write lock could impact operations even though the queries were submitted by different users. So we decided to split the single read-write lock into one lock per database.
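   The scheme described above can be sketched in plain Java, assuming a hypothetical `PerDatabaseLocks` helper (class and method names are illustrative, not the actual `HiveExternalCatalog` code): one `ReentrantReadWriteLock` per database name, created lazily, so readers on the same database proceed concurrently while a writer on one database never blocks operations on another. Because the lock is reentrant, a caller such as `renamePartitions` can take the write lock and then call a method like `alterPartitions` that takes it again:

   ```java
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.locks.ReentrantReadWriteLock;
   import java.util.function.Supplier;

   // Hypothetical sketch: one reentrant read-write lock per database name.
   class PerDatabaseLocks {
       private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks =
               new ConcurrentHashMap<>();

       // Lazily create the lock for a database on first use.
       private ReentrantReadWriteLock lockFor(String db) {
           return locks.computeIfAbsent(db, k -> new ReentrantReadWriteLock());
       }

       // Run a read-only catalog operation under the database's read lock.
       <T> T withReadLock(String db, Supplier<T> body) {
           ReentrantReadWriteLock.ReadLock l = lockFor(db).readLock();
           l.lock();
           try {
               return body.get();
           } finally {
               l.unlock();
           }
       }

       // Run a mutating catalog operation under the database's write lock.
       // The lock is reentrant, so nested calls on the same database are safe.
       <T> T withWriteLock(String db, Supplier<T> body) {
           ReentrantReadWriteLock.WriteLock l = lockFor(db).writeLock();
           l.lock();
           try {
               return body.get();
           } finally {
               l.unlock();
           }
       }
   }
   ```

   Deadlock-freedom follows from the two properties the comment states: no code path ever holds write locks for two databases at once, and nested acquisition of the same database's write lock is absorbed by reentrancy.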


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org