Posted to reviews@kudu.apache.org by "Alexey Serbin (Code Review)" <ge...@cloudera.org> on 2020/04/13 23:33:34 UTC
[kudu-CR](branch-1.12.x) [catalog manager] reduce contention in ScopedLeaderSharedLock
Hello Kudu Jenkins, Andrew Wong,
I'd like you to do a code review. Please visit
http://gerrit.cloudera.org:8080/15723
to review the following change.
Change subject: [catalog_manager] reduce contention in ScopedLeaderSharedLock
......................................................................
[catalog_manager] reduce contention in ScopedLeaderSharedLock
While troubleshooting a performance issue when running a big cluster
with a large number of tables and a high rate of ConnectToMaster requests,
I noticed many reports like the following in the logs:
0323 03:59:31.091198 (+607857us) spinlock_profiling.cc:243]
Waited 492 ms on lock 0x4cb0960. stack:
0000000002398852
0000000000ad8c69
0000000000aa62ba
000000000221aaa8
...
which translates into
(anonymous namespace)::SubmitSpinLockProfileData()
master::CatalogManager::ScopedLeaderSharedLock::ScopedLeaderSharedLock()
master::MasterServiceImpl::ConnectToMaster()
rpc::GeneratedServiceIf::Handle()
...
From the code it became apparent that the lock in question was
std::lock_guard<simple_spinlock> l(catalog_->state_lock_);
in the ScopedLeaderSharedLock() constructor.
As far as I can see, there is no need to access the master's Raft
consensus information (which itself might wait on its internal locks
if there is concurrent Raft consensus activity) while holding the
catalog's state lock.
This patch shortens the critical section during which the catalog's
state lock is held when constructing a
CatalogManager::ScopedLeaderSharedLock instance.
Change-Id: I3b2e6866a8a0d5bda9e2b1f01e0668427de60868
Reviewed-on: http://gerrit.cloudera.org:8080/15698
Reviewed-by: Andrew Wong <aw...@cloudera.com>
Tested-by: Kudu Jenkins
(cherry picked from commit 14912a1fd78ba7cf4d62bf934ae64d6f6f229ee6)
---
M src/kudu/master/catalog_manager.cc
1 file changed, 11 insertions(+), 7 deletions(-)
git pull ssh://gerrit.cloudera.org:29418/kudu refs/changes/23/15723/1
--
To view, visit http://gerrit.cloudera.org:8080/15723
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings
Gerrit-Project: kudu
Gerrit-Branch: branch-1.12.x
Gerrit-MessageType: newchange
Gerrit-Change-Id: I3b2e6866a8a0d5bda9e2b1f01e0668427de60868
Gerrit-Change-Number: 15723
Gerrit-PatchSet: 1
Gerrit-Owner: Alexey Serbin <as...@cloudera.com>
Gerrit-Reviewer: Andrew Wong <aw...@cloudera.com>
Gerrit-Reviewer: Kudu Jenkins (120)
[kudu-CR](branch-1.12.x) [catalog manager] reduce contention in ScopedLeaderSharedLock
Posted by "Alexey Serbin (Code Review)" <ge...@cloudera.org>.
Alexey Serbin has submitted this change and it was merged. ( http://gerrit.cloudera.org:8080/15723 )
Change subject: [catalog_manager] reduce contention in ScopedLeaderSharedLock
......................................................................
[catalog_manager] reduce contention in ScopedLeaderSharedLock
While troubleshooting a performance issue when running a big cluster
with a large number of tables and a high rate of ConnectToMaster requests,
I noticed many reports like the following in the logs:
0323 03:59:31.091198 (+607857us) spinlock_profiling.cc:243]
Waited 492 ms on lock 0x4cb0960. stack:
0000000002398852
0000000000ad8c69
0000000000aa62ba
000000000221aaa8
...
which translates into
(anonymous namespace)::SubmitSpinLockProfileData()
master::CatalogManager::ScopedLeaderSharedLock::ScopedLeaderSharedLock()
master::MasterServiceImpl::ConnectToMaster()
rpc::GeneratedServiceIf::Handle()
...
From the code it became apparent that the lock in question was
std::lock_guard<simple_spinlock> l(catalog_->state_lock_);
in the ScopedLeaderSharedLock() constructor.
As far as I can see, there is no need to access the master's Raft
consensus information (which itself might wait on its internal locks
if there is concurrent Raft consensus activity) while holding the
catalog's state lock.
This patch shortens the critical section during which the catalog's
state lock is held when constructing a
CatalogManager::ScopedLeaderSharedLock instance.
Change-Id: I3b2e6866a8a0d5bda9e2b1f01e0668427de60868
Reviewed-on: http://gerrit.cloudera.org:8080/15698
Reviewed-by: Andrew Wong <aw...@cloudera.com>
Tested-by: Kudu Jenkins
(cherry picked from commit 14912a1fd78ba7cf4d62bf934ae64d6f6f229ee6)
Reviewed-on: http://gerrit.cloudera.org:8080/15723
Reviewed-by: Hao Hao <ha...@cloudera.com>
---
M src/kudu/master/catalog_manager.cc
1 file changed, 11 insertions(+), 7 deletions(-)
Approvals:
Kudu Jenkins: Verified
Hao Hao: Looks good to me, approved
--
To view, visit http://gerrit.cloudera.org:8080/15723
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings
Gerrit-Project: kudu
Gerrit-Branch: branch-1.12.x
Gerrit-MessageType: merged
Gerrit-Change-Id: I3b2e6866a8a0d5bda9e2b1f01e0668427de60868
Gerrit-Change-Number: 15723
Gerrit-PatchSet: 2
Gerrit-Owner: Alexey Serbin <as...@cloudera.com>
Gerrit-Reviewer: Alexey Serbin <as...@cloudera.com>
Gerrit-Reviewer: Andrew Wong <aw...@cloudera.com>
Gerrit-Reviewer: Hao Hao <ha...@cloudera.com>
Gerrit-Reviewer: Kudu Jenkins (120)
[kudu-CR](branch-1.12.x) [catalog manager] reduce contention in ScopedLeaderSharedLock
Posted by "Hao Hao (Code Review)" <ge...@cloudera.org>.
Hao Hao has posted comments on this change. ( http://gerrit.cloudera.org:8080/15723 )
Change subject: [catalog_manager] reduce contention in ScopedLeaderSharedLock
......................................................................
Patch Set 1: Code-Review+2
--
To view, visit http://gerrit.cloudera.org:8080/15723
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings
Gerrit-Project: kudu
Gerrit-Branch: branch-1.12.x
Gerrit-MessageType: comment
Gerrit-Change-Id: I3b2e6866a8a0d5bda9e2b1f01e0668427de60868
Gerrit-Change-Number: 15723
Gerrit-PatchSet: 1
Gerrit-Owner: Alexey Serbin <as...@cloudera.com>
Gerrit-Reviewer: Andrew Wong <aw...@cloudera.com>
Gerrit-Reviewer: Hao Hao <ha...@cloudera.com>
Gerrit-Reviewer: Kudu Jenkins (120)
Gerrit-Comment-Date: Tue, 14 Apr 2020 05:34:09 +0000
Gerrit-HasComments: No