Posted to issues@hbase.apache.org by "Jean-Daniel Cryans (JIRA)" <ji...@apache.org> on 2010/09/13 19:30:32 UTC
[jira] Created: (HBASE-2989) [replication] RSM won't cleanup after locking if 0 peers
[replication] RSM won't cleanup after locking if 0 peers
--------------------------------------------------------
Key: HBASE-2989
URL: https://issues.apache.org/jira/browse/HBASE-2989
Project: HBase
Issue Type: Bug
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
Priority: Minor
Fix For: 0.90.0
Small bug in ReplicationSourceManager: it won't clean up after locking another RS's znode if that znode didn't contain any queue at all. It happens in transferQueues():
{code}
LOG.info("Moving " + rsZnode + "'s hlogs to my queue");
SortedMap<String, SortedSet<String>> newQueues =
    this.zkHelper.copyQueuesFromRS(rsZnode);
if (newQueues == null || newQueues.size() == 0) {
  return;
}
this.zkHelper.deleteRsQueues(rsZnode);
{code}
That last line should come before the if, so that the lock znode and the RS znode are deleted even when no queues were found. Currently a lot of cruft piles up in ZK after a few restarts with replication enabled and no queues, or in slave RSs.
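A minimal, self-contained sketch of the corrected ordering, using a hypothetical ZkHelper stub in place of the real ReplicationZookeeper helper (the stub and its fields are illustrative, not HBase API):

```java
import java.util.SortedMap;
import java.util.SortedSet;
import java.util.TreeMap;

public class TransferQueuesSketch {
  // Hypothetical stub standing in for the ZK helper used by the real code.
  static class ZkHelper {
    boolean rsZnodeDeleted = false;

    SortedMap<String, SortedSet<String>> copyQueuesFromRS(String rsZnode) {
      return new TreeMap<>(); // stub: dead RS had no queues at all
    }

    void deleteRsQueues(String rsZnode) {
      rsZnodeDeleted = true; // stub: would remove the lock znode and RS znode
    }
  }

  final ZkHelper zkHelper = new ZkHelper();

  void transferQueues(String rsZnode) {
    SortedMap<String, SortedSet<String>> newQueues =
        this.zkHelper.copyQueuesFromRS(rsZnode);
    // Fix: delete the RS znode (and its lock) before the early return,
    // so cleanup happens even when there were no queues to copy.
    this.zkHelper.deleteRsQueues(rsZnode);
    if (newQueues == null || newQueues.size() == 0) {
      return;
    }
    // ... would start replication sources for newQueues here ...
  }

  public static void main(String[] args) {
    TransferQueuesSketch s = new TransferQueuesSketch();
    s.transferQueues("rs1"); // dead RS with zero queues
    System.out.println("deleted=" + s.zkHelper.rsZnodeDeleted);
  }
}
```

With the delete moved above the early return, the stub reports the znode as deleted even though no queues were transferred, which is the behavior the fix restores.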
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
[jira] Resolved: (HBASE-2989) [replication] RSM won't cleanup after locking if 0 peers
Posted by "Jean-Daniel Cryans (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HBASE-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jean-Daniel Cryans resolved HBASE-2989.
---------------------------------------
Resolution: Fixed
[jira] Updated: (HBASE-2989) [replication] RSM won't cleanup after locking if 0 peers
Posted by "Jean-Daniel Cryans (JIRA)" <ji...@apache.org>.
[ https://issues.apache.org/jira/browse/HBASE-2989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Jean-Daniel Cryans updated HBASE-2989:
--------------------------------------
Fix Version/s: 0.89.20100924
Minor change we've been running in production for some time. Committed to the latest 0.89 and 0.90 branches.