Posted to issues@hbase.apache.org by "Clara Xiong (JIRA)" <ji...@apache.org> on 2016/02/11 02:18:18 UTC
[jira] [Updated] (HBASE-15251) During a cluster restart, HMaster thinks it is a failover by mistake
[ https://issues.apache.org/jira/browse/HBASE-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Clara Xiong updated HBASE-15251:
--------------------------------
Description:
We often need to do a full cluster restart as part of a release on a cluster of > 1000 nodes. We have tried our best to get a clean shutdown, but 50% of the time HMaster still thinks it is a failover. This increases the restart time from 5 min to 30 min and, since we do not run the balancer, drops locality from 95% to 5%. HBASE-14129 was filed for this earlier, but its fix didn't work.
After adding more logging and inspecting the logs, we identified two things that trigger the failover handling:
1. When the HMaster's AssignmentManager sees any dead servers in the ServerManager during joinCluster(), it declares a failover without any further check. I added a check for whether any regions are actually assigned to those servers; during a clean restart no regions are assigned to them yet.
2. Leftover empty directories for logs and splits, or empty WAL files, are also treated as evidence of a failover. I added a check for that as well. Although this can be worked around by manual cleanup, that is too tedious when restarting a large cluster. (Both checks are sketched below.)
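A minimal, self-contained sketch of the two proposed checks (an illustration of the idea, not the actual patch): the names deadServers, regionAssignments, and walRootDir are hypothetical stand-ins for the master's real state, and java.nio.file stands in for the HDFS FileSystem API the master actually uses.
{code:java}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Collection;
import java.util.Map;
import java.util.Set;

/** Illustrative sketch of the two extra checks; not the actual HBase patch. */
public class FailoverDetectionSketch {

  /**
   * Check 1: a dead server only indicates a failover if it still has regions
   * assigned to it. On a clean restart no regions are assigned yet.
   */
  static boolean deadServersHoldRegions(Set<String> deadServers,
      Map<String, Collection<String>> regionAssignments) {
    for (String server : deadServers) {
      Collection<String> regions = regionAssignments.get(server);
      if (regions != null && !regions.isEmpty()) {
        return true;   // real state to recover -> genuine failover
      }
    }
    return false;      // only stale dead-server entries -> clean restart
  }

  /**
   * Check 2: leftover empty log/split directories or zero-length WAL files
   * should not count as failover evidence; only a non-empty WAL does.
   */
  static boolean hasNonEmptyWals(Path walRootDir) throws IOException {
    if (!Files.isDirectory(walRootDir)) {
      return false;
    }
    try (DirectoryStream<Path> serverDirs = Files.newDirectoryStream(walRootDir)) {
      for (Path serverDir : serverDirs) {
        if (!Files.isDirectory(serverDir)) {
          continue;
        }
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(serverDir)) {
          for (Path entry : entries) {
            if (Files.isRegularFile(entry) && Files.size(entry) > 0) {
              return true;   // a WAL with data really needs splitting
            }
          }
        }
      }
    }
    return false;            // only empty dirs / empty files were left behind
  }

  /** Startup is a failover only if at least one check finds real state to recover. */
  static boolean isFailover(Set<String> deadServers,
      Map<String, Collection<String>> regionAssignments,
      Path walRootDir) throws IOException {
    return deadServersHoldRegions(deadServers, regionAssignments)
        || hasNonEmptyWals(walRootDir);
  }
}
{code}
The point of both helpers is the same: a dead-server entry or a leftover log directory only counts as failover evidence if there is real state to recover, i.e. regions still assigned to the server or a WAL file with bytes in it.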
Patch will follow shortly.
> During a cluster restart, HMaster thinks it is a failover by mistake
> --------------------------------------------------------------------
>
> Key: HBASE-15251
> URL: https://issues.apache.org/jira/browse/HBASE-15251
> Project: HBase
> Issue Type: Bug
> Components: master
> Affects Versions: 2.0.0, 0.98.15
> Reporter: Clara Xiong
> Assignee: Clara Xiong
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)