Posted to issues@hbase.apache.org by "Joseph (JIRA)" <ji...@apache.org> on 2016/07/11 19:07:11 UTC

[jira] [Comment Edited] (HBASE-16138) Cannot open regions after non-graceful shutdown due to deadlock with Replication Table

    [ https://issues.apache.org/jira/browse/HBASE-16138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15371424#comment-15371424 ] 

Joseph edited comment on HBASE-16138 at 7/11/16 7:06 PM:
---------------------------------------------------------

After talking to [~eclark], [~ghelmling], and [~mantonov], I will attempt to fix this issue with the following process:
1. Check the success of regionOpen requests inside of AssignmentManager. If serverManager.sendRegionOpen() returns RegionOpeningState.FAILED_OPENING, queue the request and retry it later. Currently we do nothing about failed RegionOpen requests.
2. Do not register WAL queues for Replication Table regions.
3. Automatically fail a region open request if the Replication Table is not initialized but table-based replication is enabled.
With this setup, AssignmentManager will attempt to open all of its regions on cluster re-initialization, but will fail immediately on all replicated regions. Thus the regions of non-replicated tables (in particular, the Replication Table) will be opened first. The replicated regions will eventually be assigned by the newly added retry queue inside of AssignmentManager, sketched below.
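A minimal sketch of the retry handling in step 1, using hypothetical names (RegionOpenRetrier, OpenRequest) rather than the real AssignmentManager/ServerManager API:

        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;

        // Stand-in for HBase's RegionOpeningState.
        enum RegionOpeningState { OPENED, ALREADY_OPENED, FAILED_OPENING }

        class RegionOpenRetrier {
            /** Hypothetical wrapper around one serverManager.sendRegionOpen() call. */
            interface OpenRequest {
                RegionOpeningState send();
            }

            // Open requests that came back FAILED_OPENING, awaiting retry.
            private final Queue<OpenRequest> retryQueue = new ConcurrentLinkedQueue<>();

            /** Step 1: check the result of the open request and queue failures. */
            void tryOpen(OpenRequest request) {
                if (request.send() == RegionOpeningState.FAILED_OPENING) {
                    retryQueue.add(request);   // previously this failure was dropped
                }
            }

            /** Run periodically (e.g. from a chore) until the queue drains. */
            void retryFailedOpens() {
                int n = retryQueue.size();     // retry only what is queued now;
                for (int i = 0; i < n; i++) {  // fresh failures wait for the next pass
                    OpenRequest request = retryQueue.poll();
                    if (request == null) break;
                    tryOpen(request);
                }
            }
        }

Draining only a snapshot of the queue keeps a persistently failing region (e.g. a replicated region opened before the Replication Table is up) from spinning inside one retry pass; it is simply re-queued and picked up on the next pass.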


> Cannot open regions after non-graceful shutdown due to deadlock with Replication Table
> --------------------------------------------------------------------------------------
>
>                 Key: HBASE-16138
>                 URL: https://issues.apache.org/jira/browse/HBASE-16138
>             Project: HBase
>          Issue Type: Sub-task
>          Components: Replication
>            Reporter: Joseph
>            Assignee: Joseph
>            Priority: Critical
>
> If we shut down an entire HBase cluster and attempt to start it back up, every region open must first run the WAL pre-log roll. This pre-log roll must record the new WAL inside of ReplicationQueues, and that call ends up blocking on TableBasedReplicationQueues.getOrBlockOnReplicationTable() because the Replication Table is not up yet. And we cannot assign the Replication Table because we cannot open any regions. This ends up deadlocking the entire cluster whenever we lose Replication Table availability, as illustrated below.
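> A self-contained illustration of the circular wait (names are hypothetical; the real blocking call is TableBasedReplicationQueues.getOrBlockOnReplicationTable(), invoked from the pre-log roll):
>
>         import java.util.concurrent.CountDownLatch;
>
>         class ReplicationTableDeadlock {
>             // Stands in for "the Replication Table is up and serving".
>             static final CountDownLatch replicationTableUp = new CountDownLatch(1);
>
>             // Every region open runs the WAL pre-log roll, which records the
>             // new WAL in the table-based ReplicationQueues -- and that
>             // recording blocks until the Replication Table is available.
>             static void openRegion(String region) throws InterruptedException {
>                 replicationTableUp.await();          // getOrBlockOnReplicationTable()
>                 if (region.equals("hbase:replication")) {
>                     replicationTableUp.countDown();  // unreachable: await() above never returns
>                 }
>             }
>
>             public static void main(String[] args) throws Exception {
>                 // After a full-cluster restart, even the Replication Table's
>                 // own region must pass through openRegion(), so this hangs.
>                 openRegion("hbase:replication");
>             }
>         }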
> There are a few options, but none of them seems very good:
> 1. Depend on ZooKeeper-based replication until the Replication Table becomes available
> 2. Have a separate WAL for System Tables that does not perform any replication (see discussion at HBASE-14623)
>               Or just have a separate WAL for non-replicated vs. replicated regions
> 3. Record the WAL log in the ReplicationQueue asynchronously (don't block opening a region on this event), which could lead to inconsistent replication state; a sketch of this option follows the list
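> A minimal sketch of option 3, assuming a hypothetical AsyncWalRecorder rather than the actual ReplicationSourceManager.recordLog() code:
>
>         import java.util.concurrent.ExecutorService;
>         import java.util.concurrent.Executors;
>
>         class AsyncWalRecorder {
>             private final ExecutorService recorder = Executors.newSingleThreadExecutor();
>
>             /** Called from preLogRoll(): queue the recordLog work, return at once. */
>             void recordLogAsync(Runnable recordIntoReplicationTable) {
>                 // The submitted task may still block on the Replication Table,
>                 // but the region open no longer waits on it -- breaking the
>                 // deadlock at the cost of a window where the recorded queue
>                 // state lags the WALs actually on disk.
>                 recorder.submit(recordIntoReplicationTable);
>             }
>         }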
> The stacktrace:
>         org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.recordLog(ReplicationSourceManager.java:376)
>         org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.preLogRoll(ReplicationSourceManager.java:348)
>         org.apache.hadoop.hbase.replication.regionserver.Replication.preLogRoll(Replication.java:370)
>         org.apache.hadoop.hbase.regionserver.wal.FSHLog.tellListenersAboutPreLogRoll(FSHLog.java:637)
>         org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:701)
>         org.apache.hadoop.hbase.regionserver.wal.FSHLog.rollWriter(FSHLog.java:600)
>         org.apache.hadoop.hbase.regionserver.wal.FSHLog.<init>(FSHLog.java:533)
>         org.apache.hadoop.hbase.wal.DefaultWALProvider.getWAL(DefaultWALProvider.java:132)
>         org.apache.hadoop.hbase.wal.RegionGroupingProvider.getWAL(RegionGroupingProvider.java:186)
>         org.apache.hadoop.hbase.wal.RegionGroupingProvider.getWAL(RegionGroupingProvider.java:197)
>         org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:240)
>         org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:1883)
>         org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:363)
>         org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:129)
>         org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>         java.lang.Thread.run(Thread.java:745)
> Does anyone have any suggestions/ideas/feedback?


