Posted to issues@hbase.apache.org by "Ted Yu (JIRA)" <ji...@apache.org> on 2013/06/11 18:25:20 UTC
[jira] [Comment Edited] (HBASE-8729) distributedLogReplay may hang during chained region server failure
[ https://issues.apache.org/jira/browse/HBASE-8729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13680420#comment-13680420 ]
Ted Yu edited comment on HBASE-8729 at 6/11/13 4:23 PM:
--------------------------------------------------------
{code}
+ this.executorService.startExecutorService(ExecutorType.MASTER_LOG_REPLAY_OPERATIONS,
+ conf.getInt("hbase.master.executor.serverops.threads", 15));
{code}
Did you intend to introduce a new config param for log replay operations?
There are several syntax errors in the class javadoc for LogReplayHandler.
{code}
+ sinkConf.setInt(HConstants.HBASE_RPC_TIMEOUT_KEY, HConstants.DEFAULT_HBASE_RPC_TIMEOUT / 2);
{code}
Can you add a comment explaining the above change?
was (Author: yuzhihong@gmail.com):
{code}
+ this.executorService.startExecutorService(ExecutorType.MASTER_LOG_REPLAY_OPERATIONS,
+ conf.getInt("hbase.master.executor.serverops.threads", 15));
{code}
Did you intend to introduce a new config param for log replay operations?
There are several syntax errors in the class javadoc for EventHandler.
{code}
+ sinkConf.setInt(HConstants.HBASE_RPC_TIMEOUT_KEY, HConstants.DEFAULT_HBASE_RPC_TIMEOUT / 2);
{code}
Can you add a comment explaining the above change?
> distributedLogReplay may hang during chained region server failure
> ------------------------------------------------------------------
>
> Key: HBASE-8729
> URL: https://issues.apache.org/jira/browse/HBASE-8729
> Project: HBase
> Issue Type: Bug
> Components: MTTR
> Reporter: Jeffrey Zhong
> Assignee: Jeffrey Zhong
> Fix For: 0.98.0, 0.95.2
>
> Attachments: hbase-8729.patch
>
>
> In a test, half the cluster (in terms of region servers) was down, and some log replays incurred chained RS failures (the receiving RS of a log replay failed again).
> By default we only allow 3 concurrent SSH handlers, controlled by {code}this.executorService.startExecutorService(ExecutorType.MASTER_SERVER_OPERATIONS, conf.getInt("hbase.master.executor.serverops.threads", 3));{code}
> If all 3 SSH handlers are doing logReplay (a blocking call) and one of the receiving RSs fails again, logReplay will hang: regions of the newly failed RS can't be re-assigned to another live RS (no SSH handler can run due to the max threads setting), and the existing log replay will keep routing replay traffic to the dead RS.
> The fix is to submit logReplay work to a separate type of executor queue so that it does not block SSH region assignment, letting logReplay route traffic to a live RS after retries and move forward.
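The starvation pattern described above, and why a dedicated pool avoids it, can be sketched with plain java.util.concurrent. This is not HBase code or part of the patch; it is a minimal illustration, and the class and method names (SeparateReplayPool, runScenario) are invented for the example. The dedicated pool plays the role of the new MASTER_LOG_REPLAY_OPERATIONS executor type from the patch.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Hypothetical standalone sketch (not HBase code): blocking logReplay work
// runs in its own pool, so the serverops pool stays free to perform the
// region re-assignment that eventually unblocks the replays.
public class SeparateReplayPool {

  static String runScenario() throws Exception {
    ExecutorService serverOps = Executors.newFixedThreadPool(3); // like serverops.threads = 3
    ExecutorService logReplay = Executors.newFixedThreadPool(3); // dedicated replay pool
    CountDownLatch regionsAssigned = new CountDownLatch(1);

    // Three blocking "logReplay" tasks: each waits until the regions of the
    // failed receiving RS have been re-assigned to a live RS.
    for (int i = 0; i < 3; i++) {
      logReplay.submit(() -> {
        try {
          regionsAssigned.await();
        } catch (InterruptedException ignored) {
          Thread.currentThread().interrupt();
        }
      });
    }

    // Because replay does not occupy serverOps threads, the SSH assignment
    // task can still run and unblock the replays. If all tasks shared one
    // 3-thread pool, this task would wait forever behind the blocked replays.
    Future<String> assignment = serverOps.submit(() -> {
      regionsAssigned.countDown(); // regions re-assigned to a live RS
      return "assigned";
    });
    String result = assignment.get(5, TimeUnit.SECONDS);

    serverOps.shutdown();
    logReplay.shutdown();
    return result;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(runScenario());
  }
}
```

With a single shared 3-thread pool, the `assignment.get(...)` call above would time out instead, which is exactly the hang reported in this issue.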
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira