Posted to issues@hbase.apache.org by "Ted Yu (JIRA)" <ji...@apache.org> on 2011/07/05 19:51:16 UTC

[jira] [Commented] (HBASE-4060) Making region assignment more robust

    [ https://issues.apache.org/jira/browse/HBASE-4060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13060036#comment-13060036 ] 

Ted Yu commented on HBASE-4060:
-------------------------------

On top of HBASE-3789, we should consider the following two remedies for the 0.90 branch:
1. how to speed up enabling a table with a large number of regions (12K in Eran's case)
2. AM.TimeoutMonitor.chore() may reassign a region that has just completed OpenedRegionHandler.process()

For #2 above, better coordination between OpenedRegionHandler and AM.TimeoutMonitor should be devised; a sketch of one possible guard follows.
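
The idea, in outline: have the timeout chore re-check the region's state under the same lock OpenedRegionHandler uses to mark the region open, and skip any region that has already opened. This is a minimal illustrative sketch, not the actual AssignmentManager code; RegionState, regionsInTransition, and reassign() are hypothetical stand-ins for the real internals:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Illustrative only: a guard that keeps the timeout chore from
    // reassigning a region that OpenedRegionHandler just marked open.
    class TimeoutGuardSketch {
      enum State { PENDING_OPEN, OPEN }

      static final class RegionState {
        State state = State.PENDING_OPEN;
        long stamp = System.currentTimeMillis();
      }

      private final Map<String, RegionState> regionsInTransition =
          new ConcurrentHashMap<>();

      // TimeoutMonitor.chore() equivalent, run periodically.
      void chore(long timeoutMs) {
        long now = System.currentTimeMillis();
        for (Map.Entry<String, RegionState> e : regionsInTransition.entrySet()) {
          RegionState rs = e.getValue();
          // Take the same lock regionOpened() takes below, so a region
          // that just finished opening is never reassigned.
          synchronized (rs) {
            if (rs.state == State.OPEN) {
              regionsInTransition.remove(e.getKey());
            } else if (now - rs.stamp > timeoutMs) {
              reassign(e.getKey(), rs);
            }
          }
        }
      }

      // OpenedRegionHandler.process() equivalent: mark OPEN under the lock.
      void regionOpened(String regionName) {
        RegionState rs = regionsInTransition.get(regionName);
        if (rs != null) {
          synchronized (rs) {
            rs.state = State.OPEN;
          }
        }
      }

      private void reassign(String regionName, RegionState rs) {
        rs.stamp = System.currentTimeMillis();
        // ... choose a new server and resend the open request ...
      }
    }

The point is simply that the OPEN transition and the timeout check are serialized on one monitor, so the chore can never act on a stale state.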

> Making region assignment more robust
> ------------------------------------
>
>                 Key: HBASE-4060
>                 URL: https://issues.apache.org/jira/browse/HBASE-4060
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Ted Yu
>             Fix For: 0.92.0
>
>
> From Eran Kutner:
> My concern is that the region allocation process seems to rely too much on
> timing considerations and doesn't seem to take enough measures to guarantee
> that conflicts do not occur. I understand that in a distributed environment,
> when you don't get a timely response from a remote machine you can't know
> for sure whether it received the request. However, there are things that can
> be done to mitigate this and reduce the conflict window significantly. For
> example, when I run hbck it knows that some regions are multiply assigned;
> the master could do the same and try to resolve the conflict. Another
> approach would be to handle late responses: even if the response from the
> remote machine arrives after the machine was assumed to be dead, the master
> should have enough information to know it has created a conflict by
> assigning the region to another server. An even better solution, I think,
> is for the RS to periodically verify that it is indeed the rightful owner
> of every region it holds and relinquish control of any region it does not
> own.
> Obviously a state where two RSs hold the same region is pathological and
> can lead to data loss, as demonstrated in my case. The system should be
> able to actively protect itself against such a scenario. It probably goes
> without saying, but there is really nothing worse for a data storage system
> than data loss.
> In my case the problem didn't happen in the initial phase, but after
> disabling and re-enabling a table with about 12K regions.
> For more background information, see 'Errors after major compaction' discussion on user@hbase.apache.org
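
On the master-side check Eran mentions: the multiple-assignment detection hbck performs could, in principle, be run by the master itself by cross-checking the region lists the servers report. A minimal sketch under that assumption; the report map and findMultiplyAssigned() are illustrative stand-ins, not HBase APIs:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative only: flag regions that more than one server claims.
    class DoubleAssignmentCheckSketch {
      // Input: serverName -> regions that server reports as online.
      // Output: region -> servers, for regions claimed by 2+ servers.
      static Map<String, List<String>> findMultiplyAssigned(
          Map<String, List<String>> reports) {
        Map<String, List<String>> claims = new HashMap<>();
        for (Map.Entry<String, List<String>> e : reports.entrySet()) {
          for (String region : e.getValue()) {
            claims.computeIfAbsent(region, k -> new ArrayList<>())
                  .add(e.getKey());
          }
        }
        // Keep only regions claimed by more than one server.
        claims.values().removeIf(servers -> servers.size() < 2);
        return claims;
      }
    }

Resolving a detected conflict is the harder part; the sketch only makes the conflict visible to the master instead of waiting for an operator to run hbck.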
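And Eran's last suggestion, a region server that periodically verifies it still owns every region it serves, could take the shape of a periodic chore. A minimal sketch, assuming hypothetical OwnershipSource and RegionCloser interfaces in place of a META lookup and the server's real close path:

    import java.util.Set;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Illustrative only: OwnershipSource and RegionCloser are hypothetical
    // stand-ins for a META lookup and the region server's close path.
    class OwnershipCheckSketch {
      interface OwnershipSource {
        String assignedServer(String regionName); // owner per META
      }
      interface RegionCloser {
        void close(String regionName); // relinquish the region locally
      }

      private final String myServerName;
      private final Set<String> onlineRegions; // regions this RS serves
      private final OwnershipSource source;
      private final RegionCloser closer;

      OwnershipCheckSketch(String myServerName, Set<String> onlineRegions,
          OwnershipSource source, RegionCloser closer) {
        this.myServerName = myServerName;
        this.onlineRegions = onlineRegions;
        this.source = source;
        this.closer = closer;
      }

      // Run the check on a fixed schedule, like a chore.
      ScheduledExecutorService start(long periodSeconds) {
        ScheduledExecutorService ses =
            Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(this::verifyOwnership,
            periodSeconds, periodSeconds, TimeUnit.SECONDS);
        return ses;
      }

      void verifyOwnership() {
        for (String region : onlineRegions) {
          String owner = source.assignedServer(region);
          // If the authoritative source names another server, stop
          // serving the region rather than risk a double assignment.
          if (owner != null && !owner.equals(myServerName)) {
            closer.close(region);
          }
        }
      }
    }

A conservative choice here is to only ever close, never open: a server that wrongly closes a region causes brief unavailability, while a server that wrongly keeps serving one risks the data loss described above.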
