Posted to dev@hbase.apache.org by "Yannis Pavlidis (JIRA)" <ji...@apache.org> on 2009/10/22 19:38:59 UTC

[jira] Commented: (HBASE-1928) ROOT and META tables stay in transition state (making the system not usable) if the designated regionServer dies before the assignment is complete

    [ https://issues.apache.org/jira/browse/HBASE-1928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12768773#action_12768773 ] 

Yannis Pavlidis commented on HBASE-1928:
----------------------------------------

The above situation occurred in our environment when:

pre condition
===============
cache01 (backup master; runs a region server with ROOT and META assigned to it)
cache02 (runs a region server)
search01 (runs the master and the region server)

scenario
=========
kill the master on search01
the backup master on cache01 takes over master duties
cache01 encounters a fatal error (FATAL org.apache.hadoop.hbase.regionserver.LogRoller: Log rolling failed with ioe) and has to exit
ROOT is re-assigned to the region server on search01 and META is re-assigned to the region server on cache02.

cache02 then encounters the same fatal error (FATAL org.apache.hadoop.hbase.regionserver.LogRoller: Log rolling failed with ioe) and has to exit before it accepts the assignment to serve the META region.

post condition
===============
While ROOT is assigned to search01, META appears to have been left in a limbo state (still in the regionsInTransition map of the RegionManager). I believe the issue is a race condition:
The region server on cache02 never gets the chance to complete the assignment of the META region. When cache01 realizes in ProcessServerShutdown that cache02 has died, it never checks whether the dead server had a META region assigned to it that is still in transition (the isMetaServer method in the RegionManager checks exactly that). As a result, when my client connects it is handed the cache02 address for the META server and, of course, keeps failing to connect.
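To make the race concrete, here is a tiny self-contained sketch of the failure mode. Everything in it is a stand-in (plain Java maps and made-up names), not the actual RegionManager / ProcessServerShutdown code:

{code}
import java.util.HashMap;
import java.util.Map;

// Toy model of the race (all names here are stand-ins, not HBase classes):
// the shutdown handler cleans up regions it knows are *online* on the dead
// server, but an assignment that is still in transition is never cleared.
public class ShutdownRace {
  // region name -> server currently serving it
  static Map<String, String> onlineRegions = new HashMap<String, String>();
  // region name -> server the master is still waiting on to confirm
  static Map<String, String> regionsInTransition = new HashMap<String, String>();

  static void processServerShutdown(String deadServer) {
    // existing behaviour (roughly): drop confirmed assignments
    onlineRegions.values().removeIf(s -> s.equals(deadServer));
    // MISSING: regionsInTransition is never consulted (the way isMetaServer
    // would), so .META. stays pinned to the dead server.
  }

  public static void main(String[] args) {
    regionsInTransition.put(".META.,,1", "cache02:60020"); // sent, never confirmed
    processServerShutdown("cache02:60020");
    System.out.println("still in transition: " + regionsInTransition); // not empty: the bug
  }
}
{code}

The only point of the toy is the shape of the bug: the shutdown path sweeps the "online" bookkeeping but never the "in transition" bookkeeping.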

I have attached the logs for this scenario. Because this exact case is of course difficult to reproduce, please follow the steps in the first comment to simulate the problem with ROOT / META.


> ROOT and META tables stay in transition state (making the system not usable) if the designated regionServer dies before the assignment is complete
> --------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-1928
>                 URL: https://issues.apache.org/jira/browse/HBASE-1928
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: master
>    Affects Versions: 0.20.0, 0.20.1
>         Environment: Linux
>            Reporter: Yannis Pavlidis
>             Fix For: 0.20.2
>
>
> During a ROOT or META table re-assignment, if the designated regionServer dies before the assignment is complete, the whole cluster becomes unavailable since the ROOT or META tables cannot be accessed (and they never recover, because they are kept in a transition state).
> Below are the four cases that replicate this issue (this is the easiest way to reproduce it; you can imagine the same sequence occurring in any real system).
> Pre condition
> ============
> 1. a cluster of 3 nodes (cache01, cache02, search01).
> 2. start the system (start-hbase)
> 3. cache02 has META, search01 has ROOT, cache01 runs a region server and the master.
> Case 1:
> =======
> 1. kill cache01
> 2. kill cache02
> 3. now search01 has both ROOT and META.
> 4. re-start RegionServers on cache01 and cache02
> 5. Tail the master logs and grep for "Assigning region -ROOT-" and also for "Assigning region .META." (two terminal windows make this easier)
> 6. kill search01
> 7. watch the tail to see which server ROOT gets assigned to
> 8. quickly kill that server
> 9. you should notice that ROOT never gets re-assigned (because it is stuck in regionsInTransition)
> The termination is handled through ServerManager::removeServerInfo, since the regionServer reports back to the master that it is shutting down.
> Case 2:
> ========
> Repeat Case 1, but in steps 7 and 8 kill the server that has the META region assigned to it. Again the cluster becomes unavailable because the META region stays in regionsInTransition.
> The termination is handled through ServerManager::removeServerInfo, since the regionServer reports back to the master that it is shutting down.
> Case 3:
> ========
> Repeat Case 1, but in steps 7 and 8 kill the server with kill -9 instead of kill. This gives the regionServer no opportunity to report back to the master that it is terminating. The master still realizes the server is gone because its znode expires (a different code path from before: it goes through ProcessServerShutdown).
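> (Aside: the mechanism here is ZooKeeper's ordinary ephemeral-node expiry. Below is a minimal, generic sketch using the plain ZooKeeper client API; the znode path is made up and this is not the actual HBase wiring.)
> {code}
> import org.apache.zookeeper.WatchedEvent;
> import org.apache.zookeeper.Watcher;
> import org.apache.zookeeper.ZooKeeper;
>
> // Generic sketch: a region server holds an ephemeral znode; after a
> // kill -9 its session times out, ZooKeeper deletes the znode, and the
> // master's watcher is notified. In HBase this notification is where
> // ProcessServerShutdown gets queued.
> public class DeadServerWatcher implements Watcher {
>   public void process(WatchedEvent event) {
>     if (event.getType() == Event.EventType.NodeDeleted) {
>       System.out.println("server znode gone: " + event.getPath());
>     }
>   }
>
>   public static void main(String[] args) throws Exception {
>     ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, new DeadServerWatcher());
>     zk.exists("/hbase/rs/cache02", true); // watch the (made-up) znode path
>     Thread.sleep(Long.MAX_VALUE);         // stay alive to receive the event
>   }
> }
> {code}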
> Case 4:
> ========
> Repeat Case 2, but in steps 7 and 8 kill the server that has the META region with kill -9 instead of kill. Again the regionServer gets no chance to report that it is terminating; the master learns of the death through znode expiry and goes through ProcessServerShutdown, and the META region stays stuck in regionsInTransition.
> The solution would be to check, in ServerManager::removeServerInfo and in ProcessServerShutdown::closeMetaRegions, whether the terminated server had been assigned either the ROOT or META table, and if it had, make those tables ready to be re-assigned.
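> The idea in a tiny self-contained sketch (plain Java stand-ins with made-up names, not the actual ServerManager / ProcessServerShutdown code):
> {code}
> import java.util.HashMap;
> import java.util.HashSet;
> import java.util.Iterator;
> import java.util.Map;
> import java.util.Set;
>
> // Toy sketch of the proposed fix: whenever a server is declared dead,
> // whether on the graceful path (removeServerInfo) or the znode-expiry
> // path (closeMetaRegions), also sweep the in-transition bookkeeping and
> // re-queue ROOT/META for assignment.
> public class ShutdownFix {
>   static Map<String, String> regionsInTransition = new HashMap<String, String>();
>   static Set<String> unassigned = new HashSet<String>(); // awaiting re-assignment
>
>   static void serverDied(String deadServer) {
>     Iterator<Map.Entry<String, String>> it = regionsInTransition.entrySet().iterator();
>     while (it.hasNext()) {
>       Map.Entry<String, String> e = it.next();
>       if (deadServer.equals(e.getValue())) {
>         unassigned.add(e.getKey()); // ROOT or META becomes assignable again
>         it.remove();                // drop the stale transition entry
>       }
>     }
>   }
>
>   public static void main(String[] args) {
>     regionsInTransition.put(".META.,,1", "cache02:60020");
>     serverDied("cache02:60020"); // call this from both termination paths
>     System.out.println("ready to re-assign: " + unassigned); // prints [.META.,,1]
>   }
> }
> {code}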
