Posted to issues@hbase.apache.org by "Gary Helmling (JIRA)" <ji...@apache.org> on 2016/06/23 23:03:16 UTC

[jira] [Commented] (HBASE-16095) Add priority to TableDescriptor and priority region open thread pool

    [ https://issues.apache.org/jira/browse/HBASE-16095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15347388#comment-15347388 ] 

Gary Helmling commented on HBASE-16095:
---------------------------------------

Do the data regions really need to block opening before index regions are available?  At any given point in time, the indexing implementation needs to be able to deal with an index region being offline, right?  Can't they just reject operations while the index region cannot be reached?  This seems like a brittle way to approach the problem.  In general, building dependency ordering into distributed systems has a bit of a code smell to it.  Better to make each part of a distributed system resilient to failure.  Is there another way to approach this from the Phoenix side?
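For illustration, here is a minimal sketch of the fail-fast alternative suggested above: a RegionObserver on the data table rejects writes while its index regions are unreachable, instead of blocking the data region open on the index regions. It uses the HBase 1.x coprocessor API; the indexAvailable() check is a hypothetical placeholder for whatever liveness probe the indexing implementation actually uses.

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.DoNotRetryIOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

/**
 * Sketch of the "fail fast" alternative: reject data-table writes while the
 * index cannot be reached, rather than ordering region opens.
 */
public class RejectWhileIndexOfflineObserver extends BaseRegionObserver {

  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> ctx, Put put,
      WALEdit edit, Durability durability) throws IOException {
    if (!indexAvailable(ctx.getEnvironment())) {
      // DoNotRetryIOException pushes the failure straight back to the client
      // instead of letting the write block inside the region server.
      throw new DoNotRetryIOException("Index regions unavailable; rejecting write");
    }
  }

  private boolean indexAvailable(RegionCoprocessorEnvironment env) {
    // Hypothetical liveness check; a real indexing implementation would probe
    // (or cache) the state of the index table's regions here.
    return true;
  }
}
{code}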

> Add priority to TableDescriptor and priority region open thread pool
> --------------------------------------------------------------------
>
>                 Key: HBASE-16095
>                 URL: https://issues.apache.org/jira/browse/HBASE-16095
>             Project: HBase
>          Issue Type: Bug
>            Reporter: Enis Soztutar
>            Assignee: Enis Soztutar
>             Fix For: 2.0.0, 1.4.0
>
>         Attachments: hbase-16095_v0.patch
>
>
> This is in a similar area to HBASE-15816, and is also required by the current secondary indexing implementation for Phoenix. 
> The problem with Phoenix secondary indexes is that data table regions depend on index regions to be able to make progress. Possible distributed deadlocks on the RPC path can be prevented via the custom RpcScheduler + RpcController configuration from HBASE-11048 and PHOENIX-938. However, region opening has the same deadlock potential, because a data region open has to replay its WAL edits to the index regions. There is only one thread pool to open regions, with 3 workers by default. So if the cluster is recovering or restarting from scratch, the deadlock happens because some index regions cannot be opened while they sit in the same queue behind data regions that are opening (which in turn wait on RPCs to index regions that are not yet open). We see this reproduced in almost all Phoenix secondary index clusters (mutable tables w/o transactions). 
> The proposal is to add a "high priority" region opening thread pool, and have the HTD carry the relative priority of a table. This may also be useful for other "framework"-level tables from Phoenix, Tephra, Trafodion, etc. if they want specific tables to come online faster. 
> As a follow-up patch, we can also look at how this priority information can be used by the RPC scheduler on the server side or the RPC controller on the client side, so that we do not have to set priorities manually per operation. 
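To make the region-open deadlock described in the issue above concrete, the following self-contained sketch (plain Java, no HBase APIs) reproduces its shape: a fixed pool whose workers all block on results produced by tasks still sitting in the same pool's queue never makes progress.

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Shape of the region-open deadlock: "data region" tasks occupy every worker in
 * the single open-region pool and wait on "index region" tasks that are still
 * queued behind them, so nothing ever completes.
 */
public class SingleOpenPoolDeadlock {
  public static void main(String[] args) throws Exception {
    ExecutorService openPool = Executors.newFixedThreadPool(3);   // 3 workers, the default described above
    CountDownLatch indexRegionsOpen = new CountDownLatch(1);

    // Three "data region" opens grab all three workers and block on WAL replay
    // to the index regions (modeled by the latch).
    for (int i = 0; i < 3; i++) {
      openPool.submit(() -> {
        try {
          indexRegionsOpen.await();                               // waits forever
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
        }
      });
    }

    // The "index region" open that would release them is stuck in the queue.
    openPool.submit(indexRegionsOpen::countDown);

    System.out.println("submitted; the pool is now deadlocked until killed");
  }
}
{code}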

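And a minimal sketch of the proposal itself, under the assumption that the table descriptor ends up carrying a numeric per-table priority; the HIGH_PRIORITY threshold, pool sizes, and method names below are illustrative, not the committed API. The region server would consult the table's priority when choosing which open-region pool to dispatch to.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Sketch of the proposal: region opens for high-priority tables (e.g. Phoenix
 * index tables) go to a dedicated pool, so they can never get stuck behind
 * data-table opens queued in the regular pool.
 */
public class PriorityRegionOpenDispatcher {
  // Illustrative threshold; the actual constant and attribute names are up to the patch.
  private static final int HIGH_PRIORITY = 200;

  private final ExecutorService openRegionPool = Executors.newFixedThreadPool(3);
  private final ExecutorService priorityOpenRegionPool = Executors.newFixedThreadPool(2);

  /**
   * @param tablePriority the per-table priority the issue proposes carrying in
   *                      the table descriptor (HTD)
   */
  public void submitOpen(int tablePriority, Runnable openRegionTask) {
    if (tablePriority >= HIGH_PRIORITY) {
      priorityOpenRegionPool.submit(openRegionTask);   // e.g. index table regions
    } else {
      openRegionPool.submit(openRegionTask);
    }
  }
}
{code}

With the index table marked high priority, its regions open from the dedicated pool, the data regions' WAL replay to the index regions can then complete, and a full-cluster restart no longer deadlocks.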


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)