Posted to dev@hbase.apache.org by Stack <st...@duboce.net> on 2017/08/15 04:39:19 UTC

Re: [DISCUSS] No regions on Master node in 2.0

(Trying to tie off this thread...)

HBASE-18511 changes the configuration hbase.balancer.tablesOnMaster from a
list of table names the Master may carry (with 'none' meaning no tables on
the Master) to a boolean indicating whether or not the Master carries
tables/regions.

When true, the Master acts like any other regionserver in the cluster,
hosting regions while also fulfilling the master role. When false, the
Master carries no tables (false is the default for hbase-2.0.0).
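
For illustration, a minimal hbase-site.xml sketch of the boolean form
(the property name is from HBASE-18511; the value shown is just an
example, not the 2.0 default):

  <!-- Let the Master host regions like any other regionserver.
       The hbase-2.0.0 default is false (no regions on the Master). -->
  <property>
    <name>hbase.balancer.tablesOnMaster</name>
    <value>true</value>
  </property>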

Another boolean configuration,
hbase.balancer.tablesOnMaster.systemTablesOnly, when set to true, enables
hbase.balancer.tablesOnMaster and restricts the Master to hosting system
tables exclusively (the long-time deploy mode of the master branch and
branch-2 up until HBASE-18511 goes in).
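
Again as a rough sketch, the system-tables-only deploy would be
configured like so (per the description above, this implies
hbase.balancer.tablesOnMaster=true):

  <!-- Restrict the Master to hosting system tables only. -->
  <property>
    <name>hbase.balancer.tablesOnMaster.systemTablesOnly</name>
    <value>true</value>
  </property>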

As part of HBASE-18511, I verified that RPCs are short-circuited when the
region is local to the Master.

The change of hbase.balancer.tablesOnMaster from a String list to a
boolean, and the addition of a simple boolean for system tables on the
Master, were made to constrain what operators can ask for via this master
configuration. Stipulating which tables are bound to the Master server
verges into regionserver grouping territory, a more robust means of
specifying table and server combinations. Operators should use the latter
if they want layouts more exotic than those supplied by the booleans above.
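
For comparison, pinning tables to dedicated servers with regionserver
groups (HBASE-6721) looks roughly like the following shell session. The
group name and hostname below are made up for illustration, and this
assumes the rsgroup coprocessor endpoint and balancer are enabled in
hbase-site.xml:

  hbase> add_rsgroup 'system'
  hbase> move_servers_rsgroup 'system', ['rs1.example.com:16020']
  hbase> move_tables_rsgroup 'system', ['hbase:meta', 'hbase:namespace']
  hbase> balance_rsgroup 'system'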

Thanks,
St.Ack


On Tue, Jun 6, 2017 at 11:20 AM, Enis Söztutar <en...@gmail.com> wrote:

> I still have to review the full AMv2 meta updates path to see whether there
> may still be "split brain" due to the extra RPC to a remote server. But I
> really like the notion of keeping the deployment topology of branch-1 by
> default.
>
> The fact is that 2.0 is already lagging, and minimizing the set of changes
> to get a release out earlier is in the best interest of the community.
>
> Enis
>
> On Tue, Jun 6, 2017 at 10:38 AM, Francis Liu <to...@apache.org> wrote:
>
> > > That doesn't solve the same problem.
> > Agreed, as mentioned regionserver groups only provide user-system
> > region isolation.
> >
> > > That still means that the most important operations are competing
> > > for rpc queue time.
> > Given the previous setup, for meta access contention this should be
> > addressed by higher priority rpc access, no?
> >
> >     On Tuesday, June 6, 2017 9:17 AM, Elliott Clark <ec...@apache.org>
> > wrote:
> >
> >
> >  That doesn't solve the same problem. Dedicating a remote server for the
> > system tables still means that all the master to system tables mutations
> > and reads are done over rpc. That still means that the most important
> > operations are competing for rpc queue time.
> >
> > On Fri, Nov 18, 2016 at 11:37 AM, Francis Liu <to...@ymail.com.invalid>
> > wrote:
> >
> > > Just some extra bits of information:
> > >
> > > Another way to isolate user regions from meta is to create a
> > > regionserver group (HBASE-6721) dedicated to the system tables. This is
> > > what we do at Y!. If the load on meta gets too high (and it does), we
> > > split meta so the load gets spread across more regionservers
> > > (HBASE-11165); this way availability for any client is not affected.
> > > Though agreeing with Stack that something is really broken if high
> > > priority rpcs cannot get through to meta.
> > >
> > > Does single writer to meta refer to the zkless assignment feature? If
> > > so, hasn't that feature been available since 0.98.6 (meta _not_ on
> > > master)? We've been running with it on all our clusters for quite some
> > > time now (with some enhancements, i.e. split meta etc).
> > >
> > > Cheers,
> > > Francis
> > >
> > >    On Wednesday, November 16, 2016 10:47 PM, Stack <st...@duboce.net>
> > > wrote:
> > >
> > >
> > >  On Wed, Nov 16, 2016 at 10:44 PM, Stack <st...@duboce.net> wrote:
> > >
> > > > On Wed, Nov 16, 2016 at 10:57 AM, Gary Helmling <ghelmling@gmail.com>
> > > > wrote:
> > > >
> > > >>
> > > >> Do you folks run the meta-carrying-master form, G?
> > > >
> > > > Pardon me. I missed a paragraph. I see you folks do deploy this form.
> > > St.Ack
> > >
> > > > St.Ack
> > > >
> > > >>
> > > >> > > Is this just because meta had a dedicated server?
> > > >> > >
> > > >> > I'm sure that having dedicated resources for meta helps. But I
> > > >> > don't think that's sufficient. The key is that master writes to
> > > >> > meta are local, and do not have to contend with the user requests
> > > >> > to meta.
> > > >> >
> > > >> > It seems premature to be discussing dropping a working
> > > >> > implementation which eliminates painful parts of distributed
> > > >> > consensus, until we have a complete working alternative to
> > > >> > evaluate. Until then, why are we looking at features that are in
> > > >> > use and work well?
> > > >> >
> > > >> How to move forward here? The Pv2 master is almost done. An ITBLL
> > > >> bakeoff of new Pv2 based assign vs a Master that exclusively hosts
> > > >> hbase:meta?
> > > >>
> > > >> I think that's a necessary test for proving out the new AM
> > > >> implementation. But remember that we are comparing a feature which
> > > >> is actively supporting production workloads with a line of active
> > > >> development. I think there should also be additional testing around
> > > >> situations of high meta load and end-to-end assignment latency.