Posted to commits@hbase.apache.org by bh...@apache.org on 2020/02/18 20:01:05 UTC

[hbase] 12/12: HBASE-23331: Documentation for HBASE-18095

This is an automated email from the ASF dual-hosted git repository.

bharathv pushed a commit to branch HBASE-18095/client-locate-meta-no-zookeeper
in repository https://gitbox.apache.org/repos/asf/hbase.git

commit f88ed9c630436e1e7f39f7a24817c0c0fe9d8fa2
Author: Bharath Vissapragada <bh...@apache.org>
AuthorDate: Mon Feb 10 22:48:47 2020 -0800

    HBASE-23331: Documentation for HBASE-18095
---
 ...95-Zookeeper-less-client-connection-design.adoc | 112 +++++++++++++++++++++
 src/main/asciidoc/_chapters/architecture.adoc      |  82 ++++++++++++++-
 src/main/asciidoc/_chapters/configuration.adoc     |  61 +++++++----
 src/main/asciidoc/_chapters/troubleshooting.adoc   |  11 ++
 4 files changed, 247 insertions(+), 19 deletions(-)

diff --git a/dev-support/design-docs/HBASE-18095-Zookeeper-less-client-connection-design.adoc b/dev-support/design-docs/HBASE-18095-Zookeeper-less-client-connection-design.adoc
new file mode 100644
index 0000000..ffadfcd
--- /dev/null
+++ b/dev-support/design-docs/HBASE-18095-Zookeeper-less-client-connection-design.adoc
@@ -0,0 +1,112 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+
+= HBASE-18095: Zookeeper-less client connection
+
+
+== Context
+Currently, Zookeeper (ZK) lies in the critical code path of connection init. To set up a connection to a given HBase cluster, the client relies on the ZooKeeper quorum configured in the client hbase-site.xml and attempts to fetch the following information.
+
+* ClusterID
+* Active HMaster server name
+* Meta table region locations
+
+ZK is deemed the source of truth since other processes that maintain the cluster state persist the changes to this data into ZK. So it is an obvious place for clients to look to fetch the latest cluster state. However this comes with its own set of problems, some of which are listed below.
+
+* Timeouts and retry logic for ZK clients are managed separately from HBase configuration. This is more administration overhead for end users (example: multiple timeouts have to be configured for different types of RPCs: client->master, client->ZK, etc.). This prevents HBase from having a single holistic timeout configuration that applies to all RPCs.
+* If there is any issue with ZK (like connection overload / timeouts), the entire HBase service appears frozen and there is little visibility into it.
+* Exposing ZooKeeper to all the clients can be risky since it can potentially be abused for DDoS.
+* The embedded ZK client is bundled with the hbase client jar as a dependency (along with its log spew :-]). The embedded client also needs separate JAAS configuration management for secure clusters.
+
+== Goal
+
+We would like to remove this ZK dependency in the HBase client and instead have the clients query a preconfigured list of active and standby master host:port addresses. This brings all the client interactions with HBase under the same RPC framework that is holistically controlled by a set of hbase client configuration parameters. This also alleviates the pressure on the ZK cluster, which is critical from an operational standpoint as some core processes like replication, log splitting, master  [...]
+
+== Proposed design
+
+As mentioned above, clients now get a preconfigured list of active and standby master addresses that they can query to fetch the meta information needed for connection setup. Something like,
+
+[source, xml]
+-----
+<property>
+  <name>hbase.masters</name>
+  <value>master1:16000,master2:16001,master3:16000</value>
+</property>
+-----
+
+Clients should be robust enough to handle configuration changes to this parameter since master hosts can change (added/removed) over time and not every client can afford a restart.
+
+One thing to note here is that having masters in the init/read/write path for clients means that
+
+* At least one active/standby master is now needed for connection creation. Earlier this was not a requirement because the clients looked up the cluster ID from the relevant znode and initialized successfully. So, technically, a master did not need to be around to create a connection to the cluster.
+* Masters are now an active part of the read/write path in the client life cycle. If the client cache of meta locations/active master is purged/stale, at least one master (active/stand-by) serving the latest information should exist. Earlier this information was served by ZK; clients looked up the latest cluster ID/active master/meta locations from the relevant znodes and got going.
+* There is a higher connection load on the masters than before.
+* More state synchronization traffic (see below)
+
+End users should factor these requirements into their cluster deployment if they intend to use this feature.
+
+=== Server side changes
+
+Now that the master end points are considered the source of truth for clients, they should track the latest meta information for the cluster ID, active master and meta table locations. Since the clients can connect to any master end point (details below), all the masters (active/standby) now track all the relevant meta information. The idea is to implement an in-memory cache local to all the masters that keeps up with changes to this metadata. This is tracked in the following jiras.
+
+* Cluster ID tracking - https://issues.apache.org/jira/browse/HBASE-23257[HBASE-23257]
+* Active master tracking - https://issues.apache.org/jira/browse/HBASE-23275[HBASE-23275]
+* Meta location tracking - https://issues.apache.org/jira/browse/HBASE-23281[HBASE-23281]
+
+Masters and region servers (all cluster internal processes) still use ZK for cluster state coordination. Masters additionally cache this information in-memory and rely on ZK listeners and watchers to track changes to it. New RPC end points are added to serve this information to the clients. Having an in-memory cache speeds up client lookups, since the masters do not have to fetch from ZK synchronously for every client lookup RPC.
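+
+As an illustration of the idea only (this is not the actual HBase master code, which uses its own ZK tracker classes), the following sketch shows a watcher-backed in-memory cache built on the plain ZooKeeper client API; the class name and znode path are illustrative.
+
+[source,java]
+----
+import org.apache.zookeeper.KeeperException;
+import org.apache.zookeeper.WatchedEvent;
+import org.apache.zookeeper.Watcher;
+import org.apache.zookeeper.ZooKeeper;
+
+/** Keeps an in-memory copy of a znode's content and refreshes it when ZK fires a watch. */
+public class CachedZNode implements Watcher {
+  private final ZooKeeper zk;
+  private final String path;        // e.g. the meta location znode (illustrative)
+  private volatile byte[] cached;   // served to client RPCs without a synchronous ZK read
+
+  public CachedZNode(ZooKeeper zk, String path) throws KeeperException, InterruptedException {
+    this.zk = zk;
+    this.path = path;
+    refresh();
+  }
+
+  private void refresh() throws KeeperException, InterruptedException {
+    // Reading with a watcher re-registers the watch, so the next change triggers process().
+    cached = zk.getData(path, this, null);
+  }
+
+  /** Cheap in-memory lookup used when answering client registry RPCs. */
+  public byte[] get() {
+    return cached;
+  }
+
+  @Override
+  public void process(WatchedEvent event) {
+    if (event.getType() == Event.EventType.NodeDataChanged) {
+      try {
+        refresh();
+      } catch (KeeperException | InterruptedException e) {
+        // A real implementation would retry/resync; omitted for brevity.
+      }
+    }
+  }
+}
+----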
+
+=== Client side changes
+
+The proposal is to implement a new AsyncRegistry (https://issues.apache.org/jira/browse/HBASE-23305[HBASE-23305]) based on the master RPC end points discussed above. A few interesting client side optimizations follow.
+
+* Each client randomly picks a master from the list of host:ports passed in the configuration. This avoids hotspotting on a single master.
+* Clients can also do hedged lookups, meaning a single RPC to fetch meta information (say, the active master) can be sent to multiple masters, and whichever returns first is passed along to the caller (see the sketch after this list). This can be done under a config flag since it comes with additional RPC load. The default behavior is to randomly probe masters until the list is exhausted.
+* Callers are expected to cache the meta information in higher levels and only probe again if the cached information is stale (which is quite possible).
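+
+The hedging idea is sketched below. This is not the actual registry implementation; `fetchFromMaster` stands in for a hypothetical per-master async RPC.
+
+[source,java]
+----
+import java.util.List;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.Function;
+
+public final class HedgedLookupSketch {
+  /**
+   * Sends the same async request to every master in the batch. The first successful
+   * response completes the result; later responses are ignored. If every attempt
+   * fails, the result fails with the last error seen.
+   */
+  static <T> CompletableFuture<T> hedge(List<String> masters,
+      Function<String, CompletableFuture<T>> fetchFromMaster) {
+    CompletableFuture<T> result = new CompletableFuture<>();
+    AtomicInteger failures = new AtomicInteger();
+    for (String master : masters) {
+      fetchFromMaster.apply(master).whenComplete((value, err) -> {
+        if (err == null) {
+          result.complete(value);                 // first winner wins, the rest are no-ops
+        } else if (failures.incrementAndGet() == masters.size()) {
+          result.completeExceptionally(err);      // all hedged attempts failed
+        }
+      });
+    }
+    return result;
+  }
+}
+----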
+
+One user noted that there are some clients that rely on cluster ID for delegation token based auth. So there is a proposal to expose it on an auth-less end point for delegation auth support (https://issues.apache.org/jira/browse/HBASE-23330[HBASE-23330]).
+
+== Compatibility matrix / Upgrade scenarios
+
+Since rolling upgrades are quite common in HBase deployments, we need to think of a proper upgrade path for customers to deploy this feature.  Here are some scenarios with old/new server and client combinations during upgrades.
+
+* old client -> old server works (duh!)
+* old client -> new server works (backwards compatible since the patch does not change any existing RPC signatures)
+* new client -> old server - does not work; the channel is closed with an error because the end points the client looks up are not implemented on the server side.
+* new client -> new server works (duh!)
+
+Given this compatibility matrix,
+
+* Client-server compatibility - clients and servers are not allowed to upgrade out of sync. Servers should be upgraded first before upgrading the clients.
+* Server-server compatibility - unaffected.
+* File format compatibility - unaffected.
+* Client API compatibility - unaffected.
+* Client binary compatibility - unaffected. (only configuration changes needed)
+* Server side limited API compatibility - unaffected.
+* Dependency compatibility - unaffected.
+
+== Testing plan
+
+* Unit tests should be added to all the patches covering most critical code paths
+* Mini clusters tests simulating real world scenarios (like stale meta/master etc) should be added.
+* Consider making this the default registry implementation and let the code bake in for a while before release.
+* Deploy the bits on a real distributed cluster and test a long running application that is heavy on these RPCs and inject faults.
+
+
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 60f5512..e4473fc 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -260,6 +260,73 @@ For region name, we only accept `byte[]` as the parameter type and it may be a f
 
 Information on non-Java clients and custom protocols is covered in <<external_apis>>
 
+[[client.masterregistry]]
+=== Master registry (new as of release 3.0.0)
+
+The client internally works with a _connection registry_ to fetch the metadata needed by connections.
+This connection registry implementation is responsible for fetching the following metadata.
+
+* Active master address
+* Current meta region(s) locations
+* Cluster ID (unique to this cluster)
+
+This information is needed as a part of various client operations like connection set up, scans,
+gets, etc. Up until the 2.x.y releases, the default connection registry was based on ZooKeeper as
+the source of truth and the clients fetched the metadata from ZooKeeper znodes. As of release 3.0.0,
+the default implementation for the connection registry has been switched to a master based
+implementation. With this change, the clients now fetch the required metadata from master RPC end
+points directly. This change was made for the following reasons.
+
+* Reduce load on ZooKeeper since that is critical for cluster operation.
+* Holistic client timeout and retry configurations since the new registry brings all the client
+operations under the HBase RPC framework.
+* Remove the ZooKeeper client dependency from the HBase client library.
+
+This means that
+
+* At least one active or standby master is needed for cluster connection setup. Refer to
+<<master.runtime>> for more details.
+* The master can be in the critical path of read/write operations, especially if the client
+metadata cache is empty or stale.
+* There is a higher connection load on the masters than before since the clients talk directly to
+the HMasters instead of the ZooKeeper ensemble.
+
+To reduce hot-spotting on a single master, all the masters (active & stand-by) expose the needed
+service to fetch the connection metadata. This lets the client connect to any master (not just the
+active one).
+
+==== RPC hedging
+
+This feature also implements a new RPC channel that can hedge requests to multiple masters. This
+lets the client make the same request to multiple servers, and whichever responds first is returned
+to the client while the other in-flight requests are canceled. This improves performance,
+especially when a subset of servers is under load. The hedging fan out size, meaning the number of
+requests that are hedged in a single attempt, is configurable using the configuration key
+_hbase.rpc.hedged.fanout_ in the client configuration. It defaults to 2. With this default, the
+RPCs are tried in batches of 2. The hedging policy is still primitive and does not adapt to any
+sort of live RPC performance metrics.
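+
+For example, a client could bump the fan out in its configuration programmatically (a minimal
+sketch; the value is illustrative):
+
+[source,java]
+----
+Configuration conf = HBaseConfiguration.create();
+// Hedge registry lookups across batches of 4 masters instead of the default 2.
+conf.setInt("hbase.rpc.hedged.fanout", 4);
+----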
+
+==== Additional Notes
+
+* Clients hedge the requests in a randomized order to avoid hot-spotting a single server.
+* Cluster internal connections (master<->regionservers) still use ZooKeeper based connection
+registry.
+* Cluster internal state is still tracked in ZooKeeper, hence ZK availability requirements are the
+same as before.
+* Inter cluster replication still uses ZooKeeper based connection registry to simplify configuration
+management.
+
+For more implementation details, please refer to the https://github.com/apache/hbase/tree/master/dev-support/design-docs[design doc] and
+https://issues.apache.org/jira/browse/HBASE-18095[HBASE-18095].
+
+'''
+NOTE: (Advanced) In case of any issues with the master based registry, use the following
+configuration to fall back to the ZooKeeper based connection registry implementation.
+
+[source, xml]
+----
+<property>
+  <name>hbase.client.registry.impl</name>
+  <value>org.apache.hadoop.hbase.client.ZKConnectionRegistry</value>
+</property>
+----
+
 [[client.filter]]
 == Client Request Filters
 
@@ -577,11 +644,24 @@ If the active Master loses its lease in ZooKeeper (or the Master shuts down), th
 [[master.runtime]]
 === Runtime Impact
 
-A common dist-list question involves what happens to an HBase cluster when the Master goes down.
+A common dist-list question involves what happens to an HBase cluster when the Master goes down. This information has changed starting with 3.0.0.
+
+==== Up until releases 2.x.y
 Because the HBase client talks directly to the RegionServers, the cluster can still function in a "steady state". Additionally, per <<arch.catalog>>, `hbase:meta` exists as an HBase table and is not resident in the Master.
 However, the Master controls critical functions such as RegionServer failover and completing region splits.
 So while the cluster can still run for a short time without the Master, the Master should be restarted as soon as possible.
 
+==== Starting release 3.0.0
+As mentioned in section <<client.masterregistry>>, the default connection registry for clients is now based on master RPC end points. Hence the requirements for
+master uptime are even tighter starting with this release.
+
+- At least one active or standby master is needed for connection setup, unlike before, when all that clients needed was a ZooKeeper ensemble.
+- The master is now in the critical path for read/write operations. For example, if the meta region moves to a different region server, clients
+need the master to fetch the new locations. Earlier this was done by fetching this information directly from ZooKeeper.
+- Masters will now have a higher connection load than before. So, the server side configuration might need adjustment depending on the load.
+
+Overall, when this feature is enabled, the master uptime requirements are higher in order for client operations to go through.
+
 [[master.api]]
 === Interface
 
diff --git a/src/main/asciidoc/_chapters/configuration.adoc b/src/main/asciidoc/_chapters/configuration.adoc
index 3e91dcb..ddaf2044 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -563,38 +563,63 @@ Changes here will require a cluster restart for HBase to notice the change thoug
 
 If you are running HBase in standalone mode, you don't need to configure anything for your client to work provided that they are all on the same machine.
 
-Since the HBase Master may move around, clients bootstrap by looking to ZooKeeper for current critical locations.
-ZooKeeper is where all these values are kept.
-Thus clients require the location of the ZooKeeper ensemble before they can do anything else.
-Usually this ensemble location is kept out in the _hbase-site.xml_ and is picked up by the client from the `CLASSPATH`.
+Starting with release 3.0.0, the default connection registry has been switched to a master based implementation. Refer to <<client.masterregistry>> for more details about
+what a connection registry is and the implications of this change. Depending on your HBase version, the following is the expected minimal client configuration.
 
-If you are configuring an IDE to run an HBase client, you should include the _conf/_ directory on your classpath so _hbase-site.xml_ settings can be found (or add _src/test/resources_ to pick up the hbase-site.xml used by tests).
+==== Up until 2.x.y releases
+In the 2.x.y releases, the default connection registry was based on ZooKeeper as the source of truth. This means that the clients always looked up ZooKeeper znodes to fetch
+the required metadata. For example, if an active master crashed and a new master was elected, clients looked up the master znode to fetch
+the active master address (similarly for meta locations). This meant that the clients needed to have access to ZooKeeper and needed to know
+the ZooKeeper ensemble information before they could do anything. This can be configured in the client configuration xml as follows:
 
-For Java applications using Maven, including the hbase-shaded-client module is the recommended dependency when connecting to a cluster:
 [source,xml]
 ----
-<dependency>
-  <groupId>org.apache.hbase</groupId>
-  <artifactId>hbase-shaded-client</artifactId>
-  <version>2.0.0</version>
-</dependency>
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
+<configuration>
+  <property>
+    <name>hbase.zookeeper.quorum</name>
+    <value>example1,example2,example3</value>
+    <description> Zookeeper ensemble information</description>
+  </property>
+</configuration>
 ----
 
-A basic example _hbase-site.xml_ for client only may look as follows:
+==== Starting 3.0.0 release
+
+The default implementation was switched to a master based connection registry. With this implementation, clients always contact the active or
+stand-by master RPC end points to fetch the connection registry information. This means that the clients should have access to the list of active and standby master
+end points before they can do anything. This can be configured in the client configuration xml as follows:
+
 [source,xml]
 ----
 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
 <configuration>
   <property>
-    <name>hbase.zookeeper.quorum</name>
+    <name>hbase.masters</name>
     <value>example1,example2,example3</value>
-    <description>The directory shared by region servers.
-    </description>
+    <description>List of master rpc end points for the hbase cluster.</description>
   </property>
 </configuration>
 ----
 
+The configuration value for _hbase.masters_ is a comma separated list of _host:port_ values. If no port value is specified, the default of _16000_ is assumed (for example, in _master1:16000,master2_, the second entry resolves to _master2:16000_).
+
+Usually this configuration is kept in the _hbase-site.xml_ and is picked up by the client from the `CLASSPATH`.
+
+If you are configuring an IDE to run an HBase client, you should include the _conf/_ directory on your classpath so _hbase-site.xml_ settings can be found (or add _src/test/resources_ to pick up the hbase-site.xml used by tests).
+
+For Java applications using Maven, including the hbase-shaded-client module is the recommended dependency when connecting to a cluster:
+[source,xml]
+----
+<dependency>
+  <groupId>org.apache.hbase</groupId>
+  <artifactId>hbase-shaded-client</artifactId>
+  <version>2.0.0</version>
+</dependency>
+----
+
 [[java.client.config]]
 ==== Java client configuration
 
@@ -606,11 +631,11 @@ For example, to set the ZooKeeper ensemble for the cluster programmatically do a
 [source,java]
 ----
 Configuration config = HBaseConfiguration.create();
-config.set("hbase.zookeeper.quorum", "localhost");  // Here we are running zookeeper locally
+config.set("hbase.zookeeper.quorum", "localhost");  // Until 2.x.y versions
+// ---- or ----
+config.set("hbase.masters", "localhost:1234"); // Starting 3.0.0 version
 ----
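+
+This populated `Configuration` instance can then be used to open a connection, for example (the
+table name below is illustrative; classes are from the standard HBase client API):
+
+[source,java]
+----
+// Assumes the surrounding method declares `throws IOException`.
+try (Connection connection = ConnectionFactory.createConnection(config);
+     Table table = connection.getTable(TableName.valueOf("test_table"))) {
+  // Issue requests against the table; the registry configured above is used under the hood.
+  Result result = table.get(new Get(Bytes.toBytes("row1")));
+}
+----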
 
-If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in a comma-separated list (just as in the _hbase-site.xml_ file). This populated `Configuration` instance can then be passed to an link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table], and so on.
-
 [[config_timeouts]]
 === Timeout settings
 
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index 3739ecd..032d2c1 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -724,6 +724,17 @@ Insure the JCE jars are on the classpath on both server and client systems.
 You may also need to download the link:http://www.oracle.com/technetwork/java/javase/downloads/jce-6-download-429243.html[unlimited strength JCE policy files].
 Uncompress and extract the downloaded file, and install the policy jars into _<java-home>/lib/security_.
 
+[[trouble.client.masterregistry]]
+=== Troubleshooting master registry issues
+
+* For connectivity issues, usually an exception like "MasterRegistryFetchException: Exception making rpc to masters..." is logged in the client logs. The logging includes the
+list of master end points that were attempted by the client. The bottom part of the stack trace should include the underlying reason. If you suspect connectivity
+issues (ConnectionRefused?), make sure the master end points are accessible from the client.
+* If there is a suspicion of higher load on the masters due to hedging of RPCs, it can be controlled by either reducing the hedging fan out (via _hbase.rpc.hedged.fanout_) or
+by restricting the set of masters that clients can access for master registry purposes (via _hbase.masters_), as sketched below.
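+
+A minimal client-side sketch of both knobs (host names and values are illustrative):
+
+[source,java]
+----
+Configuration conf = HBaseConfiguration.create();
+// Disable batched hedging by probing one master at a time.
+conf.setInt("hbase.rpc.hedged.fanout", 1);
+// Point registry lookups at a restricted subset of the masters.
+conf.set("hbase.masters", "master1.example.com:16000,master2.example.com:16000");
+----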
+
+Refer to <<client.masterregistry>> and <<client_dependencies>> for more details.
+
 [[trouble.mapreduce]]
 == MapReduce