Posted to commits@hbase.apache.org by el...@apache.org on 2019/06/12 20:56:28 UTC

[hbase] branch branch-2.1 updated: HBASE-22566 Update the 2.x upgrade chapter to include default compaction throughput limits

This is an automated email from the ASF dual-hosted git repository.

elserj pushed a commit to branch branch-2.1
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-2.1 by this push:
     new 1ccf37d  HBASE-22566 Update the 2.x upgrade chapter to include default compaction throughput limits
1ccf37d is described below

commit 1ccf37d7cbc78e4fa79991269691590d96baaccc
Author: Josh Elser <el...@apache.org>
AuthorDate: Wed Jun 12 11:00:07 2019 -0400

    HBASE-22566 Update the 2.x upgrade chapter to include default compaction throughput limits
    
    Signed-off-by: Sean Busbey <bu...@apache.org>
---
 src/main/asciidoc/_chapters/upgrading.adoc | 41 +++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index 489c36a..21ed65a 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -615,18 +615,35 @@ Performance is also an area that is now under active review so look forward to
 improvement in coming releases (See
 link:https://issues.apache.org/jira/browse/HBASE-20188[HBASE-20188 TESTING Performance]).
 
-[[upgrade2.0.it.kerberos]]
-.Integration Tests and Kerberos
-Integration Tests (`IntegrationTests*`) used to rely on the Kerberos credential cache
-for authentication against secured clusters. This used to lead to tests failing due
-to authentication failures when the tickets in the credential cache expired.
-As of hbase-2.0.0 (and hbase-1.3.0+), the integration test clients will make use
-of the configuration properties `hbase.client.keytab.file` and
-`hbase.client.kerberos.principal`. They are required. The clients will perform a
-login from the configured keytab file and automatically refresh the credentials
-in the background for the process lifetime (See
-link:https://issues.apache.org/jira/browse/HBASE-16231[HBASE-16231]).
-
+[[upgrade2.0.compaction.throughput.limit]]
+.Default Compaction Throughput
+HBase 2.x ships with default limits on the speed at which compactions can execute. This
+limit is defined per RegionServer. In previous versions of HBase, compactions ran with
+no throughput limit by default. Limiting compaction throughput should yield more
+stable operation from RegionServers.
+
+Note that this limit is _per RegionServer_, not _per compaction_.
+
+The throughput limit is defined as a range of bytes written per second, and is
+allowed to vary within the given lower and upper bound. RegionServers observe the
+current throughput of a compaction and apply a linear formula to adjust the allowed
+throughput, within the lower and upper bound, with respect to external pressure.
+For compactions, external pressure is defined as the number of store files with
+respect to the maximum number of allowed store files. The more store files, the
+higher the compaction pressure.
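The linear adjustment described in the paragraph above can be sketched roughly as follows. This is an illustrative simplification, not HBase's actual controller code; the pressure formula here is an assumption for demonstration purposes:

```python
# Sketch of the pressure-based linear throughput adjustment described above.
# NOT the actual HBase implementation; the pressure calculation is simplified.

LOWER_BOUND = 10 * 1024 * 1024  # hbase.hstore.compaction.throughput.lower.bound default
UPPER_BOUND = 20 * 1024 * 1024  # hbase.hstore.compaction.throughput.higher.bound default

def allowed_throughput(store_files: int, max_store_files: int) -> float:
    """Scale the allowed compaction throughput (bytes/sec) linearly with
    external pressure: the ratio of the current number of store files to
    the maximum number of allowed store files."""
    pressure = min(1.0, store_files / max_store_files)
    return LOWER_BOUND + (UPPER_BOUND - LOWER_BOUND) * pressure
```

Under this sketch, a RegionServer with no store-file pressure allows the lower bound (10 MB/s), while one at maximum pressure allows the upper bound (20 MB/s).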
+
+Configuration of this throughput is governed by the following properties.
+
+- The lower bound is defined by `hbase.hstore.compaction.throughput.lower.bound`
+  and defaults to 10 MB/s (`10485760`).
+- The upper bound is defined by `hbase.hstore.compaction.throughput.higher.bound`
+  and defaults to 20 MB/s (`20971520`).
+
+To restore the unlimited compaction throughput of earlier versions of HBase, set
+the following property to the implementation that applies no limits to
+compactions:
+
+`hbase.regionserver.throughput.controller=org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController`
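For reference, the equivalent `hbase-site.xml` entry for the property shown above would look like this (a sketch, using the property name and class exactly as given):

```xml
<property>
  <name>hbase.regionserver.throughput.controller</name>
  <value>org.apache.hadoop.hbase.regionserver.throttle.NoLimitThroughputController</value>
</property>
```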
 
 ////
 This would be a good place to link to an appendix on migrating applications