Posted to commits@hbase.apache.org by zh...@apache.org on 2021/07/20 02:45:54 UTC

svn commit: r48894 [3/5] - /release/hbase/3.0.0-alpha-1/

Added: release/hbase/3.0.0-alpha-1/RELEASENOTES.md
==============================================================================
--- release/hbase/3.0.0-alpha-1/RELEASENOTES.md (added)
+++ release/hbase/3.0.0-alpha-1/RELEASENOTES.md Tue Jul 20 02:45:54 2021
@@ -0,0 +1,5814 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# HBASE  3.0.0-alpha-1 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, important issues, features, and major improvements.
+
+
+---
+
+* [HBASE-22923](https://issues.apache.org/jira/browse/HBASE-22923) | *Major* | **hbase:meta is assigned to localhost when we downgrade the hbase version**
+
+Introduced new config: hbase.min.version.move.system.tables
+
+When the operator uses this configuration option, any version between
+the current cluster version and the value of "hbase.min.version.move.system.tables"
+does not trigger any auto-region movement. Auto-region movement here
+refers to auto-migration of system table regions to newer server versions.
+It is assumed that the configured range of versions does not require special
+handling of moving system table regions to higher versioned RegionServer.
+This auto-migration is done by AssignmentManager#checkIfShouldMoveSystemRegionAsync().
+Example: Let's assume the cluster is on version 1.4.0 and we have
+set "hbase.min.version.move.system.tables" as "2.0.0". Now if we upgrade
+one RegionServer on 1.4.0 cluster to 1.6.0 (\< 2.0.0), then AssignmentManager will
+not move hbase:meta, hbase:namespace and other system table regions
+to newly brought up RegionServer 1.6.0 as part of auto-migration.
+However, if we upgrade one RegionServer on 1.4.0 cluster to 2.2.0 (\> 2.0.0),
+then AssignmentManager will move all system table regions to newly brought
+up RegionServer 2.2.0 as part of auto-migration done by
+AssignmentManager#checkIfShouldMoveSystemRegionAsync().
+
+
+---
+
+* [HBASE-25902](https://issues.apache.org/jira/browse/HBASE-25902) | *Critical* | **Add missing CFs in meta during HBase 1 to 2.3+ Upgrade**
+
+While upgrading a cluster from 1.x to 2.3+ versions, after the active master is done setting its status as 'Initialized', it attempts to add the 'table' and 'repl\_barrier' CFs in meta. Once the CFs are added successfully, the master is aborted with PleaseRestartMasterException because it has missed certain initialization events (e.g. ClusterSchemaService is not initialized and tableStateManager fails to migrate table states from ZK to meta due to the missing CFs). Subsequent active master initialization is expected to be smooth.
+In the presence of multiple masters, when one of them becomes active for the first time after upgrading to HBase 2.3+, it is aborted after fixing the CFs in meta and one of the other backup masters will take over and become active soon. Hence, overall this is expected to be a smooth upgrade if backup masters are configured. If not, the operator is expected to restart the same master again manually.
+
+
+---
+
+* [HBASE-26029](https://issues.apache.org/jira/browse/HBASE-26029) | *Critical* | **It is not reliable to use nodeDeleted event to track region server's death**
+
+Introduce a new step in ServerCrashProcedure to move the replication queues of the dead region server to other live region servers, as this is the only reliable way to get the death event of a region server.
+The old ReplicationTracker related code has all been purged as it is not used any more.
+
+
+---
+
+* [HBASE-25877](https://issues.apache.org/jira/browse/HBASE-25877) | *Major* | **Add access  check for compactionSwitch**
+
+Now calling RSRpcService.compactionSwitch, i.e. Admin.compactionSwitch on the client side, requires ADMIN permission.
+This is an incompatible change, but the old behavior was also a bug, as we should not allow arbitrary users to disable compaction on a regionserver, so we apply this fix to all active branches.
+
+
+---
+
+* [HBASE-25993](https://issues.apache.org/jira/browse/HBASE-25993) | *Major* | **Make excluded SSL cipher suites configurable for all Web UIs**
+
+Add "ssl.server.exclude.cipher.list" configuration to excluded cipher suites for the http server started by the InfoServer.
+
+
+---
+
+* [HBASE-25920](https://issues.apache.org/jira/browse/HBASE-25920) | *Major* | **Support Hadoop 3.3.1**
+
+Fixes to make unit tests pass and to make it so an HBase built from branch-2 against a Hadoop 3.3.1 RC can run on a small Hadoop 3.3.1 RC cluster.
+
+
+---
+
+* [HBASE-25969](https://issues.apache.org/jira/browse/HBASE-25969) | *Major* | **Cleanup netty-all transitive includes**
+
+We have an (old) netty-all in our produced artifacts. It is transitively included from hadoop. It is needed by MiniMRCluster referenced from a few MR tests in hbase. This commit adds netty-all excludes everywhere except where tests would fail unless the transitive dependency is allowed through. TODO: move MR and/or MR tests out of hbase core.
+
+
+---
+
+* [HBASE-22708](https://issues.apache.org/jira/browse/HBASE-22708) | *Major* | **Remove the deprecated methods in Hbck interface**
+
+Removed the method 'scheduleServerCrashProcedure' from the Hbck interface in the hbase-client module. This method had been deprecated since 2.2.1 and was declared to be removed in 3.0.0.
+
+
+---
+
+* [HBASE-25963](https://issues.apache.org/jira/browse/HBASE-25963) | *Major* | **HBaseCluster should be marked as IA.Public**
+
+Changed HBaseCluster to IA.Public as its subclass MiniHBaseCluster is IA.Public.
+
+
+---
+
+* [HBASE-25649](https://issues.apache.org/jira/browse/HBASE-25649) | *Major* | **Complete the work on moving all the balancer related classes to hbase-balancer module**
+
+Introduced a ClusterInfoProvider as a bridge for LoadBalancer implementations to get cluster information and operate on the cluster, so a LoadBalancer implementation does not need to depend on HMaster/MasterServices directly. This lets us move most LoadBalancer related code to the hbase-balancer module. MasterClusterInfoProvider is the actual implementation class for ClusterInfoProvider.
+
+Moved most unit tests for load balancer to hbase-balancer module, unless it requires starting a mini cluster.
+
+RSGroupBasedLoadBalancer and related code are still in the hbase-server module, as we still have lots of migration and compatibility code, which ties it deeply to HMaster/MasterServices. We could try to move it again in 4.0.0, once the migration and compatibility code has been purged.
+
+
+---
+
+* [HBASE-25841](https://issues.apache.org/jira/browse/HBASE-25841) | *Minor* | **Add basic jshell support**
+
+This change adds a new \`hbase jshell\` command-line interface. It launches an interactive JShell session with HBase on the classpath, as well as the client package already imported.
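+
+For illustration, statements one might type at the jshell prompt could look like the following (a minimal sketch; it assumes a running cluster, a valid hbase-site.xml on the classpath, and that any classes outside the pre-imported client package are imported manually):
+
+\`\`\`
+Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
+conn.getAdmin().listTableNames();
+conn.close();
+\`\`\`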
+
+
+---
+
+* [HBASE-25894](https://issues.apache.org/jira/browse/HBASE-25894) | *Major* | **Improve the performance for region load and region count related cost functions**
+
+In CostFromRegionLoadFunction, we now only recompute the cost for a given region server in the regionMoved function, instead of computing all the costs every time.
+Introduced a DoubleArrayCost for better abstraction, and we also compute the final cost only on demand, as the computation is a bit expensive.
+
+
+---
+
+* [HBASE-25869](https://issues.apache.org/jira/browse/HBASE-25869) | *Major* | **WAL value compression**
+
+WAL storage can be expensive, especially if the cell values represented in the edits are large, consisting of blobs or significant lengths of text. Such WALs might need to be kept around for a fairly long time to satisfy replication constraints on a space limited (or space-contended) filesystem.
+
+Enable WAL compression and, with this feature, WAL value compression, to save space in exchange for slightly higher WAL append latencies. The degree of performance impact will depend on the compression algorithm selection.  SNAPPY or ZSTD are recommended algorithms, if native codec support is available. SNAPPY may even provide an overall improvement in WAL commit latency, so is the best choice. GZ can be a reasonable fallback where native codec support is not available.
+
+To enable WAL compression, value compression, and select the desired algorithm, edit your site configuration like so:
+
+\<!-- to enable compression --\>
+\<property\>
+    \<name\>hbase.regionserver.wal.enablecompression\</name\>
+    \<value\>true\</value\>
+\</property\>
+
+\<!-- to enable value compression --\>
+\<property\>
+    \<name\>hbase.regionserver.wal.value.enablecompression\</name\>
+    \<value\>true\</value\>
+\</property\>
+
+\<!-- choose the value compression algorithm --\>
+\<property\>
+    \<name\>hbase.regionserver.wal.value.compression.type\</name\>
+    \<value\>snappy\</value\>
+\</property\>
+
+
+---
+
+* [HBASE-25682](https://issues.apache.org/jira/browse/HBASE-25682) | *Major* | **Add a new command to update the configuration of all RSs in a RSGroup**
+
+Added an updateConfiguration(String groupName) Admin interface method and an update\_rsgroup\_config command to the HBase shell to reload a subset of configuration on all servers in the rsgroup.
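+
+A minimal Java sketch of the new Admin call (the connection setup and the group name "group1" are assumptions for illustration):
+
+\`\`\`
+try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
+     Admin admin = conn.getAdmin()) {
+  // reloads the reloadable subset of configuration on every server in the rsgroup
+  admin.updateConfiguration("group1");
+}
+\`\`\`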
+
+
+---
+
+* [HBASE-25032](https://issues.apache.org/jira/browse/HBASE-25032) | *Major* | **Do not assign regions to region server which has not called regionServerReport yet**
+
+<!-- markdown -->
+
+After this change a region server can only accept regions (as seen by the master) after its first report to the master has been sent successfully. Prior to this change there could be cases where the region server finished calling regionServerStartup but was actually stuck during initialization due to issues like misconfiguration; the master would try to assign regions and they would get stuck because the region server was in a weird state and not ready to serve them.
+
+
+---
+
+* [HBASE-25826](https://issues.apache.org/jira/browse/HBASE-25826) | *Major* | **Revisit the synchronization of balancer implementation**
+
+Narrowed down the public facing API for LoadBalancer by removing the balanceTable and setConf methods.
+Redesigned the initialization sequence to simplify the initialization code. Now all the setters are just setters; all the initialization work is moved to the initialize method.
+Renamed setClusterMetrics to updateClusterMetrics, as it will be called periodically while the other setters will only be called once, before initialization.
+Added javadoc to the LoadBalancer class to mention how to do synchronization in implementation classes.
+
+
+---
+
+* [HBASE-25834](https://issues.apache.org/jira/browse/HBASE-25834) | *Major* | **Remove balanceTable method from LoadBalancer interface**
+
+Removed the balanceTable method from the LoadBalancer interface as we never call it outside the balancer implementation.
+Marked the balanceTable method as protected in BaseLoadBalancer.
+Marked the balanceCluster method as final in BaseLoadBalancer; implementation classes should not override it anymore, implementing the balanceTable method is enough.
+
+
+---
+
+* [HBASE-22120](https://issues.apache.org/jira/browse/HBASE-22120) | *Major* | **Replace HTrace with OpenTelemetry**
+
+In this issue we change our tracing system from HTrace to OpenTelemetry.
+The HTrace dependencies are banned (transitive dependencies are still allowed as hadoop still depends on them), and the imports of htrace related classes are also banned.
+We add OpenTelemetry support for our RPC system, which means all the rpc methods will be traced on both the client side and the server side.
+Most methods in the Table interface are also traced, except scan and coprocessor related methods. As the scan implementation is now always 'async prefetch', we haven't found a suitable way to represent the relationship between the foreground and background spans yet.
+On the server side, for the same reason, we only use a span to record the time of the WAL sync operation, without tracing into the background sync thread.
+And we do not trace the next method of RegionScanner, as a scan rpc call may lead to thousands of RegionScanner.next calls, which could slow down the rpc call even when tracing is disabled.
+On how to enable tracing, please read the Tracing section in our refguide.
+https://hbase.apache.org/book.html#tracing
+
+
+---
+
+* [HBASE-25756](https://issues.apache.org/jira/browse/HBASE-25756) | *Minor* | **Support alternate compression for major and minor compactions**
+
+It is now possible to specify alternate compression algorithms for major or minor compactions, via new ColumnFamilyBuilder or HColumnDescriptor methods {get,set}{Major,Minor}CompressionType(), or shell schema attributes COMPRESSION\_COMPACT\_MAJOR or COMPRESSION\_COMPACT\_MINOR. This can be used to select a fast algorithm for frequent minor compactions and a slower algorithm offering better compression ratios for infrequent major compactions.
+
+
+---
+
+* [HBASE-25766](https://issues.apache.org/jira/browse/HBASE-25766) | *Major* | **Introduce RegionSplitRestriction that restricts the pattern of the split point**
+
+After HBASE-25766, we can specify a split restriction, "KeyPrefix" or "DelimitedKeyPrefix", to a table with the "hbase.regionserver.region.split\_restriction.type" property. The "KeyPrefix" split restriction groups rows by a prefix of the row-key. And the "DelimitedKeyPrefix" split restriction groups rows by a prefix of the row-key with a delimiter.
+
+For example:
+\`\`\`
+# Create a table with a "KeyPrefix" split restriction, where the prefix length is 2 bytes
+hbase\> create 'tbl1', 'fam', {CONFIGURATION =\> {'hbase.regionserver.region.split\_restriction.type' =\> 'KeyPrefix', 'hbase.regionserver.region.split\_restriction.prefix\_length' =\> '2'}}
+
+# Create a table with a "DelimitedKeyPrefix" split restriction, where the delimiter is a comma (,)
+hbase\> create 'tbl2', 'fam', {CONFIGURATION =\> {'hbase.regionserver.region.split\_restriction.type' =\> 'DelimitedKeyPrefix', 'hbase.regionserver.region.split\_restriction.delimiter' =\> ','}}
+\`\`\`
+
+Instead of specifying a split restriction to a table directly, we can also set the properties in hbase-site.xml. In this case, the specified split restriction is applied for all the tables.
+
+Note that the split restriction is also applied to a user-specified split point so that we don't allow users to break the restriction, which is different behavior from the existing KeyPrefixRegionSplitPolicy and DelimitedKeyPrefixRegionSplitPolicy.
+
+
+---
+
+* [HBASE-25290](https://issues.apache.org/jira/browse/HBASE-25290) | *Major* | **Remove table on master related code in balancer implementation**
+
+Removed the deprecated configs 'hbase.balancer.tablesOnMaster' and 'hbase.balancer.tablesOnMaster.systemTablesOnly', which means that in the 3.0.0 release the master can not carry regions any more.
+According to the compatibility guide, we should keep these till 4.0.0, but since the feature never worked well, we decided to remove them in the 3.0.0 release.
+Notice that, when maintenance mode is on, HMaster could still carry system regions. This is the only exception.
+
+
+---
+
+* [HBASE-25775](https://issues.apache.org/jira/browse/HBASE-25775) | *Major* | **Use a special balancer to deal with maintenance mode**
+
+Introduced a MaintenanceLoadBalancer to be used only under maintenance mode. Typically you should not use it as your balancer implementation.
+
+
+---
+
+* [HBASE-25767](https://issues.apache.org/jira/browse/HBASE-25767) | *Major* | **CandidateGenerator.getRandomIterationOrder is too slow on large cluster**
+
+In the actual implementation classes of CandidateGenerator, we now just randomly select a start point and then iterate sequentially, instead of using the old way, where we would create a big array to hold all the integers in [0, num\_regions\_in\_cluster), shuffle the array, and then iterate over it.
+The new implementation is 'random' enough as we only select one candidate each time. The problem with the old implementation is that it creates an array every time we want a candidate; if we have tens of thousands of regions, we create an array of that length every time, which causes big GC pressure and slows down the balancer execution.
+
+
+---
+
+* [HBASE-25744](https://issues.apache.org/jira/browse/HBASE-25744) | *Major* | **Change default of \`hbase.normalizer.merge.min\_region\_size.mb\` to \`0\`**
+
+Before this change, by default, the normalizer would exclude any region with a total \`storefileSizeMB\` \<= 1 from merge consideration. This changes the default so that these small regions will be merged away.
+
+
+---
+
+* [HBASE-25716](https://issues.apache.org/jira/browse/HBASE-25716) | *Major* | **The configured loggers in log4j2.xml will always be created**
+
+Added 'createOnDemand' to all the appenders to let them only create log files when necessary. And since a log4j2 appender will always create the parent directory and the default location for the http request log is /var/log/hbase, we commented that appender out in log4j2.xml so it will not try to create the directory by default. Users need to uncomment it when enabling the http request log.
+
+
+---
+
+* [HBASE-25199](https://issues.apache.org/jira/browse/HBASE-25199) | *Minor* | **Remove HStore#getStoreHomedir**
+
+Moved the following methods from HStore to HRegionFileSystem
+
+- #getStoreHomedir(Path, RegionInfo, byte[])
+- #getStoreHomedir(Path, String, byte[])
+
+
+---
+
+* [HBASE-25174](https://issues.apache.org/jira/browse/HBASE-25174) | *Major* | **Remove deprecated fields in HConstants which should be removed in 3.0.0**
+
+Removed the following constants without replacement
+
+- HConstants#HBASE\_REGIONSERVER\_LEASE\_PERIOD\_KEY
+- HConstants#CP\_HTD\_ATTR\_KEY\_PATTERN
+- HConstants#CP\_HTD\_ATTR\_VALUE\_PATTERN
+- HConstants#CP\_HTD\_ATTR\_VALUE\_PARAM\_KEY\_PATTERN
+- HConstants#CP\_HTD\_ATTR\_VALUE\_PARAM\_VALUE\_PATTERN
+- HConstants#CP\_HTD\_ATTR\_VALUE\_PARAM\_PATTERN
+- HConstants#META\_QOS
+
+Moved the following constant into private scope
+
+- HConstants#OLDEST\_TIMESTAMP
+
+
+---
+
+* [HBASE-25685](https://issues.apache.org/jira/browse/HBASE-25685) | *Major* | **asyncprofiler2.0 no longer supports svg; wants html**
+
+If you are on asyncprofiler 1.x, all is good. If you are on asyncprofiler 2.x with hbase-2.3.x or hbase-2.4.x, add '?output=html' to get flamegraphs from the profiler.
+
+Otherwise, with hbase-2.5+ and asyncprofiler 2, all works. With asyncprofiler 1 and hbase-2.5+, you may have to add '?output=svg' to the query.
+
+
+---
+
+* [HBASE-19577](https://issues.apache.org/jira/browse/HBASE-19577) | *Major* | **Use log4j2 instead of log4j for logging**
+
+Use log4j2 instead of log4j for logging.
+Excluded the log4j dependency from hbase and its transitive dependencies, and use log4j-1.2-api as a test dependency for bridging, as hadoop still needs log4j for some things. Copied the FileAppender implementation into hbase-logging as the ContainerLogAppender for the YARN NodeManager extends it. All log4j.properties files have been replaced by log4j2.xml.
+For log4j2 there is no 'log4j.rootLogger' config, so we need to configure the level and the appender separately; the system properties are now 'hbase.root.logger.level' and 'hbase.root.logger.appender', and for the security loggers they are 'hbase.security.logger.level' and 'hbase.security.logger.appender'. But for setting them from the command line, you can still use something like 'HBASE\_ROOT\_LOGGER=INFO,console' as we will split it and set the level and appender separately.
+
+
+---
+
+* [HBASE-25681](https://issues.apache.org/jira/browse/HBASE-25681) | *Major* | **Add a switch for server/table queryMeter**
+
+Adds "hbase.regionserver.enable.server.query.meter" and "hbase.regionserver.enable.table.query.meter" switches which are off by default.
+
+Note, these counters used to be ON by default; now they are off.
+
+
+---
+
+* [HBASE-25518](https://issues.apache.org/jira/browse/HBASE-25518) | *Major* | **Support separate child regions to different region servers**
+
+Adds a config key for enabling/disabling automatically separating child regions onto different region servers in the region split procedure. One child will be kept on the server where the parent region was, and the other child will be assigned to a random server.
+
+hbase.master.auto.separate.child.regions.after.split.enabled
+
+Default setting is false/off.
+
+
+---
+
+* [HBASE-25608](https://issues.apache.org/jira/browse/HBASE-25608) | *Major* | **Support HFileOutputFormat locality sensitive even destination cluster is different from source cluster**
+
+
+Added configurations to specify the ZK cluster key of a remote cluster in HFileOutputFormat2.
+By default, input and output go to the cluster specified in the Job configuration.
+Use HFileOutputFormat2#configureRemoteCluster to have the output go to a remote cluster.
+HFileOutputFormat2#configureIncrementalLoad(Job, Table, RegionLocator) configures them using the Table's configuration.
+You can also configure them by calling HFileOutputFormat2#configureRemoteCluster explicitly.
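+
+For illustration, a sketch of pointing the output at a remote cluster (the job setup is abbreviated and the ZK quorum value is a placeholder; check the HFileOutputFormat2 javadoc for the exact signature):
+
+\`\`\`
+Job job = Job.getInstance(HBaseConfiguration.create(), "hfile-prepare");
+// only the ZK cluster key of the destination cluster is needed
+Configuration remoteClusterConf = HBaseConfiguration.create();
+remoteClusterConf.set("hbase.zookeeper.quorum", "remote-zk1,remote-zk2,remote-zk3");
+HFileOutputFormat2.configureRemoteCluster(job, remoteClusterConf);
+\`\`\`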
+
+
+---
+
+* [HBASE-25665](https://issues.apache.org/jira/browse/HBASE-25665) | *Major* | **Disable reverse DNS lookup for SASL Kerberos client connection**
+
+New client side configuration \`hbase.unsafe.client.kerberos.hostname.disable.reversedns\` is added.
+
+This is an advanced configuration for experts and you shouldn't set it unless you really understand what it does.
+By default (this configuration set to false), the HBase secure client using SASL Kerberos performs a reverse DNS lookup to get the hostname for the server principal, using InetAddress.getCanonicalHostName.
+If you set this configuration to true, the HBase client doesn't perform a reverse DNS lookup for the server principal and uses InetAddress.getHostName, which is sent by the HBase cluster, instead.
+This helps deploy client applications in unusual network environments where DNS doesn't provide reverse lookup.
+Check the description of the JIRA ticket HBASE-25665 carefully and verify that this configuration actually fits your environment and deployment before enabling it.
+
+
+---
+
+* [HBASE-25374](https://issues.apache.org/jira/browse/HBASE-25374) | *Minor* | **Make REST Client connection and socket time out configurable**
+
+Configuration parameters to set the REST client connection and socket timeouts:
+
+"hbase.rest.client.conn.timeout" - default is 2 \* 1000 (2 seconds)
+
+"hbase.rest.client.socket.timeout" - default is 30 \* 1000 (30 seconds)
+
+
+---
+
+* [HBASE-25566](https://issues.apache.org/jira/browse/HBASE-25566) | *Major* | **RoundRobinTableInputFormat**
+
+Adds RoundRobinTableInputFormat, a subclass of TableInputFormat, that takes the TIF#getSplits list and re-sorts it so as to spread the InputSplits as broadly across the cluster as possible. RRTIF works to frustrate bunching of InputSplits on RegionServers, avoiding the scenario where a few RegionServers work hard fielding many InputSplits while others idle hosting few or none.
+
+
+---
+
+* [HBASE-25587](https://issues.apache.org/jira/browse/HBASE-25587) | *Major* | **[hbck2] Schedule SCP for all unknown servers**
+
+Adds scheduleSCPsForUnknownServers to Hbck Service.
+
+
+---
+
+* [HBASE-25636](https://issues.apache.org/jira/browse/HBASE-25636) | *Minor* | **Expose HBCK report as metrics**
+
+Expose HBCK report results in metrics, including: "orphanRegionsOnRS", "orphanRegionsOnFS", "inconsistentRegions", "holes", "overlaps", "unknownServerRegions" and "emptyRegionInfoRegions".
+
+
+---
+
+* [HBASE-25582](https://issues.apache.org/jira/browse/HBASE-25582) | *Major* | **Support setting scan ReadType to be STREAM at cluster level**
+
+Adds a new meaning for the config 'hbase.storescanner.pread.max.bytes' when configured with a value \< 0.
+In HBase 2.x we allow the Scan op to specify a ReadType (PREAD / STREAM / DEFAULT). When a Scan comes with the DEFAULT read type, we start the scan with preads and later switch to stream reads once we see we are scanning a total data size \> the value of hbase.storescanner.pread.max.bytes (calculated per region:cf). This config defaults to 4 x the HFile block size, i.e. 256 KB by default.
+This jira adds a new meaning for this config when configured with a negative value. In that case, for all scans with the DEFAULT read type, we start with STREAM reads right away (i.e. the switch happens at the beginning of the scan).
+
+
+---
+
+* [HBASE-25460](https://issues.apache.org/jira/browse/HBASE-25460) | *Major* | **Expose drainingServers as cluster metric**
+
+Exposed new jmx metrics "draininigRegionServers" and "numDrainingRegionServers", providing the comma separated names of regionservers that are put in draining mode and the number of such regionservers, respectively.
+
+
+---
+
+* [HBASE-25615](https://issues.apache.org/jira/browse/HBASE-25615) | *Major* | **Upgrade java version in pre commit docker file**
+
+jdk8u232-b09 -\> jdk8u282-b08
+jdk-11.0.6\_10 -\> jdk-11.0.10\_9
+
+
+---
+
+* [HBASE-23887](https://issues.apache.org/jira/browse/HBASE-23887) | *Major* | **New L1 cache : AdaptiveLRU**
+
+Introduced a new L1 cache: AdaptiveLRU. It is supposed to provide better performance than the default LRU cache.
+Set config key "hfile.block.cache.policy" to "AdaptiveLRU" in hbase-site in order to start using this new cache.
+
+
+---
+
+* [HBASE-25375](https://issues.apache.org/jira/browse/HBASE-25375) | *Major* | **Provide a VM-based release environment**
+
+<!-- markdown -->
+Adds a Vagrant project for running releases under `dev-support/release-vm`. This is intended to be an environment wherein the `create-release` scripts can be run (in docker mode). See the directory's readme for details.
+
+
+---
+
+* [HBASE-25449](https://issues.apache.org/jira/browse/HBASE-25449) | *Major* | **'dfs.client.read.shortcircuit' should not be set in hbase-default.xml**
+
+The presence of HDFS short-circuit read configuration properties in hbase-default.xml inadvertently causes short-circuit reads to not happen inside of RegionServers, despite short-circuit reads being enabled in hdfs-site.xml.
+
+
+---
+
+* [HBASE-25333](https://issues.apache.org/jira/browse/HBASE-25333) | *Major* | **Add maven enforcer rule to ban VisibleForTesting imports**
+
+Ban the imports of guava's VisibleForTesting, which means you should not use this annotation in HBase any more.
+For IA.Public and IA.LimitedPrivate classes, typically you should not expose any test related fields/methods there, and if you want to hide something, use IA.Private on the specific fields/methods.
+For IA.Private classes, if you want to expose something only for tests, use the RestrictedApi annotation from error prone, which will cause a compilation error if someone breaks the rule in the future.
+
+
+---
+
+* [HBASE-25473](https://issues.apache.org/jira/browse/HBASE-25473) | *Major* | **[create-release] checkcompatibility.py failing with "KeyError: 'binary'"**
+
+Adds temporary exclude of hbase-shaded-testing-util from checkcompatibility so create-release can work again. Undo exclude when sub-issue is fixed.
+
+
+---
+
+* [HBASE-25441](https://issues.apache.org/jira/browse/HBASE-25441) | *Critical* | **add security check for some APIs in RSRpcServices**
+
+RSRpcServices APIs that can be accessed only with Admin rights:
+- stopServer
+- updateFavoredNodes
+- updateConfiguration
+- clearRegionBlockCache
+- clearSlowLogsResponses
+
+
+---
+
+* [HBASE-25432](https://issues.apache.org/jira/browse/HBASE-25432) | *Blocker* | **we should add security checks for setTableStateInMeta and fixMeta**
+
+setTableStateInMeta and fixMeta can be accessed only through Admin rights
+
+
+---
+
+* [HBASE-25318](https://issues.apache.org/jira/browse/HBASE-25318) | *Minor* | **Configure where IntegrationTestImportTsv generates HFiles**
+
+Added IntegrationTestImportTsv.generatedHFileFolder configuration property to override the default location in IntegrationTestImportTsv. Useful for running the integration test when HDFS Transparent Encryption is enabled.
+
+
+---
+
+* [HBASE-24751](https://issues.apache.org/jira/browse/HBASE-24751) | *Minor* | **Display Task completion time and/or processing duration on Web UI**
+
+Adds completion time to tasks display.
+
+
+---
+
+* [HBASE-25456](https://issues.apache.org/jira/browse/HBASE-25456) | *Critical* | **setRegionStateInMeta need security check**
+
+setRegionStateInMeta can be accessed only through Admin rights
+
+
+---
+
+* [HBASE-25451](https://issues.apache.org/jira/browse/HBASE-25451) | *Major* | **Upgrade commons-io to 2.8.0**
+
+Upgrade commons-io to 2.8.0. Remove deprecated IOUtils.closeQuietly call in code base.
+
+
+---
+
+* [HBASE-24764](https://issues.apache.org/jira/browse/HBASE-24764) | *Minor* | **Add support of adding base peer configs via hbase-site.xml for all replication peers.**
+
+<!-- markdown -->
+
+Adds a new configuration parameter "hbase.replication.peer.base.config" which accepts semicolon-separated key=CSV pairs (example: k1=v1;k2=v2_1,v3...). When this configuration is set on the server side, these kv pairs are added to every peer configuration if not already set. Peer specific configuration overrides have precedence over the above default configuration. This is useful in cases when some configuration has to be set for all the peers by default and one does not want to add it to every peer definition.
+
+
+---
+
+* [HBASE-25419](https://issues.apache.org/jira/browse/HBASE-25419) | *Major* | **Remove deprecated methods in RpcServer implementation**
+
+Removed all the call methods except call(RpcCall, MonitoredRPCHandler) in RpcServerInterface.
+
+
+---
+
+* [HBASE-24966](https://issues.apache.org/jira/browse/HBASE-24966) | *Major* | **The methods in AsyncTableRegionLocator should not throw IOException directly**
+
+Removed the throws clause from the following 3 methods in AsyncTableRegionLocator:
+
+getStartKeys
+getEndKeys
+getStartEndKeys
+
+It was a mistake and we never throw exceptions directly from these methods. You should get the exception from the returned CompletableFuture.
+
+In order not to introduce new methods and cause more confusion, we just removed the throws clause. It is an incompatible change; you may need to change your code to remove the catch section when you upgrade to HBase 3.x.
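+
+For example, callers that previously caught IOException around these calls now retrieve the error from the future (a minimal sketch; the asyncConn variable and table name are placeholders):
+
+\`\`\`
+AsyncTableRegionLocator locator = asyncConn.getRegionLocator(TableName.valueOf("tbl1"));
+try {
+  locator.getStartEndKeys().get();
+} catch (ExecutionException e) {
+  // the underlying IOException is now the cause of the ExecutionException
+  e.getCause().printStackTrace();
+} catch (InterruptedException e) {
+  Thread.currentThread().interrupt();
+}
+\`\`\`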
+
+
+---
+
+* [HBASE-25127](https://issues.apache.org/jira/browse/HBASE-25127) | *Major* | **Enhance PerformanceEvaluation to profile meta replica performance.**
+
+Three new commands are added to PE:
+
+metaWrite, metaRandomRead and cleanMeta.
+
+Usage example:
+hbase pe  --rows=100000 metaWrite  1
+hbase pe  --nomapreduce --rows=100000 metaRandomRead  32
+hbase pe  --rows=100000 cleanMeta 1
+
+metaWrite and cleanMeta should be run with only 1 thread and the same number of rows so all the rows inserted will be cleaned up properly.
+
+metaRandomRead can be run with multiple threads. The rows option should be set to within the range of rows inserted by metaWrite.
+
+
+---
+
+* [HBASE-25237](https://issues.apache.org/jira/browse/HBASE-25237) | *Major* | **'hbase master stop' shuts down the cluster, not the master only**
+
+\`hbase master stop\` now shuts down only the master by default.
+1. Help added to \`hbase master stop\`:
+To stop cluster, use \`stop-hbase.sh\` or \`hbase master stop --shutDownCluster\`
+
+2. Help added to \`stop-hbase.sh\`:
+stop-hbase.sh can only be used for shutting down entire cluster. To shut down (HMaster\|HRegionServer) use hbase-daemon.sh stop (master\|regionserver)
+
+
+---
+
+* [HBASE-25242](https://issues.apache.org/jira/browse/HBASE-25242) | *Critical* | **Add Increment/Append support to RowMutations**
+
+<!-- markdown -->
+
+After HBASE-25242, we can add Increment/Append operations to RowMutations and perform those operations atomically in a single row.
+
+This change alters a public API so that the `mutateRow` method in both the `Table` and `AsyncTable` classes now returns a `Result` object with the result of the passed operations. Previously these methods had no return values.
+
+Code that calls these methods which was compiled against an earlier version of the HBase client libraries will fail at runtime with a `NoSuchMethodError` when used with this release. If you are going to ignore the result of the passed operations you can simply recompile your application with an updated dependency and no additional changes.
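+
+For example, a minimal sketch (the `table`, `row`, `family`, and qualifier/value variables are placeholders; exception handling is elided):
+
+```
+RowMutations mutations = new RowMutations(row);
+mutations.add(new Increment(row).addColumn(family, counterQualifier, 1L));
+mutations.add(new Append(row).addColumn(family, logQualifier, value));
+// mutateRow now returns a Result carrying the outcome of the Increment/Append
+Result result = table.mutateRow(mutations);
+```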
+
+
+---
+
+* [HBASE-25263](https://issues.apache.org/jira/browse/HBASE-25263) | *Major* | **Change encryption key generation algorithm used in the HBase shell**
+
+Since the backward-compatible change we introduced in HBASE-25263, we use the more secure PBKDF2WithHmacSHA384 key generation algorithm (instead of PBKDF2WithHmacSHA1) to generate a secret key for HFile / WAL file encryption when the user defines a string encryption key in the hbase shell.
+
+
+---
+
+* [HBASE-24268](https://issues.apache.org/jira/browse/HBASE-24268) | *Minor* | **REST and Thrift server do not handle the "doAs" parameter case insensitively**
+
+This change allows the REST and Thrift servers to handle the "doAs" parameter case-insensitively, which is deemed as correct per the "specification" provided by the Hadoop community.
+
+
+---
+
+* [HBASE-25278](https://issues.apache.org/jira/browse/HBASE-25278) | *Minor* | **Add option to toggle CACHE\_BLOCKS in count.rb**
+
+A new option, CACHE\_BLOCKS, was added to the \`count\` shell command which will force the data for a table to be loaded into the block cache. By default, the \`count\` command will not cache any blocks. This option can serve as a means for a table's data to be loaded into the block cache on demand. See the help message on the count shell command for usage details.
+
+
+---
+
+* [HBASE-18070](https://issues.apache.org/jira/browse/HBASE-18070) | *Critical* | **Enable memstore replication for meta replica**
+
+"Async WAL Replication" [1] was added by HBASE-11183 "Timeline Consistent region replicas - Phase 2 design" but only for user-space tables. This feature adds "Async WAL Replication" for the hbase:meta table.  It also adds a client 'LoadBalance' mode that has reads go to replicas first and to the primary only on fail so as to shed read load from the primary to alleviate \*hotspotting\* on the hbase:meta Region.
+
+Configuration is as it was for the user-space 'Async WAL Replication'. See [2] and [3] for details on how to enable.
+
+1. http://hbase.apache.org/book.html#async.wal.replication
+2. http://hbase.apache.org/book.html#async.wal.replication.meta
+3. http://hbase.apache.org/book.html#\_async\_wal\_replication\_for\_meta\_table\_as\_of\_hbase\_2\_4\_0
+
+
+---
+
+* [HBASE-25126](https://issues.apache.org/jira/browse/HBASE-25126) | *Major* | **Add load balance logic in hbase-client to distribute read load over meta replica regions.**
+
+See parent issue, HBASE-18070, release notes for how to enable.
+
+
+---
+
+* [HBASE-25026](https://issues.apache.org/jira/browse/HBASE-25026) | *Minor* | **Create a metric to track full region scans RPCs**
+
+Adds a new metric where we collect the number of full region scan requests at the RPC layer. This will be collected under "name" : "Hadoop:service=HBase,name=RegionServer,sub=Server"
+
+
+---
+
+* [HBASE-25253](https://issues.apache.org/jira/browse/HBASE-25253) | *Major* | **Deprecated master carrys regions related methods and configs**
+
+Since 2.4.0, all 'master carries regions' related methods (LoadBalancer, BaseLoadBalancer, ZNodeClearer) and configs (hbase.balancer.tablesOnMaster, hbase.balancer.tablesOnMaster.systemTablesOnly) are deprecated; they will be removed in 3.0.0.
+
+
+---
+
+* [HBASE-20598](https://issues.apache.org/jira/browse/HBASE-20598) | *Major* | **Upgrade to JRuby 9.2**
+
+<!-- markdown -->
+The HBase shell now relies on JRuby 9.2. This is a new major version change for JRuby. The most significant change is Ruby compatibility changed from Ruby 2.3 to Ruby 2.5. For more detailed changes please see [the JRuby release announcement for the start of the 9.2 series](https://www.jruby.org/2018/05/24/jruby-9-2-0-0.html) as well as the [general release announcement page for updates since that version](https://www.jruby.org/news).
+
+The runtime dependency versions present on the server side classpath for the Joni (now 2.1.31) and JCodings (now 1.0.55) libraries have also been updated to match those found in the JRuby version shipped with HBase. These version changes are maintenance releases and should be backwards compatible when updated in tandem.
+
+
+---
+
+* [HBASE-25181](https://issues.apache.org/jira/browse/HBASE-25181) | *Major* | **Add options for disabling column family encryption and choosing hash algorithm for wrapped encryption keys.**
+
+<!-- markdown -->
+This change adds options for disabling column family encryption and choosing hash algorithm for wrapped encryption keys. Changes are done such that defaults will keep the same behavior prior to this issue.
+    
+Prior to this change HBase always used the MD5 hash algorithm to store a hash for encryption keys. This hash is needed to verify the secret key of the subject (e.g. making sure that the same secret key is used during encrypted HFile read and write). The MD5 algorithm is considered weak, and can not be used in some (e.g. FIPS compliant) clusters. Having a configurable hash enables us to use newer and more secure hash algorithms like SHA-384 or SHA-512 (which are FIPS compliant).
+
+The hash is set via the configuration option `hbase.crypto.key.hash.algorithm`. It should be set to a JDK `MessageDigest` algorithm like "MD5", "SHA-256" or "SHA-384". The default is "MD5" for backward compatibility.
+
+Alternatively, clusters which rely on an encryption at rest mechanism outside of HBase (e.g. those offered by HDFS) and wish to ensure HBase's encryption at rest system is inactive can set `hbase.crypto.enabled` to `false`.
+
+
+---
+
+* [HBASE-25238](https://issues.apache.org/jira/browse/HBASE-25238) | *Critical* | **Upgrading HBase from 2.2.0 to 2.3.x fails because of “Message missing required fields: state”**
+
+Fixes master procedure store migration issues going from 2.0.x to 2.2.x and/or 2.3.x. Also fixes failed heartbeat parse during rolling upgrade from 2.0.x. to 2.3.x.
+
+
+---
+
+* [HBASE-25235](https://issues.apache.org/jira/browse/HBASE-25235) | *Major* | **Cleanup the deprecated methods in TimeRange**
+
+Removed all the public constructors of TimeRange, and TimeRange is now final. Also removed the withinTimeRange(byte[], int) method; use withinTimeRange(long) instead.
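+
+For illustration, a sketch of obtaining TimeRange instances now that the public constructors are gone (the static factory names below come from the current TimeRange class and are assumptions as far as this note goes):
+
+\`\`\`
+TimeRange all = TimeRange.allTime();                  // unbounded range
+TimeRange bounded = TimeRange.between(1000L, 2000L);  // [minStamp, maxStamp)
+boolean in = bounded.withinTimeRange(1500L);          // replaces withinTimeRange(byte[], int)
+\`\`\`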
+
+
+---
+
+* [HBASE-25212](https://issues.apache.org/jira/browse/HBASE-25212) | *Major* | **Optionally abort requests in progress after deciding a region should close**
+
+If hbase.regionserver.close.wait.abort is set to true, interrupt RPC handler threads holding the region close lock. 
+
+Until requests in progress can be aborted, wait on the region close lock for a configurable interval (specified by hbase.regionserver.close.wait.time.ms, default 60000 (1 minute)). If we have failed to acquire the close lock after this interval elapses, if allowed (also specified by hbase.regionserver.close.wait.abort), abort the regionserver.
+
+We will attempt to interrupt any running handlers every hbase.regionserver.close.wait.interval.ms (default 10000 (10 seconds)) until either the close lock is acquired or we reach the maximum wait time.
+
+
+---
+
+* [HBASE-25173](https://issues.apache.org/jira/browse/HBASE-25173) | *Major* | **Remove owner related methods in TableDescriptor/TableDescriptorBuilder**
+
+Removed the OWNER field from the table creation statement; the relevant permissions will be automatically granted to the current active user of the client.
+
+
+---
+
+* [HBASE-25167](https://issues.apache.org/jira/browse/HBASE-25167) | *Major* | **Normalizer support for hot config reloading**
+
+<!-- markdown -->
+This patch adds [dynamic configuration](https://hbase.apache.org/book.html#dyn_config) support for the following configuration keys related to the normalizer:
+* hbase.normalizer.throughput.max_bytes_per_sec
+* hbase.normalizer.split.enabled
+* hbase.normalizer.merge.enabled
+* hbase.normalizer.min.region.count
+* hbase.normalizer.merge.min_region_age.days
+* hbase.normalizer.merge.min_region_size.mb
+
+
+---
+
+* [HBASE-25224](https://issues.apache.org/jira/browse/HBASE-25224) | *Major* | **Maximize sleep for checking meta and namespace regions availability**
+
+Changed the max sleep time during meta and namespace region availability checks to be 60 sec. Previously there was no such cap.
+
+
+---
+
+* [HBASE-25197](https://issues.apache.org/jira/browse/HBASE-25197) | *Trivial* | **Remove SingletonCoprocessorService**
+
+Removed org.apache.hadoop.hbase.coprocessor.SingletonCoprocessorService without a replacement.
+
+
+---
+
+* [HBASE-25198](https://issues.apache.org/jira/browse/HBASE-25198) | *Minor* | **Remove RpcSchedulerFactory#create(Configuration, PriorityFunction)**
+
+Removed RpcSchedulerFactory#create(Configuration conf, PriorityFunction priority) without a replacement.
+
+
+---
+
+* [HBASE-24628](https://issues.apache.org/jira/browse/HBASE-24628) | *Major* | **Region normalizer now respects a rate limit**
+
+<!-- markdown -->
+Introduces a new configuration, `hbase.normalizer.throughput.max_bytes_per_sec`, for specifying a limit on the throughput of actions executed by the normalizer. Note that while this configuration value is in bytes, the minimum honored value is `1,000,000`, or `1m`. Supports values configured using the human-readable suffixes honored by [`Configuration.getLongBytes`](https://hadoop.apache.org/docs/current/api/org/apache/hadoop/conf/Configuration.html#getLongBytes-java.lang.String-long-)
+
+
+---
+
+* [HBASE-24528](https://issues.apache.org/jira/browse/HBASE-24528) | *Major* | **Improve balancer decision observability**
+
+Retrieve latest balancer decisions made by LoadBalancers.
+
+Examples:
+  hbase\> get\_balancer\_decisions                       
+Retrieve recent balancer decisions with region plans
+
+  hbase\> get\_balancer\_decisions LIMIT =\> 10
+Retrieve 10 most recent balancer decisions with region plans
+
+
+Config change:
+
+hbase.master.balancer.decision.buffer.enabled:
+
+      Indicates whether active HMaster has ring buffer running for storing
+      balancer decisions in FIFO manner with limited entries. The size of
+      the ring buffer is indicated by config:
+      hbase.master.balancer.decision.queue.size
+
+
+---
+
+* [HBASE-14067](https://issues.apache.org/jira/browse/HBASE-14067) | *Major* | **bundle ruby files for hbase shell into a jar.**
+
+<!-- markdown -->
+The `hbase-shell` artifact now contains the ruby files that implement the hbase shell. There should be no downstream impact for users of the shell that rely on the `hbase shell` command.
+
+Folks that wish to include the HBase ruby classes defined for the shell in their own JRuby scripts should add the `hbase-shell.jar` file to their classpath rather than add `${HBASE_HOME}/lib/ruby` to their load paths.
+
+
+---
+
+* [HBASE-24875](https://issues.apache.org/jira/browse/HBASE-24875) | *Major* | **Remove the force param for unassign since it dose not take effect any more**
+
+<!-- markdown -->
+The "force" flag to various unassign commands (java api, shell, etc) has been ignored since HBase 2. As of this change the methods that take it are now deprecated. Downstream users should stop passing/using this flag.
+
+The Admin and AsyncAdmin Java APIs will have the deprecated version of the unassign method with a force flag removed in HBase 4. Callers can safely continue to use the deprecated API until then; the internal implementation just calls the new method.
+
+The MasterObserver coprocessor API deprecates the `preUnassign` and `postUnassign` methods that include the force parameter and replaces them with versions that omit this parameter. The deprecated methods will be removed from the API in HBase 3. Until then downstream coprocessor implementations can safely continue to *just* implement the deprecated method if they wish; the replacement methods provide a default implementation that calls the deprecated method with force set to `false`.
+
+
+---
+
+* [HBASE-25099](https://issues.apache.org/jira/browse/HBASE-25099) | *Major* | **Change meta replica count by altering meta table descriptor**
+
+Now you can change the region replication config for the meta table by altering the meta table.
+The old "hbase.meta.replica.count" is deprecated and will be removed in 4.0.0. But if it is set, we will still honor it, which means that when the master restarts, if we find out that the value of 'hbase.meta.replica.count' is different from the region replication config of the meta table, we will schedule an alter table operation to change the region replication config to the value you configured for 'hbase.meta.replica.count'.
+
+
+---
+
+* [HBASE-23834](https://issues.apache.org/jira/browse/HBASE-23834) | *Major* | **HBase fails to run on Hadoop 3.3.0/3.2.2/3.1.4 due to jetty version mismatch**
+
+Use shaded json and jersey in HBase.
+Ban the imports of unshaded json and jersey in code.
+
+
+---
+
+* [HBASE-25175](https://issues.apache.org/jira/browse/HBASE-25175) | *Major* | **Remove the constructors of HBaseConfiguration**
+
+The following constructors were removed from HBaseConfiguration (due to HBASE-2036):
+
+- HBaseConfiguration(): Use HBaseConfiguration.create() instead.
+- HBaseConfiguration(Configuration): Use HBaseConfiguration.create(Configuration) instead.
+
+
+---
+
+* [HBASE-25163](https://issues.apache.org/jira/browse/HBASE-25163) | *Major* | **Increase the timeout value for nightly jobs**
+
+Increase the timeout value for nightly jobs to 16 hours. Since the new build machines are dedicated to the hbase project, we are allowed to use them all the time.
+
+
+---
+
+* [HBASE-22976](https://issues.apache.org/jira/browse/HBASE-22976) | *Major* | **[HBCK2] Add RecoveredEditsPlayer**
+
+WALPlayer can replay the content of recovered.edits directories.
+
+A side-effect is that the WAL filename timestamp is now factored in when setting start/end times for WALInputFormat, i.e. the wal.start.time and wal.end.time values on a job context. Previously we looked at wal.end.time only; now we consider wal.start.time too. If a file has a name outside of wal.start.time\<-\>wal.end.time, it'll be by-passed. This change in behavior makes it easier on operators crafting timestamp filters when processing WALs.
+
+
+---
+
+* [HBASE-25165](https://issues.apache.org/jira/browse/HBASE-25165) | *Minor* | **Change 'State time' in UI so sorts**
+
+Start time on the Master UI is now displayed using ISO8601 format instead of java Date#toString().
+
+
+---
+
+* [HBASE-25124](https://issues.apache.org/jira/browse/HBASE-25124) | *Major* | **Support changing region replica count without disabling table**
+
+Now you do not need to disable a table before changing its 'region replication' property.
+If you are decreasing the replica count, the excess region replicas will be closed before reopening other replicas.
+If you are increasing the replica count, the new region replicas will be opened after reopening the existing replicas.
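+
+A minimal Java sketch of changing the replica count on an online table (the admin variable, table name and replica count are placeholders):
+
+\`\`\`
+TableName tn = TableName.valueOf("tbl1");
+TableDescriptor current = admin.getDescriptor(tn);
+TableDescriptor updated = TableDescriptorBuilder.newBuilder(current)
+  .setRegionReplication(3)
+  .build();
+// no disableTable/enableTable cycle is needed around this call any more
+admin.modifyTable(updated);
+\`\`\`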
+
+
+---
+
+* [HBASE-25154](https://issues.apache.org/jira/browse/HBASE-25154) | *Major* | **Set java.io.tmpdir to project build directory to avoid writing std\*deferred files to /tmp**
+
+Change the java.io.tmpdir to project.build.directory in surefire-maven-plugin, to avoid writing std\*deferred files to /tmp which may blow up the /tmp disk on our jenkins build node.
+
+
+---
+
+* [HBASE-25055](https://issues.apache.org/jira/browse/HBASE-25055) | *Major* | **Add ReplicationSource for meta WALs; add enable/disable when hbase:meta assigned to RS**
+
+Set hbase.region.replica.replication.catalog.enabled to enable async WAL Replication for hbase:meta region replicas. It's off by default.
+
+Defaults to the RegionReadReplicaEndpoint.class shipping edits -- set hbase.region.replica.catalog.replication to target a different endpoint implementation.
+
+
+---
+
+* [HBASE-25109](https://issues.apache.org/jira/browse/HBASE-25109) | *Major* | **Add MR Counters to WALPlayer; currently hard to tell if it is doing anything**
+
+Adds a WALPlayer to MR Counter output:
+
+	org.apache.hadoop.hbase.mapreduce.WALPlayer$Counter
+		CELLS\_READ=89574
+		CELLS\_WRITTEN=89572
+		DELETES=64
+		PUTS=5305
+		WALEDITS=4375
+
+
+---
+
+* [HBASE-25081](https://issues.apache.org/jira/browse/HBASE-25081) | *Major* | **Up the container nproc uplimit to 30000**
+
+Ups the nproc (processes) limit from 12500 to 30000 in yetus (so build container can have new limit).
+
+
+---
+
+* [HBASE-24896](https://issues.apache.org/jira/browse/HBASE-24896) | *Major* | **'Stuck' in static initialization creating RegionInfo instance**
+
+1. Untangle RegionInfo, RegionInfoBuilder, and MutableRegionInfo static
+initializations.
+2. Undo static initializing references from RegionInfo to RegionInfoBuilder.
+3. Mark RegionInfo#UNDEFINED IA.Private and deprecated;
+it is for internal use only and likely to be removed in HBase4. (sub-task HBASE-24918)
+4. Move MutableRegionInfo from inner-class of
+RegionInfoBuilder to be (package private) standalone. (sub-task HBASE-24918)
+
+
+---
+
+* [HBASE-24994](https://issues.apache.org/jira/browse/HBASE-24994) | *Minor* | **Add hedgedReadOpsInCurThread metric**
+
+Expose Hadoop hedgedReadOpsInCurThread metric to HBase.
+This metric counts the number of times the hedged reads service executor rejected a read task, falling back to the current thread.
+This will help determine the proper size of the thread pool (dfs.client.hedged.read.threadpool.size).
+
+
+---
+
+* [HBASE-24776](https://issues.apache.org/jira/browse/HBASE-24776) | *Major* | **[hbtop] Support Batch mode**
+
+HBASE-24776 added the following command line parameters to hbtop:
+\| Argument \| Description \| 
+\|---\|---\|
+\| -n,--numberOfIterations \<arg\> \| The number of iterations \|
+\| -O,--outputFieldNames \| Print each of the available field names on a separate line, then quit \|
+\| -f,--fields \<arg\> \| Show only the given fields. Specify comma separated fields to show multiple fields \|
+\| -s,--sortField \<arg\> \| The initial sort field. You can prepend a \`+' or \`-' to the field name to also override the sort direction. A leading \`+' will force sorting high to low, whereas a \`-' will ensure a low to high ordering \|
+\| -i,--filters \<arg\> \| The initial filters. Specify comma separated filters to set multiple filters \|
+\| -b,--batchMode \| Starts hbtop in Batch mode, which could be useful for sending output from hbtop to other programs or to a file. In this mode, hbtop will not accept input and runs until the iterations limit you've set with the \`-n' command-line option or until killed \|
+
+
+---
+
+* [HBASE-24602](https://issues.apache.org/jira/browse/HBASE-24602) | *Major* | **Add Increment and Append support to CheckAndMutate**
+
+Summary of the change of HBASE-24602:
+- Add \`build(Increment)\` and \`build(Append)\` methods to the \`Builder\` class of the \`CheckAndMutate\` class. After this change, we can perform checkAndIncrement/Append operations as follows:
+\`\`\`
+// Build a CheckAndMutate object with an Increment object
+CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+  .ifEquals(family, qualifier, value)
+  .build(increment);
+
+// Perform a CheckAndIncrement operation
+CheckAndMutateResult checkAndMutateResult = table.checkAndMutate(checkAndMutate);
+
+// Get whether or not the CheckAndIncrement operation is successful
+boolean success = checkAndMutateResult.isSuccess();
+
+// Get the result of the increment operation
+Result result = checkAndMutateResult.getResult();
+\`\`\`
+- After this change, \`HRegion.batchMutate()\` is used for increment/append operations.
+- As the side effect of the above change, the following coprocessor methods of RegionObserver are called when increment/append operations are performed:
+  - preBatchMutate()
+  - postBatchMutate()
+  - postBatchMutateIndispensably()
+
+
+---
+
+* [HBASE-24892](https://issues.apache.org/jira/browse/HBASE-24892) | *Major* | **config 'hbase.hregion.memstore.mslab.indexchunksize' not be used**
+
+Remove the config "hbase.hregion.memstore.mslab.indexchunksize" which never used. And use "hbase.hregion.memstore.mslab.indexchunksize.percent" instead.
+
+
+---
+
+* [HBASE-24935](https://issues.apache.org/jira/browse/HBASE-24935) | *Major* | **Remove 1.3.6 from download page**
+
+Removed 1.3.6 from download page as it is EOL.
+
+
+---
+
+* [HBASE-24799](https://issues.apache.org/jira/browse/HBASE-24799) | *Major* | **Do not call make\_binary\_release for hbase-thirdparty in release scripts**
+
+Skip make\_binary\_release call for hbase-thirdparty in release scripts as we only publish src tarballs for hbase-thirdparty.
+
+
+---
+
+* [HBASE-24886](https://issues.apache.org/jira/browse/HBASE-24886) | *Major* | **Remove deprecated methods in RowMutations**
+
+Removed RowMutations.add(Put) and RowMutations.add(Delete). Use RowMutations.add(Mutation) directly.
+
+
+---
+
+* [HBASE-24887](https://issues.apache.org/jira/browse/HBASE-24887) | *Major* | **Remove Row.compareTo**
+
+Remove Row.compareTo
+
+
+---
+
+* [HBASE-24150](https://issues.apache.org/jira/browse/HBASE-24150) | *Major* | **Allow module tests run in parallel**
+
+Pass -T2 to mvn. Makes it so we build two modules at a time, dependencies permitting. Helps speed building and testing. Doubles the resource usage when running modules in parallel.
+
+
+---
+
+* [HBASE-24126](https://issues.apache.org/jira/browse/HBASE-24126) | *Major* | **Up the container nproc uplimit from 10000 to 12500**
+
+Start docker with upped ulimit for nproc passing '--ulimit nproc=12500'. It was 10000, the default, but made it 12500. Then, set PROC\_LIMIT in hbase-personality so when yetus runs, it is w/ the new 12500 value.
+
+
+---
+
+* [HBASE-22740](https://issues.apache.org/jira/browse/HBASE-22740) | *Major* | **[RSGroup] Forward-port HBASE-22658 to master branch**
+
+Only forward-ported to the master branch; couldn't apply to the branch-2.x branches because of a dependency issue.
+
+
+---
+
+* [HBASE-24694](https://issues.apache.org/jira/browse/HBASE-24694) | *Major* | **Support flush a single column family of table**
+
+Adds an option for the flush command to flush only the stores of the specified column family, across all regions of the given table (stores from other column families on this table will not get flushed).
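+
+A minimal Java sketch of the corresponding Admin call (the column-family overload of flush is assumed here; the admin variable, table and family names are placeholders):
+
+\`\`\`
+admin.flush(TableName.valueOf("tbl1"), Bytes.toBytes("fam"));
+\`\`\`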
+
+
+---
+
+* [HBASE-24625](https://issues.apache.org/jira/browse/HBASE-24625) | *Critical* | **AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.**
+
+We add a method getSyncedLength to the WALProvider.WriterBase interface for the WALFileLengthProvider used by replication. Consider the case where we use AsyncFSWAL: we write to 3 DNs concurrently, and according to the visibility guarantee of HDFS the data will be available immediately
+upon arriving at a DN, since every DN considers itself the last one in the pipeline. This means replication may read uncommitted data and replicate it to the remote cluster and cause data inconsistency. The method WriterBase#getLength may return a length that is still only in the hdfs client buffer and not yet successfully synced to HDFS, so we use the new method WriterBase#getSyncedLength to return the length successfully synced to HDFS, and the replication thread will only read the WAL file being written up to this length.
+see also HBASE-14004 and this document for more details:
+https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
+
+Before this patch, replication may read uncommitted data and replicate it to the slave cluster, causing data inconsistency between the master and slave clusters; without this patch applied, FSHLog can be used instead of AsyncFSWAL to reduce the probability of inconsistency.
+
+
+---
+
+* [HBASE-24779](https://issues.apache.org/jira/browse/HBASE-24779) | *Minor* | **Improve insight into replication WAL readers hung on checkQuota**
+
+New metrics are exposed, on the global source, for replication which indicate the usage of the "WAL entry buffer" that was introduced in HBASE-15995. When this usage reaches the limit, that RegionServer will cease to read more data for the sake of trying to replicate it. This usage (and limit) is local to each RegionServer and is shared across all peers being handled by that RegionServer.
+
+
+---
+
+* [HBASE-24404](https://issues.apache.org/jira/browse/HBASE-24404) | *Major* | **Support flush a single column family of region**
+
+This adds an extra "flush" command option that allows for specifying an individual family to have its store flushed.
+
+Usage:
+flush 'REGIONNAME','FAMILYNAME' 
+flush 'ENCODED\_REGIONNAME','FAMILYNAME'
+
+
+---
+
+* [HBASE-24805](https://issues.apache.org/jira/browse/HBASE-24805) | *Major* | **HBaseTestingUtility.getConnection should be threadsafe**
+
+<!-- markdown -->
+Users of `HBaseTestingUtility` can now safely call the `getConnection` method from multiple threads.
+
+As a consequence of refactoring to improve the thread safety of the HBase testing classes, the protected `conf` member of the  `HBaseCommonTestingUtility` class has been marked final. Downstream users who extend from the class hierarchy rooted at this class will need to pass the Configuration instance they want used to their super constructor rather than overwriting the instance variable.
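+
+A minimal sketch of the new subclass pattern (hypothetical class name; assumes the existing `HBaseTestingUtility(Configuration)` constructor):
+```
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+
+public class MyTestingUtility extends HBaseTestingUtility {
+  public MyTestingUtility(Configuration conf) {
+    // Pass the desired Configuration to the parent constructor rather than
+    // overwriting the now-final protected 'conf' field.
+    super(conf);
+  }
+}
+```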
+
+
+---
+
+* [HBASE-24507](https://issues.apache.org/jira/browse/HBASE-24507) | *Major* | **Remove HTableDescriptor and HColumnDescriptor**
+
+Removed HTableDescriptor and HColumnDescriptor. Please use TableDescriptor and ColumnFamilyDescriptor instead.
+
+Since the latter classes are immutable, you should use TableDescriptorBuilder and ColumnFamilyDescriptorBuilder to create them.
+
+TableDescriptorBuilder.ModifyableTableDescriptor and ColumnFamilyDescriptorBuilder.ModifyableColumnFamilyDescriptor have been changed from public to private. This does not break our compatibility rules since they are marked IA.Private, but these two classes were exposed through some IA.Public classes such as HBTU, so if you use those methods you will have to change your code.
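+
+A minimal sketch of building the replacement descriptors (hypothetical table and family names):
+\`\`\`
+TableDescriptor tableDescriptor = TableDescriptorBuilder
+  .newBuilder(TableName.valueOf("mytable"))
+  .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f"))
+  .build();
+admin.createTable(tableDescriptor);
+\`\`\`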
+
+
+---
+
+* [HBASE-24767](https://issues.apache.org/jira/browse/HBASE-24767) | *Major* | **Change default to false for HBASE-15519 per-user metrics**
+
+Disables per-user metrics. They were enabled by default for the first time in hbase-2.3.0 but they need some work before they can be on all the time (See HBASE-15519)
+
+
+---
+
+* [HBASE-24704](https://issues.apache.org/jira/browse/HBASE-24704) | *Major* | **Make the Table Schema easier to view even there are multiple families**
+
+Improves the table UI by laying out column family information horizontally instead of vertically.
+
+
+---
+
+* [HBASE-11686](https://issues.apache.org/jira/browse/HBASE-11686) | *Minor* | **Shell code should create a binding / irb workspace instead of polluting the root namespace**
+
+In shell, all HBase constants and commands have been moved out of the top-level and into an IRB Workspace. Piped stdin and scripts passed by name to the shell will be evaluated within this workspace. If you absolutely need the top-level definitions, use the new compatibility flag, e.g. hbase shell --top-level-defs or hbase shell --top-level-defs script2run.rb.
+
+
+---
+
+* [HBASE-24722](https://issues.apache.org/jira/browse/HBASE-24722) | *Minor* | **Address hbase-shell commands with unintentional return values**
+
+Shell commands that used to return true and false as Strings now return proper booleans: balance\_switch, snapshot\_cleanup\_switch, enable\_rpc\_throttle, disable\_rpc\_throttle, enable\_exceed\_throttle\_quota, disable\_exceed\_throttle\_quota. Shell commands that used to return the number 1 regardless of result now return correct values: is\_disabled, balancer, normalize, normalizer\_switch, normalizer\_enabled, catalogjanitor\_switch, catalogjanitor\_enabled, cleaner\_chore\_switch, cleaner\_chore\_enabled, splitormerge\_switch, splitormerge\_enabled, clear\_deadservers, clear\_block\_cache.
+
+
+---
+
+* [HBASE-24632](https://issues.apache.org/jira/browse/HBASE-24632) | *Major* | **Enable procedure-based log splitting as default in hbase3**
+
+Enables procedure-based distributed WAL splitting as default (HBASE-20610). To use 'classic' zk-coordinated splitting instead, set 'hbase.split.wal.zk.coordinated' to 'true'.
+
+
+---
+
+* [HBASE-24770](https://issues.apache.org/jira/browse/HBASE-24770) | *Major* | **Reimplement the Constraints API and revisit the IA annotations on related classes**
+
+Use TableDescriptorBuilder in Constraints for modifying TableDescriptor.
+Mark Constraints as IA.Public.
+
+
+---
+
+* [HBASE-24698](https://issues.apache.org/jira/browse/HBASE-24698) | *Major* | **Turn OFF Canary WebUI as default**
+
+Flips the default for 'HBASE-23994 Add WebUI to Canary'. The UI previously defaulted to on, at port 16050; this JIRA changes it so the new UI is off by default.
+
+To enable the UI, set property 'hbase.canary.info.port' to the port you want the UI to use.
+
+
+---
+
+* [HBASE-24578](https://issues.apache.org/jira/browse/HBASE-24578) | *Major* | **[WAL] Add a parameter to config RingBufferEventHandler's SyncFuture count**
+
+Introduces a new parameter "hbase.regionserver.wal.sync.batch.count" to control the WAL sync batch size, which defaults to the value of "hbase.regionserver.handler.count". The default should work well with the default WAL provider (one WAL per RegionServer). If you use read/write separated handlers, you can set "hbase.regionserver.wal.sync.batch.count" to the number of write handlers. If you use wal-per-group or wal-per-region, consider lowering "hbase.regionserver.wal.sync.batch.count"; the default will be too big and will consume more memory as the number of WALs grows. See the example below.
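+
+For example, if a RegionServer were configured with 100 write handlers, a reasonable setting might be:
+hbase.regionserver.wal.sync.batch.count=100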
+
+
+---
+
+* [HBASE-24650](https://issues.apache.org/jira/browse/HBASE-24650) | *Major* | **Change the return types of the new checkAndMutate methods introduced in HBASE-8458**
+
+HBASE-24650 introduced the CheckAndMutateResult class and changed the return type of the checkAndMutate methods to this class in order to support CheckAndMutate with Increment/Append. CheckAndMutateResult has two fields: \*success\*, which indicates whether the operation succeeded, and \*result\*, which holds the result of the operation and is used for CheckAndMutate with Increment/Append.
+
+The new APIs for the Table interface:
+\`\`\`
+/\*\*
+ \* checkAndMutate that atomically checks if a row matches the specified condition. If it does,
+ \* it performs the specified action.
+ \*
+ \* @param checkAndMutate The CheckAndMutate object.
+ \* @return A CheckAndMutateResult object that represents the result for the CheckAndMutate.
+ \* @throws IOException if a remote or network exception occurs.
+ \*/
+default CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate) throws IOException {
+  return checkAndMutate(Collections.singletonList(checkAndMutate)).get(0);
+}
+
+/\*\*
+ \* Batch version of checkAndMutate. The specified CheckAndMutates are batched only in the sense
+ \* that they are sent to a RS in one RPC, but each CheckAndMutate operation is still executed
+ \* atomically (and thus, each may fail independently of others).
+ \*
+ \* @param checkAndMutates The list of CheckAndMutate.
+ \* @return A list of CheckAndMutateResult objects that represents the result for each
+ \*   CheckAndMutate.
+ \* @throws IOException if a remote or network exception occurs.
+ \*/
+default List\<CheckAndMutateResult\> checkAndMutate(List\<CheckAndMutate\> checkAndMutates)
+  throws IOException {
+  throw new NotImplementedException("Add an implementation!");
+}
+\`\`\`
+
+The new APIs for the AsyncTable interface:
+\`\`\`
+/\*\*
+ \* checkAndMutate that atomically checks if a row matches the specified condition. If it does,
+ \* it performs the specified action.
+ \*
+ \* @param checkAndMutate The CheckAndMutate object.
+ \* @return A {@link CompletableFuture}s that represent the result for the CheckAndMutate.
+ \*/
+CompletableFuture\<CheckAndMutateResult\> checkAndMutate(CheckAndMutate checkAndMutate);
+
+/\*\*
+ \* Batch version of checkAndMutate. The specified CheckAndMutates are batched only in the sense
+ \* that they are sent to a RS in one RPC, but each CheckAndMutate operation is still executed
+ \* atomically (and thus, each may fail independently of others).
+ \*
+ \* @param checkAndMutates The list of CheckAndMutate.
+ \* @return A list of {@link CompletableFuture}s that represent the result for each
+ \*   CheckAndMutate.
+ \*/
+List\<CompletableFuture\<CheckAndMutateResult\>\> checkAndMutate(
+  List\<CheckAndMutate\> checkAndMutates);
+
+/\*\*
+ \* A simple version of batch checkAndMutate. It will fail if there are any failures.
+ \*
+ \* @param checkAndMutates The list of rows to apply.
+ \* @return A {@link CompletableFuture} that wrapper the result list.
+ \*/
+default CompletableFuture\<List\<CheckAndMutateResult\>\> checkAndMutateAll(
+  List\<CheckAndMutate\> checkAndMutates) {
+  return allOf(checkAndMutate(checkAndMutates));
+}
+\`\`\`
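+
+A short usage sketch against the Table API (hypothetical row/column variables; assumes the result accessors are named isSuccess() and getResult()):
+\`\`\`
+Table table = ...;
+
+CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+  .ifEquals(family, qualifier, expectedValue)
+  .build(put);
+
+CheckAndMutateResult result = table.checkAndMutate(checkAndMutate);
+if (result.isSuccess()) {
+  // the condition matched and the mutation was applied
+}
+// For CheckAndMutate with Increment/Append, the operation result is available here
+Result r = result.getResult();
+\`\`\`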
+
+
+---
+
+* [HBASE-24671](https://issues.apache.org/jira/browse/HBASE-24671) | *Major* | **Add excludefile and designatedfile options to graceful\_stop.sh**
+
+Add excludefile and designatedfile options to graceful\_stop.sh. 
+
+The designated file should contain one \<hostname:port\> per line; regions are unloaded to the hosts listed in it.
+
+The exclude file should also contain one \<hostname:port\> per line; regions are not unloaded to the hosts listed in it.
+
+Here is a simple example using graceful\_stop.sh with designatedfile option:
+./bin/graceful\_stop.sh --maxthreads 4 --designatedfile /path/designatedfile hostname
+The usage of the excludefile option is the same as the above.
+
+
+---
+
+* [HBASE-24560](https://issues.apache.org/jira/browse/HBASE-24560) | *Major* | **Add a new option of designatedfile in RegionMover**
+
+Add a new option "designatedfile" in RegionMover.
+
+If designated file is present with some contents, we will unload regions to hostnames provided in designated file.
+
+Designated file should have 'host:port' per line.
+
+
+---
+
+* [HBASE-24289](https://issues.apache.org/jira/browse/HBASE-24289) | *Major* | **Heterogeneous Storage for Date Tiered Compaction**
+
+Enhances DateTieredCompaction to support HDFS storage policies within a single column family.
+# First you need to enable DTCP.
+To turn on Date Tiered Compaction (turning it on for the whole cluster is not recommended because it would also apply to the meta table, impacting random gets against meta):
+hbase.hstore.compaction.compaction.policy=org.apache.hadoop.hbase.regionserver.compactions.DateTieredCompactionPolicy
+## Parameters for Date Tiered Compaction:
+hbase.hstore.compaction.date.tiered.max.storefile.age.millis: Files with max-timestamp smaller than this will no longer be compacted. Default: Long.MAX\_VALUE.
+hbase.hstore.compaction.date.tiered.base.window.millis: base window size in milliseconds. Default: 6 hours.
+hbase.hstore.compaction.date.tiered.windows.per.tier: number of windows per tier. Default: 4.
+hbase.hstore.compaction.date.tiered.incoming.window.min: minimal number of files to compact in the incoming window. Set it to the expected number of files in the window to avoid wasteful compaction. Default: 6.
+
+# Then enable HDTCP (Heterogeneous Date Tiered Compaction) with the following example configurations:
+hbase.hstore.compaction.date.tiered.storage.policy.enable=true
+hbase.hstore.compaction.date.tiered.hot.window.age.millis=3600000
+hbase.hstore.compaction.date.tiered.hot.window.storage.policy=ALL\_SSD
+hbase.hstore.compaction.date.tiered.warm.window.age.millis=20600000
+hbase.hstore.compaction.date.tiered.warm.window.storage.policy=ONE\_SSD
+hbase.hstore.compaction.date.tiered.cold.window.storage.policy=HOT
+## It is better to also enable WAL and flushing HFile storage policies with HDTCP. You can tune the following settings as well:
+hbase.wal.storage.policy=ALL\_SSD
+create 'table',{NAME=\>'f1',CONFIGURATION=\>{'hbase.hstore.block.storage.policy'=\>'ALL\_SSD'}}
+
+# Disable HDTCP as follows:
+hbase.hstore.compaction.date.tiered.storage.policy.enable=false
+
+
+---
+
+* [HBASE-24648](https://issues.apache.org/jira/browse/HBASE-24648) | *Major* | **Remove the legacy 'forceSplit' related code at region server side**
+
+Adds a canSplit method to RegionSplitPolicy to determine whether a region can be split. Usually this is not related to the RegionSplitPolicy itself, so the default implementation tests whether the region is available and has no reference files; DisabledRegionSplitPolicy, however, always returns false.
+
+
+---
+
+* [HBASE-20819](https://issues.apache.org/jira/browse/HBASE-20819) | *Minor* | **Use TableDescriptor to replace HTableDescriptor in hbase-shell module**
+
+Removes HBase::Admin.hcd and HBase::Admin.update\_htd\_from\_arg from hbase-shell. Removes 8 constants from HBaseAdmin in the hbase-shell module: CACHE\_DATA\_IN\_L1, COMPARATOR, COMPARATOR\_IGNORE\_REPLICATION, ENCODE\_ON\_DISK, IS\_MOB\_BYTES, LENGTH, MOB\_COMPACT\_PARTITION\_POLICY\_BYTES, MOB\_THRESHOLD\_BYTES.
+
+
+---
+
+* [HBASE-24382](https://issues.apache.org/jira/browse/HBASE-24382) | *Major* | **Flush partial stores of region filtered by seqId when archive wal due to too many wals**
+
+Changes the flush level from region to store when there are too many WALs; this reduces unnecessary flush tasks and small HFiles.
+
+
+---
+
+* [HBASE-24603](https://issues.apache.org/jira/browse/HBASE-24603) | *Critical* | **Zookeeper sync() call is async**
+
+<!-- markdown -->
+
+Fixes a couple of bugs in ZooKeeper interaction. First, the zk sync() call, which is used to sync lagging followers with the leader so that the client sees a consistent snapshot of state, was actually asynchronous under the hood; we make it synchronous for correctness. Second, ZooKeeper events are now processed in a separate thread rather than in the thread context of the ZooKeeper client connection. This decoupling frees up the client connection quickly and avoids deadlocks.
+
+
+---
+
+* [HBASE-24631](https://issues.apache.org/jira/browse/HBASE-24631) | *Major* | **Loosen Dockerfile pinned package versions of the "debian-revision"**
+
+<!-- markdown -->
+Updates our package version numbers throughout the Dockerfiles to be pinned to their epoch:upstream-version components only. Previously we specified the full debian package version number, including the debian-revision. This led to instability as debian packaging details changed.
+See also [man deb-version](http://manpages.ubuntu.com/manpages/xenial/en/man5/deb-version.5.html)
+
+
+---
+
+* [HBASE-24609](https://issues.apache.org/jira/browse/HBASE-24609) | *Major* | **Move MetaTableAccessor out of hbase-client**
+
+Introduced a CatalogFamilyFormat class to hold the methods for generating/parsing cells in the catalog family.
+Renamed AsyncMetaTableAccessor to ClientMetaTableAccessor, removed code duplicated with CatalogFamilyFormat and MetaTableAccessor, and moved some code from MetaTableAccessor to ClientMetaTableAccessor.
+Moved MetaTableAccessor to hbase-balancer.
+
+
+---
+
+* [HBASE-24205](https://issues.apache.org/jira/browse/HBASE-24205) | *Major* | **Create metric to know the number of reads that happens from memstore**
+
+Adds new metrics that count read requests (tracked per row) according to whether the row was served entirely from the memstore or from a mix of files and memstore.
+The metrics are collected under both the Tables mbean and the Regions mbean.
+Under the table mbean, i.e.
+"name": "Hadoop:service=HBase,name=RegionServer,sub=Tables"
+The new metrics will be listed as 
+{code}
+    "Namespace\_default\_table\_t3\_columnfamily\_f1\_metric\_memstoreOnlyRowReadsCount": 5,
+ "Namespace\_default\_table\_t3\_columnfamily\_f1\_metric\_mixedRowReadsCount": 1,
+{code}
+Where the format is:
+{code}
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_columnfamily\_\<columnfamilyname\>\_metric\_memstoreOnlyRowReadsCount
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_columnfamily\_\<columnfamilyname\>\_metric\_mixedRowReadsCount
+{code}
+
+The same metrics under the region mbean, i.e.
+"name": "Hadoop:service=HBase,name=RegionServer,sub=Regions",
+come as
+{code}
+   "Namespace\_default\_table\_t3\_region\_75a7846f4ac4a2805071a855f7d0dbdc\_store\_f1\_metric\_memstoreOnlyRowReadsCount": 5,
+    "Namespace\_default\_table\_t3\_region\_75a7846f4ac4a2805071a855f7d0dbdc\_store\_f1\_metric\_mixedRowReadsCount": 1,
+{code}
+where the format is:
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_region\_\<regionName\>\_store\_\<storeName\>\_metric\_memstoreOnlyRowReadsCount
+Namespace\_\<namespacename\>\_table\_\<tableName\>\_region\_\<regionName\>\_store\_\<storeName\>\_metric\_mixedRowReadsCount
+These counts are aggregated per store: the number of reads served purely from the memstore versus mixed reads served from both memstore and files.
+
+
+---
+
+* [HBASE-21773](https://issues.apache.org/jira/browse/HBASE-21773) | *Critical* | **rowcounter utility should respond to pleas for help**
+
+This adds [-h\|-help] options to rowcounter. Passing either -h or -help prints the rowcounter usage guide, as below:
+
+$hbase rowcounter -h
+
+usage: hbase rowcounter \<tablename\> [options] [\<column1\> \<column2\>...]
+Options:
+    --starttime=\<arg\>       starting time filter to start counting rows from.
+    --endtime=\<arg\>         end time filter limit, to only count rows up to this timestamp.
+    --range=\<arg\>           [startKey],[endKey][;[startKey],[endKey]...]]
+    --expectedCount=\<arg\>   expected number of rows to be count.
+For performance, consider the following configuration properties:
+-Dhbase.client.scanner.caching=100
+-Dmapreduce.map.speculative=false
+
+
+---
+
+* [HBASE-24217](https://issues.apache.org/jira/browse/HBASE-24217) | *Major* | **Add hadoop 3.2.x support**
+
+CI coverage has been extended to include Hadoop 3.2.x for HBase 2.2+.
+
+
+---
+
+* [HBASE-23055](https://issues.apache.org/jira/browse/HBASE-23055) | *Major* | **Alter hbase:meta**
+
+Adds being able to edit hbase:meta table schema. For example,
+
+hbase(main):006:0\> alter 'hbase:meta', {NAME =\> 'info', DATA\_BLOCK\_ENCODING =\> 'ROW\_INDEX\_V1'}
+Updating all regions with the new schema...
+All regions updated.
+Done.
+Took 1.2138 seconds
+
+You can even add column families. However, you cannot delete any of the core hbase:meta column families such as 'info' and 'table'.
+
+
+---
+
+* [HBASE-15161](https://issues.apache.org/jira/browse/HBASE-15161) | *Major* | **Umbrella: Miscellaneous improvements from production usage**
+
+This ticket summarizes significant improvements and expansion to the metrics surface area. Interested users should review the individual sub-tasks.
+
+
+---
+
+* [HBASE-20610](https://issues.apache.org/jira/browse/HBASE-20610) | *Major* | **Procedure V2 - Distributed Log Splitting**
+
+See RN in HBASE-21588 for detail on this feature. It landed in hbase-2.2.0.
+
+
+---
+
+* [HBASE-24038](https://issues.apache.org/jira/browse/HBASE-24038) | *Major* | **Add a metric to show the locality of ssd in table.jsp**
+
+Add a metric to show the locality of ssd in table.jsp, and move the locality related metrics to a new tab named localities.
+
+
+---
+
+* [HBASE-8458](https://issues.apache.org/jira/browse/HBASE-8458) | *Major* | **Support for batch version of checkAndMutate()**
+
+HBASE-8458 introduced the CheckAndMutate class, which is used to perform CheckAndMutate operations. Use the builder class to instantiate a CheckAndMutate object. The builder exposes a fluent-style API, for example:
+\`\`\`
+// A CheckAndMutate operation that performs the specified action if the column (specified by
+// the family and the qualifier) of the row equals the specified value
+CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+  .ifEquals(family, qualifier, value)
+  .build(put);
+
+// A CheckAndMutate operation that performs the specified action if the column (specified by
+// the family and the qualifier) of the row doesn't exist
+CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+  .ifNotExists(family, qualifier)
+  .build(put);
+
+// A CheckAndMutate operation that performs the specified action if the row matches the filter
+CheckAndMutate checkAndMutate = CheckAndMutate.newBuilder(row)
+  .ifMatches(filter)
+  .build(delete);
+\`\`\`
+
+This also added new checkAndMutate APIs to the Table and AsyncTable interfaces and deprecated the old checkAndMutate APIs. Example code for the new APIs is as follows:
+\`\`\`
+Table table = ...;
+
+CheckAndMutate checkAndMutate = ...;
+
+// Perform the checkAndMutate operation
+boolean success = table.checkAndMutate(checkAndMutate);
+
+CheckAndMutate checkAndMutate1 = ...;
+CheckAndMutate checkAndMutate2 = ...;
+
+// Batch version
+List\<Boolean\> successList = table.checkAndMutate(Arrays.asList(checkAndMutate1, checkAndMutate2));
+\`\`\`
+
+This also includes Protocol Buffers level changes. Old clients without this patch will work against new servers with this patch. However, new clients will break against old servers without this patch for checkAndMutate with RowMutations and for mutateRow. So, for a rolling upgrade, upgrade the servers first and then roll out the new clients.
+
+
+---
+
+* [HBASE-24545](https://issues.apache.org/jira/browse/HBASE-24545) | *Major* | **Add backoff to SCP check on WAL split completion**
+
+Adds backoff to the ServerCrashProcedure wait on WAL split completion when there is a large backlog of files to split. (It is possible to avoid SCP blocking on WAL splits altogether by using procedure-based splitting: set 'hbase.split.wal.zk.coordinated' to false.)
+
+
+---
+
+* [HBASE-24524](https://issues.apache.org/jira/browse/HBASE-24524) | *Minor* | **SyncTable logging improvements**
+
+Note that this changes the log level for mismatching row keys: those were originally logged at INFO level and are now logged at DEBUG level, consistent with the logging of mismatching cells. Also, for missing row keys, the row key values are now logged in human readable format, making troubleshooting of mismatches more meaningful for operators.
+
+
+---
+
+* [HBASE-24510](https://issues.apache.org/jira/browse/HBASE-24510) | *Major* | **Remove HBaseTestCase and GenericTestUtils**
+
+HBaseTestCase and GenericTestUtils have been removed.
+
+None of these classes are IA.Public, but HBaseTestCase may be used by users, so this is still marked as an incompatible change.
+
+
+---
+
+* [HBASE-24359](https://issues.apache.org/jira/browse/HBASE-24359) | *Major* | **Optionally ignore edits for deleted CFs for replication.**
+
+Introduces a new config, hbase.replication.drop.on.deleted.columnfamily, defaulting to false. When set to true, replication will drop edits for column families that have been deleted from the replication source and target.
+
+
+---
+
+* [HBASE-24305](https://issues.apache.org/jira/browse/HBASE-24305) | *Minor* | **Handle deprecations in ServerName**
+
+The following methods were removed or made private from ServerName (due to HBASE-17624):
+
+- getHostNameMinusDomain(String): Was made private without a replacement.
+- parseHostname(String): Use #valueOf(String) instead.
+- parsePort(String): Use #valueOf(String) instead.
+- parseStartcode(String): Use #valueOf(String) instead.
+- getServerName(String, int, long): Was made private. Use #valueOf(String, int, long) instead.
+- getServerName(String, long): Use #valueOf(String, long) instead.
+- getHostAndPort(): Use #getAddress() instead.
+- getServerStartcodeFromServerName(String): Use an instance of ServerName to pull out the start code.
+- getServerNameLessStartCode(String): Use #getAddress() instead.
+
+
+---
+
+* [HBASE-24491](https://issues.apache.org/jira/browse/HBASE-24491) | *Major* | **Remove HRegionInfo**
+
+Removed HRegionInfo.
+
+
+---
+
+* [HBASE-24418](https://issues.apache.org/jira/browse/HBASE-24418) | *Major* | **Consolidate Normalizer implementations**
+
+<!-- markdown -->
+This change extends the Normalizer with a handful of new configurations. The configuration points supported are:
+* `hbase.normalizer.split.enabled` Whether to split a region as part of normalization. Default: `true`.
+* `hbase.normalizer.merge.enabled` Whether to merge a region as part of normalization. Default `true`.
+* `hbase.normalizer.min.region.count` The minimum number of regions in a table to consider it for merge normalization. Default: 3.
+* `hbase.normalizer.merge.min_region_age.days` The minimum age for a region to be considered for a merge, in days. Default: 3.
+* `hbase.normalizer.merge.min_region_size.mb` The minimum size for a region to be considered for a merge, in whole MBs. Default: 1.
+
+
+---
+
+* [HBASE-24309](https://issues.apache.org/jira/browse/HBASE-24309) | *Major* | **Avoid introducing log4j and slf4j-log4j dependencies for modules other than hbase-assembly**
+
+Adds an hbase-logging module and puts the log4j-related code there only, so other modules do not need to depend on log4j at compile scope. See the comments on Log4jUtils and InternalLog4jUtils for more details.
+
+Adds a log4j.properties to the test jar of the hbase-logging module, so other sub-modules only need to depend on this test jar at test scope to log to the console, instead of each placing a (nearly identical) log4j.properties in their test resources. The test jar is not included in the assembly tarball, so it does not affect the binary distribution.
+
+Bans the direct commons-logging dependency, and bans commons-logging and log4j imports in non-test code, to avoid interfering with downstream users' logging frameworks. In the hbase-logging module we do need to use log4j classes; the trick is to reference them by fully qualified class name.
+
+Adds jcl-over-slf4j and jul-to-slf4j dependencies: some of our dependencies use JCL or JUL as their logging framework, so their log messages should also be redirected to slf4j.
+
+
+---
+
+* [HBASE-21406](https://issues.apache.org/jira/browse/HBASE-21406) | *Minor* | **"status 'replication'" should not show SINK if the cluster does not act as sink**
+
+Added new metric to differentiate sink startup time from last OP applied time.
+
+The original behaviour was to always set the startup time as TimestampsOfLastAppliedOp and always show it in the "status 'replication'" command output, regardless of whether the sink had ever applied any OP.
+
+This was confusing, especially for clusters acting only as a source: the output could lead to wrong interpretations, such as the sink not applying edits or replication being stuck.
+
+With the new metric, we now compare the two values; if both are the same, no OP has ever been shipped to the given sink, and the output reflects that more clearly, for example:
+
+SINK: TimeStampStarted=Thu Dec 06 23:59:47 GMT 2018, Waiting for OPs...
+
+
+---
+
+* [HBASE-23841](https://issues.apache.org/jira/browse/HBASE-23841) | *Minor* | **Remove deprecated methods from Scan**
+
+All deprecated methods in Scan have been removed.
+
+Please see the release note for the sub tasks for the detailed removed methods.
+
+
+---
+
+* [HBASE-24471](https://issues.apache.org/jira/browse/HBASE-24471) | *Major* | **The way we bootstrap meta table is confusing**
+
+Moves all the meta initialization code in MasterFileSystem and HRegionServer to InitMetaProcedure, adding a new InitMetaProcedure step called INIT\_META\_WRITE\_FS\_LAYOUT to hold the moved code.
+
+This is an incompatible change but should not have much impact. InitMetaProcedure is only executed once, when bootstrapping a fresh cluster, so typically it does not affect rolling upgrades. Even if you hit a problem, as long as InitMetaProcedure has not finished there is guaranteed to be no user data in the cluster, so you can simply clean up the cluster and try again; there will be no data loss.
+
+
+---
+
+* [HBASE-24132](https://issues.apache.org/jira/browse/HBASE-24132) | *Major* | **Upgrade to Apache ZooKeeper 3.5.7**
+
+<!-- markdown -->
+HBase now ships ZooKeeper 3.5.x; previously it shipped the EOL'd 3.4.x. The 3.5.x client can talk to a 3.4.x ensemble.
+
+The ZooKeeper project has built a [FAQ](https://cwiki.apache.org/confluence/display/ZOOKEEPER/Upgrade+FAQ) that documents known issues and work-arounds when upgrading existing deployments.
+
+
+---
+
+* [HBASE-22287](https://issues.apache.org/jira/browse/HBASE-22287) | *Major* | **infinite retries on failed server in RSProcedureDispatcher**
+
+Add backoff. Avoid retrying every 100ms.
+
+
+---
+
+* [HBASE-24425](https://issues.apache.org/jira/browse/HBASE-24425) | *Major* | **Run hbck\_chore\_run and catalogjanitor\_run on draw of 'HBCK Report' page**
+
+Runs 'catalogjanitor\_run' and 'hbck\_chore\_run' inline with the loading of the 'HBCK Report' page.
+
+Pass '?cache=true' to skip inline invocation of 'catalogjanitor\_run' and 'hbck\_chore\_run' drawing the page.
+
+
+---
+
+* [HBASE-24408](https://issues.apache.org/jira/browse/HBASE-24408) | *Blocker* | **Introduce a general 'local region' to store data on master**
+
+Introduced a general 'local region' at master side to store the procedure data, etc.
+
+The hfile of this region will be stored on the root fs while the wal will be stored on the wal fs. This issue supersedes part of the code for HBASE-23326, as now we store the data in a 'MasterData' directory instead of 'MasterProcs'.
+
+The old hfiles will be moved to the global hfile archived directory with the suffix $-masterlocalhfile-$. The wal files will be moved to the global old wal directory with the suffix $masterlocalwal$. The TimeToLiveMasterLocalStoreHFileCleaner and TimeToLiveMasterLocalStoreWALCleaner are configured by default for cleaning the old hfiles and wal files, and the default TTLs are both 7 days.
+
+
+---
+
+* [HBASE-24115](https://issues.apache.org/jira/browse/HBASE-24115) | *Major* | **Relocate test-only REST "client" from src/ to test/ and mark Private**
+
+Relocate test-only REST RemoteHTable and RemoteAdmin from src/ to test/. And mark them as InterfaceAudience.Private.
+
+
+---
+
+* [HBASE-23938](https://issues.apache.org/jira/browse/HBASE-23938) | *Major* | **Replicate slow/large RPC calls to HDFS**
+
+Config key: hbase.regionserver.slowlog.systable.enabled
+Default value: false
+
+This config can be enabled if hbase.regionserver.slowlog.buffer.enabled is already enabled. While hbase.regionserver.slowlog.buffer.enabled ensures that slow/large RPC logs with complete details are written to the in-memory ring buffer at each RegionServer, hbase.regionserver.slowlog.systable.enabled ensures that all such logs are also persisted in the new system table hbase:slowlog.
+Operators can scan hbase:slowlog with filters to retrieve records matching specific attributes; the table is useful for capturing and analyzing the historical performance of slow RPC calls.
+
+hbase:slowlog consists of a single ColumnFamily, info. info contains multiple qualifiers similar to the attributes available to query via the Admin API: get\_slowlog\_responses.
+
+One example of a row from hbase:slowlog scan result (Attached a sample screenshot in the Jira) :
+
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  column=info:call\_details, timestamp=2020-05-16T14:59:58.764Z, value=Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)                             
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  column=info:client\_address, timestamp=2020-05-16T14:59:58.764Z, value=172.20.10.2:57348                                                                                          
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  column=info:method\_name, timestamp=2020-05-16T14:59:58.764Z, value=Scan                                                                                                          
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  column=info:param, timestamp=2020-05-16T14:59:58.764Z, value=region { type: REGION\_NAME value: "cluster\_test,cccccccc,1589635796466.aa45e1571d533f5ed0bb31cdccaaf9cf." } scan { a
+                                                             ttribute { name: "\_isolationlevel\_" value: "\\x5C000" } start\_row: "cccccccc" time\_range { from: 0 to: 9223372036854775807 } max\_versions: 1 cache\_blocks: true max\_result\_size: 2
+                                                             097152 caching: 2147483647 include\_stop\_row: false } number\_of\_rows: 2147483647 close\_scanner: false client\_handles\_partials: true client\_handles\_heartbeats: true track\_scan\_met
+                                                             rics: false                                                                                                                                                                      
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  column=info:processing\_time, timestamp=2020-05-16T14:59:58.764Z, value=24                                                                                                        
+ \\x024\\xC1\\x06X\\x81\\xF6\\xEC                                  column=info:queue\_time, timestamp=2020-05-16T14:59:58.764Z, value=0                                                                                                              

[... 4209 lines stripped ...]