Posted to commits@hbase.apache.org by st...@apache.org on 2018/04/10 03:57:49 UTC

[2/3] hbase git commit: HBASE-20142 Copy master doc into branch-2 and edit to make it suit 2.0.0

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index 4e35fd2..a6e9c3e 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -578,9 +578,8 @@ You could also set this in an environment variable or alias in your shell.
 The script _dev-support/make_rc.sh_ automates many of the below steps.
 It will checkout a tag, clean the checkout, build src and bin tarballs,
 and deploy the built jars to repository.apache.org.
-It does NOT do the modification of the _CHANGES.md_ and _RELEASENOTES.md_
-(See HBASE-18828 for how to generate these files)
-for the release, the checking of the produced artifacts to ensure they are 'good' --
+It does NOT do the modification of the _CHANGES.txt_ for the release,
+the checking of the produced artifacts to ensure they are 'good' --
 e.g. extracting the produced tarballs, verifying that they
 look right, then starting HBase and checking that everything is running
 correctly -- or the signing and pushing of the tarballs to
@@ -589,9 +588,9 @@ Take a look. Modify/improve as you see fit.
 ====
 
 .Procedure: Release Procedure
-. Update the _CHANGES.md and _RELEASENOTES.md_ files (See HBASE-18828 for how)_ and the POM files.
+. Update the _CHANGES.txt_ file and the POM files.
 +
-Update _CHANGES.md and _RELEASENOTES.md_ with the changes since the last release.
+Update _CHANGES.txt_ with the changes since the last release.
 Make sure the URL to the JIRA points to the proper location which lists fixes for this release.
 Adjust the version in all the POM files appropriately.
 If you are making a release candidate, you must remove the `-SNAPSHOT` label from all versions
@@ -605,8 +604,7 @@ To set a version in all the many poms of the hbase multi-module project, use a c
 $ mvn clean org.codehaus.mojo:versions-maven-plugin:2.5:set -DnewVersion=2.1.0-SNAPSHOT
 ----
 +
-Make sure all versions in poms are changed! Checkin the _CHANGES.md_ and _RELEASENOTES.md_
-and any maven version changes.
+Make sure all versions in poms are changed! Checkin the _CHANGES.txt_ and any maven version changes.
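++
+For example, an illustrative way to spot any poms still carrying a stale `-SNAPSHOT` version after the edit (adjust the pattern if you are checking for a different leftover version string):
++
+[source,bourne]
+----
+$ find . -name pom.xml -exec grep -l -- '-SNAPSHOT' {} \;
+----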
 
 . Update the documentation.
 +
@@ -769,15 +767,15 @@ To do this, log in to Apache's Nexus at link:https://repository.apache.org[repos
 Find your artifacts in the staging repository. Click on 'Staging Repositories' and look for a new one ending in "hbase" with a status of 'Open', select it.
 Use the tree view to expand the list of repository contents and inspect if the artifacts you expect are present. Check the POMs.
 As long as the staging repo is open you can re-upload if something is missing or built incorrectly.
-
++
 If something is seriously wrong and you would like to back out the upload, you can use the 'Drop' button to drop and delete the staging repository.
 Sometimes the upload fails in the middle. This is another reason you might have to 'Drop' the upload from the staging repository.
-
++
 If it checks out, close the repo using the 'Close' button. The repository must be closed before a public URL to it becomes available. It may take a few minutes for the repository to close. Once complete you'll see a public URL to the repository in the Nexus UI. You may also receive an email with the URL. Provide the URL to the temporary staging repository in the email that announces the release candidate.
 (Folks will need to add this repo URL to their local poms or to their local _settings.xml_ file to pull the published release candidate artifacts.)
-
++
 When the release vote concludes successfully, return here and click the 'Release' button to release the artifacts to central. The release process will automatically drop and delete the staging repository.
-
++
 .hbase-downstreamer
 [NOTE]
 ====
@@ -788,15 +786,18 @@ Make sure you are pulling from the repository when tests run and that you are no
 ====
 
 See link:https://www.apache.org/dev/publishing-maven-artifacts.html[Publishing Maven Artifacts] for some pointers on this maven staging process.
-
++
 If the HBase version ends in `-SNAPSHOT`, the artifacts go elsewhere.
 They are put into the Apache snapshots repository directly and are immediately available.
 Making a SNAPSHOT release, this is what you want to happen.
-
-At this stage, you have two tarballs in your 'build output directory' and a set of artifacts in a staging area of the maven repository, in the 'closed' state.
-
++
+At this stage, you have two tarballs in your 'build output directory' and a set of artifacts
+in a staging area of the maven repository, in the 'closed' state.
 Next sign, fingerprint and then 'stage' your release candidate build output directory via svnpubsub by committing
-your directory to link:https://dist.apache.org/repos/dist/dev/hbase/[The 'dev' distribution directory] (See comments on link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please delete old releases from mirroring system] but in essence it is an svn checkout of https://dist.apache.org/repos/dist/dev/hbase -- releases are at https://dist.apache.org/repos/dist/release/hbase). In the _version directory_ run the following commands:
+your directory to link:https://dist.apache.org/repos/dist/dev/hbase/[The dev distribution directory]
+(See comments on link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please delete old releases from mirroring system]
+but in essence it is an svn checkout of link:https://dist.apache.org/repos/dist/dev/hbase[dev/hbase] -- releases are at
+link:https://dist.apache.org/repos/dist/release/hbase[release/hbase]). In the _version directory_ run the following commands:
 
 [source,bourne]
 ----
@@ -905,13 +906,21 @@ For any other module, for example `hbase-common`, the tests must be strict unit
 ==== Testing the HBase Shell
 
 The HBase shell and its tests are predominantly written in jruby.
-In order to make these tests run as a part of the standard build, there is a single JUnit test, `TestShell`, that takes care of loading the jruby implemented tests and running them.
+
+In order to make these tests run as a part of the standard build, there are a few JUnit test classes that take care of loading the jruby implemented tests and running them.
+The tests were split into separate classes to accommodate class-level timeouts (see <<hbase.unittests>> for specifics).
 You can run all of these tests from the top level with:
 
 [source,bourne]
 ----
+      mvn clean test -Dtest=Test*Shell
+----
 
-      mvn clean test -Dtest=TestShell
+If you have previously done a `mvn install`, then you can instruct maven to run only the tests in the hbase-shell module with:
+
+[source,bourne]
+----
+      mvn clean test -pl hbase-shell
 ----
 
 Alternatively, you may limit the shell tests that run using the system variable `shell.test`.
@@ -920,8 +929,7 @@ For example, the tests that cover the shell commands for altering tables are con
 
 [source,bourne]
 ----
-
-      mvn clean test -Dtest=TestShell -Dshell.test=/AdminAlterTableTest/
+      mvn clean test -pl hbase-shell -Dshell.test=/AdminAlterTableTest/
 ----
 
 You may also use a link:http://docs.ruby-doc.com/docs/ProgrammingRuby/html/language.html#UJ[Ruby Regular Expression
@@ -931,14 +939,13 @@ You can run all of the HBase admin related tests, including both the normal admi
 [source,bourne]
 ----
 
-      mvn clean test -Dtest=TestShell -Dshell.test=/.*Admin.*Test/
+      mvn clean test -pl hbase-shell -Dshell.test=/.*Admin.*Test/
 ----
 
 In the event of a test failure, you can see details by examining the XML version of the surefire report results
 
 [source,bourne]
 ----
-
       vim hbase-shell/target/surefire-reports/TEST-org.apache.hadoop.hbase.client.TestShell.xml
 ----
 
@@ -1526,35 +1533,6 @@ We use Git for source code management and latest development happens on `master`
 branches for past major/minor/maintenance releases and important features and bug fixes are often
  back-ported to them.
 
-=== Release Managers
-
-Each maintained release branch has a release manager, who volunteers to coordinate new features and bug fixes are backported to that release.
-The release managers are link:https://hbase.apache.org/team-list.html[committers].
-If you would like your feature or bug fix to be included in a given release, communicate with that release manager.
-If this list goes out of date or you can't reach the listed person, reach out to someone else on the list.
-
-NOTE: End-of-life releases are not included in this list.
-
-.Release Managers
-[cols="1,1", options="header"]
-|===
-| Release
-| Release Manager
-
-| 1.2
-| Sean Busbey
-
-| 1.3
-| Mikhail Antonov
-
-| 1.4
-| Andrew Purtell
-
-| 2.0
-| Michael Stack
-
-|===
-
 [[code.standards]]
 === Code Standards
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/getting_started.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc b/src/main/asciidoc/_chapters/getting_started.adoc
index 32eb669..0edddfa 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -44,24 +44,6 @@ table, enable or disable the table, and start and stop HBase.
 
 Apart from downloading HBase, this procedure should take less than 10 minutes.
 
-[[loopback.ip]]
-[NOTE]
-====
-.Loopback IP - HBase 0.94.x and earlier
-
-Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1.
-Ubuntu and some other distributions default to 127.0.1.1 and this will cause
-problems for you. See link:https://web-beta.archive.org/web/20140104070155/http://blog.devving.com/why-does-hbase-care-about-etchosts[Why does HBase care about /etc/hosts?] for detail
-
-The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, on Ubuntu. Use this as a template if you run into trouble.
-[listing]
-----
-127.0.0.1 localhost
-127.0.0.1 ubuntu.ubuntu-domain ubuntu
-----
-This issue has been fixed in hbase-0.96.0 and beyond.
-====
-
 === JDK Version Requirements
 
 HBase requires that a JDK be installed.
@@ -70,7 +52,7 @@ See <<java,Java>> for information about supported JDK versions.
 === Get Started with HBase
 
 .Procedure: Download, Configure, and Start HBase in Standalone Mode
-. Choose a download site from this list of link:https://www.apache.org/dyn/closer.cgi/hbase/[Apache Download Mirrors].
+. Choose a download site from this list of link:https://www.apache.org/dyn/closer.lua/hbase/[Apache Download Mirrors].
   Click on the suggested top link.
   This will take you to a mirror of _HBase Releases_.
   Click on the folder named _stable_ and then download the binary file that ends in _.tar.gz_ to your local filesystem.
@@ -100,7 +82,7 @@ JAVA_HOME=/usr
 +
 
 . Edit _conf/hbase-site.xml_, which is the main HBase configuration file.
-  At this time, you only need to specify the directory on the local filesystem where HBase and ZooKeeper write data.
+  At this time, you need to specify the directory on the local filesystem where HBase and ZooKeeper write data and acknowledge some risks.
   By default, a new directory is created under /tmp.
   Many servers are configured to delete the contents of _/tmp_ upon reboot, so you should store the data elsewhere.
   The following configuration will store HBase's data in the _hbase_ directory, in the home directory of the user called `testuser`.
@@ -120,6 +102,21 @@ JAVA_HOME=/usr
     <name>hbase.zookeeper.property.dataDir</name>
     <value>/home/testuser/zookeeper</value>
   </property>
+  <property>
+    <name>hbase.unsafe.stream.capability.enforce</name>
+    <value>false</value>
+    <description>
+      Controls whether HBase will check for stream capabilities (hflush/hsync).
+
+      Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
+      with the 'file://' scheme, but be mindful of the NOTE below.
+
+      WARNING: Setting this to false blinds you to potential data loss and
+      inconsistent system state in the event of process and/or node failures. If
+      HBase is complaining of an inability to use hsync or hflush it's most
+      likely not a false positive.
+    </description>
+  </property>
 </configuration>
 ----
 ====
@@ -129,7 +126,14 @@ HBase will do this for you.  If you create the directory,
 HBase will attempt to do a migration, which is not what you want.
 +
 NOTE: The _hbase.rootdir_ in the above example points to a directory
-in the _local filesystem_. The 'file:/' prefix is how we denote local filesystem.
+in the _local filesystem_. The 'file://' prefix is how we denote the local
+filesystem. You should take the WARNING present in the configuration example
+to heart. In standalone mode HBase makes use of the local filesystem abstraction
+from the Apache Hadoop project. That abstraction doesn't provide the durability
+promises that HBase needs to operate safely. This is fine for local development
+and testing use cases where the cost of cluster failure is well contained. It is
+not appropriate for production deployments; eventually you will lose data.
+
 To home HBase on an existing instance of HDFS, set the _hbase.rootdir_ to point at a
 directory up on your instance: e.g. _hdfs://namenode.example.org:8020/hbase_.
 For more on this variant, see the section below on Standalone HBase over HDFS.
@@ -181,7 +185,7 @@ hbase(main):001:0> create 'test', 'cf'
 
 . List Information About your Table
 +
-Use the `list` command to
+Use the `list` command to confirm your table exists
 +
 ----
 hbase(main):002:0> list 'test'
@@ -192,6 +196,22 @@ test
 => ["test"]
 ----
 
++
+Now use the `describe` command to see details, including configuration defaults
++
+----
+hbase(main):003:0> describe 'test'
+Table test is ENABLED
+test
+COLUMN FAMILIES DESCRIPTION
+{NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE =>
+'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'f
+alse', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE
+ => '65536'}
+1 row(s)
+Took 0.9998 seconds
+----
+
 . Put data into your table.
 +
 To put data into your table, use the `put` command.
@@ -332,7 +352,7 @@ First, add the following property which directs HBase to run in distributed mode
 ----
 +
 Next, change the `hbase.rootdir` from the local filesystem to the address of your HDFS instance, using the `hdfs:////` URI syntax.
-In this example, HDFS is running on the localhost at port 8020.
+In this example, HDFS is running on the localhost at port 8020. Be sure to either remove the entry for `hbase.unsafe.stream.capability.enforce` or set it to true.
 +
 [source,xml]
 ----
@@ -389,7 +409,7 @@ The following command starts 3 backup servers using ports 16002/16012, 16003/160
 +
 ----
 
-$ ./bin/local-master-backup.sh 2 3 5
+$ ./bin/local-master-backup.sh start 2 3 5
 ----
 +
 To kill a backup master without killing the entire cluster, you need to find its process ID (PID). The PID is stored in a file with a name like _/tmp/hbase-USER-X-master.pid_.
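++
+For example, the PID file for the backup master started with offset `1` by the user `testuser` would be _/tmp/hbase-testuser-1-master.pid_ (illustrative; substitute your own username and offset), and that backup master can be killed with:
++
+----
+$ cat /tmp/hbase-testuser-1-master.pid | xargs kill -9
+----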

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/hbase-default.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase-default.adoc b/src/main/asciidoc/_chapters/hbase-default.adoc
index 7798657..f809f28 100644
--- a/src/main/asciidoc/_chapters/hbase-default.adoc
+++ b/src/main/asciidoc/_chapters/hbase-default.adoc
@@ -150,7 +150,7 @@ A comma-separated list of BaseLogCleanerDelegate invoked by
 *`hbase.master.logcleaner.ttl`*::
 +
 .Description
-Maximum time a WAL can stay in the .oldlogdir directory,
+Maximum time a WAL can stay in the oldWALs directory,
     after which it will be cleaned by a Master thread.
 +
 .Default

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/mapreduce.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/mapreduce.adoc b/src/main/asciidoc/_chapters/mapreduce.adoc
index 2f72a2d..61cff86 100644
--- a/src/main/asciidoc/_chapters/mapreduce.adoc
+++ b/src/main/asciidoc/_chapters/mapreduce.adoc
@@ -120,7 +120,7 @@ You might find the more selective `hbase mapredcp` tool output of interest; it l
 to run a basic mapreduce job against an hbase install. It does not include configuration. You'll probably need to add
 these if you want your MapReduce job to find the target cluster. You'll probably have to also add pointers to extra jars
 once you start to do anything of substance. Just specify the extras by passing the system property `-Dtmpjars` when
-you run `hbase mapredcp`.
+you run `hbase mapredcp`. 
 
 For jobs that do not package their dependencies or call `TableMapReduceUtil#addDependencyJars`, the following command structure is necessary:
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/ops_mgt.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc b/src/main/asciidoc/_chapters/ops_mgt.adoc
index a2bf3fd..6d332af 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -44,17 +44,18 @@ Some commands, such as `version`, `pe`, `ltt`, `clean`, are not available in pre
 $ bin/hbase
 Usage: hbase [<options>] <command> [<args>]
 Options:
-  --config DIR    Configuration direction to use. Default: ./conf
-  --hosts HOSTS   Override the list in 'regionservers' file
+  --config DIR     Configuration direction to use. Default: ./conf
+  --hosts HOSTS    Override the list in 'regionservers' file
+  --auth-as-server Authenticate to ZooKeeper using servers configuration
 
 Commands:
 Some commands take arguments. Pass no args or -h for usage.
   shell           Run the HBase shell
   hbck            Run the hbase 'fsck' tool
+  snapshot        Tool for managing snapshots
   wal             Write-ahead-log analyzer
   hfile           Store file analyzer
   zkcli           Run the ZooKeeper shell
-  upgrade         Upgrade hbase
   master          Run an HBase HMaster node
   regionserver    Run an HBase HRegionServer node
   zookeeper       Run a ZooKeeper server
@@ -78,7 +79,7 @@ Others, such as `hbase shell` (<<shell>>), `hbase upgrade` (<<upgrading>>), and
 === Canary
 
 There is a Canary class that can help users canary-test the HBase cluster status, checking every column-family of every region, or at RegionServer granularity.
-To see the usage, use the `--help` parameter.
+To see the usage, use the `-help` parameter.
 
 ----
 $ ${HBASE_HOME}/bin/hbase canary -help
@@ -88,18 +89,32 @@ Usage: hbase canary [opts] [table1 [table2]...] | [regionserver1 [regionserver2]
    -help          Show this help and exit.
    -regionserver  replace the table argument to regionserver,
       which means to enable regionserver mode
+   -allRegions    Tries all regions on a regionserver,
+      only works in regionserver mode.
+   -zookeeper    Tries to grab zookeeper.znode.parent
+      on each zookeeper instance
    -daemon        Continuous check at defined intervals.
    -interval <N>  Interval between checks (sec)
-   -e             Use region/regionserver as regular expression
-      which means the region/regionserver is regular expression pattern
+   -e             Use table/regionserver as regular expression
+      which means the table/regionserver is regular expression pattern
    -f <B>         stop whole program if first error occurs, default is true
-   -t <N>         timeout for a check, default is 600000 (milliseconds)
+   -t <N>         timeout for a check, default is 600000 (millisecs)
+   -writeTableTimeout <N>         write timeout for the writeTable, default is 600000 (millisecs)
+   -readTableTimeouts <tableName>=<read timeout>,<tableName>=<read timeout>, ...    comma-separated list of read timeouts per table (no spaces), default is 600000 (millisecs)
    -writeSniffing enable the write sniffing in canary
    -treatFailureAsError treats read / write failure as error
    -writeTable    The table used for write sniffing. Default is hbase:canary
+   -Dhbase.canary.read.raw.enabled=<true/false> Use this flag to enable or disable raw scan during read canary test Default is false and raw is not enabled during scan
    -D<configProperty>=<value> assigning or override the configuration params
 ----
 
+[NOTE]
+The `Sink` class is instantiated using the `hbase.canary.sink.class` configuration property, which
+also determines the Monitor class that is used. If this property is not set, `RegionServerStdOutSink`
+is used. The Sink must match the parameters passed to the _canary_ command.
+For example, to use table parameters you have to set the `hbase.canary.sink.class` property to
+`org.apache.hadoop.hbase.tool.Canary$RegionStdOutSink`.
+
 This tool will return non zero error codes to user for collaborating with other monitoring tools, such as Nagios.
 The error code definitions are:
 
@@ -109,6 +124,7 @@ private static final int USAGE_EXIT_CODE = 1;
 private static final int INIT_ERROR_EXIT_CODE = 2;
 private static final int TIMEOUT_ERROR_EXIT_CODE = 3;
 private static final int ERROR_EXIT_CODE = 4;
+private static final int FAILURE_EXIT_CODE = 5;
 ----
 
 Here are some examples based on the following given case.
@@ -183,10 +199,10 @@ This daemon will stop itself and return non-zero error code if any error occurs,
 $ ${HBASE_HOME}/bin/hbase canary -daemon
 ----
 
-Run repeatedly with internal 5 seconds and will not stop itself even if errors occur in the test.
+This runs the canary repeatedly with 5-second intervals and does not stop even if errors occur in the test.
 
 ----
-$ ${HBASE_HOME}/bin/hbase canary -daemon -interval 50000 -f false
+$ ${HBASE_HOME}/bin/hbase canary -daemon -interval 5 -f false
 ----
 
 ==== Force timeout if canary test stuck
@@ -196,7 +212,7 @@ Because of this we provide a timeout option to kill the canary test and return a
 This run sets the timeout value to 60 seconds, the default value is 600 seconds.
 
 ----
-$ ${HBASE_HOME}/bin/hbase canary -t 600000
+$ ${HBASE_HOME}/bin/hbase canary -t 60000
 ----
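+
+Per-table read timeouts (and a write timeout for the write-sniffing table) can also be set, using the `-readTableTimeouts` and `-writeTableTimeout` options shown in the usage above. An illustrative example follows; the table names are placeholders:
+
+----
+# tableA and tableB are hypothetical table names; timeouts are in milliseconds
+$ ${HBASE_HOME}/bin/hbase canary -readTableTimeouts tableA=60000,tableB=120000 tableA tableB
+----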
 
 ==== Enable write sniffing in canary
@@ -225,7 +241,7 @@ while returning normal exit code. To treat read / write failure as error, you ca
 with the `-treatFailureAsError` option. When enabled, read / write failure would result in error
 exit code.
 ----
-$ ${HBASE_HOME}/bin/hbase canary --treatFailureAsError
+$ ${HBASE_HOME}/bin/hbase canary -treatFailureAsError
 ----
 
 ==== Running Canary in a Kerberos-enabled Cluster
@@ -257,7 +273,7 @@ This example shows each of the properties with valid values.
   <value>/etc/hbase/conf/keytab.krb5</value>
 </property>
 <!-- optional params -->
-property>
+<property>
   <name>hbase.client.dns.interface</name>
   <value>default</value>
 </property>
@@ -372,7 +388,7 @@ directory.
 You can get a textual dump of a WAL file content by doing the following:
 
 ----
- $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --dump hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
+ $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --dump hdfs://example.org:8020/hbase/WALs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
 ----
 
 The return code will be non-zero if there are any issues with the file so you can test wholesomeness of file by redirecting `STDOUT` to `/dev/null` and testing the program return.
@@ -380,7 +396,7 @@ The return code will be non-zero if there are any issues with the file so you ca
 Similarly you can force a split of a log file directory by doing:
 
 ----
- $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --split hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/
+ $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --split hdfs://example.org:8020/hbase/WALs/example.org,60020,1283516293161/
 ----
 
 [[hlog_tool.prettyprint]]
@@ -390,7 +406,7 @@ The `WALPrettyPrinter` is a tool with configurable options to print the contents
 You can invoke it via the HBase cli with the 'wal' command.
 
 ----
- $ ./bin/hbase wal hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
+ $ ./bin/hbase wal hdfs://example.org:8020/hbase/WALs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
 ----
 
 .WAL Printing in older versions of HBase
@@ -768,27 +784,20 @@ Options:
 
 === `hbase pe`
 
-The `hbase pe` command is a shortcut provided to run the `org.apache.hadoop.hbase.PerformanceEvaluation` tool, which is used for testing.
-The `hbase pe` command was introduced in HBase 0.98.4.
+The `hbase pe` command runs the PerformanceEvaluation tool, which is used for testing.
 
 The PerformanceEvaluation tool accepts many different options and commands.
 For usage instructions, run the command with no options.
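+
+An illustrative invocation (option names may vary between releases; run `hbase pe` with no arguments to see the usage for your version) that drives a random-write workload from a single client process without MapReduce:
+
+----
+$ bin/hbase pe --nomapred --rows=100000 randomWrite 1
+----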
 
-To run PerformanceEvaluation prior to HBase 0.98.4, issue the command `hbase org.apache.hadoop.hbase.PerformanceEvaluation`.
-
 The PerformanceEvaluation tool has received many updates in recent HBase releases, including support for namespaces, support for tags, cell-level ACLs and visibility labels, multiget support for RPC calls, increased sampling sizes, an option to randomly sleep during testing, and ability to "warm up" the cluster before testing starts.
 
 === `hbase ltt`
 
-The `hbase ltt` command is a shortcut provided to run the `org.apache.hadoop.hbase.util.LoadTestTool` utility, which is used for testing.
-The `hbase ltt` command was introduced in HBase 0.98.4.
+The `hbase ltt` command runs the LoadTestTool utility, which is used for testing.
 
 You must specify either `-init_only` or at least one of `-write`, `-update`, or `-read`.
 For general usage instructions, pass the `-h` option.
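+
+An illustrative run (confirm the exact argument syntax with `-h` for your release) that writes 3 columns of roughly 1024 bytes per key for 100000 keys and reads back, verifying 100% of them:
+
+----
+$ bin/hbase ltt -write 3:1024 -num_keys 100000 -read 100
+----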
 
-To run LoadTestTool prior to HBase 0.98.4, issue the command +hbase
-          org.apache.hadoop.hbase.util.LoadTestTool+.
-
 The LoadTestTool has received many updates in recent HBase releases, including support for namespaces, support for tags, cell-level ACLS and visibility labels, testing security-related features, ability to specify the number of regions per server, tests for multi-get RPC calls, and tests relating to replication.
 
 [[ops.regionmgt]]
@@ -851,7 +860,7 @@ See <<lb,lb>> below.
 [NOTE]
 ====
 In hbase-2.0, in the bin directory, we added a script named _considerAsDead.sh_ that can be used to kill a regionserver.
-Hardware issues could be detected by specialized monitoring tools before the  zookeeper timeout has expired. _considerAsDead.sh_ is a simple function to mark a RegionServer as dead.
+Hardware issues could be detected by specialized monitoring tools before the zookeeper timeout has expired. _considerAsDead.sh_ is a simple function to mark a RegionServer as dead.
 It deletes all the znodes of the server, starting the recovery process.
 Plug in the script into your monitoring/fault detection tools to initiate faster failover.
 Be careful how you use this disruptive tool.
@@ -2559,8 +2568,10 @@ full implications and have a sufficient background in managing HBase clusters.
 It was developed by Yahoo! and they run it at scale on their large grid cluster.
 See link:http://www.slideshare.net/HBaseCon/keynote-apache-hbase-at-yahoo-scale[HBase at Yahoo! Scale].
 
-RSGroups can be defined and managed with shell commands or corresponding Java
-APIs. A server can be added to a group with hostname and port pair and tables
+RSGroups are defined and managed with shell commands. The shell drives a
+Coprocessor Endpoint whose API is marked private given this is an evolving
+feature; the Coprocessor API is not for public consumption.
+A server can be added to a group with hostname and port pair and tables
 can be moved to this group so that only regionservers in the same rsgroup can
 host the regions of the table. RegionServers and tables can only belong to one
 rsgroup at a time. By default, all tables and regionservers belong to the
@@ -2707,3 +2718,141 @@ To enable ACL, add the following to your hbase-site.xml and restart your Master:
 ----
 
 
+
+[[normalizer]]
+== Region Normalizer
+
+The Region Normalizer tries to make all Regions in a table roughly the same size.
+It does this by first finding a rough average. Any region that is larger than twice this
+size is split. Any region that is much smaller is merged into an adjacent region.
+It is good to run the Normalizer occasionally, during a quiet period after the cluster
+has been running a while, or after a burst of activity such as a large delete.
+
+(The bulk of the below detail was copied wholesale from the blog by Romil Choksi at
+link:https://community.hortonworks.com/articles/54987/hbase-region-normalizer.html[HBase Region Normalizer])
+
+The Region Normalizer is a feature available since HBase 1.2. It runs a set of
+pre-calculated merge/split actions to resize regions that are either too
+large or too small compared to the average region size for a given table. When
+invoked, the Region Normalizer computes a normalization 'plan' for all of the tables in
+HBase. System tables (such as hbase:meta, hbase:namespace, Phoenix system tables,
+etc.) and user tables with normalization disabled are ignored while computing the
+plan. For tables with normalization enabled, the normalization plan is carried out in
+parallel across multiple tables.
+
+Normalizer can be enabled or disabled globally for the entire cluster using the
+`normalizer_switch` command in the HBase shell. Normalization can also be
+controlled on a per-table basis; it is disabled by default when a table is
+created. Normalization for a table can be enabled or disabled by setting the
+`NORMALIZATION_ENABLED` table attribute to true or false.
+
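+For example, to enable normalization for an existing table from the command line (the table name below is illustrative; the attribute can also be set via an interactive `alter`):
+
+[source,bash]
+----
+# 'my_table' is a placeholder table name
+$ echo "alter 'my_table', NORMALIZATION_ENABLED => 'true'" | bin/hbase shell
+----
+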
+To check normalizer status and enable/disable normalizer
+
+[source,bash]
+----
+hbase(main):001:0> normalizer_enabled
+true 
+0 row(s) in 0.4870 seconds
+ 
+hbase(main):002:0> normalizer_switch false
+true 
+0 row(s) in 0.0640 seconds
+ 
+hbase(main):003:0> normalizer_enabled
+false 
+0 row(s) in 0.0120 seconds
+ 
+hbase(main):004:0> normalizer_switch true
+false
+0 row(s) in 0.0200 seconds
+ 
+hbase(main):005:0> normalizer_enabled
+true
+0 row(s) in 0.0090 seconds
+----
+
+When enabled, Normalizer is invoked in the background every 5 mins (by default),
+which can be configured using `hbase.normalization.period` in `hbase-site.xml`.
+Normalizer can also be invoked manually/programmatically at will using the HBase shell's
+`normalize` command. HBase by default uses `SimpleRegionNormalizer`, but users can
+design their own normalizer as long as they implement the RegionNormalizer Interface.
+Details about the logic used by `SimpleRegionNormalizer` to compute its normalization
+plan can be found link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.html[here].
+
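+For example, a manual run can be triggered from the command line by piping the command into the shell non-interactively:
+
+[source,bash]
+----
+$ echo "normalize" | bin/hbase shell
+----
+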
+The below example shows a normalization plan being computed for a user table, and a
+merge action being taken as a result of the normalization plan computed by SimpleRegionNormalizer.
+
+Consider a user table with some pre-split regions, having 3 equally large regions
+(about 100K rows) and 1 relatively small region (about 25K rows). Following is a
+snippet from an hbase:meta table scan showing each of the pre-split regions for
+the user table.
+
+----
+table_p8ddpd6q5z,,1469494305548.68b9892220865cb6048 column=info:regioninfo, timestamp=1469494306375, value={ENCODED => 68b9892220865cb604809c950d1adf48, NAME => 'table_p8ddpd6q5z,,1469494305548.68b989222 09c950d1adf48.   0865cb604809c950d1adf48.', STARTKEY => '', ENDKEY => '1'} 
+.... 
+table_p8ddpd6q5z,1,1469494317178.867b77333bdc75a028 column=info:regioninfo, timestamp=1469494317848, value={ENCODED => 867b77333bdc75a028bb4c5e4b235f48, NAME => 'table_p8ddpd6q5z,1,1469494317178.867b7733 bb4c5e4b235f48.  3bdc75a028bb4c5e4b235f48.', STARTKEY => '1', ENDKEY => '3'} 
+.... 
+table_p8ddpd6q5z,3,1469494328323.98f019a753425e7977 column=info:regioninfo, timestamp=1469494328486, value={ENCODED => 98f019a753425e7977ab8636e32deeeb, NAME => 'table_p8ddpd6q5z,3,1469494328323.98f019a7 ab8636e32deeeb.  53425e7977ab8636e32deeeb.', STARTKEY => '3', ENDKEY => '7'} 
+.... 
+table_p8ddpd6q5z,7,1469494339662.94c64e748979ecbb16 column=info:regioninfo, timestamp=1469494339859, value={ENCODED => 94c64e748979ecbb166f6cc6550e25c6, NAME => 'table_p8ddpd6q5z,7,1469494339662.94c64e74 6f6cc6550e25c6.   8979ecbb166f6cc6550e25c6.', STARTKEY => '7', ENDKEY => '8'} 
+.... 
+table_p8ddpd6q5z,8,1469494339662.6d2b3f5fd1595ab8e7 column=info:regioninfo, timestamp=1469494339859, value={ENCODED => 6d2b3f5fd1595ab8e7c031876057b1ee, NAME => 'table_p8ddpd6q5z,8,1469494339662.6d2b3f5f c031876057b1ee.   d1595ab8e7c031876057b1ee.', STARTKEY => '8', ENDKEY => ''}  
+----
+Invoking the normalizer using `normalize` in the HBase shell, the below log snippet
+from the HMaster log shows the normalization plan computed as per the logic defined for
+SimpleRegionNormalizer. Since the total region size (in MB) for the adjacent smallest
+regions in the table is less than the average region size, the normalizer computes a
+plan to merge these two regions.
+
+----
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] master.HMaster: Skipping normalization for table: hbase:namespace, as it's either system table or doesn't have auto
+normalization turned on
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] master.HMaster: Skipping normalization for table: hbase:backup, as it's either system table or doesn't have auto normalization turned on
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] master.HMaster: Skipping normalization for table: hbase:meta, as it's either system table or doesn't have auto normalization turned on
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] master.HMaster: Skipping normalization for table: table_h2osxu3wat, as it's either system table or doesn't have autonormalization turned on
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.SimpleRegionNormalizer: Computing normalization plan for table: table_p8ddpd6q5z, number of regions: 5
+2016-07-26 07:08:26,929 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, total aggregated regions size: 12
+2016-07-26 07:08:26,929 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, average region size: 2.4
+2016-07-26 07:08:26,929 INFO  [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, small region size: 0 plus its neighbor size: 0, less thanthe avg size 2.4, merging them
+2016-07-26 07:08:26,971 INFO  [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.MergeNormalizationPlan: Executing merging normalization plan: MergeNormalizationPlan{firstRegion={ENCODED=> d51df2c58e9b525206b1325fd925a971, NAME => 'table_p8ddpd6q5z,,1469514755237.d51df2c58e9b525206b1325fd925a971.', STARTKEY => '', ENDKEY => '1'}, secondRegion={ENCODED => e69c6b25c7b9562d078d9ad3994f5330, NAME => 'table_p8ddpd6q5z,1,1469514767669.e69c6b25c7b9562d078d9ad3994f5330.',
+STARTKEY => '1', ENDKEY => '3'}}
+----
+The Region Normalizer, as per its computed plan, merged the region with start key ''
+and end key '1' with another region having start key '1' and end key '3'.
+Now that these regions have been merged, we see a single new region with start key
+'' and end key '3'.
+----
+table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:mergeA, timestamp=1469516907431, 
+value=PBUF\x08\xA5\xD9\x9E\xAF\xE2*\x12\x1B\x0A\x07default\x12\x10table_p8ddpd6q5z\x1A\x00"\x011(\x000\x00 ea74d246741ba.   8\x00 
+table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:mergeB, timestamp=1469516907431,
+value=PBUF\x08\xB5\xBA\x9F\xAF\xE2*\x12\x1B\x0A\x07default\x12\x10table_p8ddpd6q5z\x1A\x011"\x013(\x000\x0 ea74d246741ba.   08\x00 
+table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:regioninfo, timestamp=1469516907431, value={ENCODED => e06c9b83c4a252b130eea74d246741ba, NAME => 'table_p8ddpd6q5z,,1469516907210.e06c9b83c ea74d246741ba.   4a252b130eea74d246741ba.', STARTKEY => '', ENDKEY => '3'}
+.... 
+table_p8ddpd6q5z,3,1469514778736.bf024670a847c0adff column=info:regioninfo, timestamp=1469514779417, value={ENCODED => bf024670a847c0adffb74b2e13408b32, NAME => 'table_p8ddpd6q5z,3,1469514778736.bf024670 b74b2e13408b32.  a847c0adffb74b2e13408b32.' STARTKEY => '3', ENDKEY => '7'} 
+.... 
+table_p8ddpd6q5z,7,1469514790152.7c5a67bc755e649db2 column=info:regioninfo, timestamp=1469514790312, value={ENCODED => 7c5a67bc755e649db22f49af6270f1e1, NAME => 'table_p8ddpd6q5z,7,1469514790152.7c5a67bc 2f49af6270f1e1.  755e649db22f49af6270f1e1.', STARTKEY => '7', ENDKEY => '8'} 
+....
+table_p8ddpd6q5z,8,1469514790152.58e7503cda69f98f47 column=info:regioninfo, timestamp=1469514790312, value={ENCODED => 58e7503cda69f98f4755178e74288c3a, NAME => 'table_p8ddpd6q5z,8,1469514790152.58e7503c 55178e74288c3a.  da69f98f4755178e74288c3a.', STARTKEY => '8', ENDKEY => ''}
+----
+
+A similar example can be seen for a user table with 3 smaller regions and 1
+relatively large region. For this example, we have a user table with 1 large region containing 100K rows, and 3 relatively smaller regions with about 33K rows each. As seen from the normalization plan, since the larger region is more than twice the average region size, it ends up being split into two regions: one with start key '1' and end key '154717', and the other with start key '154717' and end key '3'.
+----
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] master.HMaster: Skipping normalization for table: hbase:backup, as it's either system table or doesn't have auto normalization turned on
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Computing normalization plan for table: table_p8ddpd6q5z, number of regions: 4
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, total aggregated regions size: 12
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, average region size: 3.0
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: No normalization needed, regions look good for table: table_p8ddpd6q5z
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Computing normalization plan for table: table_h2osxu3wat, number of regions: 5
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_h2osxu3wat, total aggregated regions size: 7
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_h2osxu3wat, average region size: 1.4
+2016-07-26 07:39:45,636 INFO  [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_h2osxu3wat, large region table_h2osxu3wat,1,1469515926544.27f2fdbb2b6612ea163eb6b40753c3db. has size 4, more than twice avg size, splitting
+2016-07-26 07:39:45,640 INFO [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SplitNormalizationPlan: Executing splitting normalization plan: SplitNormalizationPlan{regionInfo={ENCODED => 27f2fdbb2b6612ea163eb6b40753c3db, NAME => 'table_h2osxu3wat,1,1469515926544.27f2fdbb2b6612ea163eb6b40753c3db.', STARTKEY => '1', ENDKEY => '3'}, splitPoint=null}
+2016-07-26 07:39:45,656 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] master.HMaster: Skipping normalization for table: hbase:namespace, as it's either system table or doesn't have auto normalization turned on
+2016-07-26 07:39:45,656 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] master.HMaster: Skipping normalization for table: hbase:meta, as it's either system table or doesn't
+have auto normalization turned on …..…..….
+2016-07-26 07:39:46,246 DEBUG [AM.ZK.Worker-pool2-t278] master.RegionStates: Onlined 54de97dae764b864504704c1c8d3674a on hbase-test-rc-5.openstacklocal,16020,1469419333913 {ENCODED => 54de97dae764b864504704c1c8d3674a, NAME => 'table_h2osxu3wat,1,1469518785661.54de97dae764b864504704c1c8d3674a.', STARTKEY => '1', ENDKEY => '154717'}
+2016-07-26 07:39:46,246 INFO  [AM.ZK.Worker-pool2-t278] master.RegionStates: Transition {d6b5625df331cfec84dce4f1122c567f state=SPLITTING_NEW, ts=1469518786246, server=hbase-test-rc-5.openstacklocal,16020,1469419333913} to {d6b5625df331cfec84dce4f1122c567f state=OPEN, ts=1469518786246,
+server=hbase-test-rc-5.openstacklocal,16020,1469419333913}
+2016-07-26 07:39:46,246 DEBUG [AM.ZK.Worker-pool2-t278] master.RegionStates: Onlined d6b5625df331cfec84dce4f1122c567f on hbase-test-rc-5.openstacklocal,16020,1469419333913 {ENCODED => d6b5625df331cfec84dce4f1122c567f, NAME => 'table_h2osxu3wat,154717,1469518785661.d6b5625df331cfec84dce4f1122c567f.', STARTKEY => '154717', ENDKEY => '3'}
+----

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/schema_design.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc b/src/main/asciidoc/_chapters/schema_design.adoc
index 4cd7656..a25b85e 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -1148,16 +1148,41 @@ Detect regionserver failure as fast as reasonable. Set the following parameters:
 - `dfs.namenode.avoid.read.stale.datanode = true`
 - `dfs.namenode.avoid.write.stale.datanode = true`
 
+[[shortcircuit.reads]]
 ===  Optimize on the Server Side for Low Latency
-
-* Skip the network for local blocks. In `hbase-site.xml`, set the following parameters:
+Skip the network for local blocks when the RegionServer goes to read from HDFS by exploiting HDFS's
+link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html[Short-Circuit Local Reads] facility.
+Note that setup must be done both at the datanode and on the dfsclient ends of the connection -- i.e. at the RegionServer --
+and that both ends need to have loaded the hadoop native `.so` library.
+After configuring your hadoop setting _dfs.client.read.shortcircuit_ to _true_, configuring
+the _dfs.domain.socket.path_ path for the datanode and dfsclient to share, and restarting, next configure
+the regionserver/dfsclient side.
+
+* In `hbase-site.xml`, set the following parameters:
 - `dfs.client.read.shortcircuit = true`
-- `dfs.client.read.shortcircuit.buffer.size = 131072` (Important to avoid OOME)
+- `dfs.client.read.shortcircuit.skip.checksum = true` so we don't double checksum (HBase does its own checksumming to save on i/os; see <<hbase.regionserver.checksum.verify.performance>> for more on this).
+- `dfs.domain.socket.path` to match what was set for the datanodes.
+- `dfs.client.read.shortcircuit.buffer.size = 131072` Important to avoid OOME -- hbase has a default it uses if unset, see `hbase.dfs.client.read.shortcircuit.buffer.size`; its default is 131072.
 * Ensure data locality. In `hbase-site.xml`, set `hbase.hstore.min.locality.to.skip.major.compact = 0.7` (Meaning that 0.7 \<= n \<= 1)
 * Make sure DataNodes have enough handlers for block transfers. In `hdfs-site.xml`, set the following parameters:
 - `dfs.datanode.max.xcievers >= 8192`
 - `dfs.datanode.handler.count =` number of spindles
 
+Check the RegionServer logs after restart. You should only see complaints if there is a misconfiguration.
+Otherwise, shortcircuit read operates quietly in the background. It does not provide metrics, so there is
+no direct visibility into how effective it is, but read latencies should show a marked improvement, especially
+if you have good data locality, lots of random reads, and a dataset larger than the available cache.
+
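+A quick, illustrative way to confirm that the hadoop native library (which short-circuit reads require) is loadable by the user running the process:
+
+[source,bourne]
+----
+$ hadoop checknative -a
+----
+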
+Other advanced configurations that you might play with, especially if shortcircuit functionality
+is complaining in the logs, include `dfs.client.read.shortcircuit.streams.cache.size` and
+`dfs.client.socketcache.capacity`. Documentation is sparse on these options. You'll have to
+read the source code.
+
+For more on short-circuit reads, see Colin's old blog on rollout,
+link:http://blog.cloudera.com/blog/2013/08/how-improved-short-circuit-local-reads-bring-better-performance-and-security-to-hadoop/[How Improved Short-Circuit Local Reads Bring Better Performance and Security to Hadoop].
+The link:https://issues.apache.org/jira/browse/HDFS-347[HDFS-347] issue also makes for an
+interesting read showing the HDFS community at its best (caveat a few comments).
+
 ===  JVM Tuning
 
 ====  Tune JVM GC for low collection latencies

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/shell.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/shell.adoc b/src/main/asciidoc/_chapters/shell.adoc
index b8ea3b7..b246ab1 100644
--- a/src/main/asciidoc/_chapters/shell.adoc
+++ b/src/main/asciidoc/_chapters/shell.adoc
@@ -227,7 +227,7 @@ The table reference can be used to perform data read write operations such as pu
 For example, previously you would always specify a table name:
 
 ----
-hbase(main):000:0> create ‘t’, ‘f’
+hbase(main):000:0> create 't', 'f'
 0 row(s) in 1.0970 seconds
 hbase(main):001:0> put 't', 'rold', 'f', 'v'
 0 row(s) in 0.0080 seconds
@@ -291,7 +291,7 @@ hbase(main):012:0> tab = get_table 't'
 0 row(s) in 0.0010 seconds
 
 => Hbase::Table - t
-hbase(main):013:0> tab.put ‘r1’ ,’f’, ‘v’
+hbase(main):013:0> tab.put 'r1' ,'f', 'v'
 0 row(s) in 0.0100 seconds
 hbase(main):014:0> tab.scan
 ROW                                COLUMN+CELL
@@ -305,7 +305,7 @@ You can then use jruby to script table operations based on these names.
 The list_snapshots command also acts similarly.
 
 ----
-hbase(main):016 > tables = list(‘t.*’)
+hbase(main):016 > tables = list('t.*')
 TABLE
 t
 1 row(s) in 0.1040 seconds
@@ -385,7 +385,7 @@ This will continue for all split points up to the last. The last region will be
 
 [source]
 ----
-hbase>create 't1','f',SPLITS => ['10','20',30']
+hbase>create 't1','f',SPLITS => ['10','20','30']
 ----
 
 In the above example, the table 't1' will be created with column family 'f', pre-split to four regions. Note the first region will contain all keys from '\x00' up to '\x30' (as '\x31' is the ASCII code for '1').

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/tracing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/tracing.adoc b/src/main/asciidoc/_chapters/tracing.adoc
index 8bd1962..7305aa8 100644
--- a/src/main/asciidoc/_chapters/tracing.adoc
+++ b/src/main/asciidoc/_chapters/tracing.adoc
@@ -30,8 +30,10 @@
 :icons: font
 :experimental:
 
-link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449] added support for tracing requests through HBase, using the open source tracing library, link:https://htrace.incubator.apache.org/[HTrace].
-Setting up tracing is quite simple, however it currently requires some very minor changes to your client code (it would not be very difficult to remove this requirement).
+HBase includes facilities for tracing requests using the open source tracing library, link:https://htrace.incubator.apache.org/[Apache HTrace].
+Setting up tracing is quite simple, however it currently requires some very minor changes to your client code (this requirement may be removed in the future).
+
+Support for this feature using HTrace 3 in HBase was added in link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449]. Starting with HBase 2.0, there was a non-compatible update to HTrace 4 via link:https://issues.apache.org/jira/browse/HBASE-18601[HBASE-18601]. The examples provided in this section will be using HTrace 4 package names, syntax, and conventions. For older examples, please consult previous versions of this guide.
 
 [[tracing.spanreceivers]]
 === SpanReceivers

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index 741e1ec..83f1989 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -101,6 +101,11 @@ To disable, set the logging level back to `INFO` level.
 [[trouble.log.gc]]
 === JVM Garbage Collection Logs
 
+[NOTE]
+====
+All example Garbage Collection logs in this section are based on Java 8 output. The introduction of Unified Logging in Java 9 and newer will result in very different looking logs.
+====
+
 HBase is memory intensive, and using the default GC you can see long pauses in all threads including the _Juliet Pause_ aka "GC of Death". To help debug this or confirm this is happening GC logging can be turned on in the Java virtual machine.
 
 To enable, in _hbase-env.sh_, uncomment one of the below lines :
@@ -258,7 +263,6 @@ link:https://issues.apache.org/jira/browse/HBASE[JIRA] is also really helpful wh
 ==== Master Web Interface
 
 The Master starts a web-interface on port 16010 by default.
-(Up to and including 0.98 this was port 60010)
 
 The Master web UI lists created tables and their definition (e.g., ColumnFamilies, blocksize, etc.). Additionally, the available RegionServers in the cluster are listed along with selected high-level metrics (requests, number of regions, usedHeap, maxHeap). The Master web UI allows navigation to each RegionServer's web UI.
 
@@ -266,7 +270,6 @@ The Master web UI lists created tables and their definition (e.g., ColumnFamilie
 ==== RegionServer Web Interface
 
 RegionServers starts a web-interface on port 16030 by default.
-(Up to an including 0.98 this was port 60030)
 
 The RegionServer web UI lists online regions and their start/end keys, as well as point-in-time RegionServer metrics (requests, regions, storeFileIndexSize, compactionQueueSize, etc.).
 
@@ -564,14 +567,6 @@ You can also tail all the logs at the same time, edit files, etc.
 
 For more information on the HBase client, see <<architecture.client,client>>.
 
-=== Missed Scan Results Due To Mismatch Of `hbase.client.scanner.max.result.size` Between Client and Server
-If either the client or server version is lower than 0.98.11/1.0.0 and the server
-has a smaller value for `hbase.client.scanner.max.result.size` than the client, scan
-requests that reach the server's `hbase.client.scanner.max.result.size` are likely
-to miss data. In particular, 0.98.11 defaults `hbase.client.scanner.max.result.size`
-to 2 MB but other versions default to larger values. For this reason, be very careful
-using 0.98.11 servers with any other client version.
-
 [[trouble.client.scantimeout]]
 === ScannerTimeoutException or UnknownScannerException
 
@@ -683,12 +678,6 @@ A workaround is passing your client-side JVM a reasonable value for `-XX:MaxDire
 By default, the `MaxDirectMemorySize` is equal to your `-Xmx` max heapsize setting (if `-Xmx` is set). Try setting it to something smaller (for example, one user had success setting it to `1g` when they had a client-side heap of `12g`). If you set it too small, it will bring on `FullGCs` so keep it a bit hefty.
 You want to make this setting client-side only especially if you are running the new experimental server-side off-heap cache since this feature depends on being able to use big direct buffers (You may have to keep separate client-side and server-side config dirs).
 
-[[trouble.client.slowdown.admin]]
-=== Client Slowdown When Calling Admin Methods (flush, compact, etc.)
-
-This is a client issue fixed by link:https://issues.apache.org/jira/browse/HBASE-5073[HBASE-5073] in 0.90.6.
-There was a ZooKeeper leak in the client and the client was getting pummeled by ZooKeeper events with each additional invocation of the admin API.
-
 [[trouble.client.security.rpc]]
 === Secure Client Cannot Connect ([Caused by GSSException: No valid credentials provided(Mechanism level: Failed to find any Kerberos tgt)])
 
@@ -817,10 +806,12 @@ The HDFS directory structure of HBase tables in the cluster is...
 ----
 
 /hbase
-    /<Table>                    (Tables in the cluster)
-        /<Region>               (Regions for the table)
-            /<ColumnFamily>     (ColumnFamilies for the Region for the table)
-                /<StoreFile>    (StoreFiles for the ColumnFamily for the Regions for the table)
+    /data
+        /<Namespace>                    (Namespaces in the cluster)
+            /<Table>                    (Tables in the cluster)
+                /<Region>               (Regions for the table)
+                    /<ColumnFamily>     (ColumnFamilies for the Region for the table)
+                        /<StoreFile>    (StoreFiles for the ColumnFamily for the Regions for the table)
 ----
 
 The HDFS directory structure of HBase WAL is..
@@ -828,7 +819,7 @@ The HDFS directory structure of HBase WAL is..
 ----
 
 /hbase
-    /.logs
+    /WALs
         /<RegionServer>    (RegionServers)
             /<WAL>         (WAL files for the RegionServer)
 ----
@@ -838,7 +829,7 @@ See the link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hd
 [[trouble.namenode.0size.hlogs]]
 ==== Zero size WALs with data in them
 
-Problem: when getting a listing of all the files in a RegionServer's _.logs_ directory, one file has a size of 0 but it contains data.
+Problem: when getting a listing of all the files in a RegionServer's _WALs_ directory, one file has a size of 0 but it contains data.
 
 Answer: It's an HDFS quirk.
 A file that's currently being written to will appear to have a size of 0 but once it's closed it will show its true size
@@ -892,7 +883,6 @@ See <<managed.compactions>> for more information on managing compactions.
 === Loopback IP
 
 HBase expects the loopback IP Address to be 127.0.0.1.
-See the Getting Started section on <<loopback.ip>>.
 
 [[trouble.network.ints]]
 === Network Interfaces
@@ -953,6 +943,45 @@ java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
 \... then there is a path issue with the compression libraries.
 See the Configuration section on link:[LZO compression configuration].
 
+[[trouble.rs.startup.hsync]]
+==== RegionServer aborts due to lack of hsync for filesystem
+
+In order to provide data durability for writes to the cluster, HBase relies on the ability to durably save state in a write ahead log. When using a version of Apache Hadoop Common's filesystem API that supports checking on the availability of needed calls, HBase will proactively abort the cluster if it finds it can't operate safely.
+
+For RegionServer roles, the failure will show up in logs like this:
+
+----
+2018-04-05 11:36:22,785 ERROR [regionserver/192.168.1.123:16020] wal.AsyncFSWALProvider: The RegionServer async write ahead log provider relies on the ability to call hflush and hsync for proper operation during component failures, but the current FileSystem does not support doing so. Please check the config value of 'hbase.wal.dir' and ensure it points to a FileSystem mount that has suitable capabilities for output streams.
+2018-04-05 11:36:22,799 ERROR [regionserver/192.168.1.123:16020] regionserver.HRegionServer: ***** ABORTING region server 192.168.1.123,16020,1522946074234: Unhandled: cannot get log writer *****
+java.io.IOException: cannot get log writer
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:112)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:612)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:124)
+        at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:759)
+        at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:489)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.<init>(AsyncFSWAL.java:251)
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:69)
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:44)
+        at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:138)
+        at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:57)
+        at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:252)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2105)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1326)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1191)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1007)
+        at java.lang.Thread.run(Thread.java:745)
+Caused by: org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: hflush and hsync
+        at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:69)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:168)
+        at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:167)
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:99)
+        ... 15 more
+
+----
+
+If you are attempting to run in standalone mode and see this error, please walk back through the section <<quickstart>> and ensure you have included *all* the given configuration settings.
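+
+As a concrete illustration (a sketch only: the property names follow the quickstart's standalone-on-local-filesystem setup, and the path is a placeholder), the relevant _hbase-site.xml_ entries look roughly like the following. Do not relax the capability check on a production cluster backed by HDFS; there it indicates a real durability problem.
+
+[source,xml]
+----
+<!-- Standalone mode on the local filesystem only (sketch; the path is a placeholder). -->
+<property>
+  <name>hbase.rootdir</name>
+  <value>file:///home/testuser/hbase</value>
+</property>
+<!-- Tells HBase not to abort when hflush/hsync cannot be guaranteed. -->
+<property>
+  <name>hbase.unsafe.stream.capability.enforce</name>
+  <value>false</value>
+</property>
+----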
+
+
 [[trouble.rs.runtime]]
 === Runtime Errors
 
@@ -1076,13 +1105,6 @@ This exception is returned back to the client and then the client goes back to `
 
 However, if the NotServingRegionException is logged at ERROR level, then the client has run out of retries and something is probably wrong.
 
-[[trouble.rs.runtime.double_listed_regions]]
-==== Regions listed by domain name, then IP
-
-Fix your DNS.
-In versions of Apache HBase before 0.92.x, reverse DNS needs to give same answer as forward lookup.
-See link:https://issues.apache.org/jira/browse/HBASE-3431[HBASE 3431 RegionServer is not using the name given it by the master; double entry in master listing of servers] for gory details.
-
 [[brand.new.compressor]]
 ==== Logs flooded with '2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got brand-new compressor' messages
 
@@ -1146,6 +1168,29 @@ Sure fire solution is to just use Hadoop dfs to delete the HBase root and let HB
 
 If you have many regions on your cluster and you see an error like the one reported in this section's title in your logs, see link:https://issues.apache.org/jira/browse/HBASE-4246[HBASE-4246 Cluster with too many regions cannot withstand some master failover scenarios].
 
+[[trouble.master.startup.hsync]]
+==== Master fails to become active due to lack of hsync for filesystem
+
+HBase's internal framework for cluster operations requires the ability to durably save state in a write ahead log. When using a version of Apache Hadoop Common's filesystem API that supports checking on the availability of needed calls, HBase will proactively abort the cluster if it finds it can't operate safely.
+
+For Master roles, the failure will show up in logs like this:
+
+----
+2018-04-05 11:18:44,653 ERROR [Thread-21] master.HMaster: Failed to become active master
+java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.
+        at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1034)
+        at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:374)
+        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:530)
+        at org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1267)
+        at org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1173)
+        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:881)
+        at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2048)
+        at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:568)
+        at java.lang.Thread.run(Thread.java:745)
+----
+
+If you are attempting to run in standalone mode and see this error, please walk back through the section <<quickstart>> and ensure you have included *all* the given configuration settings.
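+
+If instead you are on a distributed filesystem that supports hflush but not hsync, and you accept the weaker durability described in the message, the log points at a configuration knob you can lower. A hedged sketch (the property name is taken verbatim from the message above):
+
+[source,xml]
+----
+<!-- Lowers the durability the procedure WAL demands from the filesystem.
+     Only use this when the reduced robustness is acceptable. -->
+<property>
+  <name>hbase.procedure.store.wal.use.hsync</name>
+  <value>false</value>
+</property>
+----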
+
 [[trouble.master.shutdown]]
 === Shutdown Errors
 
@@ -1216,31 +1261,6 @@ See Andrew's answer here, up on the user list: link:http://search-hadoop.com/m/s
 [[trouble.versions]]
 == HBase and Hadoop version issues
 
-[[trouble.versions.205]]
-=== `NoClassDefFoundError` when trying to run 0.90.x on hadoop-0.20.205.x (or hadoop-1.0.x)
-
-Apache HBase 0.90.x does not ship with hadoop-0.20.205.x, etc.
-To make it run, you need to replace the hadoop jars that Apache HBase shipped with in its _lib_ directory with those of the Hadoop you want to run HBase on.
-If even after replacing Hadoop jars you get the below exception:
-
-[source]
-----
-
-sv4r6s38: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
-sv4r6s38:       at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:37)
-sv4r6s38:       at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:34)
-sv4r6s38:       at org.apache.hadoop.security.UgiInstrumentation.create(UgiInstrumentation.java:51)
-sv4r6s38:       at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:209)
-sv4r6s38:       at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:177)
-sv4r6s38:       at org.apache.hadoop.security.UserGroupInformation.isSecurityEnabled(UserGroupInformation.java:229)
-sv4r6s38:       at org.apache.hadoop.security.KerberosName.<clinit>(KerberosName.java:83)
-sv4r6s38:       at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:202)
-sv4r6s38:       at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:177)
-----
-
-you need to copy under _hbase/lib_, the _commons-configuration-X.jar_ you find in your Hadoop's _lib_ directory.
-That should fix the above complaint.
-
 [[trouble.wrong.version]]
 === ...cannot communicate with client version...
 
@@ -1249,67 +1269,6 @@ If you see something like the following in your logs [computeroutput]+... 2012-0
           shutdown. org.apache.hadoop.ipc.RemoteException: Server IPC version 7 cannot communicate
          with client version 4 ...+ ...are you trying to talk to a Hadoop 2.0.x cluster from an HBase that has a Hadoop 1.0.x client? Use an HBase built against Hadoop 2.0 or rebuild your HBase passing the +-Dhadoop.profile=2.0+ attribute to Maven (see <<maven.build.hadoop>> for more).
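+
+For example (a sketch, run from the root of an HBase source checkout; skipping tests is optional):
+
+[source,bourne]
+----
+# Rebuild HBase against the Hadoop 2.0 profile mentioned above
+$ mvn clean install -DskipTests -Dhadoop.profile=2.0
+----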
 
-== IPC Configuration Conflicts with Hadoop
-
-If the Hadoop configuration is loaded after the HBase configuration, and you have configured custom IPC settings in both HBase and Hadoop, the Hadoop values may overwrite the HBase values.
-There is normally no need to change these settings for HBase, so this problem is an edge case.
-However, link:https://issues.apache.org/jira/browse/HBASE-11492[HBASE-11492] renames these settings for HBase to remove the chance of a conflict.
-Each of the setting names have been prefixed with `hbase.`, as shown in the following table.
-No action is required related to these changes unless you are already experiencing a conflict.
-
-These changes were backported to HBase 0.98.x and apply to all newer versions.
-
-[cols="1,1", options="header"]
-|===
-| Pre-0.98.x
-| 0.98-x And Newer
-
-| ipc.server.listen.queue.size
-| hbase.ipc.server.listen.queue.size
-
-| ipc.server.max.callqueue.size
-| hbase.ipc.server.max.callqueue.size
-
-| ipc.server.callqueue.handler.factor
-| hbase.ipc.server.callqueue.handler.factor
-
-| ipc.server.callqueue.read.share
-| hbase.ipc.server.callqueue.read.share
-
-| ipc.server.callqueue.type
-| hbase.ipc.server.callqueue.type
-
-| ipc.server.queue.max.call.delay
-| hbase.ipc.server.queue.max.call.delay
-
-| ipc.server.max.callqueue.length
-| hbase.ipc.server.max.callqueue.length
-
-| ipc.server.read.threadpool.size
-| hbase.ipc.server.read.threadpool.size
-
-| ipc.server.tcpkeepalive
-| hbase.ipc.server.tcpkeepalive
-
-| ipc.server.tcpnodelay
-| hbase.ipc.server.tcpnodelay
-
-| ipc.client.call.purge.timeout
-| hbase.ipc.client.call.purge.timeout
-
-| ipc.client.connection.maxidletime
-| hbase.ipc.client.connection.maxidletime
-
-| ipc.client.idlethreshold
-| hbase.ipc.client.idlethreshold
-
-| ipc.client.kill.max
-| hbase.ipc.client.kill.max
-
-| ipc.server.scan.vtime.weight
-| hbase.ipc.server.scan.vtime.weight
-|===
-
 == HBase and HDFS
 
 General configuration guidance for Apache HDFS is out of the scope of this guide.

http://git-wip-us.apache.org/repos/asf/hbase/blob/9ef75b96/src/main/asciidoc/_chapters/unit_testing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc b/src/main/asciidoc/_chapters/unit_testing.adoc
index e503f81..3329a75 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -327,7 +327,5 @@ A record is inserted, a Get is performed from the same table, and the insertion
 
 NOTE: Starting the mini-cluster takes about 20-30 seconds, but that should be appropriate for integration testing.
 
-To use an HBase mini-cluster on Microsoft Windows, you need to use a Cygwin environment.
-
 See the paper at link:http://blog.sematext.com/2010/08/30/hbase-case-study-using-hbasetestingutility-for-local-testing-development/[HBase Case-Study: Using HBaseTestingUtility for Local Testing and
                 Development] (2010) for more information about HBaseTestingUtility.