Posted to commits@hbase.apache.org by bu...@apache.org on 2016/01/03 12:19:12 UTC

[01/11] hbase git commit: HBASE-13908 update site docs for 1.2 RC.

Repository: hbase
Updated Branches:
  refs/heads/branch-1.2 a47a7a60f -> 20c436806


http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/zookeeper.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/zookeeper.adoc b/src/main/asciidoc/_chapters/zookeeper.adoc
index f6134b7..2319360 100644
--- a/src/main/asciidoc/_chapters/zookeeper.adoc
+++ b/src/main/asciidoc/_chapters/zookeeper.adoc
@@ -35,7 +35,7 @@ You can also manage the ZooKeeper ensemble independent of HBase and just point H
 To toggle HBase management of ZooKeeper, use the `HBASE_MANAGES_ZK` variable in _conf/hbase-env.sh_.
 This variable, which defaults to `true`, tells HBase whether to start/stop the ZooKeeper ensemble servers as part of HBase start/stop.
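 
 For example, to make the default explicit, or to disable HBase's management of ZooKeeper, set the variable in _conf/hbase-env.sh_:
 
 [source, bash]
 ----
 # true (the default): HBase starts and stops ZooKeeper for you.
 # false: you manage your own ZooKeeper ensemble.
 export HBASE_MANAGES_ZK=true
 ----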
 
-When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper configuration using its native _zoo.cfg_ file, or, the easier option is to just specify ZooKeeper options directly in _conf/hbase-site.xml_.
+When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper configuration directly in _conf/hbase-site.xml_.
 A ZooKeeper configuration option can be set as a property in the HBase _hbase-site.xml_ XML configuration file by prefacing the ZooKeeper option name with `hbase.zookeeper.property`.
 For example, the `clientPort` setting in ZooKeeper can be changed by setting the `hbase.zookeeper.property.clientPort` property.
 For all default values used by HBase, including ZooKeeper configuration, see <<hbase_default_configurations,hbase default configurations>>.
@@ -45,7 +45,7 @@ HBase does not ship with a _zoo.cfg_ so you will need to browse the _conf_ direc
 
 You must at least list the ensemble servers in _hbase-site.xml_ using the `hbase.zookeeper.quorum` property.
 This property defaults to a single ensemble member at `localhost` which is not suitable for a fully distributed HBase.
-(It binds to the local machine only and remote clients will not be able to connect). 
+(It binds to the local machine only and remote clients will not be able to connect).
 
 .How many ZooKeepers should I run?
 [NOTE]
@@ -54,7 +54,7 @@ You can run a ZooKeeper ensemble that comprises 1 node only but in production it
 Also, run an odd number of machines.
 In ZooKeeper, an even number of peers is supported, but it is normally not used because an even sized ensemble requires, proportionally, more peers to form a quorum than an odd sized ensemble requires.
 For example, an ensemble with 4 peers requires 3 to form a quorum, while an ensemble with 5 also requires 3 to form a quorum.
-Thus, an ensemble of 5 allows 2 peers to fail, and thus is more fault tolerant than the ensemble of 4, which allows only 1 down peer. 
+Thus, an ensemble of 5 allows 2 peers to fail, and thus is more fault tolerant than the ensemble of 4, which allows only 1 down peer.
 
 Give each ZooKeeper server around 1GB of RAM, and if possible, its own dedicated disk (A dedicated disk is the best thing you can do to ensure a performant ZooKeeper ensemble). For very heavily loaded clusters, run ZooKeeper servers on separate machines from RegionServers (DataNodes and TaskTrackers).
 ====
@@ -97,12 +97,12 @@ In the example below we have ZooKeeper persist to _/user/local/zookeeper_.
   </configuration>
 ----
 
-.What verion of ZooKeeper should I use?
+.What version of ZooKeeper should I use?
 [CAUTION]
 ====
 The newer the version, the better.
 For example, some folks have been bitten by link:https://issues.apache.org/jira/browse/ZOOKEEPER-1277[ZOOKEEPER-1277].
-If running zookeeper 3.5+, you can ask hbase to make use of the new multi operation by enabling <<hbase.zookeeper.usemulti,hbase.zookeeper.useMulti>>" in your _hbase-site.xml_. 
+If running ZooKeeper 3.5+, you can ask HBase to make use of the new multi operation by enabling <<hbase.zookeeper.usemulti,hbase.zookeeper.useMulti>> in your _hbase-site.xml_.
 ====
 
 .ZooKeeper Maintenance
@@ -124,8 +124,7 @@ To point HBase at an existing ZooKeeper cluster, one that is not managed by HBas
   export HBASE_MANAGES_ZK=false
 ----
 
-Next set ensemble locations and client port, if non-standard, in _hbase-site.xml_, or add a suitably configured _zoo.cfg_ to HBase's _CLASSPATH_.
-HBase will prefer the configuration found in _zoo.cfg_ over any settings in _hbase-site.xml_.
+Next set ensemble locations and client port, if non-standard, in _hbase-site.xml_.
 
 When HBase manages ZooKeeper, it will start/stop the ZooKeeper servers as a part of the regular start/stop scripts.
 If you would like to run ZooKeeper yourself, independent of HBase start/stop, you would do the following
@@ -141,7 +140,7 @@ Just make sure to set `HBASE_MANAGES_ZK` to `false`      if you want it to stay
 For more information about running a distinct ZooKeeper cluster, see the ZooKeeper link:http://hadoop.apache.org/zookeeper/docs/current/zookeeperStarted.html[Getting
         Started Guide].
 Additionally, see the link:http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A7[ZooKeeper Wiki] or the link:http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup[ZooKeeper
-        documentation] for more information on ZooKeeper sizing. 
+        documentation] for more information on ZooKeeper sizing.
 
 [[zk.sasl.auth]]
 == SASL Authentication with ZooKeeper
@@ -149,24 +148,24 @@ Additionally, see the link:http://wiki.apache.org/hadoop/ZooKeeper/FAQ#A7[ZooKee
 Newer releases of Apache HBase (>= 0.92) will support connecting to a ZooKeeper Quorum that supports SASL authentication (which is available in Zookeeper versions 3.4.0 or later).
 
 This describes how to set up HBase to mutually authenticate with a ZooKeeper Quorum.
-ZooKeeper/HBase mutual authentication (link:https://issues.apache.org/jira/browse/HBASE-2418[HBASE-2418]) is required as part of a complete secure HBase configuration (link:https://issues.apache.org/jira/browse/HBASE-3025[HBASE-3025]). For simplicity of explication, this section ignores additional configuration required (Secure HDFS and Coprocessor configuration). It's recommended to begin with an HBase-managed Zookeeper configuration (as opposed to a standalone Zookeeper quorum) for ease of learning. 
+ZooKeeper/HBase mutual authentication (link:https://issues.apache.org/jira/browse/HBASE-2418[HBASE-2418]) is required as part of a complete secure HBase configuration (link:https://issues.apache.org/jira/browse/HBASE-3025[HBASE-3025]). For simplicity of explication, this section ignores additional configuration required (Secure HDFS and Coprocessor configuration). It's recommended to begin with an HBase-managed Zookeeper configuration (as opposed to a standalone Zookeeper quorum) for ease of learning.
 
 === Operating System Prerequisites
 
 You need to have a working Kerberos KDC setup.
 For each `$HOST` that will run a ZooKeeper server, you should have a principal `zookeeper/$HOST`.
 For each such host, add a service key (using the `kadmin` or `kadmin.local`        tool's `ktadd` command) for `zookeeper/$HOST` and copy this file to `$HOST`, and make it readable only to the user that will run zookeeper on `$HOST`.
-Note the location of this file, which we will use below as _$PATH_TO_ZOOKEEPER_KEYTAB_. 
+Note the location of this file, which we will use below as _$PATH_TO_ZOOKEEPER_KEYTAB_.
 
 Similarly, for each `$HOST` that will run an HBase server (master or regionserver), you should have a principal: `hbase/$HOST`.
 For each host, add a keytab file called _hbase.keytab_ containing a service key for `hbase/$HOST`, copy this file to `$HOST`, and make it readable only to the user that will run an HBase service on `$HOST`.
-Note the location of this file, which we will use below as _$PATH_TO_HBASE_KEYTAB_. 
+Note the location of this file, which we will use below as _$PATH_TO_HBASE_KEYTAB_.
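 
 As a sketch, the principals and keytabs can be created with MIT Kerberos' `kadmin.local` (the realm `EXAMPLE.COM`, the hostname, and the keytab paths below are placeholders; substitute your own):
 
 [source, bash]
 ----
 # Create service principals with random keys (run as a Kerberos admin).
 kadmin.local -q "addprinc -randkey zookeeper/host1.example.com@EXAMPLE.COM"
 kadmin.local -q "addprinc -randkey hbase/host1.example.com@EXAMPLE.COM"
 
 # Export each key to a keytab, then copy the keytab to $HOST and make it
 # readable only by the user that runs the corresponding service.
 kadmin.local -q "ktadd -k /etc/security/keytabs/zookeeper.keytab zookeeper/host1.example.com@EXAMPLE.COM"
 kadmin.local -q "ktadd -k /etc/security/keytabs/hbase.keytab hbase/host1.example.com@EXAMPLE.COM"
 ----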
 
 Each user who will be an HBase client should also be given a Kerberos principal.
 This principal should usually have a password assigned to it (as opposed to, as with the HBase servers, a keytab file) which only this user knows.
 The client's principal's `maxrenewlife` should be set so that it can be renewed often enough for the user to complete their HBase client processes.
 For example, if a user runs a long-running HBase client process that takes at most 3 days, we might create this user's principal within `kadmin` with: `addprinc -maxrenewlife 3days`.
-The Zookeeper client and server libraries manage their own ticket refreshment by running threads that wake up periodically to do the refreshment. 
+The Zookeeper client and server libraries manage their own ticket refreshment by running threads that wake up periodically to do the refreshment.
 
 On each host that will run an HBase client (e.g. `hbase shell`), add the following file to the HBase home directory's _conf_ directory:
 
@@ -211,7 +210,7 @@ where the _$PATH_TO_HBASE_KEYTAB_ and _$PATH_TO_ZOOKEEPER_KEYTAB_ files are what
 The `Server` section will be used by the Zookeeper quorum server, while the `Client` section will be used by the HBase master and regionservers.
 The path to this file should be substituted for the text _$HBASE_SERVER_CONF_ in the _hbase-env.sh_ listing below.
 
-The path to this file should be substituted for the text _$CLIENT_CONF_ in the _hbase-env.sh_ listing below. 
+The path to this file should be substituted for the text _$CLIENT_CONF_ in the _hbase-env.sh_ listing below.
 
 Modify your _hbase-env.sh_ to include the following:
 
@@ -258,7 +257,7 @@ Modify your _hbase-site.xml_ on each node that will run zookeeper, master or reg
 
 where `$ZK_NODES` is the comma-separated list of hostnames of the Zookeeper Quorum hosts.
 
-Start your hbase cluster by running one or more of the following set of commands on the appropriate hosts: 
+Start your hbase cluster by running one or more of the following set of commands on the appropriate hosts:
 
 ----
 
@@ -312,21 +311,23 @@ Modify your _hbase-site.xml_ on each node that will run a master or regionserver
     <name>hbase.cluster.distributed</name>
     <value>true</value>
   </property>
+  <property>
+    <name>hbase.zookeeper.property.authProvider.1</name>
+    <value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.kerberos.removeHostFromPrincipal</name>
+    <value>true</value>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.kerberos.removeRealmFromPrincipal</name>
+    <value>true</value>
+  </property>
 </configuration>
 ----
 
 where `$ZK_NODES` is the comma-separated list of hostnames of the Zookeeper Quorum hosts.
 
-Add a _zoo.cfg_ for each Zookeeper Quorum host containing:
-
-[source,java]
-----
-
-authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
-kerberos.removeHostFromPrincipal=true
-kerberos.removeRealmFromPrincipal=true
-----
-
 Also on each of these hosts, create a JAAS configuration file containing:
 
 [source,java]
@@ -343,7 +344,7 @@ Server {
 ----
 
 where `$HOST` is the hostname of each Quorum host.
-We will refer to the full pathname of this file as _$ZK_SERVER_CONF_ below. 
+We will refer to the full pathname of this file as _$ZK_SERVER_CONF_ below.
 
 Start your Zookeepers on each Zookeeper Quorum host with:
 
@@ -353,7 +354,7 @@ Start your Zookeepers on each Zookeeper Quorum host with:
 SERVER_JVMFLAGS="-Djava.security.auth.login.config=$ZK_SERVER_CONF" bin/zkServer start
 ----
 
-Start your HBase cluster by running one or more of the following set of commands on the appropriate nodes: 
+Start your HBase cluster by running one or more of the following set of commands on the appropriate nodes:
 
 ----
 
@@ -414,7 +415,7 @@ mvn clean test -Dtest=TestZooKeeperACL
 ----
 
 Then configure HBase as described above.
-Manually edit target/cached_classpath.txt (see below): 
+Manually edit target/cached_classpath.txt (see below):
 
 ----
 
@@ -438,7 +439,7 @@ mv target/tmp.txt target/cached_classpath.txt
 
 ==== Set JAAS configuration programmatically
 
-This would avoid the need for a separate Hadoop jar that fixes link:https://issues.apache.org/jira/browse/HADOOP-7070[HADOOP-7070]. 
+This would avoid the need for a separate Hadoop jar that fixes link:https://issues.apache.org/jira/browse/HADOOP-7070[HADOOP-7070].
 
 ==== Elimination of `kerberos.removeHostFromPrincipal` and `kerberos.removeRealmFromPrincipal`
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/book.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index b2bd151..2209b4f 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -61,9 +61,11 @@ include::_chapters/schema_design.adoc[]
 include::_chapters/mapreduce.adoc[]
 include::_chapters/security.adoc[]
 include::_chapters/architecture.adoc[]
+include::_chapters/hbase_mob.adoc[]
 include::_chapters/hbase_apis.adoc[]
 include::_chapters/external_apis.adoc[]
 include::_chapters/thrift_filter_language.adoc[]
+include::_chapters/spark.adoc[]
 include::_chapters/cp.adoc[]
 include::_chapters/performance.adoc[]
 include::_chapters/troubleshooting.adoc[]


[11/11] hbase git commit: HBASE-14025 update CHANGES.txt for the 1.2 RC.

Posted by bu...@apache.org.
HBASE-14025 update CHANGES.txt for the 1.2 RC.


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/20c43680
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/20c43680
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/20c43680

Branch: refs/heads/branch-1.2
Commit: 20c4368065165ad49bdfe8172316e42566a6d6a0
Parents: 19d6a29
Author: Sean Busbey <bu...@apache.org>
Authored: Sun Jan 3 06:59:57 2016 +0000
Committer: Sean Busbey <bu...@apache.org>
Committed: Sun Jan 3 09:19:03 2016 +0000

----------------------------------------------------------------------
 CHANGES.txt | 2637 +++++++++++++++++++++++++-----------------------------
 1 file changed, 1218 insertions(+), 1419 deletions(-)
----------------------------------------------------------------------



[04/11] hbase git commit: HBASE-13908 update site docs for 1.2 RC.

Posted by bu...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/external_apis.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/external_apis.adoc b/src/main/asciidoc/_chapters/external_apis.adoc
index 37156ca..43a428a 100644
--- a/src/main/asciidoc/_chapters/external_apis.adoc
+++ b/src/main/asciidoc/_chapters/external_apis.adoc
@@ -27,32 +27,454 @@
 :icons: font
 :experimental:
 
-This chapter will cover access to Apache HBase either through non-Java languages, or through custom protocols.
-For information on using the native HBase APIs, refer to link:http://hbase.apache.org/apidocs/index.html[User API Reference] and the new <<hbase_apis,HBase APIs>> chapter.
+This chapter covers access to Apache HBase through non-Java languages and
+through custom protocols. For information on using the native HBase APIs, refer to
+link:http://hbase.apache.org/apidocs/index.html[User API Reference] and the
+<<hbase_apis,HBase APIs>> chapter.
 
-[[nonjava.jvm]]
-== Non-Java Languages Talking to the JVM
+== REST
 
-Currently the documentation on this topic is in the link:http://wiki.apache.org/hadoop/Hbase[Apache HBase Wiki].
-See also the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/thrift/package-summary.html#package_description[Thrift API Javadoc].
+Representational State Transfer (REST) was introduced in 2000 in the doctoral
+dissertation of Roy Fielding, one of the principal authors of the HTTP specification.
 
-== REST
+REST itself is out of the scope of this documentation, but in general, REST allows
+client-server interactions via an API that is tied to the URL itself. This section
+discusses how to configure and run the REST server included with HBase, which exposes
+HBase tables, rows, cells, and metadata as URL specified resources.
+There is also a nice series of blogs on
+link:http://blog.cloudera.com/blog/2013/03/how-to-use-the-apache-hbase-rest-interface-part-1/[How-to: Use the Apache HBase REST Interface]
+by Jesse Anderson.
+
+=== Starting and Stopping the REST Server
+
+The included REST server can run as a daemon which starts an embedded Jetty
+servlet container and deploys the servlet into it. Use one of the following commands
+to start the REST server in the foreground or background. The port is optional, and
+defaults to 8080.
+
+[source, bash]
+----
+# Foreground
+$ bin/hbase rest start -p <port>
+
+# Background, logging to a file in $HBASE_LOGS_DIR
+$ bin/hbase-daemon.sh start rest -p <port>
+----
+
+To stop the REST server, use Ctrl-C if you were running it in the foreground, or the
+following command if you were running it in the background.
+
+[source, bash]
+----
+$ bin/hbase-daemon.sh stop rest
+----
+
+=== Configuring the REST Server and Client
+
+For information about configuring the REST server and client for SSL, as well as `doAs`
+impersonation for the REST server, see <<security.gateway.thrift>> and other portions
+of the <<security>> chapter.
+
+=== Using REST Endpoints
+
+The following examples use the placeholder server pass:[http://example.com:8000], and
+the following commands can all be run using `curl` or `wget`. You can request
+plain text (the default), XML, or JSON output by adding no header for plain text,
+the header "Accept: text/xml" for XML, or "Accept: application/json" for JSON.
+
+NOTE: Unless specified, use `GET` requests for queries, `PUT` or `POST` requests for
+creation or mutation, and `DELETE` for deletion.
+
+==== Cluster Information
+
+.HBase Version
+----
+http://example.com:8000/version/cluster
+----
+
+.Cluster Status
+----
+http://example.com:8000/status/cluster
+----
+
+.Table List
+----
+http://example.com:8000/
+----
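+
+For example, the endpoints above can be exercised with `curl`, choosing the output format via the `Accept` header:
+
+[source, bash]
+----
+# Plain text is returned when no Accept header is sent.
+curl -vi -X GET "http://example.com:8000/version/cluster"
+
+# Cluster status as XML.
+curl -vi -X GET -H "Accept: text/xml" "http://example.com:8000/status/cluster"
+
+# Table list as JSON.
+curl -vi -X GET -H "Accept: application/json" "http://example.com:8000/"
+----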
+
+==== Table Information
+
+.Table Schema (GET)
+
+To retrieve the table schema, use a `GET` request with the `/schema` endpoint:
+----
+http://example.com:8000/<table>/schema
+----
+
+.Table Creation
+To create a table, use a `PUT` request with the `/schema` endpoint:
+----
+http://example.com:8000/<table>/schema
+----
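+
+For example, something like the following creates a table named `users` with one column family `cf` (both names are placeholders; the body is an XML `TableSchema` document):
+
+[source, bash]
+----
+# 'users' and 'cf' are example names; adjust to your own schema.
+curl -vi -X PUT \
+  -H "Content-Type: text/xml" \
+  -d '<?xml version="1.0" encoding="UTF-8"?><TableSchema name="users"><ColumnSchema name="cf" /></TableSchema>' \
+  "http://example.com:8000/users/schema"
+----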
+
+.Table Schema Update
+To update a table, use a `POST` request with the `/schema` endpoint:
+----
+http://example.com:8000/<table>/schema
+----
+
+.Table Deletion
+To delete a table, use a `DELETE` request with the `/schema` endpoint:
+----
+http://example.com:8000/<table>/schema
+----
+
+.Table Regions
+----
+http://example.com:8000/<table>/regions
+----
+
+
+==== Gets
+
+.GET a Single Cell Value
+
+To get a single cell value, use a URL scheme like the following:
+
+----
+http://example.com:8000/<table>/<row>/<column>:<qualifier>/<timestamp>/content:raw
+----
+
+The column qualifier and timestamp are optional. If you omit the column qualifier,
+the whole row is returned; if you omit the timestamp, the newest version is returned.
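+
+For example, assuming a table `users` with a row `row1` and a column `cf:a` (placeholder names), the newest version of that cell can be fetched as XML:
+
+[source, bash]
+----
+# Newest version of the cell at row 'row1', column 'cf:a', as XML.
+# Cell values in the response are base64-encoded.
+curl -vi -X GET -H "Accept: text/xml" "http://example.com:8000/users/row1/cf:a"
+----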
+
+.Multiple Single Values (Multi-Get)
+
+To get multiple single values, specify multiple column:qualifier tuples and/or a start-timestamp
+and end-timestamp. You can also limit the number of versions.
 
-Currently most of the documentation on REST exists in the link:http://wiki.apache.org/hadoop/Hbase/Stargate[Apache HBase Wiki on REST] (The REST gateway used to be called 'Stargate').  There are also a nice set of blogs on link:http://blog.cloudera.com/blog/2013/03/how-to-use-the-apache-hbase-rest-interface-part-1/[How-to: Use the Apache HBase REST Interface] by Jesse Anderson.
+----
+http://example.com:8000/<table>/<row>/<column>:<qualifier>?v=<num-versions>
+----
+
+.Globbing Rows
+To scan a series of rows, you can use a `*` glob
+character on the <row> value to match multiple rows.
+
+----
+http://example.com:8000/urls/https|ad.doubleclick.net|*
+----
+
+==== Puts
 
-To run your REST server under SSL, set `hbase.rest.ssl.enabled` to `true` and also set the following configs when you launch the REST server: (See example commands in <<jmx_config,JMX config>>)
+For Puts, `PUT` and `POST` are equivalent.
 
-[source]
+.Put a Single Value
+The column qualifier and the timestamp are optional.
+
+----
+http://example.com:8000/<table>/<row>/<column>:<qualifier>/<timestamp>
+http://example.com:8000/test/testrow/test:testcolumn
 ----
-hbase.rest.ssl.keystore.store
-hbase.rest.ssl.keystore.password
-hbase.rest.ssl.keystore.keypassword
+
+.Put Multiple Values
+To put multiple values, use a false row key. Row, column, and timestamp values in
+the supplied cells override the specifications on the path, allowing you to post
+multiple values to a table in batch. The HTTP response code indicates the status of
+the put. Set the `Content-Type` to `text/xml` for XML encoding or to `application/x-protobuf`
+for protobufs encoding. Supply the commit data in the `PUT` or `POST` body, using
+the <<xml_schema>> and <<protobufs_schema>> as guidelines.
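+
+As a sketch, assuming a table named `users` (the row key, column, and value below are placeholders; they are base64-encoded in the body, as the XML schema requires):
+
+[source, bash]
+----
+# "cm93a2V5MQ==" is base64("rowkey1"), "Y2Y6Y29sMQ==" is base64("cf:col1"),
+# and "dmFsdWUx" is base64("value1"). The row in the path is ignored in favor
+# of the key supplied in the body.
+curl -vi -X PUT \
+  -H "Content-Type: text/xml" \
+  -d '<CellSet><Row key="cm93a2V5MQ=="><Cell column="Y2Y6Y29sMQ==">dmFsdWUx</Cell></Row></CellSet>' \
+  "http://example.com:8000/users/fakerow"
+----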
+
+==== Scans
+
+`PUT` and `POST` are equivalent for scans.
+
+.Scanner Creation
+To create a scanner, use the `/scanner` endpoint. The HTTP response code indicates
+success (201) or failure (anything else). On successful scanner creation, a URI for
+the new scanner is returned in the `Location` response header; use that URI to address the scanner.
+
+----
+http://example.com:8000/<table>/scanner
+----
+
+.Scanner Get Next
+To get the next batch of cells found by the scanner, use the `/scanner/<scanner-id>`
+endpoint, using the URI returned by the scanner creation endpoint. If the scanner
+is exhausted, HTTP status `204` is returned.
+----
+http://example.com:8000/<table>/scanner/<scanner-id>
+----
+
+.Scanner Deletion
+To delete resources associated with a scanner, send an HTTP `DELETE` request to the
+`/scanner/<scanner-id>` endpoint.
+----
+http://example.com:8000/<table>/scanner/<scanner-id>
+----
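+
+Putting these steps together, a sketch of a complete scanner lifecycle against a placeholder `users` table (the scanner id shown is illustrative; use the URI from the `Location` header of the creation response):
+
+[source, bash]
+----
+# 1. Create a scanner; the scanner URI is returned in the Location header.
+curl -vi -X PUT \
+  -H "Content-Type: text/xml" \
+  -d '<Scanner batch="10"/>' \
+  "http://example.com:8000/users/scanner"
+
+# 2. Fetch the next batch of cells using that URI; HTTP 204 means the
+#    scanner is exhausted.
+curl -vi -X GET -H "Accept: text/xml" \
+  "http://example.com:8000/users/scanner/145869072824375522207"
+
+# 3. Release the scanner's resources.
+curl -vi -X DELETE "http://example.com:8000/users/scanner/145869072824375522207"
+----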
+
+[[xml_schema]]
+=== REST XML Schema
+
+[source,xml]
+----
+<schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:tns="RESTSchema">
+
+  <element name="Version" type="tns:Version"></element>
+
+  <complexType name="Version">
+    <attribute name="REST" type="string"></attribute>
+    <attribute name="JVM" type="string"></attribute>
+    <attribute name="OS" type="string"></attribute>
+    <attribute name="Server" type="string"></attribute>
+    <attribute name="Jersey" type="string"></attribute>
+  </complexType>
+
+  <element name="TableList" type="tns:TableList"></element>
+
+  <complexType name="TableList">
+    <sequence>
+      <element name="table" type="tns:Table" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+  </complexType>
+
+  <complexType name="Table">
+    <sequence>
+      <element name="name" type="string"></element>
+    </sequence>
+  </complexType>
+
+  <element name="TableInfo" type="tns:TableInfo"></element>
+
+  <complexType name="TableInfo">
+    <sequence>
+      <element name="region" type="tns:TableRegion" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+    <attribute name="name" type="string"></attribute>
+  </complexType>
+
+  <complexType name="TableRegion">
+    <attribute name="name" type="string"></attribute>
+    <attribute name="id" type="int"></attribute>
+    <attribute name="startKey" type="base64Binary"></attribute>
+    <attribute name="endKey" type="base64Binary"></attribute>
+    <attribute name="location" type="string"></attribute>
+  </complexType>
+
+  <element name="TableSchema" type="tns:TableSchema"></element>
+
+  <complexType name="TableSchema">
+    <sequence>
+      <element name="column" type="tns:ColumnSchema" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+    <attribute name="name" type="string"></attribute>
+    <anyAttribute></anyAttribute>
+  </complexType>
+
+  <complexType name="ColumnSchema">
+    <attribute name="name" type="string"></attribute>
+    <anyAttribute></anyAttribute>
+  </complexType>
+
+  <element name="CellSet" type="tns:CellSet"></element>
+
+  <complexType name="CellSet">
+    <sequence>
+      <element name="row" type="tns:Row" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+  </complexType>
+
+  <element name="Row" type="tns:Row"></element>
+
+  <complexType name="Row">
+    <sequence>
+      <element name="key" type="base64Binary"></element>
+      <element name="cell" type="tns:Cell" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+  </complexType>
+
+  <element name="Cell" type="tns:Cell"></element>
+
+  <complexType name="Cell">
+    <sequence>
+      <element name="value" maxOccurs="1" minOccurs="1">
+        <simpleType><restriction base="base64Binary"></restriction>
+        </simpleType>
+      </element>
+    </sequence>
+    <attribute name="column" type="base64Binary" />
+    <attribute name="timestamp" type="int" />
+  </complexType>
+
+  <element name="Scanner" type="tns:Scanner"></element>
+
+  <complexType name="Scanner">
+    <sequence>
+      <element name="column" type="base64Binary" minOccurs="0" maxOccurs="unbounded"></element>
+    </sequence>
+    <sequence>
+      <element name="filter" type="string" minOccurs="0" maxOccurs="1"></element>
+    </sequence>
+    <attribute name="startRow" type="base64Binary"></attribute>
+    <attribute name="endRow" type="base64Binary"></attribute>
+    <attribute name="batch" type="int"></attribute>
+    <attribute name="startTime" type="int"></attribute>
+    <attribute name="endTime" type="int"></attribute>
+  </complexType>
+
+  <element name="StorageClusterVersion" type="tns:StorageClusterVersion" />
+
+  <complexType name="StorageClusterVersion">
+    <attribute name="version" type="string"></attribute>
+  </complexType>
+
+  <element name="StorageClusterStatus"
+    type="tns:StorageClusterStatus">
+  </element>
+
+  <complexType name="StorageClusterStatus">
+    <sequence>
+      <element name="liveNode" type="tns:Node"
+        maxOccurs="unbounded" minOccurs="0">
+      </element>
+      <element name="deadNode" type="string" maxOccurs="unbounded"
+        minOccurs="0">
+      </element>
+    </sequence>
+    <attribute name="regions" type="int"></attribute>
+    <attribute name="requests" type="int"></attribute>
+    <attribute name="averageLoad" type="float"></attribute>
+  </complexType>
+
+  <complexType name="Node">
+    <sequence>
+      <element name="region" type="tns:Region"
+          maxOccurs="unbounded" minOccurs="0">
+      </element>
+    </sequence>
+    <attribute name="name" type="string"></attribute>
+    <attribute name="startCode" type="int"></attribute>
+    <attribute name="requests" type="int"></attribute>
+    <attribute name="heapSizeMB" type="int"></attribute>
+    <attribute name="maxHeapSizeMB" type="int"></attribute>
+  </complexType>
+
+  <complexType name="Region">
+    <attribute name="name" type="base64Binary"></attribute>
+    <attribute name="stores" type="int"></attribute>
+    <attribute name="storefiles" type="int"></attribute>
+    <attribute name="storefileSizeMB" type="int"></attribute>
+    <attribute name="memstoreSizeMB" type="int"></attribute>
+    <attribute name="storefileIndexSizeMB" type="int"></attribute>
+  </complexType>
+
+</schema>
 ----
 
-HBase ships a simple REST client, see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/rest/client/package-summary.html[REST client] package for details.
-To enable SSL support for it, please also import your certificate into local java cacerts keystore:
+[[protobufs_schema]]
+=== REST Protobufs Schema
+
+[source]
 ----
-keytool -import -trustcacerts -file /home/user/restserver.cert -keystore $JAVA_HOME/jre/lib/security/cacerts
+message Version {
+  optional string restVersion = 1;
+  optional string jvmVersion = 2;
+  optional string osVersion = 3;
+  optional string serverVersion = 4;
+  optional string jerseyVersion = 5;
+}
+
+message StorageClusterStatus {
+  message Region {
+    required bytes name = 1;
+    optional int32 stores = 2;
+    optional int32 storefiles = 3;
+    optional int32 storefileSizeMB = 4;
+    optional int32 memstoreSizeMB = 5;
+    optional int32 storefileIndexSizeMB = 6;
+  }
+  message Node {
+    required string name = 1;    // name:port
+    optional int64 startCode = 2;
+    optional int32 requests = 3;
+    optional int32 heapSizeMB = 4;
+    optional int32 maxHeapSizeMB = 5;
+    repeated Region regions = 6;
+  }
+  // node status
+  repeated Node liveNodes = 1;
+  repeated string deadNodes = 2;
+  // summary statistics
+  optional int32 regions = 3;
+  optional int32 requests = 4;
+  optional double averageLoad = 5;
+}
+
+message TableList {
+  repeated string name = 1;
+}
+
+message TableInfo {
+  required string name = 1;
+  message Region {
+    required string name = 1;
+    optional bytes startKey = 2;
+    optional bytes endKey = 3;
+    optional int64 id = 4;
+    optional string location = 5;
+  }
+  repeated Region regions = 2;
+}
+
+message TableSchema {
+  optional string name = 1;
+  message Attribute {
+    required string name = 1;
+    required string value = 2;
+  }
+  repeated Attribute attrs = 2;
+  repeated ColumnSchema columns = 3;
+  // optional helpful encodings of commonly used attributes
+  optional bool inMemory = 4;
+  optional bool readOnly = 5;
+}
+
+message ColumnSchema {
+  optional string name = 1;
+  message Attribute {
+    required string name = 1;
+    required string value = 2;
+  }
+  repeated Attribute attrs = 2;
+  // optional helpful encodings of commonly used attributes
+  optional int32 ttl = 3;
+  optional int32 maxVersions = 4;
+  optional string compression = 5;
+}
+
+message Cell {
+  optional bytes row = 1;       // unused if Cell is in a CellSet
+  optional bytes column = 2;
+  optional int64 timestamp = 3;
+  optional bytes data = 4;
+}
+
+message CellSet {
+  message Row {
+    required bytes key = 1;
+    repeated Cell values = 2;
+  }
+  repeated Row rows = 1;
+}
+
+message Scanner {
+  optional bytes startRow = 1;
+  optional bytes endRow = 2;
+  repeated bytes columns = 3;
+  optional int32 batch = 4;
+  optional int64 startTime = 5;
+  optional int64 endTime = 6;
+}
 ----
 
 == Thrift
@@ -64,3 +486,331 @@ Documentation about Thrift has moved to <<thrift>>.
 
 FB's Chip Turner wrote a pure C/C++ client.
 link:https://github.com/facebook/native-cpp-hbase-client[Check it out].
+
+[[jdo]]
+
+== Using Java Data Objects (JDO) with HBase
+
+link:https://db.apache.org/jdo/[Java Data Objects (JDO)] is a standard way to
+access persistent data in databases, using plain old Java objects (POJO) to
+represent persistent data.
+
+.Dependencies
+This code example has the following dependencies:
+
+. HBase 0.90.x or newer
+. commons-beanutils.jar (http://commons.apache.org/)
+. commons-pool-1.5.5.jar (http://commons.apache.org/)
+. transactional-tableindexed for HBase 0.90 (https://github.com/hbase-trx/hbase-transactional-tableindexed)
+
+.Download `hbase-jdo`
+Download the code from http://code.google.com/p/hbase-jdo/.
+
+.JDO Example
+====
+
+This example uses JDO to create a table and an index, insert a row into a table, get
+a row, get a column value, perform a query, and do some additional HBase operations.
+
+[source, java]
+----
+package com.apache.hadoop.hbase.client.jdo.examples;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.InputStream;
+import java.util.Hashtable;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.tableindexed.IndexedTable;
+
+import com.apache.hadoop.hbase.client.jdo.AbstractHBaseDBO;
+import com.apache.hadoop.hbase.client.jdo.HBaseBigFile;
+import com.apache.hadoop.hbase.client.jdo.HBaseDBOImpl;
+import com.apache.hadoop.hbase.client.jdo.query.DeleteQuery;
+import com.apache.hadoop.hbase.client.jdo.query.HBaseOrder;
+import com.apache.hadoop.hbase.client.jdo.query.HBaseParam;
+import com.apache.hadoop.hbase.client.jdo.query.InsertQuery;
+import com.apache.hadoop.hbase.client.jdo.query.QSearch;
+import com.apache.hadoop.hbase.client.jdo.query.SelectQuery;
+import com.apache.hadoop.hbase.client.jdo.query.UpdateQuery;
+
+/**
+ * Hbase JDO Example.
+ *
+ * dependency library.
+ * - commons-beanutils.jar
+ * - commons-pool-1.5.5.jar
+ * - hbase0.90.0-transactional.jar
+ *
+ * you can expand Delete,Select,Update,Insert Query classes.
+ *
+ */
+public class HBaseExample {
+  public static void main(String[] args) throws Exception {
+    AbstractHBaseDBO dbo = new HBaseDBOImpl();
+
+    //*drop the table if it already exists.*
+    if(dbo.isTableExist("user")){
+            dbo.deleteTable("user");
+    }
+
+    //*create table*
+    dbo.createTableIfNotExist("user",HBaseOrder.DESC,"account");
+    //dbo.createTableIfNotExist("user",HBaseOrder.ASC,"account");
+
+    //create index.
+    String[] cols={"id","name"};
+    dbo.addIndexExistingTable("user","account",cols);
+
+    //insert
+    InsertQuery insert = dbo.createInsertQuery("user");
+    UserBean bean = new UserBean();
+    bean.setFamily("account");
+    bean.setAge(20);
+    bean.setEmail("ncanis@gmail.com");
+    bean.setId("ncanis");
+    bean.setName("ncanis");
+    bean.setPassword("1111");
+    insert.insert(bean);
+
+    //select 1 row
+    SelectQuery select = dbo.createSelectQuery("user");
+    UserBean resultBean = (UserBean)select.select(bean.getRow(),UserBean.class);
+
+    // select column value.
+    String value = (String)select.selectColumn(bean.getRow(),"account","id",String.class);
+
+    // search with option (QSearch has EQUAL, NOT_EQUAL, LIKE)
+    // select id,password,name,email from account where id='ncanis' limit startRow,20
+    HBaseParam param = new HBaseParam();
+    param.setPage(bean.getRow(),20);
+    param.addColumn("id","password","name","email");
+    param.addSearchOption("id","ncanis",QSearch.EQUAL);
+    select.search("account", param, UserBean.class);
+
+    // search column value is existing.
+    boolean isExist = select.existColumnValue("account","id","ncanis".getBytes());
+
+    // update password.
+    UpdateQuery update = dbo.createUpdateQuery("user");
+    Hashtable<String, byte[]> colsTable = new Hashtable<String, byte[]>();
+    colsTable.put("password","2222".getBytes());
+    update.update(bean.getRow(),"account",colsTable);
+
+    //delete
+    DeleteQuery delete = dbo.createDeleteQuery("user");
+    delete.deleteRow(resultBean.getRow());
+
+    ////////////////////////////////////
+    // etc
+
+    // HTable pool with apache commons pool
+    // borrow and release. HBasePoolManager(maxActive, minIdle etc..)
+    IndexedTable table = dbo.getPool().borrow("user");
+    dbo.getPool().release(table);
+
+    // upload bigFile by hadoop directly.
+    HBaseBigFile bigFile = new HBaseBigFile();
+    File file = new File("doc/movie.avi");
+    FileInputStream fis = new FileInputStream(file);
+    Path rootPath = new Path("/files/");
+    String filename = "movie.avi";
+    bigFile.uploadFile(rootPath,filename,fis,true);
+
+    // receive file stream from hadoop.
+    Path p = new Path(rootPath,filename);
+    InputStream is = bigFile.path2Stream(p,4096);
+
+  }
+}
+----
+====
+
+[[scala]]
+== Scala
+
+=== Setting the Classpath
+
+To use Scala with HBase, your CLASSPATH must include HBase's classpath as well as
+the Scala JARs required by your code. First, use the following command on a server
+running the HBase RegionServer process, to get HBase's classpath.
+
+[source, bash]
+----
+$ ps aux |grep regionserver| awk -F 'java.library.path=' {'print $2'} | awk {'print $1'}
+
+/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64
+----
+
+Set the `$CLASSPATH` environment variable to include the path you found in the previous
+step, plus the path of `scala-library.jar` and each additional Scala-related JAR needed for
+your project.
+
+[source, bash]
+----
+$ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64:/path/to/scala-library.jar
+----
+
+=== Scala SBT File
+
+Your `build.sbt` file needs the following `resolvers` and `libraryDependencies` to work
+with HBase.
+
+----
+resolvers += "Apache HBase" at "https://repository.apache.org/content/repositories/releases"
+
+resolvers += "Thrift" at "http://people.apache.org/~rawson/repo/"
+
+libraryDependencies ++= Seq(
+    "org.apache.hadoop" % "hadoop-core" % "0.20.2",
+    "org.apache.hbase" % "hbase" % "0.90.4"
+)
+----
+
+=== Example Scala Code
+
+This example lists HBase tables, creates a new table, and adds a row to it.
+
+[source, scala]
+----
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.client.{Connection,ConnectionFactory,HBaseAdmin,HTable,Put,Get}
+import org.apache.hadoop.hbase.util.Bytes
+
+
+val conf = new HBaseConfiguration()
+val connection = ConnectionFactory.createConnection(conf);
+val admin = connection.getAdmin();
+
+// list the tables
+val listtables=admin.listTables()
+listtables.foreach(println)
+
+// let's insert some data in 'mytable' and get the row
+
+val table = new HTable(conf, "mytable")
+
+val theput= new Put(Bytes.toBytes("rowkey1"))
+
+theput.add(Bytes.toBytes("ids"),Bytes.toBytes("id1"),Bytes.toBytes("one"))
+table.put(theput)
+
+val theget= new Get(Bytes.toBytes("rowkey1"))
+val result=table.get(theget)
+val value=result.value()
+println(Bytes.toString(value))
+----
+
+[[jython]]
+== Jython
+
+
+=== Setting the Classpath
+
+To use Jython with HBase, your CLASSPATH must include HBase's classpath as well as
+the Jython JARs required by your code. First, use the following command on a server
+running the HBase RegionServer process, to get HBase's classpath.
+
+[source, bash]
+----
+$ ps aux |grep regionserver| awk -F 'java.library.path=' {'print $2'} | awk {'print $1'}
+
+/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64
+----
+
+Set the `$CLASSPATH` environment variable to include the path you found in the previous
+step, plus the path to `jython.jar` and each additional Jython-related JAR needed for
+your project.
+
+[source, bash]
+----
+$ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64:/path/to/jython.jar
+----
+
+Start a Jython shell with HBase and Hadoop JARs in the classpath:
+
+ $ bin/hbase org.python.util.jython
+
+=== Jython Code Examples
+
+.Table Creation, Population, Get, and Delete with Jython
+====
+The following Jython code example creates a table, populates it with data, fetches
+the data, and deletes the table.
+
+[source,jython]
+----
+import java.lang
+from org.apache.hadoop.hbase import HBaseConfiguration, HTableDescriptor, HColumnDescriptor, HConstants, TableName
+from org.apache.hadoop.hbase.client import HBaseAdmin, HTable, Get
+from org.apache.hadoop.hbase.io import Cell, RowResult
+
+# First get a conf object.  This will read in the configuration
+# that is out in your hbase-*.xml files such as location of the
+# hbase master node.
+conf = HBaseConfiguration()
+
+# Create a table named 'test' that has two column families,
+# one named 'content', and the other 'anchor'.  The colons
+# are required for column family names.
+tablename = TableName.valueOf("test")
+
+desc = HTableDescriptor(tablename)
+desc.addFamily(HColumnDescriptor("content:"))
+desc.addFamily(HColumnDescriptor("anchor:"))
+admin = HBaseAdmin(conf)
+
+# Drop and recreate if it exists
+if admin.tableExists(tablename):
+    admin.disableTable(tablename)
+    admin.deleteTable(tablename)
+admin.createTable(desc)
+
+tables = admin.listTables()
+table = HTable(conf, tablename)
+
+# Add content to 'column:' on a row named 'row_x'
+row = 'row_x'
+update = Get(row)
+update.put('content:', 'some content')
+table.commit(update)
+
+# Now fetch the content just added, returns a byte[]
+data_row = table.get(row, "content:")
+data = java.lang.String(data_row.value, "UTF8")
+
+print "The fetched row contains the value '%s'" % data
+
+# Delete the table.
+admin.disableTable(desc.getName())
+admin.deleteTable(desc.getName())
+----
+====
+
+.Table Scan Using Jython
+====
+This example scans a table and returns the results that match a given family qualifier.
+
+[source, jython]
+----
+# Print all rows that are members of a particular column family
+# by passing a regex for family qualifier
+
+import java.lang
+
+from org.apache.hadoop.hbase import HBaseConfiguration
+from org.apache.hadoop.hbase.client import HTable
+
+conf = HBaseConfiguration()
+
+table = HTable(conf, "wiki")
+col = "title:.*$"
+
+scanner = table.getScanner([col], "")
+while 1:
+    result = scanner.next()
+    if not result:
+        break
+    print java.lang.String(result.row), java.lang.String(result.get('title:').value)
+----
+====
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/faq.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/faq.adoc b/src/main/asciidoc/_chapters/faq.adoc
index 22e4ad3..a622650 100644
--- a/src/main/asciidoc/_chapters/faq.adoc
+++ b/src/main/asciidoc/_chapters/faq.adoc
@@ -46,7 +46,7 @@ What is the history of HBase?::
 
 === Upgrading
 How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+?::
-  In HBase 0.96, the project moved to a modular structure. Adjust your project's dependencies to rely upon the `hbase-client` module or another module as appropriate, rather than a single JAR. You can model your Maven depency after one of the following, depending on your targeted version of HBase. See Section 3.5, “Upgrading from 0.94.x to 0.96.x” or Section 3.3, “Upgrading from 0.96.x to 0.98.x” for more information.
+  In HBase 0.96, the project moved to a modular structure. Adjust your project's dependencies to rely upon the `hbase-client` module or another module as appropriate, rather than a single JAR. You can model your Maven dependency after one of the following, depending on your targeted version of HBase. See Section 3.5, “Upgrading from 0.94.x to 0.96.x” or Section 3.3, “Upgrading from 0.96.x to 0.98.x” for more information.
 +
 .Maven Dependency for HBase 0.98
 [source,xml]
@@ -55,18 +55,18 @@ How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+?::
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase-client</artifactId>
   <version>0.98.5-hadoop2</version>
-</dependency>  
-----              
-+    
-.Maven Dependency for HBase 0.96       
+</dependency>
+----
++
+.Maven Dependency for HBase 0.96
 [source,xml]
 ----
 <dependency>
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase-client</artifactId>
   <version>0.96.2-hadoop2</version>
-</dependency>  
-----           
+</dependency>
+----
 +
 .Maven Dependency for HBase 0.94
 [source,xml]
@@ -75,9 +75,9 @@ How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+?::
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase</artifactId>
   <version>0.94.3</version>
-</dependency>   
-----         
-                
+</dependency>
+----
+
 
 === Architecture
 How does HBase handle Region-RegionServer assignment and locality?::
@@ -91,7 +91,7 @@ Where can I learn about the rest of the configuration options?::
   See <<configuration>>.
 
 === Schema Design / Data Access
-  
+
 How should I design my schema in HBase?::
   See <<datamodel>> and <<schema>>.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/getting_started.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc b/src/main/asciidoc/_chapters/getting_started.adoc
index 41674a0..1b38e6e 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -57,7 +57,7 @@ Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1. U
 
 .Example /etc/hosts File for Ubuntu
 ====
-The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, on Ubuntu. Use this as a template if you run into trouble. 
+The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, on Ubuntu. Use this as a template if you run into trouble.
 [listing]
 ----
 127.0.0.1 localhost
@@ -80,15 +80,16 @@ See <<java,Java>> for information about supported JDK versions.
   This will take you to a mirror of _HBase
   Releases_.
   Click on the folder named _stable_ and then download the binary file that ends in _.tar.gz_ to your local filesystem.
-  Be sure to choose the version that corresponds with the version of Hadoop you are likely to use later.
-  In most cases, you should choose the file for Hadoop 2, which will be called something like _hbase-0.98.3-hadoop2-bin.tar.gz_.
+  For versions prior to 1.x, be sure to choose the release that corresponds with the version of Hadoop you are
+  likely to use later (in most cases, you should choose the file for Hadoop 2, which will be called
+  something like _hbase-0.98.13-hadoop2-bin.tar.gz_).
   Do not download the file ending in _src.tar.gz_ for now.
 . Extract the downloaded file, and change to the newly-created directory.
 +
 ----
 
-$ tar xzvf hbase-<?eval ${project.version}?>-hadoop2-bin.tar.gz
-$ cd hbase-<?eval ${project.version}?>-hadoop2/
+$ tar xzvf hbase-<?eval ${project.version}?>-bin.tar.gz
+$ cd hbase-<?eval ${project.version}?>/
 ----
 
 . For HBase 0.98.5 and later, you are required to set the `JAVA_HOME` environment variable before starting HBase.
@@ -294,9 +295,11 @@ You can skip the HDFS configuration to continue storing your data in the local f
 .Hadoop Configuration
 [NOTE]
 ====
-This procedure assumes that you have configured Hadoop and HDFS on your local system and or a remote system, and that they are running and available.
-It also assumes you are using Hadoop 2.
-Currently, the documentation on the Hadoop website does not include a quick start for Hadoop 2, but the guide at link:http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide          is a good starting point.
+This procedure assumes that you have configured Hadoop and HDFS on your local system and/or a remote
+system, and that they are running and available. It also assumes you are using Hadoop 2.
+The guide on
+link:http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html[Setting up a Single Node Cluster]
+in the Hadoop documentation is a good starting point.
 ====
 
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/hbase-default.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase-default.adoc b/src/main/asciidoc/_chapters/hbase-default.adoc
index bf56dd3..26929a3 100644
--- a/src/main/asciidoc/_chapters/hbase-default.adoc
+++ b/src/main/asciidoc/_chapters/hbase-default.adoc
@@ -46,7 +46,7 @@ Temporary directory on the local filesystem.
 .Default
 `${java.io.tmpdir}/hbase-${user.name}`
 
-  
+
 [[hbase.rootdir]]
 *`hbase.rootdir`*::
 +
@@ -64,7 +64,7 @@ The directory shared by region servers and into
 .Default
 `${hbase.tmp.dir}/hbase`
 
-  
+
 [[hbase.cluster.distributed]]
 *`hbase.cluster.distributed`*::
 +
@@ -77,7 +77,7 @@ The mode the cluster will be in. Possible values are
 .Default
 `false`
 
-  
+
 [[hbase.zookeeper.quorum]]
 *`hbase.zookeeper.quorum`*::
 +
@@ -97,7 +97,7 @@ Comma separated list of servers in the ZooKeeper ensemble
 .Default
 `localhost`
 
-  
+
 [[hbase.local.dir]]
 *`hbase.local.dir`*::
 +
@@ -108,7 +108,7 @@ Directory on the local filesystem to be used
 .Default
 `${hbase.tmp.dir}/local/`
 
-  
+
 [[hbase.master.info.port]]
 *`hbase.master.info.port`*::
 +
@@ -119,18 +119,18 @@ The port for the HBase Master web UI.
 .Default
 `16010`
 
-  
+
 [[hbase.master.info.bindAddress]]
 *`hbase.master.info.bindAddress`*::
 +
 .Description
 The bind address for the HBase Master web UI
-    
+
 +
 .Default
 `0.0.0.0`
 
-  
+
 [[hbase.master.logcleaner.plugins]]
 *`hbase.master.logcleaner.plugins`*::
 +
@@ -145,7 +145,7 @@ A comma-separated list of BaseLogCleanerDelegate invoked by
 .Default
 `org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner`
 
-  
+
 [[hbase.master.logcleaner.ttl]]
 *`hbase.master.logcleaner.ttl`*::
 +
@@ -156,7 +156,7 @@ Maximum time a WAL can stay in the .oldlogdir directory,
 .Default
 `600000`
 
-  
+
 [[hbase.master.hfilecleaner.plugins]]
 *`hbase.master.hfilecleaner.plugins`*::
 +
@@ -172,7 +172,7 @@ A comma-separated list of BaseHFileCleanerDelegate invoked by
 .Default
 `org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner`
 
-  
+
 [[hbase.master.catalog.timeout]]
 *`hbase.master.catalog.timeout`*::
 +
@@ -183,7 +183,7 @@ Timeout value for the Catalog Janitor from the master to
 .Default
 `600000`
 
-  
+
 [[hbase.master.infoserver.redirect]]
 *`hbase.master.infoserver.redirect`*::
 +
@@ -195,7 +195,7 @@ Whether or not the Master listens to the Master web
 .Default
 `true`
 
-  
+
 [[hbase.regionserver.port]]
 *`hbase.regionserver.port`*::
 +
@@ -205,7 +205,7 @@ The port the HBase RegionServer binds to.
 .Default
 `16020`
 
-  
+
 [[hbase.regionserver.info.port]]
 *`hbase.regionserver.info.port`*::
 +
@@ -216,7 +216,7 @@ The port for the HBase RegionServer web UI
 .Default
 `16030`
 
-  
+
 [[hbase.regionserver.info.bindAddress]]
 *`hbase.regionserver.info.bindAddress`*::
 +
@@ -226,7 +226,7 @@ The address for the HBase RegionServer web UI
 .Default
 `0.0.0.0`
 
-  
+
 [[hbase.regionserver.info.port.auto]]
 *`hbase.regionserver.info.port.auto`*::
 +
@@ -239,7 +239,7 @@ Whether or not the Master or RegionServer
 .Default
 `false`
 
-  
+
 [[hbase.regionserver.handler.count]]
 *`hbase.regionserver.handler.count`*::
 +
@@ -250,7 +250,7 @@ Count of RPC Listener instances spun up on RegionServers.
 .Default
 `30`
 
-  
+
 [[hbase.ipc.server.callqueue.handler.factor]]
 *`hbase.ipc.server.callqueue.handler.factor`*::
 +
@@ -262,7 +262,7 @@ Factor to determine the number of call queues.
 .Default
 `0.1`
 
-  
+
 [[hbase.ipc.server.callqueue.read.ratio]]
 *`hbase.ipc.server.callqueue.read.ratio`*::
 +
@@ -287,12 +287,12 @@ Split the call queues into read and write queues.
       and 2 queues will contain only write requests.
       a read.ratio of 1 means that: 9 queues will contain only read requests
       and 1 queue will contain only write requests.
-    
+
 +
 .Default
 `0`
 
-  
+
 [[hbase.ipc.server.callqueue.scan.ratio]]
 *`hbase.ipc.server.callqueue.scan.ratio`*::
 +
@@ -313,12 +313,12 @@ Given the number of read call queues, calculated from the total number
       and 4 queues will contain only short-read requests.
       a scan.ratio of 0.8 means that: 6 queues will contain only long-read requests
       and 2 queues will contain only short-read requests.
-    
+
 +
 .Default
 `0`
 
-  
+
 [[hbase.regionserver.msginterval]]
 *`hbase.regionserver.msginterval`*::
 +
@@ -329,7 +329,7 @@ Interval between messages from the RegionServer to Master
 .Default
 `3000`
 
-  
+
 [[hbase.regionserver.regionSplitLimit]]
 *`hbase.regionserver.regionSplitLimit`*::
 +
@@ -342,7 +342,7 @@ Limit for the number of regions after which no more region
 .Default
 `2147483647`
 
-  
+
 [[hbase.regionserver.logroll.period]]
 *`hbase.regionserver.logroll.period`*::
 +
@@ -353,7 +353,7 @@ Period at which we will roll the commit log regardless
 .Default
 `3600000`
 
-  
+
 [[hbase.regionserver.logroll.errors.tolerated]]
 *`hbase.regionserver.logroll.errors.tolerated`*::
 +
@@ -367,7 +367,7 @@ The number of consecutive WAL close errors we will allow
 .Default
 `2`
 
-  
+
 [[hbase.regionserver.hlog.reader.impl]]
 *`hbase.regionserver.hlog.reader.impl`*::
 +
@@ -377,7 +377,7 @@ The WAL file reader implementation.
 .Default
 `org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader`
 
-  
+
 [[hbase.regionserver.hlog.writer.impl]]
 *`hbase.regionserver.hlog.writer.impl`*::
 +
@@ -387,7 +387,7 @@ The WAL file writer implementation.
 .Default
 `org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter`
 
-  
+
 [[hbase.master.distributed.log.replay]]
 *`hbase.master.distributed.log.replay`*::
 +
@@ -397,13 +397,13 @@ Enable 'distributed log replay' as default engine splitting
     back to the old mode 'distributed log splitter', set the value to
    'false'.  'Distributed log replay' improves MTTR because it does not
     write intermediate files.  'DLR' required that 'hfile.format.version'
-    be set to version 3 or higher. 
-    
+    be set to version 3 or higher.
+
 +
 .Default
 `true`
 
-  
+
 [[hbase.regionserver.global.memstore.size]]
 *`hbase.regionserver.global.memstore.size`*::
 +
@@ -416,20 +416,20 @@ Maximum size of all memstores in a region server before new
 .Default
 `0.4`
 
-  
+
 [[hbase.regionserver.global.memstore.size.lower.limit]]
 *`hbase.regionserver.global.memstore.size.lower.limit`*::
 +
 .Description
 Maximum size of all memstores in a region server before flushes are forced.
       Defaults to 95% of hbase.regionserver.global.memstore.size.
-      A 100% value for this value causes the minimum possible flushing to occur when updates are 
+      A value of 100% causes the minimum possible flushing to occur when updates are
       blocked due to memstore limiting.
 +
 .Default
 `0.95`
 
-  
+
 [[hbase.regionserver.optionalcacheflushinterval]]
 *`hbase.regionserver.optionalcacheflushinterval`*::
 +
@@ -441,7 +441,7 @@ Maximum size of all memstores in a region server before flushes are forced.
 .Default
 `3600000`
 
-  
+
 [[hbase.regionserver.catalog.timeout]]
 *`hbase.regionserver.catalog.timeout`*::
 +
@@ -451,7 +451,7 @@ Timeout value for the Catalog Janitor from the regionserver to META.
 .Default
 `600000`
 
-  
+
 [[hbase.regionserver.dns.interface]]
 *`hbase.regionserver.dns.interface`*::
 +
@@ -462,7 +462,7 @@ The name of the Network Interface from which a region server
 .Default
 `default`
 
-  
+
 [[hbase.regionserver.dns.nameserver]]
 *`hbase.regionserver.dns.nameserver`*::
 +
@@ -474,7 +474,7 @@ The host name or IP address of the name server (DNS)
 .Default
 `default`
 
-  
+
 [[hbase.regionserver.region.split.policy]]
 *`hbase.regionserver.region.split.policy`*::
 +
@@ -483,12 +483,12 @@ The host name or IP address of the name server (DNS)
       A split policy determines when a region should be split. The various other split policies that
       are available currently are ConstantSizeRegionSplitPolicy, DisabledRegionSplitPolicy,
       DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy etc.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy`
 
-  
+
 [[zookeeper.session.timeout]]
 *`zookeeper.session.timeout`*::
 +
@@ -497,17 +497,18 @@ ZooKeeper session timeout in milliseconds. It is used in two different ways.
       First, this value is used in the ZK client that HBase uses to connect to the ensemble.
       It is also used by HBase when it starts a ZK server and it is passed as the 'maxSessionTimeout'. See
       http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions.
-      For example, if a HBase region server connects to a ZK ensemble that's also managed by HBase, then the
+      For example, if an HBase region server connects to a ZK ensemble that's also managed
+      by HBase, then the
       session timeout will be the one specified by this configuration. But, a region server that connects
       to an ensemble managed with a different configuration will be subjected that ensemble's maxSessionTimeout. So,
       even though HBase might propose using 90 seconds, the ensemble can have a max timeout lower than this and
       it will take precedence. The current default that ZK ships with is 40 seconds, which is lower than HBase's.
-    
+
 +
 .Default
 `90000`
 
-  
+
 [[zookeeper.znode.parent]]
 *`zookeeper.znode.parent`*::
 +
@@ -520,7 +521,7 @@ Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper
 .Default
 `/hbase`
 
-  
+
 [[zookeeper.znode.rootserver]]
 *`zookeeper.znode.rootserver`*::
 +
@@ -533,7 +534,7 @@ Path to ZNode holding root region location. This is written by
 .Default
 `root-region-server`
 
-  
+
 [[zookeeper.znode.acl.parent]]
 *`zookeeper.znode.acl.parent`*::
 +
@@ -543,7 +544,7 @@ Root ZNode for access control lists.
 .Default
 `acl`
 
-  
+
 [[hbase.zookeeper.dns.interface]]
 *`hbase.zookeeper.dns.interface`*::
 +
@@ -554,7 +555,7 @@ The name of the Network Interface from which a ZooKeeper server
 .Default
 `default`
 
-  
+
 [[hbase.zookeeper.dns.nameserver]]
 *`hbase.zookeeper.dns.nameserver`*::
 +
@@ -566,7 +567,7 @@ The host name or IP address of the name server (DNS)
 .Default
 `default`
 
-  
+
 [[hbase.zookeeper.peerport]]
 *`hbase.zookeeper.peerport`*::
 +
@@ -578,7 +579,7 @@ Port used by ZooKeeper peers to talk to each other.
 .Default
 `2888`
 
-  
+
 [[hbase.zookeeper.leaderport]]
 *`hbase.zookeeper.leaderport`*::
 +
@@ -590,7 +591,7 @@ Port used by ZooKeeper for leader election.
 .Default
 `3888`
 
-  
+
 [[hbase.zookeeper.useMulti]]
 *`hbase.zookeeper.useMulti`*::
 +
@@ -605,21 +606,7 @@ Instructs HBase to make use of ZooKeeper's multi-update functionality.
 .Default
 `true`
 
-  
-[[hbase.config.read.zookeeper.config]]
-*`hbase.config.read.zookeeper.config`*::
-+
-.Description
-
-        Set to true to allow HBaseConfiguration to read the
-        zoo.cfg file for ZooKeeper properties. Switching this to true
-        is not recommended, since the functionality of reading ZK
-        properties from a zoo.cfg file has been deprecated.
-+
-.Default
-`false`
 
-  
 [[hbase.zookeeper.property.initLimit]]
 *`hbase.zookeeper.property.initLimit`*::
 +
@@ -630,7 +617,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `10`
 
-  
+
 [[hbase.zookeeper.property.syncLimit]]
 *`hbase.zookeeper.property.syncLimit`*::
 +
@@ -642,7 +629,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `5`
 
-  
+
 [[hbase.zookeeper.property.dataDir]]
 *`hbase.zookeeper.property.dataDir`*::
 +
@@ -653,7 +640,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `${hbase.tmp.dir}/zookeeper`
 
-  
+
 [[hbase.zookeeper.property.clientPort]]
 *`hbase.zookeeper.property.clientPort`*::
 +
@@ -664,7 +651,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `2181`
 
-  
+
 [[hbase.zookeeper.property.maxClientCnxns]]
 *`hbase.zookeeper.property.maxClientCnxns`*::
 +
@@ -678,7 +665,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `300`
 
-  
+
 [[hbase.client.write.buffer]]
 *`hbase.client.write.buffer`*::
 +
@@ -693,7 +680,7 @@ Default size of the HTable client write buffer in bytes.
 .Default
 `2097152`
 
-  
+
 [[hbase.client.pause]]
 *`hbase.client.pause`*::
 +
@@ -706,7 +693,7 @@ General client pause value.  Used mostly as value to wait
 .Default
 `100`
 
-  
+
 [[hbase.client.retries.number]]
 *`hbase.client.retries.number`*::
 +
@@ -721,7 +708,7 @@ Maximum retries.  Used as maximum for all retryable
 .Default
 `35`
 
-  
+
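These two client knobs are often tuned together in _hbase-site.xml_. The values below are hypothetical: a smaller retry budget than the default 35, with the default 100 ms pause left in place.

[source,xml]
----
<property>
  <name>hbase.client.pause</name>
  <value>100</value>
</property>
<property>
  <!-- Fewer retries than the default 35; purely illustrative. -->
  <name>hbase.client.retries.number</name>
  <value>10</value>
</property>
----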
 [[hbase.client.max.total.tasks]]
 *`hbase.client.max.total.tasks`*::
 +
@@ -732,7 +719,7 @@ The maximum number of concurrent tasks a single HTable instance will
 .Default
 `100`
 
-  
+
 [[hbase.client.max.perserver.tasks]]
 *`hbase.client.max.perserver.tasks`*::
 +
@@ -743,7 +730,7 @@ The maximum number of concurrent tasks a single HTable instance will
 .Default
 `5`
 
-  
+
 [[hbase.client.max.perregion.tasks]]
 *`hbase.client.max.perregion.tasks`*::
 +
@@ -756,7 +743,7 @@ The maximum number of concurrent connections the client will
 .Default
 `1`
 
-  
+
 [[hbase.client.scanner.caching]]
 *`hbase.client.scanner.caching`*::
 +
@@ -771,7 +758,7 @@ Number of rows that will be fetched when calling next
 .Default
 `100`
 
-  
+
 [[hbase.client.keyvalue.maxsize]]
 *`hbase.client.keyvalue.maxsize`*::
 +
@@ -786,7 +773,7 @@ Specifies the combined maximum allowed size of a KeyValue
 .Default
 `10485760`
 
-  
+
 [[hbase.client.scanner.timeout.period]]
 *`hbase.client.scanner.timeout.period`*::
 +
@@ -796,7 +783,7 @@ Client scanner lease period in milliseconds.
 .Default
 `60000`
 
-  
+
 [[hbase.client.localityCheck.threadPoolSize]]
 *`hbase.client.localityCheck.threadPoolSize`*::
 +
@@ -806,7 +793,7 @@ Client scanner lease period in milliseconds.
 .Default
 `2`
 
-  
+
 [[hbase.bulkload.retries.number]]
 *`hbase.bulkload.retries.number`*::
 +
@@ -818,7 +805,7 @@ Maximum retries.  This is maximum number of iterations
 .Default
 `10`
 
-  
+
 [[hbase.balancer.period]]
 *`hbase.balancer.period`*::
@@ -830,7 +817,7 @@ Period at which the region balancer runs in the Master.
 .Default
 `300000`
 
-  
+
 [[hbase.regions.slop]]
 *`hbase.regions.slop`*::
 +
@@ -840,7 +827,7 @@ Rebalance if any regionserver has average + (average * slop) regions.
 .Default
 `0.2`
 
-  
+
 [[hbase.server.thread.wakefrequency]]
 *`hbase.server.thread.wakefrequency`*::
 +
@@ -851,20 +838,20 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `10000`
 
-  
+
 [[hbase.server.versionfile.writeattempts]]
 *`hbase.server.versionfile.writeattempts`*::
 +
 .Description
 
     How many times to retry attempting to write a version file
-    before just aborting. Each attempt is seperated by the
+    before just aborting. Each attempt is separated by the
     hbase.server.thread.wakefrequency milliseconds.
 +
 .Default
 `3`
 
-  
+
 [[hbase.hregion.memstore.flush.size]]
 *`hbase.hregion.memstore.flush.size`*::
 +
@@ -877,7 +864,7 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `134217728`
 
-  
+
 [[hbase.hregion.percolumnfamilyflush.size.lower.bound]]
 *`hbase.hregion.percolumnfamilyflush.size.lower.bound`*::
 +
@@ -890,12 +877,12 @@ Time to sleep in between searches for work (in milliseconds).
     memstore size more than this, all the memstores will be flushed
     (just as usual). This value should be less than half of the total memstore
     threshold (hbase.hregion.memstore.flush.size).
-    
+
 +
 .Default
 `16777216`
 
-  
+
 [[hbase.hregion.preclose.flush.size]]
 *`hbase.hregion.preclose.flush.size`*::
 +
@@ -914,7 +901,7 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `5242880`
 
-  
+
 [[hbase.hregion.memstore.block.multiplier]]
 *`hbase.hregion.memstore.block.multiplier`*::
 +
@@ -930,7 +917,7 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `4`
 
-  
+
 [[hbase.hregion.memstore.mslab.enabled]]
 *`hbase.hregion.memstore.mslab.enabled`*::
 +
@@ -944,19 +931,19 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `true`
 
-  
+
 [[hbase.hregion.max.filesize]]
 *`hbase.hregion.max.filesize`*::
 +
 .Description
 
-    Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this 
+    Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this
     value, the region is split in two.
 +
 .Default
 `10737418240`
 
-  
+
 [[hbase.hregion.majorcompaction]]
 *`hbase.hregion.majorcompaction`*::
 +
@@ -973,7 +960,7 @@ Time between major compactions, expressed in milliseconds. Set to 0 to disable
 .Default
 `604800000`
 
-  
+
 [[hbase.hregion.majorcompaction.jitter]]
 *`hbase.hregion.majorcompaction.jitter`*::
 +
@@ -986,32 +973,32 @@ A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occ
 .Default
 `0.50`
 
-  
+
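Operators who prefer to schedule major compactions themselves can disable the time-based trigger, as the description above allows. A minimal sketch:

[source,xml]
----
<property>
  <!-- 0 disables time-based automatic major compactions. -->
  <name>hbase.hregion.majorcompaction</name>
  <value>0</value>
</property>
----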
 [[hbase.hstore.compactionThreshold]]
 *`hbase.hstore.compactionThreshold`*::
 +
 .Description
- If more than this number of StoreFiles exist in any one Store 
-      (one StoreFile is written per flush of MemStore), a compaction is run to rewrite all 
+ If more than this number of StoreFiles exist in any one Store
+      (one StoreFile is written per flush of MemStore), a compaction is run to rewrite all
       StoreFiles into a single StoreFile. Larger values delay compaction, but when compaction does
       occur, it takes longer to complete.
 +
 .Default
 `3`
 
-  
+
 [[hbase.hstore.flusher.count]]
 *`hbase.hstore.flusher.count`*::
 +
 .Description
  The number of flush threads. With fewer threads, the MemStore flushes will be
       queued. With more threads, the flushes will be executed in parallel, increasing the load on
-      HDFS, and potentially causing more compactions. 
+      HDFS, and potentially causing more compactions.
 +
 .Default
 `2`
 
-  
+
 [[hbase.hstore.blockingStoreFiles]]
 *`hbase.hstore.blockingStoreFiles`*::
 +
@@ -1023,40 +1010,40 @@ A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occ
 .Default
 `10`
 
-  
+
 [[hbase.hstore.blockingWaitTime]]
 *`hbase.hstore.blockingWaitTime`*::
 +
 .Description
  The time for which a region will block updates after reaching the StoreFile limit
-    defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop 
+    defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop
     blocking updates even if a compaction has not been completed.
 +
 .Default
 `90000`
 
-  
+
 [[hbase.hstore.compaction.min]]
 *`hbase.hstore.compaction.min`*::
 +
 .Description
-The minimum number of StoreFiles which must be eligible for compaction before 
-      compaction can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with 
-      too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction 
+The minimum number of StoreFiles which must be eligible for compaction before
+      compaction can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with
+      too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction
       each time you have two StoreFiles in a Store, and this is probably not appropriate. If you
-      set this value too high, all the other values will need to be adjusted accordingly. For most 
+      set this value too high, all the other values will need to be adjusted accordingly. For most
       cases, the default value is appropriate. In previous versions of HBase, the parameter
       hbase.hstore.compaction.min was named hbase.hstore.compactionThreshold.
 +
 .Default
 `3`
 
-  
+
 [[hbase.hstore.compaction.max]]
 *`hbase.hstore.compaction.max`*::
 +
 .Description
-The maximum number of StoreFiles which will be selected for a single minor 
+The maximum number of StoreFiles which will be selected for a single minor
       compaction, regardless of the number of eligible StoreFiles. Effectively, the value of
       hbase.hstore.compaction.max controls the length of time it takes a single compaction to
       complete. Setting it larger means that more StoreFiles are included in a compaction. For most
@@ -1065,88 +1052,88 @@ The maximum number of StoreFiles which will be selected for a single minor
 .Default
 `10`
 
-  
+
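A write-heavy cluster might, for example, raise the minimum so that minor compactions wait for more flushes to accumulate; both values below are illustrative only.

[source,xml]
----
<property>
  <!-- Wait for more StoreFiles before a minor compaction may run. -->
  <name>hbase.hstore.compaction.min</name>
  <value>5</value>
</property>
<property>
  <!-- Upper bound on StoreFiles selected for one minor compaction. -->
  <name>hbase.hstore.compaction.max</name>
  <value>12</value>
</property>
----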
 [[hbase.hstore.compaction.min.size]]
 *`hbase.hstore.compaction.min.size`*::
 +
 .Description
-A StoreFile smaller than this size will always be eligible for minor compaction. 
-      HFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if 
-      they are eligible. Because this limit represents the "automatic include"limit for all 
-      StoreFiles smaller than this value, this value may need to be reduced in write-heavy 
-      environments where many StoreFiles in the 1-2 MB range are being flushed, because every 
+A StoreFile smaller than this size will always be eligible for minor compaction.
+      HFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if
+      they are eligible. Because this limit represents the "automatic include" limit for all
+      StoreFiles smaller than this value, this value may need to be reduced in write-heavy
+      environments where many StoreFiles in the 1-2 MB range are being flushed, because every
       StoreFile will be targeted for compaction and the resulting StoreFiles may still be under the
       minimum size and require further compaction. If this parameter is lowered, the ratio check is
-      triggered more quickly. This addressed some issues seen in earlier versions of HBase but 
-      changing this parameter is no longer necessary in most situations. Default: 128 MB expressed 
+      triggered more quickly. This addressed some issues seen in earlier versions of HBase but
+      changing this parameter is no longer necessary in most situations. Default: 128 MB expressed
       in bytes.
 +
 .Default
 `134217728`
 
-  
+
 [[hbase.hstore.compaction.max.size]]
 *`hbase.hstore.compaction.max.size`*::
 +
 .Description
-A StoreFile larger than this size will be excluded from compaction. The effect of 
-      raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get 
+A StoreFile larger than this size will be excluded from compaction. The effect of
+      raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get
       compacted often. If you feel that compaction is happening too often without much benefit, you
       can try raising this value. Default: the value of LONG.MAX_VALUE, expressed in bytes.
 +
 .Default
 `9223372036854775807`
 
-  
+
 [[hbase.hstore.compaction.ratio]]
 *`hbase.hstore.compaction.ratio`*::
 +
 .Description
-For minor compaction, this ratio is used to determine whether a given StoreFile 
+For minor compaction, this ratio is used to determine whether a given StoreFile
       which is larger than hbase.hstore.compaction.min.size is eligible for compaction. Its
       effect is to limit compaction of large StoreFiles. The value of hbase.hstore.compaction.ratio
-      is expressed as a floating-point decimal. A large ratio, such as 10, will produce a single 
-      giant StoreFile. Conversely, a low value, such as .25, will produce behavior similar to the 
+      is expressed as a floating-point decimal. A large ratio, such as 10, will produce a single
+      giant StoreFile. Conversely, a low value, such as .25, will produce behavior similar to the
       BigTable compaction algorithm, producing four StoreFiles. A moderate value of between 1.0 and
-      1.4 is recommended. When tuning this value, you are balancing write costs with read costs. 
-      Raising the value (to something like 1.4) will have more write costs, because you will 
-      compact larger StoreFiles. However, during reads, HBase will need to seek through fewer 
-      StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of 
-      Bloom filters. Otherwise, you can lower this value to something like 1.0 to reduce the 
-      background cost of writes, and use Bloom filters to control the number of StoreFiles touched 
+      1.4 is recommended. When tuning this value, you are balancing write costs with read costs.
+      Raising the value (to something like 1.4) will have more write costs, because you will
+      compact larger StoreFiles. However, during reads, HBase will need to seek through fewer
+      StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of
+      Bloom filters. Otherwise, you can lower this value to something like 1.0 to reduce the
+      background cost of writes, and use Bloom filters to control the number of StoreFiles touched
       during reads. For most cases, the default value is appropriate.
 +
 .Default
 `1.2F`
 
-  
+
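Following the guidance above, a read-oriented cluster that cannot rely on Bloom filters might pick the upper end of the recommended 1.0-1.4 range. Treat the value below as a starting point, not a prescription.

[source,xml]
----
<property>
  <!-- Higher ratio: more write cost, fewer StoreFiles to seek at read time. -->
  <name>hbase.hstore.compaction.ratio</name>
  <value>1.4</value>
</property>
----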
 [[hbase.hstore.compaction.ratio.offpeak]]
 *`hbase.hstore.compaction.ratio.offpeak`*::
 +
 .Description
 Allows you to set a different (by default, more aggressive) ratio for determining
-      whether larger StoreFiles are included in compactions during off-peak hours. Works in the 
-      same way as hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and 
+      whether larger StoreFiles are included in compactions during off-peak hours. Works in the
+      same way as hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and
       hbase.offpeak.end.hour are also enabled.
 +
 .Default
 `5.0F`
 
-  
+
 [[hbase.hstore.time.to.purge.deletes]]
 *`hbase.hstore.time.to.purge.deletes`*::
 +
 .Description
-The amount of time to delay purging of delete markers with future timestamps. If 
-      unset, or set to 0, all delete markers, including those with future timestamps, are purged 
-      during the next major compaction. Otherwise, a delete marker is kept until the major compaction 
+The amount of time to delay purging of delete markers with future timestamps. If
+      unset, or set to 0, all delete markers, including those with future timestamps, are purged
+      during the next major compaction. Otherwise, a delete marker is kept until the major compaction
       which occurs after the marker's timestamp plus the value of this setting, in milliseconds.
-    
+
 +
 .Default
 `0`
 
-  
+
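If delete markers with future timestamps must survive the next major compaction, a retention window can be configured; the one-day value below (in milliseconds) is only an example.

[source,xml]
----
<property>
  <!-- Keep future-timestamped delete markers ~1 day past their timestamp. -->
  <name>hbase.hstore.time.to.purge.deletes</name>
  <value>86400000</value>
</property>
----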
 [[hbase.offpeak.start.hour]]
 *`hbase.offpeak.start.hour`*::
 +
@@ -1157,7 +1144,7 @@ The start of off-peak hours, expressed as an integer between 0 and 23, inclusive
 .Default
 `-1`
 
-  
+
 [[hbase.offpeak.end.hour]]
 *`hbase.offpeak.end.hour`*::
 +
@@ -1168,7 +1155,7 @@ The end of off-peak hours, expressed as an integer between 0 and 23, inclusive.
 .Default
 `-1`
 
-  
+
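The two hour settings work as a pair; unless both are set, the off-peak compaction ratio above never applies. A hypothetical overnight window:

[source,xml]
----
<property>
  <!-- Hypothetical window: off-peak from 01:00 to 06:00. -->
  <name>hbase.offpeak.start.hour</name>
  <value>1</value>
</property>
<property>
  <name>hbase.offpeak.end.hour</name>
  <value>6</value>
</property>
----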
 [[hbase.regionserver.thread.compaction.throttle]]
 *`hbase.regionserver.thread.compaction.throttle`*::
 +
@@ -1184,19 +1171,19 @@ There are two different thread pools for compactions, one for large compactions
 .Default
 `2684354560`
 
-  
+
 [[hbase.hstore.compaction.kv.max]]
 *`hbase.hstore.compaction.kv.max`*::
 +
 .Description
 The maximum number of KeyValues to read and then write in a batch when flushing or
       compacting. Set this lower if you have big KeyValues and problems with Out Of Memory
-      Exceptions Set this higher if you have wide, small rows. 
+      Exceptions. Set this higher if you have wide, small rows.
 +
 .Default
 `10`
 
-  
+
 [[hbase.storescanner.parallel.seek.enable]]
 *`hbase.storescanner.parallel.seek.enable`*::
 +
@@ -1208,7 +1195,7 @@ The maximum number of KeyValues to read and then write in a batch when flushing
 .Default
 `false`
 
-  
+
 [[hbase.storescanner.parallel.seek.threads]]
 *`hbase.storescanner.parallel.seek.threads`*::
 +
@@ -1219,7 +1206,7 @@ The maximum number of KeyValues to read and then write in a batch when flushing
 .Default
 `10`
 
-  
+
 [[hfile.block.cache.size]]
 *`hfile.block.cache.size`*::
 +
@@ -1232,7 +1219,7 @@ Percentage of maximum heap (-Xmx setting) to allocate to block cache
 .Default
 `0.4`
 
-  
+
 [[hfile.block.index.cacheonwrite]]
 *`hfile.block.index.cacheonwrite`*::
 +
@@ -1243,7 +1230,7 @@ This allows to put non-root multi-level index blocks into the block
 .Default
 `false`
 
-  
+
 [[hfile.index.block.max.size]]
 *`hfile.index.block.max.size`*::
 +
@@ -1255,31 +1242,33 @@ When the size of a leaf-level, intermediate-level, or root-level
 .Default
 `131072`
 
-  
+
 [[hbase.bucketcache.ioengine]]
 *`hbase.bucketcache.ioengine`*::
 +
 .Description
-Where to store the contents of the bucketcache. One of: onheap, 
-      offheap, or file. If a file, set it to file:PATH_TO_FILE. See https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html for more information.
-    
+Where to store the contents of the bucketcache. One of: onheap,
+      offheap, or file. If a file, set it to file:PATH_TO_FILE.
+      See https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html
+      for more information.
+
 +
 .Default
 ``
 
-  
+
 [[hbase.bucketcache.combinedcache.enabled]]
 *`hbase.bucketcache.combinedcache.enabled`*::
 +
 .Description
-Whether or not the bucketcache is used in league with the LRU 
-      on-heap block cache. In this mode, indices and blooms are kept in the LRU 
+Whether or not the bucketcache is used in league with the LRU
+      on-heap block cache. In this mode, indices and blooms are kept in the LRU
       blockcache and the data blocks are kept in the bucketcache.
 +
 .Default
 `true`
 
-  
+
 [[hbase.bucketcache.size]]
 *`hbase.bucketcache.size`*::
 +
@@ -1290,19 +1279,19 @@ Used along with bucket cache, this is a float that EITHER represents a percentag
 .Default
 `0` when specified as a float
 
-  
+
 [[hbase.bucketcache.sizes]]
 *`hbase.bucketcache.sizes`*::
 +
 .Description
-A comma-separated list of sizes for buckets for the bucketcache 
-      if you use multiple sizes. Should be a list of block sizes in order from smallest 
+A comma-separated list of sizes for buckets for the bucketcache
+      if you use multiple sizes. Should be a list of block sizes in order from smallest
       to largest. The sizes you use will depend on your data access patterns.
 +
 .Default
 ``
 
-  
+
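Putting the bucketcache settings above together, an off-heap data-block cache alongside the on-heap LRU cache might be sketched as follows. The 8192 figure is hypothetical; see the hbase.bucketcache.size description above for how the value is interpreted.

[source,xml]
----
<property>
  <!-- Store cached data blocks off-heap. -->
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <!-- Hypothetical size; interpretation follows hbase.bucketcache.size above. -->
  <name>hbase.bucketcache.size</name>
  <value>8192</value>
</property>
----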
 [[hfile.format.version]]
 *`hfile.format.version`*::
 +
@@ -1310,13 +1299,13 @@ A comma-separated list of sizes for buckets for the bucketcache
 The HFile format version to use for new files.
       Version 3 adds support for tags in hfiles (See http://hbase.apache.org/book.html#hbase.tags).
       Distributed Log Replay requires that tags are enabled. Also see the configuration
-      'hbase.replication.rpc.codec'. 
-      
+      'hbase.replication.rpc.codec'.
+
 +
 .Default
 `3`
 
-  
+
 [[hfile.block.bloom.cacheonwrite]]
 *`hfile.block.bloom.cacheonwrite`*::
 +
@@ -1326,7 +1315,7 @@ Enables cache-on-write for inline blocks of a compound Bloom filter.
 .Default
 `false`
 
-  
+
 [[io.storefile.bloom.block.size]]
 *`io.storefile.bloom.block.size`*::
 +
@@ -1339,7 +1328,7 @@ The size in bytes of a single block ("chunk") of a compound Bloom
 .Default
 `131072`
 
-  
+
 [[hbase.rs.cacheblocksonwrite]]
 *`hbase.rs.cacheblocksonwrite`*::
 +
@@ -1350,7 +1339,7 @@ Whether an HFile block should be added to the block cache when the
 .Default
 `false`
 
-  
+
 [[hbase.rpc.timeout]]
 *`hbase.rpc.timeout`*::
 +
@@ -1362,7 +1351,7 @@ This is for the RPC layer to define how long HBase client applications
 .Default
 `60000`
 
-  
+
 [[hbase.rpc.shortoperation.timeout]]
 *`hbase.rpc.shortoperation.timeout`*::
 +
@@ -1375,7 +1364,7 @@ This is another version of "hbase.rpc.timeout". For those RPC operation
 .Default
 `10000`
 
-  
+
 [[hbase.ipc.client.tcpnodelay]]
 *`hbase.ipc.client.tcpnodelay`*::
 +
@@ -1386,7 +1375,7 @@ Set no delay on rpc socket connections.  See
 .Default
 `true`
 
-  
+
 [[hbase.master.keytab.file]]
 *`hbase.master.keytab.file`*::
 +
@@ -1397,7 +1386,7 @@ Full path to the kerberos keytab file to use for logging in
 .Default
 ``
 
-  
+
 [[hbase.master.kerberos.principal]]
 *`hbase.master.kerberos.principal`*::
 +
@@ -1411,7 +1400,7 @@ Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
 .Default
 ``
 
-  
+
 [[hbase.regionserver.keytab.file]]
 *`hbase.regionserver.keytab.file`*::
 +
@@ -1422,7 +1411,7 @@ Full path to the kerberos keytab file to use for logging in
 .Default
 ``
 
-  
+
 [[hbase.regionserver.kerberos.principal]]
 *`hbase.regionserver.kerberos.principal`*::
 +
@@ -1437,7 +1426,7 @@ Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
 .Default
 ``
 
-  
+
 [[hadoop.policy.file]]
 *`hadoop.policy.file`*::
 +
@@ -1449,7 +1438,7 @@ The policy configuration file used by RPC servers to make
 .Default
 `hbase-policy.xml`
 
-  
+
 [[hbase.superuser]]
 *`hbase.superuser`*::
 +
@@ -1461,7 +1450,7 @@ List of users or groups (comma-separated), who are allowed
 .Default
 ``
 
-  
+
 [[hbase.auth.key.update.interval]]
 *`hbase.auth.key.update.interval`*::
 +
@@ -1472,7 +1461,7 @@ The update interval for master key for authentication tokens
 .Default
 `86400000`
 
-  
+
 [[hbase.auth.token.max.lifetime]]
 *`hbase.auth.token.max.lifetime`*::
 +
@@ -1483,7 +1472,7 @@ The maximum lifetime in milliseconds after which an
 .Default
 `604800000`
 
-  
+
 [[hbase.ipc.client.fallback-to-simple-auth-allowed]]
 *`hbase.ipc.client.fallback-to-simple-auth-allowed`*::
 +
@@ -1498,7 +1487,7 @@ When a client is configured to attempt a secure connection, but attempts to
 .Default
 `false`
 
-  
+
 [[hbase.display.keys]]
 *`hbase.display.keys`*::
 +
@@ -1510,7 +1499,7 @@ When this is set to true the webUI and such will display all start/end keys
 .Default
 `true`
 
-  
+
 [[hbase.coprocessor.region.classes]]
 *`hbase.coprocessor.region.classes`*::
 +
@@ -1524,7 +1513,7 @@ A comma-separated list of Coprocessors that are loaded by
 .Default
 ``
 
-  
+
 [[hbase.rest.port]]
 *`hbase.rest.port`*::
 +
@@ -1534,7 +1523,7 @@ The port for the HBase REST server.
 .Default
 `8080`
 
-  
+
 [[hbase.rest.readonly]]
 *`hbase.rest.readonly`*::
 +
@@ -1546,7 +1535,7 @@ Defines the mode the REST server will be started in. Possible values are:
 .Default
 `false`
 
-  
+
 [[hbase.rest.threads.max]]
 *`hbase.rest.threads.max`*::
 +
@@ -1561,7 +1550,7 @@ The maximum number of threads of the REST server thread pool.
 .Default
 `100`
 
-  
+
 [[hbase.rest.threads.min]]
 *`hbase.rest.threads.min`*::
 +
@@ -1573,7 +1562,7 @@ The minimum number of threads of the REST server thread pool.
 .Default
 `2`
 
-  
+
 [[hbase.rest.support.proxyuser]]
 *`hbase.rest.support.proxyuser`*::
 +
@@ -1583,7 +1572,7 @@ Enables running the REST server to support proxy-user mode.
 .Default
 `false`
 
-  
+
 [[hbase.defaults.for.version.skip]]
 *`hbase.defaults.for.version.skip`*::
 +
@@ -1592,14 +1581,14 @@ Set to true to skip the 'hbase.defaults.for.version' check.
     Setting this to true can be useful in contexts other than
     the other side of a maven generation; i.e. running in an
     ide.  You'll want to set this boolean to true to avoid
-    seeing the RuntimException complaint: "hbase-default.xml file
+    seeing the RuntimeException complaint: "hbase-default.xml file
     seems to be for an old version of HBase (\${hbase.version}), this
     version is X.X.X-SNAPSHOT"
 +
 .Default
 `false`
 
-  
+
 [[hbase.coprocessor.master.classes]]
 *`hbase.coprocessor.master.classes`*::
 +
@@ -1614,7 +1603,7 @@ A comma-separated list of
 .Default
 ``
 
-  
+
 [[hbase.coprocessor.abortonerror]]
 *`hbase.coprocessor.abortonerror`*::
 +
@@ -1629,7 +1618,7 @@ Set to true to cause the hosting server (master or regionserver)
 .Default
 `true`
 
-  
+
 [[hbase.online.schema.update.enable]]
 *`hbase.online.schema.update.enable`*::
 +
@@ -1639,7 +1628,7 @@ Set true to enable online schema changes.
 .Default
 `true`
 
-  
+
 [[hbase.table.lock.enable]]
 *`hbase.table.lock.enable`*::
 +
@@ -1651,7 +1640,7 @@ Set to true to enable locking the table in zookeeper for schema change operation
 .Default
 `true`
 
-  
+
 [[hbase.table.max.rowsize]]
 *`hbase.table.max.rowsize`*::
 +
@@ -1660,12 +1649,12 @@ Set to true to enable locking the table in zookeeper for schema change operation
       Maximum size of single row in bytes (default is 1 Gb) for Get'ting
       or Scan'ning without in-row scan flag set. If row size exceeds this limit
       RowTooBigException is thrown to client.
-    
+
 +
 .Default
 `1073741824`
 
-  
+
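A cluster that legitimately stores very wide rows might raise this cap rather than handle RowTooBigException on the client; the roughly 2 GB value below is just an illustration.

[source,xml]
----
<property>
  <!-- Illustrative ~2 GB cap for Gets/Scans without the in-row scan flag. -->
  <name>hbase.table.max.rowsize</name>
  <value>2147483647</value>
</property>
----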
 [[hbase.thrift.minWorkerThreads]]
 *`hbase.thrift.minWorkerThreads`*::
 +
@@ -1676,7 +1665,7 @@ The "core size" of the thread pool. New threads are created on every
 .Default
 `16`
 
-  
+
 [[hbase.thrift.maxWorkerThreads]]
 *`hbase.thrift.maxWorkerThreads`*::
 +
@@ -1688,7 +1677,7 @@ The maximum size of the thread pool. When the pending request queue
 .Default
 `1000`
 
-  
+
 [[hbase.thrift.maxQueuedRequests]]
 *`hbase.thrift.maxQueuedRequests`*::
 +
@@ -1701,7 +1690,7 @@ The maximum number of pending Thrift connections waiting in the queue. If
 .Default
 `1000`
 
-  
+
 [[hbase.thrift.htablepool.size.max]]
 *`hbase.thrift.htablepool.size.max`*::
 +
@@ -1710,12 +1699,12 @@ The upper bound for the table pool used in the Thrift gateways server.
       Since this is per table name, we assume a single table and so with 1000 default
       worker threads max this is set to a matching number. For other workloads this number
       can be adjusted as needed.
-    
+
 +
 .Default
 `1000`
 
-  
+
 [[hbase.regionserver.thrift.framed]]
 *`hbase.regionserver.thrift.framed`*::
 +
@@ -1724,12 +1713,12 @@ Use Thrift TFramedTransport on the server side.
       This is the recommended transport for thrift servers and requires a similar setting
       on the client side. Changing this to false will select the default transport,
       vulnerable to DoS when malformed requests are issued due to THRIFT-601.
-    
+
 +
 .Default
 `false`
 
-  
+
 [[hbase.regionserver.thrift.framed.max_frame_size_in_mb]]
 *`hbase.regionserver.thrift.framed.max_frame_size_in_mb`*::
 +
@@ -1739,7 +1728,7 @@ Default frame size when using framed transport
 .Default
 `2`
 
-  
+
 [[hbase.regionserver.thrift.compact]]
 *`hbase.regionserver.thrift.compact`*::
 +
@@ -1749,7 +1738,7 @@ Use Thrift TCompactProtocol binary serialization protocol.
 .Default
 `false`
 
-  
+
 [[hbase.data.umask.enable]]
 *`hbase.data.umask.enable`*::
 +
@@ -1760,7 +1749,7 @@ Enable, if true, that file permissions should be assigned
 .Default
 `false`
 
-  
+
 [[hbase.data.umask]]
 *`hbase.data.umask`*::
 +
@@ -1771,7 +1760,7 @@ File permissions that should be used to write data
 .Default
 `000`
 
-  
+
 [[hbase.metrics.showTableName]]
 *`hbase.metrics.showTableName`*::
 +
@@ -1784,7 +1773,7 @@ Whether to include the prefix "tbl.tablename" in per-column family metrics.
 .Default
 `true`
 
-  
+
 [[hbase.metrics.exposeOperationTimes]]
 *`hbase.metrics.exposeOperationTimes`*::
 +
@@ -1796,7 +1785,7 @@ Whether to report metrics about time taken performing an
 .Default
 `true`
 
-  
+
 [[hbase.snapshot.enabled]]
 *`hbase.snapshot.enabled`*::
 +
@@ -1806,7 +1795,7 @@ Set to true to allow snapshots to be taken / restored / cloned.
 .Default
 `true`
 
-  
+
 [[hbase.snapshot.restore.take.failsafe.snapshot]]
 *`hbase.snapshot.restore.take.failsafe.snapshot`*::
 +
@@ -1818,7 +1807,7 @@ Set to true to take a snapshot before the restore operation.
 .Default
 `true`
 
-  
+
 [[hbase.snapshot.restore.failsafe.name]]
 *`hbase.snapshot.restore.failsafe.name`*::
 +
@@ -1830,7 +1819,7 @@ Name of the failsafe snapshot taken by the restore operation.
 .Default
 `hbase-failsafe-{snapshot.name}-{restore.timestamp}`
 
-  
+
 [[hbase.server.compactchecker.interval.multiplier]]
 *`hbase.server.compactchecker.interval.multiplier`*::
 +
@@ -1845,7 +1834,7 @@ The number that determines how often we scan to see if compaction is necessary.
 .Default
 `1000`
 
-  
+
 [[hbase.lease.recovery.timeout]]
 *`hbase.lease.recovery.timeout`*::
 +
@@ -1855,7 +1844,7 @@ How long we wait on dfs lease recovery in total before giving up.
 .Default
 `900000`
 
-  
+
 [[hbase.lease.recovery.dfs.timeout]]
 *`hbase.lease.recovery.dfs.timeout`*::
 +
@@ -1869,7 +1858,7 @@ How long between dfs recover lease invocations. Should be larger than the sum of
 .Default
 `64000`
 
-  
+
 [[hbase.column.max.version]]
 *`hbase.column.max.version`*::
 +
@@ -1880,7 +1869,7 @@ New column family descriptors will use this value as the default number of versi
 .Default
 `1`
 
-  
+
 [[hbase.dfs.client.read.shortcircuit.buffer.size]]
 *`hbase.dfs.client.read.shortcircuit.buffer.size`*::
 +
@@ -1894,12 +1883,12 @@ If the DFSClient configuration
     direct memory.  So, we set it down from the default.  Make
     it > the default hbase block size set in the HColumnDescriptor
     which is usually 64k.
-    
+
 +
 .Default
 `131072`
 
-  
+
 [[hbase.regionserver.checksum.verify]]
 *`hbase.regionserver.checksum.verify`*::
 +
@@ -1914,13 +1903,13 @@ If the DFSClient configuration
         fails, we will switch back to using HDFS checksums (so do not disable HDFS
         checksums!  And besides this feature applies to hfiles only, not to WALs).
         If this parameter is set to false, then hbase will not verify any checksums,
-        instead it will depend on checksum verification being done in the HDFS client.  
-    
+        instead it will depend on checksum verification being done in the HDFS client.
+
 +
 .Default
 `true`
 
-  
+
 [[hbase.hstore.bytes.per.checksum]]
 *`hbase.hstore.bytes.per.checksum`*::
 +
@@ -1928,12 +1917,12 @@ If the DFSClient configuration
 
         Number of bytes in a newly created checksum chunk for HBase-level
         checksums in hfile blocks.
-    
+
 +
 .Default
 `16384`
 
-  
+
 [[hbase.hstore.checksum.algorithm]]
 *`hbase.hstore.checksum.algorithm`*::
 +
@@ -1941,12 +1930,12 @@ If the DFSClient configuration
 
       Name of an algorithm that is used to compute checksums. Possible values
       are NULL, CRC32, CRC32C.
-    
+
 +
 .Default
 `CRC32`
 
-  
+
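For example, hardware with CRC32C support could switch the HBase-level checksum algorithm; this is a sketch, and the values listed above (NULL, CRC32, CRC32C) are the only legal choices.

[source,xml]
----
<property>
  <name>hbase.hstore.checksum.algorithm</name>
  <value>CRC32C</value>
</property>
----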
 [[hbase.status.published]]
 *`hbase.status.published`*::
 +
@@ -1956,60 +1945,60 @@ If the DFSClient configuration
       When a region server dies and its recovery starts, the master will push this information
       to the client application, to let them cut the connection immediately instead of waiting
       for a timeout.
-    
+
 +
 .Default
 `false`
 
-  
+
 [[hbase.status.publisher.class]]
 *`hbase.status.publisher.class`*::
 +
 .Description
 
       Implementation of the status publication with a multicast message.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher`
 
-  
+
 [[hbase.status.listener.class]]
 *`hbase.status.listener.class`*::
 +
 .Description
 
       Implementation of the status listener with a multicast message.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener`
 
-  
+
 [[hbase.status.multicast.address.ip]]
 *`hbase.status.multicast.address.ip`*::
 +
 .Description
 
       Multicast address to use for the status publication by multicast.
-    
+
 +
 .Default
 `226.1.1.3`
 
-  
+
 [[hbase.status.multicast.address.port]]
 *`hbase.status.multicast.address.port`*::
 +
 .Description
 
       Multicast port to use for the status publication by multicast.
-    
+
 +
 .Default
 `16100`
 
-  
+
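Enabling multicast status publication is mostly a matter of flipping this flag, since the publisher/listener classes and the multicast address and port above already have defaults; the snippet is a minimal sketch.

[source,xml]
----
<property>
  <!-- Let the master push dead-server notifications to clients. -->
  <name>hbase.status.published</name>
  <value>true</value>
</property>
----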
 [[hbase.dynamic.jars.dir]]
 *`hbase.dynamic.jars.dir`*::
 +
@@ -2019,12 +2008,12 @@ If the DFSClient configuration
       dynamically by the region server without the need to restart. However,
       an already loaded filter/co-processor class would not be un-loaded. See
       HBASE-1936 for more details.
-    
+
 +
 .Default
 `${hbase.rootdir}/lib`
 
-  
+
 [[hbase.security.authentication]]
 *`hbase.security.authentication`*::
 +
@@ -2032,24 +2021,24 @@ If the DFSClient configuration
 
       Controls whether or not secure authentication is enabled for HBase.
       Possible values are 'simple' (no authentication), and 'kerberos'.
-    
+
 +
 .Default
 `simple`
 
-  
+
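Switching from the default 'simple' mode to Kerberos, for example, is expressed as below; the keytab and principal properties described earlier must of course also be populated.

[source,xml]
----
<property>
  <!-- 'simple' (no authentication) or 'kerberos'. -->
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
----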
 [[hbase.rest.filter.classes]]
 *`hbase.rest.filter.classes`*::
 +
 .Description
 
       Servlet filters for REST service.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.rest.filter.GzipFilter`
 
-  
+
 [[hbase.master.loadbalancer.class]]
 *`hbase.master.loadbalancer.class`*::
 +
@@ -2060,12 +2049,12 @@ If the DFSClient configuration
       http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.html
       It replaces the DefaultLoadBalancer as the default (since renamed
       as the SimpleLoadBalancer).
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer`
 
-  
+
 [[hbase.security.exec.permission.checks]]
 *`hbase.security.exec.permission.checks`*::
 +
@@ -2081,28 +2070,28 @@ If the DFSClient configuration
       section of the HBase online manual. For more information on granting or
       revoking permissions using the AccessController, see the security
       section of the HBase online manual.
-    
+
 +
 .Default
 `false`
 
-  
+
 [[hbase.procedure.regionserver.classes]]
 *`hbase.procedure.regionserver.classes`*::
 +
 .Description
-A comma-separated list of 
-    org.apache.hadoop.hbase.procedure.RegionServerProcedureManager procedure managers that are 
-    loaded by default on the active HRegionServer process. The lifecycle methods (init/start/stop) 
-    will be called by the active HRegionServer process to perform the specific globally barriered 
-    procedure. After implementing your own RegionServerProcedureManager, just put it in 
+A comma-separated list of
+    org.apache.hadoop.hbase.procedure.RegionServerProcedureManager procedure managers that are
+    loaded by default on the active HRegionServer process. The lifecycle methods (init/start/stop)
+    will be called by the active HRegionServer process to perform the specific globally barriered
+    procedure. After implementing your own RegionServerProcedureManager, just put it in
     HBase's classpath and add the fully qualified class name here.
-    
+
 +
 .Default
 ``
 
-  
+
 [[hbase.procedure.master.classes]]
 *`hbase.procedure.master.classes`*::
 +
@@ -2117,7 +2106,7 @@ A comma-separated list of
 .Default
 ``
 
-  
+
 [[hbase.coordinated.state.manager.class]]
 *`hbase.coordinated.state.manager.class`*::
 +
@@ -2127,7 +2116,7 @@ Fully qualified name of class implementing coordinated state manager.
 .Default
 `org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager`
 
-  
+
 [[hbase.regionserver.storefile.refresh.period]]
 *`hbase.regionserver.storefile.refresh.period`*::
 +
@@ -2140,12 +2129,12 @@ Fully qualified name of class implementing coordinated state manager.
       extra Namenode pressure. If the files cannot be refreshed for longer than HFile TTL
       (hbase.master.hfilecleaner.ttl) the requests are rejected. Configuring HFile TTL to a larger
       value is also recommended with this setting.
-    
+
 +
 .Default
 `0`
 
-  
+
 [[hbase.region.replica.replication.enabled]]
 *`hbase.region.replica.replication.enabled`*::
 +
@@ -2153,36 +2142,36 @@ Fully qualified name of class implementing coordinated state manager.
 
       Whether asynchronous WAL replication to the secondary region replicas is enabled or not.
       If this is enabled, a replication peer named "region_replica_replication" will be created
-      which will tail the logs and replicate the mutatations to region replicas for tables that
+      which will tail the logs and replicate the mutations to region replicas for tables that
       have region replication > 1. If this is enabled once, disabling this replication also
       requires disabling the replication peer using shell or ReplicationAdmin java class.
-      Replication to secondary region replicas works over standard inter-cluster replication. 
-      So replication, if disabled explicitly, also has to be enabled by setting "hbase.replication" 
+      Replication to secondary region replicas works over standard inter-cluster replication.
+      So replication, if disabled explicitly, also has to be enabled by setting "hbase.replication"
       to true for this feature to work.
-    
+
 +
 .Default
 `false`
 
-  
+
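Per the description above, turning on asynchronous WAL replication to secondary replicas also requires standard replication to be enabled; a minimal sketch:

[source,xml]
----
<property>
  <name>hbase.region.replica.replication.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- Standard inter-cluster replication must also be on. -->
  <name>hbase.replication</name>
  <value>true</value>
</property>
----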
 [[hbase.http.filter.initializers]]
 *`hbase.http.filter.initializers`*::
 +
 .Description
 
-      A comma separated list of class names. Each class in the list must extend 
-      org.apache.hadoop.hbase.http.FilterInitializer. The corresponding Filter will 
-      be initialized. Then, the Filter will be applied to all user facing jsp 
-      and servlet web pages. 
+      A comma separated list of class names. Each class in the list must extend
+      org.apache.hadoop.hbase.http.FilterInitializer. The corresponding Filter will
+      be initialized. Then, the Filter will be applied to all user facing jsp
+      and servlet web pages.
       The ordering of the list defines the ordering of the filters.
-      The default StaticUserWebFilter add a user principal as defined by the 
+      The default StaticUserWebFilter adds a user principal as defined by the
       hbase.http.staticuser.user property.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.http.lib.StaticUserWebFilter`
 
-  
+
 [[hbase.security.visibility.mutations.checkauths]]
 *`hbase.security.visibility.mutations.checkauths`*::
 +
@@ -2190,41 +2179,41 @@ Fully qualified name of class implementing coordinated state manager.
 
       This property if enabled, will check whether the labels in the visibility expression are associated
       with the user issuing the mutation
-    
+
 +
 .Default
 `false`
 
-  
+
 [[hbase.http.max.threads]]
 *`hbase.http.max.threads`*::
 +
 .Description
 
-      The maximum number of threads that the HTTP Server will create in its 
+      The maximum number of threads that the HTTP Server will create in its
       ThreadPool.
-    
+
 +
 .Default
 `10`
 
-  
+
 [[hbase.replication.rpc.codec]]
 *`hbase.replication.rpc.codec`*::
 +
 .Description
 
   		The codec that is to be used when replication is enabled so that
-  		the tags are also replicated. This is used along with HFileV3 which 
+  		the tags are also replicated. This is used along with HFileV3 which
   		supports tags in them.  If tags are not used or if the hfile version used
   		is HFileV2 then KeyValueCodec can be used as the replication codec. Note that
   		using KeyValueCodecWithTags for replication when there are no tags causes no harm.
-  	
+
 +
 .Default
 `org.apache.hadoop.hbase.codec.KeyValueCodecWithTags`
 
-  
+
 [[hbase.http.staticuser.user]]
 *`hbase.http.staticuser.user`*::
 +
@@ -2233,12 +2222,12 @@ Fully qualified name of class implementing coordinated state manager.
       The user name to filter as, on static web filters
       while rendering content. An example use is the HDFS
       web UI (user to be used for browsing files).
-    
+
 +
 .Default
 `dr.stack`
 
-  
+
 [[hbase.regionserver.handler.abort.on.error.percent]]
 *`hbase.regionserver.handler.abort.on.error.percent`*::
 +
@@ -2251,4 +2240,3 @@ The percent of region server RPC threads failed to abort RS.
 .Default
 `0.5`
 
-  
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/hbase_history.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase_history.adoc b/src/main/asciidoc/_chapters/hbase_history.adoc
index de4aff5..7308b90 100644
--- a/src/main/asciidoc/_chapters/hbase_history.adoc
+++ b/src/main/asciidoc/_chapters/hbase_history.adoc
@@ -29,9 +29,9 @@
 :icons: font
 :experimental:
 
-* 2006:  link:http://research.google.com/archive/bigtable.html[BigTable] paper published by Google. 
-* 2006 (end of year):  HBase development starts. 
-* 2008:  HBase becomes Hadoop sub-project. 
-* 2010:  HBase becomes Apache top-level project. 
+* 2006:  link:http://research.google.com/archive/bigtable.html[BigTable] paper published by Google.
+* 2006 (end of year):  HBase development starts.
+* 2008:  HBase becomes Hadoop sub-project.
+* 2010:  HBase becomes Apache top-level project.
 
 :numbered:


[05/11] hbase git commit: HBASE-13908 update site docs for 1.2 RC.

Posted by bu...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index c3fc1ce..d633569 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -40,14 +40,14 @@ See link:http://search-hadoop.com/m/DHED43re96[What label
 
 Before you get started submitting code to HBase, please refer to <<developing,developing>>.
 
-As Apache HBase is an Apache Software Foundation project, see <<asf,asf>>            for more information about how the ASF functions. 
+As Apache HBase is an Apache Software Foundation project, see <<asf,asf>>            for more information about how the ASF functions.
 
 [[mailing.list]]
 === Mailing Lists
 
 Sign up for the dev-list and the user-list.
 See the link:http://hbase.apache.org/mail-lists.html[mailing lists] page.
-Posing questions - and helping to answer other people's questions - is encouraged! There are varying levels of experience on both lists so patience and politeness are encouraged (and please stay on topic.) 
+Posing questions - and helping to answer other people's questions - is encouraged! There are varying levels of experience on both lists so patience and politeness are encouraged (and please stay on topic.)
 
 [[irc]]
 === Internet Relay Chat (IRC)
@@ -58,7 +58,7 @@ FreeNode offers a web-based client, but most people prefer a native client, and
 === Jira
 
 Check for existing issues in link:https://issues.apache.org/jira/browse/HBASE[Jira].
-If it's either a new feature request, enhancement, or a bug, file a ticket. 
+If it's either a new feature request, enhancement, or a bug, file a ticket.
 
 To check for existing issues which you can tackle as a beginner, search for link:https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)[issues in JIRA tagged with the label 'beginner'].
 
@@ -89,8 +89,8 @@ GIT is our repository of record for all but the Apache HBase website.
 We used to be on SVN.
 We migrated.
 See link:https://issues.apache.org/jira/browse/INFRA-7768[Migrate Apache HBase SVN Repos to Git].
-Updating hbase.apache.org still requires use of SVN (See <<hbase.org,hbase.org>>). See link:http://hbase.apache.org/source-repository.html[Source Code
-                Management] page for contributor and committer links or seach for HBase on the link:http://git.apache.org/[Apache Git] page.
+See link:http://hbase.apache.org/source-repository.html[Source Code
+                Management] page for contributor and committer links or search for HBase on the link:http://git.apache.org/[Apache Git] page.
 
 == IDEs
 
@@ -133,30 +133,30 @@ If you cloned the project via git, download and install the Git plugin (EGit). A
 ==== HBase Project Setup in Eclipse using `m2eclipse`
 
 The easiest way is to use the +m2eclipse+ plugin for Eclipse.
-Eclipse Indigo or newer includes +m2eclipse+, or you can download it from link:http://www.eclipse.org/m2e//. It provides Maven integration for Eclipse, and even lets you use the direct Maven commands from within Eclipse to compile and test your project.
+Eclipse Indigo or newer includes +m2eclipse+, or you can download it from http://www.eclipse.org/m2e/. It provides Maven integration for Eclipse, and even lets you use the direct Maven commands from within Eclipse to compile and test your project.
 
 To import the project, click  and select the HBase root directory. `m2eclipse`                    locates all the hbase modules for you.
 
-If you install +m2eclipse+ and import HBase in your workspace, do the following to fix your eclipse Build Path. 
+If you install +m2eclipse+ and import HBase in your workspace, do the following to fix your eclipse Build Path.
 
 . Remove _target_ folder
 . Add _target/generated-jamon_ and _target/generated-sources/java_ folders.
 . Remove from your Build Path the exclusions on the _src/main/resources_ and _src/test/resources_ to avoid error message in the console, such as the following:
 +
 ----
-Failed to execute goal 
+Failed to execute goal
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project hbase:
-'An Ant BuildException has occured: Replace: source file .../target/classes/hbase-default.xml 
+'An Ant BuildException has occurred: Replace: source file .../target/classes/hbase-default.xml
 doesn't exist
 ----
 +
-This will also reduce the eclipse build cycles and make your life easier when developing. 
+This will also reduce the eclipse build cycles and make your life easier when developing.
 
 
 [[eclipse.commandline]]
 ==== HBase Project Setup in Eclipse Using the Command Line
 
-Instead of using `m2eclipse`, you can generate the Eclipse files from the command line. 
+Instead of using `m2eclipse`, you can generate the Eclipse files from the command line.
 
 . First, run the following command, which builds HBase.
   You only need to do this once.
@@ -181,7 +181,7 @@ mvn eclipse:eclipse
 The `$M2_REPO` classpath variable needs to be set up for the project.
 This needs to be set to your local Maven repository, which is usually _~/.m2/repository_
 
-If this classpath variable is not configured, you will see compile errors in Eclipse like this: 
+If this classpath variable is not configured, you will see compile errors in Eclipse like this:
 
 ----
 
@@ -209,14 +209,14 @@ Access restriction: The method getLong(Object, long) from the type Unsafe is not
 [[eclipse.more]]
 ==== Eclipse - More Information
 
-For additional information on setting up Eclipse for HBase development on Windows, see link:http://michaelmorello.blogspot.com/2011/09/hbase-subversion-eclipse-windows.html[Michael Morello's blog] on the topic. 
+For additional information on setting up Eclipse for HBase development on Windows, see link:http://michaelmorello.blogspot.com/2011/09/hbase-subversion-eclipse-windows.html[Michael Morello's blog] on the topic.
 
 === IntelliJ IDEA
 
-You can set up IntelliJ IDEA for similar functinoality as Eclipse.
+You can set up IntelliJ IDEA for similar functionality as Eclipse.
 Follow these steps.
 
-. Select 
+. Select
 . You do not need to select a profile.
   Be sure [label]#Maven project
   required# is selected, and click btn:[Next].
@@ -227,7 +227,7 @@ Using the Eclipse Code Formatter plugin for IntelliJ IDEA, you can import the HB
 
 === Other IDEs
 
-It would be userful to mirror the <<eclipse,eclipse>> set-up instructions for other IDEs.
+It would be useful to mirror the <<eclipse,eclipse>> set-up instructions for other IDEs.
 If you would like to assist, please have a look at link:https://issues.apache.org/jira/browse/HBASE-11704[HBASE-11704].
 
 [[build]]
@@ -237,20 +237,20 @@ If you would like to assist, please have a look at link:https://issues.apache.or
 === Basic Compile
 
 HBase is compiled using Maven.
-You must use Maven 3.x.
+You must use at least Maven 3.0.4.
 To check your Maven version, run the command +mvn -version+.
 
 .JDK Version Requirements
 [NOTE]
 ====
 Starting with HBase 1.0 you must use Java 7 or later to build from source code.
-See <<java,java>> for more complete information about supported JDK versions. 
+See <<java,java>> for more complete information about supported JDK versions.
 ====
 
 [[maven.build.commands]]
 ==== Maven Build Commands
 
-All commands are executed from the local HBase project directory. 
+All commands are executed from the local HBase project directory.
 
 ===== Package
 
@@ -269,7 +269,7 @@ mvn clean package -DskipTests
 ----
 
 With Eclipse set up as explained above in <<eclipse,eclipse>>, you can also use the menu:Build[] command in Eclipse.
-To create the full installable HBase package takes a little bit more work, so read on. 
+To create the full installable HBase package takes a little bit more work, so read on.
 
 [[maven.build.commands.compile]]
 ===== Compile
@@ -331,13 +331,13 @@ Tests may not all pass so you may need to pass `-DskipTests` unless you are incl
 ====
 You will see ERRORs like the above title if you pass the _default_ profile; e.g.
 if you pass +hadoop.profile=1.1+ when building 0.96 or +hadoop.profile=2.0+ when building hadoop 0.98; just drop the hadoop.profile stipulation in this case to get your build to run again.
-This seems to be a maven pecularity that is probably fixable but we've not spent the time trying to figure it.
+This seems to be a maven peculiarity that is probably fixable but we've not spent the time trying to figure it.
 ====
 
 Similarly, for 3.0, you would just replace the profile value.
-Note that Hadoop-3.0.0-SNAPSHOT does not currently have a deployed maven artificat - you will need to build and install your own in your local maven repository if you want to run against this profile. 
+Note that Hadoop-3.0.0-SNAPSHOT does not currently have a deployed maven artifact - you will need to build and install your own in your local maven repository if you want to run against this profile.
 
-In earilier versions of Apache HBase, you can build against older versions of Apache Hadoop, notably, Hadoop 0.22.x and 0.23.x.
+In earlier versions of Apache HBase, you can build against older versions of Apache Hadoop, notably, Hadoop 0.22.x and 0.23.x.
 If you are running, for example HBase-0.94 and wanted to build against Hadoop 0.23.x, you would run with:
 
 [source,bourne]
@@ -367,7 +367,7 @@ You may also want to define `protoc.path` for the protoc binary, using the follo
 mvn compile -Pcompile-protobuf -Dprotoc.path=/opt/local/bin/protoc
 ----
 
-Read the _hbase-protocol/README.txt_ for more details. 
+Read the _hbase-protocol/README.txt_ for more details.
 
 [[build.thrift]]
 ==== Build Thrift
@@ -415,9 +415,9 @@ mvn -DskipTests package assembly:single deploy
 ==== Build Gotchas
 
 If you see `Unable to find resource 'VM_global_library.vm'`, ignore it.
-Its not an error.
+It's not an error.
 It is link:http://jira.codehaus.org/browse/MSITE-286[officially
-                        ugly] though. 
+                        ugly] though.
 
 [[releasing]]
 == Releasing Apache HBase
@@ -434,7 +434,7 @@ See <<java,java>> for Java requirements per HBase release.
 HBase 0.96.x will run on Hadoop 1.x or Hadoop 2.x.
 HBase 0.98 still runs on both, but HBase 0.98 deprecates use of Hadoop 1.
 HBase 1.x will _not_                run on Hadoop 1.
-In the following procedures, we make a distinction between HBase 1.x builds and the awkward process involved building HBase 0.96/0.98 for either Hadoop 1 or Hadoop 2 targets. 
+In the following procedures, we make a distinction between HBase 1.x builds and the awkward process involved building HBase 0.96/0.98 for either Hadoop 1 or Hadoop 2 targets.
 
 You must choose which Hadoop to build against.
 It is not possible to build a single HBase binary that runs against both Hadoop 1 and Hadoop 2.
@@ -450,6 +450,7 @@ You then reference these generated poms when you build.
 For now, just be aware of the difference between HBase 1.x builds and those of HBase 0.96-0.98.
 This difference is important to the build instructions.
 
+[[maven.settings.xml]]
 .Example _~/.m2/settings.xml_ File
 ====
 Publishing to maven requires you sign the artifacts you want to upload.
@@ -500,22 +501,22 @@ For the build to sign them for you, you a properly configured _settings.xml_ in
 
 NOTE: These instructions are for building HBase 1.0.x.
 For building earlier versions, the process is different.
-See this section under the respective release documentation folders. 
+See this section under the respective release documentation folders.
 
 .Point Releases
-If you are making a point release (for example to quickly address a critical incompatability or security problem) off of a release branch instead of a development branch, the tagging instructions are slightly different.
-I'll prefix those special steps with _Point Release Only_. 
+If you are making a point release (for example to quickly address a critical incompatibility or security problem) off of a release branch instead of a development branch, the tagging instructions are slightly different.
+I'll prefix those special steps with _Point Release Only_.
 
 .Before You Begin
 Before you make a release candidate, do a practice run by deploying a snapshot.
 Before you start, check to be sure recent builds have been passing for the branch from where you are going to take your release.
-You should also have tried recent branch tips out on a cluster under load, perhaps by running the `hbase-it` integration test suite for a few hours to 'burn in' the near-candidate bits. 
+You should also have tried recent branch tips out on a cluster under load, perhaps by running the `hbase-it` integration test suite for a few hours to 'burn in' the near-candidate bits.
 
 .Point Release Only
 [NOTE]
 ====
 At this point you should tag the previous release branch (ex: 0.96.1) with the new point release tag (e.g.
-0.96.1.1 tag). Any commits with changes for the point release should be appled to the new tag. 
+0.96.1.1 tag). Any commits with changes for the point release should be applied to the new tag.
 ====
 
 The Hadoop link:http://wiki.apache.org/hadoop/HowToRelease[How To
@@ -562,8 +563,9 @@ Checkin the _CHANGES.txt_ and any version changes.
 
 . Update the documentation.
 +
-Update the documentation under _src/main/docbkx_.
-This usually involves copying the latest from trunk and making version-particular adjustments to suit this release candidate version. 
+Update the documentation under _src/main/asciidoc_.
+This usually involves copying the latest from master and making version-particular
+adjustments to suit this release candidate version.
 
 . Build the source tarball.
 +
@@ -582,8 +584,8 @@ $ mvn clean install -DskipTests assembly:single -Dassembly.file=hbase-assembly/s
 Extract the tarball and make sure it looks good.
 A good test for the src tarball being 'complete' is to see if you can build new tarballs from this source bundle.
 If the source tarball is good, save it off to a _version directory_, a directory somewhere where you are collecting all of the tarballs you will publish as part of the release candidate.
-For example if you were building a hbase-0.96.0 release candidate, you might call the directory _hbase-0.96.0RC0_.
-Later you will publish this directory as our release candidate up on http://people.apache.org/~YOU. 
+For example if you were building an hbase-0.96.0 release candidate, you might call the directory _hbase-0.96.0RC0_.
+Later you will publish this directory as our release candidate up on pass:[http://people.apache.org/~YOU].
 
 . Build the binary tarball.
 +
@@ -609,12 +611,13 @@ $ mvn install -DskipTests site assembly:single -Prelease
 ----
 
 +
-Otherwise, the build complains that hbase modules are not in the maven repository when you try to do it at once, especially on fresh repository.
+Otherwise, the build complains that hbase modules are not in the maven repository
+when you try to do it all at once, especially on a fresh repository.
 It seems that you need the install goal in both steps.
 +
 Extract the generated tarball and check it out.
 Look at the documentation, see if it runs, etc.
-If good, copy the tarball to the above mentioned _version directory_. 
+If good, copy the tarball to the above mentioned _version directory_.
 
 . Create a new tag.
 +
@@ -631,16 +634,16 @@ Release needs to be tagged for the next step.
 . Deploy to the Maven Repository.
 +
 Next, deploy HBase to the Apache Maven repository, using the `apache-release` profile instead of the `release` profile when running the `mvn deploy` command.
-This profile invokes the Apache pom referenced by our pom files, and also signs your artifacts published to Maven, as long as the _settings.xml_ is configured correctly, as described in <<mvn.settings.file,mvn.settings.file>>.
+This profile invokes the Apache pom referenced by our pom files, and also signs your artifacts published to Maven, as long as the _settings.xml_ is configured correctly, as described in <<maven.settings.xml>>.
 +
 [source,bourne]
 ----
 
-$ mvn deploy -DskipTests -Papache-release
+$ mvn deploy -DskipTests -Papache-release -Prelease
 ----
 +
 This command copies all artifacts up to a temporary staging Apache mvn repository in an 'open' state.
-More work needs to be done on these maven artifacts to make them generally available. 
+More work needs to be done on these maven artifacts to make them generally available.
 +
 We do not release HBase tarball to the Apache Maven repository. To avoid deploying the tarball, do not include the `assembly:single` goal in your `mvn deploy` command. Check the deployed artifacts as described in the next section.
 
@@ -648,16 +651,17 @@ We do not release HBase tarball to the Apache Maven repository. To avoid deployi
 +
 The artifacts are in the maven repository in the staging area in the 'open' state.
 While in this 'open' state you can check out what you've published to make sure all is good.
-To do this, login at link:http://repository.apache.org[repository.apache.org]                        using your Apache ID.
-Find your artifacts in the staging repository.
-Browse the content.
-Make sure all artifacts made it up and that the poms look generally good.
-If it checks out, 'close' the repo.
-This will make the artifacts publically available.
-You will receive an email with the URL to give out for the temporary staging repository for others to use trying out this new release candidate.
-Include it in the email that announces the release candidate.
-Folks will need to add this repo URL to their local poms or to their local _settings.xml_ file to pull the published release candidate artifacts.
-If the published artifacts are incomplete or have problems, just delete the 'open' staged artifacts.
+To do this, log in to Apache's Nexus at link:http://repository.apache.org[repository.apache.org] using your Apache ID.
+Find your artifacts in the staging repository. Click on 'Staging Repositories', look for a new one ending in "hbase" with a status of 'Open', and select it.
+Use the tree view to expand the list of repository contents and inspect whether the artifacts you expect are present. Check the POMs.
+As long as the staging repo is open you can re-upload if something is missing or built incorrectly.
++
+If something is seriously wrong and you would like to back out the upload, you can use the 'Drop' button to drop and delete the staging repository.
++
+If it checks out, close the repo using the 'Close' button. The repository must be closed before a public URL to it becomes available. It may take a few minutes for the repository to close. Once complete you'll see a public URL to the repository in the Nexus UI. You may also receive an email with the URL. Provide the URL to the temporary staging repository in the email that announces the release candidate.
+(Folks will need to add this repo URL to their local poms or to their local _settings.xml_ file to pull the published release candidate artifacts.)
++
+When the release vote concludes successfully, return here and click the 'Release' button to release the artifacts to central. The release process will automatically drop and delete the staging repository.
 +
 .hbase-downstreamer
 [NOTE]
@@ -665,7 +669,7 @@ If the published artifacts are incomplete or have problems, just delete the 'ope
 See the link:https://github.com/saintstack/hbase-downstreamer[hbase-downstreamer] test for a simple example of a project that is downstream of HBase and depends on it.
 Check it out and run its simple test to make sure maven artifacts are properly deployed to the maven repository.
 Be sure to edit the pom to point to the proper staging repository.
-Make sure you are pulling from the repository when tests run and that you are not getting from your local repository, by either passing the `-U` flag or deleting your local repo content and check maven is pulling from remote out of the staging repository. 
+Make sure you are pulling from the staging repository when the tests run, rather than from your local repository: either pass the `-U` flag or delete your local repo content, then check that maven is pulling from the remote staging repository.
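 
 For example (a rough sketch; the artifact path under your local repository may differ):
 
 [source,bourne]
 ----
 # Force maven to re-resolve artifacts from the configured (staging) repository
 $ mvn -U clean test
 # Or, more drastically, purge the locally cached HBase artifacts first
 $ rm -rf ~/.m2/repository/org/apache/hbase
 ----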
 ====
 +
 See link:http://www.apache.org/dev/publishing-maven-artifacts.html[Publishing Maven Artifacts] for some pointers on this maven staging process.
@@ -673,7 +677,7 @@ See link:http://www.apache.org/dev/publishing-maven-artifacts.html[Publishing Ma
 NOTE: We no longer publish using the maven release plugin.
 Instead we do +mvn deploy+.
 It seems to give us a backdoor to maven release publishing.
-If there is no _-SNAPSHOT_                            on the version string, then we are 'deployed' to the apache maven repository staging directory from which we can publish URLs for candidates and later, if they pass, publish as release (if a _-SNAPSHOT_ on the version string, deploy will put the artifacts up into apache snapshot repos). 
+If there is no _-SNAPSHOT_ on the version string, then we are 'deployed' to the apache maven repository staging directory from which we can publish URLs for candidates and later, if they pass, publish as release (if there is a _-SNAPSHOT_ on the version string, deploy will put the artifacts up into apache snapshot repos).
 +
 If the HBase version ends in `-SNAPSHOT`, the artifacts go elsewhere.
 They are put into the Apache snapshots repository directly and are immediately available.
@@ -687,7 +691,7 @@ These are publicly accessible in a temporary staging repository whose URL you sh
 The above mentioned script, _make_rc.sh_ does all of the above for you minus the check of the artifacts built, the closing of the staging repository up in maven, and the tagging of the release.
 If you run the script, do your checks at this stage verifying the src and bin tarballs and checking what is up in staging using hbase-downstreamer project.
 Tag before you start the build.
-You can always delete it if the build goes haywire. 
+You can always delete it if the build goes haywire.
 
 . Sign, upload, and 'stage' your version directory to link:http://people.apache.org[people.apache.org] (TODO:
   There is a new location to stage releases using svnpubsub.  See
@@ -695,7 +699,7 @@ You can always delete it if the build goes haywire.
 +
 If all checks out, next put the _version directory_ up on link:http://people.apache.org[people.apache.org].
 You will need to sign and fingerprint them before you push them up.
-In the _version directory_ run the following commands: 
+In the _version directory_ run the following commands:
 +
 [source,bourne]
 ----
@@ -708,13 +712,13 @@ $ rsync -av 0.96.0RC0 people.apache.org:public_html
 ----
 +
 Make sure the link:http://people.apache.org[people.apache.org] directory is showing and that the mvn repo URLs are good.
-Announce the release candidate on the mailing list and call a vote. 
+Announce the release candidate on the mailing list and call a vote.
 
 
 [[maven.snapshot]]
 === Publishing a SNAPSHOT to maven
 
-Make sure your _settings.xml_ is set up properly, as in <<mvn.settings.file,mvn.settings.file>>.
+Make sure your _settings.xml_ is set up properly (see <<maven.settings.xml>>).
 Make sure the hbase version includes `-SNAPSHOT` as a suffix.
 Following is an example of publishing SNAPSHOTS of a release that had an hbase version of 0.96.0 in its poms.
 
@@ -727,7 +731,7 @@ Following is an example of publishing SNAPSHOTS of a release that had an hbase v
 
 The _make_rc.sh_ script mentioned above (see <<maven.release,maven.release>>) can help you publish `SNAPSHOTS`.
 Make sure your `hbase.version` has a `-SNAPSHOT`                suffix before running the script.
-It will put a snapshot up into the apache snapshot repository for you. 
+It will put a snapshot up into the apache snapshot repository for you.
 
 [[hbase.rc.voting]]
 == Voting on Release Candidates
@@ -742,7 +746,7 @@ PMC members, please read this WIP doc on policy voting for a release candidate,
                requirements of the ASF policy on releases._ Regarding the latter, run +mvn apache-rat:check+ to verify all files are suitably licensed.
 See link:http://search-hadoop.com/m/DHED4dhFaU[HBase, mail # dev - On
                 recent discussion clarifying ASF release policy].
-for how we arrived at this process. 
+for how we arrived at this process.
 
 [[documentation]]
 == Generating the HBase Reference Guide
@@ -750,7 +754,7 @@ for how we arrived at this process.
 The manual is marked up using Asciidoc.
 We then use the link:http://asciidoctor.org/docs/asciidoctor-maven-plugin/[Asciidoctor maven plugin] to transform the markup to html.
 This plugin is run when you specify the +site+ goal as in when you run +mvn site+.
-See <<appendix_contributing_to_documentation,appendix contributing to documentation>> for more information on building the documentation. 
+See <<appendix_contributing_to_documentation,appendix contributing to documentation>> for more information on building the documentation.
 
 [[hbase.org]]
 == Updating link:http://hbase.apache.org[hbase.apache.org]
@@ -763,24 +767,7 @@ See <<appendix_contributing_to_documentation,appendix contributing to documentat
 [[hbase.org.site.publishing]]
 === Publishing link:http://hbase.apache.org[hbase.apache.org]
 
-As of link:https://issues.apache.org/jira/browse/INFRA-5680[INFRA-5680 Migrate apache hbase website], to publish the website, build it using Maven, and then deploy it over a checkout of _https://svn.apache.org/repos/asf/hbase/hbase.apache.org/trunk_                and check in your changes.
-The script _dev-scripts/publish_hbase_website.sh_ is provided to automate this process and to be sure that stale files are removed from SVN.
-Review the script even if you decide to publish the website manually.
-Use the script as follows:
-
-----
-$ publish_hbase_website.sh -h
-Usage: publish_hbase_website.sh [-i | -a] [-g <dir>] [-s <dir>]
- -h          Show this message
- -i          Prompts the user for input
- -a          Does not prompt the user. Potentially dangerous.
- -g          The local location of the HBase git repository
- -s          The local location of the HBase svn checkout
- Either --interactive or --silent is required.
- Edit the script to set default Git and SVN directories.
-----
-
-NOTE: The SVN commit takes a long time.
+See <<website_publish>> for instructions on publishing the website and documentation.
 
 [[hbase.tests]]
 == Tests
@@ -804,7 +791,7 @@ For any other module, for example `hbase-common`, the tests must be strict unit
 
 The HBase shell and its tests are predominantly written in jruby.
 In order to make these tests run as a part of the standard build, there is a single JUnit test, `TestShell`, that takes care of loading the jruby implemented tests and running them.
-You can run all of these tests from the top level with: 
+You can run all of these tests from the top level with:
 
 [source,bourne]
 ----
@@ -814,7 +801,7 @@ You can run all of these tests from the top level with:
 
 Alternatively, you may limit the shell tests that run using the system variable `shell.test`.
 This value should specify the ruby literal equivalent of a particular test case by name.
-For example, the tests that cover the shell commands for altering tables are contained in the test case `AdminAlterTableTest`        and you can run them with: 
+For example, the tests that cover the shell commands for altering tables are contained in the test case `AdminAlterTableTest`        and you can run them with:
 
 [source,bourne]
 ----
@@ -824,7 +811,7 @@ For example, the tests that cover the shell commands for altering tables are con
 
 You may also use a link:http://docs.ruby-doc.com/docs/ProgrammingRuby/html/language.html#UJ[Ruby Regular Expression
       literal] (in the `/pattern/` style) to select a set of test cases.
-You can run all of the HBase admin related tests, including both the normal administration and the security administration, with the command: 
+You can run all of the HBase admin related tests, including both the normal administration and the security administration, with the command:
 
 [source,bourne]
 ----
@@ -832,7 +819,7 @@ You can run all of the HBase admin related tests, including both the normal admi
       mvn clean test -Dtest=TestShell -Dshell.test=/.*Admin.*Test/
 ----
 
-In the event of a test failure, you can see details by examining the XML version of the surefire report results 
+In the event of a test failure, you can see details by examining the XML version of the surefire report results
 
 [source,bourne]
 ----
@@ -890,7 +877,7 @@ public class TestHRegionInfo {
 ----
 
 The above example shows how to mark a unit test as belonging to the `small` category.
-All unit tests in HBase have a categorization. 
+All unit tests in HBase have a categorization.
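 
 A self-contained sketch of that pattern follows (the class is hypothetical, and the package holding `SmallTests` has moved between HBase versions, so check your branch):
 
 [source,java]
 ----
 import static org.junit.Assert.assertEquals;
 
 import org.apache.hadoop.hbase.testclassification.SmallTests; // package differs on older branches
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
 // The @Category annotation is what places the test in the 'small' bucket
 // picked up by the default `mvn test` run.
 @Category(SmallTests.class)
 public class TestSomethingSmall {
   @Test
   public void addsCorrectly() {
     assertEquals(4, 2 + 2);
   }
 }
 ----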
 
 The first three categories, `small`, `medium`, and `large`, are for tests run when you type `$ mvn test`.
 In other words, these three categorizations are for HBase unit tests.
@@ -898,9 +885,9 @@ The `integration` category is not for unit tests, but for integration tests.
 These are run when you invoke `$ mvn verify`.
 Integration tests are described in <<integration.tests,integration.tests>>.
 
-HBase uses a patched maven surefire plugin and maven profiles to implement its unit test characterizations. 
+HBase uses a patched maven surefire plugin and maven profiles to implement its unit test characterizations.
 
-Keep reading to figure which annotation of the set small, medium, and large to put on your new HBase unit test. 
+Keep reading to figure out which annotation of the set small, medium, and large to put on your new HBase unit test.
 
 .Categorizing Tests
 Small Tests (((SmallTests)))::
@@ -912,28 +899,28 @@ Medium Tests (((MediumTests)))::
   _Medium_ tests represent tests that must be executed before proposing a patch.
   They are designed to run in less than 30 minutes altogether, and are quite stable in their results.
   They are designed to last less than 50 seconds individually.
-  They can use a cluster, and each of them is executed in a separate JVM. 
+  They can use a cluster, and each of them is executed in a separate JVM.
 
 Large Tests (((LargeTests)))::
   _Large_ tests are everything else.
   They are typically large-scale tests, regression tests for specific bugs, timeout tests, performance tests.
   They are executed before a commit on the pre-integration machines.
-  They can be run on the developer machine as well. 
+  They can be run on the developer machine as well.
 
 Integration Tests (((IntegrationTests)))::
   _Integration_ tests are system level tests.
-  See <<integration.tests,integration.tests>> for more info. 
+  See <<integration.tests,integration.tests>> for more info.
 
 [[hbase.unittests.cmds]]
 === Running tests
 
 [[hbase.unittests.cmds.test]]
-==== Default: small and medium category tests 
+==== Default: small and medium category tests
 
 Running `mvn test` will execute all small tests in a single JVM (no fork) and then medium tests in a separate JVM for each test instance.
 Medium tests are NOT executed if there is an error in a small test.
 Large tests are NOT executed.
-There is one report for small tests, and one report for medium tests if they are executed. 
+There is one report for small tests, and one report for medium tests if they are executed.
 
 [[hbase.unittests.cmds.test.runalltests]]
 ==== Running all tests
@@ -941,38 +928,38 @@ There is one report for small tests, and one report for medium tests if they are
 Running `mvn test -P runAllTests` will execute small tests in a single JVM then medium and large tests in a separate JVM for each test.
 Medium and large tests are NOT executed if there is an error in a small test.
 Large tests are NOT executed if there is an error in a small or medium test.
-There is one report for small tests, and one report for medium and large tests if they are executed. 
+There is one report for small tests, and one report for medium and large tests if they are executed.
 
 [[hbase.unittests.cmds.test.localtests.mytest]]
 ==== Running a single test or all tests in a package
 
-To run an individual test, e.g. `MyTest`, rum `mvn test -Dtest=MyTest` You can also pass multiple, individual tests as a comma-delimited list: 
+To run an individual test, e.g. `MyTest`, run `mvn test -Dtest=MyTest`. You can also pass multiple, individual tests as a comma-delimited list:
 [source,bash]
 ----
 mvn test  -Dtest=MyTest1,MyTest2,MyTest3
 ----
-You can also pass a package, which will run all tests under the package: 
+You can also pass a package, which will run all tests under the package:
 [source,bash]
 ----
 mvn test '-Dtest=org.apache.hadoop.hbase.client.*'
-----                
+----
 
 When `-Dtest` is specified, the `localTests` profile will be used.
 It will use the official release of maven surefire, rather than our custom surefire plugin, and the old connector (The HBase build uses a patched version of the maven surefire plugin). Each junit test is executed in a separate JVM (A fork per test class). There is no parallelization when tests are running in this mode.
 You will see a new message at the end of the report: `"[INFO] Tests are skipped"`.
 It's harmless.
-However, you need to make sure the sum of `Tests run:` in the `Results:` section of test reports matching the number of tests you specified because no error will be reported when a non-existent test case is specified. 
+However, you need to make sure the sum of `Tests run:` in the `Results:` section of test reports matches the number of tests you specified, because no error is reported when a non-existent test case is specified.
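 
 One way to eyeball the totals (a sketch only, assuming the default surefire report location):
 
 [source,bourne]
 ----
 # Print every 'Tests run:' summary line across the modules' surefire reports
 $ find . -path '*/target/surefire-reports/*.txt' -exec grep -h "Tests run:" {} +
 ----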
 
 [[hbase.unittests.cmds.test.profiles]]
 ==== Other test invocation permutations
 
-Running `mvn test -P runSmallTests` will execute "small" tests only, using a single JVM. 
+Running `mvn test -P runSmallTests` will execute "small" tests only, using a single JVM.
 
-Running `mvn test -P runMediumTests` will execute "medium" tests only, launching a new JVM for each test-class. 
+Running `mvn test -P runMediumTests` will execute "medium" tests only, launching a new JVM for each test-class.
 
-Running `mvn test -P runLargeTests` will execute "large" tests only, launching a new JVM for each test-class. 
+Running `mvn test -P runLargeTests` will execute "large" tests only, launching a new JVM for each test-class.
 
-For convenience, you can run `mvn test -P runDevTests` to execute both small and medium tests, using a single JVM. 
+For convenience, you can run `mvn test -P runDevTests` to execute both small and medium tests, using a single JVM.
 
 [[hbase.unittests.test.faster]]
 ==== Running tests faster
@@ -994,7 +981,7 @@ $ sudo mkdir /ram2G
 sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
 ----
 
-You can then use it to run all HBase tests on 2.0 with the command: 
+You can then use it to run all HBase tests on 2.0 with the command:
 
 ----
 mvn test
@@ -1002,7 +989,7 @@ mvn test
                         -Dtest.build.data.basedirectory=/ram2G
 ----
 
-On earlier versions, use: 
+On earlier versions, use:
 
 ----
 mvn test
@@ -1021,7 +1008,7 @@ It must be executed from the directory which contains the _pom.xml_.
 For example running +./dev-support/hbasetests.sh+ will execute small and medium tests.
 Running +./dev-support/hbasetests.sh
                         runAllTests+ will execute all tests.
-Running +./dev-support/hbasetests.sh replayFailed+ will rerun the failed tests a second time, in a separate jvm and without parallelisation. 
+Running +./dev-support/hbasetests.sh replayFailed+ will rerun the failed tests a second time, in a separate jvm and without parallelisation.
 
 [[hbase.unittests.resource.checker]]
 ==== Test Resource Checker(((Test ResourceChecker)))
@@ -1031,7 +1018,7 @@ Check the _*-out.txt_ files). The resources counted are the number of threads, t
 If the number has increased, it adds a _LEAK?_ comment in the logs.
 As you can have an HBase instance running in the background, some threads can be deleted/created without any specific action in the test.
 However, if the test does not work as expected, or if the test should not impact these resources, it's worth checking these log lines [computeroutput]+...hbase.ResourceChecker(157): before...+                    and [computeroutput]+...hbase.ResourceChecker(157): after...+.
-For example: 
+For example:
 
 ----
 2012-09-26 09:22:15,315 INFO [pool-1-thread-1]
@@ -1074,10 +1061,10 @@ This allows understanding what the test is waiting for.
 Moreover, the test will work whatever the machine performance is.
 Sleep should be minimal to be as fast as possible.
 Waiting for a variable should be done in a 40ms sleep loop.
-Waiting for a socket operation should be done in a 200 ms sleep loop. 
+Waiting for a socket operation should be done in a 200 ms sleep loop.
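 
 A minimal sketch of the sleep-loop pattern (the flag and timeout here are illustrative, not HBase API):
 
 [source,java]
 ----
 import java.util.concurrent.atomic.AtomicBoolean;
 
 public class WaitLoopExample {
   /** Poll a hypothetical flag in a 40 ms sleep loop instead of one long sleep. */
   public static void waitForFlag(AtomicBoolean flag, long timeoutMs)
       throws InterruptedException {
     long deadline = System.currentTimeMillis() + timeoutMs;
     while (!flag.get() && System.currentTimeMillis() < deadline) {
       Thread.sleep(40);
     }
     if (!flag.get()) {
       throw new AssertionError("condition not met within " + timeoutMs + " ms");
     }
   }
 }
 ----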
 
 [[hbase.tests.cluster]]
-==== Tests using a cluster 
+==== Tests using a cluster
 
 Tests using an HRegion do not have to start a cluster: A region can use the local file system.
 Starting/stopping a cluster costs around 10 seconds.
@@ -1085,7 +1072,7 @@ They should not be started per test method but per test class.
 A started cluster must be shut down using [method]+HBaseTestingUtility#shutdownMiniCluster+, which cleans the directories.
 As much as possible, tests should use the default settings for the cluster.
 When they don't, they should document it.
-This will allow to share the cluster later. 
+This will make it possible to share the cluster later.
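 
 A bare-bones sketch of the per-class pattern (the test class name is hypothetical):
 
 [source,java]
 ----
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 
 public class TestWithMiniClusterExample {
   private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
 
   @BeforeClass
   public static void setUpBeforeClass() throws Exception {
     TEST_UTIL.startMiniCluster();    // start once per test class, not per method
   }
 
   @AfterClass
   public static void tearDownAfterClass() throws Exception {
     TEST_UTIL.shutdownMiniCluster(); // cleans up the test directories
   }
 }
 ----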
 
 [[integration.tests]]
 === Integration Tests
@@ -1093,16 +1080,16 @@ This will allow to share the cluster later.
 HBase integration/system tests are tests that are beyond HBase unit tests.
 They are generally long-lasting and sizeable (the test can be asked to process 1M or 1B rows), targetable (they can take configuration that points them at the ready-made cluster they are to run against; integration tests do not include cluster start/stop code), and they verify success through public APIs only; they do not attempt to examine server internals when asserting success or failure.
 Integration tests are what you would run when you need to more elaborate proofing of a release candidate beyond what unit tests can do.
-They are not generally run on the Apache Continuous Integration build server, however, some sites opt to run integration tests as a part of their continuous testing on an actual cluster. 
+They are not generally run on the Apache Continuous Integration build server, however, some sites opt to run integration tests as a part of their continuous testing on an actual cluster.
 
 Integration tests currently live under the _src/test_                directory in the hbase-it submodule and will match the regex: _**/IntegrationTest*.java_.
-All integration tests are also annotated with `@Category(IntegrationTests.class)`. 
+All integration tests are also annotated with `@Category(IntegrationTests.class)`.
 
 Integration tests can be run in two modes: using a mini cluster, or against an actual distributed cluster.
 Maven failsafe is used to run the tests using the mini cluster.
 IntegrationTestsDriver class is used for executing the tests against a distributed cluster.
 Integration tests SHOULD NOT assume that they are running against a mini cluster, and SHOULD NOT use private API's to access cluster state.
-To interact with the distributed or mini cluster uniformly, `IntegrationTestingUtility`, and `HBaseCluster` classes, and public client API's can be used. 
+To interact with the distributed or mini cluster uniformly, use the `IntegrationTestingUtility` and `HBaseCluster` classes together with the public client APIs.
 
 On a distributed cluster, integration tests that use ChaosMonkey or otherwise manipulate services through the cluster manager (e.g.
 restarting regionservers) use SSH to do it.
@@ -1116,15 +1103,15 @@ The argument 1 (%1$s) is SSH options set the via opts setting or via environment
 ----
 /usr/bin/ssh %1$s %2$s%3$s%4$s "su hbase - -c \"%5$s\""
 ----
-That way, to kill RS (for example) integration tests may run: 
+That way, to kill RS (for example) integration tests may run:
 [source,bash]
 ----
 {/usr/bin/ssh some-hostname "su hbase - -c \"ps aux | ... | kill ...\""}
 ----
-The command is logged in the test logs, so you can verify it is correct for your environment. 
+The command is logged in the test logs, so you can verify it is correct for your environment.
 
 To disable the running of Integration Tests, pass the following profile on the command line `-PskipIntegrationTests`.
-For example, 
+For example,
 [source]
 ----
 $ mvn clean install test -Dtest=TestZooKeeper  -PskipIntegrationTests
@@ -1146,9 +1133,9 @@ mvn verify
 ----
 
 If you just want to run the integration tests in top-level, you need to run two commands.
-First: +mvn failsafe:integration-test+ This actually runs ALL the integration tests. 
+First: +mvn failsafe:integration-test+ This actually runs ALL the integration tests.
 
-NOTE: This command will always output `BUILD SUCCESS` even if there are test failures. 
+NOTE: This command will always output `BUILD SUCCESS` even if there are test failures.
 
 At this point, you could grep the output by hand looking for failed tests.
 However, maven will do this for us; just use: +mvn
@@ -1159,19 +1146,19 @@ However, maven will do this for us; just use: +mvn
 
 This is very similar to how you specify running a subset of unit tests (see above), but use the property `it.test` instead of `test`.
 To just run `IntegrationTestClassXYZ.java`, use: +mvn
-                            failsafe:integration-test -Dit.test=IntegrationTestClassXYZ+                        The next thing you might want to do is run groups of integration tests, say all integration tests that are named IntegrationTestClassX*.java: +mvn failsafe:integration-test -Dit.test=*ClassX*+ This runs everything that is an integration test that matches *ClassX*. This means anything matching: "**/IntegrationTest*ClassX*". You can also run multiple groups of integration tests using comma-delimited lists (similar to unit tests). Using a list of matches still supports full regex matching for each of the groups.This would look something like: +mvn
-                            failsafe:integration-test -Dit.test=*ClassX*, *ClassY+                    
+                            failsafe:integration-test -Dit.test=IntegrationTestClassXYZ+                        The next thing you might want to do is run groups of integration tests, say all integration tests that are named IntegrationTestClassX*.java: +mvn failsafe:integration-test -Dit.test=*ClassX*+ This runs everything that is an integration test that matches *ClassX*. This means anything matching: "**/IntegrationTest*ClassX*". You can also run multiple groups of integration tests using comma-delimited lists (similar to unit tests). Using a list of matches still supports full regex matching for each of the groups. This would look something like: +mvn
+                            failsafe:integration-test -Dit.test=*ClassX*, *ClassY+
 
 [[maven.build.commands.integration.tests.distributed]]
 ==== Running integration tests against distributed cluster
 
 If you have an already-setup HBase cluster, you can launch the integration tests by invoking the class `IntegrationTestsDriver`.
 You may have to run test-compile first.
-The configuration will be picked by the bin/hbase script. 
+The configuration will be picked by the bin/hbase script.
 [source,bourne]
 ----
 mvn test-compile
----- 
+----
 Then launch the tests with:
 
 [source,bourne]
@@ -1184,26 +1171,30 @@ Running the IntegrationTestsDriver without any argument will launch tests found
 See the usage, by passing -h, to see how to filter test classes.
 You can pass a regex which is checked against the full class name; so, part of class name can be used.
 IntegrationTestsDriver uses Junit to run the tests.
-Currently there is no support for running integration tests against a distributed cluster using maven (see link:https://issues.apache.org/jira/browse/HBASE-6201[HBASE-6201]). 
+Currently there is no support for running integration tests against a distributed cluster using maven (see link:https://issues.apache.org/jira/browse/HBASE-6201[HBASE-6201]).
 
 The tests interact with the distributed cluster by using the methods in the `DistributedHBaseCluster` (implementing `HBaseCluster`) class, which in turn uses a pluggable `ClusterManager`.
 Concrete implementations provide actual functionality for carrying out deployment-specific and environment-dependent tasks (SSH, etc). The default `ClusterManager` is `HBaseClusterManager`, which uses SSH to remotely execute start/stop/kill/signal commands, and assumes some posix commands (ps, etc). Also assumes the user running the test has enough "power" to start/stop servers on the remote machines.
 By default, it picks up `HBASE_SSH_OPTS`, `HBASE_HOME`, `HBASE_CONF_DIR` from the env, and uses `bin/hbase-daemon.sh` to carry out the actions.
 Currently tarball deployments, deployments which use _hbase-daemons.sh_, and link:http://incubator.apache.org/ambari/[Apache Ambari] deployments are supported.
 _/etc/init.d/_ scripts are not supported for now, but support can be added easily.
-For other deployment options, a ClusterManager can be implemented and plugged in. 
+For other deployment options, a ClusterManager can be implemented and plugged in.
 
 [[maven.build.commands.integration.tests.destructive]]
-==== Destructive integration / system tests
+==== Destructive integration / system tests (ChaosMonkey)
 
-In 0.96, a tool named `ChaosMonkey` has been introduced.
-It is modeled after the link:http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html[same-named tool by Netflix].
-Some of the tests use ChaosMonkey to simulate faults in the running cluster in the way of killing random servers, disconnecting servers, etc.
-ChaosMonkey can also be used as a stand-alone tool to run a (misbehaving) policy while you are running other tests. 
+HBase 0.96 introduced a tool named `ChaosMonkey`, modeled after
+link:http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html[Netflix's Chaos Monkey tool].
+ChaosMonkey simulates real-world
+faults in a running cluster by killing or disconnecting random servers, or injecting
+other failures into the environment. You can use ChaosMonkey as a stand-alone tool
+to run a policy while other tests are running. In some environments, ChaosMonkey is
+always running, in order to constantly check that high availability and fault tolerance
+are working as expected.
 
-ChaosMonkey defines Action's and Policy's.
-Actions are sequences of events.
-We have at least the following actions:
+ChaosMonkey defines *Actions* and *Policies*.
+
+Actions:: Actions are predefined sequences of events, such as the following:
 
 * Restart active master (sleep 5 sec)
 * Restart random regionserver (sleep 5 sec)
@@ -1213,23 +1204,17 @@ We have at least the following actions:
 * Batch restart of 50% of regionservers (sleep 5 sec)
 * Rolling restart of 100% of regionservers (sleep 5 sec)
 
-Policies on the other hand are responsible for executing the actions based on a strategy.
-The default policy is to execute a random action every minute based on predefined action weights.
-ChaosMonkey executes predefined named policies until it is stopped.
-More than one policy can be active at any time. 
-
-To run ChaosMonkey as a standalone tool deploy your HBase cluster as usual.
-ChaosMonkey uses the configuration from the bin/hbase script, thus no extra configuration needs to be done.
-You can invoke the ChaosMonkey by running:
+Policies:: A policy is a strategy for executing one or more actions. The default policy
+executes a random action every minute based on predefined action weights.
+A given policy will be executed until ChaosMonkey is interrupted.
 
-[source,bourne]
-----
-bin/hbase org.apache.hadoop.hbase.util.ChaosMonkey
-----
-
-This will output smt like: 
+Most ChaosMonkey actions are configured to have reasonable defaults, so you can run
+ChaosMonkey against an existing cluster without any additional configuration. The
+following example runs ChaosMonkey with the default configuration:
 
+[source,bash]
 ----
+$ bin/hbase org.apache.hadoop.hbase.util.ChaosMonkey
 
 12/11/19 23:21:57 INFO util.ChaosMonkey: Using ChaosMonkey Policy: class org.apache.hadoop.hbase.util.ChaosMonkey$PeriodicRandomActionPolicy, period:60000
 12/11/19 23:21:57 INFO util.ChaosMonkey: Sleeping for 26953 to add jitter
@@ -1268,31 +1253,38 @@ This will output smt like:
 12/11/19 23:24:27 INFO util.ChaosMonkey: Started region server:rs3.example.com,60020,1353367027826. Reported num of rs:6
 ----
 
-As you can see from the log, ChaosMonkey started the default PeriodicRandomActionPolicy, which is configured with all the available actions, and ran RestartActiveMaster and RestartRandomRs actions.
-ChaosMonkey tool, if run from command line, will keep on running until the process is killed. 
+The output indicates that ChaosMonkey started the default `PeriodicRandomActionPolicy`,
+which is configured with all the available actions. It chose to run `RestartActiveMaster` and `RestartRandomRs` actions.
+
+==== Available Policies
+HBase ships with several ChaosMonkey policies, available in the
+`hbase/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/policies/` directory.
 
 [[chaos.monkey.properties]]
-==== Passing individual Chaos Monkey per-test Settings/Properties
+==== Configuring Individual ChaosMonkey Actions
 
-Since HBase version 1.0.0 (link:https://issues.apache.org/jira/browse/HBASE-11348[HBASE-11348]), the chaos monkeys is used to run integration tests can be configured per test run.
-Users can create a java properties file and and pass this to the chaos monkey with timing configurations.
-The properties file needs to be in the HBase classpath.
-The various properties that can be configured and their default values can be found listed in the `org.apache.hadoop.hbase.chaos.factories.MonkeyConstants`                    class.
-If any chaos monkey configuration is missing from the property file, then the default values are assumed.
-For example:
+Since HBase version 1.0.0 (link:https://issues.apache.org/jira/browse/HBASE-11348[HBASE-11348]),
+ChaosMonkey integration tests can be configured per test run.
+Create a Java properties file in the HBase classpath and pass it to ChaosMonkey using
+the `-monkeyProps` configuration flag. Configurable properties, along with their default
+values if applicable, are listed in the `org.apache.hadoop.hbase.chaos.factories.MonkeyConstants`
+class. For properties that have defaults, you can override them by including them
+in your properties file.
+
+The following example uses a properties file called <<monkey.properties,monkey.properties>>.
 
 [source,bourne]
 ----
-
-$bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic -monkeyProps monkey.properties
+$ bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic -monkeyProps monkey.properties
 ----
 
 The above command starts the integration tests and the chaos monkey, passing it the properties file _monkey.properties_.
 Here is an example chaos monkey file:
 
+[[monkey.properties]]
+.Example ChaosMonkey Properties File
 [source]
 ----
-
 sdm.action1.period=120000
 sdm.action2.period=40000
 move.regions.sleep.time=80000
@@ -1301,6 +1293,35 @@ move.regions.sleep.time=80000
 batch.restart.rs.ratio=0.4f
 ----
 
+HBase 1.0.2 and newer adds the ability to restart HBase's underlying ZooKeeper quorum or
+HDFS nodes. To use these actions, you need to configure some new properties, which
+have no reasonable defaults because they are deployment-specific, in your ChaosMonkey
+properties file, which may be `hbase-site.xml` or a different properties file.
+
+[source,xml]
+----
+<property>
+  <name>hbase.it.clustermanager.hadoop.home</name>
+  <value>$HADOOP_HOME</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.zookeeper.home</name>
+  <value>$ZOOKEEPER_HOME</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.hbase.user</name>
+  <value>hbase</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.hadoop.hdfs.user</name>
+  <value>hdfs</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.zookeeper.user</name>
+  <value>zookeeper</value>
+</property>
+----
+
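 
 A hedged example of putting those properties to use (the monkey factory name below is illustrative; check `org.apache.hadoop.hbase.chaos.factories.MonkeyFactory` for the names available on your branch):
 
 [source,bourne]
 ----
 # Run an ingest integration test with a destructive monkey, pointing at the
 # properties file that carries the deployment-specific settings shown above.
 $ bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest \
     -m serverAndDependenciesKilling -monkeyProps monkey.properties
 ----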
 [[developing]]
 == Developer Guidelines
 
@@ -1324,25 +1345,36 @@ NOTE: End-of-life releases are not included in this list.
 |===
 | Release
 | Release Manager
+
+| 0.94
+| Lars Hofhansl
+
 | 0.98
 | Andrew Purtell
 
 | 1.0
 | Enis Soztutar
+
+| 1.1
+| Nick Dimiduk
+
+| 1.2
+| Sean Busbey
+
 |===
 
 [[code.standards]]
 === Code Standards
 
-See <<eclipse.code.formatting,eclipse.code.formatting>> and <<common.patch.feedback,common.patch.feedback>>. 
+See <<eclipse.code.formatting,eclipse.code.formatting>> and <<common.patch.feedback,common.patch.feedback>>.
 
 ==== Interface Classifications
 
 Interfaces are classified both by audience and by stability level.
 These labels appear at the head of a class.
-The conventions followed by HBase are inherited by its parent project, Hadoop. 
+The conventions followed by HBase are inherited from its parent project, Hadoop.
 
-The following interface classifications are commonly used: 
+The following interface classifications are commonly used:
 
 .InterfaceAudience
 `@InterfaceAudience.Public`::
@@ -1366,7 +1398,7 @@ No `@InterfaceAudience` Classification::
 .Excluding Non-Public Interfaces from API Documentation
 [NOTE]
 ====
-Only interfaces classified `@InterfaceAudience.Public` should be included in API documentation (Javadoc). Committers must add new package excludes `ExcludePackageNames` section of the _pom.xml_ for new packages which do not contain public classes. 
+Only interfaces classified `@InterfaceAudience.Public` should be included in API documentation (Javadoc). Committers must add new package excludes to the `ExcludePackageNames` section of the _pom.xml_ for new packages which do not contain public classes.
 ====
 
 .@InterfaceStability
@@ -1384,7 +1416,7 @@ Only interfaces classified `@InterfaceAudience.Public` should be included in API
 No `@InterfaceStability` Label::
   Public classes with no `@InterfaceStability` label are discouraged, and should be considered implicitly unstable.
 
-If you are unclear about how to mark packages, ask on the development list. 
+If you are unclear about how to mark packages, ask on the development list.
 
 [[common.patch.feedback]]
 ==== Code Formatting Conventions
@@ -1487,7 +1519,7 @@ Don't forget Javadoc!
 
 Javadoc warnings are checked during precommit.
 If the precommit tool gives you a '-1', please fix the javadoc issue.
-Your patch won't be committed if it adds such warnings. 
+Your patch won't be committed if it adds such warnings.
 
 [[common.patch.feedback.findbugs]]
 ===== Findbugs
@@ -1507,7 +1539,7 @@ value="HE_EQUALS_USE_HASHCODE",
 justification="I know what I'm doing")
 ----
 
-It is important to use the Apache-licensed version of the annotations. 
+It is important to use the Apache-licensed version of the annotations.
 
 [[common.patch.feedback.javadoc.defaults]]
 ===== Javadoc - Useless Defaults
@@ -1531,14 +1563,14 @@ The preference is to add something descriptive and useful.
 [[common.patch.feedback.onething]]
 ===== One Thing At A Time, Folks
 
-If you submit a patch for one thing, don't do auto-reformatting or unrelated reformatting of code on a completely different area of code. 
+If you submit a patch for one thing, don't do auto-reformatting or unrelated reformatting of code on a completely different area of code.
 
-Likewise, don't add unrelated cleanup or refactorings outside the scope of your Jira. 
+Likewise, don't add unrelated cleanup or refactorings outside the scope of your Jira.
 
 [[common.patch.feedback.tests]]
 ===== Ambiguous Unit Tests
 
-Make sure that you're clear about what you are testing in your unit tests and why. 
+Make sure that you're clear about what you are testing in your unit tests and why.
 
 [[common.patch.feedback.writable]]
 ===== Implementing Writable
@@ -1546,24 +1578,38 @@ Make sure that you're clear about what you are testing in your unit tests and wh
 .Applies pre-0.96 only
 [NOTE]
 ====
-In 0.96, HBase moved to protocol buffers (protobufs). The below section on Writables applies to 0.94.x and previous, not to 0.96 and beyond. 
+In 0.96, HBase moved to protocol buffers (protobufs). The below section on Writables applies to 0.94.x and previous, not to 0.96 and beyond.
 ====
 
 Every class returned by RegionServers must implement the `Writable` interface.
-If you are creating a new class that needs to implement this interface, do not forget the default constructor. 
+If you are creating a new class that needs to implement this interface, do not forget the default constructor.
+
+==== Garbage-Collection Conserving Guidelines
+
+The following guidelines were borrowed from http://engineering.linkedin.com/performance/linkedin-feed-faster-less-jvm-garbage.
+Keep them in mind to keep preventable garbage collection to a minimum. Have a look
+at the blog post for some great examples of how to refactor your code according to
+these guidelines.
+
+- Be careful with Iterators
+- Estimate the size of a collection when initializing
+- Defer expression evaluation
+- Compile the regex patterns in advance
+- Cache it if you can
+- String Interns are useful but dangerous
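 
 A short illustration of two of these points, pre-sizing a collection and compiling a regex once (not taken from the post; the names are made up):
 
 [source,java]
 ----
 import java.util.ArrayList;
 import java.util.List;
 import java.util.regex.Pattern;
 
 public class GcFriendlyExample {
   // Compile once and reuse, instead of calling String.matches() per element.
   private static final Pattern ROW_KEY = Pattern.compile("^[a-z0-9_-]+$");
 
   public static List<String> filterRowKeys(List<String> candidates) {
     // Pre-size to avoid repeated internal array copies while the list grows.
     List<String> accepted = new ArrayList<>(candidates.size());
     // Indexed loop avoids allocating an Iterator for ArrayList access.
     for (int i = 0; i < candidates.size(); i++) {
       String key = candidates.get(i);
       if (ROW_KEY.matcher(key).matches()) {
         accepted.add(key);
       }
     }
     return accepted;
   }
 }
 ----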
 
 [[design.invariants]]
 === Invariants
 
 We don't have many but what we have we list below.
-All are subject to challenge of course but until then, please hold to the rules of the road. 
+All are subject to challenge of course but until then, please hold to the rules of the road.
 
 [[design.invariants.zk.data]]
 ==== No permanent state in ZooKeeper
 
 ZooKeeper state should be transient (treat it like memory). If ZooKeeper state is deleted, HBase should be able to recover and essentially be in the same state.
 
-* .ExceptionsThere are currently a few exceptions that we need to fix around whether a table is enabled or disabled.
+* .Exceptions: There are currently a few exceptions that we need to fix around whether a table is enabled or disabled.
 * Replication data is currently stored only in ZooKeeper.
   Deleting ZooKeeper data related to replication may cause replication to be disabled.
   Do not delete the replication tree, _/hbase/replication/_.
@@ -1577,14 +1623,14 @@ Follow progress on this issue at link:https://issues.apache.org/jira/browse/HBAS
 
 If you are developing Apache HBase, frequently it is useful to test your changes against a more-real cluster than what you find in unit tests.
 In this case, HBase can be run directly from the source in local-mode.
-All you need to do is run: 
+All you need to do is run:
 
 [source,bourne]
 ----
 ${HBASE_HOME}/bin/start-hbase.sh
 ----
 
-This will spin up a full local-cluster, just as if you had packaged up HBase and installed it on your machine. 
+This will spin up a full local-cluster, just as if you had packaged up HBase and installed it on your machine.
 
 Keep in mind that you will need to have installed HBase into your local maven repository for the in-situ cluster to work properly.
 That is, you will need to run:
@@ -1605,21 +1651,21 @@ HBase exposes metrics using the Hadoop Metrics 2 system, so adding a new metric
 Unfortunately the API of metrics2 changed from hadoop 1 to hadoop 2.
 In order to get around this, a set of interfaces and implementations has to be loaded at runtime.
 To get an in-depth look at the reasoning and structure of these classes you can read the blog post located link:https://blogs.apache.org/hbase/entry/migration_to_the_new_metrics[here].
-To add a metric to an existing MBean follow the short guide below: 
+To add a metric to an existing MBean follow the short guide below:
 
 ==== Add Metric name and Function to Hadoop Compat Interface.
 
 Inside the source interface that corresponds to where the metrics are generated (e.g. MetricsMasterSource for things coming from HMaster), create new static strings for the metric name and description.
-Then add a new method that will be called to add new reading. 
+Then add a new method that will be called to record a new reading.
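 
 A rough sketch of what that interface addition might look like (the metric name, description, and method are hypothetical examples):
 
 [source,java]
 ----
 public interface MetricsMasterSourceExample {
   String EXAMPLE_OPS_NAME = "exampleOps";
   String EXAMPLE_OPS_DESC = "Number of example operations the master has handled";
 
   // New reading to be recorded; the implementation wires this to a counter
   // or histogram created in its init method.
   void incExampleOps();
 }
 ----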
 
 ==== Add the Implementation to Both Hadoop 1 and Hadoop 2 Compat modules.
 
 Inside of the implementation of the source (e.g.
 MetricsMasterSourceImpl in the above example) create a new histogram, counter, gauge, or stat in the init method.
-Then in the method that was added to the interface wire up the parameter passed in to the histogram. 
+Then, in the method that was added to the interface, wire up the parameter passed in to the histogram.
 
 Now add tests that make sure the data is correctly exported to the metrics 2 system.
-For this the MetricsAssertHelper is provided. 
+For this the MetricsAssertHelper is provided.
 
 [[git.best.practices]]
 === Git Best Practices
@@ -1660,7 +1706,7 @@ It provides a nice overview that applies equally to the Apache HBase Project.
 ==== Create Patch
 
 The script _dev-support/make_patch.sh_ has been provided to help you adhere to patch-creation guidelines.
-The script has the following syntax: 
+The script has the following syntax:
 
 ----
 $ make_patch.sh [-a] [-p <patch_dir>]
@@ -1675,7 +1721,9 @@ $ make_patch.sh [-a] [-p <patch_dir>]
   If you decline, the script uses +git diff+ instead.
   The patch is saved in a configurable directory and is ready to be attached to your JIRA.
 
-* .Patching WorkflowAlways patch against the master branch first, even if you want to patch in another branch.
+.Patching Workflow
+
+* Always patch against the master branch first, even if you want to patch in another branch.
   HBase committers always apply patches first to the master branch, and backport if necessary.
 * Submit one single patch for a fix.
   If necessary, squash local commits to merge local commits into a single one first.
@@ -1711,17 +1759,20 @@ Please understand that not every patch may get committed, and that feedback will
   However, at times it is easier to refer to different version of a patch if you add `-vX`, where the [replaceable]_X_ is the version (starting with 2).
 * If you need to submit your patch against multiple branches, rather than just master, name each version of the patch with the branch it is for, following the naming conventions in <<submitting.patches.create,submitting.patches.create>>.
 
-.Methods to Create PatchesEclipse::
+.Methods to Create Patches
+Eclipse::
   Select the  menu item.
 
 Git::
-  `git format-patch` is preferred because it preserves commit messages.
+  `git format-patch` is preferred:
+     - It preserves the committer and commit message.
+     - It handles binary files by default, whereas `git diff` ignores them unless
+     you use the `--binary` option.
   Use `git rebase -i` first, to combine (squash) smaller commits into a single larger one (see the short sketch after this list).
 
 Subversion::
-
-Make sure you review <<eclipse.code.formatting,eclipse.code.formatting>> and <<common.patch.feedback,common.patch.feedback>> for code style.
-If your patch was generated incorrectly or your code does not adhere to the code formatting guidelines, you may be asked to redo some work.
+  Make sure you review <<eclipse.code.formatting,eclipse.code.formatting>> and <<common.patch.feedback,common.patch.feedback>> for code style.
+  If your patch was generated incorrectly or your code does not adhere to the code formatting guidelines, you may be asked to redo some work.
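 
 A short sketch of the Git path described above (the issue number is a placeholder):
 
 [source,bourne]
 ----
 # Squash local work into one commit, then emit a single patch file for the JIRA
 $ git rebase -i origin/master
 $ git format-patch --stdout origin/master > HBASE-XXXX.patch
 ----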
 
 [[submitting.patches.tests]]
 ==== Unit Tests
@@ -1733,7 +1784,7 @@ Also, see <<mockito,mockito>>.
 
 If you are creating a new unit test class, notice how other unit test classes have classification/sizing annotations at the top and a static method on the end.
 Be sure to include these in any new unit test files you generate.
-See <<hbase.tests,hbase.tests>> for more on how the annotations work. 
+See <<hbase.tests,hbase.tests>> for more on how the annotations work.
 
 ==== Integration Tests
 
@@ -1741,13 +1792,13 @@ Significant new features should provide an integration test in addition to unit
 
 ==== ReviewBoard
 
-Patches larger than one screen, or patches that will be tricky to review, should go through link:http://reviews.apache.org[ReviewBoard]. 
+Patches larger than one screen, or patches that will be tricky to review, should go through link:http://reviews.apache.org[ReviewBoard].
 
 .Procedure: Use ReviewBoard
 . Register for an account if you don't already have one.
   It does not use the credentials from link:http://issues.apache.org[issues.apache.org].
   Log in.
-. Click [label]#New Review Request#. 
+. Click [label]#New Review Request#.
 . Choose the `hbase-git` repository.
   Click Choose File to select the diff and optionally a parent diff.
   Click btn:[Create
@@ -1763,39 +1814,39 @@ Patches larger than one screen, or patches that will be tricky to review, should
 . To cancel the request, click .
 
 For more information on how to use ReviewBoard, see link:http://www.reviewboard.org/docs/manual/1.5/[the ReviewBoard
-                        documentation]. 
+                        documentation].
 
 ==== Guide for HBase Committers
 
 ===== New committers
 
-New committers are encouraged to first read Apache's generic committer documentation: 
+New committers are encouraged to first read Apache's generic committer documentation:
 
-* link:http://www.apache.org/dev/new-committers-guide.html[Apache New Committer Guide]                            
-* link:http://www.apache.org/dev/committers.html[Apache Committer FAQ]                            
+* link:http://www.apache.org/dev/new-committers-guide.html[Apache New Committer Guide]
+* link:http://www.apache.org/dev/committers.html[Apache Committer FAQ]
 
 ===== Review
 
 HBase committers should, as often as possible, attempt to review patches submitted by others.
 Ideally every submitted patch will get reviewed by a committer _within a few days_.
-If a committer reviews a patch they have not authored, and believe it to be of sufficient quality, then they can commit the patch, otherwise the patch should be cancelled with a clear explanation for why it was rejected. 
+If a committer reviews a patch they have not authored, and believe it to be of sufficient quality, then they can commit the patch; otherwise the patch should be cancelled with a clear explanation of why it was rejected.
 
 The list of submitted patches is in the link:https://issues.apache.org/jira/secure/IssueNavigator.jspa?mode=hide&requestId=12312392[HBase Review Queue], which is ordered by time of last modification.
-Committers should scan the list from top to bottom, looking for patches that they feel qualified to review and possibly commit. 
+Committers should scan the list from top to bottom, looking for patches that they feel qualified to review and possibly commit.
 
 For non-trivial changes, it is required to get another committer to review your own patches before commit.
-Use the btn:[Submit Patch]                        button in JIRA, just like other contributors, and then wait for a `+1` response from another committer before committing. 
+Use the btn:[Submit Patch]                        button in JIRA, just like other contributors, and then wait for a `+1` response from another committer before committing.
 
 ===== Reject
 
 Patches which do not adhere to the guidelines in link:https://wiki.apache.org/hadoop/Hbase/HowToCommit/hadoop/Hbase/HowToContribute#[HowToContribute] and to the link:https://wiki.apache.org/hadoop/Hbase/HowToCommit/hadoop/CodeReviewChecklist#[code review checklist] should be rejected.
 Committers should always be polite to contributors and try to instruct and encourage them to contribute better patches.
-If a committer wishes to improve an unacceptable patch, then it should first be rejected, and a new patch should be attached by the committer for review. 
+If a committer wishes to improve an unacceptable patch, then it should first be rejected, and a new patch should be attached by the committer for review.
 
 [[committing.patches]]
 ===== Commit
 
-Committers commit patches to the Apache HBase GIT repository. 
+Committers commit patches to the Apache HBase GIT repository.
 
 .Before you commit!!!!
 [NOTE]
@@ -1803,13 +1854,13 @@ Committers commit patches to the Apache HBase GIT repository.
 Make sure your local configuration is correct, especially your identity and email.
 Examine the output of the +$ git config
                                 --list+ command and be sure it is correct.
-See this GitHub article, link:https://help.github.com/articles/set-up-git[Set Up Git] if you need pointers. 
+See this GitHub article, link:https://help.github.com/articles/set-up-git[Set Up Git] if you need pointers.
 ====
 
-When you commit a patch, please: 
+When you commit a patch, please:
 
 . Include the Jira issue id in the commit message, along with a short description of the change and the name of the contributor if it is not you.
-  Be sure to get the issue ID right, as this causes Jira to link to the change in Git (use the issue's "All" tab to see these). 
+  Be sure to get the issue ID right, as this causes Jira to link to the change in Git (use the issue's "All" tab to see these).
 . Commit the patch to a new branch based off master or other intended branch.
   It's a good idea to call this branch by the JIRA ID.
   Then check out the relevant target branch where you want to commit, make sure your local branch has all remote changes, by doing a +git pull --rebase+ or another similar command, cherry-pick the change into each relevant branch (such as master), and do +git push <remote-server>
@@ -1820,9 +1871,9 @@ If the push fails for any reason, fix the problem or ask for help.
 Do not do a +git push --force+.
 +
 Before you can commit a patch, you need to determine how the patch was created.
-The instructions and preferences around the way to create patches have changed, and there will be a transition periond.
+The instructions and preferences around the way to create patches have changed, and there will be a transition period.
 +
-* .Determine How a Patch Was CreatedIf the first few lines of the patch look like the headers of an email, with a From, Date, and Subject, it was created using +git format-patch+.
+* .Determine How a Patch Was Created: If the first few lines of the patch look like the headers of an email, with a From, Date, and Subject, it was created using +git format-patch+.
   This is the preference, because you can reuse the submitter's commit message.
   If the commit message is not appropriate, you can still use the commit, then run the command +git
   rebase -i origin/master+, and squash and reword as appropriate.
@@ -1832,13 +1883,13 @@ The instructions and preferences around the way to create patches have changed,
   This is the indication that the patch was not created with `--no-prefix`.
 +
 ----
-diff --git a/src/main/docbkx/developer.xml b/src/main/docbkx/developer.xml
+diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
 ----
 
 * If the first line of the patch looks similar to the following (without the `a` and `b`), the patch was created with +git diff --no-prefix+ and you need to add `-p0` to the +git apply+                                        command below.
 +
 ----
-diff --git src/main/docbkx/developer.xml src/main/docbkx/developer.xml
+diff --git src/main/asciidoc/_chapters/developer.adoc src/main/asciidoc/_chapters/developer.adoc
 ----
 
 +
@@ -1849,7 +1900,7 @@ The only command that actually writes anything to the remote repository is +git
 The extra +git
                                         pull+ commands are usually redundant, but better safe than sorry.
 
-The first example shows how to apply a patch that was generated with +git format-patch+ and apply it to the `master` and `branch-1` branches. 
+The first example shows how to apply a patch that was generated with +git format-patch+ and apply it to the `master` and `branch-1` branches.
 
 The directive to use +git format-patch+                                    rather than +git diff+, and not to use `--no-prefix`, is a new one.
 See the second example for how to apply a patch created with +git
@@ -1877,7 +1928,7 @@ This example shows how to commit a patch that was created using +git diff+ witho
 If the patch was created with `--no-prefix`, add `-p0` to the +git apply+ command.
 
 ----
-$ git apply ~/Downloads/HBASE-XXXX-v2.patch 
+$ git apply ~/Downloads/HBASE-XXXX-v2.patch
 $ git commit -m "HBASE-XXXX Really Good Code Fix (Joe Schmo)" -a # This extra step is needed for patches created with 'git diff'
 $ git checkout master
 $ git pull --rebase
@@ -1896,7 +1947,7 @@ $ git branch -D HBASE-XXXX
 ====
 
 . Resolve the issue as fixed, thanking the contributor.
-  Always set the "Fix Version" at this point, but please only set a single fix version for each branch where the change was committed, the earliest release in that branch in which the change will appear. 
+  Always set the "Fix Version" at this point, but please only set a single fix version for each branch where the change was committed, the earliest release in that branch in which the change will appear.
 
 ====== Commit Message Format
 
@@ -1916,30 +1967,30 @@ If the contributor used +git format-patch+ to generate the patch, their commit m
 [[committer.amending.author]]
 ====== Add Amending-Author when a conflict cherrypick backporting
 
-We've established the practice of committing to trunk and then cherry picking back to branches whenever possible.
+We've established the practice of committing to master and then cherry picking back to branches whenever possible.
 When there is a minor conflict we can fix it up and just proceed with the commit.
 The resulting commit retains the original author.
 When the amending author is different from the original committer, add notice of this at the end of the commit message as: `Amending-Author: Author
                                 <committer&apache>` See discussion at link:http://search-hadoop.com/m/DHED4wHGYS[HBase, mail # dev
                                 - [DISCUSSION] Best practice when amending commits cherry picked
-                                from master to branch]. 
+                                from master to branch].
 
 [[committer.tests]]
-====== Committers are responsible for making sure commits do not break thebuild or tests
+====== Committers are responsible for making sure commits do not break the build or tests
 
 If a committer commits a patch, it is their responsibility to make sure it passes the test suite.
 It is helpful if contributors keep an eye out that their patch does not break the hbase build and/or tests, but ultimately, a contributor cannot be expected to be aware of all the particular vagaries and interconnections that occur in a project like HBase.
-A committer should. 
+A committer should be.
 
 [[git.patch.flow]]
 ====== Patching Etiquette
 
 In the thread link:http://search-hadoop.com/m/DHED4EiwOz[HBase, mail # dev - ANNOUNCEMENT: Git Migration In Progress (WAS =>
-                                Re: Git Migration)], it was agreed on the following patch flow 
+                                Re: Git Migration)], the following patch flow was agreed upon:
 
-. Develop and commit the patch against trunk/master first.
+. Develop and commit the patch against master first.
 . Try to cherry-pick the patch when backporting if possible.
-. If this does not work, manually commit the patch to the branch.                        
+. If this does not work, manually commit the patch to the branch.
 
 ====== Merge Commits
 
@@ -1952,11 +2003,11 @@ See <<appendix_contributing_to_documentation,appendix contributing to documentat
 ==== Dialog
 
 Committers should hang out in the #hbase room on irc.freenode.net for real-time discussions.
-However any substantive discussion (as with any off-list project-related discussion) should be re-iterated in Jira or on the developer list. 
+However any substantive discussion (as with any off-list project-related discussion) should be re-iterated in Jira or on the developer list.
 
 ==== Do not edit JIRA comments
 
-Misspellings and/or bad grammar is preferable to the disruption a JIRA comment edit causes: See the discussion at link:http://search-hadoop.com/?q=%5BReopened%5D+%28HBASE-451%29+Remove+HTableDescriptor+from+HRegionInfo&fc_project=HBase[Re:(HBASE-451) Remove HTableDescriptor from HRegionInfo]                
+Misspellings and/or bad grammar are preferable to the disruption a JIRA comment edit causes. See the discussion at link:http://search-hadoop.com/?q=%5BReopened%5D+%28HBASE-451%29+Remove+HTableDescriptor+from+HRegionInfo&fc_project=HBase[Re:(HBASE-451) Remove HTableDescriptor from HRegionInfo]
 
 ifdef::backend-docbook[]
 [index]


[06/11] hbase git commit: HBASE-13908 update site docs for 1.2 RC.

Posted by bu...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/cp.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/cp.adoc b/src/main/asciidoc/_chapters/cp.adoc
index a99e903..5f50b68 100644
--- a/src/main/asciidoc/_chapters/cp.adoc
+++ b/src/main/asciidoc/_chapters/cp.adoc
@@ -27,203 +27,782 @@
 :icons: font
 :experimental:
 
-HBase coprocessors are modeled after the coprocessors which are part of Google's BigTable (http://www.scribd.com/doc/21631448/Dean-Keynote-Ladis2009, pages 66-67.). Coprocessors function in a similar way to Linux kernel modules.
-They provide a way to run server-level code against locally-stored data.
-The functionality they provide is very powerful, but also carries great risk and can have adverse effects on the system, at the level of the operating system.
-The information in this chapter is primarily sourced and heavily reused from Mingjie Lai's blog post at https://blogs.apache.org/hbase/entry/coprocessor_introduction.
+HBase Coprocessors are modeled after Google BigTable's coprocessor implementation
+(http://research.google.com/people/jeff/SOCC2010-keynote-slides.pdf pages 41-42.).
 
-Coprocessors are not designed to be used by end users of HBase, but by HBase developers who need to add specialized functionality to HBase.
-One example of the use of coprocessors is pluggable compaction and scan policies, which are provided as coprocessors in link:https://issues.apache.org/jira/browse/HBASE-6427[HBASE-6427].
+The coprocessor framework provides mechanisms for running your custom code directly on
+the RegionServers managing your data. Efforts are ongoing to bridge gaps between HBase's
+implementation and BigTable's architecture. For more information see
+link:https://issues.apache.org/jira/browse/HBASE-4047[HBASE-4047].
 
-== Coprocessor Framework
+The information in this chapter is primarily sourced and heavily reused from the following
+resources:
 
-The implementation of HBase coprocessors diverges from the BigTable implementation.
-The HBase framework provides a library and runtime environment for executing user code within the HBase region server and master processes.
+. Mingjie Lai's blog post
+link:https://blogs.apache.org/hbase/entry/coprocessor_introduction[Coprocessor Introduction].
+. Gaurav Bhardwaj's blog post
+link:http://www.3pillarglobal.com/insights/hbase-coprocessors[The How To Of HBase Coprocessors].
 
-The framework API is provided in the link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/coprocessor/package-summary.html[coprocessor] package.
+[WARNING]
+.Use Coprocessors At Your Own Risk
+====
+Coprocessors are an advanced feature of HBase and are intended to be used by system
+developers only. Because coprocessor code runs directly on the RegionServer and has
+direct access to your data, they introduce the risk of data corruption, man-in-the-middle
+attacks, or other malicious data access. Currently, there is no mechanism to prevent
+data corruption by coprocessors, though work is underway on
+link:https://issues.apache.org/jira/browse/HBASE-4047[HBASE-4047].
+
+In addition, there is no resource isolation, so a well-intentioned but misbehaving
+coprocessor can severely degrade cluster performance and stability.
+====
 
-Two different types of coprocessors are provided by the framework, based on their scope.
+== Coprocessor Overview
 
-.Types of Coprocessors
+In HBase, you fetch data using a `Get` or `Scan`, whereas in an RDBMS you use a SQL
+query. In order to fetch only the relevant data, you filter it using an HBase
+link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.html[Filter],
+whereas in an RDBMS you use a `WHERE` predicate.
 
-System Coprocessors::
-  System coprocessors are loaded globally on all tables and regions hosted by a region server.
+After fetching the data, you perform computations on it. This paradigm works well
+for "small data" with a few thousand rows and several columns. However, when you scale
+to billions of rows and millions of columns, moving large amounts of data across your
+network will create bottlenecks at the network layer, and the client needs to be powerful
+enough and have enough memory to handle the large amounts of data and the computations.
+In addition, the client code can grow large and complex.
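+
+As a point of reference, a purely client-side version of the salary-summation
+task used later in this chapter might look like the following sketch (the
+`users` table and its `salaryDet:gross` column are described in
+<<cp_example,Examples>>; the code here is illustrative only). Every matching
+cell is shipped across the network before the addition happens:
+
+[source,java]
+----
+// Client-side sketch, no coprocessor: all cells travel to the client,
+// which performs the summation itself.
+Configuration conf = HBaseConfiguration.create();
+try (Connection connection = ConnectionFactory.createConnection(conf);
+     Table table = connection.getTable(TableName.valueOf("users"))) {
+    Scan scan = new Scan();
+    scan.addColumn(Bytes.toBytes("salaryDet"), Bytes.toBytes("gross"));
+    long sum = 0L;
+    try (ResultScanner scanner = table.getScanner(scan)) {
+        for (Result result : scanner) {
+            Cell cell = result.getColumnLatestCell(Bytes.toBytes("salaryDet"),
+                Bytes.toBytes("gross"));
+            if (cell != null) {
+                sum += Bytes.toLong(CellUtil.cloneValue(cell));
+            }
+        }
+    }
+    System.out.println("Client-side sum = " + sum);
+}
+----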
+
+In this scenario, coprocessors might make sense. You can put the business computation
+code into a coprocessor which runs on the RegionServer, in the same location as the
+data, and returns the result to the client.
+
+This is only one scenario where using coprocessors can provide benefit. Following
+are some analogies which may help to explain some of the benefits of coprocessors.
+
+[[cp_analogies]]
+=== Coprocessor Analogies
+
+Triggers and Stored Procedures::
+  An Observer coprocessor is similar to a trigger in an RDBMS in that it executes
+  your code either before or after a specific event (such as a `Get` or `Put`)
+  occurs. An endpoint coprocessor is similar to a stored procedure in an RDBMS
+  because it allows you to perform custom computations on the data on the
+  RegionServer itself, rather than on the client.
+
+MapReduce::
+  MapReduce operates on the principle of moving the computation to the location of
+  the data. Coprocessors operate on the same principle.
+
+AOP::
+  If you are familiar with Aspect Oriented Programming (AOP), you can think of a coprocessor
+  as applying advice by intercepting a request and then running some custom code,
+  before passing the request on to its final destination (or even changing the destination).
+
+
+=== Coprocessor Implementation Overview
+
+. Your class should either extend one of the Coprocessor classes, such as
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.html[BaseRegionObserver],
+or implement the link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/Coprocessor.html[Coprocessor]
+or
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/CoprocessorService.html[CoprocessorService]
+interface.
+
+. Load the coprocessor, either statically (from the configuration) or dynamically,
+using HBase Shell. For more details see <<cp_loading,Loading Coprocessors>>.
+
+. Call the coprocessor from your client-side code. HBase handles the coprocessor
+transparently.
+
+The framework API is provided in the
+link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/coprocessor/package-summary.html[coprocessor]
+package.
+
+== Types of Coprocessors
+
+=== Observer Coprocessors
+
+Observer coprocessors are triggered either before or after a specific event occurs.
+Observers that happen before an event use methods that start with a `pre` prefix,
+such as link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#prePut%28org.apache.hadoop.hbase.coprocessor.ObserverContext,%20org.apache.hadoop.hbase.client.Put,%20org.apache.hadoop.hbase.regionserver.wal.WALEdit,%20org.apache.hadoop.hbase.client.Durability%29[`prePut`]. Observers that happen just after an event override methods that start
+with a `post` prefix, such as link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html#postPut%28org.apache.hadoop.hbase.coprocessor.ObserverContext,%20org.apache.hadoop.hbase.client.Put,%20org.apache.hadoop.hbase.regionserver.wal.WALEdit,%20org.apache.hadoop.hbase.client.Durability%29[`postPut`].
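+
+As a minimal, hypothetical sketch (the class name and blocked row below are
+illustrative, not part of HBase), an observer that extends `BaseRegionObserver`
+and hooks the `prePut` event might look like this:
+
+[source,java]
+----
+public class BlockedRowObserver extends BaseRegionObserver {
+
+    private static final byte[] BLOCKED_ROW = Bytes.toBytes("blocked_row");
+
+    // Runs just before each Put reaches the region; throwing an exception
+    // here rejects the write before any data is stored.
+    @Override
+    public void prePut(final ObserverContext<RegionCoprocessorEnvironment> e,
+            final Put put, final WALEdit edit, final Durability durability)
+            throws IOException {
+        if (Bytes.equals(put.getRow(), BLOCKED_ROW)) {
+            throw new IOException("Writes to this row are not allowed");
+        }
+    }
+}
+----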
+
+
+==== Use Cases for Observer Coprocessors
+Security::
+  Before performing a `Get` or `Put` operation, you can check for permission using
+  `preGet` or `prePut` methods.
+
+Referential Integrity::
+  HBase does not directly support the RDBMS concept of referential integrity, also known
+  as foreign keys. You can use a coprocessor to enforce such integrity. For instance,
+  if you have a business rule that every insert to the `users` table must be followed
+  by a corresponding entry in the `user_daily_attendance` table, you could implement
+  a coprocessor to use the `prePut` method on `users` to insert a record into `user_daily_attendance`.
+
+Secondary Indexes::
+  You can use a coprocessor to maintain secondary indexes. For more information, see
+  link:http://wiki.apache.org/hadoop/Hbase/SecondaryIndexing[SecondaryIndexing].
+
+
+==== Types of Observer Coprocessor
+
+RegionObserver::
+  A RegionObserver coprocessor allows you to observe events on a region, such as `Get`
+  and `Put` operations. See
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionObserver.html[RegionObserver].
+  Consider overriding the convenience class
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.html[BaseRegionObserver],
+  which implements the `RegionObserver` interface and will not break if new methods are added.
+
+RegionServerObserver::
+  A RegionServerObserver allows you to observe events related to the RegionServer's
+  operation, such as starting, stopping, or performing merges, commits, or rollbacks.
+  See
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/RegionServerObserver.html[RegionServerObserver].
+  Consider overriding the convenience class
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseMasterRegionServerObserver.html[BaseMasterRegionServerObserver]
+  which implements both `MasterObserver` and `RegionServerObserver` interfaces and
+  will not break if new methods are added.
+
+MasterObserver::
+  A MasterObserver allows you to observe events related to the HBase Master, such
+  as table creation, deletion, or schema modification. See
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/MasterObserver.html[MasterObserver].
+  Consider overriding the convenience class
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseMasterRegionServerObserver.html[BaseMasterRegionServerObserver],
+  which implements both `MasterObserver` and `RegionServerObserver` interfaces and
+  will not break if new methods are added.
+
+WALObserver::
+  A WalObserver allows you to observe events related to writes to the Write-Ahead
+  Log (WAL). See
+  link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/WALObserver.html[WALObserver].
+  Consider overriding the convenience class
+  link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseWALObserver.html[BaseWALObserver],
+  which implements the `WALObserver` interface and will not break if new methods are added.
+
+<<cp_example,Examples>> provides working examples of observer coprocessors.
+
+
+=== Endpoint Coprocessor
+
+Endpoint coprocessors allow you to perform computation at the location of the data.
+See <<cp_analogies, Coprocessor Analogy>>. An example is the need to calculate a running
+average or summation for an entire table which spans hundreds of regions.
+
+In contrast to observer coprocessors, where your code is run transparently, endpoint
+coprocessors must be explicitly invoked using the
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html#coprocessorService%28java.lang.Class,%20byte%5B%5D,%20byte%5B%5D,%20org.apache.hadoop.hbase.client.coprocessor.Batch.Call%29[coprocessorService()]
+method available in
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html[Table],
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HTableInterface.html[HTableInterface],
+or
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HTable.html[HTable].
+
+Starting with HBase 0.96, endpoint coprocessors are implemented using Google Protocol
+Buffers (protobuf). For more details on protobuf, see Google's
+link:https://developers.google.com/protocol-buffers/docs/proto[Protocol Buffer Guide].
+Endpoint Coprocessors written in version 0.94 are not compatible with version 0.96 or later
+(see
+link:https://issues.apache.org/jira/browse/HBASE-5448[HBASE-5448]). To upgrade your
+HBase cluster from 0.94 or earlier to 0.96 or later, you need to reimplement your
+coprocessor.
+
+<<cp_example,Examples>> provides working examples of endpoint coprocessors.
+
+[[cp_loading]]
+== Loading Coprocessors
+
+To make your coprocessor available to HBase, it must be _loaded_, either statically
+(through the HBase configuration) or dynamically (using HBase Shell or the Java API).
+
+=== Static Loading
+
+Follow these steps to statically load your coprocessor. Keep in mind that you must
+restart HBase to unload a coprocessor that has been loaded statically.
+
+. Define the Coprocessor in _hbase-site.xml_, with a <property> element with a <name>
+and a <value> sub-element. The <name> should be one of the following:
++
+- `hbase.coprocessor.region.classes` for RegionObservers and Endpoints.
+- `hbase.coprocessor.wal.classes` for WALObservers.
+- `hbase.coprocessor.master.classes` for MasterObservers.
++
+<value> must contain the fully-qualified class name of your coprocessor's implementation
+class.
++
+For example, to load a Coprocessor (implemented in the class SumEndPoint.java), create the
+following entry in the RegionServer's _hbase-site.xml_ file (generally located under the _conf_ directory):
++
+[source,xml]
+----
+<property>
+    <name>hbase.coprocessor.region.classes</name>
+    <value>org.myname.hbase.coprocessor.endpoint.SumEndPoint</value>
+</property>
+----
++
+If multiple classes are specified for loading, the class names must be comma-separated.
+The framework attempts to load all the configured classes using the default class loader.
+Therefore, the jar file must reside on the server-side HBase classpath.
++
+Coprocessors which are loaded in this way will be active on all regions of all tables.
+These are also called system Coprocessors.
+The first listed Coprocessor will be assigned the priority `Coprocessor.PRIORITY_SYSTEM`.
+Each subsequent coprocessor in the list will have its priority value incremented by one (which
+reduces its priority, because priorities have the natural sort order of Integers).
++
+When calling out to registered observers, the framework executes their callback methods in the
+sorted order of their priority. +
+Ties are broken arbitrarily.
 
-Table Coprocessors::
-  You can specify which coprocessors should be loaded on all regions for a table on a per-table basis.
+. Put your code on HBase's classpath. One easy way to do this is to drop the jar
+  (containing your code and all its dependencies) into the `lib/` directory of the
+  HBase installation.
 
-The framework provides two different aspects of extensions as well: _observers_ and _endpoints_.
+. Restart HBase.
 
-Observers::
-  Observers are analogous to triggers in conventional databases.
-  They allow you to insert user code by overriding upcall methods provided by the coprocessor framework.
-  Callback functions are executed from core HBase code when events occur.
-  Callbacks are handled by the framework, and the coprocessor itself only needs to insert the extended or alternate functionality.
 
-Endpoints (HBase 0.96.x and later)::
-  The implementation for endpoints changed significantly in HBase 0.96.x due to the introduction of protocol buffers (protobufs) (link:https://issues.apache.org/jira/browse/HBASE-5448[HBASE-5488]). If you created endpoints before 0.96.x, you will need to rewrite them.
-  Endpoints are now defined and callable as protobuf services, rather than endpoint invocations passed through as Writable blobs
+=== Static Unloading
 
-Endpoints (HBase 0.94.x and earlier)::
-  Dynamic RPC endpoints resemble stored procedures.
-  An endpoint can be invoked at any time from the client.
-  When it is invoked, it is executed remotely at the target region or regions, and results of the executions are returned to the client.
+. Delete the coprocessor's <property> element, including sub-elements, from `hbase-site.xml`.
+. Restart HBase.
+. Optionally, remove the coprocessor's JAR file from the classpath or HBase's `lib/`
+  directory.
 
-== Examples
 
-An example of an observer is included in _hbase-examples/src/test/java/org/apache/hadoop/hbase/coprocessor/example/TestZooKeeperScanPolicyObserver.java_.
-Several endpoint examples are included in the same directory.
+=== Dynamic Loading
 
-== Building A Coprocessor
+You can also load a coprocessor dynamically, without restarting HBase. This may seem
+preferable to static loading, but dynamically loaded coprocessors are loaded on a
+per-table basis, and are only available to the table for which they were loaded. For
+this reason, dynamically loaded coprocessors are sometimes called *Table Coprocessors*.
 
-Before you can build a processor, it must be developed, compiled, and packaged in a JAR file.
-The next step is to configure the coprocessor framework to use your coprocessor.
-You can load the coprocessor from your HBase configuration, so that the coprocessor starts with HBase, or you can configure the coprocessor from the HBase shell, as a table attribute, so that it is loaded dynamically when the table is opened or reopened.
+In addition, dynamically loading a coprocessor acts as a schema change on the table,
+and the table must be taken offline to load the coprocessor.
 
-=== Load from Configuration
+There are three ways to dynamically load a Coprocessor.
 
-To configure a coprocessor to be loaded when HBase starts, modify the RegionServer's _hbase-site.xml_ and configure one of the following properties, based on the type of observer you are configuring:
-
-* `hbase.coprocessor.region.classes`for RegionObservers and Endpoints
-* `hbase.coprocessor.wal.classes`for WALObservers
-* `hbase.coprocessor.master.classes`for MasterObservers
+[NOTE]
+.Assumptions
+====
+The instructions below make the following assumptions:
 
-.Example RegionObserver Configuration
+* A JAR called `coprocessor.jar` contains the Coprocessor implementation along with all of its
+dependencies.
+* The JAR is available in HDFS in some location like
+`hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar`.
 ====
-In this example, one RegionObserver is configured for all the HBase tables.
 
-[source,xml]
+==== Using HBase Shell
+
+. Disable the table using HBase Shell:
++
+[source]
 ----
-<property>
-  <name>hbase.coprocessor.region.classes</name>
-  <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
-</property>
+hbase> disable 'users'
 ----
-====
 
-If multiple classes are specified for loading, the class names must be comma-separated.
-The framework attempts to load all the configured classes using the default class loader.
-Therefore, the jar file must reside on the server-side HBase classpath.
+. Load the Coprocessor, using a command like the following:
++
+[source]
+----
+hbase> alter 'users', METHOD => 'table_att', 'Coprocessor'=>'hdfs://<namenode>:<port>/
+user/<hadoop-user>/coprocessor.jar|org.myname.hbase.Coprocessor.RegionObserverExample|1073741823|
+arg1=1,arg2=2'
+----
++
+The Coprocessor framework will try to read the class information from the coprocessor table
+attribute value.
+The value contains four pieces of information which are separated by the pipe (`|`) character.
++
+* File path: The jar file containing the Coprocessor implementation must be in a location where
+all region servers can read it. +
+You could copy the file onto the local disk on each region server, but it is recommended to store
+it in HDFS.
+* Class name: The full class name of the Coprocessor.
+* Priority: An integer. The framework will determine the execution sequence of all configured
+observers registered at the same hook using priorities. This field can be left blank. In that
+case the framework will assign a default priority value.
+* Arguments (Optional): This field is passed to the Coprocessor implementation.
+
+. Enable the table.
++
+----
+hbase(main):003:0> enable 'users'
+----
 
-Coprocessors which are loaded in this way will be active on all regions of all tables.
-These are the system coprocessor introduced earlier.
-The first listed coprocessors will be assigned the priority `Coprocessor.Priority.SYSTEM`.
-Each subsequent coprocessor in the list will have its priority value incremented by one (which reduces its priority, because priorities have the natural sort order of Integers).
+. Verify that the coprocessor loaded:
++
+----
+hbase(main):04:0> describe 'users'
+----
++
+The coprocessor should be listed in the `TABLE_ATTRIBUTES`.
 
-When calling out to registered observers, the framework executes their callbacks methods in the sorted order of their priority.
-Ties are broken arbitrarily.
+==== Using the Java API (all HBase versions)
 
-=== Load from the HBase Shell
+The following Java code shows how to use the `setValue()` method of `HTableDescriptor`
+to load a coprocessor on the `users` table.
 
-You can load a coprocessor on a specific table via a table attribute.
-The following example will load the `FooRegionObserver` observer when table `t1` is read or re-read.
+[source,java]
+----
+TableName tableName = TableName.valueOf("users");
+String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
+Configuration conf = HBaseConfiguration.create();
+Connection connection = ConnectionFactory.createConnection(conf);
+Admin admin = connection.getAdmin();
+admin.disableTable(tableName);
+HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
+HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
+columnFamily1.setMaxVersions(3);
+hTableDescriptor.addFamily(columnFamily1);
+HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
+columnFamily2.setMaxVersions(3);
+hTableDescriptor.addFamily(columnFamily2);
+hTableDescriptor.setValue("COPROCESSOR$1", path + "|"
++ RegionObserverExample.class.getCanonicalName() + "|"
++ Coprocessor.PRIORITY_USER);
+admin.modifyTable(tableName, hTableDescriptor);
+admin.enableTable(tableName);
+----
 
-.Load a Coprocessor On a Table Using HBase Shell
-====
+==== Using the Java API (HBase 0.96+ only)
+
+In HBase 0.96 and newer, the `addCoprocessor()` method of `HTableDescriptor` provides
+an easier way to load a coprocessor dynamically.
+
+[source,java]
 ----
-hbase(main):005:0>  alter 't1', METHOD => 'table_att',
-  'coprocessor'=>'hdfs:///foo.jar|com.foo.FooRegionObserver|1001|arg1=1,arg2=2'
-Updating all regions with the new schema...
-1/1 regions updated.
-Done.
-0 row(s) in 1.0730 seconds
-
-hbase(main):006:0> describe 't1'
-DESCRIPTION                                                        ENABLED
- {NAME => 't1', coprocessor$1 => 'hdfs:///foo.jar|com.foo.FooRegio false
- nObserver|1001|arg1=1,arg2=2', FAMILIES => [{NAME => 'c1', DATA_B
- LOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE
-  => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS =>
- '0', TTL => '2147483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZ
- E => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLO
- CKCACHE => 'true'}, {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE',
-  BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3'
- , COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647'
- , KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY
- => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true'}]}
-1 row(s) in 0.0190 seconds
+TableName tableName = TableName.valueOf("users");
+String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
+Configuration conf = HBaseConfiguration.create();
+HBaseAdmin admin = new HBaseAdmin(conf);
+admin.disableTable(tableName);
+HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
+HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
+columnFamily1.setMaxVersions(3);
+hTableDescriptor.addFamily(columnFamily1);
+HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
+columnFamily2.setMaxVersions(3);
+hTableDescriptor.addFamily(columnFamily2);
+hTableDescriptor.addCoprocessor(RegionObserverExample.class.getCanonicalName(), path,
+Coprocessor.PRIORITY_USER, null);
+admin.modifyTable(tableName, hTableDescriptor);
+admin.enableTable(tableName);
 ----
-====
 
-The coprocessor framework will try to read the class information from the coprocessor table attribute value.
-The value contains four pieces of information which are separated by the `|` character.
+WARNING: There is no guarantee that the framework will load a given Coprocessor successfully.
+For example, the shell command neither guarantees a jar file exists at a particular location nor
+verifies whether the given class is actually contained in the jar file.
 
-* File path: The jar file containing the coprocessor implementation must be in a location where all region servers can read it.
-  You could copy the file onto the local disk on each region server, but it is recommended to store it in HDFS.
-* Class name: The full class name of the coprocessor.
-* Priority: An integer.
-  The framework will determine the execution sequence of all configured observers registered at the same hook using priorities.
-  This field can be left blank.
-  In that case the framework will assign a default priority value.
-* Arguments: This field is passed to the coprocessor implementation.
 
-.Unload a Coprocessor From a Table Using HBase Shell
-====
+=== Dynamic Unloading
+
+==== Using HBase Shell
+
+. Disable the table.
++
+[source]
+----
+hbase> disable 'users'
 ----
 
-hbase(main):007:0> alter 't1', METHOD => 'table_att_unset',
-hbase(main):008:0*   NAME => 'coprocessor$1'
-Updating all regions with the new schema...
-1/1 regions updated.
-Done.
-0 row(s) in 1.1130 seconds
-
-hbase(main):009:0> describe 't1'
-DESCRIPTION                                                        ENABLED
- {NAME => 't1', FAMILIES => [{NAME => 'c1', DATA_BLOCK_ENCODING => false
-  'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', VERSION
- S => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '214
- 7483647', KEEP_DELETED_CELLS => 'false', BLOCKSIZE => '65536', IN
- _MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOCKCACHE => 'true
- '}, {NAME => 'f1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER =>
- 'NONE', REPLICATION_SCOPE => '0', VERSIONS => '3', COMPRESSION =>
-  'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_DELETED_C
- ELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCO
- DE_ON_DISK => 'true', BLOCKCACHE => 'true'}]}
-1 row(s) in 0.0180 seconds
+. Alter the table to remove the coprocessor.
++
+[source]
 ----
-====
+hbase> alter 'users', METHOD => 'table_att_unset', NAME => 'coprocessor$1'
+----
+
+. Enable the table.
++
+[source]
+----
+hbase> enable 'users'
+----
+
+==== Using the Java API
+
+Reload the table definition without setting the coprocessor, that is, without calling
+`setValue()` or `addCoprocessor()`. This will remove any coprocessor
+attached to the table.
+
+[source,java]
+----
+TableName tableName = TableName.valueOf("users");
+String path = "hdfs://<namenode>:<port>/user/<hadoop-user>/coprocessor.jar";
+Configuration conf = HBaseConfiguration.create();
+Connection connection = ConnectionFactory.createConnection(conf);
+Admin admin = connection.getAdmin();
+admin.disableTable(tableName);
+HTableDescriptor hTableDescriptor = new HTableDescriptor(tableName);
+HColumnDescriptor columnFamily1 = new HColumnDescriptor("personalDet");
+columnFamily1.setMaxVersions(3);
+hTableDescriptor.addFamily(columnFamily1);
+HColumnDescriptor columnFamily2 = new HColumnDescriptor("salaryDet");
+columnFamily2.setMaxVersions(3);
+hTableDescriptor.addFamily(columnFamily2);
+admin.modifyTable(tableName, hTableDescriptor);
+admin.enableTable(tableName);
+----
+
+In HBase 0.96 and newer, you can instead use the `removeCoprocessor()` method of the
+`HTableDescriptor` class.
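+
+A minimal sketch of that approach might look like the following, reusing the
+`RegionObserverExample` class name from the loading examples above:
+
+[source,java]
+----
+TableName tableName = TableName.valueOf("users");
+Configuration conf = HBaseConfiguration.create();
+Connection connection = ConnectionFactory.createConnection(conf);
+Admin admin = connection.getAdmin();
+admin.disableTable(tableName);
+// Fetch the current descriptor and remove only the coprocessor entry,
+// leaving the rest of the table schema untouched.
+HTableDescriptor hTableDescriptor = admin.getTableDescriptor(tableName);
+hTableDescriptor.removeCoprocessor(RegionObserverExample.class.getCanonicalName());
+admin.modifyTable(tableName, hTableDescriptor);
+admin.enableTable(tableName);
+----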
+
+
+[[cp_example]]
+== Examples
+HBase ships examples for Observer Coprocessors in
+link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.html[ZooKeeperScanPolicyObserver]
+and for Endpoint Coprocessors in
+link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.html[RowCountEndpoint].
+
+A more detailed example is given below.
+
+These examples assume a table called `users`, which has two column families `personalDet`
+and `salaryDet`, containing personal and salary details. Below is the graphical representation
+of the `users` table.
+
+.Users Table
+[width="100%",cols="7",options="header,footer"]
+|====================
+| 3+|personalDet  3+|salaryDet
+|*rowkey* |*name* |*lastname* |*dob* |*gross* |*net* |*allowances*
+|admin |Admin |Admin |  3+|
+|cdickens |Charles |Dickens |02/07/1812 |10000 |8000 |2000
+|jverne |Jules |Verne |02/08/1828 |12000 |9000 |3000
+|====================
+
+
+=== Observer Example
+
+The following Observer coprocessor prevents the details of the user `admin` from being
+returned in a `Get` or `Scan` of the `users` table.
+
+. Write a class that extends the
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.html[BaseRegionObserver]
+class.
 
-WARNING: There is no guarantee that the framework will load a given coprocessor successfully.
-For example, the shell command neither guarantees a jar file exists at a particular location nor verifies whether the given class is actually contained in the jar file.
+. Override the `preGetOp()` method (the `preGet()` method is deprecated) to check
+whether the client has queried for the rowkey with value `admin`. If so, return an
+empty result. Otherwise, process the request as normal.
 
-== Check the Status of a Coprocessor
+. Put your code and dependencies in a JAR file.
 
-To check the status of a coprocessor after it has been configured, use the `status` HBase Shell command.
+. Place the JAR in HDFS where HBase can locate it.
 
+. Load the Coprocessor.
+
+. Write a simple program to test it.
+
+The following code implements the above steps:
+
+
+[source,java]
+----
+public class RegionObserverExample extends BaseRegionObserver {
+
+    private static final byte[] ADMIN = Bytes.toBytes("admin");
+    private static final byte[] COLUMN_FAMILY = Bytes.toBytes("details");
+    private static final byte[] COLUMN = Bytes.toBytes("Admin_det");
+    private static final byte[] VALUE = Bytes.toBytes("You can't see Admin details");
+
+    @Override
+    public void preGetOp(final ObserverContext<RegionCoprocessorEnvironment> e, final Get get,
+    final List<Cell> results) throws IOException {
+
+        if (Bytes.equals(get.getRow(), ADMIN)) {
+            // Return a placeholder cell instead of the stored data and skip
+            // further processing of this Get.
+            Cell c = CellUtil.createCell(get.getRow(), COLUMN_FAMILY, COLUMN,
+            System.currentTimeMillis(), (byte)4, VALUE);
+            results.add(c);
+            e.bypass();
+        }
+    }
+}
+----
+
+Overriding the `preGetOp()` will only work for `Get` operations. You also need to override
+the `preScannerOpen()` method to filter the `admin` row from scan results.
+
+[source,java]
+----
+@Override
+public RegionScanner preScannerOpen(final ObserverContext e, final Scan scan,
+final RegionScanner s) throws IOException {
+
+    Filter filter = new RowFilter(CompareOp.NOT_EQUAL, new BinaryComparator(ADMIN));
+    scan.setFilter(filter);
+    return s;
+}
 ----
 
-hbase(main):020:0> status 'detailed'
-version 0.92-tm-6
-0 regionsInTransition
-master coprocessors: []
-1 live servers
-    localhost:52761 1328082515520
-        requestsPerSecond=3, numberOfOnlineRegions=3, usedHeapMB=32, maxHeapMB=995
-        -ROOT-,,0
-            numberOfStores=1, numberOfStorefiles=1, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0,
-storefileIndexSizeMB=0, readRequestsCount=54, writeRequestsCount=1, rootIndexSizeKB=0, totalStaticIndexSizeKB=0,
-totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
-        .META.,,1
-            numberOfStores=1, numberOfStorefiles=0, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0,
-storefileIndexSizeMB=0, readRequestsCount=97, writeRequestsCount=4, rootIndexSizeKB=0, totalStaticIndexSizeKB=0,
-totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[]
-        t1,,1328082575190.c0491168a27620ffe653ec6c04c9b4d1.
-            numberOfStores=2, numberOfStorefiles=1, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0,
-storefileIndexSizeMB=0, readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0,
-totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN,
-coprocessors=[AggregateImplementation]
-0 dead servers
+This method works but there is a _side effect_. If the client has used a filter in
+its scan, that filter will be replaced by this filter. Instead, you can explicitly
+remove any `admin` results from the scan:
+
+[source,java]
+----
+@Override
+public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> e,
+final InternalScanner s, final List<Result> results, final int limit, final boolean hasMore)
+throws IOException {
+    Iterator<Result> iterator = results.iterator();
+    while (iterator.hasNext()) {
+        Result result = iterator.next();
+        if (Bytes.equals(result.getRow(), ADMIN)) {
+            iterator.remove();
+            break;
+        }
+    }
+    return hasMore;
+}
 ----
 
+=== Endpoint Example
+
+Still using the `users` table, this example uses an endpoint coprocessor to calculate
+the sum of all employee salaries.
+
+. Create a '.proto' file defining your service.
++
+[source]
+----
+option java_package = "org.myname.hbase.coprocessor.autogenerated";
+option java_outer_classname = "Sum";
+option java_generic_services = true;
+option java_generate_equals_and_hash = true;
+option optimize_for = SPEED;
+message SumRequest {
+    required string family = 1;
+    required string column = 2;
+}
+
+message SumResponse {
+  required int64 sum = 1 [default = 0];
+}
+
+service SumService {
+  rpc getSum(SumRequest)
+    returns (SumResponse);
+}
+----
+
+. Execute the `protoc` command to generate the Java code from the above '.proto' file.
++
+[source]
+----
+$ mkdir src
+$ protoc --java_out=src ./sum.proto
+----
++
+This will generate a file called `Sum.java`.
+
+. Write a class that extends the generated service class, implements the `Coprocessor`
+and `CoprocessorService` interfaces, and overrides the service method.
++
+WARNING: If you load a coprocessor from `hbase-site.xml` and then load the same coprocessor
+again using HBase Shell, it will be loaded a second time. The same class will
+exist twice, and the second instance will have a higher ID (and thus a lower priority).
+The effect is that the duplicate coprocessor is effectively ignored.
++
+[source, java]
+----
+public class SumEndPoint extends SumService implements Coprocessor, CoprocessorService {
+
+    private RegionCoprocessorEnvironment env;
+
+    @Override
+    public Service getService() {
+        return this;
+    }
+
+    @Override
+    public void start(CoprocessorEnvironment env) throws IOException {
+        if (env instanceof RegionCoprocessorEnvironment) {
+            this.env = (RegionCoprocessorEnvironment)env;
+        } else {
+            throw new CoprocessorException("Must be loaded on a table region!");
+        }
+    }
+
+    @Override
+    public void stop(CoprocessorEnvironment env) throws IOException {
+        // do nothing
+    }
+
+    @Override
+    public void getSum(RpcController controller, SumRequest request, RpcCallback<SumResponse> done) {
+        Scan scan = new Scan();
+        scan.addFamily(Bytes.toBytes(request.getFamily()));
+        scan.addColumn(Bytes.toBytes(request.getFamily()), Bytes.toBytes(request.getColumn()));
+        SumResponse response = null;
+        InternalScanner scanner = null;
+        try {
+            scanner = env.getRegion().getScanner(scan);
+            List<Cell> results = new ArrayList<Cell>();
+            boolean hasMore = false;
+            long sum = 0L;
+            do {
+                hasMore = scanner.next(results);
+                for (Cell cell : results) {
+                    sum = sum + Bytes.toLong(CellUtil.cloneValue(cell));
+                }
+                results.clear();
+            } while (hasMore);
+
+            response = SumResponse.newBuilder().setSum(sum).build();
+
+        } catch (IOException ioe) {
+            ResponseConverter.setControllerException(controller, ioe);
+        } finally {
+            if (scanner != null) {
+                try {
+                    scanner.close();
+                } catch (IOException ignored) {}
+            }
+        }
+        done.run(response);
+    }
+}
+----
++
+[source, java]
+----
+Configuration conf = HBaseConfiguration.create();
+// Use below code for HBase version 1.x.x or above.
+Connection connection = ConnectionFactory.createConnection(conf);
+TableName tableName = TableName.valueOf("users");
+Table table = connection.getTable(tableName);
+
+//Use below code HBase version 0.98.xx or below.
+//HConnection connection = HConnectionManager.createConnection(conf);
+//HTableInterface table = connection.getTable("users");
+
+final SumRequest request = SumRequest.newBuilder().setFamily("salaryDet").setColumn("gross")
+                            .build();
+try {
+    Map<byte[], Long> results = table.coprocessorService(SumService.class, null, null,
+    new Batch.Call<SumService, Long>() {
+        @Override
+        public Long call(SumService aggregate) throws IOException {
+            BlockingRpcCallback<SumResponse> rpcCallback = new BlockingRpcCallback<SumResponse>();
+            aggregate.getSum(null, request, rpcCallback);
+            SumResponse response = rpcCallback.get();
+            return response.hasSum() ? response.getSum() : 0L;
+        }
+    });
+    for (Long sum : results.values()) {
+        System.out.println("Sum = " + sum);
+    }
+} catch (ServiceException e) {
+    e.printStackTrace();
+} catch (Throwable e) {
+    e.printStackTrace();
+}
+----
+
+. Load the Coprocessor.
+
+. Write client code to call the Coprocessor.
+
+
+== Guidelines For Deploying A Coprocessor
+
+Bundling Coprocessors::
+  You can bundle all classes for a coprocessor into a
+  single JAR on the RegionServer's classpath, for easy deployment. Otherwise,
+  place all dependencies on the RegionServer's classpath so that they can be
+  loaded during RegionServer start-up. The classpath for a RegionServer is set
+  in the RegionServer's `hbase-env.sh` file.
+Automating Deployment::
+  To automate coprocessor deployment, you can use a tool such as Puppet, Chef, or
+  Ansible to ship the JAR for the coprocessor to the required location on your
+  RegionServers' filesystems and restart each RegionServer. Details of such
+  set-ups are out of scope of this document.
+Updating a Coprocessor::
+  Deploying a new version of a given coprocessor is not as simple as disabling it,
+  replacing the JAR, and re-enabling the coprocessor. This is because you cannot
+  reload a class in a JVM unless you delete all the current references to it.
+  Since the current JVM has reference to the existing coprocessor, you must restart
+  the JVM, by restarting the RegionServer, in order to replace it. This behavior
+  is not expected to change.
+Coprocessor Logging::
+  The Coprocessor framework does not provide an API for logging beyond standard Java
+  logging; see the sketch after this list for one way to log from within a coprocessor.
+Coprocessor Configuration::
+  If you do not want to load coprocessors from the HBase Shell, you can add their configuration
+  properties to `hbase-site.xml`. In <<load_coprocessor_in_shell>>, two arguments are
+  set: `arg1=1,arg2=2`. These could have been added to `hbase-site.xml` as follows:
+[source,xml]
+----
+<property>
+  <name>arg1</name>
+  <value>1</value>
+</property>
+<property>
+  <name>arg2</name>
+  <value>2</value>
+</property>
+----
+Then you can read the configuration using code like the following:
+[source,java]
+----
+Configuration conf = HBaseConfiguration.create();
+// Use below code for HBase version 1.x.x or above.
+Connection connection = ConnectionFactory.createConnection(conf);
+TableName tableName = TableName.valueOf("users");
+Table table = connection.getTable(tableName);
+
+//Use below code HBase version 0.98.xx or below.
+//HConnection connection = HConnectionManager.createConnection(conf);
+//HTableInterface table = connection.getTable("users");
+
+Get get = new Get(Bytes.toBytes("admin"));
+Result result = table.get(get);
+for (Cell c : result.rawCells()) {
+    System.out.println(Bytes.toString(CellUtil.cloneRow(c))
+        + "==> " + Bytes.toString(CellUtil.cloneFamily(c))
+        + "{" + Bytes.toString(CellUtil.cloneQualifier(c))
+        + ":" + Bytes.toLong(CellUtil.cloneValue(c)) + "}");
+}
+Scan scan = new Scan();
+ResultScanner scanner = table.getScanner(scan);
+for (Result res : scanner) {
+    for (Cell c : res.rawCells()) {
+        System.out.println(Bytes.toString(CellUtil.cloneRow(c))
+        + " ==> " + Bytes.toString(CellUtil.cloneFamily(c))
+        + " {" + Bytes.toString(CellUtil.cloneQualifier(c))
+        + ":" + Bytes.toLong(CellUtil.cloneValue(c))
+        + "}");
+    }
+}
+----
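+
+As noted under _Coprocessor Logging_ above, a coprocessor can simply use a
+standard Java logging framework; messages end up in the hosting RegionServer's
+log. A minimal, hypothetical sketch using the commons-logging API that HBase
+itself uses:
+
+[source,java]
+----
+public class LoggingObserver extends BaseRegionObserver {
+
+    // Standard commons-logging logger; output appears in the RegionServer log.
+    private static final Log LOG = LogFactory.getLog(LoggingObserver.class);
+
+    @Override
+    public void start(CoprocessorEnvironment env) throws IOException {
+        LOG.info("LoggingObserver loaded on HBase " + env.getHBaseVersion());
+    }
+}
+----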
+
+
+
+
 == Monitor Time Spent in Coprocessors
 
-HBase 0.98.5 introduced the ability to monitor some statistics relating to the amount of time spent executing a given coprocessor.
-You can see these statistics via the HBase Metrics framework (see <<hbase_metrics>> or the Web UI for a given Region Server, via the _Coprocessor Metrics_ tab.
-These statistics are valuable for debugging and benchmarking the performance impact of a given coprocessor on your cluster.
+HBase 0.98.5 introduced the ability to monitor some statistics relating to the amount of time
+spent executing a given Coprocessor.
+You can see these statistics via the HBase Metrics framework (see <<hbase_metrics>>) or the Web UI
+for a given Region Server, via the _Coprocessor Metrics_ tab.
+These statistics are valuable for debugging and benchmarking the performance impact of a given
+Coprocessor on your cluster.
 Tracked statistics include min, max, average, and 90th, 95th, and 99th percentile.
 All times are shown in milliseconds.
-The statistics are calculated over coprocessor execution samples recorded during the reporting interval, which is 10 seconds by default.
+The statistics are calculated over Coprocessor execution samples recorded during the reporting
+interval, which is 10 seconds by default.
 The metrics sampling rate is as described in <<hbase_metrics>>.
 
 .Coprocessor Metrics UI

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/datamodel.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/datamodel.adoc b/src/main/asciidoc/_chapters/datamodel.adoc
index b76adc8..66d2801 100644
--- a/src/main/asciidoc/_chapters/datamodel.adoc
+++ b/src/main/asciidoc/_chapters/datamodel.adoc
@@ -93,7 +93,7 @@ The colon character (`:`) delimits the column family from the column family _qua
 |===
 |Row Key |Time Stamp  |ColumnFamily `contents` |ColumnFamily `anchor`|ColumnFamily `people`
 |"com.cnn.www" |t9    | |anchor:cnnsi.com = "CNN"   |
-|"com.cnn.www" |t8    | |anchor:my.look.ca = "CNN.com" |  
+|"com.cnn.www" |t8    | |anchor:my.look.ca = "CNN.com" |
 |"com.cnn.www" |t6  | contents:html = "<html>..."    | |
 |"com.cnn.www" |t5  | contents:html = "<html>..."    | |
 |"com.cnn.www" |t3  | contents:html = "<html>..."    | |
@@ -171,7 +171,7 @@ For more information about the internals of how Apache HBase stores data, see <<
 A namespace is a logical grouping of tables analogous to a database in relational database systems.
 This abstraction lays the groundwork for upcoming multi-tenancy related features:
 
-* Quota Management (link:https://issues.apache.org/jira/browse/HBASE-8410[HBASE-8410]) - Restrict the amount of resources (ie regions, tables) a namespace can consume.
+* Quota Management (link:https://issues.apache.org/jira/browse/HBASE-8410[HBASE-8410]) - Restrict the amount of resources (i.e. regions, tables) a namespace can consume.
 * Namespace Security Administration (link:https://issues.apache.org/jira/browse/HBASE-9206[HBASE-9206]) - Provide another level of security administration for tenants.
 * Region server groups (link:https://issues.apache.org/jira/browse/HBASE-6721[HBASE-6721]) - A namespace/table can be pinned onto a subset of RegionServers thus guaranteeing a coarse level of isolation.
 
@@ -257,7 +257,7 @@ For example, the columns _courses:history_ and _courses:math_ are both members o
 The colon character (`:`) delimits the column family from the column family qualifier.
 The column family prefix must be composed of _printable_ characters.
 The qualifying tail, the column family _qualifier_, can be made of any arbitrary bytes.
-Column families must be declared up front at schema definition time whereas columns do not need to be defined at schema time but can be conjured on the fly while the table is up an running.
+Column families must be declared up front at schema definition time whereas columns do not need to be defined at schema time but can be conjured on the fly while the table is up and running.
 
 Physically, all column family members are stored together on the filesystem.
 Because tunings and storage specifications are done at the column family level, it is advised that all column family members have the same general access pattern and size characteristics.
@@ -279,7 +279,7 @@ Gets are executed via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hba
 
 === Put
 
-link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html[Put] either adds new rows to a table (if the key is new) or can update existing rows (if the key already exists). Puts are executed via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#put(org.apache.hadoop.hbase.client.Put)[Table.put] (writeBuffer) or link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch(java.util.List, java.lang.Object[])[Table.batch] (non-writeBuffer).
+link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html[Put] either adds new rows to a table (if the key is new) or can update existing rows (if the key already exists). Puts are executed via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#put(org.apache.hadoop.hbase.client.Put)[Table.put] (writeBuffer) or link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch(java.util.List,%20java.lang.Object%5B%5D)[Table.batch] (non-writeBuffer).
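+
+For example, a minimal single-cell Put might look like the following sketch
+(the table and column names here are illustrative only):
+
+[source,java]
+----
+// Write one cell, assuming a table 'myTable' with column family 'cf' exists.
+Configuration conf = HBaseConfiguration.create();
+try (Connection connection = ConnectionFactory.createConnection(conf);
+     Table table = connection.getTable(TableName.valueOf("myTable"))) {
+    Put put = new Put(Bytes.toBytes("row1"));
+    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
+    table.put(put);
+}
+----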
 
 [[scan]]
 === Scans
@@ -552,7 +552,7 @@ hash-joins). So which is the best approach? It depends on what you are trying to
 
 == ACID
 
-See link:http://hbase.apache.org/acid-semantics.html[ACID Semantics].
+See link:/acid-semantics.html[ACID Semantics].
 Lars Hofhansl has also written a note on link:http://hadoop-hbase.blogspot.com/2012/03/acid-in-hbase.html[ACID in HBase].
 
 ifdef::backend-docbook[]


[08/11] hbase git commit: HBASE-13908 update site docs for 1.2 RC.

Posted by bu...@apache.org.
HBASE-13908 update site docs for 1.2 RC.

copied from master as of c5f3d17ae3a61cbf77cab89cddd8303e20e5e734


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6f07973d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6f07973d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6f07973d

Branch: refs/heads/branch-1.2
Commit: 6f07973dc2be5b521cd5e7fa220d5ab9cba6e76a
Parents: a47a7a6
Author: Sean Busbey <bu...@apache.org>
Authored: Sun Jan 3 07:40:58 2016 +0000
Committer: Sean Busbey <bu...@apache.org>
Committed: Sun Jan 3 07:49:23 2016 +0000

----------------------------------------------------------------------
 .../asciidoc/_chapters/appendix_acl_matrix.adoc |   2 +-
 .../appendix_contributing_to_documentation.adoc | 267 +++---
 .../_chapters/appendix_hfile_format.adoc        | 176 ++--
 src/main/asciidoc/_chapters/architecture.adoc   | 279 +++---
 src/main/asciidoc/_chapters/asf.adoc            |   4 +-
 src/main/asciidoc/_chapters/case_studies.adoc   |   2 +-
 src/main/asciidoc/_chapters/community.adoc      |  42 +-
 src/main/asciidoc/_chapters/compression.adoc    |  40 +-
 src/main/asciidoc/_chapters/configuration.adoc  |  83 +-
 src/main/asciidoc/_chapters/cp.adoc             | 875 +++++++++++++++----
 src/main/asciidoc/_chapters/datamodel.adoc      |  10 +-
 src/main/asciidoc/_chapters/developer.adoc      | 475 +++++-----
 src/main/asciidoc/_chapters/external_apis.adoc  | 782 ++++++++++++++++-
 src/main/asciidoc/_chapters/faq.adoc            |  22 +-
 .../asciidoc/_chapters/getting_started.adoc     |  19 +-
 src/main/asciidoc/_chapters/hbase-default.adoc  | 542 ++++++------
 src/main/asciidoc/_chapters/hbase_history.adoc  |   8 +-
 src/main/asciidoc/_chapters/hbase_mob.adoc      | 236 +++++
 src/main/asciidoc/_chapters/hbck_in_depth.adoc  |  24 +-
 src/main/asciidoc/_chapters/mapreduce.adoc      |  57 +-
 src/main/asciidoc/_chapters/ops_mgt.adoc        | 240 ++++-
 src/main/asciidoc/_chapters/other_info.adoc     |  34 +-
 src/main/asciidoc/_chapters/performance.adoc    |  30 +-
 src/main/asciidoc/_chapters/preface.adoc        |  19 +-
 src/main/asciidoc/_chapters/rpc.adoc            |  25 +-
 src/main/asciidoc/_chapters/schema_design.adoc  | 141 ++-
 src/main/asciidoc/_chapters/security.adoc       |  89 +-
 src/main/asciidoc/_chapters/shell.adoc          |   2 +-
 src/main/asciidoc/_chapters/spark.adoc          | 451 ++++++++++
 .../_chapters/thrift_filter_language.adoc       |   3 +-
 src/main/asciidoc/_chapters/tracing.adoc        |  65 +-
 .../asciidoc/_chapters/troubleshooting.adoc     |  43 +-
 src/main/asciidoc/_chapters/unit_testing.adoc   |  32 +-
 src/main/asciidoc/_chapters/upgrading.adoc      |  24 +-
 src/main/asciidoc/_chapters/zookeeper.adoc      |  57 +-
 src/main/asciidoc/book.adoc                     |   2 +
 36 files changed, 3898 insertions(+), 1304 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
index cb285f3..698ae82 100644
--- a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
+++ b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
@@ -65,7 +65,7 @@ Possible permissions include the following:
 For the most part, permissions work in an expected way, with the following caveats:
 
 Having Write permission does not imply Read permission.::
-  It is possible and sometimes desirable for a user to be able to write data that same user cannot read. One such example is a log-writing process. 
+  It is possible and sometimes desirable for a user to be able to write data that same user cannot read. One such example is a log-writing process.
 The [systemitem]+hbase:meta+ table is readable by every user, regardless of the user's other grants or restrictions.::
   This is a requirement for HBase to function correctly.
 `CheckAndPut` and `CheckAndDelete` operations will fail if the user does not have both Write and Read permission.::

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc b/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
index 6b31059..4588e95 100644
--- a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
+++ b/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
@@ -30,15 +30,14 @@
 :toc: left
 :source-language: java
 
-The Apache HBase project welcomes contributions to all aspects of the project, including the documentation.
+The Apache HBase project welcomes contributions to all aspects of the project,
+including the documentation.
 
 In HBase, documentation includes the following areas, and probably some others:
 
 * The link:http://hbase.apache.org/book.html[HBase Reference
   Guide] (this book)
 * The link:http://hbase.apache.org/[HBase website]
-* The link:http://wiki.apache.org/hadoop/Hbase[HBase
-  Wiki]
 * API documentation
 * Command-line utility output and help text
 * Web UI strings, explicit help text, context-sensitive strings, and others
@@ -46,99 +45,121 @@ In HBase, documentation includes the following areas, and probably some others:
 * Comments in source files, configuration files, and others
 * Localization of any of the above into target languages other than English
 
-No matter which area you want to help out with, the first step is almost always to download (typically by cloning the Git repository) and familiarize yourself with the HBase source code.
-The only exception in the list above is the HBase Wiki, which is edited online.
-For information on downloading and building the source, see <<developer,developer>>.
-
-=== Getting Access to the Wiki
-
-The HBase Wiki is not well-maintained and much of its content has been moved into the HBase Reference Guide (this guide). However, some pages on the Wiki are well maintained, and it would be great to have some volunteers willing to help out with the Wiki.
-To request access to the Wiki, register a new account at link:https://wiki.apache.org/hadoop/Hbase?action=newaccount[https://wiki.apache.org/hadoop/Hbase?action=newaccount].
-Contact one of the HBase committers, who can either give you access or refer you to someone who can.
+No matter which area you want to help out with, the first step is almost always
+to download (typically by cloning the Git repository) and familiarize yourself
+with the HBase source code. For information on downloading and building the source,
+see <<developer,developer>>.
 
 === Contributing to Documentation or Other Strings
 
-If you spot an error in a string in a UI, utility, script, log message, or elsewhere, or you think something could be made more clear, or you think text needs to be added where it doesn't currently exist, the first step is to file a JIRA.
-Be sure to set the component to `Documentation` in addition any other involved components.
-Most components have one or more default owners, who monitor new issues which come into those queues.
-Regardless of whether you feel able to fix the bug, you should still file bugs where you see them.
+If you spot an error in a string in a UI, utility, script, log message, or elsewhere,
+or you think something could be made more clear, or you think text needs to be added
+where it doesn't currently exist, the first step is to file a JIRA. Be sure to set
+the component to `Documentation` in addition to any other involved components. Most
+components have one or more default owners, who monitor new issues which come into
+those queues. Regardless of whether you feel able to fix the bug, you should still
+file bugs where you see them.
 
 If you want to try your hand at fixing your newly-filed bug, assign it to yourself.
-You will need to clone the HBase Git repository to your local system and work on the issue there.
-When you have developed a potential fix, submit it for review.
-If it addresses the issue and is seen as an improvement, one of the HBase committers will commit it to one or more branches, as appropriate.
+You will need to clone the HBase Git repository to your local system and work on
+the issue there. When you have developed a potential fix, submit it for review.
+If it addresses the issue and is seen as an improvement, one of the HBase committers
+will commit it to one or more branches, as appropriate.
 
 .Procedure: Suggested Work flow for Submitting Patches
-This procedure goes into more detail than Git pros will need, but is included in this appendix so that people unfamiliar with Git can feel confident contributing to HBase while they learn.
+This procedure goes into more detail than Git pros will need, but is included
+in this appendix so that people unfamiliar with Git can feel confident contributing
+to HBase while they learn.
 
 . If you have not already done so, clone the Git repository locally.
   You only need to do this once.
-. Fairly often, pull remote changes into your local repository by using the `git pull` command, while your master branch is checked out.
+. Fairly often, pull remote changes into your local repository by using the
+`git pull` command, while your tracking branch is checked out.
 . For each issue you work on, create a new branch.
-  One convention that works well for naming the branches is to name a given branch the same as the JIRA it relates to:
+  One convention that works well for naming the branches is to name a given branch
+  the same as the JIRA it relates to:
 +
 ----
 $ git checkout -b HBASE-123456
 ----
 
-. Make your suggested changes on your branch, committing your changes to your local repository often.
-  If you need to switch to working on a different issue, remember to check out the appropriate branch.
-. When you are ready to submit your patch, first be sure that HBase builds cleanly and behaves as expected in your modified branch.
-  If you have made documentation changes, be sure the documentation and website builds by running `mvn clean site`.
-+
-NOTE: Before you use the `site` target the very first time, be sure you have built HBase at least once, in order to fetch all the Maven dependencies you need.
-+
-----
-$ mvn clean install -DskipTests               # Builds HBase
-----
-+
-----
-$ mvn clean site -DskipTests                  # Builds the website and documentation
-----
-+
-If any errors occur, address them.
-
-. If it takes you several days or weeks to implement your fix, or you know that the area of the code you are working in has had a lot of changes lately, make sure you rebase your branch against the remote master and take care of any conflicts before submitting your patch.
+. Make your suggested changes on your branch, committing your changes to your
+local repository often. If you need to switch to working on a different issue,
+remember to check out the appropriate branch.
+. When you are ready to submit your patch, first be sure that HBase builds cleanly
+and behaves as expected in your modified branch.
+. If you have made documentation changes, be sure the documentation and website
+builds by running `mvn clean site`.
+. If it takes you several days or weeks to implement your fix, or you know that
+the area of the code you are working in has had a lot of changes lately, make
+sure you rebase your branch against the remote master and take care of any conflicts
+before submitting your patch.
 +
 ----
-
 $ git checkout HBASE-123456
 $ git rebase origin/master
 ----
 
-. Generate your patch against the remote master.
-  Run the following command from the top level of your git repository (usually called `hbase`):
+. Generate your patch against the remote master. Run the following command from
+the top level of your git repository (usually called `hbase`):
 +
 ----
 $ git format-patch --stdout origin/master > HBASE-123456.patch
 ----
 +
 The name of the patch should contain the JIRA ID.
-Look over the patch file to be sure that you did not change any additional files by accident and that there are no other surprises.
-When you are satisfied, attach the patch to the JIRA and click the btn:[Patch Available] button.
-A reviewer will review your patch.
-If you need to submit a new version of the patch, leave the old one on the JIRA and add a version number to the name of the new patch.
-
+. Look over the patch file to be sure that you did not change any additional files
+by accident and that there are no other surprises.
+. When you are satisfied, attach the patch to the JIRA and click the
+btn:[Patch Available] button. A reviewer will review your patch.
+. If you need to submit a new version of the patch, leave the old one on the
+JIRA and add a version number to the name of the new patch.
 . After a change has been committed, there is no need to keep your local branch around.
-  Instead you should run `git pull` to get the new change into your master branch.
 
 === Editing the HBase Website
 
 The source for the HBase website is in the HBase source, in the _src/main/site/_ directory.
-Within this directory, source for the individual pages is in the _xdocs/_ directory, and images referenced in those pages are in the _images/_ directory.
+Within this directory, source for the individual pages is in the _xdocs/_ directory,
+and images referenced in those pages are in the _resources/images/_ directory.
 This directory also stores images used in the HBase Reference Guide.
 
-The website's pages are written in an HTML-like XML dialect called xdoc, which has a reference guide at link:http://maven.apache.org/archives/maven-1.x/plugins/xdoc/reference/xdocs.html.
-You can edit these files in a plain-text editor, an IDE, or an XML editor such as XML Mind XML Editor (XXE) or Oxygen XML Author. 
+The website's pages are written in an HTML-like XML dialect called xdoc, which
+has a reference guide at
+http://maven.apache.org/archives/maven-1.x/plugins/xdoc/reference/xdocs.html.
+You can edit these files in a plain-text editor, an IDE, or an XML editor such
+as XML Mind XML Editor (XXE) or Oxygen XML Author.
+
+To preview your changes, build the website using the `mvn clean site -DskipTests`
+command. The HTML output resides in the _target/site/_ directory.
+When you are satisfied with your changes, follow the procedure in
+<<submit_doc_patch_procedure,submit doc patch procedure>> to submit your patch.
 
-To preview your changes, build the website using the +mvn clean site
-                -DskipTests+ command.
-The HTML output resides in the _target/site/_ directory.
-When you are satisfied with your changes, follow the procedure in <<submit_doc_patch_procedure,submit doc patch procedure>> to submit your patch.
+[[website_publish]]
+=== Publishing the HBase Website and Documentation
+
+HBase uses the ASF's `gitpubsub` mechanism.
+. After generating the website and documentation
+artifacts using `mvn clean site site:stage`, check out the `asf-site` repository.
+
+. Remove previously-generated content using the following command:
++
+----
+rm -rf *apidocs* *xref* *book* *.html *.pdf* css js
+----
++
+WARNING: Do not remove the `0.94/` directory. To regenerate them, you must check out
+the 0.94 branch and run `mvn clean site site:stage` from there, and then copy the
+artifacts to the 0.94/ directory of the `asf-site` branch.
+
+. Copy the contents of `target/staging` to the branch.
+
+. Add and commit your changes, and submit a patch for review.
 
 === HBase Reference Guide Style Guide and Cheat Sheet
 
-The HBase Reference Guide is written in Asciidoc and built using link:http://asciidoctor.org[AsciiDoctor]. The following cheat sheet is included for your reference. More nuanced and comprehensive documentation is available at link:http://asciidoctor.org/docs/user-manual/.
+The HBase Reference Guide is written in Asciidoc and built using link:http://asciidoctor.org[AsciiDoctor].
+The following cheat sheet is included for your reference. More nuanced and comprehensive documentation
+is available at http://asciidoctor.org/docs/user-manual/.
 
 .AsciiDoc Cheat Sheet
 [cols="1,1,a",options="header"]
@@ -147,15 +168,15 @@ The HBase Reference Guide is written in Asciidoc and built using link:http://asc
 | A paragraph | a paragraph | Just type some text with a blank line at the top and bottom.
 | Add line breaks within a paragraph without adding blank lines | Manual line breaks | This will break + at the plus sign. Or prefix the whole paragraph with a line containing '[%hardbreaks]'
 | Give a title to anything | Colored italic bold differently-sized text | .MyTitle (no space between the period and the words) on the line before the thing to be titled
-| In-Line Code or commands | monospace | \`text`  
+| In-Line Code or commands | monospace | \`text`
 | In-line literal content (things to be typed exactly as shown) | bold mono | \*\`typethis`*
 | In-line replaceable content (things to substitute with your own values) | bold italic mono | \*\_typesomething_*
-| Code blocks with highlighting | monospace, highlighted, preserve space | 
+| Code blocks with highlighting | monospace, highlighted, preserve space |
 ........
 [source,java]
----- 
-  myAwesomeCode() { 
-} 
+----
+  myAwesomeCode() {
+}
 ----
 ........
 | Code block included from a separate file | included just as though it were part of the main file |
@@ -165,51 +186,52 @@ The HBase Reference Guide is written in Asciidoc and built using link:http://asc
 include\::path/to/app.rb[]
 ----
 ................
-| Include only part of a separate file | Similar to Javadoc | See link:http://asciidoctor.org/docs/user-manual/#by-tagged-regions
+| Include only part of a separate file | Similar to Javadoc
+| See http://asciidoctor.org/docs/user-manual/#by-tagged-regions
 | Filenames, directory names, new terms | italic | \_hbase-default.xml_
-| External naked URLs | A link with the URL as link text | 
+| External naked URLs | A link with the URL as link text |
 ----
 link:http://www.google.com
 ----
 
-| External URLs with text | A link with arbitrary link text | 
+| External URLs with text | A link with arbitrary link text |
 ----
 link:http://www.google.com[Google]
 ----
 
-| Create an internal anchor to cross-reference | not rendered | 
+| Create an internal anchor to cross-reference | not rendered |
 ----
 [[anchor_name]]
 ----
-| Cross-reference an existing anchor using its default title| an internal hyperlink using the element title if available, otherwise using the anchor name | 
+| Cross-reference an existing anchor using its default title| an internal hyperlink using the element title if available, otherwise using the anchor name |
 ----
 <<anchor_name>>
 ----
-| Cross-reference an existing anchor using custom text | an internal hyperlink using arbitrary text | 
+| Cross-reference an existing anchor using custom text | an internal hyperlink using arbitrary text |
 ----
 <<anchor_name,Anchor Text>>
 ----
-| A block image | The image with alt text | 
+| A block image | The image with alt text |
 ----
-image::sunset.jpg[Alt Text] 
+image::sunset.jpg[Alt Text]
 ----
 (put the image in the src/main/site/resources/images directory)
-| An inline image | The image with alt text, as part of the text flow | 
+| An inline image | The image with alt text, as part of the text flow |
 ----
 image:sunset.jpg [Alt Text]
 ----
 (only one colon)
-| Link to a remote image | show an image hosted elsewhere | 
+| Link to a remote image | show an image hosted elsewhere |
 ----
-image::http://inkscape.org/doc/examples/tux.svg[Tux,250,350] 
+image::http://inkscape.org/doc/examples/tux.svg[Tux,250,350]
 ----
 (or `image:`)
 | Add dimensions or a URL to the image | depends | inside the brackets after the alt text, specify width, height and/or link="http://my_link.com"
-| A footnote | subscript link which takes you to the footnote | 
+| A footnote | subscript link which takes you to the footnote |
 ----
 Some text.footnote:[The footnote text.]
 ----
-| A note or warning with no title | The admonition image followed by the admonition | 
+| A note or warning with no title | The admonition image followed by the admonition |
 ----
 NOTE:My note here
 ----
@@ -217,7 +239,7 @@ NOTE:My note here
 ----
 WARNING:My warning here
 ----
-| A complex note | The note has a title and/or multiple paragraphs and/or code blocks or lists, etc | 
+| A complex note | The note has a title and/or multiple paragraphs and/or code blocks or lists, etc |
 ........
 .The Title
 [NOTE]
@@ -228,26 +250,26 @@ some source code
 ----
 ====
 ........
-| Bullet lists | bullet lists | 
+| Bullet lists | bullet lists |
 ----
 * list item 1
 ----
 (see http://asciidoctor.org/docs/user-manual/#unordered-lists)
-| Numbered lists | numbered list | 
+| Numbered lists | numbered list |
 ----
-. list item 2 
+. list item 2
 ----
 (see http://asciidoctor.org/docs/user-manual/#ordered-lists)
-| Checklists | Checked or unchecked boxes | 
+| Checklists | Checked or unchecked boxes |
 Checked:
 ----
-- [*] 
+- [*]
 ----
 Unchecked:
 ----
 - [ ]
 ----
-| Multiple levels of lists | bulleted or numbered or combo | 
+| Multiple levels of lists | bulleted or numbered or combo |
 ----
 . Numbered (1), at top level
 * Bullet (2), nested under 1
@@ -257,14 +279,18 @@ Unchecked:
 ** Bullet (6), nested under 5
 - [x] Checked (7), at top level
 ----
-| Labelled lists / variablelists | a list item title or summary followed by content | 
+| Labelled lists / variablelists | a list item title or summary followed by content |
 ----
-Title:: content 
+Title:: content
 
 Title::
   content
 ----
-| Sidebars, quotes, or other blocks of text | a block of text, formatted differently from the default | Delimited using different delimiters, see link:http://asciidoctor.org/docs/user-manual/#built-in-blocks-summary. Some of the examples above use delimiters like \...., ----,====.
+| Sidebars, quotes, or other blocks of text
+| a block of text, formatted differently from the default
+| Delimited using different delimiters,
+see http://asciidoctor.org/docs/user-manual/#built-in-blocks-summary.
+Some of the examples above use delimiters like \...., ----,====.
 ........
 [example]
 ====
@@ -288,7 +314,7 @@ ____
 ........
 
 If you want to insert literal Asciidoc content that keeps being interpreted, when in doubt, use eight dots as the delimiter at the top and bottom.
-| Nested Sections | chapter, section, sub-section, etc | 
+| Nested Sections | chapter, section, sub-section, etc |
 ----
 = Book (or chapter if the chapter can be built alone, see the leveloffset info below)
 
@@ -296,7 +322,7 @@ If you want to insert literal Asciidoc content that keeps being interpreted, whe
 
 === Section (or subsection, etc)
 
-==== Subsection 
+==== Subsection
 ----
 
 and so on up to 6 levels (think carefully about going deeper than 4 levels, maybe you can just titled paragraphs or lists instead). Note that you can include a book inside another book by adding the `:leveloffset:+1` macro directive directly before your include, and resetting it to 0 directly after. See the _book.adoc_ source for examples, as this is how this guide handles chapters. *Don't do it for prefaces, glossaries, appendixes, or other special types of chapters.*
@@ -309,7 +335,7 @@ include::[/path/to/file.adoc]
 
 For plenty of examples. see _book.adoc_.
 | A table | a table | See http://asciidoctor.org/docs/user-manual/#tables. Generally rows are separated by newlines and columns by pipes
-| Comment out a single line | A  line is skipped during rendering | 
+| Comment out a single line | A  line is skipped during rendering |
 `+//+ This line won't show up`
 | Comment out a block | A section of the file is skipped during rendering |
 ----
@@ -317,7 +343,7 @@ For plenty of examples. see _book.adoc_.
 Nothing between the slashes will show up.
 ////
 ----
-| Highlight text for review | text shows up with yellow background | 
+| Highlight text for review | text shows up with yellow background |
 ----
 Test between #hash marks# is highlighted yellow.
 ----
@@ -326,20 +352,27 @@ Test between #hash marks# is highlighted yellow.
 
 === Auto-Generated Content
 
-Some parts of the HBase Reference Guide, most notably <<config.files,config.files>>, are generated automatically, so that this area of the documentation stays in sync with the code.
-This is done by means of an XSLT transform, which you can examine in the source at _src/main/xslt/configuration_to_asciidoc_chapter.xsl_.
-This transforms the _hbase-common/src/main/resources/hbase-default.xml_            file into an Asciidoc output which can be included in the Reference Guide.
-Sometimes, it is necessary to add configuration parameters or modify their descriptions.
-Make the modifications to the source file, and they will be included in the Reference Guide when it is rebuilt.
+Some parts of the HBase Reference Guide, most notably <<config.files,config.files>>,
+are generated automatically, so that this area of the documentation stays in
+sync with the code. This is done by means of an XSLT transform, which you can examine
+in the source at _src/main/xslt/configuration_to_asciidoc_chapter.xsl_. This
+transforms the _hbase-common/src/main/resources/hbase-default.xml_ file into an
+Asciidoc output which can be included in the Reference Guide.
 
-It is possible that other types of content can and will be automatically generated from HBase source files in the future.
+Sometimes, it is necessary to add configuration parameters or modify their descriptions.
+Make the modifications to the source file, and they will be included in the
+Reference Guide when it is rebuilt.
 
+It is possible that other types of content can and will be automatically generated
+from HBase source files in the future.
 
 === Images in the HBase Reference Guide
 
-You can include images in the HBase Reference Guide. It is important to include an image title if possible, and alternate text always. 
-This allows screen readers to navigate to the image and also provides alternative text for the image.
-The following is an example of an image with a title and alternate text. Notice the double colon.
+You can include images in the HBase Reference Guide. It is important to include
+an image title if possible, and alternate text always. This allows screen readers
+to navigate to the image and also provides alternative text for the image.
+The following is an example of an image with a title and alternate text. Notice
+the double colon.
 
 [source,asciidoc]
 ----
@@ -347,42 +380,53 @@ The following is an example of an image with a title and alternate text. Notice
 image::sunset.jpg[Alt Text]
 ----
 
-Here is an example of an inline image with alternate text. Notice the single colon. Inline images cannot have titles. They are generally small images like GUI buttons.
+Here is an example of an inline image with alternate text. Notice the single colon.
+Inline images cannot have titles. They are generally small images like GUI buttons.
 
 [source,asciidoc]
 ----
 image:sunset.jpg[Alt Text]
 ----
 
-
 When doing a local build, save the image to the _src/main/site/resources/images/_ directory.
 When you link to the image, do not include the directory portion of the path.
 The image will be copied to the appropriate target location during the build of the output.
 
-When you submit a patch which includes adding an image to the HBase Reference Guide, attach the image to the JIRA.
-If the committer asks where the image should be committed, it should go into the above directory.
+When you submit a patch which includes adding an image to the HBase Reference Guide,
+attach the image to the JIRA. If the committer asks where the image should be
+committed, it should go into the above directory.
 
 === Adding a New Chapter to the HBase Reference Guide
 
-If you want to add a new chapter to the HBase Reference Guide, the easiest way is to copy an existing chapter file, rename it, and change the ID (in double brackets) and title. Chapters are located in the _src/main/asciidoc/_chapters/_ directory.
+If you want to add a new chapter to the HBase Reference Guide, the easiest way
+is to copy an existing chapter file, rename it, and change the ID (in double
+brackets) and title. Chapters are located in the _src/main/asciidoc/_chapters/_
+directory.
 
-Delete the existing content and create the new content.
-Then open the _src/main/asciidoc/book.adoc_ file, which is the main file for the HBase Reference Guide, and copy an existing `include` element to include your new chapter in the appropriate location.
-Be sure to add your new file to your Git repository before creating your patch.
+Delete the existing content and create the new content. Then open the
+_src/main/asciidoc/book.adoc_ file, which is the main file for the HBase Reference
+Guide, and copy an existing `include` element to include your new chapter in the
+appropriate location. Be sure to add your new file to your Git repository before
+creating your patch.
 
 When in doubt, check to see how other files have been included.
 
 === Common Documentation Issues
 
-The following documentation issues come up often.
-Some of these are preferences, but others can create mysterious build errors or other problems.
+The following documentation issues come up often. Some of these are preferences,
+but others can create mysterious build errors or other problems.
 
 [qanda]
 Isolate Changes for Easy Diff Review.::
-  Be careful with pretty-printing or re-formatting an entire XML file, even if the formatting has degraded over time. If you need to reformat a file, do that in a separate JIRA where you do not change any content. Be careful because some XML editors do a bulk-reformat when you open a new file, especially if you use GUI mode in the editor.
+  Be careful with pretty-printing or re-formatting an entire XML file, even if
+  the formatting has degraded over time. If you need to reformat a file, do that
+  in a separate JIRA where you do not change any content. Be careful because some
+  XML editors do a bulk-reformat when you open a new file, especially if you use
+  GUI mode in the editor.
 
 Syntax Highlighting::
-  The HBase Reference Guide uses `coderay` for syntax highlighting. To enable syntax highlighting for a given code listing, use the following type of syntax:
+  The HBase Reference Guide uses `coderay` for syntax highlighting. To enable
+  syntax highlighting for a given code listing, use the following type of syntax:
 +
 ........
 [source,xml]
@@ -391,5 +435,6 @@ Syntax Highlighting::
 ----
 ........
 +
-Several syntax types are supported. The most interesting ones for the HBase Reference Guide are `java`, `xml`, `sql`, and `bash`.
+Several syntax types are supported. The most interesting ones for the HBase
+Reference Guide are `java`, `xml`, `sql`, and `bash`.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_hfile_format.adoc b/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
index b74763c..18eafe6 100644
--- a/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
+++ b/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
@@ -44,16 +44,16 @@ An HFile in version 1 format is structured as follows:
 .HFile V1 Format
 image::hfile.png[HFile Version 1]
 
-====  Block index format in version 1 
+====  Block index format in version 1
 
 The block index in version 1 is very straightforward.
-For each entry, it contains: 
+For each entry, it contains:
 
 . Offset (long)
 . Uncompressed size (int)
-. Key (a serialized byte array written using Bytes.writeByteArray) 
-.. Key length as a variable-length integer (VInt) 
-.. Key bytes 
+. Key (a serialized byte array written using Bytes.writeByteArray)
+.. Key length as a variable-length integer (VInt)
+.. Key bytes
 
 
 The number of entries in the block index is stored in the fixed file trailer, and has to be passed in to the method that reads the block index.
@@ -66,7 +66,7 @@ We fix this limitation in version 2, where we store on-disk block size instead o
 
 Note:  this feature was introduced in HBase 0.92
 
-==== Motivation 
+==== Motivation
 
 We found it necessary to revise the HFile format after encountering high memory usage and slow startup times caused by large Bloom filters and block indexes in the region server.
 Bloom filters can get as large as 100 MB per HFile, which adds up to 2 GB when aggregated over 20 regions.
@@ -80,7 +80,7 @@ Bloom filter blocks and index blocks (we call these "inline blocks") become inte
 
 HFile is a low-level file format by design, and it should not deal with application-specific details such as Bloom filters, which are handled at StoreFile level.
 Therefore, we call Bloom filter blocks in an HFile "inline" blocks.
-We also supply HFile with an interface to write those inline blocks. 
+We also supply HFile with an interface to write those inline blocks.
 
 Another format modification aimed at reducing the region server startup time is to use a contiguous "load-on-open" section that has to be loaded in memory at the time an HFile is being opened.
 Currently, as an HFile opens, there are separate seek operations to read the trailer, data/meta indexes, and file info.
@@ -91,57 +91,57 @@ In version 2, we seek once to read the trailer and seek again to read everything
 ==== Overview of Version 2
 
 The version of HBase introducing the above features reads both version 1 and 2 HFiles, but only writes version 2 HFiles.
-A version 2 HFile is structured as follows: 
+A version 2 HFile is structured as follows:
 
 .HFile Version 2 Structure
-image:hfilev2.png[HFile Version 2]   
+image:hfilev2.png[HFile Version 2]
 
 ==== Unified version 2 block format
 
-In the version 2 every block in the data section contains the following fields: 
-
-. 8 bytes: Block type, a sequence of bytes equivalent to version 1's "magic records". Supported block types are: 
-.. DATA – data blocks 
-.. LEAF_INDEX – leaf-level index blocks in a multi-level-block-index 
-.. BLOOM_CHUNK – Bloom filter chunks 
-.. META – meta blocks (not used for Bloom filters in version 2 anymore) 
-.. INTERMEDIATE_INDEX – intermediate-level index blocks in a multi-level blockindex 
-.. ROOT_INDEX – root>level index blocks in a multi>level block index 
-.. FILE_INFO – the ``file info'' block, a small key>value map of metadata 
-.. BLOOM_META – a Bloom filter metadata block in the load>on>open section 
+In version 2, every block in the data section contains the following fields:
+
+. 8 bytes: Block type, a sequence of bytes equivalent to version 1's "magic records". Supported block types are:
+.. DATA – data blocks
+.. LEAF_INDEX – leaf-level index blocks in a multi-level block index
+.. BLOOM_CHUNK – Bloom filter chunks
+.. META – meta blocks (not used for Bloom filters in version 2 anymore)
+.. INTERMEDIATE_INDEX – intermediate-level index blocks in a multi-level block index
+.. ROOT_INDEX – root-level index blocks in a multi-level block index
+.. FILE_INFO – the ``file info'' block, a small key-value map of metadata
+.. BLOOM_META – a Bloom filter metadata block in the load-on-open section
 .. TRAILER – a fixed>size file trailer.
+  As opposed to the above, this is not an HFile v2 block but a fixed-size (for each HFile version) data structure
-.. INDEX_V1 – this block type is only used for legacy HFile v1 block 
-. Compressed size of the block's data, not including the header (int). 
+  As opposed to the above, this is not an HFile v2 block but a fixed>size (for each HFile version) data structure
+.. INDEX_V1 – this block type is only used for legacy HFile v1 block
+. Compressed size of the block's data, not including the header (int).
 +
-Can be used for skipping the current data block when scanning HFile data. 
+Can be used for skipping the current data block when scanning HFile data.
 . Uncompressed size of the block's data, not including the header (int)
 +
-This is equal to the compressed size if the compression algorithm is NONE 
+This is equal to the compressed size if the compression algorithm is NONE
 . File offset of the previous block of the same type (long)
 +
-Can be used for seeking to the previous data/index block 
+Can be used for seeking to the previous data/index block
 . Compressed data (or uncompressed data if the compression algorithm is NONE).
 
 The above format of blocks is used in the following HFile sections:
 
 Scanned block section::
   The section is named so because it contains all data blocks that need to be read when an HFile is scanned sequentially.
-  Also contains leaf block index and Bloom chunk blocks. 
+  Also contains leaf block index and Bloom chunk blocks.
 Non-scanned block section::
   This section still contains unified-format v2 blocks but it does not have to be read when doing a sequential scan.
-  This section contains "meta" blocks and intermediate-level index blocks. 
+  This section contains "meta" blocks and intermediate-level index blocks.
 
-We are supporting "meta" blocks in version 2 the same way they were supported in version 1, even though we do not store Bloom filter data in these blocks anymore. 
+We are supporting "meta" blocks in version 2 the same way they were supported in version 1, even though we do not store Bloom filter data in these blocks anymore.
 
 ====  Block index in version 2
 
-There are three types of block indexes in HFile version 2, stored in two different formats (root and non-root): 
+There are three types of block indexes in HFile version 2, stored in two different formats (root and non-root):
 
 . Data index -- version 2 multi-level block index, consisting of:
-.. Version 2 root index, stored in the data block index section of the file 
-.. Optionally, version 2 intermediate levels, stored in the non%root format in   the data index section of the file. Intermediate levels can only be present if leaf level blocks are present 
-.. Optionally, version 2 leaf levels, stored in the non%root format inline with   data blocks 
+.. Version 2 root index, stored in the data block index section of the file
+.. Optionally, version 2 intermediate levels, stored in the non-root format in the data index section of the file. Intermediate levels can only be present if leaf level blocks are present
+.. Optionally, version 2 leaf levels, stored in the non-root format inline with data blocks
 . Meta index -- version 2 root index format only, stored in the meta index section of the file
 . Bloom index -- version 2 root index format only, stored in the ``load-on-open'' section as part of Bloom filter metadata.
 
@@ -150,19 +150,19 @@ There are three types of block indexes in HFile version 2, stored in two differe
 This format applies to:
 
 . Root level of the version 2 data index
-. Entire meta and Bloom indexes in version 2, which are always single-level. 
+. Entire meta and Bloom indexes in version 2, which are always single-level.
 
-A version 2 root index block is a sequence of entries of the following format, similar to entries of a version 1 block index, but storing on-disk size instead of uncompressed size. 
+A version 2 root index block is a sequence of entries of the following format, similar to entries of a version 1 block index, but storing on-disk size instead of uncompressed size.
 
-. Offset (long) 
+. Offset (long)
 +
+This offset may point to a data block or to a deeper-level index block.
+This offset may point to a data block or to a deeper>level index block.
 
-. On-disk size (int) 
-. Key (a serialized byte array stored using Bytes.writeByteArray) 
+. On-disk size (int)
+. Key (a serialized byte array stored using Bytes.writeByteArray)
 +
-. Key (VInt) 
-. Key bytes 
+.. Key length (VInt)
+.. Key bytes
 
 
 A single-level version 2 block index consists of just a single root index block.
@@ -172,13 +172,13 @@ For the data index and the meta index the number of entries is stored in the tra
 For a multi-level block index we also store the following fields in the root index block in the load-on-open section of the HFile, in addition to the data structure described above:
 
 . Middle leaf index block offset
-. Middle leaf block on-disk size (meaning the leaf index block containing the reference to the ``middle'' data block of the file) 
+. Middle leaf block on-disk size (meaning the leaf index block containing the reference to the ``middle'' data block of the file)
 . The index of the mid-key (defined below) in the middle leaf-level block.
 
 
 
 These additional fields are used to efficiently retrieve the mid-key of the HFile used in HFile splits, which we define as the first key of the block with a zero-based index of (n – 1) / 2, if the total number of blocks in the HFile is n.
-This definition is consistent with how the mid-key was determined in HFile version 1, and is reasonable in general, because blocks are likely to be the same size on average, but we don't have any estimates on individual key/value pair sizes. 
+This definition is consistent with how the mid-key was determined in HFile version 1, and is reasonable in general, because blocks are likely to be the same size on average, but we don't have any estimates on individual key/value pair sizes.
 
 
 
@@ -189,52 +189,57 @@ When reading the HFile and the mid-key is requested, we retrieve the middle leaf
 ==== Non-root block index format in version 2
 
 This format applies to intermediate-level and leaf index blocks of a version 2 multi-level data block index.
-Every non-root index block is structured as follows. 
-
-. numEntries: the number of entries (int). 
-. entryOffsets: the ``secondary index'' of offsets of entries in the block, to facilitate a quick binary search on the key (numEntries + 1 int values). The last value is the total length of all entries in this index block.
-  For example, in a non-root index block with entry sizes 60, 80, 50 the ``secondary index'' will contain the following int array: {0, 60, 140, 190}.
+Every non-root index block is structured as follows.
+
+. numEntries: the number of entries (int).
+. entryOffsets: the "secondary index" of offsets of entries in the block, to facilitate
+  a quick binary search on the key (`numEntries + 1` int values). The last value
+  is the total length of all entries in this index block. For example, in a non-root
+  index block with entry sizes 60, 80, 50 the "secondary index" will contain the
+  following int array: `{0, 60, 140, 190}` (a short sketch of this arithmetic follows the list).
 . Entries.
-  Each entry contains: 
+  Each entry contains:
 +
-. Offset of the block referenced by this entry in the file (long) 
-. On>disk size of the referenced block (int) 
+. Offset of the block referenced by this entry in the file (long)
+. On-disk size of the referenced block (int)
 . Key.
-  The length can be calculated from entryOffsets. 
+  The length can be calculated from entryOffsets.
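+
+A minimal sketch of the `entryOffsets` arithmetic, in plain Java rather than HBase
+code: each offset is the running total of the entry sizes before it, and the last
+value is the total length of all entries. The sizes are the ones from the example
+above.
+
+[source,java]
+----
+// Illustrative only: derive the "secondary index" from the entry sizes.
+int[] entrySizes = {60, 80, 50};
+int[] entryOffsets = new int[entrySizes.length + 1];
+for (int i = 0; i < entrySizes.length; i++) {
+  entryOffsets[i + 1] = entryOffsets[i] + entrySizes[i];
+}
+// entryOffsets now holds {0, 60, 140, 190}.
+----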
 
 
 ==== Bloom filters in version 2
 
-In contrast with version 1, in a version 2 HFile Bloom filter metadata is stored in the load-on-open section of the HFile for quick startup. 
+In contrast with version 1, in a version 2 HFile Bloom filter metadata is stored in the load-on-open section of the HFile for quick startup.
 
-. A compound Bloom filter. 
+. A compound Bloom filter.
 +
-. Bloom filter version = 3 (int). There used to be a DynamicByteBloomFilter class that had the Bloom   filter version number 2 
-. The total byte size of all compound Bloom filter chunks (long) 
-. Number of hash functions (int 
-. Type of hash functions (int) 
-. The total key count inserted into the Bloom filter (long) 
-. The maximum total number of keys in the Bloom filter (long) 
-. The number of chunks (int) 
-. Comparator class used for Bloom filter keys, a UTF>8 encoded string stored   using Bytes.writeByteArray 
-. Bloom block index in the version 2 root block index format 
+. Bloom filter version = 3 (int). There used to be a DynamicByteBloomFilter class that had the Bloom   filter version number 2
+. The total byte size of all compound Bloom filter chunks (long)
+. Number of hash functions (int)
+. Type of hash functions (int)
+. The total key count inserted into the Bloom filter (long)
+. The maximum total number of keys in the Bloom filter (long)
+. The number of chunks (int)
+. Comparator class used for Bloom filter keys, a UTF-8 encoded string stored using Bytes.writeByteArray
+. Bloom block index in the version 2 root block index format
 
 
 ==== File Info format in versions 1 and 2
 
-The file info block is a serialized link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/HbaseMapWritable.html[HbaseMapWritable] (essentially a map from byte arrays to byte arrays) with the following keys, among others.
+The file info block is a serialized map from byte arrays to byte arrays, with the following keys, among others.
 StoreFile-level logic adds more keys to this.
 
 [cols="1,1", frame="all"]
 |===
 |hfile.LASTKEY| The last key of the file (byte array)
 |hfile.AVG_KEY_LEN| The average key length in the file (int)
-|hfile.AVG_VALUE_LEN| The average value length in the file (int)           
+|hfile.AVG_VALUE_LEN| The average value length in the file (int)
 |===
 
-File info format did not change in version 2.
-However, we moved the file info to the final section of the file, which can be loaded as one block at the time the HFile is being opened.
-Also, we do not store comparator in the version 2 file info anymore.
+In version 2, we did not change the file info format, but we moved the file info to
+the final section of the file, which can be loaded as one block when the HFile
+is being opened.
+
+Also, we do not store the comparator in the version 2 file info anymore.
 Instead, we store it in the fixed file trailer.
 This is because we need to know the comparator at the time of parsing the load-on-open section of the HFile.
 
@@ -242,14 +247,15 @@ This is because we need to know the comparator at the time of parsing the load-o
 
 The following table shows common and different fields between fixed file trailers in versions 1 and 2.
 Note that the size of the trailer is different depending on the version, so it is ``fixed'' only within one version.
-However, the version is always stored as the last four-byte integer in the file. 
+However, the version is always stored as the last four-byte integer in the file.
 
 .Differences between HFile Versions 1 and 2
 [cols="1,1", frame="all"]
 |===
 | Version 1 | Version 2
 | |File info offset (long)
-| Data index offset (long)| loadOnOpenOffset (long) /The offset of the sectionthat we need toload when opening the file./
+| Data index offset (long)
+| loadOnOpenOffset (long) /The offset of the section that we need to load when opening the file./
 | | Number of data index entries (int)
 | metaIndexOffset (long) /This field is not being used by the version 1 reader, so we removed it from version 2./ | uncompressedDataIndexSize (long) /The total uncompressed size of the whole data block index, including root-level, intermediate-level, and leaf-level blocks./
 | | Number of meta index entries (int)
@@ -257,7 +263,7 @@ However, the version is always stored as the last four-byte integer in the file.
 | numEntries (int) | numEntries (long)
 | Compression codec: 0 = LZO, 1 = GZ, 2 = NONE (int) | Compression codec: 0 = LZO, 1 = GZ, 2 = NONE (int)
 | | The number of levels in the data block index (int)
-| | firstDataBlockOffset (long) /The offset of the first first data block. Used when scanning./
+| | firstDataBlockOffset (long) /The offset of the first data block. Used when scanning./
 | | lastDataBlockEnd (long) /The offset of the first byte after the last key/value data block. We don't need to go beyond this offset when scanning./
 | Version: 1 (int) | Version: 2 (int)
 |===
@@ -290,42 +296,42 @@ This optimization (implemented by the getShortMidpointKey method) is inspired by
 Note: this feature was introduced in HBase 0.98
 
 [[hfilev3.motivation]]
-==== Motivation 
+==== Motivation
 
-Version 3 of HFile makes changes needed to ease management of encryption at rest and cell-level metadata (which in turn is needed for cell-level ACLs and cell-level visibility labels). For more information see <<hbase.encryption.server,hbase.encryption.server>>, <<hbase.tags,hbase.tags>>, <<hbase.accesscontrol.configuration,hbase.accesscontrol.configuration>>, and <<hbase.visibility.labels,hbase.visibility.labels>>. 
+Version 3 of HFile makes changes needed to ease management of encryption at rest and cell-level metadata (which in turn is needed for cell-level ACLs and cell-level visibility labels). For more information see <<hbase.encryption.server,hbase.encryption.server>>, <<hbase.tags,hbase.tags>>, <<hbase.accesscontrol.configuration,hbase.accesscontrol.configuration>>, and <<hbase.visibility.labels,hbase.visibility.labels>>.
 
 [[hfilev3.overview]]
 ==== Overview
 
 The version of HBase introducing the above features reads HFiles in versions 1, 2, and 3 but only writes version 3 HFiles.
 Version 3 HFiles are structured the same as version 2 HFiles.
-For more information see <<hfilev2.overview,hfilev2.overview>>. 
+For more information see <<hfilev2.overview,hfilev2.overview>>.
 
 [[hvilev3.infoblock]]
 ==== File Info Block in Version 3
 
-Version 3 added two additional pieces of information to the reserved keys in the file info block. 
+Version 3 added two additional pieces of information to the reserved keys in the file info block.
 
 [cols="1,1", frame="all"]
 |===
 | hfile.MAX_TAGS_LEN | The maximum number of bytes needed to store the serialized tags for any single cell in this hfile (int)
  | hfile.TAGS_COMPRESSED | Does the block encoder for this hfile compress tags? (boolean). Should only be present if hfile.MAX_TAGS_LEN is also present.
-|===      
+|===
 
 When reading a Version 3 HFile the presence of `MAX_TAGS_LEN` is used to determine how to deserialize the cells within a data block.
-Therefore, consumers must read the file's info block prior to reading any data blocks. 
+Therefore, consumers must read the file's info block prior to reading any data blocks.
 
-When writing a Version 3 HFile, HBase will always include `MAX_TAGS_LEN ` when flushing the memstore to underlying filesystem and when using prefix tree encoding for data blocks, as described in <<compression,compression>>. 
+When writing a Version 3 HFile, HBase will always include `MAX_TAGS_LEN` when flushing the memstore to the underlying filesystem and when using prefix tree encoding for data blocks, as described in <<compression,compression>>.
 
 When compacting extant files, the default writer will omit `MAX_TAGS_LEN` if all of the files selected do not themselves contain any cells with tags.
 
-See <<compaction,compaction>> for details on the compaction file selection algorithm. 
+See <<compaction,compaction>> for details on the compaction file selection algorithm.
 
 [[hfilev3.datablock]]
 ==== Data Blocks in Version 3
 
 Within an HFile, HBase cells are stored in data blocks as a sequence of KeyValues (see <<hfilev1.overview,hfilev1.overview>>, or link:http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html[Lars George's
-        excellent introduction to HBase Storage]). In version 3, these KeyValue optionally will include a set of 0 or more tags: 
+        excellent introduction to HBase Storage]). In version 3, these KeyValues will optionally include a set of 0 or more tags:
 
 [cols="1,1", frame="all"]
 |===
@@ -335,14 +341,14 @@ Within an HFile, HBase cells are stored in data blocks as a sequence of KeyValue
 2+| Key bytes (variable)
 2+| Value bytes (variable)
 | | Tags Length (2 bytes)
-| | Tags bytes (variable)                
-|===      
+| | Tags bytes (variable)
+|===
 
 If the info block for a given HFile contains an entry for `MAX_TAGS_LEN` each cell will have the length of that cell's tags included, even if that length is zero.
-The actual tags are stored as a sequence of tag length (2 bytes), tag type (1 byte), tag bytes (variable). The format an individual tag's bytes depends on the tag type. 
+The actual tags are stored as a sequence of tag length (2 bytes), tag type (1 byte), tag bytes (variable). The format of an individual tag's bytes depends on the tag type.
 
 Note that the dependence on the contents of the info block implies that prior to reading any data blocks you must first process a file's info block.
-It also implies that prior to writing a data block you must know if the file's info block will include `MAX_TAGS_LEN`. 
+It also implies that prior to writing a data block you must know if the file's info block will include `MAX_TAGS_LEN`.
 
 [[hfilev3.fixedtrailer]]
 ==== Fixed File Trailer in Version 3
@@ -350,6 +356,6 @@ It also implies that prior to writing a data block you must know if the file's i
 The fixed file trailers written with HFile version 3 are always serialized with protocol buffers.
 Additionally, it adds an optional field to the version 2 protocol buffer named encryption_key.
 If HBase is configured to encrypt HFiles this field will store a data encryption key for this particular HFile, encrypted with the current cluster master key using AES.
-For more information see <<hbase.encryption.server,hbase.encryption.server>>. 
+For more information see <<hbase.encryption.server,hbase.encryption.server>>.
 
 :numbered:


[03/11] hbase git commit: HBASE-13908 update site docs for 1.2 RC.

Posted by bu...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/hbase_mob.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase_mob.adoc b/src/main/asciidoc/_chapters/hbase_mob.adoc
new file mode 100644
index 0000000..3f67181
--- /dev/null
+++ b/src/main/asciidoc/_chapters/hbase_mob.adoc
@@ -0,0 +1,236 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+
+[[hbase_mob]]
+== Storing Medium-sized Objects (MOB)
+:doctype: book
+:numbered:
+:toc: left
+:icons: font
+:experimental:
+:toc: left
+:source-language: java
+
+Data comes in many sizes, and saving all of your data in HBase, including binary
+data such as images and documents, is ideal. While HBase can technically handle
+binary objects with cells that are larger than 100 KB in size, HBase's normal
+read and write paths are optimized for values smaller than 100KB in size. When
+HBase deals with large numbers of objects over this threshold, referred to here
+as medium objects, or MOBs, performance is degraded due to write amplification
+caused by splits and compactions. When using MOBs, ideally your objects will be between
+100KB and 10MB. HBase ***FIX_VERSION_NUMBER*** adds support
+for better managing large numbers of MOBs while maintaining performance,
+consistency, and low operational overhead. MOB support is provided by the work
+done in link:https://issues.apache.org/jira/browse/HBASE-11339[HBASE-11339]. To
+take advantage of MOB, you need to use <<hfilev3,HFile version 3>>. Optionally,
+configure the MOB file reader's cache settings for each RegionServer (see
+<<mob.cache.configure>>), then configure specific columns to hold MOB data.
+Client code does not need to change to take advantage of HBase MOB support. The
+feature is transparent to the client.
+
+=== Configuring Columns for MOB
+
+You can configure columns to support MOB during table creation or alteration,
+either in HBase Shell or via the Java API. The two relevant properties are the
+boolean `IS_MOB` and the `MOB_THRESHOLD`, which is the number of bytes at which
+an object is considered to be a MOB. Only `IS_MOB` is required. If you do not
+specify the `MOB_THRESHOLD`, the default threshold value of 100 KB is used.
+
+.Configure a Column for MOB Using HBase Shell
+====
+----
+hbase> create 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400}
+hbase> alter 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400}
+----
+====
+
+.Configure a Column for MOB Using the Java API
+====
+[source,java]
+----
+...
+HColumnDescriptor hcd = new HColumnDescriptor("f");
+hcd.setMobEnabled(true);
+...
+hcd.setMobThreshold(102400L);
+...
+----
+====
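+
+The following is a minimal sketch of applying such a descriptor when creating a
+table through the `Admin` API. It is illustrative only; the table name, family
+name, and threshold are assumptions, not requirements.
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+
+public class CreateMobTableExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    try (Connection connection = ConnectionFactory.createConnection(conf);
+         Admin admin = connection.getAdmin()) {
+      // Equivalent to IS_MOB => true, MOB_THRESHOLD => 102400 in the shell example.
+      HColumnDescriptor hcd = new HColumnDescriptor("f1");
+      hcd.setMobEnabled(true);
+      hcd.setMobThreshold(102400L);
+      HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
+      htd.addFamily(hcd);
+      admin.createTable(htd);
+    }
+  }
+}
+----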
+
+
+=== Testing MOB
+
+The utility `org.apache.hadoop.hbase.IntegrationTestIngestMOB` is provided to assist with testing
+the MOB feature. The utility is run as follows:
+[source,bash]
+----
+$ sudo -u hbase hbase org.apache.hadoop.hbase.IntegrationTestIngestMOB \
+            -threshold 102400 \
+            -minMobDataSize 512 \
+            -maxMobDataSize 5120
+----
+
+* `*threshold*` is the threshold at which cells are considered to be MOBs.
+   The default is 1 kB, expressed in bytes.
+* `*minMobDataSize*` is the minimum value for the size of MOB data.
+   The default is 512 B, expressed in bytes.
+* `*maxMobDataSize*` is the maximum value for the size of MOB data.
+   The default is 5 kB, expressed in bytes.
+
+
+[[mob.cache.configure]]
+=== Configuring the MOB Cache
+
+
+Because there can be a large number of MOB files at any time, as compared to the number of HFiles,
+MOB files are not always kept open. The MOB file reader cache is an LRU cache which keeps the most
+recently used MOB files open. To configure the MOB file reader's cache on each RegionServer, add
+the following properties to the RegionServer's `hbase-site.xml`, customize the configuration to
+suit your environment, and restart or rolling restart the RegionServer.
+
+.Example MOB Cache Configuration
+====
+[source,xml]
+----
+<property>
+    <name>hbase.mob.file.cache.size</name>
+    <value>1000</value>
+    <description>
+      Number of opened file handlers to cache.
+      A larger value will benefit reads by providing more file handlers per mob
+      file cache and would reduce frequent file opening and closing.
+      However, if this is set too high, it could lead to a "too many opened file handlers" error.
+      The default value is 1000.
+    </description>
+</property>
+<property>
+    <name>hbase.mob.cache.evict.period</name>
+    <value>3600</value>
+    <description>
+      The amount of time in seconds after which an unused file is evicted from the
+      MOB cache. The default value is 3600 seconds.
+    </description>
+</property>
+<property>
+    <name>hbase.mob.cache.evict.remain.ratio</name>
+    <value>0.5f</value>
+    <description>
+      A multiplier (between 0.0 and 1.0) which determines the fraction of files that
+      remain cached after an eviction is triggered by reaching the
+      `hbase.mob.file.cache.size` threshold.
+      The default value is 0.5f, which means that half the files (the least-recently-used
+      ones) are evicted.
+    </description>
+</property>
+----
+====
+
+=== MOB Optimization Tasks
+
+==== Manually Compacting MOB Files
+
+To manually compact MOB files, rather than waiting for the
+<<mob.cache.configure,configuration>> to trigger compaction, use the
+`compact_mob` or `major_compact_mob` HBase shell commands. These commands
+require the first argument to be the table name, and take an optional column
+family as the second argument. If the column family is omitted, all MOB-enabled
+column families are compacted.
+
+----
+hbase> compact_mob 't1', 'c1'
+hbase> compact_mob 't1'
+hbase> major_compact_mob 't1', 'c1'
+hbase> major_compact_mob 't1'
+----
+
+These commands are also available via `Admin.compactMob` and
+`Admin.majorCompactMob` methods.
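+
+A rough sketch of the equivalent client calls is shown below. The exact method
+signatures are an assumption (modeled on `Admin.majorCompact(TableName, byte[])`);
+check the `Admin` javadoc for your release. Imports are omitted, and the table
+and column family names are illustrative.
+
+[source,java]
+----
+// Assumed signatures: compactMob(TableName, byte[]) and majorCompactMob(TableName, byte[]).
+try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
+     Admin admin = connection.getAdmin()) {
+  admin.compactMob(TableName.valueOf("t1"), Bytes.toBytes("c1"));       // minor MOB compaction
+  admin.majorCompactMob(TableName.valueOf("t1"), Bytes.toBytes("c1"));  // major MOB compaction
+}
+----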
+
+==== MOB Sweeper
+
+HBase MOB ships with a MapReduce job called the Sweeper tool for
+optimization. The Sweeper tool coalesces small MOB files or MOB files with many
+deletions or updates. The Sweeper tool is not required if you use native MOB compaction, which
+does not rely on MapReduce.
+
+To configure the Sweeper tool, set the following options:
+
+[source,xml]
+----
+<property>
+    <name>hbase.mob.sweep.tool.compaction.ratio</name>
+    <value>0.5f</value>
+    <description>
+      If there are too many cells deleted in a mob file, it's regarded
+      as an invalid file and needs to be merged.
+      If existingCellsSize/mobFileSize is less than ratio, it's regarded
+      as an invalid file. The default value is 0.5f.
+    </description>
+</property>
+<property>
+    <name>hbase.mob.sweep.tool.compaction.mergeable.size</name>
+    <value>134217728</value>
+    <description>
+      If the size of a mob file is less than this value, it's regarded as a small
+      file and needs to be merged. The default value is 128MB.
+    </description>
+</property>
+<property>
+    <name>hbase.mob.sweep.tool.compaction.memstore.flush.size</name>
+    <value>134217728</value>
+    <description>
+      The flush size for the memstore used by sweep job. Each sweep reducer owns such a memstore.
+      The default value is 128MB.
+    </description>
+</property>
+<property>
+    <name>hbase.master.mob.ttl.cleaner.period</name>
+    <value>86400</value>
+    <description>
+      The period that ExpiredMobFileCleanerChore runs. The unit is second.
+      The default value is one day.
+    </description>
+</property>
+----
+
+Next, add the HBase install directory, _`$HBASE_HOME`/*_, and the HBase library directory to the
+`yarn.application.classpath` property in _yarn-site.xml_. Adjust this example to suit your environment.
+[source,xml]
+----
+<property>
+    <description>Classpath for typical applications.</description>
+    <name>yarn.application.classpath</name>
+    <value>
+        $HADOOP_CONF_DIR,
+        $HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
+        $HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
+        $HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,
+        $HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*,
+        $HBASE_HOME/*, $HBASE_HOME/lib/*
+    </value>
+</property>
+----
+
+Finally, run the `sweeper` tool for each column family which is configured for MOB.
+[source,bash]
+----
+$ hbase org.apache.hadoop.hbase.mob.compactions.Sweeper _tableName_ _familyName_
+----

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/hbck_in_depth.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbck_in_depth.adoc b/src/main/asciidoc/_chapters/hbck_in_depth.adoc
index 1b30c59..1e1f9fb 100644
--- a/src/main/asciidoc/_chapters/hbck_in_depth.adoc
+++ b/src/main/asciidoc/_chapters/hbck_in_depth.adoc
@@ -29,7 +29,7 @@
 :experimental:
 
 HBaseFsck (hbck) is a tool for checking for region consistency and table integrity problems and repairing a corrupted HBase.
-It works in two basic modes -- a read-only inconsistency identifying mode and a multi-phase read-write repair mode. 
+It works in two basic modes -- a read-only inconsistency identifying mode and a multi-phase read-write repair mode.
 
 === Running hbck to identify inconsistencies
 
@@ -42,10 +42,10 @@ $ ./bin/hbase hbck
 ----
 
 At the end of the commands output it prints OK or tells you the number of INCONSISTENCIES present.
-You may also want to run run hbck a few times because some inconsistencies can be transient (e.g.
+You may also want to run hbck a few times because some inconsistencies can be transient (e.g.
 cluster is starting up or a region is splitting). Operationally you may want to run hbck regularly and setup alert (e.g.
 via nagios) if it repeatedly reports inconsistencies . A run of hbck will report a list of inconsistencies along with a brief description of the regions and tables affected.
-The using the `-details` option will report more details including a representative listing of all the splits present in all the tables. 
+Using the `-details` option will report more details, including a representative listing of all the splits present in all the tables.
 
 [source,bourne]
 ----
@@ -66,9 +66,9 @@ $ ./bin/hbase hbck TableFoo TableBar
 === Inconsistencies
 
 If after several runs, inconsistencies continue to be reported, you may have encountered a corruption.
-These should be rare, but in the event they occur newer versions of HBase include the hbck tool enabled with automatic repair options. 
+These should be rare, but in the event they occur newer versions of HBase include the hbck tool enabled with automatic repair options.
 
-There are two invariants that when violated create inconsistencies in HBase: 
+There are two invariants that when violated create inconsistencies in HBase:
 
 * HBase's region consistency invariant is satisfied if every region is assigned and deployed on exactly one region server, and all places where this state kept is in accordance.
 * HBase's table integrity invariant is satisfied if for each table, every possible row key resolves to exactly one region.
@@ -77,20 +77,20 @@ Repairs generally work in three phases -- a read-only information gathering phas
 Starting from version 0.90.0, hbck could detect region consistency problems report on a subset of possible table integrity problems.
 It also included the ability to automatically fix the most common inconsistency, region assignment and deployment consistency problems.
 This repair could be done by using the `-fix` command line option.
-These problems close regions if they are open on the wrong server or on multiple region servers and also assigns regions to region servers if they are not open. 
+This repair closes regions if they are open on the wrong server or on multiple region servers, and also assigns regions to region servers if they are not open.
 
 Starting from HBase versions 0.90.7, 0.92.2 and 0.94.0, several new command line options are introduced to aid repairing a corrupted HBase.
-This hbck sometimes goes by the nickname ``uberhbck''. Each particular version of uber hbck is compatible with the HBase's of the same major version (0.90.7 uberhbck can repair a 0.90.4). However, versions <=0.90.6 and versions <=0.92.1 may require restarting the master or failing over to a backup master. 
+This hbck sometimes goes by the nickname ``uberhbck''. Each particular version of uberhbck is compatible with HBase installations of the same major version (the 0.90.7 uberhbck can repair a 0.90.4 cluster). However, versions <=0.90.6 and versions <=0.92.1 may require restarting the master or failing over to a backup master.
 
 === Localized repairs
 
 When repairing a corrupted HBase, it is best to repair the lowest risk inconsistencies first.
 These are generally region consistency repairs -- localized single region repairs, that only modify in-memory data, ephemeral zookeeper data, or patch holes in the META table.
 Region consistency requires that the HBase instance has the state of the region's data in HDFS (.regioninfo files), the region's row in the hbase:meta table., and region's deployment/assignments on region servers and the master in accordance.
-Options for repairing region consistency include: 
+Options for repairing region consistency include:
 
 * `-fixAssignments` (equivalent to the 0.90 `-fix` option) repairs unassigned, incorrectly assigned or multiply assigned regions.
-* `-fixMeta` which removes meta rows when corresponding regions are not present in HDFS and adds new meta rows if they regions are present in HDFS while not in META.                To fix deployment and assignment problems you can run this command: 
+* `-fixMeta` which removes meta rows when corresponding regions are not present in HDFS and adds new meta rows if the regions are present in HDFS while not in META. To fix deployment and assignment problems you can run this command:
 
 [source,bourne]
 ----
@@ -177,7 +177,7 @@ $ ./bin/hbase hbck -fixMetaOnly -fixAssignments
 ==== Special cases: HBase version file is missing
 
 HBase's data on the file system requires a version file in order to start.
-If this flie is missing, you can use the `-fixVersionFile` option to fabricating a new HBase version file.
+If this file is missing, you can use the `-fixVersionFile` option to fabricate a new HBase version file.
 This assumes that the version of hbck you are running is the appropriate version for the HBase cluster.
 
 ==== Special case: Root and META are corrupt.
@@ -205,8 +205,8 @@ However, there could be some lingering offline split parents sometimes.
 They are in META, in HDFS, and not deployed.
 But HBase can't clean them up.
 In this case, you can use the `-fixSplitParents` option to reset them in META to be online and not split.
-Therefore, hbck can merge them with other regions if fixing overlapping regions option is used. 
+Therefore, hbck can merge them with other regions if fixing overlapping regions option is used.
 
-This option should not normally be used, and it is not in `-fixAll`. 
+This option should not normally be used, and it is not in `-fixAll`.
 
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/mapreduce.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/mapreduce.adoc b/src/main/asciidoc/_chapters/mapreduce.adoc
index 2a42af2..75718fd 100644
--- a/src/main/asciidoc/_chapters/mapreduce.adoc
+++ b/src/main/asciidoc/_chapters/mapreduce.adoc
@@ -33,7 +33,9 @@ A good place to get started with MapReduce is http://hadoop.apache.org/docs/r2.6
 MapReduce version 2 (MR2)is now part of link:http://hadoop.apache.org/docs/r2.3.0/hadoop-yarn/hadoop-yarn-site/[YARN].
 
 This chapter discusses specific configuration steps you need to take to use MapReduce on data within HBase.
-In addition, it discusses other interactions and issues between HBase and MapReduce jobs.
+In addition, it discusses other interactions and issues between HBase and MapReduce
+jobs. Finally, it discusses <<cascading,Cascading>>, an
+link:http://www.cascading.org/[alternative API] for MapReduce.
 
 .`mapred` and `mapreduce`
 [NOTE]
@@ -63,7 +65,7 @@ The dependencies only need to be available on the local `CLASSPATH`.
 The following example runs the bundled HBase link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter] MapReduce job against a table named `usertable`.
 If you have not set the environment variables expected in the command (the parts prefixed by a `$` sign and surrounded by curly braces), you can use the actual system paths instead.
 Be sure to use the correct version of the HBase JAR for your system.
-The backticks (``` symbols) cause ths shell to execute the sub-commands, setting the output of `hbase classpath` (the command to dump HBase CLASSPATH) to `HADOOP_CLASSPATH`.
+The backticks (``` symbols) cause the shell to execute the sub-commands, setting the output of `hbase classpath` (the command to dump HBase CLASSPATH) to `HADOOP_CLASSPATH`.
 This example assumes you use a BASH-compatible shell.
 
 [source,bash]
@@ -277,7 +279,7 @@ That is where the logic for map-task assignment resides.
 
 The following is an example of using HBase as a MapReduce source in read-only manner.
 Specifically, there is a Mapper instance but no Reducer, and nothing is being emitted from the Mapper.
-There job would be defined as follows...
+The job would be defined as follows...
 
 [source,java]
 ----
@@ -590,7 +592,54 @@ public class MyMapper extends TableMapper<Text, LongWritable> {
 == Speculative Execution
 
 It is generally advisable to turn off speculative execution for MapReduce jobs that use HBase as a source.
-This can either be done on a per-Job basis through properties, on on the entire cluster.
+This can either be done on a per-Job basis through properties, or on the entire cluster.
 Especially for longer running jobs, speculative execution will create duplicate map-tasks which will double-write your data to HBase; this is probably not what you want.
 
 See <<spec.ex,spec.ex>> for more information.
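+
+For example, to disable speculative execution for a single job rather than for the whole cluster,
+you can set it on the `Job` before submission. This is a minimal sketch using the standard
+`org.apache.hadoop.mapreduce.Job` API; the rest of the job setup is elided.
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.mapreduce.Job;
+
+Configuration config = HBaseConfiguration.create();
+Job job = Job.getInstance(config, "ExampleReadJob");
+// Disable speculative execution for the map and reduce tasks of this job only, so that
+// duplicate task attempts do not read from (or write to) HBase twice.
+job.setSpeculativeExecution(false);
+// ... the usual TableMapReduceUtil.initTableMapperJob(...) setup follows here ...
+----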
+
+[[cascading]]
+== Cascading
+
+link:http://www.cascading.org/[Cascading] is an alternative API for MapReduce, which
+actually uses MapReduce, but allows you to write your MapReduce code in a simplified
+way.
+
+The following example shows a Cascading `Flow` which "sinks" data into an HBase cluster. The same
+`hBaseTap` API could be used to "source" data as well.
+
+[source, java]
+----
+// read data from the default filesystem
+// emits two fields: "offset" and "line"
+Tap source = new Hfs( new TextLine(), inputFileLhs );
+
+// store data in an HBase cluster
+// accepts fields "num", "lower", and "upper"
+// will automatically scope incoming fields to their proper familyname, "left" or "right"
+Fields keyFields = new Fields( "num" );
+String[] familyNames = {"left", "right"};
+Fields[] valueFields = new Fields[] {new Fields( "lower" ), new Fields( "upper" ) };
+Tap hBaseTap = new HBaseTap( "multitable", new HBaseScheme( keyFields, familyNames, valueFields ), SinkMode.REPLACE );
+
+// a simple pipe assembly to parse the input into fields
+// a real app would likely chain multiple Pipes together for more complex processing
+Pipe parsePipe = new Each( "insert", new Fields( "line" ), new RegexSplitter( new Fields( "num", "lower", "upper" ), " " ) );
+
+// "plan" a cluster executable Flow
+// this connects the source Tap and hBaseTap (the sink Tap) to the parsePipe
+Flow parseFlow = new FlowConnector( properties ).connect( source, hBaseTap, parsePipe );
+
+// start the flow, and block until complete
+parseFlow.complete();
+
+// open an iterator on the HBase table we stuffed data into
+TupleEntryIterator iterator = parseFlow.openSink();
+
+while(iterator.hasNext())
+  {
+  // print out each tuple from HBase
+  System.out.println( "iterator.next() = " + iterator.next() );
+  }
+
+iterator.close();
+----

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/ops_mgt.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc b/src/main/asciidoc/_chapters/ops_mgt.adoc
index a4dbccb..e8d44eb 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -199,7 +199,7 @@ $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.tool.Canary -t 600000
 
 By default, the canary tool only check the read operations, it's hard to find the problem in the
 write path. To enable the write sniffing, you can run canary with the `-writeSniffing` option.
-When the write sniffing is enabled, the canary tool will create a hbase table and make sure the
+When the write sniffing is enabled, the canary tool will create an HBase table and make sure the
 regions of the table distributed on all region servers. In each sniffing period, the canary will
 try to put data to these regions to check the write availability of each region server.
 ----
@@ -351,7 +351,7 @@ You can invoke it via the HBase cli with the 'wal' command.
 [NOTE]
 ====
 Prior to version 2.0, the WAL Pretty Printer was called the `HLogPrettyPrinter`, after an internal name for HBase's write ahead log.
-In those versions, you can pring the contents of a WAL using the same configuration as above, but with the 'hlog' command.
+In those versions, you can print the contents of a WAL using the same configuration as above, but with the 'hlog' command.
 
 ----
  $ ./bin/hbase hlog hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
@@ -523,7 +523,7 @@ row9	c1	c2
 row10	c1	c2
 ----
 
-For ImportTsv to use this imput file, the command line needs to look like this:
+For ImportTsv to use this input file, the command line needs to look like this:
 
 ----
 
@@ -637,10 +637,14 @@ See link:https://issues.apache.org/jira/browse/HBASE-4391[HBASE-4391 Add ability
 [[compaction.tool]]
 === Offline Compaction Tool
 
-See the usage for the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/CompactionTool.html[Compaction
-          Tool].
-Run it like this +./bin/hbase
-          org.apache.hadoop.hbase.regionserver.CompactionTool+
+See the usage for the
+link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/CompactionTool.html[CompactionTool].
+Run it like:
+
+[source, bash]
+----
+$ ./bin/hbase org.apache.hadoop.hbase.regionserver.CompactionTool
+----
 
 === `hbase clean`
 
@@ -777,7 +781,7 @@ To decommission a loaded RegionServer, run the following: +$
 ====
 The `HOSTNAME` passed to _graceful_stop.sh_ must match the hostname that hbase is using to identify RegionServers.
 Check the list of RegionServers in the master UI for how HBase is referring to servers.
-Its usually hostname but can also be FQDN.
+It's usually the hostname but can also be the FQDN.
 Whatever HBase is using, this is what you should pass the _graceful_stop.sh_ decommission script.
 If you pass IPs, the script is not yet smart enough to make a hostname (or FQDN) of it and so it will fail when it checks if server is currently running; the graceful unloading of regions will not run.
 ====
@@ -817,12 +821,12 @@ Hence, it is better to manage the balancer apart from `graceful_stop` reenabling
 [[draining.servers]]
 ==== Decommissioning several Regions Servers concurrently
 
-If you have a large cluster, you may want to decommission more than one machine at a time by gracefully stopping mutiple RegionServers concurrently.
+If you have a large cluster, you may want to decommission more than one machine at a time by gracefully stopping multiple RegionServers concurrently.
 To gracefully drain multiple regionservers at the same time, RegionServers can be put into a "draining" state.
 This is done by marking a RegionServer as a draining node by creating an entry in ZooKeeper under the _hbase_root/draining_ znode.
 This znode has format `name,port,startcode` just like the regionserver entries under _hbase_root/rs_ znode.
 
-Without this facility, decommissioning mulitple nodes may be non-optimal because regions that are being drained from one region server may be moved to other regionservers that are also draining.
+Without this facility, decommissioning multiple nodes may be non-optimal because regions that are being drained from one region server may be moved to other regionservers that are also draining.
 Marking RegionServers to be in the draining state prevents this from happening.
 See this link:http://inchoate-clatter.blogspot.com/2012/03/hbase-ops-automation.html[blog
             post] for more details.
@@ -987,7 +991,7 @@ To configure metrics for a given region server, edit the _conf/hadoop-metrics2-h
 Restart the region server for the changes to take effect.
 
 To change the sampling rate for the default sink, edit the line beginning with `*.period`.
-To filter which metrics are emitted or to extend the metrics framework, see link:http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html
+To filter which metrics are emitted or to extend the metrics framework, see http://hadoop.apache.org/docs/current/api/org/apache/hadoop/metrics2/package-summary.html
 
 .HBase Metrics and Ganglia
 [NOTE]
@@ -1010,15 +1014,15 @@ Rather than listing each metric which HBase emits by default, you can browse thr
 Different metrics are exposed for the Master process and each region server process.
 
 .Procedure: Access a JSON Output of Available Metrics
-. After starting HBase, access the region server's web UI, at `http://REGIONSERVER_HOSTNAME:60030` by default (or port 16030 in HBase 1.0+).
+. After starting HBase, access the region server's web UI, at pass:[http://REGIONSERVER_HOSTNAME:60030] by default (or port 16030 in HBase 1.0+).
 . Click the [label]#Metrics Dump# link near the top.
   The metrics for the region server are presented as a dump of the JMX bean in JSON format.
   This will dump out all metrics names and their values.
-  To include metrics descriptions in the listing -- this can be useful when you are exploring what is available -- add a query string of `?description=true` so your URL becomes `http://REGIONSERVER_HOSTNAME:60030/jmx?description=true`.
+  To include metrics descriptions in the listing -- this can be useful when you are exploring what is available -- add a query string of `?description=true` so your URL becomes pass:[http://REGIONSERVER_HOSTNAME:60030/jmx?description=true].
   Not all beans and attributes have descriptions.
-. To view metrics for the Master, connect to the Master's web UI instead (defaults to `http://localhost:60010` or port 16010 in HBase 1.0+) and click its [label]#Metrics
+. To view metrics for the Master, connect to the Master's web UI instead (defaults to pass:[http://localhost:60010] or port 16010 in HBase 1.0+) and click its [label]#Metrics
   Dump# link.
-  To include metrics descriptions in the listing -- this can be useful when you are exploring what is available -- add a query string of `?description=true` so your URL becomes `http://REGIONSERVER_HOSTNAME:60010/jmx?description=true`.
+  To include metrics descriptions in the listing -- this can be useful when you are exploring what is available -- add a query string of `?description=true` so your URL becomes pass:[http://REGIONSERVER_HOSTNAME:60010/jmx?description=true].
   Not all beans and attributes have descriptions.
 
 
@@ -1252,7 +1256,8 @@ Have a look in the Web UI.
 
 == Cluster Replication
 
-NOTE: This information was previously available at link:http://hbase.apache.org/replication.html[Cluster Replication].
+NOTE: This information was previously available at
+link:http://hbase.apache.org#replication[Cluster Replication].
 
 HBase provides a cluster replication mechanism which allows you to keep one cluster's state synchronized with that of another cluster, using the write-ahead log (WAL) of the source cluster to propagate the changes.
 Some use cases for cluster replication include:
@@ -1332,13 +1337,13 @@ list_peers:: list all replication relationships known by this cluster
 enable_peer <ID>::
   Enable a previously-disabled replication relationship
 disable_peer <ID>::
-  Disable a replication relationship. HBase will no longer send edits to that peer cluster, but it still keeps track of all the new WALs that it will need to replicate if and when it is re-enabled. 
+  Disable a replication relationship. HBase will no longer send edits to that peer cluster, but it still keeps track of all the new WALs that it will need to replicate if and when it is re-enabled.
 remove_peer <ID>::
   Disable and remove a replication relationship. HBase will no longer send edits to that peer cluster or keep track of WALs.
 enable_table_replication <TABLE_NAME>::
-  Enable the table replication switch for all it's column families. If the table is not found in the destination cluster then it will create one with the same name and column families. 
+  Enable the table replication switch for all its column families. If the table is not found in the destination cluster then it will create one with the same name and column families.
 disable_table_replication <TABLE_NAME>::
-  Disable the table replication switch for all it's column families. 
+  Disable the table replication switch for all its column families.
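+
+The table-replication commands can also be driven from Java. The following is a minimal, hedged
+sketch assuming the `org.apache.hadoop.hbase.client.replication.ReplicationAdmin` API and its
+`enableTableRep`/`disableTableRep` methods (present in HBase 1.1+); verify the method names
+against the Javadoc of your release.
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.replication.ReplicationAdmin;
+
+Configuration conf = HBaseConfiguration.create();
+try (ReplicationAdmin replicationAdmin = new ReplicationAdmin(conf)) {
+  // Equivalent to the shell command: enable_table_replication 'example_table'
+  replicationAdmin.enableTableRep(TableName.valueOf("example_table"));
+  // Equivalent to the shell command: disable_table_replication 'example_table'
+  replicationAdmin.disableTableRep(TableName.valueOf("example_table"));
+}
+----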
 
 === Verifying Replicated Data
 
@@ -1457,7 +1462,7 @@ Speed is also limited by total size of the list of edits to replicate per slave,
 With this configuration, a master cluster region server with three slaves would use at most 192 MB to store data to replicate.
 This does not account for the data which was filtered but not garbage collected.
 
-Once the maximum size of edits has been buffered or the reader reaces the end of the WAL, the source thread stops reading and chooses at random a sink to replicate to (from the list that was generated by keeping only a subset of slave region servers). It directly issues a RPC to the chosen region server and waits for the method to return.
+Once the maximum size of edits has been buffered or the reader reaches the end of the WAL, the source thread stops reading and chooses at random a sink to replicate to (from the list that was generated by keeping only a subset of slave region servers). It directly issues a RPC to the chosen region server and waits for the method to return.
 If the RPC was successful, the source determines whether the current file has been emptied or it contains more data which needs to be read.
 If the file has been emptied, the source deletes the znode in the queue.
 Otherwise, it registers the new offset in the log's znode.
@@ -1630,6 +1635,197 @@ You can use the HBase Shell command `status 'replication'` to monitor the replic
 * `status 'replication', 'source'` -- prints the status for each replication source, sorted by hostname.
 * `status 'replication', 'sink'` -- prints the status for each replication sink, sorted by hostname.
 
+== Running Multiple Workloads On a Single Cluster
+
+HBase provides the following mechanisms for managing the performance of a cluster
+handling multiple workloads:
+
+. <<quota>>
+. <<request_queues>>
+. <<multiple-typed-queues>>
+
+[[quota]]
+=== Quotas
+HBASE-11598 introduces quotas, which allow you to throttle requests based on
+the following limits:
+
+. <<request-quotas,The number or size of requests (read, write, or read+write) in a given timeframe>>
+. <<namespace_quotas,The number of tables allowed in a namespace>>
+
+These limits can be enforced for a specified user, table, or namespace.
+
+.Enabling Quotas
+
+Quotas are disabled by default. To enable the feature, set the `hbase.quota.enabled`
+property to `true` in the _hbase-site.xml_ file for all cluster nodes.
+
+.General Quota Syntax
+. THROTTLE_TYPE can be expressed as READ, WRITE, or the default type (read + write).
+. Timeframes can be expressed in the following units: `sec`, `min`, `hour`, `day`
+. Request sizes can be expressed in the following units: `B` (bytes), `K` (kilobytes),
+`M` (megabytes), `G` (gigabytes), `T` (terabytes), `P` (petabytes)
+. Numbers of requests are expressed as an integer followed by the string `req`
+. Limits relating to time are expressed as req/time or size/time. For instance `10req/day`
+or `100P/hour`.
+. Numbers of tables or regions are expressed as integers.
+
+[[request-quotas]]
+.Setting Request Quotas
+You can set quota rules ahead of time, or you can change the throttle at runtime. The change
+will propagate after the quota refresh period has expired. This expiration period
+defaults to 5 minutes. To change it, modify the `hbase.quota.refresh.period` property
+in `hbase-site.xml`. This property is expressed in milliseconds and defaults to `300000`.
+
+----
+# Limit user u1 to 10 requests per second
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10req/sec'
+
+# Limit user u1 to 10 read requests per second
+hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => READ, USER => 'u1', LIMIT => '10req/sec'
+
+# Limit user u1 to 10 M per day everywhere
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10M/day'
+
+# Limit user u1 to 10 M write size per sec
+hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => WRITE, USER => 'u1', LIMIT => '10M/sec'
+
+# Limit user u1 to 5k per minute on table t2
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', TABLE => 't2', LIMIT => '5K/min'
+
+# Limit user u1 to 10 read requests per sec on table t2
+hbase> set_quota TYPE => THROTTLE, THROTTLE_TYPE => READ, USER => 'u1', TABLE => 't2', LIMIT => '10req/sec'
+
+# Remove an existing limit from user u1 on namespace ns2
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', NAMESPACE => 'ns2', LIMIT => NONE
+
+# Limit all users to 10 requests per hour on namespace ns1
+hbase> set_quota TYPE => THROTTLE, NAMESPACE => 'ns1', LIMIT => '10req/hour'
+
+# Limit all users to 10 T per hour on table t1
+hbase> set_quota TYPE => THROTTLE, TABLE => 't1', LIMIT => '10T/hour'
+
+# Remove all existing limits from user u1
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => NONE
+
+# List all quotas for user u1 in namespace ns2
+hbase> list_quotas USER => 'u1', NAMESPACE => 'ns2'
+
+# List all quotas for namespace ns2
+hbase> list_quotas NAMESPACE => 'ns2'
+
+# List all quotas for table t1
+hbase> list_quotas TABLE => 't1'
+
+# list all quotas
+hbase> list_quotas
+----
+
+You can also place a global limit and exclude a user or a table from the limit by applying the
+`GLOBAL_BYPASS` property.
+----
+hbase> set_quota NAMESPACE => 'ns1', LIMIT => '100req/min'               # a per-namespace request limit
+hbase> set_quota USER => 'u1', GLOBAL_BYPASS => true                     # user u1 is not affected by the limit
+----
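+
+Quotas can also be managed programmatically. The following is a minimal sketch using
+`org.apache.hadoop.hbase.quotas.QuotaSettingsFactory` together with `Admin.setQuota`, assuming
+the quota API introduced by HBASE-11598; consult the Javadoc of your version for the available
+factory methods.
+
+[source,java]
+----
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.quotas.QuotaSettingsFactory;
+import org.apache.hadoop.hbase.quotas.ThrottleType;
+
+Configuration conf = HBaseConfiguration.create();
+try (Connection connection = ConnectionFactory.createConnection(conf);
+     Admin admin = connection.getAdmin()) {
+  // Equivalent to: set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10req/sec'
+  admin.setQuota(
+      QuotaSettingsFactory.throttleUser("u1", ThrottleType.REQUEST_NUMBER, 10, TimeUnit.SECONDS));
+  // Equivalent to: set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => NONE
+  admin.setQuota(QuotaSettingsFactory.unthrottleUser("u1"));
+}
+----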
+
+[[namespace_quotas]]
+.Setting Namespace Quotas
+
+You can specify the maximum number of tables or regions allowed in a given namespace, either
+when you create the namespace or by altering an existing namespace, by setting the
+`hbase.namespace.quota.maxtables` property on the namespace.
+
+.Limiting Tables Per Namespace
+----
+# Create a namespace with a max of 5 tables
+hbase> create_namespace 'ns1', {'hbase.namespace.quota.maxtables'=>'5'}
+
+# Alter an existing namespace to have a max of 8 tables
+hbase> alter_namespace 'ns2', {METHOD => 'set', 'hbase.namespace.quota.maxtables'=>'8'}
+
+# Show quota information for a namespace
+hbase> describe_namespace 'ns2'
+
+# Alter an existing namespace to remove a quota
+hbase> alter_namespace 'ns2', {METHOD => 'unset', NAME=>'hbase.namespace.quota.maxtables'}
+----
+
+.Limiting Regions Per Namespace
+----
+# Create a namespace with a max of 10 regions
+hbase> create_namespace 'ns1', {'hbase.namespace.quota.maxregions'=>'10'}
+
+# Show quota information for a namespace
+hbase> describe_namespace 'ns1'
+
+# Alter an existing namespace to have a max of 20 regions
+hbase> alter_namespace 'ns2', {METHOD => 'set', 'hbase.namespace.quota.maxregions'=>'20'}
+
+# Alter an existing namespace to remove a quota
+hbase> alter_namespace 'ns2', {METHOD => 'unset', NAME=> 'hbase.namespace.quota.maxregions'}
+----
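+
+The same namespace properties can be set from Java when creating or altering a namespace. This is
+a minimal sketch using `NamespaceDescriptor` and the `Admin` API, assuming the property names
+shown above.
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+
+Configuration conf = HBaseConfiguration.create();
+try (Connection connection = ConnectionFactory.createConnection(conf);
+     Admin admin = connection.getAdmin()) {
+  // Equivalent to: create_namespace 'ns1', {'hbase.namespace.quota.maxregions'=>'10'}
+  NamespaceDescriptor ns1 = NamespaceDescriptor.create("ns1")
+      .addConfiguration("hbase.namespace.quota.maxregions", "10")
+      .build();
+  admin.createNamespace(ns1);
+}
+----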
+
+[[request_queues]]
+=== Request Queues
+If no throttling policy is configured, when the RegionServer receives multiple requests,
+they are now placed into a queue waiting for a free execution slot (HBASE-6721).
+The simplest queue is a FIFO queue, where each request waits for all previous requests in the queue
+to finish before running. Fast or interactive queries can get stuck behind large requests.
+
+If you are able to guess how long a request will take, you can reorder requests by
+pushing the long requests to the end of the queue and allowing short requests to preempt
+them. Eventually, you must still execute the large requests and prioritize the new
+requests behind them. The short requests will be newer, so the result is not terrible,
+but still suboptimal compared to a mechanism which allows large requests to be split
+into multiple smaller ones.
+
+HBASE-10993 introduces such a system for deprioritizing long-running scanners. There
+are two types of queues, `fifo` and `deadline`. To configure the type of queue used,
+configure the `hbase.ipc.server.callqueue.type` property in `hbase-site.xml`. There
+is no way to estimate how long each request may take, so de-prioritization only affects
+scans, and is based on the number of “next” calls a scan request has made. An assumption
+is made that when you are doing a full table scan, your job is not likely to be interactive,
+so if there are concurrent requests, you can delay long-running scans up to a limit tunable by
+setting the `hbase.ipc.server.queue.max.call.delay` property. The slope of the delay is calculated
+by a simple square root of `(numNextCall * weight)` where the weight is
+configurable by setting the `hbase.ipc.server.scan.vtime.weight` property.
+
+[[multiple-typed-queues]]
+=== Multiple-Typed Queues
+
+You can also prioritize or deprioritize different kinds of requests by configuring
+a specified number of dedicated handlers and queues. You can segregate the scan requests
+in a single queue with a single handler, and all the other available queues can service
+short `Get` requests.
+
+You can adjust the IPC queues and handlers based on the type of workload, using static
+tuning options. This approach is an interim first step that will eventually allow
+you to change the settings at runtime, and to dynamically adjust values based on the load.
+
+.Multiple Queues
+
+To avoid contention and separate different kinds of requests, configure the
+`hbase.ipc.server.callqueue.handler.factor` property, which allows you to increase the number of
+queues and to control how many handlers share the same queue.
+
+Using more queues reduces contention when adding a task to a queue or selecting it
+from a queue. You can even configure one queue per handler. The trade-off is that
+if some queues contain long-running tasks, a handler may need to wait to execute from that queue
+rather than stealing from another queue which has waiting tasks.
+
+.Read and Write Queues
+With multiple queues, you can now divide read and write requests, giving more priority
+(more queues) to one or the other type. Use the `hbase.ipc.server.callqueue.read.ratio`
+property to choose to serve more reads or more writes.
+
+.Get and Scan Queues
+Similar to the read/write split, you can split gets and scans by tuning the `hbase.ipc.server.callqueue.scan.ratio`
+property to give more priority to gets or to scans. A scan ratio of `0.1` will give
+more queue/handlers to the incoming gets, which means that more gets can be processed
+at the same time and that fewer scans can be executed at the same time. A value of
+`0.9` will give more queue/handlers to scans, so the number of scans executed will
+increase and the number of gets will decrease.
+
+
 [[ops.backup]]
 == HBase Backup
 
@@ -1853,7 +2049,7 @@ Aside from the disk space necessary to store the data, one RS may not be able to
 [[ops.capacity.nodes.throughput]]
 ==== Read/Write throughput
 
-Number of nodes can also be driven by required thoughput for reads and/or writes.
+Number of nodes can also be driven by required throughput for reads and/or writes.
 The throughput one can get per node depends a lot on data (esp.
 key/value sizes) and request patterns, as well as node and system configuration.
 Planning should be done for peak load if it is likely that the load would be the main driver of the increase of the node count.
@@ -2018,7 +2214,7 @@ or in code it would be as follows:
 
 [source,java]
 ----
-void rename(Admin admin, String oldTableName, String newTableName) {
+void rename(Admin admin, String oldTableName, TableName newTableName) {
   String snapshotName = randomName();
   admin.disableTable(oldTableName);
   admin.snapshot(snapshotName, oldTableName);

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/other_info.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/other_info.adoc b/src/main/asciidoc/_chapters/other_info.adoc
index 046b747..6143876 100644
--- a/src/main/asciidoc/_chapters/other_info.adoc
+++ b/src/main/asciidoc/_chapters/other_info.adoc
@@ -31,50 +31,50 @@
 [[other.info.videos]]
 === HBase Videos
 
-.Introduction to HBase 
-* link:http://www.cloudera.com/content/cloudera/en/resources/library/presentation/chicago_data_summit_apache_hbase_an_introduction_todd_lipcon.html[Introduction to HBase] by Todd Lipcon (Chicago Data Summit 2011). 
-* link:http://www.cloudera.com/videos/intorduction-hbase-todd-lipcon[Introduction to HBase] by Todd Lipcon (2010).         
-link:http://www.cloudera.com/videos/hadoop-world-2011-presentation-video-building-realtime-big-data-services-at-facebook-with-hadoop-and-hbase[Building Real Time Services at Facebook with HBase] by Jonathan Gray (Hadoop World 2011). 
+.Introduction to HBase
+* link:http://www.cloudera.com/content/cloudera/en/resources/library/presentation/chicago_data_summit_apache_hbase_an_introduction_todd_lipcon.html[Introduction to HBase] by Todd Lipcon (Chicago Data Summit 2011).
+* link:http://www.cloudera.com/videos/intorduction-hbase-todd-lipcon[Introduction to HBase] by Todd Lipcon (2010).
+link:http://www.cloudera.com/videos/hadoop-world-2011-presentation-video-building-realtime-big-data-services-at-facebook-with-hadoop-and-hbase[Building Real Time Services at Facebook with HBase] by Jonathan Gray (Hadoop World 2011).
 
-link:http://www.cloudera.com/videos/hw10_video_how_stumbleupon_built_and_advertising_platform_using_hbase_and_hadoop[HBase and Hadoop, Mixing Real-Time and Batch Processing at StumbleUpon] by JD Cryans (Hadoop World 2010). 
+link:http://www.cloudera.com/videos/hw10_video_how_stumbleupon_built_and_advertising_platform_using_hbase_and_hadoop[HBase and Hadoop, Mixing Real-Time and Batch Processing at StumbleUpon] by JD Cryans (Hadoop World 2010).
 
 [[other.info.pres]]
 === HBase Presentations (Slides)
 
-link:http://www.cloudera.com/content/cloudera/en/resources/library/hadoopworld/hadoop-world-2011-presentation-video-advanced-hbase-schema-design.html[Advanced HBase Schema Design] by Lars George (Hadoop World 2011). 
+link:http://www.cloudera.com/content/cloudera/en/resources/library/hadoopworld/hadoop-world-2011-presentation-video-advanced-hbase-schema-design.html[Advanced HBase Schema Design] by Lars George (Hadoop World 2011).
 
-link:http://www.slideshare.net/cloudera/chicago-data-summit-apache-hbase-an-introduction[Introduction to HBase] by Todd Lipcon (Chicago Data Summit 2011). 
+link:http://www.slideshare.net/cloudera/chicago-data-summit-apache-hbase-an-introduction[Introduction to HBase] by Todd Lipcon (Chicago Data Summit 2011).
 
-link:http://www.slideshare.net/cloudera/hw09-practical-h-base-getting-the-most-from-your-h-base-install[Getting The Most From Your HBase Install] by Ryan Rawson, Jonathan Gray (Hadoop World 2009). 
+link:http://www.slideshare.net/cloudera/hw09-practical-h-base-getting-the-most-from-your-h-base-install[Getting The Most From Your HBase Install] by Ryan Rawson, Jonathan Gray (Hadoop World 2009).
 
 [[other.info.papers]]
 === HBase Papers
 
-link:http://research.google.com/archive/bigtable.html[BigTable] by Google (2006). 
+link:http://research.google.com/archive/bigtable.html[BigTable] by Google (2006).
 
-link:http://www.larsgeorge.com/2010/05/hbase-file-locality-in-hdfs.html[HBase and HDFS Locality] by Lars George (2010). 
+link:http://www.larsgeorge.com/2010/05/hbase-file-locality-in-hdfs.html[HBase and HDFS Locality] by Lars George (2010).
 
-link:http://ianvarley.com/UT/MR/Varley_MastersReport_Full_2009-08-07.pdf[No Relation: The Mixed Blessings of Non-Relational Databases] by Ian Varley (2009). 
+link:http://ianvarley.com/UT/MR/Varley_MastersReport_Full_2009-08-07.pdf[No Relation: The Mixed Blessings of Non-Relational Databases] by Ian Varley (2009).
 
 [[other.info.sites]]
 === HBase Sites
 
-link:http://www.cloudera.com/blog/category/hbase/[Cloudera's HBase Blog] has a lot of links to useful HBase information. 
+link:http://www.cloudera.com/blog/category/hbase/[Cloudera's HBase Blog] has a lot of links to useful HBase information.
 
-* link:http://www.cloudera.com/blog/2010/04/cap-confusion-problems-with-partition-tolerance/[CAP Confusion] is a relevant entry for background information on distributed storage systems.        
+* link:http://www.cloudera.com/blog/2010/04/cap-confusion-problems-with-partition-tolerance/[CAP Confusion] is a relevant entry for background information on distributed storage systems.
 
-link:http://wiki.apache.org/hadoop/HBase/HBasePresentations[HBase Wiki] has a page with a number of presentations. 
+link:http://wiki.apache.org/hadoop/HBase/HBasePresentations[HBase Wiki] has a page with a number of presentations.
 
-link:http://refcardz.dzone.com/refcardz/hbase[HBase RefCard] from DZone. 
+link:http://refcardz.dzone.com/refcardz/hbase[HBase RefCard] from DZone.
 
 [[other.info.books]]
 === HBase Books
 
-link:http://shop.oreilly.com/product/0636920014348.do[HBase:  The Definitive Guide] by Lars George. 
+link:http://shop.oreilly.com/product/0636920014348.do[HBase:  The Definitive Guide] by Lars George.
 
 [[other.info.books.hadoop]]
 === Hadoop Books
 
-link:http://shop.oreilly.com/product/9780596521981.do[Hadoop:  The Definitive Guide] by Tom White. 
+link:http://shop.oreilly.com/product/9780596521981.do[Hadoop:  The Definitive Guide] by Tom White.
 
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/performance.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/performance.adoc b/src/main/asciidoc/_chapters/performance.adoc
index 526fd01..5155f0a 100644
--- a/src/main/asciidoc/_chapters/performance.adoc
+++ b/src/main/asciidoc/_chapters/performance.adoc
@@ -88,7 +88,7 @@ Multiple rack configurations carry the same potential issues as multiple switche
 * Poor switch capacity performance
 * Insufficient uplink to another rack
 
-If the the switches in your rack have appropriate switching capacity to handle all the hosts at full speed, the next most likely issue will be caused by homing more of your cluster across racks.
+If the switches in your rack have appropriate switching capacity to handle all the hosts at full speed, the next most likely issue will be caused by homing more of your cluster across racks.
 The easiest way to avoid issues when spanning multiple racks is to use port trunking to create a bonded uplink to other racks.
 The downside of this method however, is in the overhead of ports that could potentially be used.
 An example of this is, creating an 8Gbps port channel from rack A to rack B, using 8 of your 24 ports to communicate between racks gives you a poor ROI, using too few however can mean you're not getting the most out of your cluster.
@@ -102,14 +102,14 @@ Are all the network interfaces functioning correctly? Are you sure? See the Trou
 
 [[perf.network.call_me_maybe]]
 === Network Consistency and Partition Tolerance
-The link:http://en.wikipedia.org/wiki/CAP_theorem[CAP Theorem] states that a distributed system can maintain two out of the following three charateristics: 
-- *C*onsistency -- all nodes see the same data. 
+The link:http://en.wikipedia.org/wiki/CAP_theorem[CAP Theorem] states that a distributed system can maintain two out of the following three characteristics:
+- *C*onsistency -- all nodes see the same data.
 - *A*vailability -- every request receives a response about whether it succeeded or failed.
 - *P*artition tolerance -- the system continues to operate even if some of its components become unavailable to the others.
 
-HBase favors consistency and partition tolerance, where a decision has to be made. Coda Hale explains why partition tolerance is so important, in http://codahale.com/you-cant-sacrifice-partition-tolerance/. 
+HBase favors consistency and partition tolerance, where a decision has to be made. Coda Hale explains why partition tolerance is so important, in http://codahale.com/you-cant-sacrifice-partition-tolerance/.
 
-Robert Yokota used an automated testing framework called link:https://aphyr.com/tags/jepsen[Jepson] to test HBase's partition tolerance in the face of network partitions, using techniques modeled after Aphyr's link:https://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions[Call Me Maybe] series. The results, available as a link:http://old.eng.yammer.com/call-me-maybe-hbase/[blog post] and an link:http://old.eng.yammer.com/call-me-maybe-hbase-addendum/[addendum], show that HBase performs correctly.
+Robert Yokota used an automated testing framework called link:https://aphyr.com/tags/jepsen[Jepsen] to test HBase's partition tolerance in the face of network partitions, using techniques modeled after Aphyr's link:https://aphyr.com/posts/281-call-me-maybe-carly-rae-jepsen-and-the-perils-of-network-partitions[Call Me Maybe] series. The results, available as a link:https://rayokota.wordpress.com/2015/09/30/call-me-maybe-hbase/[blog post] and an link:https://rayokota.wordpress.com/2015/09/30/call-me-maybe-hbase-addendum/[addendum], show that HBase performs correctly.
 
 [[jvm]]
 == Java
@@ -196,7 +196,8 @@ tableDesc.addFamily(cfDesc);
 ----
 ====
 
-See the API documentation for link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig].
+See the API documentation for
+link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig].
 
 [[perf.rs.memstore.size]]
 === `hbase.regionserver.global.memstore.size`
@@ -546,7 +547,7 @@ To disable the WAL, see <<wal.disable>>.
 === HBase Client: Group Puts by RegionServer
 
 In addition to using the writeBuffer, grouping `Put`s by RegionServer can reduce the number of client RPC calls per writeBuffer flush.
-There is a utility `HTableUtil` currently on TRUNK that does this, but you can either copy that or implement your own version for those still on 0.90.x or earlier.
+There is a utility `HTableUtil` currently on MASTER that does this, but you can copy it or implement your own version if you are still on 0.90.x or earlier.
 
 [[perf.hbase.write.mr.reducer]]
 === MapReduce: Skip The Reducer
@@ -555,7 +556,7 @@ When writing a lot of data to an HBase table from a MR job (e.g., with link:http
 When a Reducer step is used, all of the output (Puts) from the Mapper will get spooled to disk, then sorted/shuffled to other Reducers that will most likely be off-node.
 It's far more efficient to just write directly to HBase.
 
-For summary jobs where HBase is used as a source and a sink, then writes will be coming from the Reducer step (e.g., summarize values then write out result). This is a different processing problem than from the the above case.
+For summary jobs where HBase is used as a source and a sink, then writes will be coming from the Reducer step (e.g., summarize values then write out result). This is a different processing problem than from the above case.
 
 [[perf.one.region]]
 === Anti-Pattern: One Hot Region
@@ -564,7 +565,7 @@ If all your data is being written to one region at a time, then re-read the sect
 
 Also, if you are pre-splitting regions and all your data is _still_ winding up in a single region even though your keys aren't monotonically increasing, confirm that your keyspace actually works with the split strategy.
 There are a variety of reasons that regions may appear "well split" but won't work with your data.
-As the HBase client communicates directly with the RegionServers, this can be obtained via link:hhttp://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#getRegionLocation(byte[])[Table.getRegionLocation].
+As the HBase client communicates directly with the RegionServers, this can be obtained via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#getRegionLocation(byte%5B%5D)[Table.getRegionLocation].
 
 See <<precreate.regions>>, as well as <<perf.configurations>>
 
@@ -606,7 +607,7 @@ When columns are selected explicitly with `scan.addColumn`, HBase will schedule
 When rows have few columns and each column has only a few versions this can be inefficient.
 A seek operation is generally slower if does not seek at least past 5-10 columns/versions or 512-1024 bytes.
 
-In order to opportunistically look ahead a few columns/versions to see if the next column/version can be found that way before a seek operation is scheduled, a new attribute `Scan.HINT_LOOKAHEAD` can be set the on Scan object.
+In order to opportunistically look ahead a few columns/versions to see if the next column/version can be found that way before a seek operation is scheduled, a new attribute `Scan.HINT_LOOKAHEAD` can be set on the Scan object.
 The following code instructs the RegionServer to attempt two iterations of next before a seek is scheduled:
 
 [source,java]
@@ -676,7 +677,7 @@ Enabling Bloom Filters can save your having to go to disk and can help improve r
 link:http://en.wikipedia.org/wiki/Bloom_filter[Bloom filters] were developed over in link:https://issues.apache.org/jira/browse/HBASE-1200[HBase-1200 Add bloomfilters].
 For description of the development process -- why static blooms rather than dynamic -- and for an overview of the unique properties that pertain to blooms in HBase, as well as possible future directions, see the _Development Process_ section of the document link:https://issues.apache.org/jira/secure/attachment/12444007/Bloom_Filters_in_HBase.pdf[BloomFilters in HBase] attached to link:https://issues.apache.org/jira/browse/HBASE-1200[HBASE-1200].
 The bloom filters described here are actually version two of blooms in HBase.
-In versions up to 0.19.x, HBase had a dynamic bloom option based on work done by the link:http://www.one-lab.org[European Commission One-Lab Project 034819].
+In versions up to 0.19.x, HBase had a dynamic bloom option based on work done by the link:http://www.one-lab.org/[European Commission One-Lab Project 034819].
 The core of the HBase bloom work was later pulled up into Hadoop to implement org.apache.hadoop.io.BloomMapFile.
 Version 1 of HBase blooms never worked that well.
 Version 2 is a rewrite from scratch though again it starts with the one-lab work.
@@ -730,7 +731,7 @@ However, if hedged reads are enabled, the client waits some configurable amount
 Whichever read returns first is used, and the other read request is discarded.
 Hedged reads can be helpful for times where a rare slow read is caused by a transient error such as a failing disk or flaky network connection.
 
-Because a HBase RegionServer is a HDFS client, you can enable hedged reads in HBase, by adding the following properties to the RegionServer's hbase-site.xml and tuning the values to suit your environment.
+Because an HBase RegionServer is an HDFS client, you can enable hedged reads in HBase by adding the following properties to the RegionServer's hbase-site.xml and tuning the values to suit your environment.
 
 .Configuration for Hedged Reads
 * `dfs.client.hedged.read.threadpool.size` - the number of threads dedicated to servicing hedged reads.
@@ -781,7 +782,8 @@ Be aware that `Table.delete(Delete)` doesn't use the writeBuffer.
 It will execute an RegionServer RPC with each invocation.
 For a large number of deletes, consider `Table.delete(List)`.
 
-See http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete%28org.apache.hadoop.hbase.client.Delete%29
+See
++++<a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete%28org.apache.hadoop.hbase.client.Delete%29">hbase.client.Delete</a>+++.
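+
+The following is a minimal sketch of batching deletes with `Table.delete(List)`, which groups the
+deletes into batched RegionServer calls instead of issuing one RPC per `Delete`.
+
+[source,java]
+----
+import java.util.ArrayList;
+import java.util.List;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
+
+Configuration conf = HBaseConfiguration.create();
+try (Connection connection = ConnectionFactory.createConnection(conf);
+     Table table = connection.getTable(TableName.valueOf("myTable"))) {
+  List<Delete> deletes = new ArrayList<Delete>();
+  for (int i = 0; i < 1000; i++) {
+    deletes.add(new Delete(Bytes.toBytes("row-" + i)));
+  }
+  // The deletes are grouped and sent in batches rather than as 1000 individual RPCs.
+  table.delete(deletes);
+}
+----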
 
 [[perf.hdfs]]
 == HDFS
@@ -868,7 +870,7 @@ If you are running on EC2 and post performance questions on the dist-list, pleas
 == Collocating HBase and MapReduce
 
 It is often recommended to have different clusters for HBase and MapReduce.
-A better qualification of this is: don't collocate a HBase that serves live requests with a heavy MR workload.
+A better qualification of this is: don't collocate an HBase that serves live requests with a heavy MR workload.
 OLTP and OLAP-optimized systems have conflicting requirements and one will lose to the other, usually the former.
 For example, short latency-sensitive disk reads will have to wait in line behind longer reads that are trying to squeeze out as much throughput as possible.
 MR jobs that write to HBase will also generate flushes and compactions, which will in turn invalidate blocks in the <<block.cache>>.

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/preface.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/preface.adoc b/src/main/asciidoc/_chapters/preface.adoc
index 960fcc4..50df7ff 100644
--- a/src/main/asciidoc/_chapters/preface.adoc
+++ b/src/main/asciidoc/_chapters/preface.adoc
@@ -29,20 +29,29 @@
 
 This is the official reference guide for the link:http://hbase.apache.org/[HBase] version it ships with.
 
-Herein you will find either the definitive documentation on an HBase topic as of its standing when the referenced HBase version shipped, or it will point to the location in link:http://hbase.apache.org/apidocs/index.html[Javadoc], link:https://issues.apache.org/jira/browse/HBASE[JIRA] or link:http://wiki.apache.org/hadoop/Hbase[wiki] where the pertinent information can be found.
+Herein you will find either the definitive documentation on an HBase topic as of its
+standing when the referenced HBase version shipped, or it will point to the location
+in link:http://hbase.apache.org/apidocs/index.html[Javadoc] or
+link:https://issues.apache.org/jira/browse/HBASE[JIRA] where the pertinent information can be found.
 
 .About This Guide
-This reference guide is a work in progress. The source for this guide can be found in the _src/main/asciidoc directory of the HBase source. This reference guide is marked up using link:http://asciidoc.org/[AsciiDoc] from which the finished guide is generated as part of the 'site' build target. Run
+This reference guide is a work in progress. The source for this guide can be found in the
+_src/main/asciidoc_ directory of the HBase source. This reference guide is marked up
+using link:http://asciidoc.org/[AsciiDoc] from which the finished guide is generated as part of the
+'site' build target. Run
 [source,bourne]
 ----
 mvn site
 ----
 to generate this documentation.
 Amendments and improvements to the documentation are welcomed.
-Click link:https://issues.apache.org/jira/secure/CreateIssueDetails!init.jspa?pid=12310753&issuetype=1&components=12312132&summary=SHORT+DESCRIPTION[this link] to file a new documentation bug against Apache HBase with some values pre-selected.
+Click
+link:https://issues.apache.org/jira/secure/CreateIssueDetails!init.jspa?pid=12310753&issuetype=1&components=12312132&summary=SHORT+DESCRIPTION[this link]
+to file a new documentation bug against Apache HBase with some values pre-selected.
 
 .Contributing to the Documentation
-For an overview of AsciiDoc and suggestions to get started contributing to the documentation, see the <<appendix_contributing_to_documentation,relevant section later in this documentation>>.
+For an overview of AsciiDoc and suggestions to get started contributing to the documentation,
+see the <<appendix_contributing_to_documentation,relevant section later in this documentation>>.
 
 .Heads-up if this is your first foray into the world of distributed computing...
 If this is your first foray into the wonderful world of Distributed Computing, then you are in for some interesting times.
@@ -57,7 +66,7 @@ Yours, the HBase Community.
 
 .Reporting Bugs
 
-Please use link:https://issues.apache.org/jira/browse/hbase[JIRA] to report non-security-related bugs. 
+Please use link:https://issues.apache.org/jira/browse/hbase[JIRA] to report non-security-related bugs.
 
 To protect existing HBase installations from new vulnerabilities, please *do not* use JIRA to report security-related bugs. Instead, send your report to the mailing list private@apache.org, which allows anyone to send messages, but restricts who can read them. Someone on that list will contact you to follow up on your report.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/rpc.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/rpc.adoc b/src/main/asciidoc/_chapters/rpc.adoc
index 43e7156..1d363eb 100644
--- a/src/main/asciidoc/_chapters/rpc.adoc
+++ b/src/main/asciidoc/_chapters/rpc.adoc
@@ -47,7 +47,7 @@ For more background on how we arrived at this spec., see link:https://docs.googl
 
 
 . A wire-format we can evolve
-. A format that does not require our rewriting server core or radically changing its current architecture (for later).        
+. A format that does not require our rewriting server core or radically changing its current architecture (for later).
 
 === TODO
 
@@ -58,7 +58,7 @@ For more background on how we arrived at this spec., see link:https://docs.googl
 . Diagram on how it works
 . A grammar that succinctly describes the wire-format.
   Currently we have these words and the content of the rpc protobuf idl but a grammar for the back and forth would help with groking rpc.
-  Also, a little state machine on client/server interactions would help with understanding (and ensuring correct implementation).        
+  Also, a little state machine on client/server interactions would help with understanding (and ensuring correct implementation).
 
 === RPC
 
@@ -71,14 +71,15 @@ Optionally, Cells(KeyValues) can be passed outside of protobufs in follow-behind
 
 
 
-For more detail on the protobufs involved, see the link:http://svn.apache.org/viewvc/hbase/trunk/hbase-protocol/src/main/protobuf/RPC.proto?view=markup[RPC.proto]            file in trunk.
+For more detail on the protobufs involved, see the
+link:https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=blob;f=hbase-protocol/src/main/protobuf/RPC.proto;hb=HEAD[RPC.proto] file in master.
 
 ==== Connection Setup
 
 Client initiates connection.
 
 ===== Client
-On connection setup, client sends a preamble followed by a connection header. 
+On connection setup, client sends a preamble followed by a connection header.
 
 .<preamble>
 [source]
@@ -105,7 +106,7 @@ After client sends preamble and connection header, server does NOT respond if su
 No response means server is READY to accept requests and to give out response.
 If the version or authentication in the preamble is not agreeable or the server has trouble parsing the preamble, it will throw a org.apache.hadoop.hbase.ipc.FatalConnectionException explaining the error and will then disconnect.
 If the client in the connection header -- i.e.
-the protobuf'd Message that comes after the connection preamble -- asks for for a Service the server does not support or a codec the server does not have, again we throw a FatalConnectionException with explanation.
+the protobuf'd Message that comes after the connection preamble -- asks for a Service the server does not support or a codec the server does not have, again we throw a FatalConnectionException with explanation.
 
 ==== Request
 
@@ -117,7 +118,7 @@ The header includes the method name and optionally, metadata on the optional Cel
 The parameter type suits the method being invoked: i.e.
 if we are doing a getRegionInfo request, the protobuf Message param will be an instance of GetRegionInfoRequest.
 The response will be a GetRegionInfoResponse.
-The CellBlock is optionally used ferrying the bulk of the RPC data: i.e Cells/KeyValues.
+The CellBlock is optionally used to ferry the bulk of the RPC data: i.e. Cells/KeyValues.
 
 ===== Request Parts
 
@@ -181,7 +182,7 @@ Codecs will live on the server for all time so old clients can connect.
 
 .Constraints
 In some part, current wire-format -- i.e.
-all requests and responses preceeded by a length -- has been dictated by current server non-async architecture.
+all requests and responses preceded by a length -- has been dictated by current server non-async architecture.
 
 .One fat pb request or header+param
 We went with pb header followed by pb param making a request and a pb header followed by pb response for now.
@@ -190,7 +191,7 @@ Doing header+param rather than a single protobuf Message with both header and pa
 . Is closer to what we currently have
 . Having a single fat pb requires extra copying putting the already pb'd param into the body of the fat request pb (and same making result)
 . We can decide whether to accept the request or not before we read the param; for example, the request might be low priority.
-  As is, we read header+param in one go as server is currently implemented so this is a TODO.            
+  As is, we read header+param in one go as server is currently implemented so this is a TODO.
 
 The advantages are minor.
 If later, fat request has clear advantage, can roll out a v2 later.
@@ -204,18 +205,18 @@ Codec must implement hbase's `Codec` Interface.
 After connection setup, all passed cellblocks will be sent with this codec.
 The server will return cellblocks using this same codec as long as the codec is on the servers' CLASSPATH (else you will get `UnsupportedCellCodecException`).
 
-To change the default codec, set `hbase.client.default.rpc.codec`. 
+To change the default codec, set `hbase.client.default.rpc.codec`.
 
 To disable cellblocks completely and to go pure protobuf, set the default to the empty String and do not specify a codec in your Configuration.
 So, set `hbase.client.default.rpc.codec` to the empty string and do not set `hbase.client.rpc.codec`.
 This will cause the client to connect to the server with no codec specified.
 If a server sees no codec, it will return all responses in pure protobuf.
-Running pure protobuf all the time will be slower than running with cellblocks. 
+Running pure protobuf all the time will be slower than running with cellblocks.
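A minimal, hedged sketch of the pure-protobuf client setup described above (illustrative only; the property names are the ones from this section, everything else is a placeholder):

[source,java]
----
// Connect with no codec so the server answers in pure protobuf (no cellblocks).
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.client.default.rpc.codec", "");
// Deliberately do not set hbase.client.rpc.codec.
Connection connection = ConnectionFactory.createConnection(conf);
----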
 
 .Compression
-Uses hadoops compression codecs.
+Uses Hadoop's compression codecs.
 To enable compressing of passed CellBlocks, set `hbase.client.rpc.compressor` to the name of the Compressor to use.
-Compressor must implement Hadoops' CompressionCodec Interface.
+Compressor must implement Hadoop's CompressionCodec Interface.
 After connection setup, all passed cellblocks will be sent compressed.
 The server will return cellblocks compressed using this same compressor as long as the compressor is on its CLASSPATH (else you will get `UnsupportedCompressionCodecException`).
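As a hedged illustration of the compressor setting (the codec class below is just one example of a Hadoop CompressionCodec and must be on both the client and server CLASSPATH):

[source,java]
----
// Ask for compressed cellblocks on this client's connections.
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.client.rpc.compressor", "org.apache.hadoop.io.compress.GzipCodec");
Connection connection = ConnectionFactory.createConnection(conf);
----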
 


[07/11] hbase git commit: HBASE-13908 update site docs for 1.2 RC.

Posted by bu...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/architecture.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 0aac442..103f624 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -41,7 +41,8 @@ Technically speaking, HBase is really more a "Data Store" than "Data Base" becau
 However, HBase has many features which supports both linear and modular scaling.
 HBase clusters expand by adding RegionServers that are hosted on commodity class servers.
 If a cluster expands from 10 to 20 RegionServers, for example, it doubles both in terms of storage and as well as processing capacity.
-RDBMS can scale well, but only up to a point - specifically, the size of a single database server - and for the best performance requires specialized hardware and storage devices.
+An RDBMS can scale well, but only up to a point - specifically, the size of a single database
+server - and for the best performance requires specialized hardware and storage devices.
 HBase features of note are:
 
 * Strongly consistent reads/writes:  HBase is not an "eventually consistent" DataStore.
@@ -138,7 +139,10 @@ A region with an empty start key is the first region in a table.
 If a region has both an empty start and an empty end key, it is the only region in the table
 ====
 
-In the (hopefully unlikely) event that programmatic processing of catalog metadata is required, see the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/Writables.html#getHRegionInfo%28byte[]%29[Writables] utility.
+In the (hopefully unlikely) event that programmatic processing of catalog metadata
+is required, see the
++++<a href="http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/Writables.html#getHRegionInfo%28byte%5B%5D%29">Writables</a>+++
+utility.
 
 [[arch.catalog.startup]]
 === Startup Sequencing
@@ -169,7 +173,7 @@ The API changed in HBase 1.0. For connection configuration information, see <<cl
 
 ==== API as of HBase 1.0.0
 
-Its been cleaned up and users are returned Interfaces to work against rather than particular types.
+It's been cleaned up and users are returned Interfaces to work against rather than particular types.
 In HBase 1.0, obtain a `Connection` object from `ConnectionFactory` and thereafter, get from it instances of `Table`, `Admin`, and `RegionLocator` on an as-need basis.
 When done, close the obtained instances.
 Finally, be sure to cleanup your `Connection` instance before exiting.
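A minimal sketch of that lifecycle (illustrative; the table, family and qualifier names are placeholders):

[source,java]
----
// Obtain a Connection once; Table, Admin and RegionLocator instances are cheap
// to get from it on demand. try-with-resources handles the cleanup.
Configuration conf = HBaseConfiguration.create();
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("myTable"))) {
  Put put = new Put(Bytes.toBytes("row1"));
  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
  table.put(put);
}
----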
@@ -235,11 +239,11 @@ Please use link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/C
 [[client.writebuffer]]
 === WriteBuffer and Batch Methods
 
-In HBase 1.0 and later, link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/HTable.html[HTable] is deprecated in favor of link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table]. `Table` does not use autoflush. To do buffered writes, use the BufferedMutator class.
+In HBase 1.0 and later, link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/HTable.html[HTable] is deprecated in favor of link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table]. `Table` does not use autoflush. To do buffered writes, use the BufferedMutator class.
 
 Before a `Table` or `HTable` instance is discarded, invoke either `close()` or `flushCommits()`, so `Put`s will not be lost.
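For the buffered-write path, a hedged sketch of BufferedMutator usage (illustrative; the table, family and qualifier names are placeholders):

[source,java]
----
// Mutations are buffered client-side; flush() pushes them to the cluster and
// close() flushes whatever is still buffered.
try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
     BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf("myTable"))) {
  Put put = new Put(Bytes.toBytes("row1"));
  put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
  mutator.mutate(put);
  mutator.flush();
}
----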
 
-For additional information on write durability, review the link:../acid-semantics.html[ACID semantics] page.
+For additional information on write durability, review the link:/acid-semantics.html[ACID semantics] page.
 
 For fine-grained control of batching of ``Put``s or ``Delete``s, see the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch%28java.util.List%29[batch] methods on Table.
 
@@ -292,7 +296,11 @@ scan.setFilter(list);
 [[client.filter.cv.scvf]]
 ==== SingleColumnValueFilter
 
-link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.html[SingleColumnValueFilter] can be used to test column values for equivalence (`CompareOp.EQUAL`), inequality (`CompareOp.NOT_EQUAL`), or ranges (e.g., `CompareOp.GREATER`). The following is example of testing equivalence a column to a String value "my value"...
+A SingleColumnValueFilter (see:
+http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.html)
+can be used to test column values for equivalence (`CompareOp.EQUAL`),
+inequality (`CompareOp.NOT_EQUAL`), or ranges (e.g., `CompareOp.GREATER`). The following is an
+example of testing equivalence of a column to a String value "my value"...
 
 [source,java]
 ----
@@ -691,7 +699,8 @@ Here are others that you may have to take into account:
 
 Catalog Tables::
   The `-ROOT-` (prior to HBase 0.96, see <<arch.catalog.root,arch.catalog.root>>) and `hbase:meta` tables are forced into the block cache and have the in-memory priority which means that they are harder to evict.
-  The former never uses more than a few hundreds bytes while the latter can occupy a few MBs (depending on the number of regions).
+  The former never uses more than a few hundred bytes while the latter can occupy a few MBs
+  (depending on the number of regions).
 
 HFiles Indexes::
   An _HFile_ is the file format that HBase uses to store data in HDFS.
@@ -759,7 +768,7 @@ When we go to look for a cached block, we look first in L1 and if none found, th
 Let us call this deploy format, _Raw L1+L2_.
 
 Other BucketCache configs include: specifying a location to persist cache to across restarts, how many threads to use writing the cache, etc.
-See the link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig.html] class for configuration options and descriptions.
+See the link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig.html] class for configuration options and descriptions.
 
 
 
@@ -875,7 +884,10 @@ image::region_split_process.png[Region Split Process]
 . The Master learns about this znode, since it has a watcher for the parent `region-in-transition` znode.
 . The RegionServer creates a sub-directory named `.splits` under the parent’s `region` directory in HDFS.
 . The RegionServer closes the parent region and marks the region as offline in its local data structures. *THE SPLITTING REGION IS NOW OFFLINE.* At this point, client requests coming to the parent region will throw `NotServingRegionException`. The client will retry with some backoff. The closing region is flushed.
-. The  RegionServer creates region directories under the `.splits` directory, for daughter regions A and B, and creates necessary data structures. Then it splits the store files, in the sense that it creates two link:http://www.google.com/url?q=http%3A%2F%2Fhbase.apache.org%2Fapidocs%2Forg%2Fapache%2Fhadoop%2Fhbase%2Fio%2FReference.html&sa=D&sntz=1&usg=AFQjCNEkCbADZ3CgKHTtGYI8bJVwp663CA[Reference] files per store file in the parent region. Those reference files will point to the parent regions'files.
+. The RegionServer creates region directories under the `.splits` directory, for daughter
+regions A and B, and creates necessary data structures. Then it splits the store files,
+in the sense that it creates two Reference files per store file in the parent region.
+Those reference files will point to the parent region's files.
 . The RegionServer creates the actual region directory in HDFS, and moves the reference files for each daughter.
 . The RegionServer sends a `Put` request to the `.META.` table, to set the parent as offline in the `.META.` table and add information about daughter regions. At this point, there won’t be individual entries in `.META.` for the daughters. Clients will see that the parent region is split if they scan `.META.`, but won’t know about the daughters until they appear in `.META.`. Also, if this `Put` to `.META`. succeeds, the parent will be effectively split. If the RegionServer fails before this RPC succeeds, Master and the next Region Server opening the region will clean dirty state about the region split. After the `.META.` update, though, the region split will be rolled-forward by Master.
 . The RegionServer opens daughters A and B in parallel.
@@ -928,7 +940,7 @@ To configure MultiWAL for a RegionServer, set the value of the property `hbase.w
 </property>
 ----
 
-Restart the RegionServer for the changes to take effect. 
+Restart the RegionServer for the changes to take effect.
 
 To disable MultiWAL for a RegionServer, unset the property and restart the RegionServer.
 
@@ -1005,7 +1017,8 @@ If you set the `hbase.hlog.split.skip.errors` option to `true`, errors are treat
 * Processing of the WAL will continue
 
 If the `hbase.hlog.split.skip.errors` option is set to `false`, the default, the exception will be propagated and the split will be logged as failed.
-See link:https://issues.apache.org/jira/browse/HBASE-2958[HBASE-2958 When hbase.hlog.split.skip.errors is set to false, we fail the split but thats it].
+See link:https://issues.apache.org/jira/browse/HBASE-2958[HBASE-2958 When
+hbase.hlog.split.skip.errors is set to false, we fail the split but that's it].
 We need to do more than just fail split if this flag is set.
 
 ====== How EOFExceptions are treated when splitting a crashed RegionServer's WALs
@@ -1016,16 +1029,7 @@ For background, see link:https://issues.apache.org/jira/browse/HBASE-2643[HBASE-
 
 ===== Performance Improvements during Log Splitting
 
-WAL log splitting and recovery can be resource intensive and take a long time, depending on the number of RegionServers involved in the crash and the size of the regions. <<distributed.log.splitting>> and <<distributed.log.replay>> were developed to improve performance during log splitting.
-
-[[distributed.log.splitting]]
-====== Distributed Log Splitting
-
-_Distributed Log Splitting_ was added in HBase version 0.92 (link:https://issues.apache.org/jira/browse/HBASE-1364[HBASE-1364]) by Prakash Khemani from Facebook.
-It reduces the time to complete log splitting dramatically, improving the availability of regions and tables.
-For example, recovering a crashed cluster took around 9 hours with single-threaded log splitting, but only about six minutes with distributed log splitting.
-
-The information in this section is sourced from Jimmy Xiang's blog post at http://blog.cloudera.com/blog/2012/07/hbase-log-splitting/.
+WAL log splitting and recovery can be resource intensive and take a long time, depending on the number of RegionServers involved in the crash and the size of the regions. <<distributed.log.splitting>> was developed to improve performance during log splitting.
 
 .Enabling or Disabling Distributed Log Splitting
 
@@ -1123,7 +1127,8 @@ Based on the state of the task whose data is changed, the split log manager does
 Each RegionServer runs a daemon thread called the _split log worker_, which does the work to split the logs.
 The daemon thread starts when the RegionServer starts, and registers itself to watch HBase znodes.
 If any splitlog znode children change, it notifies a sleeping worker thread to wake up and grab more tasks.
-If if a worker's current task's node data is changed, the worker checks to see if the task has been taken by another worker.
+If a worker's current task's node data is changed,
+the worker checks to see if the task has been taken by another worker.
 If so, the worker thread stops work on the current task.
 +
 The worker monitors the splitlog znode constantly.
@@ -1133,7 +1138,7 @@ At this point, the split log worker scans for another unclaimed task.
 +
 .How the Split Log Worker Approaches a Task
 * It queries the task state and only takes action if the task is in `TASK_UNASSIGNED` state.
-* If the task is is in `TASK_UNASSIGNED` state, the worker attempts to set the state to `TASK_OWNED` by itself.
+* If the task is in `TASK_UNASSIGNED` state, the worker attempts to set the state to `TASK_OWNED` by itself.
   If it fails to set the state, another worker will try to grab it.
   The split log manager will also ask all workers to rescan later if the task remains unassigned.
 * If the worker succeeds in taking ownership of the task, it tries to get the task state again to make sure it really gets it asynchronously.
@@ -1141,7 +1146,7 @@ At this point, the split log worker scans for another unclaimed task.
 ** Get the HBase root folder, create a temp folder under the root, and split the log file to the temp folder.
 ** If the split was successful, the task executor sets the task to state `TASK_DONE`.
 ** If the worker catches an unexpected IOException, the task is set to state `TASK_ERR`.
-** If the worker is shutting down, set the the task to state `TASK_RESIGNED`.
+** If the worker is shutting down, set the task to state `TASK_RESIGNED`.
 ** If the task is taken by another worker, just log it.
 
 
@@ -1332,7 +1337,7 @@ image::region_states.png[]
 . Before assigning a region, the master moves the region to `OFFLINE` state automatically if it is in `CLOSED` state.
 . When a RegionServer is about to split a region, it notifies the master.
   The master moves the region to be split from `OPEN` to `SPLITTING` state and add the two new regions to be created to the RegionServer.
-  These two regions are in `SPLITING_NEW` state initially.
+  These two regions are in `SPLITTING_NEW` state initially.
 . After notifying the master, the RegionServer starts to split the region.
   Once past the point of no return, the RegionServer notifies the master again so the master can update the `hbase:meta` table.
   However, the master does not update the region states until it is notified by the server that the split is done.
@@ -1377,8 +1382,10 @@ The RegionServer splits a region, offlines the split region and then adds the da
 See <<disable.splitting>> for how to manually manage splits (and for why you might do this).
 
 ==== Custom Split Policies
-ou can override the default split policy using a custom link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.html[RegionSplitPolicy](HBase 0.94+). Typically a custom split policy should extend
-HBase's default split policy: link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.html[IncreasingToUpperBoundRegionSplitPolicy].
+You can override the default split policy using a custom
+link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.html[RegionSplitPolicy](HBase 0.94+).
+Typically a custom split policy should extend HBase's default split policy:
+link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.html[IncreasingToUpperBoundRegionSplitPolicy].
 
 The policy can be set globally through the HBase configuration or on a per-table
 basis.
@@ -1397,7 +1404,7 @@ basis.
 HTableDescriptor tableDesc = new HTableDescriptor("test");
 tableDesc.setValue(HTableDescriptor.SPLIT_POLICY, ConstantSizeRegionSplitPolicy.class.getName());
 tableDesc.addFamily(new HColumnDescriptor(Bytes.toBytes("cf1")));
-admin.createTable(tableDesc);              
+admin.createTable(tableDesc);
 ----
 
 [source]
@@ -1407,7 +1414,10 @@ hbase> create 'test', {METHOD => 'table_att', CONFIG => {'SPLIT_POLICY' => 'org.
 {NAME => 'cf1'}
 ----
 
-The default split policy can be overwritten using a custom link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.html[RegionSplitPolicy(HBase 0.94+)]. Typically a custom split policy should extend HBase's default split policy: link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.html[ConstantSizeRegionSplitPolicy].
+The default split policy can be overwritten using a custom
+link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/RegionSplitPolicy.html[RegionSplitPolicy(HBase 0.94+)].
+Typically a custom split policy should extend HBase's default split policy:
+link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.html[ConstantSizeRegionSplitPolicy].
 
 The policy can be set globally through the HBaseConfiguration used or on a per table basis:
 [source,java]
@@ -1416,6 +1426,8 @@ HTableDescriptor myHtd = ...;
 myHtd.setValue(HTableDescriptor.SPLIT_POLICY, MyCustomSplitPolicy.class.getName());
 ----
 
+NOTE: The `DisabledRegionSplitPolicy` policy blocks manual region splitting.
+
 [[manual_region_splitting_decisions]]
 === Manual Region Splitting
 
@@ -1435,6 +1447,8 @@ There may be other valid reasons, but the need to manually split your table migh
 
 See <<disable.splitting>> for a discussion about the dangers and possible benefits of managing splitting completely manually.
 
+NOTE: The `DisabledRegionSplitPolicy` policy blocks manual region splitting.
+
 ==== Determining Split Points
 
 The goal of splitting your table manually is to improve the chances of balancing the load across the cluster in situations where good rowkey design alone won't get you there.
@@ -1450,9 +1464,15 @@ Using a Custom Algorithm::
   The RegionSplitter tool is provided with HBase, and uses a _SplitAlgorithm_ to determine split points for you.
   As parameters, you give it the algorithm, desired number of regions, and column families.
   It includes two split algorithms.
-  The first is the `link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.HexStringSplit.html[HexStringSplit]` algorithm, which assumes the row keys are hexadecimal strings.
-  The second, `link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.UniformSplit.html[UniformSplit]`, assumes the row keys are random byte arrays.
-  You will probably need to develop your own `link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/RegionSplitter.SplitAlgorithm.html[SplitAlgorithm]`, using the provided ones as models.
+  The first is the
+  `link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.HexStringSplit.html[HexStringSplit]`
+  algorithm, which assumes the row keys are hexadecimal strings.
+  The second,
+  `link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.UniformSplit.html[UniformSplit]`,
+  assumes the row keys are random byte arrays.
+  You will probably need to develop your own
+  `link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/util/RegionSplitter.SplitAlgorithm.html[SplitAlgorithm]`,
+  using the provided ones as models.
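As a hedged sketch of supplying split points yourself at table creation (illustrative; an open Admin instance is assumed, and the table name, family and split keys are placeholders):

[source,java]
----
// Pre-split the table into four regions by handing explicit split points to createTable().
byte[][] splitPoints = new byte[][] {
    Bytes.toBytes("row_2500"), Bytes.toBytes("row_5000"), Bytes.toBytes("row_7500")
};
HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("myTable"));
desc.addFamily(new HColumnDescriptor("cf"));
admin.createTable(desc, splitPoints);
----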
 
 === Online Region Merges
 
@@ -1611,6 +1631,7 @@ Key portion for Put #2:
 It is critical to understand that the rowkey, ColumnFamily, and column (aka columnqualifier) are embedded within the KeyValue instance.
 The longer these identifiers are, the bigger the KeyValue is.
 
+[[compaction]]
 ==== Compaction
 
 .Ambiguous Terminology
@@ -1796,60 +1817,116 @@ This list is not exhaustive.
 To tune these parameters from the defaults, edit the _hbase-default.xml_ file.
 For a full list of all configuration parameters available, see <<config.files,config.files>>
 
-[cols="1,1a,1", options="header"]
-|===
-| Parameter
-| Description
-| Default
-
-|`hbase.hstore.compaction.min`
-| The minimum number of StoreFiles which must be eligible for compaction before compaction can run. The goal of tuning `hbase.hstore.compaction.min` is to avoid ending up with too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction each time you have two StoreFiles in a Store, and this is probably not appropriate. If you set this value too high, all the other values will need to be adjusted accordingly. For most cases, the default value is appropriate. In previous versions of HBase, the parameter hbase.hstore.compaction.min was called `hbase.hstore.compactionThreshold`.
-|3
-
-|`hbase.hstore.compaction.max`
-| The maximum number of StoreFiles which will be selected for a single minor compaction, regardless of the number of eligible StoreFiles. Effectively, the value of hbase.hstore.compaction.max controls the length of time it takes a single compaction to complete. Setting it larger means that more StoreFiles are included in a compaction. For most cases, the default value is appropriate.
-|10
-
-|`hbase.hstore.compaction.min.size`
-| A StoreFile smaller than this size will always be eligible for minor compaction. StoreFiles this size or larger are evaluated by `hbase.hstore.compaction.ratio` to determine if they are eligible. Because this limit represents the "automatic include" limit for all StoreFiles smaller than this value, this value may need to be reduced in write-heavy environments where many files in the 1-2 MB range are being flushed, because every StoreFile will be targeted for compaction and the resulting StoreFiles may still be under the minimum size and require further compaction. If this parameter is lowered, the ratio check is triggered more quickly. This addressed some issues seen in earlier versions of HBase but changing this parameter is no longer necessary in most situations.
-|128 MB
-
-|`hbase.hstore.compaction.max.size`
-| An StoreFile larger than this size will be excluded from compaction. The effect of raising `hbase.hstore.compaction.max.size` is fewer, larger StoreFiles that do not get compacted often. If you feel that compaction is happening too often without much benefit, you can try raising this value.
-|`Long.MAX_VALUE`
-
-|`hbase.hstore.compaction.ratio`
-| For minor compaction, this ratio is used to determine whether a given StoreFile which is larger than `hbase.hstore.compaction.min.size` is eligible for compaction. Its effect is to limit compaction of large StoreFile. The value of `hbase.hstore.compaction.ratio` is expressed as a floating-point decimal.
-
-* A large ratio, such as 10, will produce a single giant StoreFile. Conversely, a value of .25, will produce behavior similar to the BigTable compaction algorithm, producing four StoreFiles.
-* A moderate value of between 1.0 and 1.4 is recommended. When tuning this value, you are balancing write costs with read costs. Raising the value (to something like 1.4) will have more write costs, because you will compact larger StoreFiles. However, during reads, HBase will need to seek through fewer StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of <<bloom>>.
-* Alternatively, you can lower this value to something like 1.0 to reduce the background cost of writes, and use  to limit the number of StoreFiles touched during reads. For most cases, the default value is appropriate.
-| `1.2F`
-
-|`hbase.hstore.compaction.ratio.offpeak`
-| The compaction ratio used during off-peak compactions, if off-peak hours are also configured (see below). Expressed as a floating-point decimal. This allows for more aggressive (or less aggressive, if you set it lower than `hbase.hstore.compaction.ratio`) compaction during a set time period. Ignored if off-peak is disabled (default). This works the same as hbase.hstore.compaction.ratio.
-| `5.0F`
+`hbase.hstore.compaction.min`::
+  The minimum number of StoreFiles which must be eligible for compaction before compaction can run.
+  The goal of tuning `hbase.hstore.compaction.min` is to avoid ending up with too many tiny StoreFiles
+  to compact. Setting this value to 2 would cause a minor compaction each time you have two StoreFiles
+  in a Store, and this is probably not appropriate. If you set this value too high, all the other
+  values will need to be adjusted accordingly. For most cases, the default value is appropriate.
+  In previous versions of HBase, the parameter `hbase.hstore.compaction.min` was called
+  `hbase.hstore.compactionThreshold`.
++
+*Default*: 3
+
+`hbase.hstore.compaction.max`::
+  The maximum number of StoreFiles which will be selected for a single minor compaction,
+  regardless of the number of eligible StoreFiles. Effectively, the value of
+  `hbase.hstore.compaction.max` controls the length of time it takes a single
+  compaction to complete. Setting it larger means that more StoreFiles are included
+  in a compaction. For most cases, the default value is appropriate.
++
+*Default*: 10
+
+`hbase.hstore.compaction.min.size`::
+  A StoreFile smaller than this size will always be eligible for minor compaction.
+  StoreFiles this size or larger are evaluated by `hbase.hstore.compaction.ratio`
+  to determine if they are eligible. Because this limit represents the "automatic
+  include" limit for all StoreFiles smaller than this value, this value may need
+  to be reduced in write-heavy environments where many files in the 1-2 MB range
+  are being flushed, because every StoreFile will be targeted for compaction and
+  the resulting StoreFiles may still be under the minimum size and require further
+  compaction. If this parameter is lowered, the ratio check is triggered more quickly.
+  This addressed some issues seen in earlier versions of HBase but changing this
+  parameter is no longer necessary in most situations.
++
+*Default*: 128 MB
 
-| `hbase.offpeak.start.hour`
-| The start of off-peak hours, expressed as an integer between 0 and 23, inclusive. Set to -1 to disable off-peak.
-| `-1` (disabled)
+`hbase.hstore.compaction.max.size`::
+  A StoreFile larger than this size will be excluded from compaction. The effect of
+  raising `hbase.hstore.compaction.max.size` is fewer, larger StoreFiles that do not
+  get compacted often. If you feel that compaction is happening too often without
+  much benefit, you can try raising this value.
++
+*Default*: `Long.MAX_VALUE`
 
-| `hbase.offpeak.end.hour`
-| The end of off-peak hours, expressed as an integer between 0 and 23, inclusive. Set to -1 to disable off-peak.
-| `-1` (disabled)
+`hbase.hstore.compaction.ratio`::
+  For minor compaction, this ratio is used to determine whether a given StoreFile
+  which is larger than `hbase.hstore.compaction.min.size` is eligible for compaction.
+  Its effect is to limit compaction of large StoreFiles. The value of
+  `hbase.hstore.compaction.ratio` is expressed as a floating-point decimal.
++
+* A large ratio, such as 10, will produce a single giant StoreFile. Conversely,
+  a value of .25 will produce behavior similar to the BigTable compaction algorithm,
+  producing four StoreFiles.
+* A moderate value of between 1.0 and 1.4 is recommended. When tuning this value,
+  you are balancing write costs with read costs. Raising the value (to something like
+  1.4) will have more write costs, because you will compact larger StoreFiles.
+  However, during reads, HBase will need to seek through fewer StoreFiles to
+  accomplish the read. Consider this approach if you cannot take advantage of <<bloom>>.
+* Alternatively, you can lower this value to something like 1.0 to reduce the
+  background cost of writes, and use  to limit the number of StoreFiles touched
+  during reads. For most cases, the default value is appropriate.
++
+*Default*: `1.2F`
+
+`hbase.hstore.compaction.ratio.offpeak`::
+  The compaction ratio used during off-peak compactions, if off-peak hours are
+  also configured (see below). Expressed as a floating-point decimal. This allows
+  for more aggressive (or less aggressive, if you set it lower than
+  `hbase.hstore.compaction.ratio`) compaction during a set time period. Ignored
+  if off-peak is disabled (default). This works the same as
+  `hbase.hstore.compaction.ratio`.
++
+*Default*: `5.0F`
 
-| `hbase.regionserver.thread.compaction.throttle`
-| There are two different thread pools for compactions, one for large compactions and the other for small compactions. This helps to keep compaction of lean tables (such as `hbase:meta`) fast. If a compaction is larger than this threshold, it goes into the large compaction pool. In most cases, the default value is appropriate.
-| `2 x hbase.hstore.compaction.max x hbase.hregion.memstore.flush.size` (which defaults to `128`)
+`hbase.offpeak.start.hour`::
+  The start of off-peak hours, expressed as an integer between 0 and 23, inclusive.
+  Set to -1 to disable off-peak.
++
+*Default*: `-1` (disabled)
 
-| `hbase.hregion.majorcompaction`
-| Time between major compactions, expressed in milliseconds. Set to 0 to disable time-based automatic major compactions. User-requested and size-based major compactions will still run. This value is multiplied by `hbase.hregion.majorcompaction.jitter` to cause compaction to start at a somewhat-random time during a given window of time.
-| 7 days (`604800000` milliseconds)
+`hbase.offpeak.end.hour`::
+  The end of off-peak hours, expressed as an integer between 0 and 23, inclusive.
+  Set to -1 to disable off-peak.
++
+*Default*: `-1` (disabled)
+
+`hbase.regionserver.thread.compaction.throttle`::
+  There are two different thread pools for compactions, one for large compactions
+  and the other for small compactions. This helps to keep compaction of lean tables
+  (such as `hbase:meta`) fast. If a compaction is larger than this threshold,
+  it goes into the large compaction pool. In most cases, the default value is
+  appropriate.
++
+*Default*: `2 x hbase.hstore.compaction.max x hbase.hregion.memstore.flush.size`
+(which defaults to `128`)
+
+`hbase.hregion.majorcompaction`::
+  Time between major compactions, expressed in milliseconds. Set to 0 to disable
+  time-based automatic major compactions. User-requested and size-based major
+  compactions will still run. This value is multiplied by
+  `hbase.hregion.majorcompaction.jitter` to cause compaction to start at a
+  somewhat-random time during a given window of time.
++
+*Default*: 7 days (`604800000` milliseconds)
 
-| `hbase.hregion.majorcompaction.jitter`
-| A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occur a given amount of time either side of `hbase.hregion.majorcompaction`. The smaller the number, the closer the compactions will happen to the `hbase.hregion.majorcompaction` interval. Expressed as a floating-point decimal.
-| `.50F`
-|===
+`hbase.hregion.majorcompaction.jitter`::
+  A multiplier applied to `hbase.hregion.majorcompaction` to cause compaction to
+  occur a given amount of time either side of `hbase.hregion.majorcompaction`.
+  The smaller the number, the closer the compactions will happen to the
+  `hbase.hregion.majorcompaction` interval. Expressed as a floating-point decimal.
++
+*Default*: `.50F`
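These values are normally set cluster-wide in _hbase-site.xml_. As a hedged illustration, they can also be overridden per table or per column family, assuming your version honors table- and family-level configuration for these keys (the names and values below are placeholders, and an open Admin instance is assumed):

[source,java]
----
// Per-table / per-family compaction tuning, set at table creation time.
HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("myTable"));
desc.setConfiguration("hbase.hstore.compaction.min", "4");
HColumnDescriptor family = new HColumnDescriptor("cf");
family.setConfiguration("hbase.hstore.compaction.ratio", "1.4");
desc.addFamily(family);
admin.createTable(desc);
----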
 
 [[compaction.file.selection.old]]
 ===== Compaction File Selection
@@ -1906,8 +1983,8 @@ Why?
 * 100 -> No, because sum(50, 23, 12, 12) * 1.0 = 97.
 * 50 -> No, because sum(23, 12, 12) * 1.0 = 47.
 * 23 -> Yes, because sum(12, 12) * 1.0 = 24.
-* 12 -> Yes, because the previous file has been included, and because this does not exceed the the max-file limit of 5
-* 12 -> Yes, because the previous file had been included, and because this does not exceed the the max-file limit of 5.
+* 12 -> Yes, because the previous file has been included, and because this does not exceed the max-file limit of 5
+* 12 -> Yes, because the previous file had been included, and because this does not exceed the max-file limit of 5.
 
 [[compaction.file.selection.example2]]
 ====== Minor Compaction File Selection - Example #2 (Not Enough Files ToCompact)
@@ -2168,7 +2245,7 @@ See link:http://blog.cloudera.com/blog/2013/09/how-to-use-hbase-bulk-loading-and
 [[arch.bulk.load.adv]]
 === Advanced Usage
 
-Although the `importtsv` tool is useful in many cases, advanced users may want to generate data programatically, or import data from other formats.
+Although the `importtsv` tool is useful in many cases, advanced users may want to generate data programmatically, or import data from other formats.
 To get started doing so, dig into `ImportTsv.java` and check the JavaDoc for HFileOutputFormat.
 
 The import step of the bulk load can also be done programmatically.
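A hedged sketch of that programmatic import step (the HFile directory and table name are placeholders, and the exact doBulkLoad overload varies between 1.x releases):

[source,java]
----
// Hand the prepared HFiles to the cluster with LoadIncrementalHFiles.
Configuration conf = HBaseConfiguration.create();
LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
try (Connection connection = ConnectionFactory.createConnection(conf);
     Table table = connection.getTable(TableName.valueOf("myTable"));
     RegionLocator locator = connection.getRegionLocator(TableName.valueOf("myTable"));
     Admin admin = connection.getAdmin()) {
  loader.doBulkLoad(new Path("/user/hbase/output_hfiles"), admin, table, locator);
}
----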
@@ -2264,8 +2341,8 @@ In terms of semantics, TIMELINE consistency as implemented by HBase differs from
 .Timeline Consistency
 image::timeline_consistency.png[Timeline Consistency]
 
-To better understand the TIMELINE semantics, lets look at the above diagram.
-Lets say that there are two clients, and the first one writes x=1 at first, then x=2 and x=3 later.
+To better understand the TIMELINE semantics, let's look at the above diagram.
+Let's say that there are two clients, and the first one writes x=1 at first, then x=2 and x=3 later.
 As above, all writes are handled by the primary region replica.
 The writes are saved in the write ahead log (WAL), and replicated to the other replicas asynchronously.
 In the above diagram, notice that replica_id=1 received 2 updates, and its data shows that x=2, while the replica_id=2 only received a single update, and its data shows that x=1.
@@ -2298,18 +2375,18 @@ To serve the region data from multiple replicas, HBase opens the regions in seco
 The regions opened in secondary mode will share the same data files with the primary region replica, however each secondary region replica will have its own MemStore to keep the unflushed data (only primary region can do flushes). Also to serve reads from secondary regions, the blocks of data files may be also cached in the block caches for the secondary regions.
 
 === Where is the code
-This feature is delivered in two phases, Phase 1 and 2. The first phase is done in time for HBase-1.0.0 release. Meaning that using HBase-1.0.x, you can use all the features that are marked for Phase 1. Phase 2 is committed in HBase-1.1.0, meaning all HBase versions after 1.1.0 should contain Phase 2 items. 
+This feature is delivered in two phases, Phase 1 and 2. The first phase was completed in time for the HBase-1.0.0 release, meaning that with HBase-1.0.x you can use all the features that are marked for Phase 1. Phase 2 was committed in HBase-1.1.0, meaning all HBase versions after 1.1.0 should contain Phase 2 items.
 
 === Propagating writes to region replicas
-As discussed above writes only go to the primary region replica. For propagating the writes from the primary region replica to the secondaries, there are two different mechanisms. For read-only tables, you do not need to use any of the following methods. Disabling and enabling the table should make the data available in all region replicas. For mutable tables, you have to use *only* one of the following mechanisms: storefile refresher, or async wal replication. The latter is recommeded. 
+As discussed above, writes only go to the primary region replica. For propagating the writes from the primary region replica to the secondaries, there are two different mechanisms. For read-only tables, you do not need to use any of the following methods. Disabling and enabling the table should make the data available in all region replicas. For mutable tables, you have to use *only* one of the following mechanisms: storefile refresher, or async wal replication. The latter is recommended.
 
 ==== StoreFile Refresher
-The first mechanism is store file refresher which is introduced in HBase-1.0+. Store file refresher is a thread per region server, which runs periodically, and does a refresh operation for the store files of the primary region for the secondary region replicas. If enabled, the refresher will ensure that the secondary region replicas see the new flushed, compacted or bulk loaded files from the primary region in a timely manner. However, this means that only flushed data can be read back from the secondary region replicas, and after the refresher is run, making the secondaries lag behind the primary for an a longer time. 
+The first mechanism is the store file refresher, which was introduced in HBase-1.0+. The store file refresher is a thread per region server, which runs periodically and does a refresh operation for the store files of the primary region for the secondary region replicas. If enabled, the refresher will ensure that the secondary region replicas see new flushed, compacted or bulk loaded files from the primary region in a timely manner. However, this means that only flushed data can be read back from the secondary region replicas, and only after the refresher has run, so the secondaries lag behind the primary for a longer time.
 
-For turning this feature on, you should configure `hbase.regionserver.storefile.refresh.period` to a non-zero value. See Configuration section below. 
+For turning this feature on, you should configure `hbase.regionserver.storefile.refresh.period` to a non-zero value. See Configuration section below.
 
 ==== Async WAL replication
-The second mechanism for propagation of writes to secondaries is done via “Async WAL Replication” feature and is only available in HBase-1.1+. This works similarly to HBase’s multi-datacenter replication, but instead the data from a region is replicated to the secondary regions. Each secondary replica always receives and observes the writes in the same order that the primary region committed them. In some sense, this design can be thought of as “in-cluster replication”, where instead of replicating to a different datacenter, the data goes to secondary regions to keep secondary region’s in-memory state up to date. The data files are shared between the primary region and the other replicas, so that there is no extra storage overhead. However, the secondary regions will have recent non-flushed data in their memstores, which increases the memory overhead. The primary region writes flush, compaction, and bulk load events to its WAL as well, which are also replicated through wal replication to secondaries. When they observe the flush/compaction or bulk load event, the secondary regions replay the event to pick up the new files and drop the old ones.  
+The second mechanism for propagation of writes to secondaries is done via “Async WAL Replication” feature and is only available in HBase-1.1+. This works similarly to HBase’s multi-datacenter replication, but instead the data from a region is replicated to the secondary regions. Each secondary replica always receives and observes the writes in the same order that the primary region committed them. In some sense, this design can be thought of as “in-cluster replication”, where instead of replicating to a different datacenter, the data goes to secondary regions to keep secondary region’s in-memory state up to date. The data files are shared between the primary region and the other replicas, so that there is no extra storage overhead. However, the secondary regions will have recent non-flushed data in their memstores, which increases the memory overhead. The primary region writes flush, compaction, and bulk load events to its WAL as well, which are also replicated through wal replication to secondaries. When they observe the flush/compaction or bulk load event, the secondary regions replay the event to pick up the new files and drop the old ones.
 
 Committing writes in the same order as in the primary ensures that the secondaries won’t diverge from the primary region’s data, but since the log replication is asynchronous, the data might still be stale in secondary regions. Since this feature works as a replication endpoint, the performance and latency characteristics are expected to be similar to inter-cluster replication.
 
@@ -2322,18 +2399,18 @@ Asyn WAL Replication feature will add a new replication peer named `region_repli
 	hbase> disable_peer 'region_replica_replication'
 ----
 
-=== Store File TTL 
-In both of the write propagation approaches mentioned above, store files of the primary will be opened in secondaries independent of the primary region. So for files that the primary compacted away, the secondaries might still be referring to these files for reading. Both features are using HFileLinks to refer to files, but there is no protection (yet) for guaranteeing that the file will not be deleted prematurely. Thus, as a guard, you should set the configuration property `hbase.master.hfilecleaner.ttl` to a larger value, such as 1 hour to guarantee that you will not receive IOExceptions for requests going to replicas. 
+=== Store File TTL
+In both of the write propagation approaches mentioned above, store files of the primary will be opened in secondaries independent of the primary region. So for files that the primary compacted away, the secondaries might still be referring to these files for reading. Both features use HFileLinks to refer to files, but there is no protection (yet) to guarantee that the file will not be deleted prematurely. Thus, as a guard, you should set the configuration property `hbase.master.hfilecleaner.ttl` to a larger value, such as 1 hour, to guarantee that you will not receive IOExceptions for requests going to replicas.
 
 === Region replication for META table’s region
-Currently, Async WAL Replication is not done for the META table’s WAL. The meta table’s secondary replicas still refreshes themselves from the persistent store files. Hence the `hbase.regionserver.meta.storefile.refresh.period` needs to be set to a certain non-zero value for refreshing the meta store files. Note that this configuration is configured differently than 
-`hbase.regionserver.storefile.refresh.period`. 
+Currently, Async WAL Replication is not done for the META table’s WAL. The meta table’s secondary replicas still refresh themselves from the persistent store files. Hence, `hbase.regionserver.meta.storefile.refresh.period` needs to be set to a non-zero value for refreshing the meta store files. Note that this configuration is set differently from
+`hbase.regionserver.storefile.refresh.period`.
 
 === Memory accounting
 The secondary region replicas refer to the data files of the primary region replica, but they have their own memstores (in HBase-1.1+) and use the block cache as well. However, one distinction is that the secondary region replicas cannot flush the data when there is memory pressure for their memstores. They can only free up memstore memory when the primary region does a flush and this flush is replicated to the secondary. Since a region server may host primary replicas for some regions and secondaries for others, the secondaries might cause extra flushes to the primary regions in the same host. In extreme situations, there can be no memory left for adding new writes coming from the primary via wal replication. For unblocking this situation (and since the secondary cannot flush by itself), the secondary is allowed to do a “store file refresh” by doing a file system list operation to pick up new files from the primary, and possibly dropping its memstore. This refresh will only be performed if the memstore size of the biggest secondary region replica is at least `hbase.region.replica.storefile.refresh.memstore.multiplier` (default 4) times bigger than the biggest memstore of a primary replica. One caveat is that if this is performed, the secondary can observe partial row updates across column families (since column families are flushed independently). The default should be good to not do this operation frequently. You can set this value to a large number to disable this feature if desired, but be warned that it might cause the replication to block forever.
 
 === Secondary replica failover
-When a secondary region replica first comes online, or fails over, it may have served some edits from it’s memstore. Since the recovery is handled differently for secondary replicas, the secondary has to ensure that it does not go back in time before it starts serving requests after assignment. For doing that, the secondary waits until it observes a full flush cycle (start flush, commit flush) or a “region open event” replicated from the primary. Until this happens, the secondary region replica will reject all read requests by throwing an IOException with message “The region's reads are disabled”. However, the other replicas will probably still be available to read, thus not causing any impact for the rpc with TIMELINE consistency. To facilitate faster recovery, the secondary region will trigger a flush request from the primary when it is opened. The configuration property `hbase.region.replica.wait.for.primary.flush` (enabled by default) can be used to disable this feature if needed. 
+When a secondary region replica first comes online, or fails over, it may have served some edits from its memstore. Since the recovery is handled differently for secondary replicas, the secondary has to ensure that it does not go back in time before it starts serving requests after assignment. For doing that, the secondary waits until it observes a full flush cycle (start flush, commit flush) or a “region open event” replicated from the primary. Until this happens, the secondary region replica will reject all read requests by throwing an IOException with message “The region's reads are disabled”. However, the other replicas will probably still be available to read, thus not causing any impact for the rpc with TIMELINE consistency. To facilitate faster recovery, the secondary region will trigger a flush request from the primary when it is opened. The configuration property `hbase.region.replica.wait.for.primary.flush` (enabled by default) can be used to disable this feature if needed.
 
 
 
@@ -2342,7 +2419,7 @@ When a secondary region replica first comes online, or fails over, it may have s
 
 To use highly available reads, you should set the following properties in `hbase-site.xml` file.
 There is no specific configuration to enable or disable region replicas.
-Instead you can change the number of region replicas per table to increase or decrease at the table creation or with alter table. The following configuration is for using async wal replication and using meta replicas of 3. 
+Instead, you can increase or decrease the number of region replicas per table at table creation or with an alter table operation (a per-table sketch follows below). The following configuration is for using async wal replication and meta replicas of 3.
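A hedged sketch of the per-table side (illustrative; an open Admin instance is assumed, and the table and family names are placeholders):

[source,java]
----
// Create a table with three replicas per region; the replica count can also be
// changed later with an alter.
HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("myTable"));
desc.setRegionReplication(3);
desc.addFamily(new HColumnDescriptor("cf"));
admin.createTable(desc);
----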
 
 
 ==== Server side properties
@@ -2369,7 +2446,7 @@ Instead you can change the number of region replicas per table to increase or de
     <name>hbase.region.replica.replication.enabled</name>
     <value>true</value>
     <description>
-      Whether asynchronous WAL replication to the secondary region replicas is enabled or not. If this is enabled, a replication peer named "region_replica_replication" will be created which will tail the logs and replicate the mutatations to region replicas for tables that have region replication > 1. If this is enabled once, disabling this replication also      requires disabling the replication peer using shell or ReplicationAdmin java class. Replication to secondary region replicas works over standard inter-cluster replication. So replication, if disabled explicitly, also has to be enabled by setting "hbase.replication"· to true for this feature to work.
+      Whether asynchronous WAL replication to the secondary region replicas is enabled or not. If this is enabled, a replication peer named "region_replica_replication" will be created which will tail the logs and replicate the mutations to region replicas for tables that have region replication > 1. If this is enabled once, disabling this replication also requires disabling the replication peer using shell or ReplicationAdmin java class. Replication to secondary region replicas works over standard inter-cluster replication. So replication, if disabled explicitly, also has to be enabled by setting "hbase.replication" to true for this feature to work.
     </description>
 </property>
 <property>
@@ -2403,7 +2480,7 @@ Instead you can change the number of region replicas per table to increase or de
 </property>
 
 
-<property> 
+<property>
     <name>hbase.region.replica.storefile.refresh.memstore.multiplier</name>
     <value>4</value>
     <description>
@@ -2466,7 +2543,7 @@ Ensure to set the following for all clients (and servers) that will use region r
 </property>
 ----
 
-Note HBase-1.0.x users should use `hbase.ipc.client.allowsInterrupt` rather than `hbase.ipc.client.specificThreadForWriting`. 
+Note HBase-1.0.x users should use `hbase.ipc.client.allowsInterrupt` rather than `hbase.ipc.client.specificThreadForWriting`.
 
 === User Interface
 
@@ -2537,7 +2614,7 @@ hbase> scan 't1', {CONSISTENCY => 'TIMELINE'}
 
 ==== Java
 
-You can set set the consistency for Gets and Scans and do requests as follows.
+You can set the consistency for Gets and Scans and do requests as follows.
 
 [source,java]
 ----

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/asf.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/asf.adoc b/src/main/asciidoc/_chapters/asf.adoc
index 77eed8f..47c29e5 100644
--- a/src/main/asciidoc/_chapters/asf.adoc
+++ b/src/main/asciidoc/_chapters/asf.adoc
@@ -35,13 +35,13 @@ HBase is a project in the Apache Software Foundation and as such there are respo
 [[asf.devprocess]]
 === ASF Development Process
 
-See the link:http://www.apache.org/dev/#committers[Apache Development Process page]            for all sorts of information on how the ASF is structured (e.g., PMC, committers, contributors), to tips on contributing and getting involved, and how open-source works at ASF. 
+See the link:http://www.apache.org/dev/#committers[Apache Development Process page] for all sorts of information on how the ASF is structured (e.g., PMC, committers, contributors), tips on contributing and getting involved, and how open-source works at the ASF.
 
 [[asf.reporting]]
 === ASF Board Reporting
 
 Once a quarter, each project in the ASF portfolio submits a report to the ASF board.
 This is done by the HBase project lead and the committers.
-See link:http://www.apache.org/foundation/board/reporting[ASF board reporting] for more information. 
+See link:http://www.apache.org/foundation/board/reporting[ASF board reporting] for more information.
 
 :numbered:

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/case_studies.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/case_studies.adoc b/src/main/asciidoc/_chapters/case_studies.adoc
index 992414c..b021aa2 100644
--- a/src/main/asciidoc/_chapters/case_studies.adoc
+++ b/src/main/asciidoc/_chapters/case_studies.adoc
@@ -55,7 +55,7 @@ These jobs were consistently found to be waiting on map and reduce tasks assigne
 
 .Datanodes:
 * Two 12-core processors
-* Six Enerprise SATA disks
+* Six Enterprise SATA disks
 * 24GB of RAM
 * Two bonded gigabit NICs
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/community.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
index 4b91b0d..ba07df7 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -45,35 +45,35 @@ See link:http://search-hadoop.com/m/asM982C5FkS1[HBase, mail # dev - Thoughts
 
 The below policy is something we put in place 09/2012.
 It is a suggested policy rather than a hard requirement.
-We want to try it first to see if it works before we cast it in stone. 
+We want to try it first to see if it works before we cast it in stone.
 
 Apache HBase is made of link:https://issues.apache.org/jira/browse/HBASE#selectedTab=com.atlassian.jira.plugin.system.project%3Acomponents-panel[components].
 Components have one or more <<owner,OWNER>>s.
-See the 'Description' field on the link:https://issues.apache.org/jira/browse/HBASE#selectedTab=com.atlassian.jira.plugin.system.project%3Acomponents-panel[components]        JIRA page for who the current owners are by component. 
+See the 'Description' field on the link:https://issues.apache.org/jira/browse/HBASE#selectedTab=com.atlassian.jira.plugin.system.project%3Acomponents-panel[components]        JIRA page for who the current owners are by component.
 
 Patches that fit within the scope of a single Apache HBase component require, at least, a +1 by one of the component's owners before commit.
-If owners are absent -- busy or otherwise -- two +1s by non-owners will suffice. 
+If owners are absent -- busy or otherwise -- two +1s by non-owners will suffice.
 
-Patches that span components need at least two +1s before they can be committed, preferably +1s by owners of components touched by the x-component patch (TODO: This needs tightening up but I think fine for first pass). 
+Patches that span components need at least two +1s before they can be committed, preferably +1s by owners of components touched by the x-component patch (TODO: This needs tightening up but I think fine for first pass).
 
-Any -1 on a patch by anyone vetos a patch; it cannot be committed until the justification for the -1 is addressed. 
+Any -1 on a patch by anyone vetoes a patch; it cannot be committed until the justification for the -1 is addressed.
 
 [[hbase.fix.version.in.jira]]
 .How to set fix version in JIRA on issue resolve
 
 Here is how link:http://search-hadoop.com/m/azemIi5RCJ1[we agreed] to set versions in JIRA when we resolve an issue.
-If trunk is going to be 0.98.0 then: 
+If master is going to be 0.98.0 then:
 
-* Commit only to trunk: Mark with 0.98 
-* Commit to 0.95 and trunk : Mark with 0.98, and 0.95.x 
-* Commit to 0.94.x and 0.95, and trunk: Mark with 0.98, 0.95.x, and 0.94.x 
-* Commit to 89-fb: Mark with 89-fb. 
-* Commit site fixes: no version 
+* Commit only to master: Mark with 0.98
+* Commit to 0.95 and master: Mark with 0.98, and 0.95.x
+* Commit to 0.94.x and 0.95, and master: Mark with 0.98, 0.95.x, and 0.94.x
+* Commit to 89-fb: Mark with 89-fb.
+* Commit site fixes: no version
 
 [[hbase.when.to.close.jira]]
 .Policy on when to set a RESOLVED JIRA as CLOSED
 
-We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA. 
+We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that list multiple releases in their _Fix Version/s_ field, CLOSE the issue on the release of any of the versions listed; subsequent change to the issue must happen in a new JIRA.
 
 [[no.permanent.state.in.zk]]
 .Only transient state in ZooKeeper!
@@ -81,7 +81,7 @@ We link:http://search-hadoop.com/m/4cIKs1iwXMS1[agreed] that for issues that lis
 You should be able to kill the data in zookeeper and hbase should ride over it recreating the zk content as it goes.
 This is an old adage around these parts.
 We just made note of it now.
-We also are currently in violation of this basic tenet -- replication at least keeps permanent state in zk -- but we are working to undo this breaking of a golden rule. 
+We also are currently in violation of this basic tenet -- replication at least keeps permanent state in zk -- but we are working to undo this breaking of a golden rule.
 
 [[community.roles]]
 == Community Roles
@@ -90,22 +90,22 @@ We also are currently in violation of this basic tenet -- replication at least k
 .Component Owner/Lieutenant
 
 Component owners are listed in the description field on this Apache HBase JIRA link:https://issues.apache.org/jira/browse/HBASE#selectedTab=com.atlassian.jira.plugin.system.project%3Acomponents-panel[components]        page.
-The owners are listed in the 'Description' field rather than in the 'Component Lead' field because the latter only allows us list one individual whereas it is encouraged that components have multiple owners. 
+The owners are listed in the 'Description' field rather than in the 'Component Lead' field because the latter only allows us to list one individual whereas it is encouraged that components have multiple owners.
 
-Owners or component lieutenants are volunteers who are (usually, but not necessarily) expert in their component domain and may have an agenda on how they think their Apache HBase component should evolve. 
+Owners or component lieutenants are volunteers who are (usually, but not necessarily) expert in their component domain and may have an agenda on how they think their Apache HBase component should evolve.
 
-. Owners will try and review patches that land within their component's scope. 
-. If applicable, if an owner has an agenda, they will publish their goals or the design toward which they are driving their component 
+. Owners will try and review patches that land within their component's scope.
+. If applicable, if an owner has an agenda, they will publish their goals or the design toward which they are driving their component
 
 If you would like to volunteer as a component owner, just write the dev list and we'll sign you up.
-Owners do not need to be committers. 
+Owners do not need to be committers.
 
 [[hbase.commit.msg.format]]
 == Commit Message format
 
-We link:http://search-hadoop.com/m/Gwxwl10cFHa1[agreed] to the following SVN commit message format: 
+We link:http://search-hadoop.com/m/Gwxwl10cFHa1[agreed] to the following Git commit message format:
 [source]
 ----
 HBASE-xxxxx <title>. (<contributor>)
----- 
-If the person making the commit is the contributor, leave off the '(<contributor>)' element. 
+----
+If the person making the commit is the contributor, leave off the '(<contributor>)' element.

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/compression.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/compression.adoc b/src/main/asciidoc/_chapters/compression.adoc
index 42d4de5..462bce3 100644
--- a/src/main/asciidoc/_chapters/compression.adoc
+++ b/src/main/asciidoc/_chapters/compression.adoc
@@ -144,15 +144,15 @@ In general, you need to weigh your options between smaller size and faster compr
 
 The Hadoop shared library has a bunch of facilities, including compression libraries and fast crc'ing. To make these facilities available to HBase, do the following. HBase/Hadoop will fall back to using alternatives if it cannot find the native library versions -- or fail outright if you are asking for an explicit compressor and there is no alternative available.
 
-If you see the following in your HBase logs, you know that HBase was unable to locate the Hadoop native libraries: 
+If you see the following in your HBase logs, you know that HBase was unable to locate the Hadoop native libraries:
 [source]
 ----
 2014-08-07 09:26:20,139 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-----      
-If the libraries loaded successfully, the WARN message does not show. 
+----
+If the libraries loaded successfully, the WARN message does not show.
 
-Lets presume your Hadoop shipped with a native library that suits the platform you are running HBase on.
-To check if the Hadoop native library is available to HBase, run the following tool (available in  Hadoop 2.1 and greater): 
+Let's presume your Hadoop shipped with a native library that suits the platform you are running HBase on.
+To check if the Hadoop native library is available to HBase, run the following tool (available in  Hadoop 2.1 and greater):
 [source]
 ----
 $ ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
@@ -165,28 +165,28 @@ lz4:    false
 bzip2:  false
 2014-08-26 13:15:38,863 INFO  [main] util.ExitUtil: Exiting with status 1
 ----
-Above shows that the native hadoop library is not available in HBase context. 
+The above shows that the native Hadoop library is not available in the HBase context.
 
 To fix the above, either copy the Hadoop native libraries local or symlink to them if the Hadoop and HBase installs are adjacent in the filesystem.
 You could also point at their location by setting the `LD_LIBRARY_PATH` environment variable.
 
-Where the JVM looks to find native librarys is "system dependent" (See `java.lang.System#loadLibrary(name)`). On linux, by default, is going to look in _lib/native/PLATFORM_ where `PLATFORM`      is the label for the platform your HBase is installed on.
+Where the JVM looks to find native libraries is "system dependent" (see `java.lang.System#loadLibrary(name)`). On Linux, by default, it is going to look in _lib/native/PLATFORM_ where `PLATFORM` is the label for the platform your HBase is installed on.
 On a local linux machine, it seems to be the concatenation of the java properties `os.name` and `os.arch` followed by whether 32 or 64 bit.
 HBase on startup prints out all of the java system properties so find the os.name and os.arch in the log.
-For example: 
+For example:
 [source]
 ----
 ...
 2014-08-06 15:27:22,853 INFO  [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
 2014-08-06 15:27:22,853 INFO  [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
 ...
-----     
+----
 So in this case, the PLATFORM string is `Linux-amd64-64`.
 Copying the Hadoop native libraries or symlinking at _lib/native/Linux-amd64-64_     will ensure they are found.
 Check with the Hadoop _NativeLibraryChecker_.
- 
 
-Here is example of how to point at the Hadoop libs with `LD_LIBRARY_PATH`      environment variable: 
+
+Here is an example of how to point at the Hadoop libs with the `LD_LIBRARY_PATH` environment variable:
 [source]
 ----
 $ LD_LIBRARY_PATH=~/hadoop-2.5.0-SNAPSHOT/lib/native ./bin/hbase --config ~/conf_hbase org.apache.hadoop.util.NativeLibraryChecker
@@ -199,7 +199,7 @@ snappy: true /usr/lib64/libsnappy.so.1
 lz4:    true revision:99
 bzip2:  true /lib64/libbz2.so.1
 ----
-Set in _hbase-env.sh_ the LD_LIBRARY_PATH environment variable when starting your HBase. 
+Set in _hbase-env.sh_ the LD_LIBRARY_PATH environment variable when starting your HBase.
 
 === Compressor Configuration, Installation, and Use
 
@@ -210,13 +210,13 @@ Before HBase can use a given compressor, its libraries need to be available.
 Due to licensing issues, only GZ compression is available to HBase (via native Java libraries) in a default installation.
 Other compression libraries are available via the shared library bundled with your hadoop.
 The hadoop native library needs to be findable when HBase starts.
-See 
+See
 
 .Compressor Support On the Master
 
 A new configuration setting was introduced in HBase 0.95, to check the Master to determine which data block encoders are installed and configured on it, and assume that the entire cluster is configured the same.
 This option, `hbase.master.check.compression`, defaults to `true`.
-This prevents the situation described in link:https://issues.apache.org/jira/browse/HBASE-6370[HBASE-6370], where a table is created or modified to support a codec that a region server does not support, leading to failures that take a long time to occur and are difficult to debug. 
+This prevents the situation described in link:https://issues.apache.org/jira/browse/HBASE-6370[HBASE-6370], where a table is created or modified to support a codec that a region server does not support, leading to failures that take a long time to occur and are difficult to debug.
 
 If `hbase.master.check.compression` is enabled, libraries for all desired compressors need to be installed and configured on the Master, even if the Master does not run a region server.
 
@@ -232,7 +232,7 @@ See <<brand.new.compressor,brand.new.compressor>>).
 
 HBase cannot ship with LZO because of incompatibility between HBase, which uses an Apache Software License (ASL) and LZO, which uses a GPL license.
 See the link:http://wiki.apache.org/hadoop/UsingLzoCompression[Using LZO
-              Compression] wiki page for information on configuring LZO support for HBase. 
+              Compression] wiki page for information on configuring LZO support for HBase.
 
 If you depend upon LZO compression, consider configuring your RegionServers to fail to start if LZO is not available.
 See <<hbase.regionserver.codecs,hbase.regionserver.codecs>>.
@@ -244,19 +244,19 @@ LZ4 support is bundled with Hadoop.
 Make sure the hadoop shared library (libhadoop.so) is accessible when you start HBase.
 After configuring your platform (see <<hbase.native.platform,hbase.native.platform>>), you can make a symbolic link from HBase to the native Hadoop libraries.
 This assumes the two software installs are colocated.
-For example, if my 'platform' is Linux-amd64-64: 
+For example, if my 'platform' is Linux-amd64-64:
 [source,bourne]
 ----
 $ cd $HBASE_HOME
 $ mkdir lib/native
 $ ln -s $HADOOP_HOME/lib/native lib/native/Linux-amd64-64
-----            
+----
 Use the compression tool to check that LZ4 is installed on all nodes.
 Start up (or restart) HBase.
-Afterward, you can create and alter tables to enable LZ4 as a compression codec.: 
+Afterward, you can create and alter tables to enable LZ4 as a compression codec:
 ----
 hbase(main):003:0> alter 'TestTable', {NAME => 'info', COMPRESSION => 'LZ4'}
-----          
+----
 
 [[snappy.compression.installation]]
 .Install Snappy Support
@@ -347,7 +347,7 @@ You must specify either `-write` or `-update-read` as your first parameter, and
 ====
 ----
 
-$ bin/hbase org.apache.hadoop.hbase.util.LoadTestTool -h            
+$ bin/hbase org.apache.hadoop.hbase.util.LoadTestTool -h
 usage: bin/hbase org.apache.hadoop.hbase.util.LoadTestTool <options>
 Options:
  -batchupdate                 Whether to use batch as opposed to separate

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/configuration.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/configuration.adoc b/src/main/asciidoc/_chapters/configuration.adoc
index 01f2eb7..495232f 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -98,6 +98,11 @@ This section lists required services and some required system configuration.
 |JDK 7
 |JDK 8
 
+|1.2
+|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
+|yes
+|yes
+
 |1.1
 |link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
 |yes
@@ -116,11 +121,6 @@ deprecated `remove()` method of the `PoolMap` class and is under consideration.
 link:https://issues.apache.org/jira/browse/HBASE-7608[HBASE-7608] for more information about JDK 8
 support.
 
-|0.96
-|yes
-|yes
-|N/A
-
 |0.94
 |yes
 |yes
@@ -162,7 +162,7 @@ For example, assuming that a schema had 3 ColumnFamilies per region with an aver
 +
 Another related setting is the number of processes a user is allowed to run at once. In Linux and Unix, the number of processes is set using the `ulimit -u` command. This should not be confused with the `nproc` command, which controls the number of CPUs available to a given user. Under load, a `ulimit -u` that is too low can cause OutOfMemoryError exceptions. See Jack Levin's major HDFS issues thread on the hbase-users mailing list, from 2011.
 +
-Configuring the maximum number of file descriptors and processes for the user who is running the HBase process is an operating system configuration, rather than an HBase configuration. It is also important to be sure that the settings are changed for the user that actually runs HBase. To see which user started HBase, and that user's ulimit configuration, look at the first line of the HBase log for that instance. A useful read setting config on you hadoop cluster is Aaron Kimballs' Configuration Parameters: What can you just ignore?
+Configuring the maximum number of file descriptors and processes for the user who is running the HBase process is an operating system configuration, rather than an HBase configuration. It is also important to be sure that the settings are changed for the user that actually runs HBase. To see which user started HBase, and that user's ulimit configuration, look at the first line of the HBase log for that instance. A useful read on setting config for your Hadoop cluster is Aaron Kimball's Configuration Parameters: What can you just ignore?
 +
 .`ulimit` Settings on Ubuntu
 ====
@@ -210,24 +210,39 @@ Use the following legend to interpret this table:
 * "X" = not supported
 * "NT" = Not tested
 
-[cols="1,1,1,1,1,1,1", options="header"]
+[cols="1,1,1,1,1,1", options="header"]
 |===
-| | HBase-0.92.x | HBase-0.94.x | HBase-0.96.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x
-|Hadoop-0.20.205 | S | X | X | X | X | X
-|Hadoop-0.22.x | S | X | X | X | X | X
-|Hadoop-1.0.x  |X | X | X | X | X | X
-|Hadoop-1.1.x | NT | S | S | NT | X | X
-|Hadoop-0.23.x | X | S | NT | X | X | X
-|Hadoop-2.0.x-alpha | X | NT | X | X | X | X
-|Hadoop-2.1.0-beta | X | NT | S | X | X | X
-|Hadoop-2.2.0 | X | NT | S | S | NT | NT
-|Hadoop-2.3.x | X | NT | S | S | NT | NT
-|Hadoop-2.4.x | X | NT | S | S | S | S
-|Hadoop-2.5.x | X | NT | S | S | S | S
-|Hadoop-2.6.x | X | NT | NT | NT | S | S
-|Hadoop-2.7.x | X | NT | NT | NT | NT | NT
+| | HBase-0.94.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x | HBase-1.2.x
+|Hadoop-1.0.x  | X | X | X | X | X
+|Hadoop-1.1.x | S | NT | X | X | X
+|Hadoop-0.23.x | S | X | X | X | X
+|Hadoop-2.0.x-alpha | NT | X | X | X | X
+|Hadoop-2.1.0-beta | NT | X | X | X | X
+|Hadoop-2.2.0 | NT | S | NT | NT | NT
+|Hadoop-2.3.x | NT | S | NT | NT | NT
+|Hadoop-2.4.x | NT | S | S | S | S
+|Hadoop-2.5.x | NT | S | S | S | S
+|Hadoop-2.6.0 | X | X | X | X | X
+|Hadoop-2.6.1+ | NT | NT | NT | NT | S
+|Hadoop-2.7.0 | X | X | X | X | X
+|Hadoop-2.7.1+ | NT | NT | NT | NT | S
 |===
 
+.Hadoop 2.6.x
+[TIP]
+====
+Hadoop distributions based on the 2.6.x line *must* have
+link:https://issues.apache.org/jira/browse/HADOOP-11710[HADOOP-11710] applied if you plan to run
+HBase on top of an HDFS Encryption Zone. Failure to do so will result in cluster failure and
+data loss. This patch is present in Apache Hadoop releases 2.6.1+.
+====
+
+.Hadoop 2.7.x
+[TIP]
+====
+Hadoop version 2.7.0 is not tested or supported as the Hadoop PMC has explicitly labeled that release as not being stable.
+====
+
 .Replace the Hadoop Bundled With HBase!
 [NOTE]
 ====
@@ -396,7 +411,7 @@ Zookeeper binds to a well known port so clients may talk to HBase.
 
 === Distributed
 
-Distributed mode can be subdivided into distributed but all daemons run on a single node -- a.k.a _pseudo-distributed_ -- and _fully-distributed_ where the daemons are spread across all nodes in the cluster.
+Distributed mode can be subdivided into distributed but all daemons run on a single node -- a.k.a. _pseudo-distributed_ -- and _fully-distributed_ where the daemons are spread across all nodes in the cluster.
 The _pseudo-distributed_ vs. _fully-distributed_ nomenclature comes from Hadoop.
 
 Pseudo-distributed mode can run against the local filesystem or it can run against an instance of the _Hadoop Distributed File System_ (HDFS). Fully-distributed mode can ONLY run on HDFS.
@@ -526,7 +541,7 @@ HBase logs can be found in the _logs_ subdirectory.
 Check them out especially if HBase had trouble starting.
 
 HBase also puts up a UI listing vital attributes.
-By default it's deployed on the Master host at port 16010 (HBase RegionServers listen on port 16020 by default and put up an informational HTTP server at port 16030). If the Master is running on a host named `master.example.org` on the default port, point your browser at _http://master.example.org:16010_ to see the web interface.
+By default it's deployed on the Master host at port 16010 (HBase RegionServers listen on port 16020 by default and put up an informational HTTP server at port 16030). If the Master is running on a host named `master.example.org` on the default port, point your browser at pass:[http://master.example.org:16010] to see the web interface.
 
 Prior to HBase 0.98 the master UI was deployed on port 60010, and the HBase RegionServers UI on port 60030.
 
@@ -550,7 +565,7 @@ If you are running a distributed operation, be sure to wait until HBase has shut
 === _hbase-site.xml_ and _hbase-default.xml_
 
 Just as in Hadoop where you add site-specific HDFS configuration to the _hdfs-site.xml_ file, for HBase, site specific customizations go into the file _conf/hbase-site.xml_.
-For the list of configurable properties, see <<hbase_default_configurations,hbase default configurations>> below or view the raw _hbase-default.xml_ source file in the HBase source code at _src/main/resources_. 
+For the list of configurable properties, see <<hbase_default_configurations,hbase default configurations>> below or view the raw _hbase-default.xml_ source file in the HBase source code at _src/main/resources_.
 
 Not all configuration options make it out to _hbase-default.xml_.
 Configuration that it is thought rare anyone would change can exist only in code; the only way to turn up such configurations is via a reading of the source code itself.
@@ -558,7 +573,7 @@ Configuration that it is thought rare anyone would change can exist only in code
 Currently, changes here will require a cluster restart for HBase to notice the change.
 // hbase/src/main/asciidoc
 //
-include::../../../../target/asciidoc/hbase-default.adoc[]
+include::{docdir}/../../../target/asciidoc/hbase-default.adoc[]
 
 
 [[hbase.env.sh]]
@@ -590,7 +605,7 @@ ZooKeeper is where all these values are kept.
 Thus clients require the location of the ZooKeeper ensemble before they can do anything else.
 Usually this the ensemble location is kept out in the _hbase-site.xml_ and is picked up by the client from the `CLASSPATH`.
 
-If you are configuring an IDE to run a HBase client, you should include the _conf/_ directory on your classpath so _hbase-site.xml_ settings can be found (or add _src/test/resources_ to pick up the hbase-site.xml used by tests). 
+If you are configuring an IDE to run an HBase client, you should include the _conf/_ directory on your classpath so _hbase-site.xml_ settings can be found (or add _src/test/resources_ to pick up the hbase-site.xml used by tests).
 
 Minimally, a client of HBase needs several libraries in its `CLASSPATH` when connecting to a cluster, including:
 [source]
@@ -607,7 +622,7 @@ slf4j-log4j (slf4j-log4j12-1.5.8.jar)
 zookeeper (zookeeper-3.4.2.jar)
 ----
 
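For instance, a minimal client sketch that relies on picking up _hbase-site.xml_ (and with it the ZooKeeper quorum) from the `CLASSPATH` could look like the following; the table name and row key are assumptions for illustration:
[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MinimalHBaseClient {
  public static void main(String[] args) throws Exception {
    // HBaseConfiguration.create() reads hbase-site.xml from the CLASSPATH,
    // which is where the client finds the ZooKeeper ensemble location.
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("myTable"))) {
      Result r = table.get(new Get(Bytes.toBytes("row1")));
      System.out.println("value = " + Bytes.toStringBinary(r.value()));
    }
  }
}
----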
-An example basic _hbase-site.xml_ for client only might look as follows: 
+An example basic _hbase-site.xml_ for client only might look as follows:
 [source,xml]
 ----
 <?xml version="1.0"?>
@@ -903,7 +918,7 @@ See <<master.processes.loadbalancer,master.processes.loadbalancer>> for more inf
 ==== Disabling Blockcache
 
 Do not turn off block cache (You'd do it by setting `hbase.block.cache.size` to zero). Currently we do not do well if you do this because the RegionServer will spend all its time loading HFile indices over and over again.
-If your working set it such that block cache does you no good, at least size the block cache such that HFile indices will stay up in the cache (you can get a rough idea on the size you need by surveying RegionServer UIs; you'll see index block size accounted near the top of the webpage).
+If your working set is such that block cache does you no good, at least size the block cache such that HFile indices will stay up in the cache (you can get a rough idea on the size you need by surveying RegionServer UIs; you'll see index block size accounted near the top of the webpage).
 
 [[nagles]]
 ==== link:http://en.wikipedia.org/wiki/Nagle's_algorithm[Nagle's] or the small package problem
@@ -916,7 +931,7 @@ You might also see the graphs on the tail of link:https://issues.apache.org/jira
 ==== Better Mean Time to Recover (MTTR)
 
 This section is about configurations that will make servers come back faster after a fail.
-See the Deveraj Das an Nicolas Liochon blog post link:http://hortonworks.com/blog/introduction-to-hbase-mean-time-to-recover-mttr/[Introduction to HBase Mean Time to Recover (MTTR)] for a brief introduction.
+See the Devaraj Das and Nicolas Liochon blog post link:http://hortonworks.com/blog/introduction-to-hbase-mean-time-to-recover-mttr/[Introduction to HBase Mean Time to Recover (MTTR)] for a brief introduction.
 
 The issue link:https://issues.apache.org/jira/browse/HBASE-8389[HBASE-8354 forces Namenode into loop with lease recovery requests] is messy but has a bunch of good discussion toward the end on low timeouts and how to effect faster recovery including citation of fixes added to HDFS. Read the Varun Sharma comments.
 The below suggested configurations are Varun's suggestions distilled and tested.
@@ -988,7 +1003,7 @@ See the link:http://docs.oracle.com/javase/6/docs/technotes/guides/management/ag
 Historically, besides above port mentioned, JMX opens two additional random TCP listening ports, which could lead to port conflict problem. (See link:https://issues.apache.org/jira/browse/HBASE-10289[HBASE-10289] for details)
 
 As an alternative, You can use the coprocessor-based JMX implementation provided by HBase.
-To enable it in 0.99 or above, add below property in _hbase-site.xml_: 
+To enable it in 0.99 or above, add below property in _hbase-site.xml_:
 
 [source,xml]
 ----
@@ -1019,7 +1034,7 @@ The registry port can be shared with connector port in most cases, so you only n
 However if you want to use SSL communication, the 2 ports must be configured to different values.
 
 By default the password authentication and SSL communication is disabled.
-To enable password authentication, you need to update _hbase-env.sh_          like below: 
+To enable password authentication, you need to update _hbase-env.sh_          like below:
 [source,bash]
 ----
 export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.authenticate=true                  \
@@ -1046,7 +1061,7 @@ keytool -export -alias jconsole -keystore myKeyStore -file jconsole.cert
 keytool -import -alias jconsole -keystore jconsoleKeyStore -file jconsole.cert
 ----
 
-And then update _hbase-env.sh_ like below: 
+And then update _hbase-env.sh_ like below:
 
 [source,bash]
 ----
@@ -1068,12 +1083,12 @@ Finally start `jconsole` on the client using the key store:
 jconsole -J-Djavax.net.ssl.trustStore=/home/tianq/jconsoleKeyStore
 ----
 
-NOTE: To enable the HBase JMX implementation on Master, you also need to add below property in _hbase-site.xml_: 
+NOTE: To enable the HBase JMX implementation on Master, you also need to add below property in _hbase-site.xml_:
 
 [source,xml]
 ----
 <property>
-  <ame>hbase.coprocessor.master.classes</name>
+  <name>hbase.coprocessor.master.classes</name>
   <value>org.apache.hadoop.hbase.JMXListener</value>
 </property>
 ----


http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/schema_design.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc b/src/main/asciidoc/_chapters/schema_design.adoc
index 9319c65..5cf8d12 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -27,7 +27,19 @@
 :icons: font
 :experimental:
 
-A good general introduction on the strength and weaknesses modelling on the various non-rdbms datastores is Ian Varley's Master thesis, link:http://ianvarley.com/UT/MR/Varley_MastersReport_Full_2009-08-07.pdf[No Relation: The Mixed Blessings of Non-Relational Databases]. Also, read <<keyvalue,keyvalue>> for how HBase stores data internally, and the section on <<schema.casestudies,schema.casestudies>>.
+A good introduction to the strengths and weaknesses of modeling on the various non-RDBMS datastores
+can be found in Ian Varley's master's thesis,
+link:http://ianvarley.com/UT/MR/Varley_MastersReport_Full_2009-08-07.pdf[No Relation: The Mixed Blessings of Non-Relational Databases].
+It is a little dated now, but it is a good background read, if you have a moment, on how HBase schema modeling
+differs from how it is done in an RDBMS. Also,
+read <<keyvalue,keyvalue>> for how HBase stores data internally, and the section on <<schema.casestudies,schema.casestudies>>.
+
+The documentation on the Cloud Bigtable website, link:https://cloud.google.com/bigtable/docs/schema-design[Designing Your Schema],
+is pertinent and nicely done, and the lessons learned there apply equally here in HBase land; just divide
+any quoted values by ~10 to get what works for HBase: e.g. where it says individual values can be ~10MB in size, HBase can do similar -- perhaps best
+to go smaller if you can -- and where it says a maximum of 100 column families in Cloud Bigtable, think ~10 when
+modeling on HBase.
+
 
 [[schema.creation]]
 ==  Schema Creation
@@ -41,7 +53,7 @@ Tables must be disabled when making ColumnFamily modifications, for example:
 
 Configuration config = HBaseConfiguration.create();
 Admin admin = new Admin(conf);
-String table = "myTable";
+TableName table = TableName.valueOf("myTable");
 
 admin.disableTable(table);
 
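A fuller, runnable sketch of the same disable/modify/enable cycle follows (an illustration under assumptions: in the 1.x API an `Admin` comes from a `Connection` rather than being constructed directly, and the column family name `cf1` is made up):
[source,java]
----
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class AlterColumnFamilyExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      TableName table = TableName.valueOf("myTable");

      admin.disableTable(table);

      // Modify an existing column family; 'cf1' is assumed to exist already.
      HColumnDescriptor cf = new HColumnDescriptor("cf1");
      cf.setMaxVersions(3);
      admin.modifyColumn(table, cf);

      admin.enableTable(table);
    }
  }
}
----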
@@ -64,6 +76,50 @@ When changes are made to either Tables or ColumnFamilies (e.g. region size, bloc
 
 See <<store,store>> for more information on StoreFiles.
 
+[[table_schema_rules_of_thumb]]
+== Table Schema Rules Of Thumb
+
+There are many different data sets, with different access patterns and service-level
+expectations. Therefore, these rules of thumb are only an overview. Read the rest
+of this chapter to get more details after you have gone through this list.
+
+* Aim to have regions sized between 10 and 50 GB.
+* Aim to have cells no larger than 10 MB, or 50 MB if you use <<mob>>. Otherwise,
+consider storing your cell data in HDFS and store a pointer to the data in HBase.
+* A typical schema has between 1 and 3 column families per table. HBase tables should
+not be designed to mimic RDBMS tables.
+* Around 50-100 regions is a good number for a table with 1 or 2 column families.
+Remember that a region is a contiguous segment of a column family.
+* Keep your column family names as short as possible. The column family names are
+stored for every value (ignoring prefix encoding). They should not be self-documenting
+and descriptive like in a typical RDBMS.
+* If you are storing time-based machine data or logging information, and the row key
+is based on device ID or service ID plus time, you can end up with a pattern where
+older data regions never have additional writes beyond a certain age. In this type
+of situation, you end up with a small number of active regions and a large number
+of older regions which have no new writes. For these situations, you can tolerate
+a larger number of regions because your resource consumption is driven by the active
+regions only.
+* If only one column family is busy with writes, only that column family accumulates
+memory. Be aware of write patterns when allocating resources.
+
+[[regionserver_sizing_rules_of_thumb]]
+== RegionServer Sizing Rules of Thumb
+
+Lars Hofhansl wrote a great
+link:http://hadoop-hbase.blogspot.com/2013/01/hbase-region-server-memory-sizing.html[blog post]
+about RegionServer memory sizing. The upshot is that you probably need more memory
+than you think you need. He goes into the impact of region size, memstore size, HDFS
+replication factor, and other things to check.
+
+[quote, Lars Hofhansl, http://hadoop-hbase.blogspot.com/2013/01/hbase-region-server-memory-sizing.html]
+____
+Personally I would place the maximum disk space per machine that can be served
+exclusively with HBase around 6T, unless you have a very read-heavy workload.
+In that case the Java heap should be 32GB (20G regions, 128M memstores, the rest
+defaults).
+____
+
 [[number.of.cfs]]
 ==  On the number of column families
 
@@ -175,7 +231,7 @@ See this comic by IKai Lan on why monotonically increasing row keys are problema
 The pile-up on a single region brought on by monotonically increasing keys can be mitigated by randomizing the input records to not be in sorted order, but in general it's best to avoid using a timestamp or a sequence (e.g. 1, 2, 3) as the row-key.
 
 If you do need to upload time series data into HBase, you should study link:http://opentsdb.net/[OpenTSDB] as a successful example.
-It has a page describing the link: http://opentsdb.net/schema.html[schema] it uses in HBase.
+It has a page describing the link:http://opentsdb.net/schema.html[schema] it uses in HBase.
 The key format in OpenTSDB is effectively [metric_type][event_timestamp], which would appear at first glance to contradict the previous advice about not using a timestamp as the key.
 However, the difference is that the timestamp is not in the _lead_ position of the key, and the design assumption is that there are dozens or hundreds (or more) of different metric types.
 Thus, even with a continual stream of input data with a mix of metric types, the Puts are distributed across various points of regions in the table.
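For illustration only, a sketch of composing such a [metric_type][event_timestamp] key with the `Bytes` utility, keeping the timestamp out of the lead position (the metric id value is an assumption):
[source,java]
----
import org.apache.hadoop.hbase.util.Bytes;

public class MetricRowKey {
  /** Sketch: build a rowkey of the form [metric_type][event_timestamp]. */
  public static byte[] rowKey(short metricType, long eventTimestamp) {
    return Bytes.add(Bytes.toBytes(metricType), Bytes.toBytes(eventTimestamp));
  }

  public static void main(String[] args) {
    byte[] key = rowKey((short) 42, System.currentTimeMillis()); // assumed metric id 42
    System.out.println(Bytes.toStringBinary(key));
  }
}
----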
@@ -327,8 +383,8 @@ As an example of why this is important, consider the example of using displayabl
 
 The problem is that all the data is going to pile up in the first 2 regions and the last region thus creating a "lumpy" (and possibly "hot") region problem.
 To understand why, refer to an link:http://www.asciitable.com[ASCII Table].
-'0' is byte 48, and 'f' is byte 102, but there is a huge gap in byte values (bytes 58 to 96) that will _never appear in this keyspace_ because the only values are [0-9] and [a-f]. Thus, the middle regions regions will never be used.
-To make pre-spliting work with this example keyspace, a custom definition of splits (i.e., and not relying on the built-in split method) is required.
+'0' is byte 48, and 'f' is byte 102, but there is a huge gap in byte values (bytes 58 to 96) that will _never appear in this keyspace_ because the only values are [0-9] and [a-f]. Thus, the middle regions will never be used.
+To make pre-splitting work with this example keyspace, a custom definition of splits (i.e., and not relying on the built-in split method) is required.
 
 Lesson #1: Pre-splitting tables is generally a best practice, but you need to pre-split them in such a way that all the regions are accessible in the keyspace.
 While this example demonstrated the problem with a hex-key keyspace, the same problem can happen with _any_ keyspace.
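As one possible shape of such a custom split definition (a sketch: the table and family names are assumptions, and an open `Admin` is assumed to be in scope), the split points can be chosen so that they are themselves valid hex-string keys:
[source,java]
----
// Sketch: pre-split on boundaries that actually occur in a lower-case hex keyspace.
byte[][] splits = new byte[][] {
  Bytes.toBytes("2000000000000000"),
  Bytes.toBytes("4000000000000000"),
  Bytes.toBytes("6000000000000000"),
  Bytes.toBytes("8000000000000000"),
  Bytes.toBytes("a000000000000000"),
  Bytes.toBytes("c000000000000000"),
  Bytes.toBytes("e000000000000000")
};
HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("hexkeys"));
desc.addFamily(new HColumnDescriptor("f"));
admin.createTable(desc, splits);
----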
@@ -394,7 +450,7 @@ The minimum number of row versions parameter is used together with the time-to-l
 HBase supports a "bytes-in/bytes-out" interface via link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html[Put] and link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Result.html[Result], so anything that can be converted to an array of bytes can be stored as a value.
 Input could be strings, numbers, complex objects, or even images as long as they can be rendered as bytes.
 
-There are practical limits to the size of values (e.g., storing 10-50MB objects in HBase would probably be too much to ask); search the mailling list for conversations on this topic.
+There are practical limits to the size of values (e.g., storing 10-50MB objects in HBase would probably be too much to ask); search the mailing list for conversations on this topic.
 All rows in HBase conform to the <<datamodel>>, and that includes versioning.
 Take that into consideration when making your design, as well as block size for the ColumnFamily.
 
@@ -502,7 +558,7 @@ ROW                                              COLUMN+CELL
 
 Notice how delete cells are let go.
 
-Now lets run the same test only with `KEEP_DELETED_CELLS` set on the table (you can do table or per-column-family):
+Now let's run the same test only with `KEEP_DELETED_CELLS` set on the table (you can do table or per-column-family):
 
 [source]
 ----
@@ -593,7 +649,7 @@ However, don't try a full-scan on a large table like this from an application (i
 [[secondary.indexes.periodic]]
 ===  Periodic-Update Secondary Index
 
-A secondary index could be created in an other table which is periodically updated via a MapReduce job.
+A secondary index could be created in another table which is periodically updated via a MapReduce job.
 The job could be executed intra-day, but depending on load-strategy it could still potentially be out of sync with the main data table.
 
 See <<mapreduce.example.readwrite,mapreduce.example.readwrite>> for more information.
@@ -620,8 +676,13 @@ For more information, see <<coprocessors,coprocessors>>
 == Constraints
 
 HBase currently supports 'constraints' in traditional (SQL) database parlance.
-The advised usage for Constraints is in enforcing business rules for attributes in the table (e.g. make sure values are in the range 1-10). Constraints could also be used to enforce referential integrity, but this is strongly discouraged as it will dramatically decrease the write throughput of the tables where integrity checking is enabled.
-Extensive documentation on using Constraints can be found at: link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/constraint[Constraint] since version 0.94.
+The advised usage for Constraints is in enforcing business rules for attributes
+in the table (e.g. make sure values are in the range 1-10). Constraints could
+also be used to enforce referential integrity, but this is strongly discouraged
+as it will dramatically decrease the write throughput of the tables where integrity
+checking is enabled. Extensive documentation on using Constraints, available
+since version 0.94, can be found in the
+link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/constraint/Constraint.html[Constraint] javadoc.
 
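A rough sketch of the business-rule case might look like the following, assuming the `BaseConstraint` convenience class and the `check(Put)` hook described in the linked javadoc; the family and qualifier names are illustrative:
[source,java]
----
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.constraint.BaseConstraint;
import org.apache.hadoop.hbase.constraint.ConstraintException;
import org.apache.hadoop.hbase.util.Bytes;

/** Reject any Put whose cf:rating value falls outside the range 1-10. */
public class RatingInRangeConstraint extends BaseConstraint {
  @Override
  public void check(Put p) throws ConstraintException {
    List<Cell> cells = p.get(Bytes.toBytes("cf"), Bytes.toBytes("rating"));
    for (Cell cell : cells) {
      long rating = Bytes.toLong(CellUtil.cloneValue(cell));
      if (rating < 1 || rating > 10) {
        throw new ConstraintException("rating must be between 1 and 10, got " + rating);
      }
    }
  }
}
----
The constraint would then be attached to the table descriptor before the table is created, for example with `Constraints.add(tableDescriptor, RatingInRangeConstraint.class)` (see the javadoc for the exact helper signatures).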
 [[schema.casestudies]]
 == Schema Design Case Studies
@@ -700,7 +761,7 @@ See https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#se
 ====
 
 [[schema.casestudies.log_timeseries.varkeys]]
-==== Variangle Length or Fixed Length Rowkeys?
+==== Variable Length or Fixed Length Rowkeys?
 
 It is critical to remember that rowkeys are stamped on every column in HBase.
 If the hostname is `a` and the event type is `e1` then the resulting rowkey would be quite small.
@@ -721,10 +782,12 @@ Composite Rowkey With Numeric Substitution:
 For this approach another lookup table would be needed in addition to LOG_DATA, called LOG_TYPES.
 The rowkey of LOG_TYPES would be:
 
-* [type] (e.g., byte indicating hostname vs. event-type)
-* [bytes] variable length bytes for raw hostname or event-type.
+* `[type]` (e.g., byte indicating hostname vs. event-type)
+* `[bytes]` variable length bytes for raw hostname or event-type.
 
-A column for this rowkey could be a long with an assigned number, which could be obtained by using an link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#incrementColumnValue%28byte[],%20byte[],%20byte[],%20long%29[HBase counter].
+A column for this rowkey could be a long with an assigned number, which could be obtained
+by using an
++++<a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#incrementColumnValue%28byte[],%20byte[],%20byte[],%20long%29">HBase counter</a>+++.
 
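For illustration, a sketch of obtaining such an assigned number with a counter (assuming an open `Connection connection`; the LOG_TYPES rowkey and column names are made up):
[source,java]
----
// Sketch: atomically assign the next numeric id for a new log type.
Table logTypes = connection.getTable(TableName.valueOf("LOG_TYPES"));
long assignedId = logTypes.incrementColumnValue(
    Bytes.toBytes("id_counter"),   // rowkey holding the counter
    Bytes.toBytes("d"),            // column family
    Bytes.toBytes("next_id"),      // qualifier
    1L);                           // amount to increment by
----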
 So the resulting composite rowkey would be:
 
@@ -739,7 +802,9 @@ In either the Hash or Numeric substitution approach, the raw values for hostname
 
 This effectively is the OpenTSDB approach.
 What OpenTSDB does is re-write data and pack rows into columns for certain time-periods.
-For a detailed explanation, see: link:http://opentsdb.net/schema.html, and link:http://www.cloudera.com/content/cloudera/en/resources/library/hbasecon/video-hbasecon-2012-lessons-learned-from-opentsdb.html[Lessons Learned from OpenTSDB] from HBaseCon2012.
+For a detailed explanation, see: http://opentsdb.net/schema.html, and
++++<a href="http://www.cloudera.com/content/cloudera/en/resources/library/hbasecon/video-hbasecon-2012-lessons-learned-from-opentsdb.html">Lessons Learned from OpenTSDB</a>+++
+from HBaseCon2012.
 
 But this is how the general concept works: data is ingested, for example, in this manner...
 
@@ -784,7 +849,7 @@ Assuming that the combination of customer number and sales order uniquely identi
 [customer number][order number]
 ----
 
-for a ORDER table.
+for an ORDER table.
 However, there are more design decisions to make: are the _raw_ values the best choices for rowkeys?
 
 The same design questions in the Log Data use-case confront us here.
@@ -842,14 +907,14 @@ The ORDER table's rowkey was described above: <<schema.casestudies.custorder,sch
 
 The SHIPPING_LOCATION's composite rowkey would be something like this:
 
-* [order-rowkey]
-* [shipping location number] (e.g., 1st location, 2nd, etc.)
+* `[order-rowkey]`
+* `[shipping location number]` (e.g., 1st location, 2nd, etc.)
 
 The LINE_ITEM table's composite rowkey would be something like this:
 
-* [order-rowkey]
-* [shipping location number] (e.g., 1st location, 2nd, etc.)
-* [line item number] (e.g., 1st lineitem, 2nd, etc.)
+* `[order-rowkey]`
+* `[shipping location number]` (e.g., 1st location, 2nd, etc.)
+* `[line item number]` (e.g., 1st lineitem, 2nd, etc.)
 
 Such a normalized model is likely to be the approach with an RDBMS, but that's not your only option with HBase.
 The cons of such an approach is that to retrieve information about any Order, you will need:
@@ -867,21 +932,21 @@ With this approach, there would exist a single table ORDER that would contain
 
 The Order rowkey was described above: <<schema.casestudies.custorder,schema.casestudies.custorder>>
 
-* [order-rowkey]
-* [ORDER record type]
+* `[order-rowkey]`
+* `[ORDER record type]`
 
 The ShippingLocation composite rowkey would be something like this:
 
-* [order-rowkey]
-* [SHIPPING record type]
-* [shipping location number] (e.g., 1st location, 2nd, etc.)
+* `[order-rowkey]`
+* `[SHIPPING record type]`
+* `[shipping location number]` (e.g., 1st location, 2nd, etc.)
 
 The LineItem composite rowkey would be something like this:
 
-* [order-rowkey]
-* [LINE record type]
-* [shipping location number] (e.g., 1st location, 2nd, etc.)
-* [line item number] (e.g., 1st lineitem, 2nd, etc.)
+* `[order-rowkey]`
+* `[LINE record type]`
+* `[shipping location number]` (e.g., 1st location, 2nd, etc.)
+* `[line item number]` (e.g., 1st lineitem, 2nd, etc.)
 
 [[schema.casestudies.custorder.obj.denorm]]
 ===== Denormalized
@@ -890,9 +955,9 @@ A variant of the Single Table With Record Types approach is to denormalize and f
 
 The LineItem composite rowkey would be something like this:
 
-* [order-rowkey]
-* [LINE record type]
-* [line item number] (e.g., 1st lineitem, 2nd, etc., care must be taken that there are unique across the entire order)
+* `[order-rowkey]`
+* `[LINE record type]`
+* `[line item number]` (e.g., 1st lineitem, 2nd, etc., care must be taken that these are unique across the entire order)
 
 and the LineItem columns would be something like this:
 
@@ -915,9 +980,9 @@ For example, the ORDER table's rowkey was described above: <<schema.casestudies.
 
 There are many options here: JSON, XML, Java Serialization, Avro, Hadoop Writables, etc.
 All of them are variants of the same approach: encode the object graph to a byte-array.
-Care should be taken with this approach to ensure backward compatibilty in case the object model changes such that older persisted structures can still be read back out of HBase.
+Care should be taken with this approach to ensure backward compatibility in case the object model changes such that older persisted structures can still be read back out of HBase.
 
-Pros are being able to manage complex object graphs with minimal I/O (e.g., a single HBase Get per Order in this example), but the cons include the aforementioned warning about backward compatiblity of serialization, language dependencies of serialization (e.g., Java Serialization only works with Java clients), the fact that you have to deserialize the entire object to get any piece of information inside the BLOB, and the difficulty in getting frameworks like Hive to work with custom objects like this.
+Pros are being able to manage complex object graphs with minimal I/O (e.g., a single HBase Get per Order in this example), but the cons include the aforementioned warning about backward compatibility of serialization, language dependencies of serialization (e.g., Java Serialization only works with Java clients), the fact that you have to deserialize the entire object to get any piece of information inside the BLOB, and the difficulty in getting frameworks like Hive to work with custom objects like this.
 
 [[schema.smackdown]]
 === Case Study - "Tall/Wide/Middle" Schema Design Smackdown
@@ -929,7 +994,7 @@ These are general guidelines and not laws - each application must consider its o
 ==== Rows vs. Versions
 
 A common question is whether one should prefer rows or HBase's built-in-versioning.
-The context is typically where there are "a lot" of versions of a row to be retained (e.g., where it is significantly above the HBase default of 1 max versions). The rows-approach would require storing a timestamp in some portion of the rowkey so that they would not overwite with each successive update.
+The context is typically where there are "a lot" of versions of a row to be retained (e.g., where it is significantly above the HBase default of 1 max versions). The rows-approach would require storing a timestamp in some portion of the rowkey so that they would not overwrite with each successive update.
 
 Preference: Rows (generally speaking).
 
@@ -1028,14 +1093,14 @@ The tl;dr version is that you should probably go with one row per user+value, an
 
 Your two options mirror a common question people have when designing HBase schemas: should I go "tall" or "wide"? Your first schema is "tall": each row represents one value for one user, and so there are many rows in the table for each user; the row key is user + valueid, and there would be (presumably) a single column qualifier that means "the value". This is great if you want to scan over rows in sorted order by row key (thus my question above, about whether these ids are sorted correctly). You can start a scan at any user+valueid, read the next 30, and be done.
 What you're giving up is the ability to have transactional guarantees around all the rows for one user, but it doesn't sound like you need that.
-Doing it this way is generally recommended (see here link:http://hbase.apache.org/book.html#schema.smackdown).
+Doing it this way is generally recommended (see here http://hbase.apache.org/book.html#schema.smackdown).
 
 Your second option is "wide": you store a bunch of values in one row, using different qualifiers (where the qualifier is the valueid). The simple way to do that would be to just store ALL values for one user in a single row.
 I'm guessing you jumped to the "paginated" version because you're assuming that storing millions of columns in a single row would be bad for performance, which may or may not be true; as long as you're not trying to do too much in a single request, or do things like scanning over and returning all of the cells in the row, it shouldn't be fundamentally worse.
 The client has methods that allow you to get specific slices of columns.
 
 Note that neither case fundamentally uses more disk space than the other; you're just "shifting" part of the identifying information for a value either to the left (into the row key, in option one) or to the right (into the column qualifiers in option 2). Under the covers, every key/value still stores the whole row key, and column family name.
-(If this is a bit confusing, take an hour and watch Lars George's excellent video about understanding HBase schema design: link:http://www.youtube.com/watch?v=_HLoH_PgrLk).
+(If this is a bit confusing, take an hour and watch Lars George's excellent video about understanding HBase schema design: http://www.youtube.com/watch?v=_HLoH_PgrLk).
 
 A manually paginated version has lots more complexities, as you note, like having to keep track of how many things are in each page, re-shuffling if new values are inserted, etc.
 That seems significantly more complex.

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/security.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/security.adoc b/src/main/asciidoc/_chapters/security.adoc
index 101affa..c346435 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -42,7 +42,7 @@ HBase provides mechanisms to secure various components and aspects of HBase and
 == Using Secure HTTP (HTTPS) for the Web UI
 
 A default HBase install uses insecure HTTP connections for Web UIs for the master and region servers.
-To enable secure HTTP (HTTPS) connections instead, set `hadoop.ssl.enabled` to `true` in _hbase-site.xml_.
+To enable secure HTTP (HTTPS) connections instead, set `hbase.ssl.enabled` to `true` in _hbase-site.xml_.
 This does not change the port used by the Web UI.
 To change the port for the web UI for a given HBase component, configure that port's setting in hbase-site.xml.
 These settings are:
@@ -175,6 +175,15 @@ Add the following to the `hbase-site.xml` file for every Thrift gateway:
    You may have  to put the concrete full hostname.
    -->
 </property>
+<!-- Add these if you need to configure a different DNS interface from the default -->
+<property>
+  <name>hbase.thrift.dns.interface</name>
+  <value>default</value>
+</property>
+<property>
+  <name>hbase.thrift.dns.nameserver</name>
+  <value>default</value>
+</property>
 ----
 
 Substitute the appropriate credential and keytab for _$USER_ and _$KEYTAB_ respectively.
@@ -227,39 +236,41 @@ To enable it, do the following.
 
 <<security.gateway.thrift>> describes how to configure the Thrift gateway to authenticate to HBase on the client's behalf, and to access HBase using a proxy user. The limitation of this approach is that after the client is initialized with a particular set of credentials, it cannot change these credentials during the session. The `doAs` feature provides a flexible way to impersonate multiple principals using the same client. This feature was implemented in link:https://issues.apache.org/jira/browse/HBASE-12640[HBASE-12640] for Thrift 1, but is currently not available for Thrift 2.
 
-*To allow proxy users*, add the following to the _hbase-site.xml_ file for every HBase node:
+*To enable the `doAs` feature*, add the following to the _hbase-site.xml_ file for every Thrift gateway:
 
 [source,xml]
 ----
 <property>
-  <name>hadoop.security.authorization</name>
+  <name>hbase.regionserver.thrift.http</name>
   <value>true</value>
 </property>
 <property>
-  <name>hadoop.proxyuser.$USER.groups</name>
-  <value>$GROUPS</value>
-</property>
-<property>
-  <name>hadoop.proxyuser.$USER.hosts</name>
-  <value>$GROUPS</value>
+  <name>hbase.thrift.support.proxyuser</name>
+  <value>true</value>
 </property>
 ----
 
-*To enable the `doAs` feature*, add the following to the _hbase-site.xml_ file for every Thrift gateway:
+*To allow proxy users* when using `doAs` impersonation, add the following to the _hbase-site.xml_ file for every HBase node:
 
 [source,xml]
 ----
 <property>
-  <name>hbase.regionserver.thrift.http</name>
+  <name>hadoop.security.authorization</name>
   <value>true</value>
 </property>
 <property>
-  <name>hbase.thrift.support.proxyuser</name>
-  <value>true/value>
+  <name>hadoop.proxyuser.$USER.groups</name>
+  <value>$GROUPS</value>
+</property>
+<property>
+  <name>hadoop.proxyuser.$USER.hosts</name>
+  <value>$HOSTS</value>
 </property>
 ----
 
-Take a look at the link:https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java[demo client] to get an overall idea of how to use this feature in your client.
+Take a look at the
+link:https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/thrift/HttpDoAsClient.java[demo client]
+to get an overall idea of how to use this feature in your client.
 
 === Client-side Configuration for Secure Operation - REST Gateway
 
@@ -297,6 +308,10 @@ To enable REST gateway Kerberos authentication for client access, add the follow
 [source,xml]
 ----
 <property>
+  <name>hbase.rest.support.proxyuser</name>
+  <value>true</value>
+</property>
+<property>
   <name>hbase.rest.authentication.type</name>
   <value>kerberos</value>
 </property>
@@ -308,12 +323,21 @@ To enable REST gateway Kerberos authentication for client access, add the follow
   <name>hbase.rest.authentication.kerberos.keytab</name>
   <value>$KEYTAB</value>
 </property>
+<!-- Add these if you need to configure a different DNS interface from the default -->
+<property>
+  <name>hbase.rest.dns.interface</name>
+  <value>default</value>
+</property>
+<property>
+  <name>hbase.rest.dns.nameserver</name>
+  <value>default</value>
+</property>
 ----
 
 Substitute the keytab for HTTP for _$KEYTAB_.
 
 HBase REST gateway supports different 'hbase.rest.authentication.type': simple, kerberos.
-You can also implement a custom authentication by implemening Hadoop AuthenticationHandler, then specify the full class name as 'hbase.rest.authentication.type' value.
+You can also implement a custom authentication by implementing Hadoop AuthenticationHandler, then specify the full class name as 'hbase.rest.authentication.type' value.
 For more information, refer to link:http://hadoop.apache.org/docs/stable/hadoop-auth/index.html[SPNEGO HTTP authentication].
 
 [[security.rest.gateway]]
@@ -325,7 +349,7 @@ To the HBase server, all requests are from the REST gateway user.
 The actual users are unknown.
 You can turn on the impersonation support.
 With impersonation, the REST gateway user is a proxy user.
-The HBase server knows the acutal/real user of each request.
+The HBase server knows the actual/real user of each request.
 So it can apply proper authorizations.
 
 To turn on REST gateway impersonation, we need to configure HBase servers (masters and region servers) to allow proxy users; configure REST gateway to enable impersonation.
@@ -504,21 +528,21 @@ This is future work.
 Secure HBase requires secure ZooKeeper and HDFS so that users cannot access and/or modify the metadata and data from under HBase. HBase uses HDFS (or configured file system) to keep its data files as well as write ahead logs (WALs) and other data. HBase uses ZooKeeper to store some metadata for operations (master address, table locks, recovery state, etc).
 
 === Securing ZooKeeper Data
-ZooKeeper has a pluggable authentication mechanism to enable access from clients using different methods. ZooKeeper even allows authenticated and un-authenticated clients at the same time. The access to znodes can be restricted by providing Access Control Lists (ACLs) per znode. An ACL contains two components, the authentication method and the principal. ACLs are NOT enforced hierarchically. See link:https://zookeeper.apache.org/doc/r3.3.6/zookeeperProgrammers.html#sc_ZooKeeperPluggableAuthentication[ZooKeeper Programmers Guide] for details. 
+ZooKeeper has a pluggable authentication mechanism to enable access from clients using different methods. ZooKeeper even allows authenticated and un-authenticated clients at the same time. The access to znodes can be restricted by providing Access Control Lists (ACLs) per znode. An ACL contains two components, the authentication method and the principal. ACLs are NOT enforced hierarchically. See link:https://zookeeper.apache.org/doc/r3.3.6/zookeeperProgrammers.html#sc_ZooKeeperPluggableAuthentication[ZooKeeper Programmers Guide] for details.
 
-HBase daemons authenticate to ZooKeeper via SASL and kerberos (See <<zk.sasl.auth>>). HBase sets up the znode ACLs so that only the HBase user and the configured hbase superuser (`hbase.superuser`) can access and modify the data. In cases where ZooKeeper is used for service discovery or sharing state with the client, the znodes created by HBase will also allow anyone (regardless of authentication) to read these znodes (clusterId, master address, meta location, etc), but only the HBase user can modify them. 
+HBase daemons authenticate to ZooKeeper via SASL and kerberos (See <<zk.sasl.auth>>). HBase sets up the znode ACLs so that only the HBase user and the configured hbase superuser (`hbase.superuser`) can access and modify the data. In cases where ZooKeeper is used for service discovery or sharing state with the client, the znodes created by HBase will also allow anyone (regardless of authentication) to read these znodes (clusterId, master address, meta location, etc), but only the HBase user can modify them.
 
 === Securing File System (HDFS) Data
-All of the data under management is kept under the root directory in the file system (`hbase.rootdir`). Access to the data and WAL files in the filesystem should be restricted so that users cannot bypass the HBase layer, and peek at the underlying data files from the file system. HBase assumes the filesystem used (HDFS or other) enforces permissions hierarchically. If sufficient protection from the file system (both authorization and authentication) is not provided, HBase level authorization control (ACLs, visibility labels, etc) is meaningless since the user can always access the data from the file system. 
+All of the data under management is kept under the root directory in the file system (`hbase.rootdir`). Access to the data and WAL files in the filesystem should be restricted so that users cannot bypass the HBase layer, and peek at the underlying data files from the file system. HBase assumes the filesystem used (HDFS or other) enforces permissions hierarchically. If sufficient protection from the file system (both authorization and authentication) is not provided, HBase level authorization control (ACLs, visibility labels, etc) is meaningless since the user can always access the data from the file system.
 
 HBase enforces the posix-like permissions 700 (`rwx------`) to its root directory. It means that only the HBase user can read or write the files in FS. The default setting can be changed by configuring `hbase.rootdir.perms` in hbase-site.xml. A restart of the active master is needed so that it changes the used permissions. For versions before 1.2.0, you can check whether HBASE-13780 is committed, and if not, you can manually set the permissions for the root directory if needed. Using HDFS, the command would be:
 [source,bash]
 ----
 sudo -u hdfs hadoop fs -chmod 700 /hbase
 ----
-You should change `/hbase` if you are using a different `hbase.rootdir`. 
+You should change `/hbase` if you are using a different `hbase.rootdir`.
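+
+If you prefer to manage this setting through configuration rather than a manual `chmod`, a minimal sketch of the `hbase.rootdir.perms` property mentioned above (the value shown mirrors the enforced default) is:
+
+[source,xml]
+----
+<property>
+  <name>hbase.rootdir.perms</name>
+  <!-- filesystem permissions applied to hbase.rootdir by the active master on startup -->
+  <value>700</value>
+</property>
+----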
 
-In secure mode, SecureBulkLoadEndpoint should be configured and used for properly handing of users files created from MR jobs to the HBase daemons and HBase user. The staging directory in the distributed file system used for bulk load (`hbase.bulkload.staging.dir`, defaults to `/tmp/hbase-staging`) should have (mode 711, or `rwx--x--x`) so that users can access the staging directory created under that parent directory, but cannot do any other operation. See <<hbase.secure.bulkload>> for how to configure SecureBulkLoadEndPoint. 
+In secure mode, SecureBulkLoadEndpoint should be configured and used to properly hand off files created by users' MR jobs to the HBase daemons and HBase user. The staging directory in the distributed file system used for bulk load (`hbase.bulkload.staging.dir`, defaults to `/tmp/hbase-staging`) should have mode 711 (`rwx--x--x`) so that users can access the staging directory created under that parent directory, but cannot do any other operation. See <<hbase.secure.bulkload>> for how to configure SecureBulkLoadEndPoint.
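+
+For example, to apply that mode to the default staging directory on HDFS (adjust the path if you changed `hbase.bulkload.staging.dir`), a command along these lines can be used:
+
+[source,bash]
+----
+# make the bulk load staging directory traversable, but not listable, by other users
+sudo -u hdfs hadoop fs -chmod 711 /tmp/hbase-staging
+----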
 
 == Securing Access To Your Data
 
@@ -1099,7 +1123,7 @@ NOTE: Visibility labels are not currently applied for superusers.
 | Interpretation
 
 | fulltime
-| Allow accesss to users associated with the fulltime label.
+| Allow access to users associated with the fulltime label.
 
 | !public
 | Allow access to users not associated with the public label.
@@ -1314,11 +1338,21 @@ static Table createTableAndWriteDataWithLabels(TableName tableName, String... la
 ----
 ====
 
-<<reading_cells_with_labels>>
+[[reading_cells_with_labels]]
 ==== Reading Cells with Labels
-When you issue a Scan or Get, HBase uses your default set of authorizations to filter out cells that you do not have access to. A superuser can set the default set of authorizations for a given user by using the `set_auths` HBase Shell command or the link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityClient.html#setAuths(org.apache.hadoop.conf.Configuration,%20java.lang.String\[\],%20java.lang.String)[VisibilityClient.setAuths()] method.
 
-You can specify a different authorization during the Scan or Get, by passing the AUTHORIZATIONS option in HBase Shell, or the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setAuthorizations%28org.apache.hadoop.hbase.security.visibility.Authorizations%29[setAuthorizations()] method if you use the API. This authorization will be combined with your default set as an additional filter. It will further filter your results, rather than giving you additional authorization.
+When you issue a Scan or Get, HBase uses your default set of authorizations to
+filter out cells that you do not have access to. A superuser can set the default
+set of authorizations for a given user by using the `set_auths` HBase Shell command
+or the
+link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/visibility/VisibilityClient.html#setAuths(org.apache.hadoop.hbase.client.Connection,%20java.lang.String\[\],%20java.lang.String)[VisibilityClient.setAuths()] method.
+
+You can specify a different authorization during the Scan or Get, by passing the
+AUTHORIZATIONS option in HBase Shell, or the
+link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setAuthorizations%28org.apache.hadoop.hbase.security.visibility.Authorizations%29[setAuthorizations()]
+method if you use the API. This authorization will be combined with your default
+set as an additional filter. It will further filter your results, rather than
+giving you additional authorization.
 
 .HBase Shell
 ====
@@ -1564,7 +1598,10 @@ Rotate the Master Key::
 === Secure Bulk Load
 
 Bulk loading in secure mode is a bit more involved than normal setup, since the client has to transfer the ownership of the files generated from the MapReduce job to HBase.
-Secure bulk loading is implemented by a coprocessor, named link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.html[SecureBulkLoadEndpoint], which uses a staging directory configured by the configuration property `hbase.bulkload.staging.dir`, which defaults to _/tmp/hbase-staging/_.
+Secure bulk loading is implemented by a coprocessor, named
+link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.html[SecureBulkLoadEndpoint],
+which uses a staging directory configured by the configuration property `hbase.bulkload.staging.dir`, which defaults to
+_/tmp/hbase-staging/_.
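+
+The full setup is covered in <<hbase.secure.bulkload>>; as a hedged sketch only, loading the coprocessor and pointing it at the staging directory looks roughly like this in _hbase-site.xml_:
+
+[source,xml]
+----
+<property>
+  <name>hbase.bulkload.staging.dir</name>
+  <value>/tmp/hbase-staging</value>
+</property>
+<property>
+  <name>hbase.coprocessor.region.classes</name>
+  <!-- append SecureBulkLoadEndpoint to any region coprocessors you already load -->
+  <value>org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
+</property>
+----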
 
 .Secure Bulk Load Algorithm
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/shell.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/shell.adoc b/src/main/asciidoc/_chapters/shell.adoc
index 237089e..a4237fd 100644
--- a/src/main/asciidoc/_chapters/shell.adoc
+++ b/src/main/asciidoc/_chapters/shell.adoc
@@ -76,7 +76,7 @@ NOTE: Spawning HBase Shell commands in this way is slow, so keep that in mind wh
 
 .Passing Commands to the HBase Shell
 ====
-You can pass commands to the HBase Shell in non-interactive mode (see <<hbasee.shell.noninteractive,hbasee.shell.noninteractive>>) using the `echo` command and the `|` (pipe) operator.
+You can pass commands to the HBase Shell in non-interactive mode (see <<hbase.shell.noninteractive,hbase.shell.noninteractive>>) using the `echo` command and the `|` (pipe) operator.
 Be sure to escape characters in the HBase commands which would otherwise be interpreted by the shell.
 Some debug-level output has been truncated from the example below.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/spark.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/spark.adoc b/src/main/asciidoc/_chapters/spark.adoc
new file mode 100644
index 0000000..37503e9
--- /dev/null
+++ b/src/main/asciidoc/_chapters/spark.adoc
@@ -0,0 +1,451 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+
+[[spark]]
+= HBase and Spark
+:doctype: book
+:numbered:
+:toc: left
+:icons: font
+:experimental:
+
+link:http://spark.apache.org/[Apache Spark] is a software framework that is used
+to process data in memory in a distributed manner, and is replacing MapReduce in
+many use cases.
+
+Spark itself is out of scope of this document; please refer to the Spark site for
+more information on the Spark project and subprojects. This document will focus
+on 4 main interaction points between Spark and HBase. Those interaction points are:
+
+Basic Spark::
+  The ability to have an HBase Connection at any point in your Spark DAG.
+Spark Streaming::
+  The ability to have an HBase Connection at any point in your Spark Streaming
+  application.
+Spark Bulk Load::
+  The ability to write directly to HBase HFiles for bulk insertion into HBase.
+SparkSQL/DataFrames::
+  The ability to write SparkSQL that draws on tables that are represented in HBase.
+
+The following sections will walk through examples of all these interaction points.
+
+== Basic Spark
+
+This section discusses Spark HBase integration at the lowest and simplest levels.
+All the other interaction points are built upon the concepts that will be described
+here.
+
+At the root of all Spark and HBase integration is the HBaseContext. The HBaseContext
+takes in HBase configurations and pushes them to the Spark executors. This allows
+us to have an HBase Connection per Spark Executor in a static location.
+
+For reference, Spark Executors can be on the same nodes as the Region Servers or
+on different nodes; there is no dependence on co-location. Think of every Spark
+Executor as a multi-threaded client application. This allows any Spark Tasks
+running on the executors to access the shared Connection object.
+
+.HBaseContext Usage Example
+====
+
+This example shows how HBaseContext can be used to do a `foreachPartition` on an RDD
+in Scala:
+
+[source, scala]
+----
+val sc = new SparkContext("local", "test")
+val config = new HBaseConfiguration()
+
+...
+
+val hbaseContext = new HBaseContext(sc, config)
+
+rdd.hbaseForeachPartition(hbaseContext, (it, conn) => {
+ val bufferedMutator = conn.getBufferedMutator(TableName.valueOf("t1"))
+ it.foreach((putRecord) => {
+  val put = new Put(putRecord._1)
+  putRecord._2.foreach((putValue) => put.addColumn(putValue._1, putValue._2, putValue._3))
+  bufferedMutator.mutate(put)
+ })
+ bufferedMutator.flush()
+ bufferedMutator.close()
+})
+----
+
+Here is the same example implemented in Java:
+
+[source, java]
+----
+JavaSparkContext jsc = new JavaSparkContext(sparkConf);
+
+try {
+  List<byte[]> list = new ArrayList<>();
+  list.add(Bytes.toBytes("1"));
+  ...
+  list.add(Bytes.toBytes("5"));
+
+  JavaRDD<byte[]> rdd = jsc.parallelize(list);
+  Configuration conf = HBaseConfiguration.create();
+
+  JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
+
+  hbaseContext.foreachPartition(rdd,
+      new VoidFunction<Tuple2<Iterator<byte[]>, Connection>>() {
+   public void call(Tuple2<Iterator<byte[]>, Connection> t)
+        throws Exception {
+    Table table = t._2().getTable(TableName.valueOf(tableName));
+    BufferedMutator mutator = t._2().getBufferedMutator(TableName.valueOf(tableName));
+    while (t._1().hasNext()) {
+      byte[] b = t._1().next();
+      Result r = table.get(new Get(b));
+      if (r.getExists()) {
+       mutator.mutate(new Put(b));
+      }
+    }
+
+    mutator.flush();
+    mutator.close();
+    table.close();
+   }
+  });
+} finally {
+  jsc.stop();
+}
+----
+====
+
+All functionality between Spark and HBase will be supported both in Scala and in
+Java, with the exception of SparkSQL, which will support any language that is
+supported by Spark. For the remainder of this documentation we will focus on
+Scala examples.
+
+The examples above illustrate how to do a `foreachPartition` with a connection. A
+number of other Spark base functions are supported out of the box:
+
+// tag::spark_base_functions[]
+`bulkPut`:: For massively parallel sending of puts to HBase
+`bulkDelete`:: For massively parallel sending of deletes to HBase
+`bulkGet`:: For massively parallel sending of gets to HBase to create a new RDD
+`mapPartition`:: To do a Spark Map function with a Connection object to allow full
+access to HBase
+`hBaseRDD`:: To simplify a distributed scan to create an RDD
+// end::spark_base_functions[]
+
+For examples of all these functionalities, see the HBase-Spark Module.
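+
+As a small illustration of the last of these, below is a hedged sketch of `hBaseRDD` performing a distributed scan, assuming the same imports and `hbaseContext` setup as the examples above:
+
+[source, scala]
+----
+val scan = new Scan()
+scan.setCaching(100)
+
+// distributed scan of table t1; each element is a (rowkey, Result) pair
+val getRdd = hbaseContext.hbaseRDD(TableName.valueOf("t1"), scan)
+
+getRdd.foreach(v => println(Bytes.toString(v._1.get())))
+----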
+
+== Spark Streaming
+http://spark.apache.org/streaming/[Spark Streaming] is a micro-batching stream
+processing framework built on top of Spark. HBase and Spark Streaming make great
+companions in that HBase can provide the following benefits alongside Spark
+Streaming:
+
+* A place to grab reference data or profile data on the fly
+* A place to store counts or aggregates in a way that supports Spark Streaming's
+promise of _only once processing_.
+
+The HBase-Spark module’s integration points with Spark Streaming are similar to
+its normal Spark integration points, in that the following commands are possible
+straight off a Spark Streaming DStream.
+
+include::spark.adoc[tags=spark_base_functions]
+
+.`bulkPut` Example with DStreams
+====
+
+Below is an example of bulkPut with DStreams. It is very close in feel to the RDD
+bulk put.
+
+[source, scala]
+----
+val sc = new SparkContext("local", "test")
+val config = new HBaseConfiguration()
+
+val hbaseContext = new HBaseContext(sc, config)
+val ssc = new StreamingContext(sc, Milliseconds(200))
+
+val rdd1 = ...
+val rdd2 = ...
+
+val queue = mutable.Queue[RDD[(Array[Byte], Array[(Array[Byte],
+    Array[Byte], Array[Byte])])]]()
+
+queue += rdd1
+queue += rdd2
+
+val dStream = ssc.queueStream(queue)
+
+dStream.hbaseBulkPut(
+  hbaseContext,
+  TableName.valueOf(tableName),
+  (putRecord) => {
+   val put = new Put(putRecord._1)
+   putRecord._2.foreach((putValue) => put.addColumn(putValue._1, putValue._2, putValue._3))
+   put
+  })
+----
+
+There are three inputs to the `hbaseBulkPut` function.
+
+. The hbaseContext that carries the configuration broadcast information linking us
+to the HBase Connections in the executors
+. The table name of the table we are putting data into
+. A function that will convert a record in the DStream into an HBase Put object.
+====
+
+== Bulk Load
+
+Spark bulk load follows the MapReduce implementation of bulk load very closely.
+In short, a partitioner partitions based on region splits and
+the row keys are sent to the reducers in order, so that HFiles can be written
+out. In Spark terms, the bulk load is implemented around a
+`repartitionAndSortWithinPartitions` followed by a `foreachPartition`.
+
+The only major difference with the Spark implementation compared to the
+MapReduce implementation is that the column qualifier is included in the shuffle
+ordering process. This was done because the MapReduce bulk load implementation
+would have memory issues with loading rows with a large number of columns, as a
+result of the sorting of those columns being done in the memory of the reducer JVM.
+Instead, that ordering is done in the Spark Shuffle, so there should no longer
+be a limit to the number of columns in a row for bulk loading.
+
+.Bulk Loading Example
+====
+
+The following example shows bulk loading with Spark.
+
+[source, scala]
+----
+val sc = new SparkContext("local", "test")
+val config = new HBaseConfiguration()
+
+val hbaseContext = new HBaseContext(sc, config)
+
+val stagingFolder = ...
+
+rdd.hbaseBulkLoad(TableName.valueOf(tableName),
+  t => {
+   val rowKey = t._1
+   val family:Array[Byte] = t._2(0)._1
+   val qualifier = t._2(0)._2
+   val value = t._2(0)._3
+
+   val keyFamilyQualifier= new KeyFamilyQualifier(rowKey, family, qualifier)
+
+   Seq((keyFamilyQualifier, value)).iterator
+  },
+  stagingFolder.getPath)
+
+val load = new LoadIncrementalHFiles(config)
+load.doBulkLoad(new Path(stagingFolder.getPath),
+  conn.getAdmin, table, conn.getRegionLocator(TableName.valueOf(tableName)))
+----
+====
+
+The `hbaseBulkLoad` function takes three required parameters:
+
+. The table name of the table we intend to bulk load into
+
+. A function that will convert a record in the RDD to a tuple key value pair, with
+the tuple key being a KeyFamilyQualifier object and the value being the cell value.
+The KeyFamilyQualifier object will hold the RowKey, Column Family, and Column Qualifier.
+The shuffle will partition on the RowKey but will sort by all three values.
+
+. The temporary path for the HFile to be written out to
+
+Following the Spark bulk load command, use HBase's LoadIncrementalHFiles object
+to load the newly created HFiles into HBase.
+
+.Additional Parameters for Bulk Loading with Spark
+
+You can set the following attributes with additional parameter options on `hbaseBulkLoad`.
+
+* Max file size of the HFiles
+* A flag to exclude HFiles from compactions
+* Column Family settings for compression, bloomType, blockSize, and dataBlockEncoding
+
+.Using Additional Parameters
+====
+
+[source, scala]
+----
+val sc = new SparkContext("local", "test")
+val config = new HBaseConfiguration()
+
+val hbaseContext = new HBaseContext(sc, config)
+
+val stagingFolder = ...
+
+val familyHBaseWriterOptions = new java.util.HashMap[Array[Byte], FamilyHFileWriteOptions]
+val f1Options = new FamilyHFileWriteOptions("GZ", "ROW", 128, "PREFIX")
+
+familyHBaseWriterOptions.put(Bytes.toBytes("columnFamily1"), f1Options)
+
+rdd.hbaseBulkLoad(TableName.valueOf(tableName),
+  t => {
+   val rowKey = t._1
+   val family:Array[Byte] = t._2(0)._1
+   val qualifier = t._2(0)._2
+   val value = t._2(0)._3
+
+   val keyFamilyQualifier= new KeyFamilyQualifier(rowKey, family, qualifier)
+
+   Seq((keyFamilyQualifier, value)).iterator
+  },
+  stagingFolder.getPath,
+  familyHBaseWriterOptions,
+  compactionExclude = false,
+  HConstants.DEFAULT_MAX_FILE_SIZE)
+
+val load = new LoadIncrementalHFiles(config)
+load.doBulkLoad(new Path(stagingFolder.getPath),
+  conn.getAdmin, table, conn.getRegionLocator(TableName.valueOf(tableName)))
+----
+====
+
+== SparkSQL/DataFrames
+
+http://spark.apache.org/sql/[SparkSQL] is a subproject of Spark that supports
+SQL that will compute down to a Spark DAG. In addition, SparkSQL is a heavy user
+of DataFrames. DataFrames are like RDDs with schema information.
+
+The HBase-Spark module includes support for Spark SQL and DataFrames, which allows
+you to write SparkSQL directly on HBase tables. In addition, the HBase-Spark module
+will push down query filtering logic to HBase.
+
+=== Predicate Push Down
+
+There are two examples of predicate push down in the HBase-Spark implementation.
+The first example shows the push down of filtering logic on the RowKey. HBase-Spark
+will reduce the filters on RowKeys down to a set of Get and/or Scan commands.
+
+NOTE: The Scans are distributed scans, rather than a single client scan operation.
+
+If the query looks something like the following, the logic will push down and get
+the rows through 3 Gets and 0 Scans. We can do gets because all the operations
+are `equal` operations.
+
+[source,sql]
+----
+SELECT
+  KEY_FIELD,
+  B_FIELD,
+  A_FIELD
+FROM hbaseTmp
+WHERE (KEY_FIELD = 'get1' or KEY_FIELD = 'get2' or KEY_FIELD = 'get3')
+----
+
+Now let's look at an example where we will end up doing two scans on HBase.
+
+[source, sql]
+----
+SELECT
+  KEY_FIELD,
+  B_FIELD,
+  A_FIELD
+FROM hbaseTmp
+WHERE KEY_FIELD < 'get2' or KEY_FIELD > 'get3'
+----
+
+In this example we will get 0 Gets and 2 Scans. One scan will load everything
+from the first row in the table until `get2` and the second scan will get
+everything from `get3` until the last row in the table.
+
+The next query is a good example of having a good deal of range checks. However,
+the ranges overlap, so the code is smart enough to get the data
+in a single scan that encompasses all the data asked for by the query.
+
+[source, sql]
+----
+SELECT
+  KEY_FIELD,
+  B_FIELD,
+  A_FIELD
+FROM hbaseTmp
+WHERE
+  (KEY_FIELD >= 'get1' and KEY_FIELD <= 'get3') or
+  (KEY_FIELD > 'get3' and KEY_FIELD <= 'get5')
+----
+
+The second example of push down functionality offered by the HBase-Spark module
+is the ability to push down filter logic for column and cell fields. Just like
+the RowKey logic, all query logic will be consolidated into the minimum number
+of range checks and equal checks, by sending a Filter object along with the Scan
+that carries information about the consolidated push down predicates.
+
+.SparkSQL Code Example
+====
+This example shows how we can interact with HBase with SQL.
+
+[source, scala]
+----
+val sc = new SparkContext("local", "test")
+val config = new HBaseConfiguration()
+
+new HBaseContext(sc, config)
+val sqlContext = new SQLContext(sc)
+
+val df = sqlContext.load("org.apache.hadoop.hbase.spark",
+  Map("hbase.columns.mapping" ->
+   "KEY_FIELD STRING :key, A_FIELD STRING c:a, B_FIELD STRING c:b",
+   "hbase.table" -> "t1"))
+
+df.registerTempTable("hbaseTmp")
+
+val results = sqlContext.sql("SELECT KEY_FIELD, B_FIELD FROM hbaseTmp " +
+  "WHERE " +
+  "(KEY_FIELD = 'get1' and B_FIELD < '3') or " +
+  "(KEY_FIELD >= 'get3' and B_FIELD = '8')").take(5)
+----
+
+There are three major parts of this example that deserve explanation.
+
+The sqlContext.load function::
+  In the sqlContext.load function we see two
+  parameters. The first of these parameters points Spark to the HBase
+  DefaultSource class that will act as the interface between SparkSQL and HBase.
+
+A map of key value pairs::
+  In this example we have two keys in our map, `hbase.columns.mapping` and
+  `hbase.table`. The `hbase.table` directs SparkSQL to use the given HBase table.
+  The `hbase.columns.mapping` key gives us the logic to translate HBase columns to
+  SparkSQL columns.
++
+The `hbase.columns.mapping` value is a string in the following format:
++
+[source, scala]
+----
+(SparkSQL.ColumnName) (SparkSQL.ColumnType) (HBase.ColumnFamily):(HBase.Qualifier)
+----
++
+In the example below we see the definition of three fields. Because KEY_FIELD has
+no ColumnFamily, it is the RowKey.
++
+----
+KEY_FIELD STRING :key, A_FIELD STRING c:a, B_FIELD STRING c:b
+----
+
+The registerTempTable function::
+  This is a SparkSQL function that registers the DataFrame as a temporary table
+  named "hbaseTmp", allowing us to query our HBase table directly with SQL.
+
+The last major point to note in the example is the `sqlContext.sql` function, which
+allows the user to ask their questions in SQL; these are then pushed down to the
+DefaultSource code in the HBase-Spark module. The result of this command will be
+a DataFrame with the schema of KEY_FIELD and B_FIELD.
+====
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/thrift_filter_language.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/thrift_filter_language.adoc b/src/main/asciidoc/_chapters/thrift_filter_language.adoc
index 744cec6..da36cea 100644
--- a/src/main/asciidoc/_chapters/thrift_filter_language.adoc
+++ b/src/main/asciidoc/_chapters/thrift_filter_language.adoc
@@ -31,7 +31,6 @@
 Apache link:http://thrift.apache.org/[Thrift] is a cross-platform, cross-language development framework.
 HBase includes a Thrift API and filter language.
 The Thrift API relies on client and server processes.
-Documentation about the HBase Thrift API is located at http://wiki.apache.org/hadoop/Hbase/ThriftApi.
 
 You can configure Thrift for secure authentication at the server and client side, by following the procedures in <<security.client.thrift>> and <<security.gateway.thrift>>.
 
@@ -250,7 +249,7 @@ RowFilter::
 
 Family Filter::
   This filter takes a compare operator and a comparator.
-  It compares each qualifier name with the comparator using the compare operator and if the comparison returns true, it returns all the key-values in that column.
+  It compares each column family name with the comparator using the compare operator and if the comparison returns true, it returns all the Cells in that column family.
 
 QualifierFilter::
   This filter takes a compare operator and a comparator.

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/tracing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/tracing.adoc b/src/main/asciidoc/_chapters/tracing.adoc
index 6bb8065..0cddd8a 100644
--- a/src/main/asciidoc/_chapters/tracing.adoc
+++ b/src/main/asciidoc/_chapters/tracing.adoc
@@ -30,13 +30,13 @@
 :icons: font
 :experimental:
 
-link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449] added support for tracing requests through HBase, using the open source tracing library, link:http://github.com/cloudera/htrace[HTrace].
-Setting up tracing is quite simple, however it currently requires some very minor changes to your client code (it would not be very difficult to remove this requirement). 
+link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449] added support for tracing requests through HBase, using the open source tracing library, link:http://htrace.incubator.apache.org/[HTrace].
+Setting up tracing is quite simple; however, it currently requires some very minor changes to your client code (it would not be very difficult to remove this requirement).
 
 [[tracing.spanreceivers]]
 === SpanReceivers
 
-The tracing system works by collecting information in structs called 'Spans'. It is up to you to choose how you want to receive this information by implementing the `SpanReceiver` interface, which defines one method: 
+The tracing system works by collecting information in structures called 'Spans'. It is up to you to choose how you want to receive this information by implementing the `SpanReceiver` interface, which defines one method:
 
 [source]
 ----
@@ -45,68 +45,55 @@ public void receiveSpan(Span span);
 ----
 
 This method serves as a callback whenever a span is completed.
-HTrace allows you to use as many SpanReceivers as you want so you can easily send trace information to multiple destinations. 
+HTrace allows you to use as many SpanReceivers as you want so you can easily send trace information to multiple destinations.
 
-Configure what SpanReceivers you'd like to us by putting a comma separated list of the fully-qualified class name of classes implementing `SpanReceiver` in _hbase-site.xml_ property: `hbase.trace.spanreceiver.classes`. 
+Configure what SpanReceivers you'd like to use by putting a comma-separated list of the fully-qualified class names of classes implementing `SpanReceiver` in the _hbase-site.xml_ property `hbase.trace.spanreceiver.classes`.
 
 HTrace includes a `LocalFileSpanReceiver` that writes all span information to local files in a JSON-based format.
-The `LocalFileSpanReceiver` looks in _hbase-site.xml_      for a `hbase.local-file-span-receiver.path` property with a value describing the name of the file to which nodes should write their span information. 
+The `LocalFileSpanReceiver` looks in _hbase-site.xml_ for a `hbase.htrace.local-file-span-receiver.path` property with a value describing the name of the file to which nodes should write their span information.
 
 [source]
 ----
 
 <property>
   <name>hbase.trace.spanreceiver.classes</name>
-  <value>org.htrace.impl.LocalFileSpanReceiver</value>
+  <value>org.apache.htrace.impl.LocalFileSpanReceiver</value>
 </property>
 <property>
-  <name>hbase.local-file-span-receiver.path</name>
+  <name>hbase.htrace.local-file-span-receiver.path</name>
   <value>/var/log/hbase/htrace.out</value>
 </property>
 ----
 
-HTrace also provides `ZipkinSpanReceiver` which converts spans to link:http://github.com/twitter/zipkin[Zipkin] span format and send them to Zipkin server.
-In order to use this span receiver, you need to install the jar of htrace-zipkin to your HBase's classpath on all of the nodes in your cluster. 
+HTrace also provides `ZipkinSpanReceiver`, which converts spans to link:http://github.com/twitter/zipkin[Zipkin] span format and sends them to a Zipkin server. In order to use this span receiver, you need to install the htrace-zipkin jar into HBase's classpath on all of the nodes in your cluster.
 
-_htrace-zipkin_ is published to the maven central repository.
-You could get the latest version from there or just build it locally and then copy it out to all nodes, change your config to use zipkin receiver, distribute the new configuration and then (rolling) restart. 
+_htrace-zipkin_ is published to the link:http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.apache.htrace%22%20AND%20a%3A%22htrace-zipkin%22[Maven central repository]. You could get the latest version from there or just build it locally (see the link:http://htrace.incubator.apache.org/[HTrace] homepage for information on how to do this) and then copy it out to all nodes.
 
-Here is the example of manual setup procedure. 
-
-----
-
-$ git clone https://github.com/cloudera/htrace
-$ cd htrace/htrace-zipkin
-$ mvn compile assembly:single
-$ cp target/htrace-zipkin-*-jar-with-dependencies.jar $HBASE_HOME/lib/
-  # copy jar to all nodes...
-----
-
-The `ZipkinSpanReceiver` looks in _hbase-site.xml_      for a `hbase.zipkin.collector-hostname` and `hbase.zipkin.collector-port` property with a value describing the Zipkin collector server to which span information are sent. 
+`ZipkinSpanReceiver` looks in _hbase-site.xml_ for properties called `hbase.htrace.zipkin.collector-hostname` and `hbase.htrace.zipkin.collector-port`, with values describing the Zipkin collector server to which span information is sent.
 
 [source,xml]
 ----
 
 <property>
   <name>hbase.trace.spanreceiver.classes</name>
-  <value>org.htrace.impl.ZipkinSpanReceiver</value>
-</property> 
+  <value>org.apache.htrace.impl.ZipkinSpanReceiver</value>
+</property>
 <property>
-  <name>hbase.zipkin.collector-hostname</name>
+  <name>hbase.htrace.zipkin.collector-hostname</name>
   <value>localhost</value>
-</property> 
+</property>
 <property>
-  <name>hbase.zipkin.collector-port</name>
+  <name>hbase.htrace.zipkin.collector-port</name>
   <value>9410</value>
 </property>
 ----
 
-If you do not want to use the included span receivers, you are encouraged to write your own receiver (take a look at `LocalFileSpanReceiver` for an example). If you think others would benefit from your receiver, file a JIRA or send a pull request to link:http://github.com/cloudera/htrace[HTrace]. 
+If you do not want to use the included span receivers, you are encouraged to write your own receiver (take a look at `LocalFileSpanReceiver` for an example). If you think others would benefit from your receiver, file a JIRA with the HTrace project.
 
 [[tracing.client.modifications]]
 == Client Modifications
 
-In order to turn on tracing in your client code, you must initialize the module sending spans to receiver once per client process. 
+In order to turn on tracing in your client code, you must initialize the module that sends spans to the receiver once per client process.
 
 [source,java]
 ----
@@ -120,7 +107,7 @@ private SpanReceiverHost spanReceiverHost;
 ----
 
 Then you simply start tracing span before requests you think are interesting, and close it when the request is done.
-For example, if you wanted to trace all of your get operations, you change this: 
+For example, if you wanted to trace all of your get operations, you change this:
 
 [source,java]
 ----
@@ -131,7 +118,7 @@ Get get = new Get(Bytes.toBytes("r1"));
 Result res = table.get(get);
 ----
 
-into: 
+into:
 
 [source,java]
 ----
@@ -146,7 +133,7 @@ try {
 }
 ----
 
-If you wanted to trace half of your 'get' operations, you would pass in: 
+If you wanted to trace half of your 'get' operations, you would pass in:
 
 [source,java]
 ----
@@ -155,13 +142,12 @@ new ProbabilitySampler(0.5)
 ----
 
 in lieu of `Sampler.ALWAYS` to `Trace.startSpan()`.
-See the HTrace _README_ for more information on Samplers. 
+See the HTrace _README_ for more information on Samplers.
 
 [[tracing.client.shell]]
 == Tracing from HBase Shell
 
-You can use +trace+ command for tracing requests from HBase Shell. +trace 'start'+ command turns on tracing and +trace
-        'stop'+ command turns off tracing. 
+You can use the `trace` command to trace requests from the HBase Shell. The `trace 'start'` command turns on tracing and the `trace 'stop'` command turns off tracing.
 
 [source]
 ----
@@ -171,9 +157,8 @@ hbase(main):002:0> put 'test', 'row1', 'f:', 'val1'   # traced commands
 hbase(main):003:0> trace 'stop'
 ----
 
-+trace 'start'+ and +trace 'stop'+ always returns boolean value representing if or not there is ongoing tracing.
-As a result, +trace
-        'stop'+ returns false on suceess. +trace 'status'+ just returns if or not tracing is turned on. 
+`trace 'start'` and `trace 'stop'` always return a boolean value representing whether or not there is ongoing tracing.
+As a result, `trace 'stop'` returns false on success. `trace 'status'` just returns whether or not tracing is turned on.
 
 [source]
 ----

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index 1776c9e..e372760 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -89,11 +89,11 @@ Additionally, each DataNode server will also have a TaskTracker/NodeManager log
 [[rpc.logging]]
 ==== Enabling RPC-level logging
 
-Enabling the RPC-level logging on a RegionServer can often given insight on timings at the server.
+Enabling the RPC-level logging on a RegionServer can often give insight on timings at the server.
 Once enabled, the amount of log spewed is voluminous.
 It is not recommended that you leave this logging on for more than short bursts of time.
 To enable RPC-level logging, browse to the RegionServer UI and click on _Log Level_.
-Set the log level to `DEBUG` for the package `org.apache.hadoop.ipc` (Thats right, for `hadoop.ipc`, NOT, `hbase.ipc`). Then tail the RegionServers log.
+Set the log level to `DEBUG` for the package `org.apache.hadoop.ipc` (That's right, for `hadoop.ipc`, NOT, `hbase.ipc`). Then tail the RegionServers log.
 Analyze.
 
 To disable, set the logging level back to `INFO` level.
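+
+If you prefer configuration over the UI, a hedged sketch of the equivalent entry in _log4j.properties_ (assuming the stock Log4j setup shipped in _conf/_) is:
+
+[source]
+----
+# RPC-level logging; very verbose, enable only for short bursts
+log4j.logger.org.apache.hadoop.ipc=DEBUG
+----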
@@ -185,7 +185,7 @@ The key points here is to keep all these pauses low.
 CMS pauses are always low, but if your ParNew starts growing, you can see minor GC pauses approach 100ms, exceed 100ms and hit as high at 400ms.
 
 This can be due to the size of the ParNew, which should be relatively small.
-If your ParNew is very large after running HBase for a while, in one example a ParNew was about 150MB, then you might have to constrain the size of ParNew (The larger it is, the longer the collections take but if its too small, objects are promoted to old gen too quickly). In the below we constrain new gen size to 64m.
+If your ParNew is very large after running HBase for a while, in one example a ParNew was about 150MB, then you might have to constrain the size of ParNew (The larger it is, the longer the collections take but if it's too small, objects are promoted to old gen too quickly). In the below we constrain new gen size to 64m.
 
 Add the below line in _hbase-env.sh_:
 [source,bourne]
@@ -443,7 +443,7 @@ java.lang.Thread.State: WAITING (on object monitor)
     at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:146)
 ----
 
-A handler thread that's waiting for stuff to do (like put, delete, scan, etc):
+A handler thread that's waiting for stuff to do (like put, delete, scan, etc.):
 
 [source]
 ----
@@ -559,6 +559,14 @@ You can also tail all the logs at the same time, edit files, etc.
 
 For more information on the HBase client, see <<client,client>>.
 
+=== Missed Scan Results Due To Mismatch Of `hbase.client.scanner.max.result.size` Between Client and Server
+If either the client or server version is lower than 0.98.11/1.0.0 and the server
+has a smaller value for `hbase.client.scanner.max.result.size` than the client, scan
+requests that reach the server's `hbase.client.scanner.max.result.size` are likely
+to miss data. In particular, 0.98.11 defaults `hbase.client.scanner.max.result.size`
+to 2 MB but other versions default to larger values. For this reason, be very careful
+using 0.98.11 servers with any other client version.
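+
+A hedged mitigation sketch: explicitly set the property to the same value in both the client-side and server-side _hbase-site.xml_ (2 MB shown here, matching the 0.98.11 default):
+
+[source,xml]
+----
+<property>
+  <name>hbase.client.scanner.max.result.size</name>
+  <!-- value in bytes; keep client and server values identical -->
+  <value>2097152</value>
+</property>
+----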
+
 [[trouble.client.scantimeout]]
 === ScannerTimeoutException or UnknownScannerException
 
@@ -834,6 +842,31 @@ Two common use-cases for querying HDFS for HBase objects is research the degree
 If there are a large number of StoreFiles for each ColumnFamily it could indicate the need for a major compaction.
 Additionally, after a major compaction if the resulting StoreFile is "small" it could indicate the need for a reduction of ColumnFamilies for the table.
 
+=== Unexpected Filesystem Growth
+
+If you see an unexpected spike in filesystem usage by HBase, two possible culprits
+are snapshots and WALs.
+
+Snapshots::
+  When you create a snapshot, HBase retains everything it needs to recreate the table's
+  state at that time of the snapshot. This includes deleted cells or expired versions.
+  For this reason, your snapshot usage pattern should be well-planned, and you should
+  prune snapshots that you no longer need. Snapshots are stored in `/hbase/.hbase-snapshot`,
+  and archives needed to restore snapshots are stored in
+  `/hbase/archive/<_tablename_>/<_region_>/<_column_family_>/`.
+
+  *Do not* manage snapshots or archives manually via HDFS. HBase provides APIs and
+  HBase Shell commands for managing them. For more information, see <<ops.snapshots>>.
+
+WAL::
+  Write-ahead logs (WALs) are stored in subdirectories of the HBase root directory,
+  typically `/hbase`, depending on their status. WALs still needed for recovery live in
+  `/hbase/WALs/`, already-processed WALs are stored in `/hbase/oldWALs/`, and
+  corrupt WALs are stored in `/hbase/.corrupt/` for examination.
+  If the size of any of these directories is growing, examine the HBase
+  server logs to find the root cause for why WALs are not being processed correctly.
+
+*Do not* manage WALs manually via HDFS.
+
 [[trouble.network]]
 == Network
 
@@ -1037,7 +1070,7 @@ However, if the NotServingRegionException is logged ERROR, then the client ran o
 
 Fix your DNS.
 In versions of Apache HBase before 0.92.x, reverse DNS needs to give same answer as forward lookup.
-See link:https://issues.apache.org/jira/browse/HBASE-3431[HBASE 3431 RegionServer is not using the name given it by the master; double entry in master listing of servers] for gorey details.
+See link:https://issues.apache.org/jira/browse/HBASE-3431[HBASE 3431 RegionServer is not using the name given it by the master; double entry in master listing of servers] for gory details.
 
 [[brand.new.compressor]]
 ==== Logs flooded with '2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Gotbrand-new compressor' messages

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/unit_testing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc b/src/main/asciidoc/_chapters/unit_testing.adoc
index 3f70001..6f13864 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -47,7 +47,7 @@ public class MyHBaseDAO {
         Put put = createPut(obj);
         table.put(put);
     }
-    
+
     private static Put createPut(HBaseTestObj obj) {
         Put put = new Put(Bytes.toBytes(obj.getRowKey()));
         put.add(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1"),
@@ -96,13 +96,13 @@ public class TestMyHbaseDAOData {
 
 These tests ensure that your `createPut` method creates, populates, and returns a `Put` object with expected values.
 Of course, JUnit can do much more than this.
-For an introduction to JUnit, see link:https://github.com/junit-team/junit/wiki/Getting-started. 
+For an introduction to JUnit, see https://github.com/junit-team/junit/wiki/Getting-started.
 
 == Mockito
 
 Mockito is a mocking framework.
 It goes further than JUnit by allowing you to test the interactions between objects without having to replicate the entire environment.
-You can read more about Mockito at its project site, link:https://code.google.com/p/mockito/.
+You can read more about Mockito at its project site, https://code.google.com/p/mockito/.
 
 You can use Mockito to do unit testing on smaller units.
 For instance, you can mock a `org.apache.hadoop.hbase.Server` instance or a `org.apache.hadoop.hbase.master.MasterServices` interface reference rather than a full-blown `org.apache.hadoop.hbase.master.HMaster`.
@@ -133,7 +133,7 @@ public class TestMyHBaseDAO{
   Configuration config = HBaseConfiguration.create();
   @Mock
   Connection connection = ConnectionFactory.createConnection(config);
-  @Mock 
+  @Mock
   private Table table;
   @Captor
   private ArgumentCaptor putCaptor;
@@ -150,7 +150,7 @@ public class TestMyHBaseDAO{
     MyHBaseDAO.insertRecord(table, obj);
     verify(table).put(putCaptor.capture());
     Put put = putCaptor.getValue();
-  
+
     assertEquals(Bytes.toString(put.getRow()), obj.getRowKey());
     assert(put.has(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1")));
     assert(put.has(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2")));
@@ -182,7 +182,7 @@ public class MyReducer extends TableReducer<Text, Text, ImmutableBytesWritable>
    public static final byte[] CF = "CF".getBytes();
    public static final byte[] QUALIFIER = "CQ-1".getBytes();
    public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
-     //bunch of processing to extract data to be inserted, in our case, lets say we are simply
+     //bunch of processing to extract data to be inserted, in our case, let's say we are simply
      //appending all the records we receive from the mapper for this particular
      //key and insert one record into HBase
      StringBuffer data = new StringBuffer();
@@ -197,7 +197,7 @@ public class MyReducer extends TableReducer<Text, Text, ImmutableBytesWritable>
  }
 ----
 
-To test this code, the first step is to add a dependency to MRUnit to your Maven POM file. 
+To test this code, the first step is to add a dependency to MRUnit to your Maven POM file.
 
 [source,xml]
 ----
@@ -225,16 +225,16 @@ public class MyReducerTest {
       MyReducer reducer = new MyReducer();
       reduceDriver = ReduceDriver.newReduceDriver(reducer);
     }
-  
+
    @Test
    public void testHBaseInsert() throws IOException {
-      String strKey = "RowKey-1", strValue = "DATA", strValue1 = "DATA1", 
+      String strKey = "RowKey-1", strValue = "DATA", strValue1 = "DATA1",
 strValue2 = "DATA2";
       List<Text> list = new ArrayList<Text>();
       list.add(new Text(strValue));
       list.add(new Text(strValue1));
       list.add(new Text(strValue2));
-      //since in our case all that the reducer is doing is appending the records that the mapper   
+      //since in our case all that the reducer is doing is appending the records that the mapper
       //sends it, we should get the following back
       String expectedOutput = strValue + strValue1 + strValue2;
      //Setup Input, mimic what mapper would have passed
@@ -242,10 +242,10 @@ strValue2 = "DATA2";
       reduceDriver.withInput(new Text(strKey), list);
       //run the reducer and get its output
       List<Pair<ImmutableBytesWritable, Writable>> result = reduceDriver.run();
-    
+
       //extract key from result and verify
       assertEquals(Bytes.toString(result.get(0).getFirst().get()), strKey);
-    
+
       //extract value for CF/QUALIFIER and verify
       Put a = (Put)result.get(0).getSecond();
       String c = Bytes.toString(a.get(CF, QUALIFIER).get(0).getValue());
@@ -259,7 +259,7 @@ Your MRUnit test verifies that the output is as expected, the Put that is insert
 
 MRUnit includes a MapperDriver to test mapping jobs, and you can use MRUnit to test other operations, including reading from HBase, processing data, or writing to HDFS,
 
-== Integration Testing with a HBase Mini-Cluster
+== Integration Testing with an HBase Mini-Cluster
 
 HBase ships with HBaseTestingUtility, which makes it easy to write integration tests using a [firstterm]_mini-cluster_.
 The first step is to add some dependencies to your Maven POM file.
@@ -283,7 +283,7 @@ Check the versions to be sure they are appropriate.
     <type>test-jar</type>
     <scope>test</scope>
 </dependency>
-        
+
 <dependency>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-hdfs</artifactId>
@@ -309,7 +309,7 @@ public class MyHBaseIntegrationTest {
     private static HBaseTestingUtility utility;
     byte[] CF = "CF".getBytes();
     byte[] QUALIFIER = "CQ-1".getBytes();
-    
+
     @Before
     public void setup() throws Exception {
     	utility = new HBaseTestingUtility();
@@ -343,7 +343,7 @@ This code creates an HBase mini-cluster and starts it.
 Next, it creates a table called `MyTest` with one column family, `CF`.
 A record is inserted, a Get is performed from the same table, and the insertion is verified.
 
-NOTE: Starting the mini-cluster takes about 20-30 seconds, but that should be appropriate for integration testing. 
+NOTE: Starting the mini-cluster takes about 20-30 seconds, but that should be appropriate for integration testing.
 
 To use an HBase mini-cluster on Microsoft Windows, you need to use a Cygwin environment.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6f07973d/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index 6b63833..6327c5a 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -92,7 +92,7 @@ In addition to the usual API versioning considerations HBase has other compatibi
 .Operational Compatibility
 * Metric changes
 * Behavioral changes of services
-* Web page APIs
+* JMX APIs exposed via the `/jmx/` endpoint
 
 .Summary
 * A patch upgrade is a drop-in replacement. Any change that is not Java binary compatible would not be allowed.footnote:[See http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html.]. Downgrading versions within patch releases may not be compatible.
@@ -132,7 +132,7 @@ HBase Client API::
 
 [[hbase.limitetprivate.api]]
 HBase LimitedPrivate API::
-  LimitedPrivate annotation comes with a set of target consumers for the interfaces. Those consumers are coprocessors, phoenix, replication endpoint implemnetations or similar. At this point, HBase only guarantees source and binary compatibility for these interfaces between patch versions.
+  LimitedPrivate annotation comes with a set of target consumers for the interfaces. Those consumers are coprocessors, phoenix, replication endpoint implementations or similar. At this point, HBase only guarantees source and binary compatibility for these interfaces between patch versions.
 
 [[hbase.private.api]]
 HBase Private API::
@@ -158,7 +158,7 @@ When we say two HBase versions are compatible, we mean that the versions are wir
 
 A rolling upgrade is the process by which you update the servers in your cluster a server at a time. You can rolling upgrade across HBase versions if they are binary or wire compatible. See <<hbase.rolling.restart>> for more on what this means. Coarsely, a rolling upgrade is a graceful stop each server, update the software, and then restart. You do this for each server in the cluster. Usually you upgrade the Master first and then the RegionServers. See <<rolling>> for tools that can help use the rolling upgrade process.
 
-For example, in the below, HBase was symlinked to the actual HBase install. On upgrade, before running a rolling restart over the cluser, we changed the symlink to point at the new HBase software version and then ran
+For example, in the below, HBase was symlinked to the actual HBase install. On upgrade, before running a rolling restart over the cluster, we changed the symlink to point at the new HBase software version and then ran
 
 [source,bash]
 ----
@@ -192,9 +192,15 @@ See <<zookeeper.requirements>>.
 .HBase Default Ports Changed
 The ports used by HBase changed. They used to be in the 600XX range. In HBase 1.0.0 they have been moved up out of the ephemeral port range and are 160XX instead (Master web UI was 60010 and is now 16010; the RegionServer web UI was 60030 and is now 16030, etc.). If you want to keep the old port locations, copy the port setting configs from _hbase-default.xml_ into _hbase-site.xml_, change them back to the old values from the HBase 0.98.x era, and ensure you've distributed your configurations before you restart.
 
+.HBase Master Port Binding Change
+In HBase 1.0.x, the HBase Master binds the RegionServer ports as well as the Master
+ports. This behavior is changed from HBase versions prior to 1.0. In HBase 1.1 and 2.0 branches,
+this behavior is reverted to the pre-1.0 behavior of the HBase master not binding the RegionServer
+ports.
+
 [[upgrade1.0.hbase.bucketcache.percentage.in.combinedcache]]
 .hbase.bucketcache.percentage.in.combinedcache configuration has been REMOVED
-You may have made use of this configuration if you are using BucketCache. If NOT using BucketCache, this change does not effect you. Its removal means that your L1 LruBlockCache is now sized using `hfile.block.cache.size` -- i.e. the way you would size the on-heap L1 LruBlockCache if you were NOT doing BucketCache -- and the BucketCache size is not whatever the setting for `hbase.bucketcache.size` is. You may need to adjust configs to get the LruBlockCache and BucketCache sizes set to what they were in 0.98.x and previous. If you did not set this config., its default value was 0.9. If you do nothing, your BucketCache will increase in size by 10%. Your L1 LruBlockCache will become `hfile.block.cache.size` times your java heap size (`hfile.block.cache.size` is a float between 0.0 and 1.0). To read more, see link:https://issues.apache.org/jira/browse/HBASE-11520[HBASE-11520 Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache"].
+You may have made use of this configuration if you are using BucketCache. If NOT using BucketCache, this change does not affect you. Its removal means that your L1 LruBlockCache is now sized using `hfile.block.cache.size` -- i.e. the way you would size the on-heap L1 LruBlockCache if you were NOT doing BucketCache -- and the BucketCache size is not whatever the setting for `hbase.bucketcache.size` is. You may need to adjust configs to get the LruBlockCache and BucketCache sizes set to what they were in 0.98.x and previous. If you did not set this config., its default value was 0.9. If you do nothing, your BucketCache will increase in size by 10%. Your L1 LruBlockCache will become `hfile.block.cache.size` times your java heap size (`hfile.block.cache.size` is a float between 0.0 and 1.0). To read more, see link:https://issues.apache.org/jira/browse/HBASE-11520[HBASE-11520 Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache"].
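+
+As a hedged illustration only (the values below are placeholders, not recommendations), explicitly pinning both sizes in _hbase-site.xml_ avoids surprises from the changed defaults:
+
+[source,xml]
+----
+<property>
+  <name>hfile.block.cache.size</name>
+  <!-- fraction of the java heap given to the on-heap L1 LruBlockCache -->
+  <value>0.2</value>
+</property>
+<property>
+  <name>hbase.bucketcache.size</name>
+  <!-- BucketCache capacity, in megabytes when >= 1.0; illustrative value only -->
+  <value>4096</value>
+</property>
+----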
 
 [[hbase-12068]]
 .If you have your own customer filters.
@@ -204,6 +210,14 @@ See the release notes on the issue link:https://issues.apache.org/jira/browse/HB
 .Distributed Log Replay
 <<distributed.log.replay>> is off by default in HBase 1.0.0. Enabling it can make a big difference improving HBase MTTR. Enable this feature if you are doing a clean stop/start when you are upgrading. You cannot rolling upgrade to this feature (caveat if you are running on a version of HBase in excess of HBase 0.98.4 -- see link:https://issues.apache.org/jira/browse/HBASE-12577[HBASE-12577 Disable distributed log replay by default] for more).
 
+.Mismatch Of `hbase.client.scanner.max.result.size` Between Client and Server
+If either the client or server version is lower than 0.98.11/1.0.0 and the server
+has a smaller value for `hbase.client.scanner.max.result.size` than the client, scan
+requests that reach the server's `hbase.client.scanner.max.result.size` are likely
+to miss data. In particular, 0.98.11 defaults `hbase.client.scanner.max.result.size`
+to 2 MB but other versions default to larger values. For this reason, be very careful
+using 0.98.11 servers with any other client version.
+
 [[upgrade1.0.rolling.upgrade]]
 ==== Rolling upgrade from 0.98.x to HBase 1.0.0
 .From 0.96.x to 1.0.0
@@ -378,7 +392,7 @@ The migration is a one-time event. However, every time your cluster starts, `MET
 
 [[upgrade0.94]]
 === Upgrading from 0.92.x to 0.94.x
-We used to think that 0.92 and 0.94 were interface compatible and that you can do a rolling upgrade between these versions but then we figured that link:https://issues.apache.org/jira/browse/HBASE-5357[HBASE-5357 Use builder pattern in HColumnDescriptor] changed method signatures so rather than return `void` they instead return `HColumnDescriptor`. This will throw`java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V` so 0.92 and 0.94 are NOT compatible. You cannot do a rolling upgrade between them.
+We used to think that 0.92 and 0.94 were interface compatible and that you can do a rolling upgrade between these versions but then we figured that link:https://issues.apache.org/jira/browse/HBASE-5357[HBASE-5357 Use builder pattern in HColumnDescriptor] changed method signatures so rather than return `void` they instead return `HColumnDescriptor`. This will throw `java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V` so 0.92 and 0.94 are NOT compatible. You cannot do a rolling upgrade between them.
 
 [[upgrade0.92]]
 === Upgrading from 0.90.x to 0.92.x


[10/11] hbase git commit: HBASE-14025 update CHANGES.txt for the 1.2 RC.

Posted by bu...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/20c43680/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index f7403a5..21d571d 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,1462 +1,1261 @@
 HBase Change Log
 
-Release Notes - HBase - Version 0.99.2 12/07/2014
+Release Notes - HBase - Version 1.2.0 01/11/2016
 
 ** Sub-task
-    * [HBASE-10671] - Add missing InterfaceAudience annotations for classes in hbase-common and hbase-client modules
-    * [HBASE-11164] - Document and test rolling updates from 0.98 -> 1.0
-    * [HBASE-11915] - Document and test 0.94 -> 1.0.0 update
-    * [HBASE-11964] - Improve spreading replication load from failed regionservers
-    * [HBASE-12075] - Preemptive Fast Fail
-    * [HBASE-12128] - Cache configuration and RpcController selection for Table in Connection
-    * [HBASE-12147] - Porting Online Config Change from 89-fb
-    * [HBASE-12202] - Support DirectByteBuffer usage in HFileBlock
-    * [HBASE-12214] - Visibility Controller in the peer cluster should be able to extract visibility tags from the replicated cells
-    * [HBASE-12288] - Support DirectByteBuffer usage in DataBlock Encoding area
-    * [HBASE-12297] - Support DBB usage in Bloom and HFileIndex area
-    * [HBASE-12313] - Redo the hfile index length optimization so cell-based rather than serialized KV key
-    * [HBASE-12353] - Turn down logging on some spewing unit tests
-    * [HBASE-12354] - Update dependencies in time for 1.0 release
-    * [HBASE-12355] - Update maven plugins
-    * [HBASE-12363] - Improve how KEEP_DELETED_CELLS works with MIN_VERSIONS
-    * [HBASE-12379] - Try surefire 2.18-SNAPSHOT
-    * [HBASE-12400] - Fix refguide so it does connection#getTable rather than new HTable everywhere: first cut!
-    * [HBASE-12404] - Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)
-    * [HBASE-12471] - Task 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99) under src/main/java
-    * [HBASE-12517] - Several HConstant members are assignable
-    * [HBASE-12518] - Task 4 polish. Remove CM#{get,delete}Connection
-    * [HBASE-12519] - Remove tabs used as whitespace
-    * [HBASE-12526] - Remove unused imports
-    * [HBASE-12577] - Disable distributed log replay by default
-
-
+    * [HBASE-12748] - RegionCoprocessorHost.execOperation creates too many iterator objects
+    * [HBASE-13393] - Optimize memstore flushing to avoid writing tag information to hfiles when no tags are present.
+    * [HBASE-13415] - Procedure V2 - Use nonces for double submits from client
+    * [HBASE-13470] - High level Integration test for master DDL operations
+    * [HBASE-13476] - Procedure V2 - Add Replay Order logic for child procedures
+    * [HBASE-13497] - Remove MVCC stamps from HFile when that is safe
+    * [HBASE-13536] - Cleanup the handlers that are no longer being used. 
+    * [HBASE-13563] - Add missing table owner to AC tests.
+    * [HBASE-13569] - correct errors reported with mvn site
+    * [HBASE-13579] - Avoid isCellTTLExpired() for NO-TAG cases
+    * [HBASE-13593] - Quota support for namespace should take snapshot restore and clone into account
+    * [HBASE-13616] - Move ServerShutdownHandler to Pv2
+    * [HBASE-13658] - Improve the test run time for TestAccessController class
+    * [HBASE-13748] - ensure post-commit builds for branch-1 include both java 7 and java 8
+    * [HBASE-13750] - set up jenkins builds that run branch-1 ITs with java 8
+    * [HBASE-13759] - Improve procedure yielding
+    * [HBASE-13832] - Procedure V2: master fail to start due to WALProcedureStore sync failures when HDFS data nodes count is low
+    * [HBASE-13898] - correct additional javadoc failures under java 8
+    * [HBASE-13899] - Jacoco instrumentation fails under jdk8
+    * [HBASE-13912] - add branch-1.2 post-commit builds
+    * [HBASE-13920] - Exclude Java files generated from protobuf from javadoc
+    * [HBASE-13937] - Partially revert HBASE-13172 
+    * [HBASE-13950] - Add a NoopProcedureStore for testing
+    * [HBASE-13963] - avoid leaking jdk.tools
+    * [HBASE-13967] - add jdk profiles for jdk.tools dependency
+    * [HBASE-13973] - Update documentation for 10070 Phase 2 changes
+    * [HBASE-13983] - Doc how the oddball HTable methods getStartKey, getEndKey, etc. will be removed in 2.0.0
+    * [HBASE-13990] - clean up remaining errors for maven site goal
+    * [HBASE-13993] - WALProcedureStore fencing is not effective if new WAL rolls 
+    * [HBASE-14003] - work around jdk8 spec bug in WALPerfEval
+    * [HBASE-14013] - Retry when RegionServerNotYetRunningException rather than go ahead with assign so for sure we don't skip WAL replay
+    * [HBASE-14017] - Procedure v2 - MasterProcedureQueue fix concurrency issue on table queue deletion
+    * [HBASE-14025] - Update CHANGES.txt for 1.2
+    * [HBASE-14086] - remove unused bundled dependencies
+    * [HBASE-14087] - ensure correct ASF policy compliant headers on source/docs
+    * [HBASE-14104] - Add vectorportal.com to NOTICES.txt as src of our logo
+    * [HBASE-14105] - Add shell tests for Snapshot
+    * [HBASE-14147] - REST Support for Namespaces
+    * [HBASE-14176] - Add missing headers to META-INF files
+    * [HBASE-14239] - Branch-1.2 AM can get stuck when meta moves
+    * [HBASE-14274] - Deadlock in region metrics on shutdown: MetricsRegionSourceImpl vs MetricsRegionAggregateSourceImpl
+    * [HBASE-14278] - Fix NPE that is showing up since HBASE-14274 went in
+    * [HBASE-14322] - Master still not using more than it's priority threads
+    * [HBASE-14378] - Get TestAccessController* passing again on branch-1
+    * [HBASE-14401] - Stamp failed appends with sequenceid too.... Cleans up latches
+    * [HBASE-14421] - TestFastFail* are flakey
+    * [HBASE-14428] - Upgrade our surefire-plugin from 2.18 to 2.18.1
+    * [HBASE-14430] - TestHttpServerLifecycle#testStartedServerIsAlive times out
+    * [HBASE-14433] - Set down the client executor core thread count from 256 in tests
+    * [HBASE-14435] - thrift tests don't have test-specific hbase-site.xml so 'BindException: Address already in use' because info port is not turned off
+    * [HBASE-14447] - Spark tests failing: bind exception when putting up info server
+    * [HBASE-14465] - Backport 'Allow rowlock to be reader/write' to branch-1
+    * [HBASE-14472] - TestHCM and TestRegionServerNoMaster fixes
+    * [HBASE-14484] - Follow-on from HBASE-14421, just disable TestFastFail* until someone digs in and fixes it
+    * [HBASE-14513] - TestBucketCache runs obnoxious 1k threads in a unit test
+    * [HBASE-14519] - Purge TestFavoredNodeAssignmentHelper, a test for an abandoned feature that can hang
+    * [HBASE-14535] - Integration test for rpc connection concurrency / deadlock testing 
+    * [HBASE-14539] - Slight improvement of StoreScanner.optimize
+    * [HBASE-14559] - branch-1 test tweeks; disable assert explicit region lands post-restart and up a few handlers
+    * [HBASE-14561] - Disable zombie TestReplicationShell
+    * [HBASE-14563] - Disable zombie TestHFileOutputFormat2
+    * [HBASE-14571] - Purge TestProcessBasedCluster; it does nothing and then fails
+    * [HBASE-14572] - TestImportExport#testImport94Table can't find its src data file
+    * [HBASE-14585] - Clean up TestSnapshotCloneIndependence
+    * [HBASE-14596] - TestCellACLs failing... on1.2 builds
+    * [HBASE-14600] - Make #testWalRollOnLowReplication looser still
+    * [HBASE-14605] - Split fails due to 'No valid credentials' error when SecureBulkLoadEndpoint#start tries to access hdfs
+    * [HBASE-14622] - Purge TestZkLess* tests from branch-1
+    * [HBASE-14631] - Region merge request should be audited with request user through proper scope of doAs() calls to region observer notifications
+    * [HBASE-14637] - Loosen TestChoreService assert AND have TestDataBlockEncoders do less work (and add timeouts)
+    * [HBASE-14646] - Move TestCellACLs from medium to large category
+    * [HBASE-14647] - Disable TestWALProcedureStoreOnHDFS#testWalRollOnLowReplication
+    * [HBASE-14648] - Reenable TestWALProcedureStoreOnHDFS#testWalRollOnLowReplication
+    * [HBASE-14655] - Narrow the scope of doAs() calls to region observer notifications for compaction
+    * [HBASE-14656] - Move TestAssignmentManager from medium to large category
+    * [HBASE-14657] - Remove unneeded API from EncodedSeeker
+    * [HBASE-14698] - Set category timeouts on TestScanner and TestNamespaceAuditor
+    * [HBASE-14702] - TestZKProcedureControllers.testZKCoordinatorControllerWithSingleMemberCohort is a flakey
+    * [HBASE-14709] - Parent change breaks graceful_stop.sh on a cluster
+    * [HBASE-14710] - Add category-based timeouts to MR tests
+    * [HBASE-14720] - Make TestHCM and TestMetaWithReplicas large tests rather than mediums
+    * [HBASE-14794] - Cleanup TestAtomicOperation, TestImportExport, and TestMetaWithReplicas
+    * [HBASE-14798] - NPE reporting server load causes regionserver abort; causes TestAcidGuarantee to fail
+    * [HBASE-14819] - hbase-it tests failing with OOME; permgen
+    * [HBASE-14863] - Add missing test/resources/log4j files in hbase modules
+    * [HBASE-14883] - TestSplitTransactionOnCluster#testFailedSplit flakey
+    * [HBASE-14908] - TestRowCounter flakey especially on branch-1
+    * [HBASE-14909] - NPE testing for RIT
+    * [HBASE-14915] - Hanging test : org.apache.hadoop.hbase.mapreduce.TestImportExport
+    * [HBASE-14947] - WALProcedureStore improvements
+    * [HBASE-15023] - Reenable TestShell and TestStochasticLoadBalancer
 
 ** Bug
-    * [HBASE-7211] - Improve hbase ref guide for the testing part.
-    * [HBASE-9003] - TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
-    * [HBASE-9117] - Remove HTablePool and all HConnection pooling related APIs
-    * [HBASE-9157] - ZKUtil.blockUntilAvailable loops forever with non-recoverable errors
-    * [HBASE-9527] - Review all old api that takes a table name as a byte array and ensure none can pass ns + tablename
-    * [HBASE-10536] - ImportTsv should fail fast if any of the column family passed to the job is not present in the table
-    * [HBASE-10780] - HFilePrettyPrinter#processFile should return immediately if file does not exist
-    * [HBASE-11099] - Two situations where we could open a region with smaller sequence number
-    * [HBASE-11562] - CopyTable should provide an option to shuffle the mapper tasks
-    * [HBASE-11835] - Wrong managenement of non expected calls in the client
-    * [HBASE-12017] - Use Connection.createTable() instead of HTable constructors.
-    * [HBASE-12029] - Use Table and RegionLocator in HTable.getRegionLocations() 
-    * [HBASE-12053] - SecurityBulkLoadEndPoint set 777 permission on input data files 
-    * [HBASE-12072] - Standardize retry handling for master operations
-    * [HBASE-12083] - Deprecate new HBaseAdmin() in favor of Connection.getAdmin()
-    * [HBASE-12142] - Truncate command does not preserve ACLs table
-    * [HBASE-12194] - Make TestEncodedSeekers faster
-    * [HBASE-12219] - Cache more efficiently getAll() and get() in FSTableDescriptors
-    * [HBASE-12226] - TestAccessController#testPermissionList failing on master
-    * [HBASE-12229] - NullPointerException in SnapshotTestingUtils
-    * [HBASE-12234] - Make TestMultithreadedTableMapper a little more stable.
-    * [HBASE-12237] - HBaseZeroCopyByteString#wrap() should not be called in hbase-client code
-    * [HBASE-12238] - A few ugly exceptions on startup
-    * [HBASE-12240] - hbase-daemon.sh should remove pid file if process not found running
-    * [HBASE-12241] - The crash of regionServer when taking deadserver's replication queue breaks replication
-    * [HBASE-12242] - Fix new javadoc warnings in Admin, etc.
-    * [HBASE-12246] - Compilation with hadoop-2.3.x and 2.2.x is broken
-    * [HBASE-12247] - Replace setHTable() with initializeTable() in TableInputFormat.
-    * [HBASE-12248] - broken link in hbase shell help
-    * [HBASE-12252] - IntegrationTestBulkLoad fails with illegal partition error
-    * [HBASE-12257] - TestAssignmentManager unsynchronized access to regionPlans
-    * [HBASE-12258] - Make TestHBaseFsck less flaky
-    * [HBASE-12261] - Add checkstyle to HBase build process
-    * [HBASE-12263] - RegionServer listens on localhost in distributed cluster when DNS is unavailable
-    * [HBASE-12265] - HBase shell 'show_filters' points to internal Facebook URL
-    * [HBASE-12274] - Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception
-    * [HBASE-12277] - Refactor bulkLoad methods in AccessController to its own interface
-    * [HBASE-12278] - Race condition in TestSecureLoadIncrementalHFilesSplitRecovery
-    * [HBASE-12279] - Generated thrift files were generated with the wrong parameters
-    * [HBASE-12281] - ClonedPrefixTreeCell should implement HeapSize
-    * [HBASE-12285] - Builds are failing, possibly because of SUREFIRE-1091
-    * [HBASE-12294] - Can't build the docs after the hbase-checkstyle module was added
-    * [HBASE-12301] - user_permission command does not show global permissions
-    * [HBASE-12302] - VisibilityClient getAuths does not propagate remote service exception correctly
-    * [HBASE-12304] - CellCounter will throw AIOBE when output directory is not specified
-    * [HBASE-12306] - CellCounter output's wrong value for Total Families Across all Rows in output file
-    * [HBASE-12308] - Fix typo in hbase-rest profile name
-    * [HBASE-12312] - Another couple of createTable race conditions
-    * [HBASE-12314] - Add chaos monkey policy to execute two actions concurrently
-    * [HBASE-12315] - Fix 0.98 Tests after checkstyle got parented
-    * [HBASE-12316] - test-patch.sh (Hadoop-QA) outputs the wrong release audit warnings URL
-    * [HBASE-12318] - Add license header to checkstyle xml files
-    * [HBASE-12319] - Inconsistencies during region recovery due to close/open of a region during recovery
-    * [HBASE-12322] - Add clean up command to ITBLL
-    * [HBASE-12327] - MetricsHBaseServerSourceFactory#createContextName has wrong conditions
-    * [HBASE-12329] - Table create with duplicate column family names quietly succeeds
-    * [HBASE-12334] - Handling of DeserializationException causes needless retry on failure
-    * [HBASE-12336] - RegionServer failed to shutdown for NodeFailoverWorker thread
-    * [HBASE-12337] - Import tool fails with NullPointerException if clusterIds is not initialized
-    * [HBASE-12346] - Scan's default auths behavior under Visibility labels
-    * [HBASE-12352] - Add hbase-annotation-tests to runtime classpath so can run hbase it tests.
-    * [HBASE-12356] - Rpc with region replica does not propagate tracing spans
-    * [HBASE-12359] - MulticastPublisher should specify IPv4/v6 protocol family when creating multicast channel
-    * [HBASE-12366] - Add login code to HBase Canary tool.
-    * [HBASE-12372] - [WINDOWS] Enable log4j configuration in hbase.cmd 
-    * [HBASE-12375] - LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
-    * [HBASE-12377] - HBaseAdmin#deleteTable fails when META region is moved around the same timeframe
-    * [HBASE-12384] - TestTags can hang on fast test hosts
-    * [HBASE-12386] - Replication gets stuck following a transient zookeeper error to remote peer cluster
-    * [HBASE-12398] - Region isn't assigned in an extreme race condition
-    * [HBASE-12399] - Master startup race between metrics and RpcServer
-    * [HBASE-12402] - ZKPermissionWatcher race condition in refreshing the cache leaving stale ACLs and causing AccessDenied
-    * [HBASE-12407] - HConnectionKey doesn't contain CUSTOM_CONTROLLER_CONF_KEY in CONNECTION_PROPERTIES 
-    * [HBASE-12414] - Move HFileLink.exists() to base class
-    * [HBASE-12417] - Scan copy constructor does not retain small attribute
-    * [HBASE-12419] - "Partial cell read caused by EOF" ERRORs on replication source during replication
-    * [HBASE-12420] - BucketCache logged startup message is egregiously large
-    * [HBASE-12423] - Use a non-managed Table in TableOutputFormat
-    * [HBASE-12428] - region_mover.rb script is broken if port is not specified
-    * [HBASE-12440] - Region may remain offline on clean startup under certain race condition
-    * [HBASE-12445] - hbase is removing all remaining cells immediately after the cell marked with marker = KeyValue.Type.DeleteColumn via PUT
-    * [HBASE-12448] - Fix rate reporting in compaction progress DEBUG logging
-    * [HBASE-12449] - Use the max timestamp of current or old cell's timestamp in HRegion.append()
-    * [HBASE-12450] - Unbalance chaos monkey might kill all region servers without starting them back
-    * [HBASE-12459] - Use a non-managed Table in mapred.TableOutputFormat
-    * [HBASE-12460] - Moving Chore to hbase-common module.
-    * [HBASE-12461] - FSVisitor logging is excessive
-    * [HBASE-12464] - meta table region assignment stuck in the FAILED_OPEN state due to region server not fully ready to serve
-    * [HBASE-12478] - HBASE-10141 and MIN_VERSIONS are not compatible
-    * [HBASE-12479] - Backport HBASE-11689 (Track meta in transition) to 0.98 and branch-1
-    * [HBASE-12490] - Replace uses of setAutoFlush(boolean, boolean)
-    * [HBASE-12491] - TableMapReduceUtil.findContainingJar() NPE
-    * [HBASE-12495] - Use interfaces in the shell scripts
-    * [HBASE-12513] - Graceful stop script does not restore the balancer state
-    * [HBASE-12514] - Cleanup HFileOutputFormat legacy code
-    * [HBASE-12520] - Add protected getters on TableInputFormatBase
-    * [HBASE-12533] - staging directories are not deleted after secure bulk load
-    * [HBASE-12536] - Reduce the effective scope of GLOBAL CREATE and ADMIN permission
-    * [HBASE-12537] - HBase should log the remote host on replication error
-    * [HBASE-12539] - HFileLinkCleaner logs are uselessly noisy
-    * [HBASE-12541] - Add misc debug logging to hanging tests in TestHCM and TestBaseLoadBalancer
-    * [HBASE-12544] - ops_mgt.xml missing in branch-1
-    * [HBASE-12550] - Check all storefiles are referenced before splitting
-    * [HBASE-12560] - [WINDOWS] Append the classpath from Hadoop to HBase classpath in bin/hbase.cmd
-    * [HBASE-12576] - Add metrics for rolling the HLog if there are too few DN's in the write pipeline
-    * [HBASE-12580] - Zookeeper instantiated even though we might not need it in the shell
-    * [HBASE-12581] - TestCellACLWithMultipleVersions failing since task 5 HBASE-12404 (HBASE-12404 addendum)
-    * [HBASE-12584] - Fix branch-1 failing since task 5 HBASE-12404 (HBASE-12404 addendum)
-    * [HBASE-12595] - Use Connection.getTable() in table.rb
-    * [HBASE-12600] - Remove REPLAY tag dependency in Distributed Replay Mode
-    * [HBASE-12610] - Close connection in TableInputFormatBase
-    * [HBASE-12611] - Create autoCommit() method and remove clearBufferOnFail
-    * [HBASE-12614] - Potentially unclosed StoreFile(s) in DefaultCompactor#compact()
-    * [HBASE-12616] - We lost the IntegrationTestBigLinkedList COMMANDS in recent usage refactoring
-
-
-
+    * [HBASE-5878] - Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
+    * [HBASE-10844] - Coprocessor failure during batchmutation leaves the memstore datastructs in an inconsistent state
+    * [HBASE-11658] - Piped commands to hbase shell should return non-zero if shell command failed.
+    * [HBASE-11830] - TestReplicationThrottler.testThrottling failed on virtual boxes
+    * [HBASE-12413] - Mismatch in the equals and hashcode methods of KeyValue
+    * [HBASE-12865] - WALs may be deleted before they are replicated to peers
+    * [HBASE-13143] - TestCacheOnWrite is flaky and needs a diet
+    * [HBASE-13200] - Improper configuration can leads to endless lease recovery during failover
+    * [HBASE-13217] - Procedure fails due to ZK issue
+    * [HBASE-13250] - chown of ExportSnapshot does not cover all path and files
+    * [HBASE-13312] - SmallScannerCallable does not increment scan metrics
+    * [HBASE-13318] - RpcServer.getListenerAddress should handle when the accept channel is closed
+    * [HBASE-13324] - o.a.h.h.Coprocessor should be LimitedPrivate("Coprocessor")
+    * [HBASE-13325] - Protocol Buffers 2.5 no longer available for download on code.google.com
+    * [HBASE-13329] - ArrayIndexOutOfBoundsException in CellComparator#getMinimumMidpointArray
+    * [HBASE-13330] - Region left unassigned due to AM & SSH each thinking the assignment would be done by the other
+    * [HBASE-13333] - Renew Scanner Lease without advancing the RegionScanner
+    * [HBASE-13337] - Table regions are not assigning back, after restarting all regionservers at once.
+    * [HBASE-13352] - Add hbase.import.version to Import usage.
+    * [HBASE-13377] - Canary may generate false alarm on the first region when there are many delete markers
+    * [HBASE-13411] - Misleading error message when request size quota limit exceeds
+    * [HBASE-13480] - ShortCircuitConnection doesn't short-circuit all calls as expected
+    * [HBASE-13560] - Large compaction queue should steal from small compaction queue when idle
+    * [HBASE-13561] - ITBLL.Verify doesn't actually evaluate counters after job completes
+    * [HBASE-13564] - Master MBeans are not published
+    * [HBASE-13576] - HBCK enhancement: Failure in checking one region should not fail the entire HBCK operation.
+    * [HBASE-13600] - check_compatibility.sh should ignore shaded jars
+    * [HBASE-13601] - Connection leak during log splitting
+    * [HBASE-13604] - bin/hbase mapredcp does not include yammer-metrics jar
+    * [HBASE-13606] - AssignmentManager.assign() is not sync in both path
+    * [HBASE-13607] - TestSplitLogManager.testGetPreviousRecoveryMode consistently failing
+    * [HBASE-13608] - 413 Error with Stargate through Knox, using AD, SPNEGO, and Pre-Auth
+    * [HBASE-13611] - update clover to work for current versions
+    * [HBASE-13612] - TestRegionFavoredNodes doesn't guard against setup failure
+    * [HBASE-13617] - TestReplicaWithCluster.testChangeTable timeout
+    * [HBASE-13618] - ReplicationSource is too eager to remove sinks
+    * [HBASE-13625] - Use HDFS for HFileOutputFormat2 partitioner's path
+    * [HBASE-13626] - ZKTableStateManager logs table state changes at WARN
+    * [HBASE-13632] - Backport HBASE-13368 to branch-1 and 0.98
+    * [HBASE-13635] - Regions stuck in transition because master is incorrectly assumed dead
+    * [HBASE-13638] - Put copy constructor is shallow
+    * [HBASE-13646] - HRegion#execService should not try to build incomplete messages
+    * [HBASE-13647] - Default value for hbase.client.operation.timeout is too high
+    * [HBASE-13649] - CellComparator.compareTimestamps javadoc inconsistent and wrong
+    * [HBASE-13651] - Handle StoreFileScanner FileNotFoundException
+    * [HBASE-13653] - Uninitialized HRegionServer#walFactory may result in NullPointerException at region server startup
+    * [HBASE-13662] - RSRpcService.scan() throws an OutOfOrderScannerNext if the scan has a retriable failure
+    * [HBASE-13663] - HMaster fails to restart 'HMaster: Failed to become active master'
+    * [HBASE-13664] - Use HBase 1.0 interfaces in ConnectionCache
+    * [HBASE-13668] - TestFlushRegionEntry is flaky
+    * [HBASE-13686] - Fail to limit rate in RateLimiter
+    * [HBASE-13694] - CallQueueSize is incorrectly decremented until the response is sent
+    * [HBASE-13700] - Allow Thrift2 HSHA server to have configurable threads
+    * [HBASE-13703] - ReplicateContext should not be a member of ReplicationSource
+    * [HBASE-13704] - Hbase throws OutOfOrderScannerNextException when MultiRowRangeFilter is used
+    * [HBASE-13706] - CoprocessorClassLoader should not exempt Hive classes
+    * [HBASE-13709] - Updates to meta table server columns may be eclipsed
+    * [HBASE-13711] - Provide an API to set min and max versions in HColumnDescriptor
+    * [HBASE-13712] - Backport HBASE-13199 to branch-1
+    * [HBASE-13717] - TestBoundedRegionGroupingProvider#setMembershipDedups need to set HDFS diretory for WAL
+    * [HBASE-13721] - Improve shell scan performances when using LIMIT
+    * [HBASE-13723] - In table.rb scanners are never closed.
+    * [HBASE-13727] - Codehaus repository is out of service
+    * [HBASE-13729] - Old hbase.regionserver.global.memstore.upperLimit and lowerLimit properties are ignored if present
+    * [HBASE-13731] - TestReplicationAdmin should clean up MiniZKCluster resource
+    * [HBASE-13732] - TestHBaseFsck#testParallelWithRetriesHbck fails intermittently
+    * [HBASE-13733] - Failed MiniZooKeeperCluster startup did not shutdown ZK servers
+    * [HBASE-13734] - Improper timestamp checking with VisibilityScanDeleteTracker
+    * [HBASE-13741] - Disable TestRegionObserverInterface#testRecovery and testLegacyRecovery
+    * [HBASE-13744] - TestCorruptedRegionStoreFile is flaky
+    * [HBASE-13746] - list_replicated_tables command is not listing table in hbase shell.
+    * [HBASE-13767] - Allow ZKAclReset to set and not just clear ZK ACLs
+    * [HBASE-13768] - ZooKeeper znodes are bootstrapped with insecure ACLs in a secure configuration
+    * [HBASE-13770] - Programmatic JAAS configuration option for secure zookeeper may be broken
+    * [HBASE-13776] - Setting illegal versions for HColumnDescriptor does not throw IllegalArgumentException 
+    * [HBASE-13777] - Table fragmentation display triggers NPE on master status page
+    * [HBASE-13778] - BoundedByteBufferPool incorrectly increasing runningAverage buffer length
+    * [HBASE-13779] - Calling table.exists() before table.get() end up with an empty Result
+    * [HBASE-13789] - ForeignException should not be sent to the client
+    * [HBASE-13796] - ZKUtil doesn't clean quorum setting properly
+    * [HBASE-13797] - Fix resource leak in HBaseFsck
+    * [HBASE-13800] - TestStore#testDeleteExpiredStoreFiles should create unique data/log directory for each call
+    * [HBASE-13801] - Hadoop src checksum is shown instead of HBase src checksum in master / RS UI
+    * [HBASE-13802] - Procedure V2: Master fails to come up due to rollback of create namespace table
+    * [HBASE-13809] - TestRowTooBig should use HDFS directory for its region directory
+    * [HBASE-13810] - Table is left unclosed in VerifyReplication#Verifier
+    * [HBASE-13811] - Splitting WALs, we are filtering out too many edits -> DATALOSS
+    * [HBASE-13812] - Deleting of last Column Family of a table should not be allowed
+    * [HBASE-13813] - Fix Javadoc warnings in Procedure.java
+    * [HBASE-13821] - WARN if hbase.bucketcache.percentage.in.combinedcache is set
+    * [HBASE-13824] - TestGenerateDelegationToken: Master fails to start in Windows environment
+    * [HBASE-13825] - Use ProtobufUtil#mergeFrom and ProtobufUtil#mergeDelimitedFrom in place of builder methods of same name
+    * [HBASE-13826] - Unable to create table when group acls are appropriately set.
+    * [HBASE-13831] - TestHBaseFsck#testParallelHbck is flaky against hadoop 2.6+
+    * [HBASE-13833] - LoadIncrementalHFile.doBulkLoad(Path,HTable) doesn't handle unmanaged connections when using SecureBulkLoad
+    * [HBASE-13834] - Evict count not properly passed to HeapMemoryTuner.
+    * [HBASE-13835] - KeyValueHeap.current might be in heap when exception happens in pollRealKV
+    * [HBASE-13845] - Expire of one region server carrying meta can bring down the master
+    * [HBASE-13847] - getWriteRequestCount function in HRegionServer uses int variable to return the count.
+    * [HBASE-13849] - Remove restore and clone snapshot from the WebUI
+    * [HBASE-13851] - RpcClientImpl.close() can hang with cancelled replica RPCs
+    * [HBASE-13853] - ITBLL improvements after HBASE-13811
+    * [HBASE-13858] - RS/MasterDumpServlet dumps threads before its “Stacks” header
+    * [HBASE-13861] - BucketCacheTmpl.jamon has wrong bucket free and used labels
+    * [HBASE-13863] - Multi-wal feature breaks reported number and size of HLogs
+    * [HBASE-13865] - Increase the default value for hbase.hregion.memstore.block.multipler from 2 to 4 (part 2)
+    * [HBASE-13873] - LoadTestTool addAuthInfoToConf throws UnsupportedOperationException
+    * [HBASE-13875] - Clock skew between master and region server may render restored region without server address
+    * [HBASE-13877] - Interrupt to flush from TableFlushProcedure causes dataloss in ITBLL
+    * [HBASE-13878] - Set hbase.fs.tmp.dir config in HBaseTestingUtility.java for Phoenix UT to use
+    * [HBASE-13881] - Bug in HTable#incrementColumnValue implementation
+    * [HBASE-13885] - ZK watches leaks during snapshots
+    * [HBASE-13888] - Fix refill bug from HBASE-13686
+    * [HBASE-13889] - Fix hbase-shaded-client artifact so it works on hbase-downstreamer
+    * [HBASE-13892] - Scanner with all results filtered out results in NPE
+    * [HBASE-13895] - DATALOSS: Region assigned before WAL replay when abort
+    * [HBASE-13901] - Error while calling watcher on creating and deleting an HBase table
+    * [HBASE-13904] - TestAssignmentManager.testBalanceOnMasterFailoverScenarioWithOfflineNode failing consistently on branch-1.1
+    * [HBASE-13905] - TestRecoveredEdits.testReplayWorksThoughLotsOfFlushing failing consistently on branch-1.1
+    * [HBASE-13906] - Improve handling of NeedUnmanagedConnectionException
+    * [HBASE-13918] - Fix hbase:namespace description in webUI
+    * [HBASE-13923] - Loaded region coprocessors are not reported in shell status command
+    * [HBASE-13930] - Exclude Findbugs packages from shaded jars
+    * [HBASE-13933] - DBE's seekBefore with tags corrupts the tag's offset information thus leading to incorrect results
+    * [HBASE-13935] - Orphaned namespace table ZK node should not prevent master to start
+    * [HBASE-13938] - Deletes done during the region merge transaction may get eclipsed
+    * [HBASE-13945] - Prefix_Tree seekBefore() does not work correctly
+    * [HBASE-13958] - RESTApiClusterManager calls kill() instead of suspend() and resume()
+    * [HBASE-13959] - Region splitting uses a single thread in most common cases
+    * [HBASE-13966] - Limit column width in table.jsp
+    * [HBASE-13969] - AuthenticationTokenSecretManager is never stopped in RPCServer
+    * [HBASE-13970] - NPE during compaction in trunk
+    * [HBASE-13971] - Flushes stuck since 6 hours on a regionserver.
+    * [HBASE-13974] - TestRateLimiter#testFixedIntervalResourceAvailability may fail
+    * [HBASE-13978] - Variable never assigned in SimpleTotalOrderPartitioner.getPartition() 
+    * [HBASE-13982] - Add info for visibility labels/cell TTLs to ImportTsv
+    * [HBASE-13988] - Add exception handler for lease thread
+    * [HBASE-13989] - Threshold for combined MemStore and BlockCache percentages is not checked
+    * [HBASE-13995] - ServerName is not fully case insensitive
+    * [HBASE-13997] - ScannerCallableWithReplicas cause Infinitely blocking
+    * [HBASE-14000] - Region server failed to report to Master and was stuck in reportForDuty retry loop
+    * [HBASE-14005] - Set permission to .top hfile in LoadIncrementalHFiles
+    * [HBASE-14010] - TestRegionRebalancing.testRebalanceOnRegionServerNumberChange fails; cluster not balanced
+    * [HBASE-14012] - Double Assignment and Dataloss when ServerCrashProcedure runs during Master failover
+    * [HBASE-14021] - Quota table has a wrong description on the UI
+    * [HBASE-14041] - Client MetaCache is cleared if a ThrottlingException is thrown
+    * [HBASE-14042] - Fix FATAL level logging in FSHLog where logged for non fatal exceptions
+    * [HBASE-14050] - NPE in org.apache.hadoop.hbase.ipc.RpcServer$Connection.readAndProcess
+    * [HBASE-14054] - Acknowledged writes may get lost if regionserver clock is set backwards
+    * [HBASE-14089] - Remove unnecessary draw of system entropy from RecoverableZooKeeper
+    * [HBASE-14092] - hbck should run without locks by default and only disable the balancer when necessary
+    * [HBASE-14098] - Allow dropping caches behind compactions
+    * [HBASE-14100] - Fix high priority findbugs warnings
+    * [HBASE-14106] - TestProcedureRecovery is flaky
+    * [HBASE-14109] - NPE if we don't load fully before we are shutdown
+    * [HBASE-14115] - Fix resource leak in HMasterCommandLine
+    * [HBASE-14119] - Show meaningful error messages instead of stack traces in hbase shell commands. Fixing few commands in this jira.
+    * [HBASE-14145] - Allow the Canary in regionserver mode to try all regions on the server, not just one
+    * [HBASE-14146] - Once replication sees an error it slows down forever
+    * [HBASE-14153] - Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
+    * [HBASE-14155] - StackOverflowError in reverse scan
+    * [HBASE-14157] - Interfaces implemented by subclasses should be checked when registering CoprocessorService
+    * [HBASE-14166] - Per-Region metrics can be stale
+    * [HBASE-14168] - Avoid useless retry for DoNotRetryIOException in TableRecordReaderImpl
+    * [HBASE-14178] - regionserver blocks because of waiting for offsetLock
+    * [HBASE-14185] - Incorrect region names logged by MemStoreFlusher
+    * [HBASE-14196] - Thrift server idle connection timeout issue
+    * [HBASE-14201] - hbck should not take a lock unless fixing errors
+    * [HBASE-14205] - RegionCoprocessorHost System.nanoTime() performance bottleneck
+    * [HBASE-14206] - MultiRowRangeFilter returns records whose rowKeys are out of allowed ranges
+    * [HBASE-14209] - TestShell visibility tests failing
+    * [HBASE-14211] - Add more rigorous integration tests of splits
+    * [HBASE-14214] - list_labels shouldn't raise ArgumentError if no labels are defined 
+    * [HBASE-14219] - src tgz no longer builds after HBASE-14085
+    * [HBASE-14224] - Fix coprocessor handling of duplicate classes
+    * [HBASE-14228] - Close BufferedMutator and connection in MultiTableOutputFormat
+    * [HBASE-14229] - Flushing canceled by coprocessor still leads to memstoreSize set down
+    * [HBASE-14234] - Procedure-V2: Exception encountered in WALProcedureStore#rollWriter() should be properly handled
+    * [HBASE-14238] - Branch-1.2 AM issues
+    * [HBASE-14241] - Fix deadlock during cluster shutdown due to concurrent connection close
+    * [HBASE-14243] - Incorrect NOTICE file in hbase-it test-jar
+    * [HBASE-14249] - shaded jar modules create spurious source and test jars with incorrect LICENSE/NOTICE info
+    * [HBASE-14250] - branch-1.1 hbase-server test-jar has incorrect LICENSE
+    * [HBASE-14251] - javadoc jars use LICENSE/NOTICE from primary artifact
+    * [HBASE-14257] - Periodic flusher only handles hbase:meta, not other system tables
+    * [HBASE-14258] - Make region_mover.rb script case insensitive with regard to hostname
+    * [HBASE-14269] - FuzzyRowFilter omits certain rows when multiple fuzzy keys exist
+    * [HBASE-14273] - Rename MVCC to MVCC: From MultiVersionConsistencyControl to MultiVersionConcurrencyControl
+    * [HBASE-14280] - Bulk Upload from HA cluster to remote HA hbase cluster fails
+    * [HBASE-14283] - Reverse scan doesn’t work with HFile inline index/bloom blocks
+    * [HBASE-14287] - Bootstrapping a cluster leaves temporary WAL directory laying around
+    * [HBASE-14291] - NPE On StochasticLoadBalancer Balance Involving RS With No Regions
+    * [HBASE-14302] - TableSnapshotInputFormat should not create back references when restoring snapshot
+    * [HBASE-14307] - Incorrect use of positional read api in HFileBlock
+    * [HBASE-14313] - After a Connection sees ConnectionClosingException it never recovers
+    * [HBASE-14315] - Save one call to KeyValueHeap.peek per row
+    * [HBASE-14317] - Stuck FSHLog: bad disk (HDFS-8960) and can't roll WAL
+    * [HBASE-14327] - TestIOFencing#testFencingAroundCompactionAfterWALSync is flaky
+    * [HBASE-14338] - License notification misspells 'Asciidoctor'
+    * [HBASE-14342] - Recursive call in RegionMergeTransactionImpl.getJournal()
+    * [HBASE-14343] - Fix debug message in SimpleRegionNormalizer for small regions
+    * [HBASE-14347] - Add a switch to DynamicClassLoader to disable it
+    * [HBASE-14354] - Minor improvements for usage of the mlock agent
+    * [HBASE-14359] - HTable#close will hang forever if unchecked error/exception thrown in AsyncProcess#sendMultiAction
+    * [HBASE-14362] - org.apache.hadoop.hbase.master.procedure.TestWALProcedureStoreOnHDFS is super duper flaky
+    * [HBASE-14366] - NPE in case visibility expression is not present in labels table during importtsv run
+    * [HBASE-14367] - Add normalization support to shell
+    * [HBASE-14380] - Correct data gets skipped along with bad data in importTsv bulk load thru TsvImporterTextMapper
+    * [HBASE-14382] - TestInterfaceAudienceAnnotations should hadoop-compt module resources
+    * [HBASE-14384] - Trying to run canary locally with -regionserver option causes exception
+    * [HBASE-14385] - Close the sockets that is missing in connection closure.
+    * [HBASE-14392] - [tests] TestLogRollingNoCluster fails on master from time to time
+    * [HBASE-14393] - Have TestHFileEncryption clean up after itself so it don't go all zombie on us
+    * [HBASE-14394] - Properly close the connection after reading records from table.
+    * [HBASE-14400] - Fix HBase RPC protection documentation
+    * [HBASE-14407] - NotServingRegion: hbase region closed forever
+    * [HBASE-14425] - In Secure Zookeeper cluster superuser will not have sufficient permission if multiple values are configured in "hbase.superuser"
+    * [HBASE-14431] - AsyncRpcClient#removeConnection() never removes connection from connections pool if server fails
+    * [HBASE-14437] - ArithmeticException in ReplicationInterClusterEndpoint
+    * [HBASE-14445] - ExportSnapshot does not honor -chmod option
+    * [HBASE-14449] - Rewrite deadlock prevention for concurrent connection close
+    * [HBASE-14463] - Severe performance downgrade when parallel reading a single key from BucketCache
+    * [HBASE-14469] - Fix some comment, validation and logging around memstore lower limit configuration
+    * [HBASE-14471] - Thrift -  HTTP Error 413 full HEAD if using kerberos authentication
+    * [HBASE-14473] - Compute region locality in parallel
+    * [HBASE-14474] - DeadLock in RpcClientImpl.Connection.close() 
+    * [HBASE-14475] - Region split requests are always audited with "hbase" user rather than request user
+    * [HBASE-14486] - Disable TestRegionPlacement, a flakey test for an unfinished feature
+    * [HBASE-14489] - postScannerFilterRow consumes a lot of CPU
+    * [HBASE-14492] - Increase REST server header buffer size from 8k to 64k
+    * [HBASE-14494] - Wrong usage messages on shell commands
+    * [HBASE-14501] - NPE in replication when HDFS transparent encryption is enabled.
+    * [HBASE-14510] - Can not set coprocessor from Shell after HBASE-14224
+    * [HBASE-14512] - Cache UGI groups
+    * [HBASE-14518] - Give TestScanEarlyTermination the same treatment as 'HBASE-14378 Get TestAccessController* passing again...' -- up priority handlers
+    * [HBASE-14531] - graceful_stop.sh "if [ "$local" ]" condition unexpected behaviour
+    * [HBASE-14536] - Balancer & SSH interfering with each other leading to unavailability
+    * [HBASE-14541] - TestHFileOutputFormat.testMRIncrementalLoadWithSplit failed due to too many splits and few retries
+    * [HBASE-14544] - Allow HConnectionImpl to not refresh the dns on errors
+    * [HBASE-14545] - TestMasterFailover often times out
+    * [HBASE-14555] - Deadlock in MVCC branch-1.2 toString()
+    * [HBASE-14557] - MapReduce WALPlayer issue with NoTagsKeyValue
+    * [HBASE-14577] - HBase shell help for scan and returning a column family has a typo
+    * [HBASE-14578] - URISyntaxException during snapshot restore for table with user defined namespace
+    * [HBASE-14581] - Znode cleanup throws auth exception in secure mode
+    * [HBASE-14591] - Region with reference hfile may split after a forced split in IncreasingToUpperBoundRegionSplitPolicy
+    * [HBASE-14592] - BatchRestartRsAction always restarts 0 RS when running SlowDeterministicMonkey
+    * [HBASE-14594] - Use new DNS API introduced in HADOOP-12437
+    * [HBASE-14597] - Fix Groups cache in multi-threaded env
+    * [HBASE-14598] - ByteBufferOutputStream grows its HeapByteBuffer beyond JVM limitations
+    * [HBASE-14606] - TestSecureLoadIncrementalHFiles tests timed out in trunk build on apache
+    * [HBASE-14608] - testWalRollOnLowReplication has some risk to assert failed after HBASE-14600
+    * [HBASE-14621] - ReplicationLogCleaner gets stuck when a regionserver crashes
+    * [HBASE-14624] - BucketCache.freeBlock is too expensive
+    * [HBASE-14625] - Chaos Monkey should shut down faster
+    * [HBASE-14632] - Region server aborts due to unguarded dereference of Reader
+    * [HBASE-14633] - Try fluid width UI
+    * [HBASE-14634] - Disable flakey TestSnapshotCloneIndependence.testOnlineSnapshotDeleteIndependent
+    * [HBASE-14658] - Allow loading a MonkeyFactory by class name
+    * [HBASE-14661] - RegionServer link is not opening, in HBase Table page.
+    * [HBASE-14663] - HStore::close does not honor config hbase.rs.evictblocksonclose
+    * [HBASE-14667] - HBaseFsck constructors have diverged
+    * [HBASE-14674] - Rpc handler / task monitoring seems to be broken after 0.98
+    * [HBASE-14680] - Two configs for snapshot timeout and better defaults
+    * [HBASE-14682] - CM restore functionality for regionservers is broken
+    * [HBASE-14689] - Addendum and unit test for HBASE-13471
+    * [HBASE-14690] - Fix css so there's no left/right scroll bar
+    * [HBASE-14694] - Scan copy constructor doesn't handle allowPartialResults
+    * [HBASE-14705] - Javadoc for KeyValue constructor is not correct.
+    * [HBASE-14706] - RegionLocationFinder should return multiple servernames by top host
+    * [HBASE-14712] - MasterProcWALs never clean up
+    * [HBASE-14717] - enable_table_replication command should only create specified table for a peer cluster
+    * [HBASE-14723] - Fix IT tests split too many times
+    * [HBASE-14733] - Minor typo in alter_namespace.rb
+    * [HBASE-14737] - Clear cachedMaxVersions when HColumnDescriptor#setValue(VERSIONS, value) is called
+    * [HBASE-14742] - TestHeapMemoryManager is flakey
+    * [HBASE-14745] - Shade the last few dependencies in hbase-shaded-client
+    * [HBASE-14754] - TestFastFailWithoutTestUtil failing on trunk now in #testPreemptiveFastFailException50Times
+    * [HBASE-14759] - Avoid using Math.abs when selecting SyncRunner in FSHLog
+    * [HBASE-14761] - Deletes with and without visibility expression do not delete the matching mutation
+    * [HBASE-14768] - bin/graceful_stop.sh logs nothing as a balancer state to be stored
+    * [HBASE-14771] - RpcServer#getRemoteAddress always returns null
+    * [HBASE-14773] - Fix HBase shell tests are skipped when skipping server tests.
+    * [HBASE-14777] - Fix Inter Cluster Replication Future ordering issues
+    * [HBASE-14778] - Make block cache hit percentages not integer in the metrics system
+    * [HBASE-14781] - Turn per cf flushing on for ITBLL by default
+    * [HBASE-14782] - FuzzyRowFilter skips valid rows
+    * [HBASE-14784] - Port conflict is not resolved in HBaseTestingUtility.randomFreePort()
+    * [HBASE-14788] - Splitting a region does not support the hbase.rs.evictblocksonclose config when closing source region
+    * [HBASE-14793] - Allow limiting size of block into L1 block cache.
+    * [HBASE-14799] - Commons-collections object deserialization remote command execution vulnerability 
+    * [HBASE-14802] - Replaying server crash recovery procedure after a failover causes incorrect handling of deadservers
+    * [HBASE-14804] - HBase shell's create table command ignores 'NORMALIZATION_ENABLED' attribute
+    * [HBASE-14806] - Missing sources.jar for several modules when building HBase
+    * [HBASE-14807] - TestWALLockup is flakey
+    * [HBASE-14809] - Grant / revoke Namespace admin permission to group 
+    * [HBASE-14812] - Fix ResultBoundedCompletionService deadlock
+    * [HBASE-14824] - HBaseAdmin.mergeRegions should recognize both full region names and encoded region names
+    * [HBASE-14838] - Clarify that SimpleRegionNormalizer does not merge empty (<1MB) regions
+    * [HBASE-14840] - Sink cluster reports data replication request as success though the data is not replicated
+    * [HBASE-14843] - TestWALProcedureStore.testLoad is flakey
+    * [HBASE-14867] - SimpleRegionNormalizer needs to have better heuristics to trigger merge operation
+    * [HBASE-14875] - Forward port HBASE-14207 'Region was hijacked and remained in transition when RS failed to open a region and later regionplan changed to new RS on retry'
+    * [HBASE-14885] - NullPointerException in HMaster#normalizeRegions() due to missing TableDescriptor
+    * [HBASE-14893] - Race between mutation on region and region closing operation leads to NotServingRegionException
+    * [HBASE-14894] - Fix misspellings of threshold in log4j.properties files for tests
+    * [HBASE-14904] - Mark Base[En|De]coder LimitedPrivate and fix binary compat issue
+    * [HBASE-14905] - VerifyReplication does not honour versions option
+    * [HBASE-14922] - Delayed flush doesn't work causing flush storms.
+    * [HBASE-14923] - VerifyReplication should not mask the exception during result comparison 
+    * [HBASE-14926] - Hung ThriftServer; no timeout on read from client; if client crashes, worker thread gets stuck reading
+    * [HBASE-14928] - Start row should be set for query through HBase REST gateway involving globbing option
+    * [HBASE-14929] - There is a space missing from Table "foo" is not currently available.
+    * [HBASE-14930] - check_compatibility.sh needs smarter exit codes
+    * [HBASE-14936] - CombinedBlockCache should overwrite CacheStats#rollMetricsPeriod()
+    * [HBASE-14940] - Make our unsafe based ops more safe
+    * [HBASE-14941] - locate_region shell command
+    * [HBASE-14942] - Allow turning off BoundedByteBufferPool
+    * [HBASE-14952] - hbase-assembly source artifact has some incorrect modules
+    * [HBASE-14953] - HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits in case of RejectedExecutionException
+    * [HBASE-14954] - IllegalArgumentException was thrown when doing online configuration change in CompactSplitThread
+    * [HBASE-14960] - Fallback to using default RPCControllerFactory if class cannot be loaded
+    * [HBASE-14965] - Remove un-used hbase-spark in branch-1 +
+    * [HBASE-14968] - ConcurrentModificationException in region close resulting in the region staying in closing state
+    * [HBASE-14974] - Total number of Regions in Transition number on UI incorrect
+    * [HBASE-14977] - ChoreService.shutdown may result in ConcurrentModificationException
+    * [HBASE-14989] - Implementation of Mutation.getWriteToWAL() is backwards
+    * [HBASE-14999] - Remove ref to org.mortbay.log.Log
+    * [HBASE-15001] - Thread Safety issues in ReplicationSinkManager and HBaseInterClusterReplicationEndpoint
+    * [HBASE-15009] - Update test-patch.sh on branches; to fix curtailed build report
+    * [HBASE-15011] - turn off the jdk8 javadoc linter. :(
+    * [HBASE-15014] - Fix filterCellByStore in WALsplitter is awful for performance
+    * [HBASE-15015] - Checktyle plugin shouldn't check Jamon-generated Java classes
+    * [HBASE-15018] - Inconsistent way of handling TimeoutException in the rpc client implementations
+    * [HBASE-15021] - hadoopqa doing false positives
+    * [HBASE-15022] - undefined method `getZooKeeperClusterKey' for Java::OrgApacheHadoopHbaseZookeeper::ZKUtil:Class
+    * [HBASE-15032] - hbase shell scan filter string assumes UTF-8 encoding
+    * [HBASE-15035] - bulkloading hfiles with tags that require splits do not preserve tags
+    * [HBASE-15039] - HMaster and RegionServers should try to refresh token keys from zk when facing InvalidToken
 
 ** Improvement
-    * [HBASE-2609] - Harmonize the Get and Delete operations
-    * [HBASE-4955] - Use the official versions of surefire & junit
-    * [HBASE-8361] - Bulk load and other utilities should not create tables for user
-    * [HBASE-8572] - Enhance delete_snapshot.rb to call snapshot deletion API with regex
-    * [HBASE-10082] - Describe 'table' output is all on one line, could use better formatting
-    * [HBASE-10483] - Provide API for retrieving info port when hbase.master.info.port is set to 0
-    * [HBASE-11639] - [Visibility controller] Replicate the visibility of Cells as strings
-    * [HBASE-11870] - Optimization : Avoid copy of key and value for tags addition in AC and VC
-    * [HBASE-12161] - Add support for grant/revoke on namespaces in AccessControlClient
-    * [HBASE-12243] - HBaseFsck should auto set ignorePreCheckPermission to true if no fix option is set.
-    * [HBASE-12249] - Script to help you adhere to the patch-naming guidelines
-    * [HBASE-12264] - ImportTsv should fail fast if output is not specified and table does not exist
-    * [HBASE-12271] - Add counters for files skipped during snapshot export
-    * [HBASE-12272] - Generate Thrift code through maven
-    * [HBASE-12328] - Need to separate JvmMetrics for Master and RegionServer
-    * [HBASE-12389] - Reduce the number of versions configured for the ACL table
-    * [HBASE-12390] - Change revision style from svn to git
-    * [HBASE-12411] - Optionally enable p-reads and private readers for compactions
-    * [HBASE-12416] - RegionServerCallable should report what host it was communicating with
-    * [HBASE-12424] - Finer grained logging and metrics for split transactions
-    * [HBASE-12432] - RpcRetryingCaller should log after fixed number of retries like AsyncProcess
-    * [HBASE-12434] - Add a command to compact all the regions in a regionserver
-    * [HBASE-12447] - Add support for setTimeRange for RowCounter and CellCounter
-    * [HBASE-12455] - Add 'description' to bean and attribute output when you do /jmx?description=true
-    * [HBASE-12529] - Use ThreadLocalRandom for RandomQueueBalancer
-    * [HBASE-12569] - Control MaxDirectMemorySize in the same manner as heap size
+    * [HBASE-6617] - ReplicationSourceManager should be able to track multiple WAL paths
+    * [HBASE-7171] - Initial web UI for region/memstore/storefiles details
+    * [HBASE-11927] - Use Native Hadoop Library for HFile checksum (And flip default from CRC32 to CRC32C)
+    * [HBASE-12415] - Add add(byte[][] arrays) to Bytes.
+    * [HBASE-12986] - Compaction pressure based client pushback
+    * [HBASE-12988] - [Replication]Parallel apply edits across regions
+    * [HBASE-13103] - [ergonomics] add region size balancing as a feature of master
+    * [HBASE-13127] - Add timeouts on all tests so less zombie sightings
+    * [HBASE-13247] - Change BufferedMutatorExample to use addColumn() since add() is deprecated
+    * [HBASE-13344] - Add enforcer rule that matches our JDK support statement
+    * [HBASE-13358] - Upgrade VisibilityClient API to accept Connection object.
+    * [HBASE-13366] - Throw DoNotRetryIOException instead of read only IOException
+    * [HBASE-13375] - Provide HBase superuser higher priority over other users in the RPC handling
+    * [HBASE-13420] - RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
+    * [HBASE-13534] - Change HBase master WebUI to explicitly mention if it is a backup master
+    * [HBASE-13598] - Make hbase assembly 'attach' to the project
+    * [HBASE-13671] - More classes to add to the invoking repository of org.apache.hadoop.hbase.mapreduce.driver
+    * [HBASE-13673] - WALProcedureStore procedure is chatty
+    * [HBASE-13675] - ProcedureExecutor completion report should be at DEBUG log level
+    * [HBASE-13677] - RecoverableZookeeper WARNs on expected events
+    * [HBASE-13684] - Allow mlockagent to be used when not starting as root
+    * [HBASE-13710] - Remove use of Hadoop's ReflectionUtil in favor of our own.
+    * [HBASE-13745] - Say why a flush was requested in log message
+    * [HBASE-13755] - Provide single super user check implementation
+    * [HBASE-13761] - Optimize FuzzyRowFilter
+    * [HBASE-13780] - Default to 700 for HDFS root dir permissions for secure deployments
+    * [HBASE-13816] - Build shaded modules only in release profile
+    * [HBASE-13828] - Add group permissions testing coverage to AC.
+    * [HBASE-13829] - Add more ThrottleType
+    * [HBASE-13846] - Run MiniCluster on top of other MiniDfsCluster
+    * [HBASE-13848] - Access InfoServer SSL passwords through Credential Provder API
+    * [HBASE-13876] - Improving performance of HeapMemoryManager
+    * [HBASE-13894] - Avoid visitor alloc each call of ByteBufferArray get/putMultiple()
+    * [HBASE-13917] - Remove string comparison to identify request priority
+    * [HBASE-13925] - Use zookeeper multi to clear znodes in ZKProcedureUtil
+    * [HBASE-13927] - Allow hbase-daemon.sh to conditionally redirect the log or not
+    * [HBASE-13947] - Use MasterServices instead of Server in AssignmentManager
+    * [HBASE-13980] - Distinguish blockedFlushCount vs unblockedFlushCount when tuning heap memory
+    * [HBASE-13985] - Add configuration to skip validating HFile format when bulk loading
+    * [HBASE-13996] - Add write sniffing in canary
+    * [HBASE-14002] - Add --noReplicationSetup option to IntegrationTestReplication
+    * [HBASE-14015] - Allow setting a richer state value when toString a pv2
+    * [HBASE-14027] - Clean up netty dependencies
+    * [HBASE-14078] - improve error message when HMaster can't bind to port
+    * [HBASE-14082] - Add replica id to JMX metrics names
+    * [HBASE-14097] - Log link to client scan troubleshooting section when scanner exceptions happen.
+    * [HBASE-14110] - Add CostFunction for balancing primary region replicas
+    * [HBASE-14122] - Client API for determining if server side supports cell level security
+    * [HBASE-14148] - Web UI Framable Page
+    * [HBASE-14172] - Upgrade existing thrift binding using thrift 0.9.3 compiler.
+    * [HBASE-14194] - Undeprecate methods in ThriftServerRunner.HBaseHandler
+    * [HBASE-14203] - remove duplicate code getTableDescriptor in HTable
+    * [HBASE-14230] - replace reflection in FSHlog with HdfsDataOutputStream#getCurrentBlockReplication()
+    * [HBASE-14260] - don't build javadocs for hbase-protocol module
+    * [HBASE-14261] - Enhance Chaos Monkey framework by adding zookeeper and datanode fault injections.
+    * [HBASE-14266] - RegionServers have a lock contention of Configuration.getProps
+    * [HBASE-14268] - Improve KeyLocker
+    * [HBASE-14314] - Metrics for block cache should take region replicas into account
+    * [HBASE-14325] - Add snapshotinfo command to hbase script
+    * [HBASE-14334] - Move Memcached block cache in to it's own optional module.
+    * [HBASE-14387] - Compaction improvements: Maximum off-peak compaction size
+    * [HBASE-14436] - HTableDescriptor#addCoprocessor will always make RegionCoprocessorHost create new Configuration
+    * [HBASE-14461] - Cleanup IncreasingToUpperBoundRegionSplitPolicy
+    * [HBASE-14468] - Compaction improvements: FIFO compaction policy
+    * [HBASE-14517] - Show regionserver's version in master status page
+    * [HBASE-14547] - Add more debug/trace to zk-procedure
+    * [HBASE-14580] - Make the HBaseMiniCluster compliant with Kerberos
+    * [HBASE-14582] - Regionserver status webpage bucketcache list can become huge
+    * [HBASE-14586] - Use a maven profile to run Jacoco analysis
+    * [HBASE-14587] - Attach a test-sources.jar for hbase-server
+    * [HBASE-14588] - Stop accessing test resources from within src folder
+    * [HBASE-14643] - Avoid Splits from once again opening a closed reader for fetching the first and last key
+    * [HBASE-14683] - Batching in buffered mutator is awful when adding lists of mutations.
+    * [HBASE-14684] - Try to remove all MiniMapReduceCluster in unit tests
+    * [HBASE-14687] - Un-synchronize BufferedMutator
+    * [HBASE-14693] - Add client-side metrics for received pushback signals
+    * [HBASE-14696] - Support setting allowPartialResults in mapreduce Mappers
+    * [HBASE-14700] - Support a "permissive" mode for secure clusters to allow "simple" auth clients
+    * [HBASE-14708] - Use copy on write Map for region location cache
+    * [HBASE-14714] - some cleanup to snapshot code
+    * [HBASE-14715] - Add javadocs to DelegatingRetryingCallable
+    * [HBASE-14721] - Memstore add cells - Avoid many garbage
+    * [HBASE-14730] - region server needs to log warnings when there are attributes configured for cells with hfile v2
+    * [HBASE-14752] - Add example of using the HBase client in a multi-threaded environment
+    * [HBASE-14765] - Remove snappy profile
+    * [HBASE-14780] - Integration Tests that run with ChaosMonkey need to specify CFs
+    * [HBASE-14805] - status should show the master in shell
+    * [HBASE-14821] - CopyTable should allow overriding more config properties for peer cluster
+    * [HBASE-14862] - Add support for reporting p90 for histogram metrics
+    * [HBASE-14866] - VerifyReplication should use peer configuration in peer connection
+    * [HBASE-14891] - Add log for uncaught exception in RegionServerMetricsWrapperRunnable
+    * [HBASE-14946] - Don't allow multi's to over run the max result size.
+    * [HBASE-14951] - Make hbase.regionserver.maxlogs obsolete
+    * [HBASE-14976] - Add RPC call queues to the web ui
+    * [HBASE-14978] - Don't allow Multi to retain too many blocks
+    * [HBASE-14984] - Allow memcached block cache to set optimze to false
+    * [HBASE-15005] - Use value array in computing block length for 1.2 and 1.3
 
 ** New Feature
-    * [HBASE-8707] - Add LongComparator for filter
-    * [HBASE-12286] - [shell] Add server/cluster online load of configuration changes
-    * [HBASE-12361] - Show data locality of region in table page
-    * [HBASE-12496] - A blockedRequestsCount metric
-
-
-
-
-
-
-
+    * [HBASE-5980] - Scanner responses from RS should include metrics on rows/KVs filtered
+    * [HBASE-10070] - HBase read high-availability using timeline-consistent region replicas
+    * [HBASE-12911] - Client-side metrics
+    * [HBASE-13356] - HBase should provide an InputFormat supporting multiple scans in mapreduce jobs over snapshots
+    * [HBASE-13639] - SyncTable - rsync for HBase tables
+    * [HBASE-13698] - Add RegionLocator methods to Thrift2 proxy.
+    * [HBASE-14154] - DFS Replication should be configurable at column family level
+    * [HBASE-14355] - Scan different TimeRange for each column family
+    * [HBASE-14459] - Add request and response sizes metrics
+    * [HBASE-14529] - Respond to SIGHUP to reload config
 
 ** Task
-    * [HBASE-10200] - Better error message when HttpServer fails to start due to java.net.BindException
-    * [HBASE-10870] - Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead
-    * [HBASE-12250] - Adding an endpoint for updating the regionserver config
-    * [HBASE-12344] - Split up TestAdmin
-    * [HBASE-12381] - Add maven enforcer rules for build assumptions
-    * [HBASE-12388] - Document that WALObservers don't get empty edits.
-    * [HBASE-12427] - Change branch-1 version from 0.99.2-SNAPSHOT to 0.99.3-SNAPSHOT
-    * [HBASE-12442] - Bring KeyValue#createFirstOnRow() back to branch-1 as deprecated methods
-    * [HBASE-12456] - Update surefire from 2.18-SNAPSHOT to 2.18
-    * [HBASE-12516] - Clean up master so QA Bot is in known good state
-    * [HBASE-12522] - Backport WAL refactoring to branch-1
-
+    * [HBASE-11276] - Add back support for running ChaosMonkey as standalone tool
+    * [HBASE-11677] - Make Logger instance modifiers consistent
+    * [HBASE-13089] - Fix test compilation error on building against htrace-3.2.0-incubating
+    * [HBASE-13666] - book.pdf is not renamed during site build
+    * [HBASE-13716] - Stop using Hadoop's FSConstants
+    * [HBASE-13726] - stop using Hadoop's IOUtils
+    * [HBASE-13764] - Backport HBASE-7782 (HBaseTestingUtility.truncateTable() not acting like CLI) to branch-1.x
+    * [HBASE-13799] - javadoc how Scan gets polluted when used; if you set attributes or ask for scan metrics
+    * [HBASE-13929] - make_rc.sh publishes empty shaded artifacts
+    * [HBASE-13964] - Skip region normalization for tables under namespace quota
+    * [HBASE-14052] - Mark a few methods in CellUtil audience private since they only make sense internally to hbase
+    * [HBASE-14053] - Disable DLR in branch-1+
+    * [HBASE-14066] - clean out old docbook docs from branch-1
+    * [HBASE-14085] - Correct LICENSE and NOTICE files in artifacts
+    * [HBASE-14288] - Upgrade asciidoctor plugin to v1.5.2.1
+    * [HBASE-14290] - Spin up less threads in tests
+    * [HBASE-14308] - HTableDescriptor WARN is not actionable
+    * [HBASE-14318] - make_rc.sh should purge/re-resolve dependencies from local repository
+    * [HBASE-14361] - ReplicationSink should create Connection instances lazily
+    * [HBASE-14493] - Upgrade the jamon-runtime dependency
+    * [HBASE-14502] - Purge use of jmock and remove as dependency
+    * [HBASE-14516] - categorize hadoop-compat tests
+    * [HBASE-14851] - Add test showing how to use TTL from thrift
+    * [HBASE-15003] - Remove BoundedConcurrentLinkedQueue and associated test
 
 ** Test
-    * [HBASE-12317] - Run IntegrationTestRegionReplicaPerf w.o mapred
-    * [HBASE-12335] - IntegrationTestRegionReplicaPerf is flaky
-    * [HBASE-12367] - Integration tests should not restore the cluster if the CM is not destructive
-    * [HBASE-12378] - Add a test to verify that the read-replica is able to read after a compaction
-    * [HBASE-12401] - Add some timestamp signposts in IntegrationTestMTTR
-    * [HBASE-12403] - IntegrationTestMTTR flaky due to aggressive RS restart timeout
-    * [HBASE-12472] - Improve debuggability of IntegrationTestBulkLoad
-    * [HBASE-12549] - Fix TestAssignmentManagerOnCluster#testAssignRacingWithSSH() flaky test
-    * [HBASE-12554] - TestBaseLoadBalancer may timeout due to lengthy rack lookup
+    * [HBASE-13591] - TestHBaseFsck is flakey
+    * [HBASE-13609] - TestFastFail is still failing
+    * [HBASE-13940] - IntegrationTestBulkLoad needs option to specify output folders used by test
+    * [HBASE-14197] - TestRegionServerHostname#testInvalidRegionServerHostnameAbortsServer fails in Jenkins
+    * [HBASE-14210] - Create test for cell level ACLs involving user group
+    * [HBASE-14277] - TestRegionServerHostname.testRegionServerHostname may fail at host with a case sensitive name
+    * [HBASE-14344] - Add timeouts to TestHttpServerLifecycle
+    * [HBASE-14584] - TestNamespacesInstanceModel fails on jdk8
+    * [HBASE-14758] - Add UT case for unchecked error/exception thrown in AsyncProcess#sendMultiAction
+    * [HBASE-14839] - [branch-1] Backport test categories so that patch backport is easier
 
 ** Umbrella
-    * [HBASE-10602] - Cleanup HTable public interface
-    * [HBASE-10856] - Prep for 1.0
-
+    * [HBASE-13747] - Promote Java 8 to "yes" in support matrix
+    * [HBASE-13908] - 1.2 release umbrella
+    * [HBASE-14420] - Zombie Stomping Session
 
-
-Release Notes - HBase - Version 0.99.1 10/15/2014
+Release Notes - HBase - Version 1.1.0 05/11/2015
 
 ** Sub-task
-    * [HBASE-11160] - Undo append waiting on region edit/sequence id update
-    * [HBASE-11178] - Remove deprecation annotations from mapred namespace
-    * [HBASE-11738] - Document improvements to LoadTestTool and PerformanceEvaluation
-    * [HBASE-11872] - Avoid usage of KeyValueUtil#ensureKeyValue from Compactor
-    * [HBASE-11874] - Support Cell to be passed to StoreFile.Writer rather than KeyValue
-    * [HBASE-11917] - Deprecate / Remove HTableUtil
-    * [HBASE-11920] - Add CP hooks for ReplicationEndPoint
-    * [HBASE-11930] - Document new permission check to roll WAL writer
-    * [HBASE-11980] - Change sync to hsync, remove unused InfoServer, and reference our httpserver instead of hadoops
-    * [HBASE-11997] - CopyTable with bulkload
-    * [HBASE-12023] - HRegion.applyFamilyMapToMemstore creates too many iterator objects.
-    * [HBASE-12046] - HTD/HCD setters should be builder-style
-    * [HBASE-12047] - Avoid usage of KeyValueUtil#ensureKeyValue in simple cases
-    * [HBASE-12050] - Avoid KeyValueUtil#ensureKeyValue from DefaultMemStore
-    * [HBASE-12051] - Avoid KeyValueUtil#ensureKeyValue from DefaultMemStore
-    * [HBASE-12059] - Create hbase-annotations module
-    * [HBASE-12062] - Fix usage of Collections.toArray
-    * [HBASE-12068] - [Branch-1] Avoid need to always do KeyValueUtil#ensureKeyValue for Filter transformCell
-    * [HBASE-12069] - Finish making HFile.Writer Cell-centric; undo APIs that expect KV serializations.
-    * [HBASE-12076] - Move InterfaceAudience imports to hbase-annotations
-    * [HBASE-12077] - FilterLists create many ArrayList$Itr objects per row.
-    * [HBASE-12079] - Deprecate KeyValueUtil#ensureKeyValue(s)
-    * [HBASE-12082] - Find a way to set timestamp on Cells on the server
-    * [HBASE-12086] - Fix bugs in HTableMultiplexer
-    * [HBASE-12096] - In ZKSplitLog Coordination and AggregateImplementation replace enhaced for statements with basic for statement to avoid unnecessary object allocation
-    * [HBASE-12104] - Some optimization and bugfix for HTableMultiplexer
-    * [HBASE-12110] - Fix .arcconfig
-    * [HBASE-12112] - Avoid KeyValueUtil#ensureKeyValue some more simple cases
-    * [HBASE-12115] - Fix NumberFormat Exception in TableInputFormatBase.
-    * [HBASE-12189] - Fix new issues found by coverity static analysis
-    * [HBASE-12210] - Avoid KeyValue in Prefix Tree
+    * [HBASE-7847] - Use zookeeper multi to clear znodes
+    * [HBASE-10674] - HBCK should be updated to do replica related checks
+    * [HBASE-10942] - support parallel request cancellation for multi-get
+    * [HBASE-11261] - Handle splitting/merging of regions that have region_replication greater than one
+    * [HBASE-11567] - Write bulk load COMMIT events to WAL
+    * [HBASE-11568] - Async WAL replication for region replicas
+    * [HBASE-11569] - Flush / Compaction handling from secondary region replicas
+    * [HBASE-11571] - Bulk load handling from secondary region replicas
+    * [HBASE-11574] - hbase:meta's regions can be replicated
+    * [HBASE-11580] - Failover handling for secondary region replicas
+    * [HBASE-11598] - Add simple rpc throttling
+    * [HBASE-11842] - Integration test for async wal replication to secondary regions
+    * [HBASE-11903] - Directly invoking split & merge of replica regions should be disallowed
+    * [HBASE-11908] - Region replicas should be added to the meta table at the time of table creation
+    * [HBASE-12012] - Improve cancellation for the scan RPCs
+    * [HBASE-12511] - namespace permissions - add support for table creation privilege in a namespace 'C'
+    * [HBASE-12561] - Replicas of regions can be cached from different instances of the table in MetaCache
+    * [HBASE-12562] - Handling memory pressure for secondary region replicas
+    * [HBASE-12708] - Document newly introduced params for using Thrift-over-HTTPS.
+    * [HBASE-12714] - RegionReplicaReplicationEndpoint should not set the RPC Codec
+    * [HBASE-12730] - Backport HBASE-5162 (Basic client pushback mechanism) to branch-1
+    * [HBASE-12735] - Refactor TAG so it can live as unit test and as an integration test
+    * [HBASE-12763] - Make it so there must be WALs for a server to be marked dead
+    * [HBASE-12776] - SplitTransaction: Log number of files to be split
+    * [HBASE-12779] - SplitTransaction: Add metrics
+    * [HBASE-12793] - [hbck] closeRegionSilentlyAndWait() should log cause of IOException and retry until hbase.hbck.close.timeout expires
+    * [HBASE-12802] - Remove unnecessary Table.flushCommits()
+    * [HBASE-12848] - Utilize Flash storage for WAL
+    * [HBASE-12926] - Backport HBASE-12688 (Update site with a bootstrap-based UI) for HBASE-12918
+    * [HBASE-12980] - Delete of a table may not clean all rows from hbase:meta
+    * [HBASE-13006] - Document visibility label support for groups
+    * [HBASE-13067] - Fix caching of stubs to allow IP address changes of restarted remote servers
+    * [HBASE-13108] - Reduce Connection creations in TestAcidGuarantees
+    * [HBASE-13121] - Async wal replication for region replicas and dist log replay does not work together
+    * [HBASE-13130] - Add timeouts on TestMasterObserver, a frequent zombie show
+    * [HBASE-13164] - Update TestUsersOperationsWithSecureHadoop to use MiniKdc
+    * [HBASE-13169] - ModifyTable increasing the region replica count should also auto-setup RRRE
+    * [HBASE-13201] - Remove HTablePool from thrift-server
+    * [HBASE-13202] - Procedure v2 - core framework
+    * [HBASE-13203] - Procedure v2 - master create/delete table
+    * [HBASE-13204] - Procedure v2 - client create/delete table sync
+    * [HBASE-13209] - Procedure V2 - master Add/Modify/Delete Column Family
+    * [HBASE-13210] - Procedure V2 - master Modify table
+    * [HBASE-13211] - Procedure V2 - master Enable/Disable table
+    * [HBASE-13213] - Split out locality metrics among primary and secondary region
+    * [HBASE-13244] - Test delegation token generation with kerberos enabled
+    * [HBASE-13290] - Procedure v2 - client enable/disable table sync
+    * [HBASE-13303] - Fix size calculation of results on the region server
+    * [HBASE-13307] - Making methods under ScannerV2#next inlineable, faster
+    * [HBASE-13327] - Use Admin in ConnectionCache
+    * [HBASE-13332] - Fix the usage of doAs/runAs in Visibility Controller tests.
+    * [HBASE-13335] - Update ClientSmallScanner and ClientSmallReversedScanner
+    * [HBASE-13386] - Backport HBASE-12601 to all active branches other than master
+    * [HBASE-13421] - Reduce the number of object creations introduced by HBASE-11544 in scan RPC hot code paths
+    * [HBASE-13447] - Bypass logic in TimeRange.compare
+    * [HBASE-13455] - Procedure V2 - master truncate table
+    * [HBASE-13466] - Document deprecations in 1.x - Part 1
+    * [HBASE-13469] - [branch-1.1] Procedure V2 - Make procedure v2 configurable in branch-1.1
+    * [HBASE-13481] - Master should respect master (old) DNS/bind related configurations
+    * [HBASE-13496] - Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable
+    * [HBASE-13498] - Add more docs and a basic check for storage policy handling
+    * [HBASE-13502] - Deprecate/remove getRowComparator() in TableName
+    * [HBASE-13514] - Fix test failures in TestScannerHeartbeatMessages caused by incorrect setting of hbase.rpc.timeout
+    * [HBASE-13515] - Handle FileNotFoundException in region replica replay for flush/compaction events
+    * [HBASE-13529] - Procedure v2 - WAL Improvements
+    * [HBASE-13551] - Procedure V2 - Procedure classes should not be InterfaceAudience.Public
+
+** Brainstorming
+    * [HBASE-12859] - New master API to track major compaction completion
 
 ** Bug
-    * [HBASE-6994] - minor doc update about DEFAULT_ACCEPTABLE_FACTOR
-    * [HBASE-8808] - Use Jacoco to generate Unit Test coverage reports
-    * [HBASE-8936] - Fixing TestSplitLogWorker while running Jacoco tests.
-    * [HBASE-9005] - Improve documentation around KEEP_DELETED_CELLS, time range scans, and delete markers
-    * [HBASE-9513] - Why is PE#RandomSeekScanTest way slower in 0.96 than in 0.94?
-    * [HBASE-10314] - Add Chaos Monkey that doesn't touch the master
-    * [HBASE-10748] - hbase-daemon.sh fails to execute with 'sh' command
-    * [HBASE-10757] - Change HTable class doc so it sends people to HCM getting instances
-    * [HBASE-11145] - UNEXPECTED!!! when HLog sync: Queue full
-    * [HBASE-11266] - Remove shaded references to logger
-    * [HBASE-11394] - Replication can have data loss if peer id contains hyphen "-"
-    * [HBASE-11401] - Late-binding sequenceid presumes a particular KeyValue mvcc format hampering experiment
-    * [HBASE-11405] - Multiple invocations of hbck in parallel disables balancer permanently 
-    * [HBASE-11804] - Raise default heap size if unspecified
-    * [HBASE-11815] - Flush and compaction could just close the tmp writer if there is an exception
-    * [HBASE-11890] - HBase REST Client is hard coded to http protocol
-    * [HBASE-11906] - Meta data loss with distributed log replay
-    * [HBASE-11967] - HMaster in standalone won't go down if it gets 'Unhandled exception'
-    * [HBASE-11974] - When a disabled table is scanned, NotServingRegionException is thrown instead of TableNotEnabledException
-    * [HBASE-11982] - Bootstraping hbase:meta table creates a WAL file in region dir
-    * [HBASE-11988] - AC/VC system table create on postStartMaster fails too often in test
-    * [HBASE-11991] - Region states may be out of sync
-    * [HBASE-11994] - PutCombiner floods the M/R log with repeated log messages.
-    * [HBASE-12007] - StochasticBalancer should avoid putting user regions on master
-    * [HBASE-12019] - hbase-daemon.sh overwrite HBASE_ROOT_LOGGER and HBASE_SECURITY_LOGGER variables
-    * [HBASE-12024] - Fix javadoc warning
-    * [HBASE-12025] - TestHttpServerLifecycle.testStartedServerWithRequestLog hangs frequently
-    * [HBASE-12034] - If I kill single RS in branch-1, all regions end up on Master!
-    * [HBASE-12038] - Replace internal uses of signatures with byte[] and String tableNames to use the TableName equivalents. 
-    * [HBASE-12041] - AssertionError in HFilePerformanceEvaluation.UniformRandomReadBenchmark
-    * [HBASE-12042] - Replace internal uses of HTable(Configuration, String) with HTable(Configuration, TableName)
-    * [HBASE-12043] - REST server should respond with FORBIDDEN(403) code on AccessDeniedException
-    * [HBASE-12044] - REST delete operation should not retry disableTable for DoNotRetryIOException
-    * [HBASE-12045] - REST proxy users configuration in hbase-site.xml is ignored
-    * [HBASE-12052] - BulkLoad Failed due to no write permission on input files
-    * [HBASE-12054] - bad state after NamespaceUpgrade with reserved table names
-    * [HBASE-12056] - RPC logging too much in DEBUG mode
-    * [HBASE-12064] - hbase.master.balancer.stochastic.numRegionLoadsToRemember is not used
-    * [HBASE-12065] -  Import tool is not restoring multiple DeleteFamily markers of a row
-    * [HBASE-12067] - Remove deprecated metrics classes.
-    * [HBASE-12078] - Missing Data when scanning using PREFIX_TREE DATA-BLOCK-ENCODING
-    * [HBASE-12095] - SecureWALCellCodec should handle the case where encryption is disabled
-    * [HBASE-12098] - User granted namespace table create permissions can't create a table
-    * [HBASE-12099] - TestScannerModel fails if using jackson 1.9.13
-    * [HBASE-12106] - Move test annotations to test artifact
-    * [HBASE-12109] - user_permission command for namespace does not return correct result
-    * [HBASE-12119] - Master regionserver web UI NOT_FOUND
-    * [HBASE-12120] - HBase shell doesn't allow deleting of a cell by user with W-only permissions to it
-    * [HBASE-12122] - Try not to assign user regions to master all the time
-    * [HBASE-12123] - Failed assertion in BucketCache after 11331
-    * [HBASE-12124] - Closed region could stay closed if master stops at bad time
-    * [HBASE-12126] - Region server coprocessor endpoint
-    * [HBASE-12130] - HBASE-11980 calls hflush and hsync doing near double the syncing work
-    * [HBASE-12134] - publish_website.sh script is too optimistic
-    * [HBASE-12135] - Website is broken
-    * [HBASE-12136] - Race condition between client adding tableCF replication znode and  server triggering TableCFsTracker
-    * [HBASE-12137] - Alter table add cf doesn't do compression test
-    * [HBASE-12139] - StochasticLoadBalancer doesn't work on large lightly loaded clusters
-    * [HBASE-12140] - Add ConnectionFactory.createConnection() to create using default HBaseConfiguration.
-    * [HBASE-12145] - Fix javadoc and findbugs so new folks aren't freaked when they see them
-    * [HBASE-12146] - RegionServerTracker should escape data in log messages
-    * [HBASE-12149] - TestRegionPlacement is failing undeterministically
-    * [HBASE-12151] - Make dev scripts executable
-    * [HBASE-12153] - Fixing TestReplicaWithCluster
-    * [HBASE-12156] - TableName cache isn't used for one of valueOf methods.
-    * [HBASE-12158] - TestHttpServerLifecycle.testStartedServerWithRequestLog goes zombie on occasion
-    * [HBASE-12160] - Make Surefire's argLine configurable in the command line
-    * [HBASE-12164] - Check for presence of user Id in SecureBulkLoadEndpoint#secureBulkLoadHFiles() is inaccurate
-    * [HBASE-12165] - TestEndToEndSplitTransaction.testFromClientSideWhileSplitting fails
-    * [HBASE-12166] - TestDistributedLogSplitting.testMasterStartsUpWithLogReplayWork
-    * [HBASE-12167] - NPE in AssignmentManager
-    * [HBASE-12170] - TestReplicaWithCluster.testReplicaAndReplication timeouts
-    * [HBASE-12181] - Some tests create a table and try to use it before regions get assigned
-    * [HBASE-12183] - FuzzyRowFilter doesn't support reverse scans
-    * [HBASE-12184] - ServerShutdownHandler throws NPE
-    * [HBASE-12191] - Make TestCacheOnWrite faster.
-    * [HBASE-12196] - SSH should retry in case failed to assign regions
-    * [HBASE-12197] - Move REST
-    * [HBASE-12198] - Fix the bug of not updating location cache
-    * [HBASE-12199] - Make TestAtomicOperation and TestEncodedSeekers faster
-    * [HBASE-12200] - When an RPC server handler thread dies, throw exception 
-    * [HBASE-12206] - NPE in RSRpcServices
-    * [HBASE-12209] - NPE in HRegionServer#getLastSequenceId
-    * [HBASE-12218] - Make HBaseCommonTestingUtil#deleteDir try harder
+    * [HBASE-6778] - Deprecate Chore; it's a thread per task when we should have one thread to do all tasks
+    * [HBASE-7332] - [webui] HMaster webui should display the number of regions a table has.
+    * [HBASE-8026] - HBase Shell docs for scan command does not reference VERSIONS
+    * [HBASE-8725] - Add total time RPC call metrics
+    * [HBASE-9738] - Delete table and loadbalancer interference
+    * [HBASE-9910] - TestHFilePerformance and HFilePerformanceEvaluation should be merged in a single HFile performance test class.
+    * [HBASE-10499] - In write heavy scenario one of the regions does not get flushed causing RegionTooBusyException
+    * [HBASE-10528] - DefaultBalancer selects plans to move regions onto draining nodes
+    * [HBASE-10728] - get_counter value is never used.
+    * [HBASE-11542] - Unit Test  KeyStoreTestUtil.java compilation failure in IBM JDK
+    * [HBASE-11544] - [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME
+    * [HBASE-12006] - [JDK 8] KeyStoreTestUtil#generateCertificate fails due to "subject class type invalid"
+    * [HBASE-12028] - Abort the RegionServer, when it's handler threads die
+    * [HBASE-12070] - Add an option to hbck to fix ZK inconsistencies
+    * [HBASE-12102] - Duplicate keys in HBase.RegionServer metrics JSON
+    * [HBASE-12108] - HBaseConfiguration: set classloader before loading xml files
+    * [HBASE-12270] - A bug in the bucket cache, with cache blocks on write enabled
+    * [HBASE-12339] - WAL performance evaluation tool doesn't roll logs
+    * [HBASE-12393] - The regionserver web will throw exception if we disable block cache
+    * [HBASE-12480] - Regions in FAILED_OPEN/FAILED_CLOSE should be processed on master failover
+    * [HBASE-12548] - Improve debuggability of IntegrationTestTimeBoundedRequestsWithRegionReplicas
+    * [HBASE-12574] - Update replication metrics to not do so many map look ups.
+    * [HBASE-12585] - Fix refguide so it does hbase 1.0 style API everywhere with callout on how we used to do it in pre-1.0
+    * [HBASE-12607] - TestHBaseFsck#testParallelHbck fails running against hadoop 2.6.0
+    * [HBASE-12644] - Visibility Labels: issue with storing super users in labels table
+    * [HBASE-12694] - testTableExistsIfTheSpecifiedTableRegionIsSplitParent in TestSplitTransactionOnCluster class leaves regions in transition
+    * [HBASE-12697] - Don't use RegionLocationFinder if localityCost == 0
+    * [HBASE-12711] - Fix new findbugs warnings in hbase-thrift module
+    * [HBASE-12715] - getLastSequenceId always returns -1
+    * [HBASE-12716] - A bug in RegionSplitter.UniformSplit algorithm
+    * [HBASE-12717] - Pre-split algorithm in HBaseAdmin.create() can not find the split point
+    * [HBASE-12718] - Convert TestAcidGuarantees from a unit test to an integration test
+    * [HBASE-12728] - buffered writes substantially less useful after removal of HTablePool
+    * [HBASE-12732] - Log messages in FileLink$FileLinkInputStream#tryOpen are reversed
+    * [HBASE-12734] - TestPerColumnFamilyFlush.testCompareStoreFileCount is flakey
+    * [HBASE-12739] - Avoid too large identifier of ZooKeeperWatcher
+    * [HBASE-12740] - Improve performance of TestHBaseFsck
+    * [HBASE-12741] - AccessController contains a javadoc issue
+    * [HBASE-12742] - ClusterStatusPublisher crashes with a IPv6 network interface.
+    * [HBASE-12743] - [ITBLL] Master fails rejoining cluster stuck splitting logs; Distributed log replay=true
+    * [HBASE-12744] - hbase-default.xml lists hbase.regionserver.global.memstore.size twice
+    * [HBASE-12747] - IntegrationTestMTTR will OOME if launched with mvn verify
+    * [HBASE-12749] - Tighten HFileLink api to enable non-snapshot uses
+    * [HBASE-12750] - getRequestsCount() in ClusterStatus returns total number of requests
+    * [HBASE-12767] - Fix a StoreFileScanner NPE in reverse scan flow
+    * [HBASE-12771] - TestFailFast#testFastFail failing
+    * [HBASE-12772] - TestPerColumnFamilyFlush failing
+    * [HBASE-12774] - Fix the inconsistent permission checks for bulkloading.
+    * [HBASE-12781] - thrift2 listen port will bind always to the passed command line address
+    * [HBASE-12782] - ITBLL fails for me if generator does anything but 5M per maptask
+    * [HBASE-12791] - HBase does not attempt to clean up an aborted split when the regionserver shutting down
+    * [HBASE-12798] - Map Reduce jobs should not create Tables in setConf()
+    * [HBASE-12801] - Failed to truncate a table while maintaining binary region boundaries
+    * [HBASE-12804] - ImportTsv fails to delete partition files created by it
+    * [HBASE-12810] - Update to htrace-incubating
+    * [HBASE-12811] - [AccessController] NPE while scanning a table with user not having READ permission on the namespace
+    * [HBASE-12817] - Data missing while scanning using PREFIX_TREE data block encoding
+    * [HBASE-12819] - ExportSnapshot doesn't close FileSystem instances
+    * [HBASE-12824] - CompressionTest fails with org.apache.hadoop.hbase.io.hfile.AbstractHFileReader$NotSeekedException: Not seeked to a key/value
+    * [HBASE-12831] - Changing the set of vis labels a user has access to doesn't generate an audit log event
+    * [HBASE-12832] - Describe table from shell no longer shows Table's attributes, only CF attributes
+    * [HBASE-12833] - [shell] table.rb leaks connections
+    * [HBASE-12835] - HBASE-12422 changed new HTable(Configuration) to not use managed Connections anymore
+    * [HBASE-12837] - ReplicationAdmin leaks zk connections
+    * [HBASE-12844] - ServerManager.isServerReacable() should sleep between retries
+    * [HBASE-12845] - ByteBufferOutputStream should grow as direct buffer if the initial buffer is also direct BB
+    * [HBASE-12847] - TestZKLessSplitOnCluster frequently times out in 0.98 builds
+    * [HBASE-12849] - LoadIncrementalHFiles should use unmanaged connection in branch-1
+    * [HBASE-12862] - Uppercase "wals" in RegionServer webUI
+    * [HBASE-12863] - Master info port on RS UI is always 0
+    * [HBASE-12864] - IntegrationTestTableSnapshotInputFormat fails
+    * [HBASE-12867] - Shell does not support custom replication endpoint specification
+    * [HBASE-12878] - Incorrect HFile path in TestHFilePerformance print output (fix for easier debugging)
+    * [HBASE-12881] - TestFastFail is not compatible with surefire.rerunFailingTestsCount
+    * [HBASE-12886] - Correct tag option name in PerformanceEvaluation
+    * [HBASE-12892] - Add a class to allow taking a snapshot from the command line
+    * [HBASE-12897] - Minimum memstore size is a percentage
+    * [HBASE-12898] - Add in used undeclared dependencies
+    * [HBASE-12901] - Possible deadlock while onlining a region and get region plan for other region run parallel
+    * [HBASE-12904] - Threading issues in region_mover.rb
+    * [HBASE-12908] - Typos in MemStoreFlusher javadocs
+    * [HBASE-12915] - Disallow small scan with batching
+    * [HBASE-12916] - No access control for replicating WAL entries
+    * [HBASE-12917] - HFilePerformanceEvaluation Scan tests fail with StackOverflowError due to recursive call in createCell function
+    * [HBASE-12918] - Backport asciidoc changes
+    * [HBASE-12919] - Compilation with Hadoop-2.4- is broken again
+    * [HBASE-12924] - HRegionServer#MovedRegionsCleaner Chore does not start
+    * [HBASE-12927] - TestFromClientSide#testScanMetrics() failing due to duplicate createTable commands
+    * [HBASE-12931] - The existing KeyValues in memstore are not removed completely after inserting cell into memStore
+    * [HBASE-12948] - Calling Increment#addColumn on the same column multiple times produces wrong result
+    * [HBASE-12951] - TestHCM.testConnectionClose is flakey when using AsyncRpcClient as client implementation
+    * [HBASE-12953] - RegionServer is not functionally working with AsyncRpcClient in secure mode
+    * [HBASE-12954] - Ability impaired using HBase on multihomed hosts
+    * [HBASE-12956] - Binding to 0.0.0.0 is broken after HBASE-10569
+    * [HBASE-12958] - SSH doing hbase:meta get but hbase:meta not assigned
+    * [HBASE-12961] - Negative values in read and write region server metrics
+    * [HBASE-12962] - TestHFileBlockIndex.testBlockIndex() commented out during HBASE-10531
+    * [HBASE-12964] - Add the ability for hbase-daemon.sh to start in the foreground
+    * [HBASE-12966] - NPE in HMaster while recovering tables in Enabling state
+    * [HBASE-12969] - Parameter Validation is not there for shell script, local-master-backup.sh and local-regionservers.sh
+    * [HBASE-12971] - Replication stuck due to large default value for replication.source.maxretriesmultiplier
+    * [HBASE-12976] - Set default value for hbase.client.scanner.max.result.size
+    * [HBASE-12978] - Region goes permanently offline (WAS: hbase:meta has a row missing hregioninfo and it causes my long-running job to fail)
+    * [HBASE-12984] - SSL cannot be used by the InfoPort after removing deprecated code in HBASE-10336
+    * [HBASE-12985] - Javadoc warning and findbugs fixes to get us green again
+    * [HBASE-12989] - region_mover.rb unloadRegions method uses ArrayList concurrently resulting in errors
+    * [HBASE-12991] - Use HBase 1.0 interfaces in hbase-rest
+    * [HBASE-12993] - Use HBase 1.0 interfaces in hbase-thrift
+    * [HBASE-12996] - Reversed field on Filter should be transient
+    * [HBASE-12998] - Compilation with Hdfs-2.7.0-SNAPSHOT is broken after HDFS-7647
+    * [HBASE-12999] - Make foreground_start return the correct exit code
+    * [HBASE-13001] - NullPointer in master logs for table.jsp
+    * [HBASE-13003] - Get tests in TestHFileBlockIndex back
+    * [HBASE-13004] - Make possible to explain why HBaseTestingUtility.waitFor fails
+    * [HBASE-13007] - Fix the test timeouts being caused by ChoreService
+    * [HBASE-13009] - HBase REST UI inaccessible
+    * [HBASE-13010] - HFileOutputFormat2 partitioner's path is hard-coded as '/tmp'
+    * [HBASE-13011] - TestLoadIncrementalHFiles is flakey when using AsyncRpcClient as client implementation
+    * [HBASE-13027] - mapreduce.TableInputFormatBase should create its own Connection if needed
+    * [HBASE-13030] - [1.0.0 polish] Make ScanMetrics public again and align Put 'add' with Get, Delete, etc., addColumn
+    * [HBASE-13032] - Migration of states should be performed once META is assigned and onlined.
+    * [HBASE-13036] - Meta scanner should use its own threadpool
+    * [HBASE-13038] - Fix the java doc warning continuously reported by Hadoop QA
+    * [HBASE-13039] - Add patchprocess/* to .gitignore to fix builds of branches
+    * [HBASE-13040] - Possible failure of TestHMasterRPCException
+    * [HBASE-13047] - Add "HBase Configuration" link missing on the table details pages
+    * [HBASE-13048] - Use hbase.crypto.wal.algorithm in SecureProtobufLogReader while decrypting the data
+    * [HBASE-13049] - wal_roll ruby command doesn't work.
+    * [HBASE-13050] - Hbase shell create_namespace command throws ArrayIndexOutOfBoundException for (invalid) empty text input.
+    * 

<TRUNCATED>

[09/11] hbase git commit: HBASE-13908 update version to 1.2.0 for RC

Posted by bu...@apache.org.
HBASE-13908 update version to 1.2.0 for RC


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/19d6a295
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/19d6a295
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/19d6a295

Branch: refs/heads/branch-1.2
Commit: 19d6a2959a14c9b5bccd125ba348c09be455e381
Parents: 6f07973
Author: Sean Busbey <bu...@apache.org>
Authored: Sun Jan 3 07:34:27 2016 +0000
Committer: Sean Busbey <bu...@apache.org>
Committed: Sun Jan 3 07:49:24 2016 +0000

----------------------------------------------------------------------
 hbase-annotations/pom.xml                | 2 +-
 hbase-assembly/pom.xml                   | 2 +-
 hbase-checkstyle/pom.xml                 | 4 ++--
 hbase-client/pom.xml                     | 2 +-
 hbase-common/pom.xml                     | 2 +-
 hbase-examples/pom.xml                   | 2 +-
 hbase-external-blockcache/pom.xml        | 2 +-
 hbase-hadoop-compat/pom.xml              | 2 +-
 hbase-hadoop2-compat/pom.xml             | 2 +-
 hbase-it/pom.xml                         | 2 +-
 hbase-prefix-tree/pom.xml                | 2 +-
 hbase-procedure/pom.xml                  | 2 +-
 hbase-protocol/pom.xml                   | 2 +-
 hbase-resource-bundle/pom.xml            | 2 +-
 hbase-rest/pom.xml                       | 2 +-
 hbase-server/pom.xml                     | 2 +-
 hbase-shaded/hbase-shaded-client/pom.xml | 2 +-
 hbase-shaded/hbase-shaded-server/pom.xml | 2 +-
 hbase-shaded/pom.xml                     | 2 +-
 hbase-shell/pom.xml                      | 2 +-
 hbase-testing-util/pom.xml               | 2 +-
 hbase-thrift/pom.xml                     | 2 +-
 pom.xml                                  | 2 +-
 23 files changed, 24 insertions(+), 24 deletions(-)
----------------------------------------------------------------------
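
The per-module diffs that follow are mechanical: each module pom drops the -SNAPSHOT qualifier from the inherited parent version (hbase-checkstyle additionally bumps its own <version> element). For orientation, this is the parent block a typical module carries after this commit; the hbase-shaded submodules point at the hbase-shaded parent instead:

  <parent>
    <artifactId>hbase</artifactId>
    <groupId>org.apache.hbase</groupId>
    <version>1.2.0</version>
    <relativePath>..</relativePath>
  </parent>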


http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-annotations/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-annotations/pom.xml b/hbase-annotations/pom.xml
index 2b45496..776710c 100644
--- a/hbase-annotations/pom.xml
+++ b/hbase-annotations/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-assembly/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-assembly/pom.xml b/hbase-assembly/pom.xml
index 5f08261..750b07e 100644
--- a/hbase-assembly/pom.xml
+++ b/hbase-assembly/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-assembly</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-checkstyle/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-checkstyle/pom.xml b/hbase-checkstyle/pom.xml
index 51e10e3..52c8e56 100644
--- a/hbase-checkstyle/pom.xml
+++ b/hbase-checkstyle/pom.xml
@@ -24,14 +24,14 @@
 <modelVersion>4.0.0</modelVersion>
 <groupId>org.apache.hbase</groupId>
 <artifactId>hbase-checkstyle</artifactId>
-<version>1.2.0-SNAPSHOT</version>
+<version>1.2.0</version>
 <name>Apache HBase - Checkstyle</name>
 <description>Module to hold Checkstyle properties for HBase.</description>
 
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-client/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-client/pom.xml b/hbase-client/pom.xml
index 7a6ba8c..3e2c032 100644
--- a/hbase-client/pom.xml
+++ b/hbase-client/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-common/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-common/pom.xml b/hbase-common/pom.xml
index 0465620..a115657 100644
--- a/hbase-common/pom.xml
+++ b/hbase-common/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-examples/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-examples/pom.xml b/hbase-examples/pom.xml
index b03627c..4272286 100644
--- a/hbase-examples/pom.xml
+++ b/hbase-examples/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-examples</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-external-blockcache/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-external-blockcache/pom.xml b/hbase-external-blockcache/pom.xml
index 2116713..1a0bf40 100644
--- a/hbase-external-blockcache/pom.xml
+++ b/hbase-external-blockcache/pom.xml
@@ -25,7 +25,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-external-blockcache</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-hadoop-compat/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-hadoop-compat/pom.xml b/hbase-hadoop-compat/pom.xml
index be1e1c8..3107024 100644
--- a/hbase-hadoop-compat/pom.xml
+++ b/hbase-hadoop-compat/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.2.0-SNAPSHOT</version>
+        <version>1.2.0</version>
         <relativePath>..</relativePath>
     </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-hadoop2-compat/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-hadoop2-compat/pom.xml b/hbase-hadoop2-compat/pom.xml
index 2069bfa..39f7661 100644
--- a/hbase-hadoop2-compat/pom.xml
+++ b/hbase-hadoop2-compat/pom.xml
@@ -21,7 +21,7 @@ limitations under the License.
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-it/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-it/pom.xml b/hbase-it/pom.xml
index 91e3daf..d7e6610 100644
--- a/hbase-it/pom.xml
+++ b/hbase-it/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-prefix-tree/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-prefix-tree/pom.xml b/hbase-prefix-tree/pom.xml
index 061feb5..09c0c43 100644
--- a/hbase-prefix-tree/pom.xml
+++ b/hbase-prefix-tree/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-procedure/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-procedure/pom.xml b/hbase-procedure/pom.xml
index 6994325..be4bb09 100644
--- a/hbase-procedure/pom.xml
+++ b/hbase-procedure/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-protocol/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-protocol/pom.xml b/hbase-protocol/pom.xml
index c0453c2..16d2964 100644
--- a/hbase-protocol/pom.xml
+++ b/hbase-protocol/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.2.0-SNAPSHOT</version>
+        <version>1.2.0</version>
         <relativePath>..</relativePath>
     </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-resource-bundle/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-resource-bundle/pom.xml b/hbase-resource-bundle/pom.xml
index a835e43..ebbbe36 100644
--- a/hbase-resource-bundle/pom.xml
+++ b/hbase-resource-bundle/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-rest/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-rest/pom.xml b/hbase-rest/pom.xml
index f19bc92..d91db79 100644
--- a/hbase-rest/pom.xml
+++ b/hbase-rest/pom.xml
@@ -25,7 +25,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-rest</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-server/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-server/pom.xml b/hbase-server/pom.xml
index 5d439be..2745598 100644
--- a/hbase-server/pom.xml
+++ b/hbase-server/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-server</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-shaded/hbase-shaded-client/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shaded/hbase-shaded-client/pom.xml b/hbase-shaded/hbase-shaded-client/pom.xml
index 7bb22cb..12fe62d 100644
--- a/hbase-shaded/hbase-shaded-client/pom.xml
+++ b/hbase-shaded/hbase-shaded-client/pom.xml
@@ -24,7 +24,7 @@
     <parent>
         <artifactId>hbase-shaded</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.2.0-SNAPSHOT</version>
+        <version>1.2.0</version>
         <relativePath>..</relativePath>
     </parent>
     <artifactId>hbase-shaded-client</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-shaded/hbase-shaded-server/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shaded/hbase-shaded-server/pom.xml b/hbase-shaded/hbase-shaded-server/pom.xml
index 665c0c0..9b3083f 100644
--- a/hbase-shaded/hbase-shaded-server/pom.xml
+++ b/hbase-shaded/hbase-shaded-server/pom.xml
@@ -24,7 +24,7 @@
     <parent>
         <artifactId>hbase-shaded</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.2.0-SNAPSHOT</version>
+        <version>1.2.0</version>
         <relativePath>..</relativePath>
     </parent>
     <artifactId>hbase-shaded-server</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-shaded/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shaded/pom.xml b/hbase-shaded/pom.xml
index 135dfd4..486e4f0 100644
--- a/hbase-shaded/pom.xml
+++ b/hbase-shaded/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.2.0-SNAPSHOT</version>
+        <version>1.2.0</version>
         <relativePath>..</relativePath>
     </parent>
     <artifactId>hbase-shaded</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-shell/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shell/pom.xml b/hbase-shell/pom.xml
index cb0b858..921b2b7 100644
--- a/hbase-shell/pom.xml
+++ b/hbase-shell/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-shell</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-testing-util/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-testing-util/pom.xml b/hbase-testing-util/pom.xml
index ae9becd..01de6a4 100644
--- a/hbase-testing-util/pom.xml
+++ b/hbase-testing-util/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.2.0-SNAPSHOT</version>
+        <version>1.2.0</version>
         <relativePath>..</relativePath>
     </parent>
     <artifactId>hbase-testing-util</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/hbase-thrift/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-thrift/pom.xml b/hbase-thrift/pom.xml
index d774109..0d923ab 100644
--- a/hbase-thrift/pom.xml
+++ b/hbase-thrift/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.2.0-SNAPSHOT</version>
+    <version>1.2.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-thrift</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/19d6a295/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index fdd91d8..d04c831 100644
--- a/pom.xml
+++ b/pom.xml
@@ -39,7 +39,7 @@
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase</artifactId>
   <packaging>pom</packaging>
-  <version>1.2.0-SNAPSHOT</version>
+  <version>1.2.0</version>
   <name>Apache HBase</name>
   <description>
     Apache HBase™ is the Hadoop database. Use it when you need