Posted to commits@hbase.apache.org by an...@apache.org on 2016/10/26 20:07:33 UTC

[4/8] hbase git commit: HBASE-15347 updated asciidoc for 1.3

http://git-wip-us.apache.org/repos/asf/hbase/blob/6cb8a436/src/main/asciidoc/_chapters/external_apis.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/external_apis.adoc b/src/main/asciidoc/_chapters/external_apis.adoc
index 37156ca..556c4e0 100644
--- a/src/main/asciidoc/_chapters/external_apis.adoc
+++ b/src/main/asciidoc/_chapters/external_apis.adoc
@@ -27,32 +27,592 @@
 :icons: font
 :experimental:
 
-This chapter will cover access to Apache HBase either through non-Java languages, or through custom protocols.
-For information on using the native HBase APIs, refer to link:http://hbase.apache.org/apidocs/index.html[User API Reference] and the new <<hbase_apis,HBase APIs>> chapter.
+This chapter will cover access to Apache HBase either through non-Java languages or
+through custom protocols. For information on using the native HBase APIs, refer to
+link:http://hbase.apache.org/apidocs/index.html[User API Reference] and the
+<<hbase_apis,HBase APIs>> chapter.
 
-[[nonjava.jvm]]
-== Non-Java Languages Talking to the JVM
+== REST
 
-Currently the documentation on this topic is in the link:http://wiki.apache.org/hadoop/Hbase[Apache HBase Wiki].
-See also the link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/thrift/package-summary.html#package_description[Thrift API Javadoc].
+Representational State Transfer (REST) was introduced in 2000 in the doctoral
+dissertation of Roy Fielding, one of the principal authors of the HTTP specification.
 
-== REST
+REST itself is out of the scope of this documentation, but in general, REST allows
+client-server interactions via an API that is tied to the URL itself. This section
+discusses how to configure and run the REST server included with HBase, which exposes
+HBase tables, rows, cells, and metadata as URL-specified resources.
+There is also a nice series of blogs on
+link:http://blog.cloudera.com/blog/2013/03/how-to-use-the-apache-hbase-rest-interface-part-1/[How-to: Use the Apache HBase REST Interface]
+by Jesse Anderson.
 
-Currently most of the documentation on REST exists in the link:http://wiki.apache.org/hadoop/Hbase/Stargate[Apache HBase Wiki on REST] (The REST gateway used to be called 'Stargate').  There are also a nice set of blogs on link:http://blog.cloudera.com/blog/2013/03/how-to-use-the-apache-hbase-rest-interface-part-1/[How-to: Use the Apache HBase REST Interface] by Jesse Anderson.
+=== Starting and Stopping the REST Server
 
-To run your REST server under SSL, set `hbase.rest.ssl.enabled` to `true` and also set the following configs when you launch the REST server: (See example commands in <<jmx_config,JMX config>>)
+The included REST server can run as a daemon which starts an embedded Jetty
+servlet container and deploys the servlet into it. Use one of the following commands
+to start the REST server in the foreground or background. The port is optional, and
+defaults to 8080.
 
-[source]
+[source, bash]
 ----
-hbase.rest.ssl.keystore.store
-hbase.rest.ssl.keystore.password
-hbase.rest.ssl.keystore.keypassword
+# Foreground
+$ bin/hbase rest start -p <port>
+
+# Background, logging to a file in $HBASE_LOG_DIR
+$ bin/hbase-daemon.sh start rest -p <port>
 ----
 
-HBase ships a simple REST client, see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/rest/client/package-summary.html[REST client] package for details.
-To enable SSL support for it, please also import your certificate into local java cacerts keystore:
+To stop the REST server, use Ctrl-C if you were running it in the foreground, or the
+following command if you were running it in the background.
+
+[source, bash]
 ----
-keytool -import -trustcacerts -file /home/user/restserver.cert -keystore $JAVA_HOME/jre/lib/security/cacerts
+$ bin/hbase-daemon.sh stop rest
+----
+
+=== Configuring the REST Server and Client
+
+For information about configuring the REST server and client for SSL, as well as `doAs`
+impersonation for the REST server, see <<security.gateway.thrift>> and other portions
+of the <<security>> chapter.
+
+=== Using REST Endpoints
+
+The following examples use the placeholder server pass:[http://example.com:8000], and
+the commands can all be run using `curl` or `wget`. You can request plain text (the
+default), XML, JSON, or protocol buffer output by adding no header for plain text,
+the header "Accept: text/xml" for XML, "Accept: application/json" for JSON, or
+"Accept: application/x-protobuf" for protocol buffers.
+
+NOTE: Unless specified, use `GET` requests for queries, `PUT` or `POST` requests for
+creation or mutation, and `DELETE` for deletion.
+
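+Any HTTP client library can perform the same content negotiation. The following
+minimal Java sketch (an illustration only; it uses the same placeholder host as the
+`curl` examples in this section) requests the cluster version as JSON by setting the
+`Accept` header.
+
+[source, java]
+----
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.net.HttpURLConnection;
+import java.net.URL;
+
+public class RestVersionClient {
+  public static void main(String[] args) throws Exception {
+    // Same placeholder gateway host and port as the curl examples.
+    URL url = new URL("http://example.com:8000/version/cluster");
+    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+    conn.setRequestMethod("GET");
+    // Request JSON instead of the plain-text default.
+    conn.setRequestProperty("Accept", "application/json");
+
+    BufferedReader reader =
+        new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"));
+    try {
+      String line;
+      while ((line = reader.readLine()) != null) {
+        System.out.println(line);
+      }
+    } finally {
+      reader.close();
+      conn.disconnect();
+    }
+  }
+}
+----
+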
+.Cluster-Wide Endpoints
+[options="header", cols="2m,m,3d,6l"]
+|===
+|Endpoint
+|HTTP Verb
+|Description
+|Example
+
+|/version/cluster
+|GET
+|Version of HBase running on this cluster
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/version/cluster"
+
+|/status/cluster
+|GET
+|Cluster status
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/status/cluster"
+
+|/
+|GET
+|List of all non-system tables
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/"
+
+|===
+
+.Namespace Endpoints
+[options="header", cols="2m,m,3d,6l"]
+|===
+|Endpoint
+|HTTP Verb
+|Description
+|Example
+
+|/namespaces
+|GET
+|List all namespaces
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/namespaces/"
+
+|/namespaces/_namespace_
+|GET
+|Describe a specific namespace
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/namespaces/special_ns"
+
+|/namespaces/_namespace_
+|POST
+|Create a new namespace
+|curl -vi -X POST \
+  -H "Accept: text/xml" \
+  "example.com:8000/namespaces/special_ns"
+
+|/namespaces/_namespace_/tables
+|GET
+|List all tables in a specific namespace
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/namespaces/special_ns/tables"
+
+|/namespaces/_namespace_
+|PUT
+|Alter an existing namespace. Currently not used.
+|curl -vi -X PUT \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/namespaces/special_ns
+
+|/namespaces/_namespace_
+|DELETE
+|Delete a namespace. The namespace must be empty.
+|curl -vi -X DELETE \
+  -H "Accept: text/xml" \
+  "example.com:8000/namespaces/special_ns"
+
+|===
+
+.Table Endpoints
+[options="header", cols="2m,m,3d,6l"]
+|===
+|Endpoint
+|HTTP Verb
+|Description
+|Example
+
+|/_table_/schema
+|GET
+|Describe the schema of the specified table.
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/users/schema"
+
+|/_table_/schema
+|POST
+|Create a new table, or replace an existing table's schema
+|curl -vi -X POST \
+  -H "Accept: text/xml" \
+  -H "Content-Type: text/xml" \
+  -d '<?xml version="1.0" encoding="UTF-8"?><TableSchema name="users"><ColumnSchema name="cf" /></TableSchema>' \
+  "http://example.com:8000/users/schema"
+
+|/_table_/schema
+|PUT
+|Update an existing table with the provided schema fragment
+|curl -vi -X PUT \
+  -H "Accept: text/xml" \
+  -H "Content-Type: text/xml" \
+  -d '<?xml version="1.0" encoding="UTF-8"?><TableSchema name="users"><ColumnSchema name="cf" KEEP_DELETED_CELLS="true" /></TableSchema>' \
+  "http://example.com:8000/users/schema"
+
+|/_table_/schema
+|DELETE
+|Delete the table. You must use the `/_table_/schema` endpoint, not just `/_table_/`.
+|curl -vi -X DELETE \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/users/schema"
+
+|/_table_/regions
+|GET
+|List the table regions
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/users/regions
+|===
+
+.Endpoints for `Get` Operations
+[options="header", cols="2m,m,3d,6l"]
+|===
+|Endpoint
+|HTTP Verb
+|Description
+|Example
+
+|/_table_/_row_/_column:qualifier_/_timestamp_
+|GET
+|Get the value of a single row. Values are Base-64 encoded.
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/users/row1"
+
+curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/users/row1/cf:a/1458586888395"
+
+|/_table_/_row_/_column:qualifier_
+|GET
+|Get the value of a single column. Values are Base-64 encoded.
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/users/row1/cf:a"
+
+curl -vi -X GET \
+  -H "Accept: text/xml" \
+   "http://example.com:8000/users/row1/cf:a/"
+
+|/_table_/_row_/_column:qualifier_/?v=_number_of_versions_
+|GET
+|Multi-Get a specified number of versions of a given cell. Values are Base-64 encoded.
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/users/row1/cf:a?v=2"
+
+|===
+
+.Endpoints for `Scan` Operations
+[options="header", cols="2m,m,3d,6l"]
+|===
+|Endpoint
+|HTTP Verb
+|Description
+|Example
+
+|/_table_/scanner/
+|PUT
+|Get a Scanner object. Required by all other Scan operations. Adjust the batch parameter
+to the number of rows the scan should return in a batch. See the next example for
+adding filters to your scanner. The scanner endpoint URL is returned as the `Location`
+in the HTTP response. The other examples in this table assume that the scanner endpoint
+is `\http://example.com:8000/users/scanner/145869072824375522207`.
+|curl -vi -X PUT \
+  -H "Accept: text/xml" \
+  -H "Content-Type: text/xml" \
+  -d '<Scanner batch="1"/>' \
+  "http://example.com:8000/users/scanner/"
+
+|/_table_/scanner/
+|PUT
+|To supply filters to the Scanner object or configure the
+Scanner in any other way, you can create a text file and add
+your filter to the file. For example, to return only rows for
+which keys start with `u123` and use a batch size
+of 100, the filter file would look like this:
+
++++
+<pre>
+&lt;Scanner batch="100"&gt;
+  &lt;filter&gt;
+    {
+      "type": "PrefixFilter",
+      "value": "u123"
+    }
+  &lt;/filter&gt;
+&lt;/Scanner&gt;
+</pre>
++++
+
+Pass the file to the `-d` argument of the `curl` request.
+|curl -vi -X PUT \
+  -H "Accept: text/xml" \
+  -H "Content-Type:text/xml" \
+  -d @filter.txt \
+  "http://example.com:8000/users/scanner/"
+
+|/_table_/scanner/_scanner-id_
+|GET
+|Get the next batch from the scanner. Cell values are byte-encoded. If the scanner
+has been exhausted, HTTP status `204` is returned.
+|curl -vi -X GET \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/users/scanner/145869072824375522207"
+
+|/_table_/scanner/_scanner-id_
+|DELETE
+|Deletes the scanner and frees the resources it used.
+|curl -vi -X DELETE \
+  -H "Accept: text/xml" \
+  "http://example.com:8000/users/scanner/145869072824375522207"
+
+|===
+
+.Endpoints for `Put` Operations
+[options="header", cols="2m,m,3d,6l"]
+|===
+|Endpoint
+|HTTP Verb
+|Description
+|Example
+
+|/_table_/_row_key_
+|PUT
+|Write a row to a table. The row, column qualifier, and value must each be Base-64
+encoded. To encode a string, use the `base64` command-line utility. To decode the
+string, use `base64 -d`. The payload is in the `--data` argument, and the `/users/fakerow`
+value is a placeholder. Insert multiple rows by adding them to the `<CellSet>`
+element. You can also save the data to be inserted to a file and pass it to the `-d`
+parameter with syntax like `-d @filename.txt`.
+|curl -vi -X PUT \
+  -H "Accept: text/xml" \
+  -H "Content-Type: text/xml" \
+  -d '<?xml version="1.0" encoding="UTF-8" standalone="yes"?><CellSet><Row key="cm93NQo="><Cell column="Y2Y6ZQo=">dmFsdWU1Cg==</Cell></Row></CellSet>' \
+  "http://example.com:8000/users/fakerow"
+
+curl -vi -X PUT \
+  -H "Accept: application/json" \
+  -H "Content-Type: application/json" \
+  -d '{"Row":[{"key":"cm93NQo=", "Cell": [{"column":"Y2Y6ZQo=", "$":"dmFsdWU1Cg=="}]}]}' \
+  "http://example.com:8000/users/fakerow"
+
+|===
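+
+Rather than constructing requests by hand, a Java application can use the simple
+REST client that ships with HBase in the `org.apache.hadoop.hbase.rest.client`
+package. The sketch below is an illustration only: it assumes a REST gateway running
+on the placeholder host and a `users` table with a `cf` column family, as in the
+examples above. `RemoteHTable` speaks to the gateway over HTTP and handles the
+Base-64 encoding of keys and values for you.
+
+[source, java]
+----
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.RemoteHTable;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class RestGatewayExample {
+  public static void main(String[] args) throws Exception {
+    // Point the client at the REST gateway (placeholder host and port).
+    Cluster cluster = new Cluster();
+    cluster.add("example.com", 8000);
+    Client client = new Client(cluster);
+
+    // RemoteHTable offers the familiar Get/Put API over the REST gateway.
+    RemoteHTable table = new RemoteHTable(client, "users");
+    try {
+      Put put = new Put(Bytes.toBytes("row1"));
+      put.add(Bytes.toBytes("cf"), Bytes.toBytes("a"), Bytes.toBytes("value1"));
+      table.put(put);
+
+      Result result = table.get(new Get(Bytes.toBytes("row1")));
+      System.out.println(Bytes.toString(
+          result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("a"))));
+    } finally {
+      table.close();
+    }
+  }
+}
+----
+
+Because `RemoteHTable` implements the standard table interface, the same `Get`,
+`Put`, and `Scan` objects used with the native client work against the gateway.
+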
+[[xml_schema]]
+=== REST XML Schema
+
+[source,xml]
+----
+<schema xmlns="http://www.w3.org/2001/XMLSchema" xmlns:tns="RESTSchema">
+
+  <element name="Version" type="tns:Version"></element>
+
+  <complexType name="Version">
+    <attribute name="REST" type="string"></attribute>
+    <attribute name="JVM" type="string"></attribute>
+    <attribute name="OS" type="string"></attribute>
+    <attribute name="Server" type="string"></attribute>
+    <attribute name="Jersey" type="string"></attribute>
+  </complexType>
+
+  <element name="TableList" type="tns:TableList"></element>
+
+  <complexType name="TableList">
+    <sequence>
+      <element name="table" type="tns:Table" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+  </complexType>
+
+  <complexType name="Table">
+    <sequence>
+      <element name="name" type="string"></element>
+    </sequence>
+  </complexType>
+
+  <element name="TableInfo" type="tns:TableInfo"></element>
+
+  <complexType name="TableInfo">
+    <sequence>
+      <element name="region" type="tns:TableRegion" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+    <attribute name="name" type="string"></attribute>
+  </complexType>
+
+  <complexType name="TableRegion">
+    <attribute name="name" type="string"></attribute>
+    <attribute name="id" type="int"></attribute>
+    <attribute name="startKey" type="base64Binary"></attribute>
+    <attribute name="endKey" type="base64Binary"></attribute>
+    <attribute name="location" type="string"></attribute>
+  </complexType>
+
+  <element name="TableSchema" type="tns:TableSchema"></element>
+
+  <complexType name="TableSchema">
+    <sequence>
+      <element name="column" type="tns:ColumnSchema" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+    <attribute name="name" type="string"></attribute>
+    <anyAttribute></anyAttribute>
+  </complexType>
+
+  <complexType name="ColumnSchema">
+    <attribute name="name" type="string"></attribute>
+    <anyAttribute></anyAttribute>
+  </complexType>
+
+  <element name="CellSet" type="tns:CellSet"></element>
+
+  <complexType name="CellSet">
+    <sequence>
+      <element name="row" type="tns:Row" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+  </complexType>
+
+  <element name="Row" type="tns:Row"></element>
+
+  <complexType name="Row">
+    <sequence>
+      <element name="key" type="base64Binary"></element>
+      <element name="cell" type="tns:Cell" maxOccurs="unbounded" minOccurs="1"></element>
+    </sequence>
+  </complexType>
+
+  <element name="Cell" type="tns:Cell"></element>
+
+  <complexType name="Cell">
+    <sequence>
+      <element name="value" maxOccurs="1" minOccurs="1">
+        <simpleType><restriction base="base64Binary"></restriction></simpleType>
+      </element>
+    </sequence>
+    <attribute name="column" type="base64Binary" />
+    <attribute name="timestamp" type="int" />
+  </complexType>
+
+  <element name="Scanner" type="tns:Scanner"></element>
+
+  <complexType name="Scanner">
+    <sequence>
+      <element name="column" type="base64Binary" minOccurs="0" maxOccurs="unbounded"></element>
+    </sequence>
+    <sequence>
+      <element name="filter" type="string" minOccurs="0" maxOccurs="1"></element>
+    </sequence>
+    <attribute name="startRow" type="base64Binary"></attribute>
+    <attribute name="endRow" type="base64Binary"></attribute>
+    <attribute name="batch" type="int"></attribute>
+    <attribute name="startTime" type="int"></attribute>
+    <attribute name="endTime" type="int"></attribute>
+  </complexType>
+
+  <element name="StorageClusterVersion" type="tns:StorageClusterVersion" />
+
+  <complexType name="StorageClusterVersion">
+    <attribute name="version" type="string"></attribute>
+  </complexType>
+
+  <element name="StorageClusterStatus"
+    type="tns:StorageClusterStatus">
+  </element>
+
+  <complexType name="StorageClusterStatus">
+    <sequence>
+      <element name="liveNode" type="tns:Node"
+        maxOccurs="unbounded" minOccurs="0">
+      </element>
+      <element name="deadNode" type="string" maxOccurs="unbounded"
+        minOccurs="0">
+      </element>
+    </sequence>
+    <attribute name="regions" type="int"></attribute>
+    <attribute name="requests" type="int"></attribute>
+    <attribute name="averageLoad" type="float"></attribute>
+  </complexType>
+
+  <complexType name="Node">
+    <sequence>
+      <element name="region" type="tns:Region"
+   maxOccurs="unbounded" minOccurs="0">
+      </element>
+    </sequence>
+    <attribute name="name" type="string"></attribute>
+    <attribute name="startCode" type="int"></attribute>
+    <attribute name="requests" type="int"></attribute>
+    <attribute name="heapSizeMB" type="int"></attribute>
+    <attribute name="maxHeapSizeMB" type="int"></attribute>
+  </complexType>
+
+  <complexType name="Region">
+    <attribute name="name" type="base64Binary"></attribute>
+    <attribute name="stores" type="int"></attribute>
+    <attribute name="storefiles" type="int"></attribute>
+    <attribute name="storefileSizeMB" type="int"></attribute>
+    <attribute name="memstoreSizeMB" type="int"></attribute>
+    <attribute name="storefileIndexSizeMB" type="int"></attribute>
+  </complexType>
+
+</schema>
+----
+
+[[protobufs_schema]]
+=== REST Protobufs Schema
+
+[source,protobuf]
+----
+message Version {
+  optional string restVersion = 1;
+  optional string jvmVersion = 2;
+  optional string osVersion = 3;
+  optional string serverVersion = 4;
+  optional string jerseyVersion = 5;
+}
+
+message StorageClusterStatus {
+  message Region {
+    required bytes name = 1;
+    optional int32 stores = 2;
+    optional int32 storefiles = 3;
+    optional int32 storefileSizeMB = 4;
+    optional int32 memstoreSizeMB = 5;
+    optional int32 storefileIndexSizeMB = 6;
+  }
+  message Node {
+    required string name = 1;    // name:port
+    optional int64 startCode = 2;
+    optional int32 requests = 3;
+    optional int32 heapSizeMB = 4;
+    optional int32 maxHeapSizeMB = 5;
+    repeated Region regions = 6;
+  }
+  // node status
+  repeated Node liveNodes = 1;
+  repeated string deadNodes = 2;
+  // summary statistics
+  optional int32 regions = 3;
+  optional int32 requests = 4;
+  optional double averageLoad = 5;
+}
+
+message TableList {
+  repeated string name = 1;
+}
+
+message TableInfo {
+  required string name = 1;
+  message Region {
+    required string name = 1;
+    optional bytes startKey = 2;
+    optional bytes endKey = 3;
+    optional int64 id = 4;
+    optional string location = 5;
+  }
+  repeated Region regions = 2;
+}
+
+message TableSchema {
+  optional string name = 1;
+  message Attribute {
+    required string name = 1;
+    required string value = 2;
+  }
+  repeated Attribute attrs = 2;
+  repeated ColumnSchema columns = 3;
+  // optional helpful encodings of commonly used attributes
+  optional bool inMemory = 4;
+  optional bool readOnly = 5;
+}
+
+message ColumnSchema {
+  optional string name = 1;
+  message Attribute {
+    required string name = 1;
+    required string value = 2;
+  }
+  repeated Attribute attrs = 2;
+  // optional helpful encodings of commonly used attributes
+  optional int32 ttl = 3;
+  optional int32 maxVersions = 4;
+  optional string compression = 5;
+}
+
+message Cell {
+  optional bytes row = 1;       // unused if Cell is in a CellSet
+  optional bytes column = 2;
+  optional int64 timestamp = 3;
+  optional bytes data = 4;
+}
+
+message CellSet {
+  message Row {
+    required bytes key = 1;
+    repeated Cell values = 2;
+  }
+  repeated Row rows = 1;
+}
+
+message Scanner {
+  optional bytes startRow = 1;
+  optional bytes endRow = 2;
+  repeated bytes columns = 3;
+  optional int32 batch = 4;
+  optional int64 startTime = 5;
+  optional int64 endTime = 6;
+}
 ----
 
 == Thrift
@@ -64,3 +624,331 @@ Documentation about Thrift has moved to <<thrift>>.
 
 FB's Chip Turner wrote a pure C/C++ client.
 link:https://github.com/facebook/native-cpp-hbase-client[Check it out].
+
+[[jdo]]
+== Using Java Data Objects (JDO) with HBase
+
+link:https://db.apache.org/jdo/[Java Data Objects (JDO)] is a standard way to
+access persistent data in databases, using plain old Java objects (POJO) to
+represent persistent data.
+
+.Dependencies
+This code example has the following dependencies:
+
+. HBase 0.90.x or newer
+. commons-beanutils.jar (http://commons.apache.org/)
+. commons-pool-1.5.5.jar (http://commons.apache.org/)
+. transactional-tableindexed for HBase 0.90 (https://github.com/hbase-trx/hbase-transactional-tableindexed)
+
+.Download `hbase-jdo`
+Download the code from http://code.google.com/p/hbase-jdo/.
+
+.JDO Example
+====
+
+This example uses JDO to create a table and an index, insert a row into a table, get
+a row, get a column value, perform a query, and do some additional HBase operations.
+
+[source, java]
+----
+package com.apache.hadoop.hbase.client.jdo.examples;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.InputStream;
+import java.util.Hashtable;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.client.tableindexed.IndexedTable;
+
+import com.apache.hadoop.hbase.client.jdo.AbstractHBaseDBO;
+import com.apache.hadoop.hbase.client.jdo.HBaseBigFile;
+import com.apache.hadoop.hbase.client.jdo.HBaseDBOImpl;
+import com.apache.hadoop.hbase.client.jdo.query.DeleteQuery;
+import com.apache.hadoop.hbase.client.jdo.query.HBaseOrder;
+import com.apache.hadoop.hbase.client.jdo.query.HBaseParam;
+import com.apache.hadoop.hbase.client.jdo.query.InsertQuery;
+import com.apache.hadoop.hbase.client.jdo.query.QSearch;
+import com.apache.hadoop.hbase.client.jdo.query.SelectQuery;
+import com.apache.hadoop.hbase.client.jdo.query.UpdateQuery;
+
+/**
+ * Hbase JDO Example.
+ *
+ * dependency library.
+ * - commons-beanutils.jar
+ * - commons-pool-1.5.5.jar
+ * - hbase0.90.0-transactionl.jar
+ *
+ * You can extend the Delete, Select, Update, and Insert query classes.
+ *
+ */
+public class HBaseExample {
+  public static void main(String[] args) throws Exception {
+    AbstractHBaseDBO dbo = new HBaseDBOImpl();
+
+    // Drop the table if it already exists.
+    if(dbo.isTableExist("user")){
+     dbo.deleteTable("user");
+    }
+
+    //*create table*
+    dbo.createTableIfNotExist("user",HBaseOrder.DESC,"account");
+    //dbo.createTableIfNotExist("user",HBaseOrder.ASC,"account");
+
+    //create index.
+    String[] cols={"id","name"};
+    dbo.addIndexExistingTable("user","account",cols);
+
+    //insert
+    InsertQuery insert = dbo.createInsertQuery("user");
+    UserBean bean = new UserBean();
+    bean.setFamily("account");
+    bean.setAge(20);
+    bean.setEmail("ncanis@gmail.com");
+    bean.setId("ncanis");
+    bean.setName("ncanis");
+    bean.setPassword("1111");
+    insert.insert(bean);
+
+    //select 1 row
+    SelectQuery select = dbo.createSelectQuery("user");
+    UserBean resultBean = (UserBean)select.select(bean.getRow(),UserBean.class);
+
+    // select column value.
+    String value = (String)select.selectColumn(bean.getRow(),"account","id",String.class);
+
+    // search with option (QSearch has EQUAL, NOT_EQUAL, LIKE)
+    // select id,password,name,email from account where id='ncanis' limit startRow,20
+    HBaseParam param = new HBaseParam();
+    param.setPage(bean.getRow(),20);
+    param.addColumn("id","password","name","email");
+    param.addSearchOption("id","ncanis",QSearch.EQUAL);
+    select.search("account", param, UserBean.class);
+
+    // check whether the column value exists.
+    boolean isExist = select.existColumnValue("account","id","ncanis".getBytes());
+
+    // update password.
+    UpdateQuery update = dbo.createUpdateQuery("user");
+    Hashtable<String, byte[]> colsTable = new Hashtable<String, byte[]>();
+    colsTable.put("password","2222".getBytes());
+    update.update(bean.getRow(),"account",colsTable);
+
+    //delete
+    DeleteQuery delete = dbo.createDeleteQuery("user");
+    delete.deleteRow(resultBean.getRow());
+
+    ////////////////////////////////////
+    // etc
+
+    // HTable pool with apache commons pool
+    // borrow and release. HBasePoolManager(maxActive, minIdle etc..)
+    IndexedTable table = dbo.getPool().borrow("user");
+    dbo.getPool().release(table);
+
+    // upload bigFile by hadoop directly.
+    HBaseBigFile bigFile = new HBaseBigFile();
+    File file = new File("doc/movie.avi");
+    FileInputStream fis = new FileInputStream(file);
+    Path rootPath = new Path("/files/");
+    String filename = "movie.avi";
+    bigFile.uploadFile(rootPath,filename,fis,true);
+
+    // receive file stream from hadoop.
+    Path p = new Path(rootPath,filename);
+    InputStream is = bigFile.path2Stream(p,4096);
+
+  }
+}
+----
+====
+
+[[scala]]
+== Scala
+
+=== Setting the Classpath
+
+To use Scala with HBase, your CLASSPATH must include HBase's classpath as well as
+the Scala JARs required by your code. First, use the following command on a server
+running the HBase RegionServer process to get HBase's classpath.
+
+[source, bash]
+----
+$ ps aux |grep regionserver| awk -F 'java.library.path=' {'print $2'} | awk {'print $1'}
+
+/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64
+----
+
+Set the `$CLASSPATH` environment variable to include the path you found in the previous
+step, plus the path of `scala-library.jar` and each additional Scala-related JAR needed for
+your project.
+
+[source, bash]
+----
+$ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64:/path/to/scala-library.jar
+----
+
+=== Scala SBT File
+
+Your `build.sbt` file needs the following `resolvers` and `libraryDependencies` to work
+with HBase.
+
+----
+resolvers += "Apache HBase" at "https://repository.apache.org/content/repositories/releases"
+
+resolvers += "Thrift" at "http://people.apache.org/~rawson/repo/"
+
+libraryDependencies ++= Seq(
+    "org.apache.hadoop" % "hadoop-core" % "0.20.2",
+    "org.apache.hbase" % "hbase" % "0.90.4"
+)
+----
+
+=== Example Scala Code
+
+This example lists HBase tables, adds a row to an existing table, and reads the row back.
+
+[source, scala]
+----
+import org.apache.hadoop.hbase.HBaseConfiguration
+import org.apache.hadoop.hbase.client.{Connection,ConnectionFactory,HBaseAdmin,HTable,Put,Get}
+import org.apache.hadoop.hbase.util.Bytes
+
+
+val conf = new HBaseConfiguration()
+val connection = ConnectionFactory.createConnection(conf);
+val admin = connection.getAdmin();
+
+// list the tables
+val listtables=admin.listTables()
+listtables.foreach(println)
+
+// let's insert some data in 'mytable' and get the row
+
+val table = new HTable(conf, "mytable")
+
+val theput= new Put(Bytes.toBytes("rowkey1"))
+
+theput.add(Bytes.toBytes("ids"),Bytes.toBytes("id1"),Bytes.toBytes("one"))
+table.put(theput)
+
+val theget= new Get(Bytes.toBytes("rowkey1"))
+val result=table.get(theget)
+val value=result.value()
+println(Bytes.toString(value))
+----
+
+[[jython]]
+== Jython
+
+
+=== Setting the Classpath
+
+To use Jython with HBase, your CLASSPATH must include HBase's classpath as well as
+the Jython JARs required by your code. First, use the following command on a server
+running the HBase RegionServer process to get HBase's classpath.
+
+[source, bash]
+----
+$ ps aux |grep regionserver| awk -F 'java.library.path=' {'print $2'} | awk {'print $1'}
+
+/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64
+----
+
+Set the `$CLASSPATH` environment variable to include the path you found in the previous
+step, plus the path to `jython.jar` and each additional Jython-related JAR needed for
+your project.
+
+[source, bash]
+----
+$ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64:/path/to/jython.jar
+----
+
+Start a Jython shell with HBase and Hadoop JARs in the classpath:
+
+[source, bash]
+----
+$ bin/hbase org.python.util.jython
+----
+
+=== Jython Code Examples
+
+.Table Creation, Population, Get, and Delete with Jython
+====
+The following Jython code example creates a table, populates it with data, fetches
+the data, and deletes the table.
+
+[source,jython]
+----
+import java.lang
+from org.apache.hadoop.hbase import HBaseConfiguration, HTableDescriptor, HColumnDescriptor, TableName
+from org.apache.hadoop.hbase.client import HBaseAdmin, HTable, Get, Put
+from org.apache.hadoop.hbase.util import Bytes
+
+# First get a conf object.  This will read in the configuration
+# that is out in your hbase-*.xml files such as location of the
+# hbase master node.
+conf = HBaseConfiguration()
+
+# Create a table named 'test' that has two column families,
+# one named 'content' and the other 'anchor'.
+tablename = TableName.valueOf("test")
+
+desc = HTableDescriptor(tablename)
+desc.addFamily(HColumnDescriptor("content"))
+desc.addFamily(HColumnDescriptor("anchor"))
+admin = HBaseAdmin(conf)
+
+# Drop and recreate if it exists
+if admin.tableExists(tablename):
+    admin.disableTable(tablename)
+    admin.deleteTable(tablename)
+admin.createTable(desc)
+
+tables = admin.listTables()
+table = HTable(conf, tablename)
+
+# Add content to the 'content' family on a row named 'row_x'
+row = 'row_x'
+update = Put(Bytes.toBytes(row))
+update.add(Bytes.toBytes('content'), Bytes.toBytes(''), Bytes.toBytes('some content'))
+table.put(update)
+
+# Now fetch the content just added, returns a byte[]
+result = table.get(Get(Bytes.toBytes(row)))
+data = java.lang.String(result.getValue(Bytes.toBytes('content'), Bytes.toBytes('')), "UTF8")
+
+print "The fetched row contains the value '%s'" % data
+
+# Delete the table.
+admin.disableTable(tablename)
+admin.deleteTable(tablename)
+----
+====
+
+.Table Scan Using Jython
+====
+This example scans a table and prints the row key and first cell value for each row in a given column family.
+
+[source, jython]
+----
+# Print all rows in a particular column family,
+# along with the first value stored in each row
+
+import java.lang
+
+from org.apache.hadoop.hbase import HBaseConfiguration
+from org.apache.hadoop.hbase.client import HTable, Scan
+from org.apache.hadoop.hbase.util import Bytes
+
+conf = HBaseConfiguration()
+
+table = HTable(conf, "wiki")
+
+# Scan only the 'title' column family
+scan = Scan()
+scan.addFamily(Bytes.toBytes("title"))
+
+scanner = table.getScanner(scan)
+while 1:
+    result = scanner.next()
+    if not result:
+        break
+    print java.lang.String(result.row), java.lang.String(result.value())
+----
+====

http://git-wip-us.apache.org/repos/asf/hbase/blob/6cb8a436/src/main/asciidoc/_chapters/faq.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/faq.adoc b/src/main/asciidoc/_chapters/faq.adoc
index 22e4ad3..7bffe0e 100644
--- a/src/main/asciidoc/_chapters/faq.adoc
+++ b/src/main/asciidoc/_chapters/faq.adoc
@@ -46,7 +46,7 @@ What is the history of HBase?::
 
 === Upgrading
 How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+?::
-  In HBase 0.96, the project moved to a modular structure. Adjust your project's dependencies to rely upon the `hbase-client` module or another module as appropriate, rather than a single JAR. You can model your Maven depency after one of the following, depending on your targeted version of HBase. See Section 3.5, \u201cUpgrading from 0.94.x to 0.96.x\u201d or Section 3.3, \u201cUpgrading from 0.96.x to 0.98.x\u201d for more information.
+  In HBase 0.96, the project moved to a modular structure. Adjust your project's dependencies to rely upon the `hbase-client` module or another module as appropriate, rather than a single JAR. You can model your Maven dependency after one of the following, depending on your targeted version of HBase. See Section 3.5, \u201cUpgrading from 0.94.x to 0.96.x\u201d or Section 3.3, \u201cUpgrading from 0.96.x to 0.98.x\u201d for more information.
 +
 .Maven Dependency for HBase 0.98
 [source,xml]
@@ -55,18 +55,18 @@ How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+?::
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase-client</artifactId>
   <version>0.98.5-hadoop2</version>
-</dependency>  
-----              
-+    
-.Maven Dependency for HBase 0.96       
+</dependency>
+----
++
+.Maven Dependency for HBase 0.96
 [source,xml]
 ----
 <dependency>
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase-client</artifactId>
   <version>0.96.2-hadoop2</version>
-</dependency>  
-----           
+</dependency>
+----
 +
 .Maven Dependency for HBase 0.94
 [source,xml]
@@ -75,9 +75,9 @@ How do I upgrade Maven-managed projects from HBase 0.94 to HBase 0.96+?::
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase</artifactId>
   <version>0.94.3</version>
-</dependency>   
-----         
-                
+</dependency>
+----
+
 
 === Architecture
 How does HBase handle Region-RegionServer assignment and locality?::
@@ -91,7 +91,7 @@ Where can I learn about the rest of the configuration options?::
   See <<configuration>>.
 
 === Schema Design / Data Access
-  
+
 How should I design my schema in HBase?::
   See <<datamodel>> and <<schema>>.
 
@@ -105,7 +105,7 @@ Can I change a table's rowkeys?::
   This is a very common question. You can't. See <<changing.rowkeys>>.
 
 What APIs does HBase support?::
-  See <<datamodel>>, <<architecture.client>>, and <<nonjava.jvm>>.
+  See <<datamodel>>, <<architecture.client>>, and <<external_apis>>.
 
 === MapReduce
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/6cb8a436/src/main/asciidoc/_chapters/getting_started.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc b/src/main/asciidoc/_chapters/getting_started.adoc
index 41674a0..26af568 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -19,6 +19,7 @@
  */
 ////
 
+[[getting_started]]
 = Getting Started
 :doctype: book
 :numbered:
@@ -57,7 +58,7 @@ Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1. U
 
 .Example /etc/hosts File for Ubuntu
 ====
-The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, on Ubuntu. Use this as a template if you run into trouble. 
+The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, on Ubuntu. Use this as a template if you run into trouble.
 [listing]
 ----
 127.0.0.1 localhost
@@ -80,15 +81,17 @@ See <<java,Java>> for information about supported JDK versions.
   This will take you to a mirror of _HBase
   Releases_.
   Click on the folder named _stable_ and then download the binary file that ends in _.tar.gz_ to your local filesystem.
-  Be sure to choose the version that corresponds with the version of Hadoop you are likely to use later.
-  In most cases, you should choose the file for Hadoop 2, which will be called something like _hbase-0.98.3-hadoop2-bin.tar.gz_.
+  For HBase versions prior to 1.x, be sure to choose the version that corresponds with the version of
+  Hadoop you are likely to use later (in most cases, you should choose the file for Hadoop 2, which
+  will be called something like _hbase-0.98.13-hadoop2-bin.tar.gz_).
   Do not download the file ending in _src.tar.gz_ for now.
 . Extract the downloaded file, and change to the newly-created directory.
 +
+[source,subs="attributes"]
 ----
 
-$ tar xzvf hbase-<?eval ${project.version}?>-hadoop2-bin.tar.gz
-$ cd hbase-<?eval ${project.version}?>-hadoop2/
+$ tar xzvf hbase-{Version}-bin.tar.gz
+$ cd hbase-{Version}/
 ----
 
 . For HBase 0.98.5 and later, you are required to set the `JAVA_HOME` environment variable before starting HBase.
@@ -286,7 +289,7 @@ $
 === Intermediate - Pseudo-Distributed Local Install
 
 After working your way through <<quickstart,quickstart>>, you can re-configure HBase to run in pseudo-distributed mode.
-Pseudo-distributed mode means that HBase still runs completely on a single host, but each HBase daemon (HMaster, HRegionServer, and Zookeeper) runs as a separate process.
+Pseudo-distributed mode means that HBase still runs completely on a single host, but each HBase daemon (HMaster, HRegionServer, and ZooKeeper) runs as a separate process.
 By default, unless you configure the `hbase.rootdir` property as described in <<quickstart,quickstart>>, your data is still stored in _/tmp/_.
 In this walk-through, we store your data in HDFS instead, assuming you have HDFS available.
 You can skip the HDFS configuration to continue storing your data in the local filesystem.
@@ -294,9 +297,11 @@ You can skip the HDFS configuration to continue storing your data in the local f
 .Hadoop Configuration
 [NOTE]
 ====
-This procedure assumes that you have configured Hadoop and HDFS on your local system and or a remote system, and that they are running and available.
-It also assumes you are using Hadoop 2.
-Currently, the documentation on the Hadoop website does not include a quick start for Hadoop 2, but the guide at link:http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide          is a good starting point.
+This procedure assumes that you have configured Hadoop and HDFS on your local system and/or a remote
+system, and that they are running and available. It also assumes you are using Hadoop 2.
+The guide on
+link:http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html[Setting up a Single Node Cluster]
+in the Hadoop documentation is a good starting point.
 ====
 
 
@@ -425,7 +430,7 @@ You can stop HBase the same way as in the <<quickstart,quickstart>> procedure, u
 
 In reality, you need a fully-distributed configuration to fully test HBase and to use it in real-world scenarios.
 In a distributed configuration, the cluster contains multiple nodes, each of which runs one or more HBase daemon.
-These include primary and backup Master instances, multiple Zookeeper nodes, and multiple RegionServer nodes.
+These include primary and backup Master instances, multiple ZooKeeper nodes, and multiple RegionServer nodes.
 
 This advanced quickstart adds two more nodes to your cluster.
 The architecture will be as follows:

http://git-wip-us.apache.org/repos/asf/hbase/blob/6cb8a436/src/main/asciidoc/_chapters/hbase-default.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase-default.adoc b/src/main/asciidoc/_chapters/hbase-default.adoc
index bf56dd3..60c0849 100644
--- a/src/main/asciidoc/_chapters/hbase-default.adoc
+++ b/src/main/asciidoc/_chapters/hbase-default.adoc
@@ -46,7 +46,7 @@ Temporary directory on the local filesystem.
 .Default
 `${java.io.tmpdir}/hbase-${user.name}`
 
-  
+
 [[hbase.rootdir]]
 *`hbase.rootdir`*::
 +
@@ -64,7 +64,7 @@ The directory shared by region servers and into
 .Default
 `${hbase.tmp.dir}/hbase`
 
-  
+
 [[hbase.cluster.distributed]]
 *`hbase.cluster.distributed`*::
 +
@@ -77,7 +77,7 @@ The mode the cluster will be in. Possible values are
 .Default
 `false`
 
-  
+
 [[hbase.zookeeper.quorum]]
 *`hbase.zookeeper.quorum`*::
 +
@@ -97,7 +97,7 @@ Comma separated list of servers in the ZooKeeper ensemble
 .Default
 `localhost`
 
-  
+
 [[hbase.local.dir]]
 *`hbase.local.dir`*::
 +
@@ -108,7 +108,7 @@ Directory on the local filesystem to be used
 .Default
 `${hbase.tmp.dir}/local/`
 
-  
+
 [[hbase.master.info.port]]
 *`hbase.master.info.port`*::
 +
@@ -119,18 +119,18 @@ The port for the HBase Master web UI.
 .Default
 `16010`
 
-  
+
 [[hbase.master.info.bindAddress]]
 *`hbase.master.info.bindAddress`*::
 +
 .Description
 The bind address for the HBase Master web UI
-    
+
 +
 .Default
 `0.0.0.0`
 
-  
+
 [[hbase.master.logcleaner.plugins]]
 *`hbase.master.logcleaner.plugins`*::
 +
@@ -145,7 +145,7 @@ A comma-separated list of BaseLogCleanerDelegate invoked by
 .Default
 `org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner`
 
-  
+
 [[hbase.master.logcleaner.ttl]]
 *`hbase.master.logcleaner.ttl`*::
 +
@@ -156,7 +156,7 @@ Maximum time a WAL can stay in the .oldlogdir directory,
 .Default
 `600000`
 
-  
+
 [[hbase.master.hfilecleaner.plugins]]
 *`hbase.master.hfilecleaner.plugins`*::
 +
@@ -172,18 +172,7 @@ A comma-separated list of BaseHFileCleanerDelegate invoked by
 .Default
 `org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner`
 
-  
-[[hbase.master.catalog.timeout]]
-*`hbase.master.catalog.timeout`*::
-+
-.Description
-Timeout value for the Catalog Janitor from the master to
-    META.
-+
-.Default
-`600000`
 
-  
 [[hbase.master.infoserver.redirect]]
 *`hbase.master.infoserver.redirect`*::
 +
@@ -195,7 +184,7 @@ Whether or not the Master listens to the Master web
 .Default
 `true`
 
-  
+
 [[hbase.regionserver.port]]
 *`hbase.regionserver.port`*::
 +
@@ -205,7 +194,7 @@ The port the HBase RegionServer binds to.
 .Default
 `16020`
 
-  
+
 [[hbase.regionserver.info.port]]
 *`hbase.regionserver.info.port`*::
 +
@@ -216,7 +205,7 @@ The port for the HBase RegionServer web UI
 .Default
 `16030`
 
-  
+
 [[hbase.regionserver.info.bindAddress]]
 *`hbase.regionserver.info.bindAddress`*::
 +
@@ -226,7 +215,7 @@ The address for the HBase RegionServer web UI
 .Default
 `0.0.0.0`
 
-  
+
 [[hbase.regionserver.info.port.auto]]
 *`hbase.regionserver.info.port.auto`*::
 +
@@ -239,7 +228,7 @@ Whether or not the Master or RegionServer
 .Default
 `false`
 
-  
+
 [[hbase.regionserver.handler.count]]
 *`hbase.regionserver.handler.count`*::
 +
@@ -250,7 +239,7 @@ Count of RPC Listener instances spun up on RegionServers.
 .Default
 `30`
 
-  
+
 [[hbase.ipc.server.callqueue.handler.factor]]
 *`hbase.ipc.server.callqueue.handler.factor`*::
 +
@@ -262,7 +251,7 @@ Factor to determine the number of call queues.
 .Default
 `0.1`
 
-  
+
 [[hbase.ipc.server.callqueue.read.ratio]]
 *`hbase.ipc.server.callqueue.read.ratio`*::
 +
@@ -287,12 +276,12 @@ Split the call queues into read and write queues.
       and 2 queues will contain only write requests.
       a read.ratio of 1 means that: 9 queues will contain only read requests
       and 1 queues will contain only write requests.
-    
+
 +
 .Default
 `0`
 
-  
+
 [[hbase.ipc.server.callqueue.scan.ratio]]
 *`hbase.ipc.server.callqueue.scan.ratio`*::
 +
@@ -313,12 +302,12 @@ Given the number of read call queues, calculated from the total number
       and 4 queues will contain only short-read requests.
       a scan.ratio of 0.8 means that: 6 queues will contain only long-read requests
       and 2 queues will contain only short-read requests.
-    
+
 +
 .Default
 `0`
 
-  
+
 [[hbase.regionserver.msginterval]]
 *`hbase.regionserver.msginterval`*::
 +
@@ -329,7 +318,7 @@ Interval between messages from the RegionServer to Master
 .Default
 `3000`
 
-  
+
 [[hbase.regionserver.regionSplitLimit]]
 *`hbase.regionserver.regionSplitLimit`*::
 +
@@ -342,7 +331,7 @@ Limit for the number of regions after which no more region
 .Default
 `2147483647`
 
-  
+
 [[hbase.regionserver.logroll.period]]
 *`hbase.regionserver.logroll.period`*::
 +
@@ -353,7 +342,7 @@ Period at which we will roll the commit log regardless
 .Default
 `3600000`
 
-  
+
 [[hbase.regionserver.logroll.errors.tolerated]]
 *`hbase.regionserver.logroll.errors.tolerated`*::
 +
@@ -367,7 +356,7 @@ The number of consecutive WAL close errors we will allow
 .Default
 `2`
 
-  
+
 [[hbase.regionserver.hlog.reader.impl]]
 *`hbase.regionserver.hlog.reader.impl`*::
 +
@@ -377,7 +366,7 @@ The WAL file reader implementation.
 .Default
 `org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader`
 
-  
+
 [[hbase.regionserver.hlog.writer.impl]]
 *`hbase.regionserver.hlog.writer.impl`*::
 +
@@ -387,7 +376,7 @@ The WAL file writer implementation.
 .Default
 `org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter`
 
-  
+
 [[hbase.master.distributed.log.replay]]
 *`hbase.master.distributed.log.replay`*::
 +
@@ -397,13 +386,13 @@ Enable 'distributed log replay' as default engine splitting
     back to the old mode 'distributed log splitter', set the value to
     'false'.  'Disributed log replay' improves MTTR because it does not
     write intermediate files.  'DLR' required that 'hfile.format.version'
-    be set to version 3 or higher. 
-    
+    be set to version 3 or higher.
+
 +
 .Default
 `true`
 
-  
+
 [[hbase.regionserver.global.memstore.size]]
 *`hbase.regionserver.global.memstore.size`*::
 +
@@ -416,20 +405,20 @@ Maximum size of all memstores in a region server before new
 .Default
 `0.4`
 
-  
+
 [[hbase.regionserver.global.memstore.size.lower.limit]]
 *`hbase.regionserver.global.memstore.size.lower.limit`*::
 +
 .Description
 Maximum size of all memstores in a region server before flushes are forced.
       Defaults to 95% of hbase.regionserver.global.memstore.size.
-      A 100% value for this value causes the minimum possible flushing to occur when updates are 
+      A 100% value for this value causes the minimum possible flushing to occur when updates are
       blocked due to memstore limiting.
 +
 .Default
 `0.95`
 
-  
+
 [[hbase.regionserver.optionalcacheflushinterval]]
 *`hbase.regionserver.optionalcacheflushinterval`*::
 +
@@ -441,17 +430,7 @@ Maximum size of all memstores in a region server before flushes are forced.
 .Default
 `3600000`
 
-  
-[[hbase.regionserver.catalog.timeout]]
-*`hbase.regionserver.catalog.timeout`*::
-+
-.Description
-Timeout value for the Catalog Janitor from the regionserver to META.
-+
-.Default
-`600000`
 
-  
 [[hbase.regionserver.dns.interface]]
 *`hbase.regionserver.dns.interface`*::
 +
@@ -462,7 +441,7 @@ The name of the Network Interface from which a region server
 .Default
 `default`
 
-  
+
 [[hbase.regionserver.dns.nameserver]]
 *`hbase.regionserver.dns.nameserver`*::
 +
@@ -474,7 +453,7 @@ The host name or IP address of the name server (DNS)
 .Default
 `default`
 
-  
+
 [[hbase.regionserver.region.split.policy]]
 *`hbase.regionserver.region.split.policy`*::
 +
@@ -483,12 +462,12 @@ The host name or IP address of the name server (DNS)
       A split policy determines when a region should be split. The various other split policies that
       are available currently are ConstantSizeRegionSplitPolicy, DisabledRegionSplitPolicy,
       DelimitedKeyPrefixRegionSplitPolicy, KeyPrefixRegionSplitPolicy etc.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy`
 
-  
+
 [[zookeeper.session.timeout]]
 *`zookeeper.session.timeout`*::
 +
@@ -497,17 +476,18 @@ ZooKeeper session timeout in milliseconds. It is used in two different ways.
       First, this value is used in the ZK client that HBase uses to connect to the ensemble.
       It is also used by HBase when it starts a ZK server and it is passed as the 'maxSessionTimeout'. See
       http://hadoop.apache.org/zookeeper/docs/current/zookeeperProgrammers.html#ch_zkSessions.
-      For example, if a HBase region server connects to a ZK ensemble that's also managed by HBase, then the
+      For example, if an HBase region server connects to a ZK ensemble that's also managed
+      by HBase, then the
       session timeout will be the one specified by this configuration. But, a region server that connects
       to an ensemble managed with a different configuration will be subjected that ensemble's maxSessionTimeout. So,
       even though HBase might propose using 90 seconds, the ensemble can have a max timeout lower than this and
       it will take precedence. The current default that ZK ships with is 40 seconds, which is lower than HBase's.
-    
+
 +
 .Default
 `90000`
 
-  
+
 [[zookeeper.znode.parent]]
 *`zookeeper.znode.parent`*::
 +
@@ -520,20 +500,7 @@ Root ZNode for HBase in ZooKeeper. All of HBase's ZooKeeper
 .Default
 `/hbase`
 
-  
-[[zookeeper.znode.rootserver]]
-*`zookeeper.znode.rootserver`*::
-+
-.Description
-Path to ZNode holding root region location. This is written by
-      the master and read by clients and region servers. If a relative path is
-      given, the parent folder will be ${zookeeper.znode.parent}. By default,
-      this means the root location is stored at /hbase/root-region-server.
-+
-.Default
-`root-region-server`
 
-  
 [[zookeeper.znode.acl.parent]]
 *`zookeeper.znode.acl.parent`*::
 +
@@ -543,7 +510,7 @@ Root ZNode for access control lists.
 .Default
 `acl`
 
-  
+
 [[hbase.zookeeper.dns.interface]]
 *`hbase.zookeeper.dns.interface`*::
 +
@@ -554,7 +521,7 @@ The name of the Network Interface from which a ZooKeeper server
 .Default
 `default`
 
-  
+
 [[hbase.zookeeper.dns.nameserver]]
 *`hbase.zookeeper.dns.nameserver`*::
 +
@@ -566,7 +533,7 @@ The host name or IP address of the name server (DNS)
 .Default
 `default`
 
-  
+
 [[hbase.zookeeper.peerport]]
 *`hbase.zookeeper.peerport`*::
 +
@@ -578,7 +545,7 @@ Port used by ZooKeeper peers to talk to each other.
 .Default
 `2888`
 
-  
+
 [[hbase.zookeeper.leaderport]]
 *`hbase.zookeeper.leaderport`*::
 +
@@ -590,7 +557,7 @@ Port used by ZooKeeper for leader election.
 .Default
 `3888`
 
-  
+
 [[hbase.zookeeper.useMulti]]
 *`hbase.zookeeper.useMulti`*::
 +
@@ -605,21 +572,7 @@ Instructs HBase to make use of ZooKeeper's multi-update functionality.
 .Default
 `true`
 
-  
-[[hbase.config.read.zookeeper.config]]
-*`hbase.config.read.zookeeper.config`*::
-+
-.Description
 
-        Set to true to allow HBaseConfiguration to read the
-        zoo.cfg file for ZooKeeper properties. Switching this to true
-        is not recommended, since the functionality of reading ZK
-        properties from a zoo.cfg file has been deprecated.
-+
-.Default
-`false`
-
-  
 [[hbase.zookeeper.property.initLimit]]
 *`hbase.zookeeper.property.initLimit`*::
 +
@@ -630,7 +583,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `10`
 
-  
+
 [[hbase.zookeeper.property.syncLimit]]
 *`hbase.zookeeper.property.syncLimit`*::
 +
@@ -642,7 +595,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `5`
 
-  
+
 [[hbase.zookeeper.property.dataDir]]
 *`hbase.zookeeper.property.dataDir`*::
 +
@@ -653,7 +606,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `${hbase.tmp.dir}/zookeeper`
 
-  
+
 [[hbase.zookeeper.property.clientPort]]
 *`hbase.zookeeper.property.clientPort`*::
 +
@@ -664,7 +617,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `2181`
 
-  
+
 [[hbase.zookeeper.property.maxClientCnxns]]
 *`hbase.zookeeper.property.maxClientCnxns`*::
 +
@@ -678,7 +631,7 @@ Property from ZooKeeper's config zoo.cfg.
 .Default
 `300`
 
-  
+
 [[hbase.client.write.buffer]]
 *`hbase.client.write.buffer`*::
 +
@@ -693,7 +646,7 @@ Default size of the HTable client write buffer in bytes.
 .Default
 `2097152`
 
-  
+
 [[hbase.client.pause]]
 *`hbase.client.pause`*::
 +
@@ -706,7 +659,7 @@ General client pause value.  Used mostly as value to wait
 .Default
 `100`
 
-  
+
 [[hbase.client.retries.number]]
 *`hbase.client.retries.number`*::
 +
@@ -721,7 +674,7 @@ Maximum retries.  Used as maximum for all retryable
 .Default
 `35`
 
-  
+
 [[hbase.client.max.total.tasks]]
 *`hbase.client.max.total.tasks`*::
 +
@@ -732,7 +685,7 @@ The maximum number of concurrent tasks a single HTable instance will
 .Default
 `100`
 
-  
+
 [[hbase.client.max.perserver.tasks]]
 *`hbase.client.max.perserver.tasks`*::
 +
@@ -743,7 +696,7 @@ The maximum number of concurrent tasks a single HTable instance will
 .Default
 `5`
 
-  
+
 [[hbase.client.max.perregion.tasks]]
 *`hbase.client.max.perregion.tasks`*::
 +
@@ -756,7 +709,7 @@ The maximum number of concurrent connections the client will
 .Default
 `1`
 
-  
+
 [[hbase.client.scanner.caching]]
 *`hbase.client.scanner.caching`*::
 +
@@ -771,7 +724,7 @@ Number of rows that will be fetched when calling next
 .Default
 `100`
 
-  
+
 [[hbase.client.keyvalue.maxsize]]
 *`hbase.client.keyvalue.maxsize`*::
 +
@@ -786,7 +739,7 @@ Specifies the combined maximum allowed size of a KeyValue
 .Default
 `10485760`
 
-  
+
 [[hbase.client.scanner.timeout.period]]
 *`hbase.client.scanner.timeout.period`*::
 +
@@ -796,7 +749,7 @@ Client scanner lease period in milliseconds.
 .Default
 `60000`
 
-  
+
 [[hbase.client.localityCheck.threadPoolSize]]
 *`hbase.client.localityCheck.threadPoolSize`*::
 +
@@ -806,7 +759,7 @@ Client scanner lease period in milliseconds.
 .Default
 `2`
 
-  
+
 [[hbase.bulkload.retries.number]]
 *`hbase.bulkload.retries.number`*::
 +
@@ -818,7 +771,7 @@ Maximum retries.  This is maximum number of iterations
 .Default
 `10`
 
-  
+
 [[hbase.balancer.period
     ]]
 *`hbase.balancer.period
@@ -830,7 +783,7 @@ Period at which the region balancer runs in the Master.
 .Default
 `300000`
 
-  
+
 [[hbase.regions.slop]]
 *`hbase.regions.slop`*::
 +
@@ -840,7 +793,7 @@ Rebalance if any regionserver has average + (average * slop) regions.
 .Default
 `0.2`
 
-  
+
 [[hbase.server.thread.wakefrequency]]
 *`hbase.server.thread.wakefrequency`*::
 +
@@ -851,20 +804,20 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `10000`
 
-  
+
 [[hbase.server.versionfile.writeattempts]]
 *`hbase.server.versionfile.writeattempts`*::
 +
 .Description
 
     How many time to retry attempting to write a version file
-    before just aborting. Each attempt is seperated by the
+    before just aborting. Each attempt is separated by the
     hbase.server.thread.wakefrequency milliseconds.
 +
 .Default
 `3`
 
-  
+
 [[hbase.hregion.memstore.flush.size]]
 *`hbase.hregion.memstore.flush.size`*::
 +
@@ -877,7 +830,7 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `134217728`
 
-  
+
 [[hbase.hregion.percolumnfamilyflush.size.lower.bound]]
 *`hbase.hregion.percolumnfamilyflush.size.lower.bound`*::
 +
@@ -890,12 +843,12 @@ Time to sleep in between searches for work (in milliseconds).
     memstore size more than this, all the memstores will be flushed
     (just as usual). This value should be less than half of the total memstore
     threshold (hbase.hregion.memstore.flush.size).
-    
+
 +
 .Default
 `16777216`
 
-  
+
 [[hbase.hregion.preclose.flush.size]]
 *`hbase.hregion.preclose.flush.size`*::
 +
@@ -914,7 +867,7 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `5242880`
 
-  
+
 [[hbase.hregion.memstore.block.multiplier]]
 *`hbase.hregion.memstore.block.multiplier`*::
 +
@@ -930,7 +883,7 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `4`
 
-  
+
 [[hbase.hregion.memstore.mslab.enabled]]
 *`hbase.hregion.memstore.mslab.enabled`*::
 +
@@ -944,19 +897,19 @@ Time to sleep in between searches for work (in milliseconds).
 .Default
 `true`
 
-  
+
 [[hbase.hregion.max.filesize]]
 *`hbase.hregion.max.filesize`*::
 +
 .Description
 
-    Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this 
+    Maximum HFile size. If the sum of the sizes of a region's HFiles has grown to exceed this
     value, the region is split in two.
 +
 .Default
 `10737418240`
 
-  
+
 [[hbase.hregion.majorcompaction]]
 *`hbase.hregion.majorcompaction`*::
 +
@@ -973,7 +926,7 @@ Time between major compactions, expressed in milliseconds. Set to 0 to disable
 .Default
 `604800000`
 
-  
+
 [[hbase.hregion.majorcompaction.jitter]]
 *`hbase.hregion.majorcompaction.jitter`*::
 +
@@ -986,32 +939,32 @@ A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occ
 .Default
 `0.50`
 
-  
+
 [[hbase.hstore.compactionThreshold]]
 *`hbase.hstore.compactionThreshold`*::
 +
 .Description
- If more than this number of StoreFiles exist in any one Store 
-      (one StoreFile is written per flush of MemStore), a compaction is run to rewrite all 
+ If more than this number of StoreFiles exist in any one Store
+      (one StoreFile is written per flush of MemStore), a compaction is run to rewrite all
       StoreFiles into a single StoreFile. Larger values delay compaction, but when compaction does
       occur, it takes longer to complete.
 +
 .Default
 `3`
 
-  
+
 [[hbase.hstore.flusher.count]]
 *`hbase.hstore.flusher.count`*::
 +
 .Description
  The number of flush threads. With fewer threads, the MemStore flushes will be
       queued. With more threads, the flushes will be executed in parallel, increasing the load on
-      HDFS, and potentially causing more compactions. 
+      HDFS, and potentially causing more compactions.
 +
 .Default
 `2`
 
-  
+
 [[hbase.hstore.blockingStoreFiles]]
 *`hbase.hstore.blockingStoreFiles`*::
 +
@@ -1023,40 +976,40 @@ A multiplier applied to hbase.hregion.majorcompaction to cause compaction to occ
 .Default
 `10`
 
-  
+
 [[hbase.hstore.blockingWaitTime]]
 *`hbase.hstore.blockingWaitTime`*::
 +
 .Description
  The time for which a region will block updates after reaching the StoreFile limit
-    defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop 
+    defined by hbase.hstore.blockingStoreFiles. After this time has elapsed, the region will stop
     blocking updates even if a compaction has not been completed.
 +
 .Default
 `90000`
 
-  
+
 [[hbase.hstore.compaction.min]]
 *`hbase.hstore.compaction.min`*::
 +
 .Description
-The minimum number of StoreFiles which must be eligible for compaction before 
-      compaction can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with 
-      too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction 
+The minimum number of StoreFiles which must be eligible for compaction before
+      compaction can run. The goal of tuning hbase.hstore.compaction.min is to avoid ending up with
+      too many tiny StoreFiles to compact. Setting this value to 2 would cause a minor compaction
       each time you have two StoreFiles in a Store, and this is probably not appropriate. If you
-      set this value too high, all the other values will need to be adjusted accordingly. For most 
+      set this value too high, all the other values will need to be adjusted accordingly. For most
       cases, the default value is appropriate. In previous versions of HBase, the parameter
       hbase.hstore.compaction.min was named hbase.hstore.compactionThreshold.
 +
 .Default
 `3`
 
-  
+
 [[hbase.hstore.compaction.max]]
 *`hbase.hstore.compaction.max`*::
 +
 .Description
-The maximum number of StoreFiles which will be selected for a single minor 
+The maximum number of StoreFiles which will be selected for a single minor
       compaction, regardless of the number of eligible StoreFiles. Effectively, the value of
       hbase.hstore.compaction.max controls the length of time it takes a single compaction to
       complete. Setting it larger means that more StoreFiles are included in a compaction. For most
@@ -1065,88 +1018,88 @@ The maximum number of StoreFiles which will be selected for a single minor
 .Default
 `10`
 
-  
+
 [[hbase.hstore.compaction.min.size]]
 *`hbase.hstore.compaction.min.size`*::
 +
 .Description
-A StoreFile smaller than this size will always be eligible for minor compaction. 
-      HFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if 
-      they are eligible. Because this limit represents the "automatic include"limit for all 
-      StoreFiles smaller than this value, this value may need to be reduced in write-heavy 
-      environments where many StoreFiles in the 1-2 MB range are being flushed, because every 
+A StoreFile smaller than this size will always be eligible for minor compaction.
+      HFiles this size or larger are evaluated by hbase.hstore.compaction.ratio to determine if
+      they are eligible. Because this limit represents the "automatic include"limit for all
+      StoreFiles smaller than this value, this value may need to be reduced in write-heavy
+      environments where many StoreFiles in the 1-2 MB range are being flushed, because every
       StoreFile will be targeted for compaction and the resulting StoreFiles may still be under the
       minimum size and require further compaction. If this parameter is lowered, the ratio check is
-      triggered more quickly. This addressed some issues seen in earlier versions of HBase but 
-      changing this parameter is no longer necessary in most situations. Default: 128 MB expressed 
+      triggered more quickly. This addressed some issues seen in earlier versions of HBase but
+      changing this parameter is no longer necessary in most situations. Default: 128 MB expressed
       in bytes.
 +
 .Default
 `134217728`
 
-  
+
 [[hbase.hstore.compaction.max.size]]
 *`hbase.hstore.compaction.max.size`*::
 +
 .Description
-A StoreFile larger than this size will be excluded from compaction. The effect of 
-      raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get 
+A StoreFile larger than this size will be excluded from compaction. The effect of
+      raising hbase.hstore.compaction.max.size is fewer, larger StoreFiles that do not get
       compacted often. If you feel that compaction is happening too often without much benefit, you
       can try raising this value. Default: the value of LONG.MAX_VALUE, expressed in bytes.
 +
 .Default
 `9223372036854775807`
 
-  
+
 [[hbase.hstore.compaction.ratio]]
 *`hbase.hstore.compaction.ratio`*::
 +
 .Description
-For minor compaction, this ratio is used to determine whether a given StoreFile 
+For minor compaction, this ratio is used to determine whether a given StoreFile
       which is larger than hbase.hstore.compaction.min.size is eligible for compaction. Its
       effect is to limit compaction of large StoreFiles. The value of hbase.hstore.compaction.ratio
-      is expressed as a floating-point decimal. A large ratio, such as 10, will produce a single 
-      giant StoreFile. Conversely, a low value, such as .25, will produce behavior similar to the 
+      is expressed as a floating-point decimal. A large ratio, such as 10, will produce a single
+      giant StoreFile. Conversely, a low value, such as .25, will produce behavior similar to the
       BigTable compaction algorithm, producing four StoreFiles. A moderate value of between 1.0 and
-      1.4 is recommended. When tuning this value, you are balancing write costs with read costs. 
-      Raising the value (to something like 1.4) will have more write costs, because you will 
-      compact larger StoreFiles. However, during reads, HBase will need to seek through fewer 
-      StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of 
-      Bloom filters. Otherwise, you can lower this value to something like 1.0 to reduce the 
-      background cost of writes, and use Bloom filters to control the number of StoreFiles touched 
+      1.4 is recommended. When tuning this value, you are balancing write costs with read costs.
+      Raising the value (to something like 1.4) will have more write costs, because you will
+      compact larger StoreFiles. However, during reads, HBase will need to seek through fewer
+      StoreFiles to accomplish the read. Consider this approach if you cannot take advantage of
+      Bloom filters. Otherwise, you can lower this value to something like 1.0 to reduce the
+      background cost of writes, and use Bloom filters to control the number of StoreFiles touched
       during reads. For most cases, the default value is appropriate.
 +
 .Default
 `1.2F`
 
-  
+
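
To make the interplay of the compaction settings concrete, a hypothetical tuning (values are illustrative, not recommendations) might be sketched as:

[source,xml]
----
<!-- hbase-site.xml: hypothetical compaction tuning, values are illustrative -->
<configuration>
  <property>
    <name>hbase.hstore.compaction.min</name>
    <value>4</value> <!-- wait for at least 4 eligible StoreFiles before running a minor compaction -->
  </property>
  <property>
    <name>hbase.hstore.compaction.max</name>
    <value>10</value> <!-- never include more than 10 StoreFiles in one minor compaction -->
  </property>
  <property>
    <name>hbase.hstore.compaction.ratio</name>
    <value>1.4</value> <!-- favor fewer, larger StoreFiles at the cost of more write I/O -->
  </property>
</configuration>
----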
 [[hbase.hstore.compaction.ratio.offpeak]]
 *`hbase.hstore.compaction.ratio.offpeak`*::
 +
 .Description
 Allows you to set a different (by default, more aggressive) ratio for determining
-      whether larger StoreFiles are included in compactions during off-peak hours. Works in the 
-      same way as hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and 
+      whether larger StoreFiles are included in compactions during off-peak hours. Works in the
+      same way as hbase.hstore.compaction.ratio. Only applies if hbase.offpeak.start.hour and
       hbase.offpeak.end.hour are also enabled.
 +
 .Default
 `5.0F`
 
-  
+
 [[hbase.hstore.time.to.purge.deletes]]
 *`hbase.hstore.time.to.purge.deletes`*::
 +
 .Description
-The amount of time to delay purging of delete markers with future timestamps. If 
-      unset, or set to 0, all delete markers, including those with future timestamps, are purged 
-      during the next major compaction. Otherwise, a delete marker is kept until the major compaction 
+The amount of time to delay purging of delete markers with future timestamps. If
+      unset, or set to 0, all delete markers, including those with future timestamps, are purged
+      during the next major compaction. Otherwise, a delete marker is kept until the major compaction
       which occurs after the marker's timestamp plus the value of this setting, in milliseconds.
-    
+
 +
 .Default
 `0`
 
-  
+
 [[hbase.offpeak.start.hour]]
 *`hbase.offpeak.start.hour`*::
 +
@@ -1157,7 +1110,7 @@ The start of off-peak hours, expressed as an integer between 0 and 23, inclusive
 .Default
 `-1`
 
-  
+
 [[hbase.offpeak.end.hour]]
 *`hbase.offpeak.end.hour`*::
 +
@@ -1168,7 +1121,7 @@ The end of off-peak hours, expressed as an integer between 0 and 23, inclusive.
 .Default
 `-1`
 
-  
+
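
As a sketch, a cluster that is quiet between 01:00 and 06:00 could declare that window so the off-peak ratio takes effect; the hours are illustrative and 5.0 is the documented default ratio:

[source,xml]
----
<!-- hbase-site.xml: illustrative off-peak window -->
<configuration>
  <property>
    <name>hbase.offpeak.start.hour</name>
    <value>1</value> <!-- off-peak window starts at 01:00 -->
  </property>
  <property>
    <name>hbase.offpeak.end.hour</name>
    <value>6</value> <!-- off-peak window ends at 06:00 -->
  </property>
  <property>
    <name>hbase.hstore.compaction.ratio.offpeak</name>
    <value>5.0</value> <!-- allow much larger StoreFiles into compactions during the window -->
  </property>
</configuration>
----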
 [[hbase.regionserver.thread.compaction.throttle]]
 *`hbase.regionserver.thread.compaction.throttle`*::
 +
@@ -1184,19 +1137,19 @@ There are two different thread pools for compactions, one for large compactions
 .Default
 `2684354560`
 
-  
+
 [[hbase.hstore.compaction.kv.max]]
 *`hbase.hstore.compaction.kv.max`*::
 +
 .Description
 The maximum number of KeyValues to read and then write in a batch when flushing or
       compacting. Set this lower if you have big KeyValues and problems with Out Of Memory
-      Exceptions Set this higher if you have wide, small rows. 
+      Exceptions. Set this higher if you have wide, small rows.
 +
 .Default
 `10`
 
-  
+
 [[hbase.storescanner.parallel.seek.enable]]
 *`hbase.storescanner.parallel.seek.enable`*::
 +
@@ -1208,7 +1161,7 @@ The maximum number of KeyValues to read and then write in a batch when flushing
 .Default
 `false`
 
-  
+
 [[hbase.storescanner.parallel.seek.threads]]
 *`hbase.storescanner.parallel.seek.threads`*::
 +
@@ -1219,7 +1172,7 @@ The maximum number of KeyValues to read and then write in a batch when flushing
 .Default
 `10`
 
-  
+
 [[hfile.block.cache.size]]
 *`hfile.block.cache.size`*::
 +
@@ -1232,7 +1185,7 @@ Percentage of maximum heap (-Xmx setting) to allocate to block cache
 .Default
 `0.4`
 
-  
+
 [[hfile.block.index.cacheonwrite]]
 *`hfile.block.index.cacheonwrite`*::
 +
@@ -1243,7 +1196,7 @@ This allows to put non-root multi-level index blocks into the block
 .Default
 `false`
 
-  
+
 [[hfile.index.block.max.size]]
 *`hfile.index.block.max.size`*::
 +
@@ -1255,31 +1208,33 @@ When the size of a leaf-level, intermediate-level, or root-level
 .Default
 `131072`
 
-  
+
 [[hbase.bucketcache.ioengine]]
 *`hbase.bucketcache.ioengine`*::
 +
 .Description
-Where to store the contents of the bucketcache. One of: onheap, 
-      offheap, or file. If a file, set it to file:PATH_TO_FILE. See https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html for more information.
-    
+Where to store the contents of the bucketcache. One of: onheap,
+      offheap, or file. If a file, set it to file:PATH_TO_FILE.
+      See https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html
+      for more information.
+
 +
 .Default
 ``
 
-  
+
 [[hbase.bucketcache.combinedcache.enabled]]
 *`hbase.bucketcache.combinedcache.enabled`*::
 +
 .Description
-Whether or not the bucketcache is used in league with the LRU 
-      on-heap block cache. In this mode, indices and blooms are kept in the LRU 
+Whether or not the bucketcache is used in league with the LRU
+      on-heap block cache. In this mode, indices and blooms are kept in the LRU
       blockcache and the data blocks are kept in the bucketcache.
 +
 .Default
 `true`
 
-  
+
 [[hbase.bucketcache.size]]
 *`hbase.bucketcache.size`*::
 +
@@ -1290,19 +1245,19 @@ Used along with bucket cache, this is a float that EITHER represents a percentag
 .Default
 `0` when specified as a float
 
-  
-[[hbase.bucketcache.sizes]]
-*`hbase.bucketcache.sizes`*::
+
+[[hbase.bucketcache.bucket.sizes]]
+*`hbase.bucketcache.bucket.sizes`*::
 +
 .Description
-A comma-separated list of sizes for buckets for the bucketcache 
-      if you use multiple sizes. Should be a list of block sizes in order from smallest 
+A comma-separated list of sizes for buckets for the bucketcache
+      if you use multiple sizes. Should be a list of block sizes in order from smallest
       to largest. The sizes you use will depend on your data access patterns.
 +
 .Default
 ``
 
-  
+
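
Putting the bucket cache settings together, a minimal off-heap sketch follows; the 8192 MB capacity and the bucket sizes are assumptions chosen for illustration, and the JVM must separately be granted enough direct memory, which is not shown here:

[source,xml]
----
<!-- hbase-site.xml: illustrative off-heap bucket cache; sizes are examples only -->
<configuration>
  <property>
    <name>hbase.bucketcache.ioengine</name>
    <value>offheap</value> <!-- data blocks go off-heap; index and bloom blocks stay in the LRU blockcache -->
  </property>
  <property>
    <name>hbase.bucketcache.size</name>
    <value>8192</value> <!-- cache capacity in megabytes; values below 1.0 are read as a fraction of heap -->
  </property>
  <property>
    <name>hbase.bucketcache.bucket.sizes</name>
    <value>8192,16384,32768,65536,131072</value> <!-- bucket sizes, smallest to largest -->
  </property>
</configuration>
----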
 [[hfile.format.version]]
 *`hfile.format.version`*::
 +
@@ -1310,13 +1265,13 @@ A comma-separated list of sizes for buckets for the bucketcache
 The HFile format version to use for new files.
       Version 3 adds support for tags in hfiles (See http://hbase.apache.org/book.html#hbase.tags).
       Distributed Log Replay requires that tags are enabled. Also see the configuration
-      'hbase.replication.rpc.codec'. 
-      
+      'hbase.replication.rpc.codec'.
+
 +
 .Default
 `3`
 
-  
+
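
As a sketch of how the tag-related settings line up, a cluster that relies on tags (for example, for visibility labels) keeps HFile version 3 together with the tag-aware replication codec; both values below are the defaults already listed in this file:

[source,xml]
----
<!-- hbase-site.xml: defaults restated to show how the tag-related settings pair up -->
<configuration>
  <property>
    <name>hfile.format.version</name>
    <value>3</value> <!-- HFile v3 is required for tags -->
  </property>
  <property>
    <name>hbase.replication.rpc.codec</name>
    <value>org.apache.hadoop.hbase.codec.KeyValueCodecWithTags</value> <!-- replicate tags along with cells -->
  </property>
</configuration>
----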
 [[hfile.block.bloom.cacheonwrite]]
 *`hfile.block.bloom.cacheonwrite`*::
 +
@@ -1326,7 +1281,7 @@ Enables cache-on-write for inline blocks of a compound Bloom filter.
 .Default
 `false`
 
-  
+
 [[io.storefile.bloom.block.size]]
 *`io.storefile.bloom.block.size`*::
 +
@@ -1339,7 +1294,7 @@ The size in bytes of a single block ("chunk") of a compound Bloom
 .Default
 `131072`
 
-  
+
 [[hbase.rs.cacheblocksonwrite]]
 *`hbase.rs.cacheblocksonwrite`*::
 +
@@ -1350,7 +1305,7 @@ Whether an HFile block should be added to the block cache when the
 .Default
 `false`
 
-  
+
 [[hbase.rpc.timeout]]
 *`hbase.rpc.timeout`*::
 +
@@ -1362,7 +1317,7 @@ This is for the RPC layer to define how long HBase client applications
 .Default
 `60000`
 
-  
+
 [[hbase.rpc.shortoperation.timeout]]
 *`hbase.rpc.shortoperation.timeout`*::
 +
@@ -1375,7 +1330,7 @@ This is another version of "hbase.rpc.timeout". For those RPC operation
 .Default
 `10000`
 
-  
+
 [[hbase.ipc.client.tcpnodelay]]
 *`hbase.ipc.client.tcpnodelay`*::
 +
@@ -1386,7 +1341,7 @@ Set no delay on rpc socket connections.  See
 .Default
 `true`
 
-  
+
 [[hbase.master.keytab.file]]
 *`hbase.master.keytab.file`*::
 +
@@ -1397,7 +1352,7 @@ Full path to the kerberos keytab file to use for logging in
 .Default
 ``
 
-  
+
 [[hbase.master.kerberos.principal]]
 *`hbase.master.kerberos.principal`*::
 +
@@ -1411,7 +1366,7 @@ Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
 .Default
 ``
 
-  
+
 [[hbase.regionserver.keytab.file]]
 *`hbase.regionserver.keytab.file`*::
 +
@@ -1422,7 +1377,7 @@ Full path to the kerberos keytab file to use for logging in
 .Default
 ``
 
-  
+
 [[hbase.regionserver.kerberos.principal]]
 *`hbase.regionserver.kerberos.principal`*::
 +
@@ -1437,7 +1392,7 @@ Ex. "hbase/_HOST@EXAMPLE.COM".  The kerberos principal name
 .Default
 ``
 
-  
+
 [[hadoop.policy.file]]
 *`hadoop.policy.file`*::
 +
@@ -1449,7 +1404,7 @@ The policy configuration file used by RPC servers to make
 .Default
 `hbase-policy.xml`
 
-  
+
 [[hbase.superuser]]
 *`hbase.superuser`*::
 +
@@ -1461,7 +1416,7 @@ List of users or groups (comma-separated), who are allowed
 .Default
 ``
 
-  
+
 [[hbase.auth.key.update.interval]]
 *`hbase.auth.key.update.interval`*::
 +
@@ -1472,7 +1427,7 @@ The update interval for master key for authentication tokens
 .Default
 `86400000`
 
-  
+
 [[hbase.auth.token.max.lifetime]]
 *`hbase.auth.token.max.lifetime`*::
 +
@@ -1483,7 +1438,7 @@ The maximum lifetime in milliseconds after which an
 .Default
 `604800000`
 
-  
+
 [[hbase.ipc.client.fallback-to-simple-auth-allowed]]
 *`hbase.ipc.client.fallback-to-simple-auth-allowed`*::
 +
@@ -1498,7 +1453,7 @@ When a client is configured to attempt a secure connection, but attempts to
 .Default
 `false`
 
-  
+
 [[hbase.display.keys]]
 *`hbase.display.keys`*::
 +
@@ -1510,7 +1465,7 @@ When this is set to true the webUI and such will display all start/end keys
 .Default
 `true`
 
-  
+
 [[hbase.coprocessor.region.classes]]
 *`hbase.coprocessor.region.classes`*::
 +
@@ -1524,7 +1479,7 @@ A comma-separated list of Coprocessors that are loaded by
 .Default
 ``
 
-  
+
 [[hbase.rest.port]]
 *`hbase.rest.port`*::
 +
@@ -1534,7 +1489,7 @@ The port for the HBase REST server.
 .Default
 `8080`
 
-  
+
 [[hbase.rest.readonly]]
 *`hbase.rest.readonly`*::
 +
@@ -1546,7 +1501,7 @@ Defines the mode the REST server will be started in. Possible values are:
 .Default
 `false`
 
-  
+
 [[hbase.rest.threads.max]]
 *`hbase.rest.threads.max`*::
 +
@@ -1561,7 +1516,7 @@ The maximum number of threads of the REST server thread pool.
 .Default
 `100`
 
-  
+
 [[hbase.rest.threads.min]]
 *`hbase.rest.threads.min`*::
 +
@@ -1573,7 +1528,7 @@ The minimum number of threads of the REST server thread pool.
 .Default
 `2`
 
-  
+
 [[hbase.rest.support.proxyuser]]
 *`hbase.rest.support.proxyuser`*::
 +
@@ -1583,7 +1538,7 @@ Enables running the REST server to support proxy-user mode.
 .Default
 `false`
 
-  
+
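
For example, a read-only REST gateway on a non-default port with a larger thread pool could be sketched as follows; the port and thread counts are illustrative assumptions:

[source,xml]
----
<!-- hbase-site.xml: illustrative REST gateway settings -->
<configuration>
  <property>
    <name>hbase.rest.port</name>
    <value>8085</value> <!-- run the REST server on a non-default port -->
  </property>
  <property>
    <name>hbase.rest.readonly</name>
    <value>true</value> <!-- accept only read requests -->
  </property>
  <property>
    <name>hbase.rest.threads.max</name>
    <value>200</value> <!-- upper bound of the REST server thread pool -->
  </property>
  <property>
    <name>hbase.rest.threads.min</name>
    <value>4</value> <!-- lower bound of the REST server thread pool -->
  </property>
</configuration>
----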
 [[hbase.defaults.for.version.skip]]
 *`hbase.defaults.for.version.skip`*::
 +
@@ -1592,14 +1547,14 @@ Set to true to skip the 'hbase.defaults.for.version' check.
     Setting this to true can be useful in contexts other than
     the other side of a maven generation; i.e. running in an
     ide.  You'll want to set this boolean to true to avoid
-    seeing the RuntimException complaint: "hbase-default.xml file
+    seeing the RuntimeException complaint: "hbase-default.xml file
     seems to be for an old version of HBase (\${hbase.version}), this
     version is X.X.X-SNAPSHOT"
 +
 .Default
 `false`
 
-  
+
 [[hbase.coprocessor.master.classes]]
 *`hbase.coprocessor.master.classes`*::
 +
@@ -1614,7 +1569,7 @@ A comma-separated list of
 .Default
 ``
 
-  
+
 [[hbase.coprocessor.abortonerror]]
 *`hbase.coprocessor.abortonerror`*::
 +
@@ -1629,17 +1584,7 @@ Set to true to cause the hosting server (master or regionserver)
 .Default
 `true`
 
-  
-[[hbase.online.schema.update.enable]]
-*`hbase.online.schema.update.enable`*::
-+
-.Description
-Set true to enable online schema changes.
-+
-.Default
-`true`
 
-  
 [[hbase.table.lock.enable]]
 *`hbase.table.lock.enable`*::
 +
@@ -1651,7 +1596,7 @@ Set to true to enable locking the table in zookeeper for schema change operation
 .Default
 `true`
 
-  
+
 [[hbase.table.max.rowsize]]
 *`hbase.table.max.rowsize`*::
 +
@@ -1660,12 +1605,12 @@ Set to true to enable locking the table in zookeeper for schema change operation
       Maximum size of single row in bytes (default is 1 Gb) for Get'ting
       or Scan'ning without in-row scan flag set. If row size exceeds this limit
       RowTooBigException is thrown to client.
-    
+
 +
 .Default
 `1073741824`
 
-  
+
 [[hbase.thrift.minWorkerThreads]]
 *`hbase.thrift.minWorkerThreads`*::
 +
@@ -1676,7 +1621,7 @@ The "core size" of the thread pool. New threads are created on every
 .Default
 `16`
 
-  
+
 [[hbase.thrift.maxWorkerThreads]]
 *`hbase.thrift.maxWorkerThreads`*::
 +
@@ -1688,7 +1633,7 @@ The maximum size of the thread pool. When the pending request queue
 .Default
 `1000`
 
-  
+
 [[hbase.thrift.maxQueuedRequests]]
 *`hbase.thrift.maxQueuedRequests`*::
 +
@@ -1701,21 +1646,7 @@ The maximum number of pending Thrift connections waiting in the queue. If
 .Default
 `1000`
 
-  
-[[hbase.thrift.htablepool.size.max]]
-*`hbase.thrift.htablepool.size.max`*::
-+
-.Description
-The upper bound for the table pool used in the Thrift gateways server.
-      Since this is per table name, we assume a single table and so with 1000 default
-      worker threads max this is set to a matching number. For other workloads this number
-      can be adjusted as needed.
-    
-+
-.Default
-`1000`
 
-  
 [[hbase.regionserver.thrift.framed]]
 *`hbase.regionserver.thrift.framed`*::
 +
@@ -1724,12 +1655,12 @@ Use Thrift TFramedTransport on the server side.
       This is the recommended transport for thrift servers and requires a similar setting
       on the client side. Changing this to false will select the default transport,
       vulnerable to DoS when malformed requests are issued due to THRIFT-601.
-    
+
 +
 .Default
 `false`
 
-  
+
 [[hbase.regionserver.thrift.framed.max_frame_size_in_mb]]
 *`hbase.regionserver.thrift.framed.max_frame_size_in_mb`*::
 +
@@ -1739,7 +1670,7 @@ Default frame size when using framed transport
 .Default
 `2`
 
-  
+
 [[hbase.regionserver.thrift.compact]]
 *`hbase.regionserver.thrift.compact`*::
 +
@@ -1749,7 +1680,7 @@ Use Thrift TCompactProtocol binary serialization protocol.
 .Default
 `false`
 
-  
+
 [[hbase.data.umask.enable]]
 *`hbase.data.umask.enable`*::
 +
@@ -1760,7 +1691,7 @@ Enable, if true, that file permissions should be assigned
 .Default
 `false`
 
-  
+
 [[hbase.data.umask]]
 *`hbase.data.umask`*::
 +
@@ -1771,32 +1702,7 @@ File permissions that should be used to write data
 .Default
 `000`
 
-  
-[[hbase.metrics.showTableName]]
-*`hbase.metrics.showTableName`*::
-+
-.Description
-Whether to include the prefix "tbl.tablename" in per-column family metrics.
-	If true, for each metric M, per-cf metrics will be reported for tbl.T.cf.CF.M, if false,
-	per-cf metrics will be aggregated by column-family across tables, and reported for cf.CF.M.
-	In both cases, the aggregated metric M across tables and cfs will be reported.
-+
-.Default
-`true`
-
-  
-[[hbase.metrics.exposeOperationTimes]]
-*`hbase.metrics.exposeOperationTimes`*::
-+
-.Description
-Whether to report metrics about time taken performing an
-      operation on the region server.  Get, Put, Delete, Increment, and Append can all
-      have their times exposed through Hadoop metrics per CF and per region.
-+
-.Default
-`true`
 
-  
 [[hbase.snapshot.enabled]]
 *`hbase.snapshot.enabled`*::
 +
@@ -1806,7 +1712,7 @@ Set to true to allow snapshots to be taken / restored / cloned.
 .Default
 `true`
 
-  
+
 [[hbase.snapshot.restore.take.failsafe.snapshot]]
 *`hbase.snapshot.restore.take.failsafe.snapshot`*::
 +
@@ -1818,7 +1724,7 @@ Set to true to take a snapshot before the restore operation.
 .Default
 `true`
 
-  
+
 [[hbase.snapshot.restore.failsafe.name]]
 *`hbase.snapshot.restore.failsafe.name`*::
 +
@@ -1830,7 +1736,7 @@ Name of the failsafe snapshot taken by the restore operation.
 .Default
 `hbase-failsafe-{snapshot.name}-{restore.timestamp}`
 
-  
+
 [[hbase.server.compactchecker.interval.multiplier]]
 *`hbase.server.compactchecker.interval.multiplier`*::
 +
@@ -1845,7 +1751,7 @@ The number that determines how often we scan to see if compaction is necessary.
 .Default
 `1000`
 
-  
+
 [[hbase.lease.recovery.timeout]]
 *`hbase.lease.recovery.timeout`*::
 +
@@ -1855,7 +1761,7 @@ How long we wait on dfs lease recovery in total before giving up.
 .Default
 `900000`
 
-  
+
 [[hbase.lease.recovery.dfs.timeout]]
 *`hbase.lease.recovery.dfs.timeout`*::
 +
@@ -1869,7 +1775,7 @@ How long between dfs recover lease invocations. Should be larger than the sum of
 .Default
 `64000`
 
-  
+
 [[hbase.column.max.version]]
 *`hbase.column.max.version`*::
 +
@@ -1880,7 +1786,7 @@ New column family descriptors will use this value as the default number of versi
 .Default
 `1`
 
-  
+
 [[hbase.dfs.client.read.shortcircuit.buffer.size]]
 *`hbase.dfs.client.read.shortcircuit.buffer.size`*::
 +
@@ -1894,12 +1800,12 @@ If the DFSClient configuration
     direct memory.  So, we set it down from the default.  Make
     it > the default hbase block size set in the HColumnDescriptor
     which is usually 64k.
-    
+
 +
 .Default
 `131072`
 
-  
+
 [[hbase.regionserver.checksum.verify]]
 *`hbase.regionserver.checksum.verify`*::
 +
@@ -1914,13 +1820,13 @@ If the DFSClient configuration
         fails, we will switch back to using HDFS checksums (so do not disable HDFS
         checksums!  And besides this feature applies to hfiles only, not to WALs).
         If this parameter is set to false, then hbase will not verify any checksums,
-        instead it will depend on checksum verification being done in the HDFS client.  
-    
+        instead it will depend on checksum verification being done in the HDFS client.
+
 +
 .Default
 `true`
 
-  
+
 [[hbase.hstore.bytes.per.checksum]]
 *`hbase.hstore.bytes.per.checksum`*::
 +
@@ -1928,12 +1834,12 @@ If the DFSClient configuration
 
         Number of bytes in a newly created checksum chunk for HBase-level
         checksums in hfile blocks.
-    
+
 +
 .Default
 `16384`
 
-  
+
 [[hbase.hstore.checksum.algorithm]]
 *`hbase.hstore.checksum.algorithm`*::
 +
@@ -1941,12 +1847,12 @@ If the DFSClient configuration
 
       Name of an algorithm that is used to compute checksums. Possible values
       are NULL, CRC32, CRC32C.
-    
+
 +
 .Default
 `CRC32`
 
-  
+
 [[hbase.status.published]]
 *`hbase.status.published`*::
 +
@@ -1956,60 +1862,60 @@ If the DFSClient configuration
       When a region server dies and its recovery starts, the master will push this information
       to the client application, to let them cut the connection immediately instead of waiting
       for a timeout.
-    
+
 +
 .Default
 `false`
 
-  
+
 [[hbase.status.publisher.class]]
 *`hbase.status.publisher.class`*::
 +
 .Description
 
       Implementation of the status publication with a multicast message.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher`
 
-  
+
 [[hbase.status.listener.class]]
 *`hbase.status.listener.class`*::
 +
 .Description
 
       Implementation of the status listener with a multicast message.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener`
 
-  
+
 [[hbase.status.multicast.address.ip]]
 *`hbase.status.multicast.address.ip`*::
 +
 .Description
 
       Multicast address to use for the status publication by multicast.
-    
+
 +
 .Default
 `226.1.1.3`
 
-  
+
 [[hbase.status.multicast.address.port]]
 *`hbase.status.multicast.address.port`*::
 +
 .Description
 
       Multicast port to use for the status publication by multicast.
-    
+
 +
 .Default
 `16100`
 
-  
+
 [[hbase.dynamic.jars.dir]]
 *`hbase.dynamic.jars.dir`*::
 +
@@ -2019,12 +1925,12 @@ If the DFSClient configuration
       dynamically by the region server without the need to restart. However,
       an already loaded filter/co-processor class would not be un-loaded. See
       HBASE-1936 for more details.
-    
+
 +
 .Default
 `${hbase.rootdir}/lib`
 
-  
+
 [[hbase.security.authentication]]
 *`hbase.security.authentication`*::
 +
@@ -2032,24 +1938,24 @@ If the DFSClient configuration
 
       Controls whether or not secure authentication is enabled for HBase.
       Possible values are 'simple' (no authentication), and 'kerberos'.
-    
+
 +
 .Default
 `simple`
 
-  
+
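
Tying the security properties above together, a hypothetical secure setup might look like the sketch below; the principal and keytab path are placeholders, not working values:

[source,xml]
----
<!-- hbase-site.xml: hypothetical Kerberos setup; principal and path are placeholders -->
<configuration>
  <property>
    <name>hbase.security.authentication</name>
    <value>kerberos</value> <!-- enable secure authentication -->
  </property>
  <property>
    <name>hbase.regionserver.kerberos.principal</name>
    <value>hbase/_HOST@EXAMPLE.COM</value> <!-- _HOST is replaced with the server's hostname -->
  </property>
  <property>
    <name>hbase.regionserver.keytab.file</name>
    <value>/etc/hbase/conf/hbase.keytab</value> <!-- placeholder path to the keytab -->
  </property>
</configuration>
----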
 [[hbase.rest.filter.classes]]
 *`hbase.rest.filter.classes`*::
 +
 .Description
 
       Servlet filters for REST service.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.rest.filter.GzipFilter`
 
-  
+
 [[hbase.master.loadbalancer.class]]
 *`hbase.master.loadbalancer.class`*::
 +
@@ -2060,12 +1966,12 @@ If the DFSClient configuration
       http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/balancer/StochasticLoadBalancer.html
       It replaces the DefaultLoadBalancer as the default (since renamed
       as the SimpleLoadBalancer).
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer`
 
-  
+
 [[hbase.security.exec.permission.checks]]
 *`hbase.security.exec.permission.checks`*::
 +
@@ -2081,28 +1987,28 @@ If the DFSClient configuration
       section of the HBase online manual. For more information on granting or
       revoking permissions using the AccessController, see the security
       section of the HBase online manual.
-    
+
 +
 .Default
 `false`
 
-  
+
 [[hbase.procedure.regionserver.classes]]
 *`hbase.procedure.regionserver.classes`*::
 +
 .Description
-A comma-separated list of 
-    org.apache.hadoop.hbase.procedure.RegionServerProcedureManager procedure managers that are 
-    loaded by default on the active HRegionServer process. The lifecycle methods (init/start/stop) 
-    will be called by the active HRegionServer process to perform the specific globally barriered 
-    procedure. After implementing your own RegionServerProcedureManager, just put it in 
+A comma-separated list of
+    org.apache.hadoop.hbase.procedure.RegionServerProcedureManager procedure managers that are
+    loaded by default on the active HRegionServer process. The lifecycle methods (init/start/stop)
+    will be called by the active HRegionServer process to perform the specific globally barriered
+    procedure. After implementing your own RegionServerProcedureManager, just put it in
     HBase's classpath and add the fully qualified class name here.
-    
+
 +
 .Default
 ``
 
-  
+
 [[hbase.procedure.master.classes]]
 *`hbase.procedure.master.classes`*::
 +
@@ -2117,7 +2023,7 @@ A comma-separated list of
 .Default
 ``
 
-  
+
 [[hbase.coordinated.state.manager.class]]
 *`hbase.coordinated.state.manager.class`*::
 +
@@ -2127,7 +2033,7 @@ Fully qualified name of class implementing coordinated state manager.
 .Default
 `org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager`
 
-  
+
 [[hbase.regionserver.storefile.refresh.period]]
 *`hbase.regionserver.storefile.refresh.period`*::
 +
@@ -2140,12 +2046,12 @@ Fully qualified name of class implementing coordinated state manager.
       extra Namenode pressure. If the files cannot be refreshed for longer than HFile TTL
       (hbase.master.hfilecleaner.ttl) the requests are rejected. Configuring HFile TTL to a larger
       value is also recommended with this setting.
-    
+
 +
 .Default
 `0`
 
-  
+
 [[hbase.region.replica.replication.enabled]]
 *`hbase.region.replica.replication.enabled`*::
 +
@@ -2153,36 +2059,35 @@ Fully qualified name of class implementing coordinated state manager.
 
       Whether asynchronous WAL replication to the secondary region replicas is enabled or not.
       If this is enabled, a replication peer named "region_replica_replication" will be created
-      which will tail the logs and replicate the mutatations to region replicas for tables that
+      which will tail the logs and replicate the mutations to region replicas for tables that
       have region replication > 1. If this is enabled once, disabling this replication also
       requires disabling the replication peer using shell or ReplicationAdmin java class.
-      Replication to secondary region replicas works over standard inter-cluster replication. 
-      So replication, if disabled explicitly, also has to be enabled by setting "hbase.replication" 
-      to true for this feature to work.
-    
+      Replication to secondary region replicas works over standard inter-cluster replication.
+
+
 +
 .Default
 `false`
 
-  
+
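
As a sketch, the switch below enables asynchronous WAL replication to secondary replicas; it only has an effect for tables whose region replication is greater than 1, and per the description above the resulting `region_replica_replication` peer must be disabled separately if the feature is later turned off:

[source,xml]
----
<!-- hbase-site.xml: illustrative; effective only for tables with region replication > 1 -->
<configuration>
  <property>
    <name>hbase.region.replica.replication.enabled</name>
    <value>true</value> <!-- creates the "region_replica_replication" peer and tails the WALs -->
  </property>
</configuration>
----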
 [[hbase.http.filter.initializers]]
 *`hbase.http.filter.initializers`*::
 +
 .Description
 
-      A comma separated list of class names. Each class in the list must extend 
-      org.apache.hadoop.hbase.http.FilterInitializer. The corresponding Filter will 
-      be initialized. Then, the Filter will be applied to all user facing jsp 
-      and servlet web pages. 
+      A comma separated list of class names. Each class in the list must extend
+      org.apache.hadoop.hbase.http.FilterInitializer. The corresponding Filter will
+      be initialized. Then, the Filter will be applied to all user facing jsp
+      and servlet web pages.
       The ordering of the list defines the ordering of the filters.
-      The default StaticUserWebFilter add a user principal as defined by the 
+      The default StaticUserWebFilter adds a user principal as defined by the
       hbase.http.staticuser.user property.
-    
+
 +
 .Default
 `org.apache.hadoop.hbase.http.lib.StaticUserWebFilter`
 
-  
+
 [[hbase.security.visibility.mutations.checkauths]]
 *`hbase.security.visibility.mutations.checkauths`*::
 +
@@ -2190,41 +2095,41 @@ Fully qualified name of class implementing coordinated state manager.
 
       This property if enabled, will check whether the labels in the visibility expression are associated
       with the user issuing the mutation
-    
+
 +
 .Default
 `false`
 
-  
+
 [[hbase.http.max.threads]]
 *`hbase.http.max.threads`*::
 +
 .Description
 
-      The maximum number of threads that the HTTP Server will create in its 
+      The maximum number of threads that the HTTP Server will create in its
       ThreadPool.
-    
+
 +
 .Default
 `10`
 
-  
+
 [[hbase.replication.rpc.codec]]
 *`hbase.replication.rpc.codec`*::
 +
 .Description
 
   		The codec that is to be used when replication is enabled so that
-  		the tags are also replicated. This is used along with HFileV3 which 
+  		the tags are also replicated. This is used along with HFileV3 which
   		supports tags in them.  If tags are not used or if the hfile version used
   		is HFileV2 then KeyValueCodec can be used as the replication codec. Note that
   		using KeyValueCodecWithTags for replication when there are no tags causes no harm.
-  	
+
 +
 .Default
 `org.apache.hadoop.hbase.codec.KeyValueCodecWithTags`
 
-  
+
 [[hbase.http.staticuser.user]]
 *`hbase.http.staticuser.user`*::
 +
@@ -2233,12 +2138,12 @@ Fully qualified name of class implementing coordinated state manager.
       The user name to filter as, on static web filters
       while rendering content. An example use is the HDFS
       web UI (user to be used for browsing files).
-    
+
 +
 .Default
 `dr.stack`
 
-  
+
 [[hbase.regionserver.handler.abort.on.error.percent]]
 *`hbase.regionserver.handler.abort.on.error.percent`*::
 +
@@ -2251,4 +2156,3 @@ The percent of region server RPC threads failed to abort RS.
 .Default
 `0.5`
 
-  
\ No newline at end of file