Posted to commits@hbase.apache.org by nd...@apache.org on 2016/12/13 04:45:21 UTC

[2/3] hbase git commit: updating docs from master

updating docs from master


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/eb5d2ca7
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/eb5d2ca7
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/eb5d2ca7

Branch: refs/heads/branch-1.1
Commit: eb5d2ca74d97896d1a920389779e2b4f8fa3f905
Parents: 7acc78c
Author: Nick Dimiduk <nd...@apache.org>
Authored: Mon Dec 12 20:10:00 2016 -0800
Committer: Nick Dimiduk <nd...@apache.org>
Committed: Mon Dec 12 20:10:00 2016 -0800

----------------------------------------------------------------------
 .../asciidoc/_chapters/appendix_acl_matrix.adoc |   2 +-
 .../appendix_contributing_to_documentation.adoc |   2 +-
 src/main/asciidoc/_chapters/architecture.adoc   |  12 +-
 src/main/asciidoc/_chapters/configuration.adoc  |  62 ++++++--
 src/main/asciidoc/_chapters/cp.adoc             |  50 ++++--
 src/main/asciidoc/_chapters/developer.adoc      |   9 +-
 .../asciidoc/_chapters/getting_started.adoc     |  89 ++++++-----
 src/main/asciidoc/_chapters/hbase_apis.adoc     |   8 +-
 src/main/asciidoc/_chapters/performance.adoc    |  35 ++++-
 src/main/asciidoc/_chapters/protobuf.adoc       | 154 +++++++++++++++++++
 src/main/asciidoc/_chapters/schema_design.adoc  |  99 +++++++++++-
 .../asciidoc/_chapters/troubleshooting.adoc     |   5 +
 src/main/asciidoc/_chapters/upgrading.adoc      |  11 ++
 src/main/asciidoc/book.adoc                     |   1 +
 14 files changed, 452 insertions(+), 87 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
index 698ae82..e222875 100644
--- a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
+++ b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
@@ -100,7 +100,7 @@ In case the table goes out of date, the unit tests which check for accuracy of p
 |        | stopMaster | superuser\|global(A)
 |        | snapshot | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
 |        | listSnapshot | superuser\|global(A)\|SnapshotOwner
-|        | cloneSnapshot | superuser\|global(A)
+|        | cloneSnapshot | superuser\|global(A)\|(SnapshotOwner & TableName matches)
 |        | restoreSnapshot | superuser\|global(A)\|SnapshotOwner & (NS(A)\|TableOwner\|table(A))
 |        | deleteSnapshot | superuser\|global(A)\|SnapshotOwner
 |        | createNamespace | superuser\|global(A)

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc b/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
index ce6f835..0d68dce 100644
--- a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
+++ b/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
@@ -145,7 +145,7 @@ artifacts using `mvn clean site site:stage`, check out the `asf-site` repository
 . Remove previously-generated content using the following command:
 +
 ----
-rm -rf rm -rf *apidocs* *xref* *book* *.html *.pdf* css js
+rm -rf *apidocs* *book* *.html *.pdf* css js
 ----
 +
 WARNING: Do not remove the `0.94/` directory. To regenerate them, you must check out

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/architecture.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index cfdd638..339566a 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -670,7 +670,7 @@ if creating a table from java, or set `IN_MEMORY => true` when creating or alter
 hbase(main):003:0> create  't', {NAME => 'f', IN_MEMORY => 'true'}
 ----
 
-For more information, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/io/hfile/LruBlockCache.html[LruBlockCache source]
+For more information, see the LruBlockCache source
 
 [[block.cache.usage]]
 ==== LruBlockCache Usage
@@ -1551,7 +1551,7 @@ StoreFiles are where your data lives.
 The _HFile_ file format is based on the SSTable file described in the link:http://research.google.com/archive/bigtable.html[BigTable [2006]] paper and on Hadoop's link:http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/file/tfile/TFile.html[TFile] (The unit test suite and the compression harness were taken directly from TFile). Schubert Zhang's blog post on link:http://cloudepr.blogspot.com/2009/09/hfile-block-indexed-file-format-to.html[HFile: A Block-Indexed File Format to Store Sorted Key-Value Pairs] makes for a thorough introduction to HBase's HFile.
 Matteo Bertozzi has also put up a helpful description, link:http://th30z.blogspot.com/2011/02/hbase-io-hfile.html?spref=tw[HBase I/O: HFile].
 
-For more information, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/io/hfile/HFile.html[HFile source code].
+For more information, see the HFile source code.
 Also see <<hfilev2>> for information about the HFile v2 format that was included in 0.92.
 
 [[hfile_tool]]
@@ -1586,7 +1586,7 @@ The blocksize is configured on a per-ColumnFamily basis.
 Compression happens at the block level within StoreFiles.
 For more information on compression, see <<compression>>.
 
-For more information on blocks, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/io/hfile/HFileBlock.html[HFileBlock source code].
+For more information on blocks, see the HFileBlock source code.
 
 [[keyvalue]]
 ==== KeyValue
@@ -1613,7 +1613,7 @@ The Key is further decomposed as:
 
 KeyValue instances are _not_ split across blocks.
 For example, if there is an 8 MB KeyValue, even if the block-size is 64kb this KeyValue will be read in as a coherent block.
-For more information, see the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/KeyValue.html[KeyValue source code].
+For more information, see the KeyValue source code.
 
 [[keyvalue.example]]
 ===== Example
@@ -1741,7 +1741,7 @@ With the ExploringCompactionPolicy, major compactions happen much less frequentl
 In general, ExploringCompactionPolicy is the right choice for most situations, and thus is the default compaction policy.
 You can also use ExploringCompactionPolicy along with <<ops.stripe>>.
 
-The logic of this policy can be examined in _link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.html[hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.java]_.
+The logic of this policy can be examined in hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/ExploringCompactionPolicy.java.
 The following is a walk-through of the logic of the ExploringCompactionPolicy.
 
 
@@ -1957,7 +1957,7 @@ This section has been preserved for historical reasons and refers to the way com
 You can still use this behavior if you enable <<compaction.ratiobasedcompactionpolicy.algorithm>>. For information on the way that compactions work in HBase 0.96.x and later, see <<compaction>>.
 ====
 
-To understand the core algorithm for StoreFile selection, there is some ASCII-art in the link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/regionserver/Store.html#836[Store source code] that will serve as useful reference.
+To understand the core algorithm for StoreFile selection, there is some ASCII-art in the Store source code that will serve as useful reference.
 
 It has been copied below:
 [source]

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/configuration.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/configuration.adoc b/src/main/asciidoc/_chapters/configuration.adoc
index 4804332..6e356bc 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -225,22 +225,22 @@ Use the following legend to interpret this table:
 * "X" = not supported
 * "NT" = Not tested
 
-[cols="1,1,1,1,1,1,1", options="header"]
+[cols="1,1,1,1,1,1,1,1", options="header"]
 |===
-| | HBase-0.94.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x | HBase-1.2.x | HBase-1.3.x
-|Hadoop-1.0.x  | X | X | X | X | X | X
-|Hadoop-1.1.x | S | NT | X | X | X | X
-|Hadoop-0.23.x | S | X | X | X | X | X
-|Hadoop-2.0.x-alpha | NT | X | X | X | X | X
-|Hadoop-2.1.0-beta | NT | X | X | X | X | X
-|Hadoop-2.2.0 | NT | S | NT | NT | X  | X
-|Hadoop-2.3.x | NT | S | NT | NT | X  | X
-|Hadoop-2.4.x | NT | S | S | S | S | S
-|Hadoop-2.5.x | NT | S | S | S | S | S
-|Hadoop-2.6.0 | X | X | X | X | X | X
-|Hadoop-2.6.1+ | NT | NT | NT | NT | S | S
-|Hadoop-2.7.0 | X | X | X | X | X | X
-|Hadoop-2.7.1+ | NT | NT | NT | NT | S | S
+| | HBase-0.94.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x | HBase-1.2.x | HBase-1.3.x | HBase-2.0.x
+|Hadoop-1.0.x  | X | X | X | X | X | X | X
+|Hadoop-1.1.x | S | NT | X | X | X | X | X
+|Hadoop-0.23.x | S | X | X | X | X | X | X
+|Hadoop-2.0.x-alpha | NT | X | X | X | X | X | X
+|Hadoop-2.1.0-beta | NT | X | X | X | X | X | X
+|Hadoop-2.2.0 | NT | S | NT | NT | X  | X | X
+|Hadoop-2.3.x | NT | S | NT | NT | X  | X | X
+|Hadoop-2.4.x | NT | S | S | S | S | S | X
+|Hadoop-2.5.x | NT | S | S | S | S | S | X
+|Hadoop-2.6.0 | X | X | X | X | X | X | X
+|Hadoop-2.6.1+ | NT | NT | NT | NT | S | S | S
+|Hadoop-2.7.0 | X | X | X | X | X | X | X
+|Hadoop-2.7.1+ | NT | NT | NT | NT | S | S | S
 |===
 
 .Hadoop 2.6.x
@@ -406,6 +406,36 @@ Standalone mode is what is described in the <<quickstart,quickstart>> section.
 In standalone mode, HBase does not use HDFS -- it uses the local filesystem instead -- and it runs all HBase daemons and a local ZooKeeper all up in the same JVM.
 ZooKeeper binds to a well known port so clients may talk to HBase.
 
+[[standalone.over.hdfs]]
+==== Standalone HBase over HDFS
+A sometimes useful variation on standalone hbase has all daemons running inside the
+one JVM but, rather than persisting to the local filesystem, they persist
+to an HDFS instance.
+
+You might consider this profile when you are intent on
+a simple deploy profile, the loading is light, but the
+data must persist across node comings and goings. Writing to
+HDFS where data is replicated ensures the latter.
+
+To configure this standalone variant, edit your _hbase-site.xml_
+setting the _hbase.rootdir_ to point at a directory in your
+HDFS instance but then set _hbase.cluster.distributed_
+to _false_. For example:
+
+[source,xml]
+----
+<configuration>
+  <property>
+    <name>hbase.rootdir</name>
+    <value>hdfs://namenode.example.org:8020/hbase</value>
+  </property>
+  <property>
+    <name>hbase.cluster.distributed</name>
+    <value>false</value>
+  </property>
+</configuration>
+----
+ 
 [[distributed]]
 === Distributed
 
@@ -729,7 +759,7 @@ The following lines in the _hbase-env.sh_ file show how to set the `JAVA_HOME` e
 
 ----
 # The java implementation to use.
-export JAVA_HOME=/usr/java/jdk1.7.0/
+export JAVA_HOME=/usr/java/jdk1.8.0/
 
 # The maximum amount of heap to use. Default is left to JVM default.
 export HBASE_HEAPSIZE=4G

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/cp.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/cp.adoc b/src/main/asciidoc/_chapters/cp.adoc
index 5142337..72fd95e 100644
--- a/src/main/asciidoc/_chapters/cp.adoc
+++ b/src/main/asciidoc/_chapters/cp.adoc
@@ -184,13 +184,15 @@ WalObserver::
 <<cp_example,Examples>> provides working examples of observer coprocessors.
 
 
+
+[[cpeps]]
 === Endpoint Coprocessor
 
 Endpoint processors allow you to perform computation at the location of the data.
 See <<cp_analogies, Coprocessor Analogy>>. An example is the need to calculate a running
 average or summation for an entire table which spans hundreds of regions.
 
-In contract to observer coprocessors, where your code is run transparently, endpoint
+In contrast to observer coprocessors, where your code is run transparently, endpoint
 coprocessors must be explicitly invoked using the
 link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/client/Table.html#coprocessorService%28java.lang.Class,%20byte%5B%5D,%20byte%5B%5D,%20org.apache.hadoop.hbase.client.coprocessor.Batch.Call%29[CoprocessorService()]
 method available in
@@ -208,6 +210,18 @@ link:https://issues.apache.org/jira/browse/HBASE-5448[HBASE-5448]). To upgrade y
 HBase cluster from 0.94 or earlier to 0.96 or later, you need to reimplement your
 coprocessor.
 
+Coprocessor Endpoints should make no use of HBase internals and
+should rely only on public APIs; ideally a CPEP should depend on Interfaces
+and data structures only. This is not always possible, but be aware
+that relying on internals makes the Endpoint brittle, liable to breakage as HBase
+internals evolve. HBase internal APIs annotated as private or evolving
+do not have to respect semantic versioning rules or general Java rules on
+deprecation before removal. While generated protobuf files are
+absent the HBase audience annotations -- they are created by the
+protobuf protoc tool which knows nothing of how HBase works --
+they should be considered `@InterfaceAudience.Private` and so are liable to
+change.
+
 <<cp_example,Examples>> provides working examples of endpoint coprocessors.
 
 [[cp_loading]]
@@ -256,7 +270,7 @@ When calling out to registered observers, the framework executes their callbacks
 sorted order of their priority. +
 Ties are broken arbitrarily.
 
-. Put your code HBase's classpath. One easy way to do this is to drop the jar
+. Put your code on HBase's classpath. One easy way to do this is to drop the jar
   (containing you code and all the dependencies) into the `lib/` directory in the
   HBase installation.
 
@@ -324,10 +338,9 @@ it in HDFS. +
 https://issues.apache.org/jira/browse/HBASE-14548[HBASE-14548] allows a directory containing the jars
 or some wildcards to be specified, such as: hdfs://<namenode>:<port>/user/<hadoop-user>/ or
 hdfs://<namenode>:<port>/user/<hadoop-user>/*.jar. Please note that if a directory is specified,
-all jar files(.jar) directly in the directory are added,
-but it does not search files in the subtree rooted in the directory.
-And do not contain any wildcard if you would like to specify a directory.
-This enhancement applies to the ways of using the JAVA API as well.
+all jar files (.jar) in the directory are added. It does not search for files in sub-directories.
+Do not use a wildcard if you would like to specify a directory. This enhancement applies to
+usage via the Java API as well.
 * Class name: The full class name of the Coprocessor.
 * Priority: An integer. The framework will determine the execution sequence of all configured
 observers registered at the same hook using priorities. This field can be left blank. In that
@@ -462,10 +475,7 @@ In HBase 0.96 and newer, you can instead use the `removeCoprocessor()` method of
 
 [[cp_example]]
 == Examples
-HBase ships examples for Observer Coprocessor in
-link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.html[ZooKeeperScanPolicyObserver]
-and for Endpoint Coprocessor in
-link:http://hbase.apache.org/xref/org/apache/hadoop/hbase/coprocessor/example/RowCountEndpoint.html[RowCountEndpoint]
+HBase ships examples for Observer Coprocessor.
 
 A more detailed example is given below.
 
@@ -808,3 +818,23 @@ The metrics sampling rate as described in <<hbase_metrics>>.
 
 .Coprocessor Metrics UI
 image::coprocessor_stats.png[]
+
+== Restricting Coprocessor Usage
+
+Restricting arbitrary user coprocessors can be a big concern in multitenant environments. HBase provides a continuum of options for ensuring only expected coprocessors are running:
+
+* `hbase.coprocessor.enabled`: Enables or disables all coprocessors. This will limit the functionality of HBase, as disabling all coprocessors will disable some security providers. An example coprocessor so affected is `org.apache.hadoop.hbase.security.access.AccessController`.
+* `hbase.coprocessor.user.enabled`: Enables or disables loading coprocessors on tables (i.e. user coprocessors).
+* One can statically load coprocessors via the following tunables in `hbase-site.xml`:
+** `hbase.coprocessor.regionserver.classes`: A comma-separated list of coprocessors that are loaded by region servers
+** `hbase.coprocessor.region.classes`: A comma-separated list of RegionObserver and Endpoint coprocessors
+** `hbase.coprocessor.user.region.classes`: A comma-separated list of coprocessors that are loaded by all regions
+** `hbase.coprocessor.master.classes`: A comma-separated list of coprocessors that are loaded by the master (MasterObserver coprocessors)
+** `hbase.coprocessor.wal.classes`: A comma-separated list of WALObserver coprocessors to load
+* `hbase.coprocessor.abortonerror`: Whether to abort the daemon which has loaded the coprocessor if the coprocessor throws an error other than `IOError`. If this is set to `false` and an access controller coprocessor has a fatal error, the coprocessor will be circumvented; for this reason, in secure installations this is advised to be `true`. However, one may override this on a per-table basis for user coprocessors, to ensure they do not abort their running region server and are instead unloaded on error.
+* `hbase.coprocessor.region.whitelist.paths`: A comma-separated list, read by `org.apache.hadoop.hbase.security.access.CoprocessorWhitelistMasterObserver` when it is loaded, of paths from which coprocessors may be loaded. The following entries are supported:
+** Coprocessors on the classpath are implicitly white-listed
+** `*` to wildcard all coprocessor paths
+** An entire filesystem (e.g. `hdfs://my-cluster/`)
+** A wildcard path to be evaluated by link:https://commons.apache.org/proper/commons-io/javadocs/api-release/org/apache/commons/io/FilenameUtils.html[FilenameUtils.wildcardMatch]
+** Note: Path can specify scheme or not (e.g. `file:///usr/hbase/lib/coprocessors` or for all filesystems `/usr/hbase/lib/coprocessors`)
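
For reference alongside the Endpoint guidance above, the following is a hedged sketch of how a client invokes an Endpoint through the `Table.coprocessorService()` call. It uses the row-count example classes (`ExampleProtos`, `RowCountService`) that have shipped in `hbase-examples`; substitute whatever classes `protoc` generated from your own `.proto`. The target table name is illustrative, and the corresponding endpoint coprocessor must already be loaded on that table.

[source,java]
----
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.coprocessor.Batch;
import org.apache.hadoop.hbase.coprocessor.example.generated.ExampleProtos;
import org.apache.hadoop.hbase.ipc.BlockingRpcCallback;
import org.apache.hadoop.hbase.ipc.ServerRpcController;

public class RowCountClient {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("mytable"))) {
      // One RPC per region between the (null, null) start/end keys; results come back keyed by region.
      Map<byte[], Long> perRegion = table.coprocessorService(
          ExampleProtos.RowCountService.class, null, null,
          new Batch.Call<ExampleProtos.RowCountService, Long>() {
            @Override
            public Long call(ExampleProtos.RowCountService counter) throws IOException {
              ServerRpcController controller = new ServerRpcController();
              BlockingRpcCallback<ExampleProtos.CountResponse> callback =
                  new BlockingRpcCallback<ExampleProtos.CountResponse>();
              counter.getRowCount(controller,
                  ExampleProtos.CountRequest.getDefaultInstance(), callback);
              if (controller.failedOnException()) {
                throw controller.getFailedOn();
              }
              return callback.get().getCount();
            }
          });
      long total = 0;
      for (Long regionCount : perRegion.values()) {
        total += regionCount;
      }
      System.out.println("row count = " + total);
    }
  }
}
----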

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index ad9f3f4..910473c 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -49,9 +49,16 @@ Sign up for the dev-list and the user-list.
 See the link:http://hbase.apache.org/mail-lists.html[mailing lists] page.
 Posing questions - and helping to answer other people's questions - is encouraged! There are varying levels of experience on both lists so patience and politeness are encouraged (and please stay on topic.)
 
+[[slack]]
+=== Slack
+The Apache HBase project has its own link:http://apache-hbase.slack.com[Slack Channel] for real-time questions
+and discussion. Mail dev@hbase.apache.org to request an invite.
+
 [[irc]]
 === Internet Relay Chat (IRC)
 
+(NOTE: Our IRC channel seems to have been deprecated in favor of the above Slack channel)
+
 For real-time questions and discussions, use the `#hbase` IRC channel on the link:https://freenode.net/[FreeNode] IRC network.
 FreeNode offers a web-based client, but most people prefer a native client, and several clients are available for each operating system.
 
@@ -108,7 +115,7 @@ We encourage you to have this formatter in place in eclipse when editing HBase c
 . In Preferences, Go to `Java->Code Style->Formatter`.
 . Click btn:[Import] and browse to the location of the _hbase_eclipse_formatter.xml_ file, which is in the _dev-support/_ directory.
   Click btn:[Apply].
-. Still in Preferences, click .
+. Still in Preferences, click `Java->Editor->Save Actions`.
   Be sure the following options are selected:
 +
 * Perform the selected actions on save

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/getting_started.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc b/src/main/asciidoc/_chapters/getting_started.adoc
index 26af568..4ffae6d 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -29,45 +29,39 @@
 
 == Introduction
 
-<<quickstart,Quickstart>> will get you up and running on a single-node, standalone instance of HBase, followed by a pseudo-distributed single-machine instance, and finally a fully-distributed cluster.
+<<quickstart,Quickstart>> will get you up and running on a single-node, standalone instance of HBase.
 
 [[quickstart]]
 == Quick Start - Standalone HBase
 
-This guide describes the setup of a standalone HBase instance running against the local filesystem.
-This is not an appropriate configuration for a production instance of HBase, but will allow you to experiment with HBase.
-This section shows you how to create a table in HBase using the `hbase shell` CLI, insert rows into the table, perform put and scan operations against the table, enable or disable the table, and start and stop HBase.
-Apart from downloading HBase, this procedure should take less than 10 minutes.
-
-.Local Filesystem and Durability
-WARNING: _The following is fixed in HBase 0.98.3 and beyond. See link:https://issues.apache.org/jira/browse/HBASE-11272[HBASE-11272] and link:https://issues.apache.org/jira/browse/HBASE-11218[HBASE-11218]._
+This section describes the setup of a single-node standalone HBase.
+A _standalone_ instance has all HBase daemons -- the Master, RegionServers,
+and ZooKeeper -- running in a single JVM persisting to the local filesystem.
+It is our most basic deploy profile. We will show you how
+to create a table in HBase using the `hbase shell` CLI,
+insert rows into the table, perform put and scan operations against the
+table, enable or disable the table, and start and stop HBase.
 
-Using HBase with a local filesystem does not guarantee durability.
-The HDFS local filesystem implementation will lose edits if files are not properly closed.
-This is very likely to happen when you are experimenting with new software, starting and stopping the daemons often and not always cleanly.
-You need to run HBase on HDFS to ensure all writes are preserved.
-Running against the local filesystem is intended as a shortcut to get you familiar with how the general system works, as the very first phase of evaluation.
-See link:https://issues.apache.org/jira/browse/HBASE-3696[HBASE-3696] and its associated issues for more details about the issues of running on the local filesystem.
+Apart from downloading HBase, this procedure should take less than 10 minutes.
 
 [[loopback.ip]]
+[NOTE]
+====
 .Loopback IP - HBase 0.94.x and earlier
-NOTE: _The below advice is for hbase-0.94.x and older versions only. This is fixed in hbase-0.96.0 and beyond._
-
-Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1. Ubuntu and some other distributions default to 127.0.1.1 and this will cause problems for you. See link:http://devving.com/?p=414[Why does HBase care about /etc/hosts?] for detail
 
+Prior to HBase 0.94.x, HBase expected the loopback IP address to be 127.0.0.1.
+Ubuntu and some other distributions default to 127.0.1.1 and this will cause
+problems for you. See link:http://devving.com/?p=414[Why does HBase care about /etc/hosts?] for detail
 
-.Example /etc/hosts File for Ubuntu
-====
 The following _/etc/hosts_ file works correctly for HBase 0.94.x and earlier, on Ubuntu. Use this as a template if you run into trouble.
 [listing]
 ----
 127.0.0.1 localhost
 127.0.0.1 ubuntu.ubuntu-domain ubuntu
 ----
-
+This issue has been fixed in hbase-0.96.0 and beyond.
 ====
 
-
 === JDK Version Requirements
 
 HBase requires that a JDK be installed.
@@ -75,16 +69,13 @@ See <<java,Java>> for information about supported JDK versions.
 
 === Get Started with HBase
 
-.Procedure: Download, Configure, and Start HBase
+.Procedure: Download, Configure, and Start HBase in Standalone Mode
 . Choose a download site from this list of link:http://www.apache.org/dyn/closer.cgi/hbase/[Apache Download Mirrors].
   Click on the suggested top link.
-  This will take you to a mirror of _HBase
-  Releases_.
+  This will take you to a mirror of _HBase Releases_.
   Click on the folder named _stable_ and then download the binary file that ends in _.tar.gz_ to your local filesystem.
-  Prior to 1.x version, be sure to choose the version that corresponds with the version of Hadoop you are
-  likely to use later (in most cases, you should choose the file for Hadoop 2, which will be called
-  something like _hbase-0.98.13-hadoop2-bin.tar.gz_).
   Do not download the file ending in _src.tar.gz_ for now.
+
 . Extract the downloaded file, and change to the newly-created directory.
 +
 [source,subs="attributes"]
@@ -94,10 +85,11 @@ $ tar xzvf hbase-{Version}-bin.tar.gz
 $ cd hbase-{Version}/
 ----
 
-. For HBase 0.98.5 and later, you are required to set the `JAVA_HOME` environment variable before starting HBase.
-  Prior to 0.98.5, HBase attempted to detect the location of Java if the variables was not set.
-  You can set the variable via your operating system's usual mechanism, but HBase provides a central mechanism, _conf/hbase-env.sh_.
-  Edit this file, uncomment the line starting with `JAVA_HOME`, and set it to the appropriate location for your operating system.
+. You are required to set the `JAVA_HOME` environment variable before starting HBase.
+  You can set the variable via your operating system's usual mechanism, but HBase
+  provides a central mechanism, _conf/hbase-env.sh_.
+  Edit this file, uncomment the line starting with `JAVA_HOME`, and set it to the
+  appropriate location for your operating system.
   The `JAVA_HOME` variable should be set to a directory which contains the executable file _bin/java_.
   Most modern Linux operating systems provide a mechanism, such as /usr/bin/alternatives on RHEL or CentOS, for transparently switching between versions of executables such as Java.
   In this case, you can set `JAVA_HOME` to the directory containing the symbolic link to _bin/java_, which is usually _/usr_.
@@ -106,8 +98,6 @@ $ cd hbase-{Version}/
 JAVA_HOME=/usr
 ----
 +
-NOTE: These instructions assume that each node of your cluster uses the same configuration.
-If this is not the case, you may need to set `JAVA_HOME` separately for each node.
 
 . Edit _conf/hbase-site.xml_, which is the main HBase configuration file.
   At this time, you only need to specify the directory on the local filesystem where HBase and ZooKeeper write data.
@@ -135,17 +125,27 @@ If this is not the case, you may need to set `JAVA_HOME` separately for each nod
 ====
 +
 You do not need to create the HBase data directory.
-HBase will do this for you.
-If you create the directory, HBase will attempt to do a migration, which is not what you want.
+HBase will do this for you.  If you create the directory,
+HBase will attempt to do a migration, which is not what you want.
++
+NOTE: The _hbase.rootdir_ in the above example points to a directory
+in the _local filesystem_. The 'file:/' prefix is how we denote local filesystem.
+To home HBase on an existing instance of HDFS, set the _hbase.rootdir_ to point at a
+directory up on your instance: e.g. _hdfs://namenode.example.org:8020/hbase_.
+For more on this variant, see the section below on Standalone HBase over HDFS.
 
 . The _bin/start-hbase.sh_ script is provided as a convenient way to start HBase.
   Issue the command, and if all goes well, a message is logged to standard output showing that HBase started successfully.
   You can use the `jps` command to verify that you have one running process called `HMaster`.
   In standalone mode HBase runs all daemons within this single JVM, i.e.
   the HMaster, a single HRegionServer, and the ZooKeeper daemon.
+  Go to _http://localhost:16010_ to view the HBase Web UI.
 +
 NOTE: Java needs to be installed and available.
-If you get an error indicating that Java is not installed, but it is on your system, perhaps in a non-standard location, edit the _conf/hbase-env.sh_ file and modify the `JAVA_HOME` setting to point to the directory that contains _bin/java_ your system.
+If you get an error indicating that Java is not installed,
+but it is on your system, perhaps in a non-standard location,
+edit the _conf/hbase-env.sh_ file and modify the `JAVA_HOME`
+setting to point to the directory that contains _bin/java_ on your system.
 
 
 [[shell_exercises]]
@@ -285,12 +285,19 @@ $
 . After issuing the command, it can take several minutes for the processes to shut down.
   Use the `jps` to be sure that the HMaster and HRegionServer processes are shut down.
 
-[[quickstart_pseudo]]
-=== Intermediate - Pseudo-Distributed Local Install
+The above has shown you how to start and stop a standalone instance of HBase.
+In the next sections we give a quick overview of other modes of HBase deployment.
 
-After working your way through <<quickstart,quickstart>>, you can re-configure HBase to run in pseudo-distributed mode.
-Pseudo-distributed mode means that HBase still runs completely on a single host, but each HBase daemon (HMaster, HRegionServer, and ZooKeeper) runs as a separate process.
-By default, unless you configure the `hbase.rootdir` property as described in <<quickstart,quickstart>>, your data is still stored in _/tmp/_.
+[[quickstart_pseudo]]
+=== Pseudo-Distributed Local Install
+
+After working your way through <<quickstart,quickstart>> standalone mode,
+you can re-configure HBase to run in pseudo-distributed mode.
+Pseudo-distributed mode means that HBase still runs completely on a single host,
+but each HBase daemon (HMaster, HRegionServer, and ZooKeeper) runs as a separate process:
+in standalone mode all daemons ran in one JVM process/instance.
+By default, unless you configure the `hbase.rootdir` property as described in
+<<quickstart,quickstart>>, your data is still stored in _/tmp/_.
 In this walk-through, we store your data in HDFS instead, assuming you have HDFS available.
 You can skip the HDFS configuration to continue storing your data in the local filesystem.
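
The quickstart above drives everything from the `hbase shell`. Purely for reference, here is a hedged sketch of the same create/put/scan exercise through the Java client API against a standalone instance; the table name, column family, and values are illustrative.

[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class QuickstartExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      TableName name = TableName.valueOf("test");
      HTableDescriptor desc = new HTableDescriptor(name);
      desc.addFamily(new HColumnDescriptor("cf"));
      admin.createTable(desc);                               // create 'test', 'cf'

      try (Table table = connection.getTable(name)) {
        Put put = new Put(Bytes.toBytes("row1"));            // put 'test', 'row1', 'cf:a', 'value1'
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("a"), Bytes.toBytes("value1"));
        table.put(put);

        try (ResultScanner scanner = table.getScanner(new Scan())) {  // scan 'test'
          for (Result result : scanner) {
            System.out.println(result);
          }
        }
      }
    }
  }
}
----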
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/hbase_apis.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase_apis.adoc b/src/main/asciidoc/_chapters/hbase_apis.adoc
index 6d2777b..f27c9dc 100644
--- a/src/main/asciidoc/_chapters/hbase_apis.adoc
+++ b/src/main/asciidoc/_chapters/hbase_apis.adoc
@@ -43,8 +43,6 @@ See <<external_apis>> for more information.
 ----
 package com.example.hbase.admin;
 
-package util;
-
 import java.io.IOException;
 
 import org.apache.hadoop.conf.Configuration;
@@ -77,7 +75,7 @@ public class Example {
          Admin admin = connection.getAdmin()) {
 
       HTableDescriptor table = new HTableDescriptor(TableName.valueOf(TABLE_NAME));
-      table.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.SNAPPY));
+      table.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.NONE));
 
       System.out.print("Creating table. ");
       createOrOverwrite(admin, table);
@@ -90,12 +88,12 @@ public class Example {
          Admin admin = connection.getAdmin()) {
 
       TableName tableName = TableName.valueOf(TABLE_NAME);
-      if (admin.tableExists(tableName)) {
+      if (!admin.tableExists(tableName)) {
         System.out.println("Table does not exist.");
         System.exit(-1);
       }
 
-      HTableDescriptor table = new HTableDescriptor(tableName);
+      HTableDescriptor table = admin.getTableDescriptor(tableName);
 
       // Update existing table
       HColumnDescriptor newColumn = new HColumnDescriptor("NEWCF");

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/performance.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/performance.adoc b/src/main/asciidoc/_chapters/performance.adoc
index 5f27640..114754f 100644
--- a/src/main/asciidoc/_chapters/performance.adoc
+++ b/src/main/asciidoc/_chapters/performance.adoc
@@ -156,6 +156,10 @@ See <<block.cache>>
 
 See <<recommended_configurations>>.
 
+[[perf.99th.percentile]]
+=== Improving the 99th Percentile
+See <<hedged.reads>>.
+
 [[perf.compactions.and.splits]]
 === Managing Compactions
 
@@ -751,15 +755,35 @@ Most people should leave this alone.
 Default = 7, or can collapse to at least 1/128th of original size.
 See the _Development Process_ section of the document link:https://issues.apache.org/jira/secure/attachment/12444007/Bloom_Filters_in_HBase.pdf[BloomFilters in HBase] for more on what this option means.
 
+[[hedged.reads]]
 === Hedged Reads
 
-Hedged reads are a feature of HDFS, introduced in link:https://issues.apache.org/jira/browse/HDFS-5776[HDFS-5776].
+Hedged reads are a feature of HDFS, introduced in Hadoop 2.4.0 with link:https://issues.apache.org/jira/browse/HDFS-5776[HDFS-5776].
 Normally, a single thread is spawned for each read request.
-However, if hedged reads are enabled, the client waits some configurable amount of time, and if the read does not return, the client spawns a second read request, against a different block replica of the same data.
-Whichever read returns first is used, and the other read request is discarded.
-Hedged reads can be helpful for times where a rare slow read is caused by a transient error such as a failing disk or flaky network connection.
+However, if hedged reads are enabled, the client waits some
+configurable amount of time, and if the read does not return,
+the client spawns a second read request, against a different
+block replica of the same data. Whichever read returns first is
+used, and the other read request is discarded.
+
+Hedged reads are "...very good at eliminating outlier datanodes, which
+in turn makes them very good choice for latency sensitive setups.
+But, if you are looking for maximizing throughput, hedged reads tend to
+create load amplification as things get slower in general. In short,
+the thing to watch out for is the non-graceful performance degradation
+when you are running close a certain throughput threshold." (Quote from Ashu Pachauri in HBASE-17083).
+
+Other concerns to keep in mind while running with hedged reads enabled
+include:
 
-Because an HBase RegionServer is a HDFS client, you can enable hedged reads in HBase, by adding the following properties to the RegionServer's hbase-site.xml and tuning the values to suit your environment.
+* They may lead to network congestion. See link:https://issues.apache.org/jira/browse/HBASE-17083[HBASE-17083]
+* Make sure you set the thread pool large enough so that blocking on the pool does not become a bottleneck (again, see link:https://issues.apache.org/jira/browse/HBASE-17083[HBASE-17083])
+
+(The above list comes from Yu Li's comments in HBASE-17083.)
+
+Because an HBase RegionServer is a HDFS client, you can enable hedged
+reads in HBase, by adding the following properties to the RegionServer's
+hbase-site.xml and tuning the values to suit your environment.
 
 .Configuration for Hedged Reads
 * `dfs.client.hedged.read.threadpool.size` - the number of threads dedicated to servicing hedged reads.
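
As a hedged sketch, the same hedged-read keys the text above says to put in the RegionServer's _hbase-site.xml_ can also be set through the `Configuration` API; the values here are placeholders, not recommendations.

[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HedgedReadsConfig {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Threads dedicated to servicing hedged reads; 0 (the default) leaves the feature off.
    conf.setInt("dfs.client.hedged.read.threadpool.size", 20);
    // How long to wait (milliseconds) before spawning the second, hedged read.
    conf.setLong("dfs.client.hedged.read.threshold.millis", 10);
    return conf;
  }
}
----
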
@@ -790,6 +814,7 @@ See <<hbase_metrics>>  for more information.
 * hedgeReadOpsWin - the number of times the hedged read thread was faster than the original thread.
   This could indicate that a given RegionServer is having trouble servicing requests.
 
+
 [[perf.deleting]]
 == Deleting from HBase
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/protobuf.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/protobuf.adoc b/src/main/asciidoc/_chapters/protobuf.adoc
new file mode 100644
index 0000000..1c2cc47
--- /dev/null
+++ b/src/main/asciidoc/_chapters/protobuf.adoc
@@ -0,0 +1,154 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+
+[[protobuf]]
+= Protobuf in HBase
+:doctype: book
+:numbered:
+:toc: left
+:icons: font
+:experimental:
+
+
+== Protobuf
+HBase uses Google's link:https://developers.google.com/protocol-buffers/[protobufs] wherever
+it persists metadata -- in the tail of hfiles or Cells written by
+HBase into the system hbase:meta table or when HBase writes znodes
+to zookeeper, etc. -- and when it passes objects over the wire making
+xref:hbase.rpc[RPCs]. HBase uses protobufs to describe the RPC
+Interfaces (Services) we expose to clients, for example the `Admin` and `Client`
+Interfaces that the RegionServer fields,
+or specifying the arbitrary extensions added by developers via our
+xref:cp[Coprocessor Endpoint] mechanism.
+
+In this chapter we go into detail for  developers who are looking to
+understand better how it all works. This chapter is of particular
+use to those who would amend or extend HBase functionality.
+
+With protobuf, you describe serializations and services in a `.proto` file.
+You then feed these descriptors to a protobuf tool, the `protoc` binary,
+to generate classes that can marshall and unmarshall the described serializations
+and field the specified Services.
+
+See the `README.txt` in the HBase sub-modules for details on how
+to run the class generation on a per-module basis;
+e.g. see `hbase-protocol/README.txt` for how to generate protobuf classes
+in the hbase-protocol module.
+
+In HBase, `.proto` files are either in the `hbase-protocol` module (a module
+dedicated to hosting the common proto files and the protoc-generated classes
+that HBase uses internally when serializing metadata) or, for extensions to hbase
+such as REST or Coprocessor Endpoints that need their own descriptors, located
+inside the extension's hosting module: e.g. `hbase-rest`
+is home to the REST proto files and the `hbase-rsgroup` table grouping
+Coprocessor Endpoint has all protos that have to do with table grouping.
+
+Protos are hosted by the module that makes use of them. While
+this makes it so generation of protobuf classes is distributed, done
+per module, we do it this way so modules encapsulate all to do with
+the functionality they bring to hbase.
+
+Extensions, whether REST or Coprocessor Endpoints, will make use
+of core HBase protos found back in the hbase-protocol module. They'll
+use these core protos when they want to serialize a Cell or a Put or
+refer to a particular node via ServerName, etc., as part of providing the
+CPEP Service. Going forward, after the release of hbase-2.0.0, this
+practice needs to wither. We'll make plain why in the later
+xref:shaded.protobuf[hbase-2.0.0] section.
+
+[[shaded.protobuf]]
+=== hbase-2.0.0 and the shading of protobufs (HBASE-15638)
+
+As of hbase-2.0.0, our protobuf usage gets a little more involved. HBase
+core protobuf references are offset so as to refer to a private,
+bundled protobuf. Core stops referring to protobuf
+classes at com.google.protobuf.* and instead references protobuf at
+the HBase-specific offset
+org.apache.hadoop.hbase.shaded.com.google.protobuf.*.  We do this indirection
+so hbase core can evolve its protobuf version independent of whatever our
+dependencies rely on. For instance, HDFS serializes using protobuf.
+HDFS is on our CLASSPATH. Without the above described indirection, our
+protobuf versions would have to align. HBase would be stuck
+on the HDFS protobuf version until HDFS decided to upgrade. HBase
+and HDFS versions would be tied.
+
+We had to move on from protobuf-2.5.0 because we need facilities
+added in protobuf-3.1.0; in particular being able to save on
+copies and avoiding bringing protobufs onheap for
+serialization/deserialization.
+
+In hbase-2.0.0, we introduced a new module, `hbase-protocol-shaded`
+inside which we contained all to do with protobuf and its subsequent
+relocation/shading. This module is in essence a copy of much of the old
+`hbase-protocol` but with an extra shading/relocation step (see the `README.txt`
+and the `pom.xml` in this module for more on how to trigger this
+effect and how it all works). Core was moved to depend on this new
+module.
+
+That said, a complication arises around Coprocessor Endpoints (CPEPs).
+CPEPs depend on public HBase APIs that reference protobuf classes at
+`com.google.protobuf.*` explicitly. For example, in our Table Interface
+we have the below as the means by which you obtain a CPEP Service
+to make invocations against:
+
+[source,java]
+----
+...
+  <T extends com.google.protobuf.Service,R> Map<byte[],R> coprocessorService(
+   Class<T> service, byte[] startKey, byte[] endKey,
+     org.apache.hadoop.hbase.client.coprocessor.Batch.Call<T,R> callable)
+  throws com.google.protobuf.ServiceException, Throwable
+----
+
+Existing CPEPs will have made reference to core HBase protobufs
+specifying ServerNames or carrying Mutations.
+So as to continue being able to service CPEPs and their references
+to `com.google.protobuf.*` across the upgrade to hbase-2.0.0 and beyond,
+HBase needs to be able to deal with both
+`com.google.protobuf.*` references and its internal offset
+`org.apache.hadoop.hbase.shaded.com.google.protobuf.*` protobufs.
+
+The `hbase-protocol-shaded` module hosts all
+protobufs used by HBase core as well as the internal shaded version of
+protobufs that hbase depends on. hbase-client and hbase-server, etc.,
+depend on this module.
+
+But for the vestigial CPEP references to the (non-shaded) content of
+`hbase-protocol`, we keep around most of this module going forward
+just so it is available to CPEPs. Retaining most of `hbase-protocol`
+makes for overlapping, 'duplicated' proto instances where some exist as
+non-shaded/non-relocated here in their old module
+location but also in the new location, shaded under
+`hbase-protocol-shaded`. In other words, there is an instance
+of the generated protobuf class
+`org.apache.hadoop.hbase.protobuf.generated.ServerName`
+in hbase-protocol and another generated instance that is the same in all
+regards except its protobuf references are to the internal shaded
+version at `org.apache.hadoop.hbase.shaded.protobuf.generated.ServerName`
+(note the 'shaded' addition in the middle of the package name).
+
+If you extend a proto in `hbase-protocol-shaded` for  internal use,
+consider extending it also in
+`hbase-protocol` (and regenerating).
+
+Going forward, we will provide a new module of common types for use
+by CPEPs that will have the same guarantees against change as does our
+public API. TODO.
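
To make the 'duplicated proto instances' point above concrete, here is a hedged sketch of the two parallel generated `ServerName` messages; the outer-class and package layout shown is inferred from the description above and may differ in detail from what protoc actually emits in your version.

[source,java]
----
// Legacy, un-relocated generated class: what existing CPEPs compile against.
org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName legacy =
    org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.ServerName.newBuilder()
        .setHostName("rs1.example.org").setPort(16020).setStartCode(1L).build();

// Shaded, relocated generated class: what hbase core itself uses after hbase-2.0.0.
org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ServerName shaded =
    org.apache.hadoop.hbase.shaded.protobuf.generated.HBaseProtos.ServerName.newBuilder()
        .setHostName("rs1.example.org").setPort(16020).setStartCode(1L).build();

// The two types serialize identical bytes but are unrelated as far as the Java type
// system is concerned; you cannot pass one where the other is expected, which is why
// the old hbase-protocol module is kept around for CPEP compatibility.
----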

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/schema_design.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc b/src/main/asciidoc/_chapters/schema_design.adoc
index 7dc568a..7b85d15 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -1110,4 +1110,101 @@ If you don't have time to build it both ways and compare, my advice would be to
 [[schema.ops]]
 == Operational and Performance Configuration Options
 
-See the Performance section <<perf.schema,perf.schema>> for more information operational and performance schema design options, such as Bloom Filters, Table-configured regionsizes, compression, and blocksizes.
+===  Tune HBase Server RPC Handling
+
+* Set `hbase.regionserver.handler.count` (in `hbase-site.xml`) to cores x spindles for concurrency.
+* Optionally, split the call queues into separate read and write queues for differentiated service. The parameter `hbase.ipc.server.callqueue.handler.factor` specifies the number of call queues:
+- `0` means a single shared queue
+- `1` means one queue for each handler.
+- A value between `0` and `1` allocates the number of queues proportionally to the number of handlers. For instance, a value of `.5` shares one queue between each two handlers.
+* Use `hbase.ipc.server.callqueue.read.ratio` (`hbase.ipc.server.callqueue.read.share` in 0.98) to split the call queues into read and write queues:
+- `0.5` means there will be the same number of read and write queues
+- `< 0.5` for more read than write
+- `> 0.5` for more write than read
+* Set `hbase.ipc.server.callqueue.scan.ratio` (HBase 1.0+)  to split read call queues into small-read and long-read queues:
+- 0.5 means that there will be the same number of short-read and long-read queues
+- `< 0.5` for more short-read
+- `> 0.5` for more long-read
+
+===  Disable Nagle for RPC
+
+Disable Nagle's algorithm. Delayed ACKs can add up to ~200ms to RPC round trip time. Set the following parameters:
+
+* In Hadoop's `core-site.xml`:
+- `ipc.server.tcpnodelay = true`
+- `ipc.client.tcpnodelay = true`
+* In HBase's `hbase-site.xml`:
+- `hbase.ipc.client.tcpnodelay = true`
+- `hbase.ipc.server.tcpnodelay = true`
+
+===  Limit Server Failure Impact
+
+Detect regionserver failure as fast as reasonable. Set the following parameters:
+
+* In `hbase-site.xml`, set `zookeeper.session.timeout` to 30 seconds or less to bound failure detection (20-30 seconds is a good start).
+* Detect and avoid unhealthy or failed HDFS DataNodes: in `hdfs-site.xml` and `hbase-site.xml`, set the following parameters:
+- `dfs.namenode.avoid.read.stale.datanode = true`
+- `dfs.namenode.avoid.write.stale.datanode = true`
+
+===  Optimize on the Server Side for Low Latency
+
+* Skip the network for local blocks. In `hbase-site.xml`, set the following parameters:
+- `dfs.client.read.shortcircuit = true`
+- `dfs.client.read.shortcircuit.buffer.size = 131072` (Important to avoid OOME)
+* Ensure data locality. In `hbase-site.xml`, set `hbase.hstore.min.locality.to.skip.major.compact = 0.7` (Meaning that 0.7 \<= n \<= 1)
+* Make sure DataNodes have enough handlers for block transfers. In `hdfs-site.xml`, set the following parameters:
+- `dfs.datanode.max.xcievers >= 8192`
+- `dfs.datanode.handler.count =` number of spindles
+
+===  JVM Tuning
+
+====  Tune JVM GC for low collection latencies
+
+* Use the CMS collector: `-XX:+UseConcMarkSweepGC`
+* Keep eden space as small as possible to minimize average collection time. Example:
+
+    -XX:CMSInitiatingOccupancyFraction=70
+
+* Optimize for low collection latency rather than throughput: `-Xmn512m`
+* Collect eden in parallel: `-XX:+UseParNewGC`
+*  Avoid collection under pressure: `-XX:+UseCMSInitiatingOccupancyOnly`
+* Limit per request scanner result sizing so everything fits into survivor space but doesn't tenure. In `hbase-site.xml`, set `hbase.client.scanner.max.result.size` to 1/8th of eden space (with `-Xmn512m` this is ~51MB)
+* Set `max.result.size` x `handler.count` less than survivor space
+
+====  OS-Level Tuning
+
+* Turn transparent huge pages (THP) off:
+
+  echo never > /sys/kernel/mm/transparent_hugepage/enabled
+  echo never > /sys/kernel/mm/transparent_hugepage/defrag
+
+* Set `vm.swappiness = 0`
+* Set `vm.min_free_kbytes` to at least 1GB (8GB on larger memory systems)
+* Disable NUMA zone reclaim with `vm.zone_reclaim_mode = 0`
+
+===  Special Cases
+
+====  For applications where failing quickly is better than waiting
+
+*  In `hbase-site.xml` on the client side, set the following parameters:
+- Set `hbase.client.pause = 1000`
+- Set `hbase.client.retries.number = 3`
+- If you want to ride over splits and region moves, increase `hbase.client.retries.number` substantially (>= 20)
+- Set the RecoverableZookeeper retry count: `zookeeper.recovery.retry = 1` (no retry)
+* In `hbase-site.xml` on the server side, set the Zookeeper session timeout for detecting server failures: `zookeeper.session.timeout` <= 30 seconds (20-30 is good).
+
+====  For applications that can tolerate slightly out of date information
+
+*HBase timeline consistency (HBASE-10070)*
+With read replicas enabled, read-only copies of regions (replicas) are distributed over the cluster. One RegionServer services the default or primary replica, which is the only replica that can service writes. Other RegionServers serve the secondary replicas, follow the primary RegionServer, and only see committed updates. The secondary replicas are read-only, but can serve reads immediately while the primary is failing over, cutting read availability blips from seconds to milliseconds. Phoenix supports timeline consistency as of 4.4.0.
+Tips:
+
+* Deploy HBase 1.0.0 or later.
+* Enable timeline consistent replicas on the server side.
+* Use one of the following methods to set timeline consistency:
+- Use `ALTER SESSION SET CONSISTENCY = 'TIMELINE'`
+- Set the connection property `Consistency` to `timeline` in the JDBC connect string
+
+=== More Information
+
+See the Performance section <<perf.schema,perf.schema>> for more information about operational and performance schema design options, such as Bloom Filters, Table-configured regionsizes, compression, and blocksizes.
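
For the timeline-consistency reads described above, a hedged sketch of the client-side call (HBase 1.0+); the table and row key are illustrative, and read replicas are assumed to be already enabled on the table.

[source,java]
----
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TimelineRead {
  public static void main(String[] args) throws Exception {
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = connection.getTable(TableName.valueOf("t1"))) {
      Get get = new Get(Bytes.toBytes("row1"));
      get.setConsistency(Consistency.TIMELINE); // default is Consistency.STRONG
      Result result = table.get(get);
      if (result.isStale()) {
        // Served by a secondary replica; the data may lag the primary slightly.
      }
      System.out.println(result);
    }
  }
}
----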

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index ce47423..c6253b8 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -233,8 +233,13 @@ Take some time crafting your question.
 See link:http://www.mikeash.com/getting_answers.html[Getting Answers] for ideas on crafting good questions.
 A quality question that includes all context and exhibits evidence the author has tried to find answers in the manual and out on lists is more likely to get a prompt response.
 
+[[trouble.resources.slack]]
+=== Slack
+See the Apache HBase channel on Slack: http://apache-hbase.slack.com
+
 [[trouble.resources.irc]]
 === IRC
+(You will probably get a more prompt response on the Slack channel)
 
 #hbase on irc.freenode.net
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index 9552024..b0a5565 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -226,6 +226,17 @@ NOTE: You cannot do a <<hbase.rolling.upgrade,rolling upgrade>> from 0.96.x to 1
 
 There are no known issues running a <<hbase.rolling.upgrade,rolling upgrade>> from HBase 0.98.x to HBase 1.0.0.
 
+[[upgrade1.0.scanner.caching]]
+==== Scanner Caching has Changed
+.From 0.98.x to 1.x
+In hbase-1.x, the default Scan caching (the number of rows fetched per RPC) changed.
+Where in 0.98.x it defaulted to 100, in later HBase versions the
+default became `Integer.MAX_VALUE`. Not setting a cache size can make
+for Scans that run for a long time server-side, especially if
+they are running with stringent filtering. See
+link:https://issues.apache.org/jira/browse/HBASE-16973[Revisiting default value for hbase.client.scanner.caching]
+for further discussion.
+
 [[upgrade1.0.from.0.94]]
 ==== Upgrading to 1.0 from 0.94
 You cannot rolling upgrade from 0.94.x to 1.x.x.  You must stop your cluster, install the 1.x.x software, run the migration described at <<executing.the.0.96.upgrade>> (substituting 1.x.x. wherever we make mention of 0.96.x in the section below), and then restart. Be sure to upgrade your ZooKeeper if it is a version less than the required 3.4.x.

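Given the scanner caching change noted above, a hedged sketch of bounding the per-RPC row count explicitly on the client; the value 100 simply mirrors the old 0.98.x default, and `table` is assumed to be an already-open `Table`.

[source,java]
----
// Sketch only: 'table' is an open org.apache.hadoop.hbase.client.Table.
Scan scan = new Scan();
scan.setCaching(100); // rows fetched per scanner RPC; bounds server-side work per call
try (ResultScanner scanner = table.getScanner(scan)) {
  for (Result r : scanner) {
    // process each row
  }
}
----
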
http://git-wip-us.apache.org/repos/asf/hbase/blob/eb5d2ca7/src/main/asciidoc/book.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index 2209b4f..e5898d5 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -73,6 +73,7 @@ include::_chapters/case_studies.adoc[]
 include::_chapters/ops_mgt.adoc[]
 include::_chapters/developer.adoc[]
 include::_chapters/unit_testing.adoc[]
+include::_chapters/protobuf.adoc[]
 include::_chapters/zookeeper.adoc[]
 include::_chapters/community.adoc[]