Posted to commits@hbase.apache.org by ap...@apache.org on 2015/05/27 23:00:29 UTC

[1/2] hbase git commit: Update POM and CHANGES.txt for 0.98.13

Repository: hbase
Updated Branches:
  refs/heads/0.98 090d89f74 -> dd8e926a7


Update POM and CHANGES.txt for 0.98.13


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ab0a9b9f
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ab0a9b9f
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ab0a9b9f

Branch: refs/heads/0.98
Commit: ab0a9b9f35a997201e3cb0cf418107a260592154
Parents: 090d89f
Author: Andrew Purtell <ap...@apache.org>
Authored: Wed May 27 13:40:40 2015 -0700
Committer: Andrew Purtell <ap...@apache.org>
Committed: Wed May 27 13:40:40 2015 -0700

----------------------------------------------------------------------
 CHANGES.txt                  | 94 +++++++++++++++++++++++++++++++++++++++
 hbase-annotations/pom.xml    |  2 +-
 hbase-assembly/pom.xml       |  2 +-
 hbase-checkstyle/pom.xml     |  4 +-
 hbase-client/pom.xml         |  2 +-
 hbase-common/pom.xml         |  2 +-
 hbase-examples/pom.xml       |  2 +-
 hbase-hadoop-compat/pom.xml  |  2 +-
 hbase-hadoop1-compat/pom.xml |  2 +-
 hbase-hadoop2-compat/pom.xml |  2 +-
 hbase-it/pom.xml             |  2 +-
 hbase-prefix-tree/pom.xml    |  2 +-
 hbase-protocol/pom.xml       |  2 +-
 hbase-rest/pom.xml           |  2 +-
 hbase-server/pom.xml         |  2 +-
 hbase-shell/pom.xml          |  2 +-
 hbase-testing-util/pom.xml   |  2 +-
 hbase-thrift/pom.xml         |  2 +-
 pom.xml                      |  2 +-
 19 files changed, 113 insertions(+), 19 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index 269e8f5..8eb6574 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,5 +1,99 @@
 HBase Change Log
 
+Release 0.98.13 - 6/4/2015
+
+** Sub-task
+    * [HBASE-13035] - [0.98] Backport HBASE-12867 - Shell does not support custom replication endpoint specification
+    * [HBASE-13494] - 0.98: Remove remove(K, V) from type PoolMap<K,V>
+    * [HBASE-13563] - Add missing table owner to AC tests.
+    * [HBASE-13579] - Avoid isCellTTLExpired() for NO-TAG cases
+    * [HBASE-13658] - Improve the test run time for TestAccessController class
+    * [HBASE-13752] - Temporarily disable TestCorruptedRegionStoreFile on 0.98
+
+** Bug
+    * [HBASE-8725] - Add total time RPC call metrics
+    * [HBASE-11658] - Piped commands to hbase shell should return non-zero if shell command failed.
+    * [HBASE-12413] - Mismatch in the equals and hashcode methods of KeyValue
+    * [HBASE-13084] - Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey
+    * [HBASE-13200] - Improper configuration can leads to endless lease recovery during failover
+    * [HBASE-13217] - Procedure fails due to ZK issue
+    * [HBASE-13301] - Possible memory leak in BucketCache
+    * [HBASE-13312] - SmallScannerCallable does not increment scan metrics
+    * [HBASE-13325] - Protocol Buffers 2.5 no longer available for download on code.google.com
+    * [HBASE-13333] - Renew Scanner Lease without advancing the RegionScanner
+    * [HBASE-13377] - Canary may generate false alarm on the first region when there are many delete markers
+    * [HBASE-13382] - IntegrationTestBigLinkedList should use SecureRandom
+    * [HBASE-13417] - batchCoprocessorService() does not handle NULL keys
+    * [HBASE-13430] - HFiles that are in use by a table cloned from a snapshot may be deleted when that snapshot is deleted
+    * [HBASE-13437] - ThriftServer leaks ZooKeeper connections
+    * [HBASE-13457] - SnapshotExistsException doesn't honor the DoNotRetry
+    * [HBASE-13471] - Fix a possible infinite loop in doMiniBatchMutation
+    * [HBASE-13473] - deleted cells come back alive after the stripe compaction
+    * [HBASE-13475] - Small spelling mistake in region_mover#isSuccessfulScan causes NoMethodError
+    * [HBASE-13477] - Create metrics on failed requests
+    * [HBASE-13482] - Phoenix is failing to scan tables on secure environments. 
+    * [HBASE-13490] - foreground daemon start re-executes ulimit output
+    * [HBASE-13491] - Issue in FuzzyRowFilter#getNextForFuzzyRule
+    * [HBASE-13526] - TestRegionServerReportForDuty can be flaky: hang or timeout
+    * [HBASE-13528] - A bug on selecting compaction pool
+    * [HBASE-13546] - NPE on region server status page if all masters are down
+    * [HBASE-13564] - Master MBeans are not published
+    * [HBASE-13585] - HRegionFileSystem#splitStoreFile() finishes without closing the file handle in some situation
+    * [HBASE-13592] - RegionServer sometimes gets stuck during shutdown in case of cache flush failures
+    * [HBASE-13600] - check_compatibility.sh should ignore shaded jars
+    * [HBASE-13601] - Connection leak during log splitting
+    * [HBASE-13604] - bin/hbase mapredcp does not include yammer-metrics jar
+    * [HBASE-13608] - 413 Error with Stargate through Knox, using AD, SPNEGO, and Pre-Auth
+    * [HBASE-13612] - TestRegionFavoredNodes doesn't guard against setup failure
+    * [HBASE-13618] - ReplicationSource is too eager to remove sinks
+    * [HBASE-13625] - Use HDFS for HFileOutputFormat2 partitioner's path
+    * [HBASE-13628] - Use AtomicLong as size in BoundedConcurrentLinkedQueue
+    * [HBASE-13632] - Backport HBASE-13368 to branch-1 and 0.98
+    * [HBASE-13635] - Regions stuck in transition because master is incorrectly assumed dead
+    * [HBASE-13651] - Handle StoreFileScanner FileNotFoundException
+    * [HBASE-13662] - RSRpcService.scan() throws an OutOfOrderScannerNext if the scan has a retriable failure
+    * [HBASE-13668] - TestFlushRegionEntry is flaky
+    * [HBASE-13703] - ReplicateContext should not be a member of ReplicationSource
+    * [HBASE-13711] - Provide an API to set min and max versions in HColumnDescriptor
+    * [HBASE-13712] - Backport HBASE-13199 to branch-1
+    * [HBASE-13721] - Improve shell scan performances when using LIMIT
+    * [HBASE-13727] - Codehaus repository is out of service
+    * [HBASE-13731] - TestReplicationAdmin should clean up MiniZKCluster resource
+    * [HBASE-13734] - Improper timestamp checking with VisibilityScanDeleteTracker
+    * [HBASE-13746] - list_replicated_tables command is not listing table in hbase shell.
+    * [HBASE-13757] - TestMultiParallel (and others) failing on 0.98 since HBASE-13712
+    * [HBASE-13767] - Allow ZKAclReset to set and not just clear ZK ACLs
+    * [HBASE-13768] - ZooKeeper znodes are bootstrapped with insecure ACLs in a secure configuration
+
+** Improvement
+    * [HBASE-12415] - Add add(byte[][] arrays) to Bytes.
+    * [HBASE-12987] - HBCK should print status while scanning over many regions
+    * [HBASE-13122] - Improve efficiency for return codes of some filters
+    * [HBASE-13132] - Improve RemoveColumn action debug message 
+    * [HBASE-13216] - Add version info in RPC connection header
+    * [HBASE-13350] - Add a debug-warn if we fail HTD checks even if table.sanity.checks is false
+    * [HBASE-13366] - Throw DoNotRetryIOException instead of read only IOException
+    * [HBASE-13369] - Expose scanNext stats to region server level
+    * [HBASE-13420] - RegionEnvironment.offerExecutionLatency Blocks Threads under Heavy Load
+    * [HBASE-13431] - Allow to skip store file range check based on column family while creating reference files in HRegionFileSystem#splitStoreFile
+    * [HBASE-13436] - Include user name in ADE for scans
+    * [HBASE-13534] - Change HBase master WebUI to explicitly mention if it is a backup master
+    * [HBASE-13550] - [Shell] Support unset of a list of table attributes
+    * [HBASE-13671] - More classes to add to the invoking repository of org.apache.hadoop.hbase.mapreduce.driver
+    * [HBASE-13677] - RecoverableZookeeper WARNs on expected events
+    * [HBASE-13684] - Allow mlockagent to be used when not starting as root
+    * [HBASE-13780] - Default to 700 for HDFS root dir permissions for secure deployments
+
+** New Feature
+    * [HBASE-13412] - Region split decisions should have jitter
+
+** Task
+    * [HBASE-13610] - Backport HBASE-13222 (Provide means of non-destructive balancer inspection) to 0.98
+
+** Test
+    * [HBASE-13413] - Create an integration test for Replication
+
+
 Release 0.98.12 - 4/17/2015
 
 ** Sub-task

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-annotations/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-annotations/pom.xml b/hbase-annotations/pom.xml
index 9aa4bc5..0076c75 100644
--- a/hbase-annotations/pom.xml
+++ b/hbase-annotations/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-assembly/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-assembly/pom.xml b/hbase-assembly/pom.xml
index a15a249..cd29e57 100644
--- a/hbase-assembly/pom.xml
+++ b/hbase-assembly/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-assembly</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-checkstyle/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-checkstyle/pom.xml b/hbase-checkstyle/pom.xml
index d0f05e5..6279972 100644
--- a/hbase-checkstyle/pom.xml
+++ b/hbase-checkstyle/pom.xml
@@ -24,14 +24,14 @@
 <modelVersion>4.0.0</modelVersion>
 <groupId>org.apache.hbase</groupId>
 <artifactId>hbase-checkstyle</artifactId>
-<version>0.98.13-SNAPSHOT</version>
+<version>0.98.13</version>
 <name>HBase - Checkstyle</name>
 <description>Module to hold Checkstyle properties for HBase.</description>
 
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-client/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-client/pom.xml b/hbase-client/pom.xml
index 67561c2..a14f808 100644
--- a/hbase-client/pom.xml
+++ b/hbase-client/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-common/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-common/pom.xml b/hbase-common/pom.xml
index d2e0ece..ce0b69b 100644
--- a/hbase-common/pom.xml
+++ b/hbase-common/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-examples/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-examples/pom.xml b/hbase-examples/pom.xml
index 2330a47..7cf43bb 100644
--- a/hbase-examples/pom.xml
+++ b/hbase-examples/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-examples</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-hadoop-compat/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-hadoop-compat/pom.xml b/hbase-hadoop-compat/pom.xml
index 80819bd..9c6ba4c 100644
--- a/hbase-hadoop-compat/pom.xml
+++ b/hbase-hadoop-compat/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>0.98.13-SNAPSHOT</version>
+        <version>0.98.13</version>
         <relativePath>..</relativePath>
     </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-hadoop1-compat/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-hadoop1-compat/pom.xml b/hbase-hadoop1-compat/pom.xml
index 8094e28..5e5356b 100644
--- a/hbase-hadoop1-compat/pom.xml
+++ b/hbase-hadoop1-compat/pom.xml
@@ -21,7 +21,7 @@ limitations under the License.
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-hadoop2-compat/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-hadoop2-compat/pom.xml b/hbase-hadoop2-compat/pom.xml
index 139aa38..b768852 100644
--- a/hbase-hadoop2-compat/pom.xml
+++ b/hbase-hadoop2-compat/pom.xml
@@ -21,7 +21,7 @@ limitations under the License.
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-it/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-it/pom.xml b/hbase-it/pom.xml
index 6583902..f3889c5 100644
--- a/hbase-it/pom.xml
+++ b/hbase-it/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-prefix-tree/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-prefix-tree/pom.xml b/hbase-prefix-tree/pom.xml
index 5f16586..1628f3d 100644
--- a/hbase-prefix-tree/pom.xml
+++ b/hbase-prefix-tree/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-protocol/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-protocol/pom.xml b/hbase-protocol/pom.xml
index 97efdc8..3cb0b59 100644
--- a/hbase-protocol/pom.xml
+++ b/hbase-protocol/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>0.98.13-SNAPSHOT</version>
+        <version>0.98.13</version>
         <relativePath>..</relativePath>
     </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-rest/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-rest/pom.xml b/hbase-rest/pom.xml
index 3836547..7791186 100644
--- a/hbase-rest/pom.xml
+++ b/hbase-rest/pom.xml
@@ -25,7 +25,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-rest</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-server/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-server/pom.xml b/hbase-server/pom.xml
index df96215..f695ba4 100644
--- a/hbase-server/pom.xml
+++ b/hbase-server/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-server</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-shell/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shell/pom.xml b/hbase-shell/pom.xml
index 7c48c08..75c3572 100644
--- a/hbase-shell/pom.xml
+++ b/hbase-shell/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-shell</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-testing-util/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-testing-util/pom.xml b/hbase-testing-util/pom.xml
index 186478f..07233e8 100644
--- a/hbase-testing-util/pom.xml
+++ b/hbase-testing-util/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>0.98.13-SNAPSHOT</version>
+        <version>0.98.13</version>
         <relativePath>..</relativePath>
     </parent>
     <artifactId>hbase-testing-util</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/hbase-thrift/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-thrift/pom.xml b/hbase-thrift/pom.xml
index cec471b..9011047 100644
--- a/hbase-thrift/pom.xml
+++ b/hbase-thrift/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>0.98.13-SNAPSHOT</version>
+    <version>0.98.13</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-thrift</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/ab0a9b9f/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index c200073..6d8fba3 100644
--- a/pom.xml
+++ b/pom.xml
@@ -39,7 +39,7 @@
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase</artifactId>
   <packaging>pom</packaging>
-  <version>0.98.13-SNAPSHOT</version>
+  <version>0.98.13</version>
   <name>HBase</name>
   <description>
     Apache HBase™ is the Hadoop database. Use it when you need


[2/2] hbase git commit: Update docs from master

Posted by ap...@apache.org.
Update docs from master


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/dd8e926a
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/dd8e926a
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/dd8e926a

Branch: refs/heads/0.98
Commit: dd8e926a7200142dc8807e0f95189b5c44a43b28
Parents: ab0a9b9
Author: Andrew Purtell <ap...@apache.org>
Authored: Wed May 27 13:40:58 2015 -0700
Committer: Andrew Purtell <ap...@apache.org>
Committed: Wed May 27 13:40:58 2015 -0700

----------------------------------------------------------------------
 .../asciidoc/_chapters/appendix_acl_matrix.adoc |  94 +++++-----
 src/main/asciidoc/_chapters/architecture.adoc   |  14 ++
 src/main/asciidoc/_chapters/configuration.adoc  |  38 ++--
 src/main/asciidoc/_chapters/datamodel.adoc      |   2 +-
 src/main/asciidoc/_chapters/developer.adoc      |  20 +-
 .../asciidoc/_chapters/getting_started.adoc     |  18 +-
 src/main/asciidoc/_chapters/hbase-default.adoc  |  15 --
 src/main/asciidoc/_chapters/hbase_apis.adoc     | 109 ++++++-----
 src/main/asciidoc/_chapters/mapreduce.adoc      |  27 ++-
 src/main/asciidoc/_chapters/ops_mgt.adoc        | 187 ++++++++++++++++++-
 src/main/asciidoc/_chapters/schema_design.adoc  |  96 +++++++++-
 src/main/asciidoc/_chapters/tracing.adoc        |  43 ++---
 src/main/asciidoc/_chapters/upgrading.adoc      |   8 +-
 src/main/asciidoc/_chapters/zookeeper.adoc      |  27 +--
 14 files changed, 496 insertions(+), 202 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
index bf35c1a..cb285f3 100644
--- a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
+++ b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
@@ -81,77 +81,77 @@ In case the table goes out of date, the unit tests which check for accuracy of p
 |===
 | Interface | Operation | Permissions
 | Master | createTable | superuser\|global\(C)\|NS\(C)
-|        | modifyTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|table(A)\|table\(C)
-|        | deleteTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|table(A)\|table\(C)
-|        | truncateTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|table(A)\|table\(C)
-|        | addColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|table(A)\|table\(C)
-|        | modifyColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|table(A)\|table\(C)\|column(A)\|column\(C)
-|        | deleteColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|table(A)\|table\(C)\|column(A)\|column\(C)
-|        | enableTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|table(A)\|table\(C)
-|        | disableTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|table(A)\|table\(C)
+|        | modifyTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
+|        | deleteTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
+|        | truncateTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
+|        | addColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
+|        | modifyColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)\|column(A)\|column\(C)
+|        | deleteColumn | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)\|column(A)\|column\(C)
+|        | enableTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
+|        | disableTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
 |        | disableAclTable | Not allowed
-|        | move | superuser\|global(A)\|NS(A)\|Table(A)
-|        | assign | superuser\|global(A)\|NS(A)\|Table(A)
-|        | unassign | superuser\|global(A)\|NS(A)\|Table(A)
-|        | regionOffline | superuser\|global(A)\|NS(A)\|Table(A)
+|        | move | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
+|        | assign | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
+|        | unassign | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
+|        | regionOffline | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
 |        | balance | superuser\|global(A)
 |        | balanceSwitch | superuser\|global(A)
 |        | shutdown | superuser\|global(A)
 |        | stopMaster | superuser\|global(A)
-|        | snapshot | superuser\|global(A)\|NS(A)\|Table(A)
+|        | snapshot | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
 |        | listSnapshot | superuser\|global(A)\|SnapshotOwner
 |        | cloneSnapshot | superuser\|global(A)
-|        | restoreSnapshot | superuser\|global(A)\|SnapshotOwner & (NS(A)\|Table(A))
+|        | restoreSnapshot | superuser\|global(A)\|SnapshotOwner & (NS(A)\|TableOwner\|table(A))
 |        | deleteSnapshot | superuser\|global(A)\|SnapshotOwner
 |        | createNamespace | superuser\|global(A)
 |        | deleteNamespace | superuser\|global(A)
 |        | modifyNamespace | superuser\|global(A)
 |        | getNamespaceDescriptor | superuser\|global(A)\|NS(A)
 |        | listNamespaceDescriptors* | superuser\|global(A)\|NS(A)
-|        | flushTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS(\C)\|table(A)\|table\(C)
-|        | getTableDescriptors* | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|table(A)\|table\(C)
-|        | getTableNames* | Any global or table perm
+|        | flushTable | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
+|        | getTableDescriptors* | superuser\|global(A)\|global\(C)\|NS(A)\|NS\(C)\|TableOwner\|table(A)\|table\(C)
+|        | getTableNames* | superuser\|TableOwner\|Any global or table perm
 |        | setUserQuota(global level) | superuser\|global(A)
 |        | setUserQuota(namespace level) | superuser\|global(A)
-|        | setUserQuota(Table level) | superuser\|global(A)\|NS(A)\|Table(A)
-|        | setTableQuota | superuser\|global(A)\|NS(A)\|Table(A)
+|        | setUserQuota(Table level) | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
+|        | setTableQuota | superuser\|global(A)\|NS(A)\|TableOwner\|table(A)
 |        | setNamespaceQuota | superuser\|global(A)
 | Region | openRegion | superuser\|global(A)
 |        | closeRegion | superuser\|global(A)
-|        | flush | superuser\|global(A)\|global\(C)\|table(A)\|table\(C)
-|        | split | superuser\|global(A)\|Table(A)
-|        | compact | superuser\|global(A)\|global\(C)\|table(A)\|table\(C)
-|        | getClosestRowBefore | superuser\|global\(R)\|NS\(R)\|Table\(R)\|CF\(R)\|CQ\(R)
-|        | getOp | superuser\|global\(R)\|NS\(R)\|Table\(R)\|CF\(R)\|CQ\(R)
-|        | exists | superuser\|global\(R)\|NS\(R)\|Table\(R)\|CF\(R)\|CQ\(R)
-|        | put | superuser\|global(W)\|NS(W)\|Table(W)\|CF(W)\|CQ(W)
-|        | delete | superuser\|global(W)\|NS(W)\|Table(W)\|CF(W)\|CQ(W)
-|        | batchMutate | superuser\|global(W)\|NS(W)\|Table(W)\|CF(W)\|CQ(W)
-|        | checkAndPut | superuser\|global(RW)\|NS(RW)\|Table(RW)\|CF(RW)\|CQ(RW)
-|        | checkAndPutAfterRowLock | superuser\|global\(R)\|NS\(R)\|Table\(R)\|CF\(R)\|CQ\(R)
-|        | checkAndDelete   | superuser\|global(RW)\|NS(RW)\|Table(RW)\|CF(RW)\|CQ(RW)
-|        | checkAndDeleteAfterRowLock | superuser\|global\(R)\|NS\(R)\|Table\(R)\|CF\(R)\|CQ\(R)
-|        | incrementColumnValue | superuser\|global(W)\|NS(W)\|Table(W)\|CF(W)\|CQ(W)
-|        | append | superuser\|global(W)\|NS(W)\|Table(W)\|CF(W)\|CQ(W)
-|        | appendAfterRowLock | superuser\|global(W)\|NS(W)\|Table(W)\|CF(W)\|CQ(W)
-|        | increment | superuser\|global(W)\|NS(W)\|Table(W)\|CF(W)\|CQ(W)
-|        | incrementAfterRowLock | superuser\|global(W)\|NS(W)\|Table(W)\|CF(W)\|CQ(W)
-|        | scannerOpen | superuser\|global\(R)\|NS\(R)\|Table\(R)\|CF\(R)\|CQ\(R)
-|        | scannerNext | superuser\|global\(R)\|NS\(R)\|Table\(R)\|CF\(R)\|CQ\(R)
-|        | scannerClose | superuser\|global\(R)\|NS\(R)\|Table\(R)\|CF\(R)\|CQ\(R)
-|        | bulkLoadHFile | superuser\|global\(C)\|table\(C)\|CF\(C)
-|        | prepareBulkLoad | superuser\|global\(C)\|table\(C)\|CF\(C)
-|        | cleanupBulkLoad | superuser\|global\(C)\|table\(C)\|CF\(C)
-| Endpoint | invoke | superuser\|global(X)\|NS(X)\|Table(X)
+|        | flush | superuser\|global(A)\|global\(C)\|TableOwner\|table(A)\|table\(C)
+|        | split | superuser\|global(A)\|TableOwner\|table(A)
+|        | compact | superuser\|global(A)\|global\(C)\|TableOwner\|table(A)\|table\(C)
+|        | getClosestRowBefore | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
+|        | getOp | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
+|        | exists | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
+|        | put | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
+|        | delete | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
+|        | batchMutate | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
+|        | checkAndPut | superuser\|global(RW)\|NS(RW)\|TableOwner\|table(RW)\|CF(RW)\|CQ(RW)
+|        | checkAndPutAfterRowLock | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
+|        | checkAndDelete   | superuser\|global(RW)\|NS(RW)\|TableOwner\|table(RW)\|CF(RW)\|CQ(RW)
+|        | checkAndDeleteAfterRowLock | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
+|        | incrementColumnValue | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
+|        | append | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
+|        | appendAfterRowLock | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
+|        | increment | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
+|        | incrementAfterRowLock | superuser\|global(W)\|NS(W)\|TableOwner\|table(W)\|CF(W)\|CQ(W)
+|        | scannerOpen | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
+|        | scannerNext | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
+|        | scannerClose | superuser\|global\(R)\|NS\(R)\|TableOwner\|table\(R)\|CF\(R)\|CQ\(R)
+|        | bulkLoadHFile | superuser\|global\(C)\|TableOwner\|table\(C)\|CF\(C)
+|        | prepareBulkLoad | superuser\|global\(C)\|TableOwner\|table\(C)\|CF\(C)
+|        | cleanupBulkLoad | superuser\|global\(C)\|TableOwner\|table\(C)\|CF\(C)
+| Endpoint | invoke | superuser\|global(X)\|NS(X)\|TableOwner\|table(X)
 | AccessController | grant(global level) | global(A)
 |                  | grant(namespace level) | global(A)\|NS(A)
-|                  | grant(table level) | global(A)\|NS(A)\|table(A)\|CF(A)\|CQ(A)
+|                  | grant(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
 |                  | revoke(global level) | global(A)
 |                  | revoke(namespace level) | global(A)\|NS(A)
-|                  | revoke(table level) | global(A)\|NS(A)\|table(A)\|CF(A)\|CQ(A)
+|                  | revoke(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
 |                  | getUserPermissions(global level) | global(A)
 |                  | getUserPermissions(namespace level) | global(A)\|NS(A)
-|                  | getUserPermissions(table level) | global(A)\|NS(A)\|table(A)\|CF(A)\|CQ(A)
+|                  | getUserPermissions(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
 | RegionServer | stopRegionServer | superuser\|global(A)
 |              | mergeRegions | superuser\|global(A)
 |              | rollWALWriterRequest | superuser\|global(A)

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/architecture.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 0236d81..659c4ee 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -2327,6 +2327,20 @@ Instead you can change the number of region replicas per table to increase or de
     The period (in milliseconds) for refreshing the store files for the secondary regions. 0 means this feature is disabled. Secondary regions sees new files (from flushes and compactions) from primary once the secondary region refreshes the list of files in the region. But too frequent refreshes might cause extra Namenode pressure. If the files cannot be refreshed for longer than HFile TTL (hbase.master.hfilecleaner.ttl) the requests are rejected. Configuring HFile TTL to a larger value is also recommended with this setting.
   </description>
 </property>
+<property>
+  <name>hbase.region.replica.replication.memstore.enabled</name>
+  <value>true</value>
+  <description>
+    If you set this to `false`, replicas do not receive memstore updates from
+    the primary RegionServer. If you set this to `true`, you can still disable
+    memstore replication on a per-table basis, by setting the table's
+    `REGION_MEMSTORE_REPLICATION` configuration property to `false`. If
+    memstore replication is disabled, the secondaries will only receive
+    updates for events like flushes and bulkloads, and will not have access to
+    data which the primary has not yet flushed. This preserves the guarantee
+    of row-level consistency, even when the read requests `Consistency.TIMELINE`.
+  </description>
+</property>
 ----
 
 One thing to keep in mind also is that, region replica placement policy is only enforced by the `StochasticLoadBalancer` which is the default balancer.
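
For a per-table handle on the behavior the new property describes, here is a minimal Java sketch. It assumes the `HTableDescriptor` setters that accompany the read replica work (`setRegionReplication`, and `setRegionMemstoreReplication` from HBASE-13063) are available in your client version; `my_table` and `cf` are hypothetical names.

[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ReplicaTableExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Admin admin = connection.getAdmin()) {
      HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("my_table"));
      htd.addFamily(new HColumnDescriptor("cf"));
      // One primary plus two secondary replicas per region.
      htd.setRegionReplication(3);
      // Opt this table out of memstore replication; its secondaries then see
      // only flushed data, delivered via flush and bulk-load events.
      htd.setRegionMemstoreReplication(false);
      admin.createTable(htd);
    }
  }
}
----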

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/configuration.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/configuration.adoc b/src/main/asciidoc/_chapters/configuration.adoc
index ed00a49..01f2eb7 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -98,6 +98,11 @@ This section lists required services and some required system configuration.
 |JDK 7
 |JDK 8
 
+|1.1
+|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
+|yes
+|Running with JDK 8 will work but is not well tested.
+
 |1.0
 |link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
 |yes
@@ -205,20 +210,22 @@ Use the following legend to interpret this table:
 * "X" = not supported
 * "NT" = Not tested
 
-[cols="1,1,1,1,1,1", options="header"]
+[cols="1,1,1,1,1,1,1", options="header"]
 |===
-| | HBase-0.92.x | HBase-0.94.x | HBase-0.96.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported)
-|Hadoop-0.20.205 | S | X | X | X | X
-|Hadoop-0.22.x | S | X | X | X | X
-|Hadoop-1.0.x  |X | X | X | X | X
-|Hadoop-1.1.x | NT | S | S | NT | X
-|Hadoop-0.23.x | X | S | NT | X | X
-|Hadoop-2.0.x-alpha | X | NT | X | X | X
-|Hadoop-2.1.0-beta | X | NT | S | X | X
-|Hadoop-2.2.0 | X | NT | S | S | NT
-|Hadoop-2.3.x | X | NT | S | S | NT
-|Hadoop-2.4.x | X | NT | S | S | S
-|Hadoop-2.5.x | X | NT | S | S | S
+| | HBase-0.92.x | HBase-0.94.x | HBase-0.96.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x
+|Hadoop-0.20.205 | S | X | X | X | X | X
+|Hadoop-0.22.x | S | X | X | X | X | X
+|Hadoop-1.0.x  |X | X | X | X | X | X
+|Hadoop-1.1.x | NT | S | S | NT | X | X
+|Hadoop-0.23.x | X | S | NT | X | X | X
+|Hadoop-2.0.x-alpha | X | NT | X | X | X | X
+|Hadoop-2.1.0-beta | X | NT | S | X | X | X
+|Hadoop-2.2.0 | X | NT | S | S | NT | NT
+|Hadoop-2.3.x | X | NT | S | S | NT | NT
+|Hadoop-2.4.x | X | NT | S | S | S | S
+|Hadoop-2.5.x | X | NT | S | S | S | S
+|Hadoop-2.6.x | X | NT | NT | NT | S | S
+|Hadoop-2.7.x | X | NT | NT | NT | NT | NT
 |===
 
 .Replace the Hadoop Bundled With HBase!
@@ -994,8 +1001,7 @@ To enable it in 0.99 or above, add below property in _hbase-site.xml_:
 NOTE: DO NOT set `com.sun.management.jmxremote.port` for Java VM at the same time.
 
 Currently it supports Master and RegionServer Java VM.
-The reason why you only configure coprocessor for 'regionserver' is that, starting from HBase 0.99, a Master IS also a RegionServer.
-(See link:https://issues.apache.org/jira/browse/HBASE-10569[HBASE-10569] for more information.) By default, the JMX listens on TCP port 10102, you can further configure the port using below properties:
+By default, JMX listens on TCP port 10102. You can further configure the port using the properties below:
 
 [source,xml]
 ----
@@ -1062,7 +1068,7 @@ Finally start `jconsole` on the client using the key store:
 jconsole -J-Djavax.net.ssl.trustStore=/home/tianq/jconsoleKeyStore
 ----
 
-NOTE: for HBase 0.98, To enable the HBase JMX implementation on Master, you also need to add below property in _hbase-site.xml_: 
+NOTE: To enable the HBase JMX implementation on Master, you also need to add the property below in _hbase-site.xml_:
 
 [source,xml]
 ----

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/datamodel.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/datamodel.adoc b/src/main/asciidoc/_chapters/datamodel.adoc
index 74238ca..b76adc8 100644
--- a/src/main/asciidoc/_chapters/datamodel.adoc
+++ b/src/main/asciidoc/_chapters/datamodel.adoc
@@ -495,7 +495,7 @@ For an informative discussion on how deletes and versioning interact, see the th
 
 Also see <<keyvalue,keyvalue>> for more information on the internal KeyValue format.
 
-Delete markers are purged during the next major compaction of the store, unless the `KEEP_DELETED_CELLS` option is set in the column family.
+Delete markers are purged during the next major compaction of the store, unless the `KEEP_DELETED_CELLS` option is set in the column family (See <<cf.keep.deleted>>).
 To keep the deletes for a configurable amount of time, you can set the delete TTL via the +hbase.hstore.time.to.purge.deletes+ property in _hbase-site.xml_.
 If `hbase.hstore.time.to.purge.deletes` is not set, or set to 0, all delete markers, including those with timestamps in the future, are purged during the next major compaction.
 Otherwise, a delete marker with a timestamp in the future is kept until the major compaction which occurs after the time represented by the marker's timestamp plus the value of `hbase.hstore.time.to.purge.deletes`, in milliseconds.
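
For readers following the `KEEP_DELETED_CELLS` pointer above, a minimal sketch of setting the option from the Java client. It assumes the 0.98/1.0-era `HColumnDescriptor.setKeepDeletedCells(boolean)` setter; `my_table` and `cf` are hypothetical names.

[source,java]
----
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class KeepDeletedCellsExample {
  public static void main(String[] args) {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("my_table"));
    HColumnDescriptor cf = new HColumnDescriptor("cf");
    // Delete markers and the cells they mask survive major compactions
    // (subject to TTL and max versions) instead of being purged.
    cf.setKeepDeletedCells(true);
    htd.addFamily(cf);
    System.out.println(htd);
  }
}
----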

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index 26ba325..ce66a90 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -401,6 +401,16 @@ mvn -DskipTests clean install && mvn -DskipTests package assembly:single
 
 The distribution tarball is built in _hbase-assembly/target/hbase-<version>-bin.tar.gz_.
 
+You can install or deploy the tarball by placing the `assembly:single` goal before `install` or `deploy` in the Maven command:
+
+----
+mvn -DskipTests package assembly:single install
+----
+----
+mvn -DskipTests package assembly:single deploy
+----
+
+
 [[build.gotchas]]
 ==== Build Gotchas
 
@@ -446,6 +456,7 @@ You then reference these generated poms when you build.
 For now, just be aware of the difference between HBase 1.x builds and those of HBase 0.96-0.98.
 This difference is important to the build instructions.
 
+[[maven.settings.xml]]
 .Example _~/.m2/settings.xml_ File
 ====
 Publishing to maven requires you sign the artifacts you want to upload.
@@ -626,9 +637,8 @@ Release needs to be tagged for the next step.
 
 . Deploy to the Maven Repository.
 +
-Next, deploy HBase to the Apache Maven repository, using the `apache-release` profile instead of the `release` profile when running the +mvn
-                            deploy+ command.
-This profile invokes the Apache pom referenced by our pom files, and also signs your artifacts published to Maven, as long as the _settings.xml_ is configured correctly, as described in <<mvn.settings.file,mvn.settings.file>>.
+Next, deploy HBase to the Apache Maven repository, using the `apache-release` profile instead of the `release` profile when running the `mvn deploy` command.
+This profile invokes the Apache pom referenced by our pom files, and also signs your artifacts published to Maven, as long as the _settings.xml_ is configured correctly, as described in <<maven.settings.xml>>.
 +
 [source,bourne]
 ----
@@ -638,6 +648,8 @@ $ mvn deploy -DskipTests -Papache-release
 +
 This command copies all artifacts up to a temporary staging Apache mvn repository in an 'open' state.
 More work needs to be done on these maven artifacts to make them generally available. 
++
+We do not release the HBase tarball to the Apache Maven repository. To avoid deploying the tarball, do not include the `assembly:single` goal in your `mvn deploy` command. Check the deployed artifacts as described in the next section.
 
 . Make the Release Candidate available.
 +
@@ -709,7 +721,7 @@ Announce the release candidate on the mailing list and call a vote.
 [[maven.snapshot]]
 === Publishing a SNAPSHOT to maven
 
-Make sure your _settings.xml_ is set up properly, as in <<mvn.settings.file,mvn.settings.file>>.
+Make sure your _settings.xml_ is set up properly (see <<maven.settings.xml>>).
 Make sure the hbase version includes `-SNAPSHOT` as a suffix.
 Following is an example of publishing SNAPSHOTS of a release that had an hbase version of 0.96.0 in its poms.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/getting_started.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc b/src/main/asciidoc/_chapters/getting_started.adoc
index 76d793c..7839bad 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -294,9 +294,11 @@ You can skip the HDFS configuration to continue storing your data in the local f
 .Hadoop Configuration
 [NOTE]
 ====
-This procedure assumes that you have configured Hadoop and HDFS on your local system and or a remote system, and that they are running and available.
-It also assumes you are using Hadoop 2.
-Currently, the documentation on the Hadoop website does not include a quick start for Hadoop 2, but the guide at link:http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide          is a good starting point.
+This procedure assumes that you have configured Hadoop and HDFS on your local system and/or a remote
+system, and that they are running and available. It also assumes you are using Hadoop 2.
+The guide on
+link:http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html[Setting up a Single Node Cluster]
+in the Hadoop documentation is a good starting point.
 ====
 
 
@@ -619,12 +621,16 @@ For more about ZooKeeper configuration, including using an external ZooKeeper in
 .Web UI Port Changes
 NOTE: Web UI Port Changes
 +
-In HBase newer than 0.98.x, the HTTP ports used by the HBase Web UI changed from 60010 for the Master and 60030 for each RegionServer to 16610 for the Master and 16030 for the RegionServer.
+In HBase newer than 0.98.x, the HTTP ports used by the HBase Web UI changed from 60010 for the
+Master and 60030 for each RegionServer to 16010 for the Master and 16030 for the RegionServer.
 
 +
-If everything is set up correctly, you should be able to connect to the UI for the Master `http://node-a.example.com:16610/` or the secondary master at `http://node-b.example.com:16610/` for the secondary master, using a web browser.
+If everything is set up correctly, you should be able to connect to the UI for the Master
+at `http://node-a.example.com:16010/`, or to the secondary master at
+`http://node-b.example.com:16010/`, using a web browser.
 If you can connect via `localhost` but not from another host, check your firewall rules.
-You can see the web UI for each of the RegionServers at port 16630 of their IP addresses, or by clicking their links in the web UI for the Master.
+You can see the web UI for each of the RegionServers at port 16030 of their IP addresses, or by
+clicking their links in the web UI for the Master.
 
 . Test what happens when nodes or services disappear.
 +

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/hbase-default.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase-default.adoc b/src/main/asciidoc/_chapters/hbase-default.adoc
index bf56dd3..8df9b17 100644
--- a/src/main/asciidoc/_chapters/hbase-default.adoc
+++ b/src/main/asciidoc/_chapters/hbase-default.adoc
@@ -605,21 +605,7 @@ Instructs HBase to make use of ZooKeeper's multi-update functionality.
 .Default
 `true`
 
-  
-[[hbase.config.read.zookeeper.config]]
-*`hbase.config.read.zookeeper.config`*::
-+
-.Description
-
-        Set to true to allow HBaseConfiguration to read the
-        zoo.cfg file for ZooKeeper properties. Switching this to true
-        is not recommended, since the functionality of reading ZK
-        properties from a zoo.cfg file has been deprecated.
-+
-.Default
-`false`
 
-  
 [[hbase.zookeeper.property.initLimit]]
 *`hbase.zookeeper.property.initLimit`*::
 +
@@ -2251,4 +2237,3 @@ The percent of region server RPC threads failed to abort RS.
 .Default
 `0.5`
 
-  
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/hbase_apis.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase_apis.adoc b/src/main/asciidoc/_chapters/hbase_apis.adoc
index 85dbad1..6d2777b 100644
--- a/src/main/asciidoc/_chapters/hbase_apis.adoc
+++ b/src/main/asciidoc/_chapters/hbase_apis.adoc
@@ -36,102 +36,99 @@ See <<external_apis>> for more information.
 
 == Examples
 
-.Create a Table Using Java
+.Create, modify and delete a Table Using Java
 ====
 
 [source,java]
 ----
 package com.example.hbase.admin;
 
 import java.io.IOException;
 
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.io.compress.Compression.Algorithm;
-import org.apache.hadoop.conf.Configuration;
 
-import static com.example.hbase.Constants.*;
+public class Example {
 
-public class CreateSchema {
+  private static final String TABLE_NAME = "MY_TABLE_NAME_TOO";
+  private static final String CF_DEFAULT = "DEFAULT_COLUMN_FAMILY";
 
   public static void createOrOverwrite(Admin admin, HTableDescriptor table) throws IOException {
-    if (admin.tableExists(table.getName())) {
-      admin.disableTable(table.getName());
-      admin.deleteTable(table.getName());
+    if (admin.tableExists(table.getTableName())) {
+      admin.disableTable(table.getTableName());
+      admin.deleteTable(table.getTableName());
     }
     admin.createTable(table);
   }
 
-  public static void createSchemaTables (Configuration config) {
-    try {
-      final Admin admin = new Admin(config);
+  public static void createSchemaTables(Configuration config) throws IOException {
+    try (Connection connection = ConnectionFactory.createConnection(config);
+         Admin admin = connection.getAdmin()) {
+
       HTableDescriptor table = new HTableDescriptor(TableName.valueOf(TABLE_NAME));
       table.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.SNAPPY));
 
       System.out.print("Creating table. ");
       createOrOverwrite(admin, table);
       System.out.println(" Done.");
-
-      admin.close();
-    } catch (Exception e) {
-      e.printStackTrace();
-      System.exit(-1);
     }
   }
 
-}
-----
-====
-
-.Add, Modify, and Delete a Table
-====
-
-[source,java]
-----
-public static void upgradeFrom0 (Configuration config) {
-
-  try {
-    final Admin admin = new Admin(config);
-    TableName tableName = TableName.valueOf(TABLE_ASSETMETA);
-    HTableDescriptor table_assetmeta = new HTableDescriptor(tableName);
-    table_assetmeta.addFamily(new HColumnDescriptor(CF_DEFAULT).setCompressionType(Algorithm.SNAPPY));
+  public static void modifySchema (Configuration config) throws IOException {
+    try (Connection connection = ConnectionFactory.createConnection(config);
+         Admin admin = connection.getAdmin()) {
 
-    // Create a new table.
+      TableName tableName = TableName.valueOf(TABLE_NAME);
+      if (!admin.tableExists(tableName)) {
+        System.out.println("Table does not exist.");
+        System.exit(-1);
+      }
 
-    System.out.print("Creating table_assetmeta. ");
-    admin.createTable(table_assetmeta);
-    System.out.println(" Done.");
+      HTableDescriptor table = new HTableDescriptor(tableName);
 
-    // Update existing table
-    HColumnDescriptor newColumn = new HColumnDescriptor("NEWCF");
-    newColumn.setCompactionCompressionType(Algorithm.GZ);
-    newColumn.setMaxVersions(HConstants.ALL_VERSIONS);
-    admin.addColumn(tableName, newColumn);
+      // Update existing table
+      HColumnDescriptor newColumn = new HColumnDescriptor("NEWCF");
+      newColumn.setCompactionCompressionType(Algorithm.GZ);
+      newColumn.setMaxVersions(HConstants.ALL_VERSIONS);
+      admin.addColumn(tableName, newColumn);
 
-    // Update existing column family
-    HColumnDescriptor existingColumn = new HColumnDescriptor(CF_DEFAULT);
-    existingColumn.setCompactionCompressionType(Algorithm.GZ);
-    existingColumn.setMaxVersions(HConstants.ALL_VERSIONS);
-    table_assetmeta.modifyFamily(existingColumn)
-    admin.modifyTable(tableName, table_assetmeta);
+      // Update existing column family
+      HColumnDescriptor existingColumn = new HColumnDescriptor(CF_DEFAULT);
+      existingColumn.setCompactionCompressionType(Algorithm.GZ);
+      existingColumn.setMaxVersions(HConstants.ALL_VERSIONS);
+      table.modifyFamily(existingColumn);
+      admin.modifyTable(tableName, table);
 
-    // Disable an existing table
-    admin.disableTable(tableName);
+      // Disable an existing table
+      admin.disableTable(tableName);
 
-    // Delete an existing column family
-    admin.deleteColumn(tableName, CF_DEFAULT);
+      // Delete an existing column family
+      admin.deleteColumn(tableName, CF_DEFAULT.getBytes("UTF-8"));
 
-    // Delete a table (Need to be disabled first)
-    admin.deleteTable(tableName);
+      // Delete a table (Need to be disabled first)
+      admin.deleteTable(tableName);
+    }
+  }
 
+  public static void main(String... args) throws IOException {
+    Configuration config = HBaseConfiguration.create();
 
-    admin.close();
-  } catch (Exception e) {
-    e.printStackTrace();
-    System.exit(-1);
+    //Add any necessary configuration files (hbase-site.xml, core-site.xml)
+    config.addResource(new Path(System.getenv("HBASE_CONF_DIR"), "hbase-site.xml"));
+    config.addResource(new Path(System.getenv("HADOOP_CONF_DIR"), "core-site.xml"));
+    createSchemaTables(config);
+    modifySchema(config);
   }
 }
 ----

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/mapreduce.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/mapreduce.adoc b/src/main/asciidoc/_chapters/mapreduce.adoc
index a008a4f..2a42af2 100644
--- a/src/main/asciidoc/_chapters/mapreduce.adoc
+++ b/src/main/asciidoc/_chapters/mapreduce.adoc
@@ -51,27 +51,38 @@ In the notes below, we refer to o.a.h.h.mapreduce but replace with the o.a.h.h.m
 
 By default, MapReduce jobs deployed to a MapReduce cluster do not have access to either the HBase configuration under `$HBASE_CONF_DIR` or the HBase classes.
 
-To give the MapReduce jobs the access they need, you could add _hbase-site.xml_ to the _$HADOOP_HOME/conf/_ directory and add the HBase JARs to the _HADOOP_HOME/conf/_ directory, then copy these changes across your cluster.
-You could add _hbase-site.xml_ to _$HADOOP_HOME/conf_ and add HBase jars to the _$HADOOP_HOME/lib_ directory.
-You would then need to copy these changes across your cluster or edit _$HADOOP_HOMEconf/hadoop-env.sh_ and add them to the `HADOOP_CLASSPATH` variable.
+To give the MapReduce jobs the access they need, you could add _hbase-site.xml_ to _$HADOOP_HOME/conf_ and add HBase jars to the _$HADOOP_HOME/lib_ directory.
+You would then need to copy these changes across your cluster. Or you can edit _$HADOOP_HOME/conf/hadoop-env.sh_ and add them to the `HADOOP_CLASSPATH` variable.
 However, this approach is not recommended because it will pollute your Hadoop install with HBase references.
 It also requires you to restart the Hadoop cluster before Hadoop can use the HBase data.
 
+The recommended approach is to let HBase add its dependency jars itself and use `HADOOP_CLASSPATH` or `-libjars`.
+
 Since HBase 0.90.x, HBase adds its dependency JARs to the job configuration itself.
 The dependencies only need to be available on the local `CLASSPATH`.
-The following example runs the bundled HBase link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter] MapReduce job against a table named `usertable` If you have not set the environment variables expected in the command (the parts prefixed by a `$` sign and curly braces), you can use the actual system paths instead.
+The following example runs the bundled HBase link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter] MapReduce job against a table named `usertable`.
+If you have not set the environment variables expected in the command (the parts prefixed by a `$` sign and surrounded by curly braces), you can use the actual system paths instead.
 Be sure to use the correct version of the HBase JAR for your system.
-The backticks (``` symbols) cause ths shell to execute the sub-commands, setting the `CLASSPATH` as part of the command.
+The backticks (``` symbols) cause the shell to execute the sub-commands, assigning the output of `hbase classpath` (the command to dump the HBase CLASSPATH) to `HADOOP_CLASSPATH`.
 This example assumes you use a BASH-compatible shell.
 
 [source,bash]
 ----
-$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server-VERSION.jar rowcounter usertable
+$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/lib/hbase-server-VERSION.jar rowcounter usertable
 ----
 
 When the command runs, internally, the HBase JAR finds the dependencies it needs for ZooKeeper, Guava, and its other dependencies on the passed `HADOOP_CLASSPATH` and adds the JARs to the MapReduce job configuration.
 See the source at `TableMapReduceUtil#addDependencyJars(org.apache.hadoop.mapreduce.Job)` for how this is done.
 
+The command `hbase mapredcp` can also help you dump the CLASSPATH entries required by MapReduce, which are the same jars `TableMapReduceUtil#addDependencyJars` would add.
+You can add them, together with the HBase conf directory, to `HADOOP_CLASSPATH`.
+For jobs that do not package their dependencies or call `TableMapReduceUtil#addDependencyJars`, the following command structure is necessary:
+
+[source,bash]
+----
+$ HADOOP_CLASSPATH=`${HBASE_HOME}/bin/hbase mapredcp`:${HBASE_HOME}/conf hadoop jar MyApp.jar MyJobMainClass -libjars $(${HBASE_HOME}/bin/hbase mapredcp | tr ':' ',') ...
+----
+
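For jobs whose driver code you control, the jar shipping described above can be triggered directly: `TableMapReduceUtil.initTableMapperJob` calls `addDependencyJars` for you. A minimal sketch follows; the `NoopMapper` class and the `usertable` table name are illustrative assumptions.

[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class RowCountDriver {
  // Counts rows via a job counter; emits no output records.
  static class NoopMapper extends TableMapper<ImmutableBytesWritable, Result> {
    @Override
    protected void map(ImmutableBytesWritable key, Result value, Context context) {
      context.getCounter("example", "rows").increment(1);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "rowcount-example");
    job.setJarByClass(RowCountDriver.class);
    // Ships HBase's dependency jars with the job via addDependencyJars.
    TableMapReduceUtil.initTableMapperJob(
        "usertable", new Scan(), NoopMapper.class, null, null, job);
    job.setNumReduceTasks(0);
    job.setOutputFormatClass(NullOutputFormat.class);
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
----
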
 [NOTE]
 ====
 The example may not work if you are running HBase from its build directory rather than an installed location.
@@ -85,11 +96,11 @@ If this occurs, try modifying the command as follows, so that it uses the HBase
 
 [source,bash]
 ----
-$ HADOOP_CLASSPATH=${HBASE_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar:`${HBASE_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar rowcounter usertable
+$ HADOOP_CLASSPATH=${HBASE_BUILD_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar:`${HBASE_BUILD_HOME}/bin/hbase classpath` ${HADOOP_HOME}/bin/hadoop jar ${HBASE_BUILD_HOME}/hbase-server/target/hbase-server-VERSION-SNAPSHOT.jar rowcounter usertable
 ----
 ====
 
-.Notice to MapReduce users of HBase 0.96.1 and above
+.Notice to MapReduce users of HBase between 0.96.1 and 0.98.4
 [CAUTION]
 ====
 Some MapReduce jobs that use HBase fail to launch.

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/ops_mgt.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc b/src/main/asciidoc/_chapters/ops_mgt.adoc
index b8018b6..3c4a73b 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -1311,13 +1311,13 @@ list_peers:: list all replication relationships known by this cluster
 enable_peer <ID>::
   Enable a previously-disabled replication relationship
 disable_peer <ID>::
-  Disable a replication relationship. HBase will no longer send edits to that peer cluster, but it still keeps track of all the new WALs that it will need to replicate if and when it is re-enabled. 
+  Disable a replication relationship. HBase will no longer send edits to that peer cluster, but it still keeps track of all the new WALs that it will need to replicate if and when it is re-enabled.
 remove_peer <ID>::
   Disable and remove a replication relationship. HBase will no longer send edits to that peer cluster or keep track of WALs.
 enable_table_replication <TABLE_NAME>::
-  Enable the table replication switch for all it's column families. If the table is not found in the destination cluster then it will create one with the same name and column families. 
+  Enable the table replication switch for all of its column families. If the table is not found in the destination cluster, one is created there with the same name and column families.
 disable_table_replication <TABLE_NAME>::
-  Disable the table replication switch for all it's column families. 
+  Disable the table replication switch for all of its column families.
 
 === Verifying Replicated Data
 
@@ -1609,6 +1609,187 @@ You can use the HBase Shell command `status 'replication'` to monitor the replic
 * `status 'replication', 'source'` -- prints the status for each replication source, sorted by hostname.
 * `status 'replication', 'sink'` -- prints the status for each replication sink, sorted by hostname.
 
+== Running Multiple Workloads On a Single Cluster
+
+HBase provides the following mechanisms for managing the performance of a cluster
+handling multiple workloads:
+
+. <<quota>>
+. <<request-queues>>
+. <<multiple-typed-queues>>
+
+[[quota]]
+=== Quotas
+HBASE-11598 introduces quotas, which allow you to throttle requests based on
+the following limits:
+
+. <<request-quotas,The number or size of requests in a given timeframe>>
+. <<namespace-quotas,The number of tables allowed in a namespace>>
+
+These limits can be enforced for a specified user, table, or namespace.
+
+.Enabling Quotas
+
+Quotas are disabled by default. To enable the feature, set the `hbase.quota.enabled`
+property to `true` in the _hbase-site.xml_ file on all cluster nodes.
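+
+For example, a minimal _hbase-site.xml_ entry (a sketch; as described above, set this on every cluster node):
+
+[source,xml]
+----
+<property>
+  <name>hbase.quota.enabled</name>
+  <value>true</value>
+</property>
+----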
+
+.General Quota Syntax
+. Timeframes can be expressed in the following units: `sec`, `min`, `hour`, `day`
+. Request sizes can be expressed in the following units: `B` (bytes), `K` (kilobytes),
+`M` (megabytes), `G` (gigabytes), `T` (terabytes), `P` (petabytes)
+. Numbers of requests are expressed as an integer followed by the string `req`
+. Limits relating to time are expressed as req/time or size/time. For instance `10req/day`
+or `100P/hour`.
+. Numbers of tables or regions are expressed as integers.
+
+[[request-quotas]]
+.Setting Request Quotas
+You can set quota rules ahead of time, or you can change the throttle at runtime. The change
+will propagate after the quota refresh period has expired. This expiration period
+defaults to 5 minutes. To change it, modify the `hbase.quota.refresh.period` property
+in _hbase-site.xml_. This property is expressed in milliseconds and defaults to `300000`.
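+
+As a sketch, a one-minute refresh period (an illustrative value, not a recommendation) would look like:
+
+[source,xml]
+----
+<property>
+  <name>hbase.quota.refresh.period</name>
+  <value>60000</value>
+</property>
+----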
+
+----
+# Limit user u1 to 10 requests per second
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10req/sec'
+
+# Limit user u1 to 10 M per day everywhere
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => '10M/day'
+
+# Limit user u1 to 5k per minute on table t2
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', TABLE => 't2', LIMIT => '5K/min'
+
+# Remove an existing limit from user u1 on namespace ns2
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', NAMESPACE => 'ns2', LIMIT => NONE
+
+# Limit all users to 10 requests per hour on namespace ns1
+hbase> set_quota TYPE => THROTTLE, NAMESPACE => 'ns1', LIMIT => '10req/hour'
+
+# Limit all users to 10 T per hour on table t1
+hbase> set_quota TYPE => THROTTLE, TABLE => 't1', LIMIT => '10T/hour'
+
+# Remove all existing limits from user u1
+hbase> set_quota TYPE => THROTTLE, USER => 'u1', LIMIT => NONE
+
+# List all quotas for user u1 in namespace ns2
+hbase> list_quotas USER => 'u1', NAMESPACE => 'ns2'
+
+# List all quotas for namespace ns2
+hbase> list_quotas NAMESPACE => 'ns2'
+
+# List all quotas for table t1
+hbase> list_quotas TABLE => 't1'
+
+# list all quotas
+hbase> list_quotas
+----
+
+You can also place a global limit and exclude a user or a table from the limit by applying the
+`GLOBAL_BYPASS` property.
+----
+hbase> set_quota NAMESPACE => 'ns1', LIMIT => '100req/min'               # a per-namespace request limit
+hbase> set_quota USER => 'u1', GLOBAL_BYPASS => true                     # user u1 is not affected by the limit
+----
+
+[[namespace-quotas]]
+.Setting Namespace Quotas
+
+You can specify the maximum number of tables or regions allowed in a given namespace, either
+when you create the namespace or by altering an existing namespace, by setting the
+`hbase.namespace.quota.maxtables` or `hbase.namespace.quota.maxregions` property on the namespace.
+
+.Limiting Tables Per Namespace
+----
+# Create a namespace with a max of 5 tables
+hbase> create_namespace 'ns1', {'hbase.namespace.quota.maxtables'=>'5'}
+
+# Alter an existing namespace to have a max of 8 tables
+hbase> alter_namespace 'ns2', {METHOD => 'set', 'hbase.namespace.quota.maxtables'=>'8'}
+
+# Show quota information for a namespace
+hbase> describe_namespace 'ns2'
+
+# Alter an existing namespace to remove a quota
+hbase> alter_namespace 'ns2', {METHOD => 'unset', NAME=>'hbase.namespace.quota.maxtables'}
+----
+
+.Limiting Regions Per Namespace
+----
+# Create a namespace with a max of 10 regions
+hbase> create_namespace 'ns1', {'hbase.namespace.quota.maxregions'=>'10'}
+
+# Show quota information for a namespace
+hbase> describe_namespace 'ns1'
+
+# Alter an existing namespace to have a max of 20 regions
+hbase> alter_namespace 'ns2', {METHOD => 'set', 'hbase.namespace.quota.maxregions'=>'20'}
+
+# Alter an existing namespace to remove a quota
+hbase> alter_namespace 'ns2', {METHOD => 'unset', NAME=> 'hbase.namespace.quota.maxregions'}
+----
+
+[[request-queues]]
+=== Request Queues
+If no throttling policy is configured, when the RegionServer receives multiple requests,
+they are now placed into a queue waiting for a free execution slot (HBASE-6721).
+The simplest queue is a FIFO queue, where each request waits for all previous requests in the queue
+to finish before running. Fast or interactive queries can get stuck behind large requests.
+
+If you are able to guess how long a request will take, you can reorder requests by
+pushing the long requests to the end of the queue and allowing short requests to preempt
+them. Eventually, you must still execute the large requests and prioritize the new
+requests behind them. The short requests will be newer, so the result is not terrible,
+but still suboptimal compared to a mechanism which allows large requests to be split
+into multiple smaller ones.
+
+HBASE-10993 introduces such a system for deprioritizing long-running scanners. There
+are two types of queues, `fifo` and `deadline`. To configure the type of queue used,
+set the `hbase.ipc.server.callqueue.type` property in _hbase-site.xml_. There
+is no way to estimate how long each request may take, so de-prioritization only affects
+scans, and is based on the number of `next` calls a scan request has made. An assumption
+is made that when you are doing a full table scan, your job is not likely to be interactive,
+so if there are concurrent requests, you can delay long-running scans up to a limit tunable by
+setting the `hbase.ipc.server.queue.max.call.delay` property. The slope of the delay is calculated
+by a simple square root of `(numNextCall * weight)` where the weight is
+configurable by setting the `hbase.ipc.server.scan.vtime.weight` property.
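+
+For example, a minimal _hbase-site.xml_ sketch selecting the `deadline` queue type (`fifo` being the other option named above):
+
+[source,xml]
+----
+<property>
+  <name>hbase.ipc.server.callqueue.type</name>
+  <value>deadline</value>
+</property>
+----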
+
+[[multiple-typed-queues]]
+=== Multiple-Typed Queues
+
+You can also prioritize or deprioritize different kinds of requests by configuring
+a specified number of dedicated handlers and queues. You can segregate the scan requests
+in a single queue with a single handler, and all the other available queues can service
+short `Get` requests.
+
+You can adjust the IPC queues and handlers based on the type of workload, using static
+tuning options. This approach is an interim first step that will eventually allow
+you to change the settings at runtime, and to dynamically adjust values based on the load.
+
+.Multiple Queues
+
+To avoid contention and separate different kinds of requests, configure the
+`hbase.ipc.server.callqueue.handler.factor` property, which allows you to increase the number of
+queues and control how many handlers share the same queue.
+
+Using more queues reduces contention when adding a task to a queue or selecting it
+from a queue. You can even configure one queue per handler. The trade-off is that
+if some queues contain long-running tasks, a handler may need to wait to execute from that queue
+rather than stealing from another queue which has waiting tasks.
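+
+As a sketch, a handler factor of `0.1` (an illustrative value, not a recommendation) yields roughly one queue for every ten handlers, while `1.0` gives each handler its own queue:
+
+[source,xml]
+----
+<property>
+  <name>hbase.ipc.server.callqueue.handler.factor</name>
+  <value>0.1</value>
+</property>
+----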
+
+.Read and Write Queues
+With multiple queues, you can now divide read and write requests, giving more priority
+(more queues) to one or the other type. Use the `hbase.ipc.server.callqueue.read.ratio`
+property to choose to serve more reads or more writes.
+
+.Get and Scan Queues
+Similar to the read/write split, you can split gets and scans by tuning the `hbase.ipc.server.callqueue.scan.ratio`
+property to give more priority to gets or to scans. A scan ratio of `0.1` will give
+more queue/handlers to the incoming gets, which means that more gets can be processed
+at the same time and that fewer scans can be executed at the same time. A value of
+`0.9` will give more queue/handlers to scans, so the number of scans executed will
+increase and the number of gets will decrease.
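+
+Combining the two ratios, a hypothetical _hbase-site.xml_ sketch that favors reads over writes, and gets over scans (the values are illustrative only):
+
+[source,xml]
+----
+<property>
+  <name>hbase.ipc.server.callqueue.read.ratio</name>
+  <value>0.7</value>
+</property>
+<property>
+  <name>hbase.ipc.server.callqueue.scan.ratio</name>
+  <value>0.1</value>
+</property>
+----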
+
+
 [[ops.backup]]
 == HBase Backup
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/schema_design.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc b/src/main/asciidoc/_chapters/schema_design.adoc
index 28f28a5..9319c65 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -461,7 +461,101 @@ HColumnDescriptor.setKeepDeletedCells(true);
 ----
 ====
 
-See the API documentation for link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html#KEEP_DELETED_CELLS[KEEP_DELETED_CELLS] for more information.
+Let us illustrate the basic effect of setting the `KEEP_DELETED_CELLS` attribute on a table.
+
+First, without `KEEP_DELETED_CELLS` set:
+[source]
+----
+create 'test', {NAME=>'e', VERSIONS=>2147483647}
+put 'test', 'r1', 'e:c1', 'value', 10
+put 'test', 'r1', 'e:c1', 'value', 12
+put 'test', 'r1', 'e:c1', 'value', 14
+delete 'test', 'r1', 'e:c1',  11
+
+hbase(main):017:0> scan 'test', {RAW=>true, VERSIONS=>1000}
+ROW                                              COLUMN+CELL
+ r1                                              column=e:c1, timestamp=14, value=value
+ r1                                              column=e:c1, timestamp=12, value=value
+ r1                                              column=e:c1, timestamp=11, type=DeleteColumn
+ r1                                              column=e:c1, timestamp=10, value=value
+1 row(s) in 0.0120 seconds
+
+hbase(main):018:0> flush 'test'
+0 row(s) in 0.0350 seconds
+
+hbase(main):019:0> scan 'test', {RAW=>true, VERSIONS=>1000}
+ROW                                              COLUMN+CELL
+ r1                                              column=e:c1, timestamp=14, value=value
+ r1                                              column=e:c1, timestamp=12, value=value
+ r1                                              column=e:c1, timestamp=11, type=DeleteColumn
+1 row(s) in 0.0120 seconds
+
+hbase(main):020:0> major_compact 'test'
+0 row(s) in 0.0260 seconds
+
+hbase(main):021:0> scan 'test', {RAW=>true, VERSIONS=>1000}
+ROW                                              COLUMN+CELL
+ r1                                              column=e:c1, timestamp=14, value=value
+ r1                                              column=e:c1, timestamp=12, value=value
+1 row(s) in 0.0120 seconds
+----
+
+Notice how the cell masked by the delete marker (timestamp 10) is dropped at flush time, and how the delete marker itself is let go by the major compaction.
+
+Now let's run the same test, this time with `KEEP_DELETED_CELLS` set on the table (it can be set on the table or per column family):
+
+[source]
+----
+hbase(main):005:0> create 'test', {NAME=>'e', VERSIONS=>2147483647, KEEP_DELETED_CELLS => true}
+0 row(s) in 0.2160 seconds
+
+=> Hbase::Table - test
+hbase(main):006:0> put 'test', 'r1', 'e:c1', 'value', 10
+0 row(s) in 0.1070 seconds
+
+hbase(main):007:0> put 'test', 'r1', 'e:c1', 'value', 12
+0 row(s) in 0.0140 seconds
+
+hbase(main):008:0> put 'test', 'r1', 'e:c1', 'value', 14
+0 row(s) in 0.0160 seconds
+
+hbase(main):009:0> delete 'test', 'r1', 'e:c1',  11
+0 row(s) in 0.0290 seconds
+
+hbase(main):010:0> scan 'test', {RAW=>true, VERSIONS=>1000}
+ROW                                                                                          COLUMN+CELL
+ r1                                                                                          column=e:c1, timestamp=14, value=value
+ r1                                                                                          column=e:c1, timestamp=12, value=value
+ r1                                                                                          column=e:c1, timestamp=11, type=DeleteColumn
+ r1                                                                                          column=e:c1, timestamp=10, value=value
+1 row(s) in 0.0550 seconds
+
+hbase(main):011:0> flush 'test'
+0 row(s) in 0.2780 seconds
+
+hbase(main):012:0> scan 'test', {RAW=>true, VERSIONS=>1000}
+ROW                                                                                          COLUMN+CELL
+ r1                                                                                          column=e:c1, timestamp=14, value=value
+ r1                                                                                          column=e:c1, timestamp=12, value=value
+ r1                                                                                          column=e:c1, timestamp=11, type=DeleteColumn
+ r1                                                                                          column=e:c1, timestamp=10, value=value
+1 row(s) in 0.0620 seconds
+
+hbase(main):013:0> major_compact 'test'
+0 row(s) in 0.0530 seconds
+
+hbase(main):014:0> scan 'test', {RAW=>true, VERSIONS=>1000}
+ROW                                                                                          COLUMN+CELL
+ r1                                                                                          column=e:c1, timestamp=14, value=value
+ r1                                                                                          column=e:c1, timestamp=12, value=value
+ r1                                                                                          column=e:c1, timestamp=11, type=DeleteColumn
+ r1                                                                                          column=e:c1, timestamp=10, value=value
+1 row(s) in 0.0650 seconds
+----
+
+The purpose of `KEEP_DELETED_CELLS` is to avoid removing cells from HBase when the _only_ reason to remove them is the delete marker.
+So with `KEEP_DELETED_CELLS` enabled, deleted cells are still removed if you write more versions than the configured maximum, or if a TTL is set and the cells have exceeded the configured timeout.
+
 
 [[secondary.indexes]]
 ==  Secondary Indexes and Alternate Query Paths

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/tracing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/tracing.adoc b/src/main/asciidoc/_chapters/tracing.adoc
index 6bb8065..9b3711e 100644
--- a/src/main/asciidoc/_chapters/tracing.adoc
+++ b/src/main/asciidoc/_chapters/tracing.adoc
@@ -30,13 +30,13 @@
 :icons: font
 :experimental:
 
-link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449] added support for tracing requests through HBase, using the open source tracing library, link:http://github.com/cloudera/htrace[HTrace].
+link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449] added support for tracing requests through HBase, using the open source tracing library, link:http://htrace.incubator.apache.org/[HTrace].
 Setting up tracing is quite simple; however, it currently requires some very minor changes to your client code (it would not be very difficult to remove this requirement).
 
 [[tracing.spanreceivers]]
 === SpanReceivers
 
-The tracing system works by collecting information in structs called 'Spans'. It is up to you to choose how you want to receive this information by implementing the `SpanReceiver` interface, which defines one method: 
+The tracing system works by collecting information in structures called 'Spans'. It is up to you to choose how you want to receive this information by implementing the `SpanReceiver` interface, which defines one method: 
 
 [source]
 ----
@@ -57,51 +57,38 @@ The `LocalFileSpanReceiver` looks in _hbase-site.xml_      for a `hbase.local-fi
 
 <property>
   <name>hbase.trace.spanreceiver.classes</name>
-  <value>org.htrace.impl.LocalFileSpanReceiver</value>
+  <value>org.apache.htrace.impl.LocalFileSpanReceiver</value>
 </property>
 <property>
-  <name>hbase.local-file-span-receiver.path</name>
+  <name>hbase.htrace.local-file-span-receiver.path</name>
   <value>/var/log/hbase/htrace.out</value>
 </property>
 ----
 
-HTrace also provides `ZipkinSpanReceiver` which converts spans to link:http://github.com/twitter/zipkin[Zipkin] span format and send them to Zipkin server.
-In order to use this span receiver, you need to install the jar of htrace-zipkin to your HBase's classpath on all of the nodes in your cluster. 
+HTrace also provides `ZipkinSpanReceiver`, which converts spans to link:http://github.com/twitter/zipkin[Zipkin] span format and sends them to a Zipkin server. In order to use this span receiver, you need to add the htrace-zipkin jar to HBase's classpath on all of the nodes in your cluster.
 
-_htrace-zipkin_ is published to the maven central repository.
-You could get the latest version from there or just build it locally and then copy it out to all nodes, change your config to use zipkin receiver, distribute the new configuration and then (rolling) restart. 
+_htrace-zipkin_ is published to the link:http://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22org.apache.htrace%22%20AND%20a%3A%22htrace-zipkin%22[Maven central repository]. You could get the latest version from there or just build it locally (see the link:http://htrace.incubator.apache.org/[HTrace] homepage for information on how to do this) and then copy it out to all nodes.
 
-Here is the example of manual setup procedure. 
-
-----
-
-$ git clone https://github.com/cloudera/htrace
-$ cd htrace/htrace-zipkin
-$ mvn compile assembly:single
-$ cp target/htrace-zipkin-*-jar-with-dependencies.jar $HBASE_HOME/lib/
-  # copy jar to all nodes...
-----
-
-The `ZipkinSpanReceiver` looks in _hbase-site.xml_      for a `hbase.zipkin.collector-hostname` and `hbase.zipkin.collector-port` property with a value describing the Zipkin collector server to which span information are sent. 
+`ZipkinSpanReceiver` looks in _hbase-site.xml_ for properties called `hbase.htrace.zipkin.collector-hostname` and `hbase.htrace.zipkin.collector-port`, with values describing the Zipkin collector server to which span information is sent.
 
 [source,xml]
 ----
 
 <property>
   <name>hbase.trace.spanreceiver.classes</name>
-  <value>org.htrace.impl.ZipkinSpanReceiver</value>
+  <value>org.apache.htrace.impl.ZipkinSpanReceiver</value>
 </property> 
 <property>
-  <name>hbase.zipkin.collector-hostname</name>
+  <name>hbase.htrace.zipkin.collector-hostname</name>
   <value>localhost</value>
 </property> 
 <property>
-  <name>hbase.zipkin.collector-port</name>
+  <name>hbase.htrace.zipkin.collector-port</name>
   <value>9410</value>
 </property>
 ----
 
-If you do not want to use the included span receivers, you are encouraged to write your own receiver (take a look at `LocalFileSpanReceiver` for an example). If you think others would benefit from your receiver, file a JIRA or send a pull request to link:http://github.com/cloudera/htrace[HTrace]. 
+If you do not want to use the included span receivers, you are encouraged to write your own receiver (take a look at `LocalFileSpanReceiver` for an example). If you think others would benefit from your receiver, file a JIRA with the HTrace project.
 
 [[tracing.client.modifications]]
 == Client Modifications
@@ -160,8 +147,7 @@ See the HTrace _README_ for more information on Samplers.
 [[tracing.client.shell]]
 == Tracing from HBase Shell
 
-You can use +trace+ command for tracing requests from HBase Shell. +trace 'start'+ command turns on tracing and +trace
-        'stop'+ command turns off tracing. 
+You can use the `trace` command to trace requests from the HBase Shell. The `trace 'start'` command turns on tracing and the `trace 'stop'` command turns it off.
 
 [source]
 ----
@@ -171,9 +157,8 @@ hbase(main):002:0> put 'test', 'row1', 'f:', 'val1'   # traced commands
 hbase(main):003:0> trace 'stop'
 ----
 
-+trace 'start'+ and +trace 'stop'+ always returns boolean value representing if or not there is ongoing tracing.
-As a result, +trace
-        'stop'+ returns false on suceess. +trace 'status'+ just returns if or not tracing is turned on. 
+`trace 'start'` and `trace 'stop'` always return a boolean value indicating whether there is ongoing tracing.
+As a result, `trace 'stop'` returns false on success. `trace 'status'` simply returns whether tracing is turned on.
 
 [source]
 ----

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index ab3f154..6b63833 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -41,7 +41,7 @@ HBase has two versioning schemes, pre-1.0 and post-1.0. Both are detailed below.
 [[hbase.versioning.post10]]
 === Post 1.0 versions
 
-Starting with the 1.0.0 release, HBase uses link:http://semver.org/[Semantic Versioning] for its release versioning. In summary:
+Starting with the 1.0.0 release, HBase is working towards link:http://semver.org/[Semantic Versioning] for its release versioning. In summary:
 
 .Given a version number MAJOR.MINOR.PATCH, increment the:
 * MAJOR version when you make incompatible API changes,
@@ -72,10 +72,12 @@ In addition to the usual API versioning considerations HBase has other compatibi
 .Client API compatibility
 * Allow changing or removing existing client APIs.
 * An API needs to be deprecated for a major version before we will change/remove it.
+* APIs available in a patch version will be available in all later patch versions. However, new APIs may be added which will not be available in earlier patch versions.
 * Example: A user using a newly deprecated API does not need to modify application code with HBase API calls until the next major version.
 
 .Client Binary compatibility
-* Old client code can run unchanged (no recompilation needed) against new jars.
+* Client code written to APIs available in a given patch release can run unchanged (no recompilation needed) against the new jars of later patch versions.
+* Client code written to APIs available in a given patch release might not run against the old jars from an earlier patch version.
 * Example: Old compiled client code will work unchanged with the new jars.
 
 .Server-Side Limited API compatibility (taken from Hadoop)
@@ -93,7 +95,7 @@ In addition to the usual API versioning considerations HBase has other compatibi
 * Web page APIs
 
 .Summary
-* A patch upgrade is a drop-in replacement. Any change that is not Java binary compatible would not be allowed.footnote:[See http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html.]
+* A patch upgrade is a drop-in replacement. Any change that is not Java binary compatible would not be allowed.footnote:[See http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html.] Downgrading versions within patch releases may not be compatible.
 
 * A minor upgrade requires no application/client code modification. Ideally it would be a drop-in replacement but client code, coprocessors, filters, etc might have to be recompiled if new jars are used.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/dd8e926a/src/main/asciidoc/_chapters/zookeeper.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/zookeeper.adoc b/src/main/asciidoc/_chapters/zookeeper.adoc
index f6134b7..3266964 100644
--- a/src/main/asciidoc/_chapters/zookeeper.adoc
+++ b/src/main/asciidoc/_chapters/zookeeper.adoc
@@ -35,7 +35,7 @@ You can also manage the ZooKeeper ensemble independent of HBase and just point H
 To toggle HBase management of ZooKeeper, use the `HBASE_MANAGES_ZK` variable in _conf/hbase-env.sh_.
 This variable, which defaults to `true`, tells HBase whether to start/stop the ZooKeeper ensemble servers as part of HBase start/stop.
 
-When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper configuration using its native _zoo.cfg_ file, or, the easier option is to just specify ZooKeeper options directly in _conf/hbase-site.xml_.
+When HBase manages the ZooKeeper ensemble, you can specify ZooKeeper configuration directly in _conf/hbase-site.xml_.
 A ZooKeeper configuration option can be set as a property in the HBase _hbase-site.xml_ XML configuration file by prefacing the ZooKeeper option name with `hbase.zookeeper.property`.
 For example, the `clientPort` setting in ZooKeeper can be changed by setting the `hbase.zookeeper.property.clientPort` property.
 For all default values used by HBase, including ZooKeeper configuration, see <<hbase_default_configurations,hbase default configurations>>.
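 
 As a sketch, changing the client port that way looks like this (the port value `2222` is purely illustrative):
 
 [source,xml]
 ----
 <property>
   <name>hbase.zookeeper.property.clientPort</name>
   <value>2222</value>
 </property>
 ----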
@@ -124,8 +124,7 @@ To point HBase at an existing ZooKeeper cluster, one that is not managed by HBas
   export HBASE_MANAGES_ZK=false
 ----
 
-Next set ensemble locations and client port, if non-standard, in _hbase-site.xml_, or add a suitably configured _zoo.cfg_ to HBase's _CLASSPATH_.
-HBase will prefer the configuration found in _zoo.cfg_ over any settings in _hbase-site.xml_.
+Next set ensemble locations and client port, if non-standard, in _hbase-site.xml_.
 
 When HBase manages ZooKeeper, it will start/stop the ZooKeeper servers as a part of the regular start/stop scripts.
 If you would like to run ZooKeeper yourself, independent of HBase start/stop, you would do the following
@@ -312,21 +311,23 @@ Modify your _hbase-site.xml_ on each node that will run a master or regionserver
     <name>hbase.cluster.distributed</name>
     <value>true</value>
   </property>
+  <property>
+    <name>hbase.zookeeper.property.authProvider.1</name>
+    <value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.kerberos.removeHostFromPrincipal</name>
+    <value>true</value>
+  </property>
+  <property>
+    <name>hbase.zookeeper.property.kerberos.removeRealmFromPrincipal</name>
+    <value>true</value>
+  </property>
 </configuration>
 ----
 
 where `$ZK_NODES` is the comma-separated list of hostnames of the ZooKeeper Quorum hosts.
 
-Add a _zoo.cfg_ for each Zookeeper Quorum host containing:
-
-[source,java]
-----
-
-authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
-kerberos.removeHostFromPrincipal=true
-kerberos.removeRealmFromPrincipal=true
-----
-
 Also on each of these hosts, create a JAAS configuration file containing:
 
 [source,java]