Posted to commits@hbase.apache.org by zh...@apache.org on 2018/07/05 07:20:08 UTC

[01/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Repository: hbase
Updated Branches:
  refs/heads/branch-2 4653d4ac6 -> 61d706044


http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/supportingprojects.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/supportingprojects.xml b/src/site/xdoc/supportingprojects.xml
new file mode 100644
index 0000000..f949a57
--- /dev/null
+++ b/src/site/xdoc/supportingprojects.xml
@@ -0,0 +1,161 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Supporting Projects</title>
+  </properties>
+
+<body>
+<section name="Supporting Projects">
+  <p>This page is a list of projects that are related to HBase. To
+    have your project added, file a documentation JIRA or email
+    <a href="mailto:dev@hbase.apache.org">hbase-dev</a> with the relevant
+    information. If you notice out-of-date information, use the same avenues to
+    report it.
+  </p>
+  <p><b>These items are user-submitted and the HBase team assumes no responsibility for their accuracy.</b></p>
+  <h3>Projects that add new features to HBase</h3>
+  <dl>
+   <dt><a href="https://github.com/XiaoMi/themis/">Themis</a></dt>
+   <dd>Themis provides cross-row/cross-table transactions on HBase, based on
+    Google's Percolator.</dd>
+   <dt><a href="https://github.com/caskdata/tephra">Tephra</a></dt>
+   <dd>Cask Tephra provides globally consistent transactions on top of Apache
+    HBase.</dd>
+   <dt><a href="https://github.com/VCNC/haeinsa">Haeinsa</a></dt>
+   <dd>Haeinsa is a linearly scalable multi-row, multi-table transaction library
+    for HBase.</dd>
+   <dt><a href="https://github.com/juwi/HBase-TAggregator">HBase TAggregator</a></dt>
+   <dd>An HBase coprocessor for timeseries-based aggregations.</dd>
+   <dt><a href="http://trafodion.incubator.apache.org/">Apache Trafodion</a></dt>
+   <dd>Apache Trafodion is a webscale SQL-on-Hadoop solution enabling
+    transactional or operational workloads on Hadoop.</dd>
+   <dt><a href="http://phoenix.apache.org/">Apache Phoenix</a></dt>
+   <dd>Apache Phoenix is a relational database layer over HBase delivered as a
+    client-embedded JDBC driver targeting low latency queries over HBase data.</dd>
+   <dt><a href="https://github.com/cloudera/hue/tree/master/apps/hbase">Hue HBase Browser</a></dt>
+   <dd>An easy and powerful web UI for HBase, distributed with <a href="https://www.gethue.com">Hue</a>.</dd>
+   <dt><a href="https://github.com/NGDATA/hbase-indexer/tree/master/hbase-sep">HBase SEP</a></dt>
+   <dd>The HBase Side Effect Processor, a system for asynchronously and reliably
+    listening to HBase mutation events, based on HBase replication.</dd>
+   <dt><a href="https://github.com/ngdata/hbase-indexer">Lily HBase Indexer</a></dt>
+   <dd>Indexes HBase content to Solr by listening to the replication stream
+    (uses the HBase SEP).</dd>
+   <dt><a href="https://github.com/sonalgoyal/crux/">Crux</a></dt>
+   <dd>HBase reporting and analysis with support for simple and composite keys,
+    get and range scans, column-based filtering, and charting.</dd>
+   <dt><a href="https://github.com/yahoo/omid/">Omid</a></dt>
+   <dd>Lock-free transactional support on top of HBase, providing Snapshot
+    Isolation.</dd>
+   <dt><a href="http://dev.tailsweep.com/projects/parhely">Parhely</a></dt>
+   <dd>ORM for HBase</dd>
+   <dt><a href="http://code.google.com/p/hbase-writer/">HBase-Writer</a></dt>
+   <dd>A Heritrix2 processor for writing crawls to HBase.</dd>
+   <dt><a href="http://www.pigi-project.org/">Pigi Project</a></dt>
+   <dd>The Pigi Project is an ORM-like framework. It includes a configurable
+    index system and a simple object-to-HBase mapping framework (or indexing for
+    HBase, if you like). Designed for use by web applications.</dd>
+   <dt><a href="http://code.google.com/p/hbase-thrift/">hbase-thrift</a></dt>
+   <dd>hbase-thrift generates and installs Perl and Python Thrift bindings for
+    HBase.</dd>
+   <dt><a href="http://belowdeck.kissintelligentsystems.com/ohm">OHM</a></dt>
+   <dd>OHM is a weakly relational ORM for HBase which provides object mapping and
+    column indexing. It has its own compiler capable of generating interface
+    code for multiple languages. It currently supports C# (via the Thrift API),
+    with Java support in development. The compiler is easily extensible to add
+    support for other languages.</dd>
+   <dt><a href="http://datastore.googlecode.com">datastore</a></dt>
+   <dd>Aims to be an implementation of the
+    <a href="http://code.google.com/appengine/docs/python/datastore/">Google app-engine datastore</a>
+    in Java, using HBase instead of Bigtable.</dd>
+   <dt><a href="http://datanucleus.org">DataNucleus</a></dt>
+   <dd>DataNucleus is a Java JDO/JPA/REST implementation. It supports HBase and
+    many other datastores.</dd>
+   <dt><a href="http://github.com/impetus-opensource/Kundera">Kundera</a></dt>
+   <dd>Kundera is a JPA 2.0 based object-datastore mapping library for HBase,
+    Cassandra and MongoDB.</dd>
+   <dt><a href="http://github.com/zohmg/zohmg/tree/master">Zohmg</a></dt>
+   <dd>Zohmg is a time-series data store that uses HBase as its backing store.</dd>
+   <dt><a href="http://grails.org/plugin/gorm-hbase">Grails Support</a></dt>
+   <dd>Grails HBase plug-in.</dd>
+   <dt><a href="http://www.bigrecord.org">BigRecord</a></dt>
+   <dd>BigRecord is an active_record-based object mapping layer for Ruby on Rails.</dd>
+   <dt><a href="http://github.com/greglu/hbase-stargate">hbase-stargate</a></dt>
+   <dd>Ruby client for HBase Stargate.</dd>
+   <dt><a href="http://github.com/ghelmling/meetup.beeno">Meetup.Beeno</a></dt>
+   <dd>Meetup.Beeno is a simple HBase Java "beans" mapping framework based on
+    annotations. It includes a rudimentary high level query API that generates
+    the appropriate server-side filters.</dd>
+   <dt><a href="http://www.springsource.org/spring-data/hadoop">Spring Hadoop</a></dt>
+   <dd>The Spring Hadoop project provides support for writing Apache Hadoop
+    applications that benefit from the features of Spring, Spring Batch and
+    Spring Integration.</dd>
+   <dt><a href="https://jira.springsource.org/browse/SPR-5950">Spring Framework HBase Template</a></dt>
+   <dd>Spring Framework HBase Template provides HBase data access templates
+    similar to what is provided in Spring for JDBC, Hibernate, iBatis, etc.
+    If you find this useful, please vote for its inclusion in the Spring Framework.</dd>
+   <dt><a href="http://github.com/davidsantiago/clojure-hbase">Clojure-HBase</a></dt>
+   <dd>A library for convenient access to HBase from Clojure.</dd>
+   <dt><a href="http://www.lilyproject.org/lily/about/playground/hbaseindexes.html">HBase indexing library</a></dt>
+   <dd>A library for building and querying HBase-table-based indexes.</dd>
+   <dt><a href="http://github.com/akkumar/hbasene">HBasene</a></dt>
+   <dd>Lucene+HBase - Using HBase as the backing store for the TF-IDF
+    representations needed by Lucene. Also contains a library for constructing
+    Lucene indices from an HBase schema.</dd>
+   <dt><a href="http://github.com/larsgeorge/jmxtoolkit">JMXToolkit</a></dt>
+   <dd>An HBase-tailored JMX toolkit enabling monitoring with Cacti and checking
+    with Nagios or similar.</dd>
+   <dt><a href="http://github.com/ykulbak/ihbase">IHBASE</a></dt>
+   <dd>IHBASE provides faster scans by indexing regions; each region has its own
+    index. The indexed columns are user-defined, and indexes can be intersected or
+    joined in a single query.</dd>
+   <dt><a href="http://github.com/apurtell/hbase-ec2">HBASE EC2 scripts</a></dt>
+   <dd>This collection of bash scripts allows you to run HBase clusters on
+    Amazon's Elastic Compute Cloud (EC2) service with best practices baked in.</dd>
+   <dt><a href="http://github.com/apurtell/hbase-stargate">Stargate</a></dt>
+   <dd>Stargate provides an enhanced RESTful interface.</dd>
+   <dt><a href="http://github.com/hbase-trx/hbase-transactional-tableindexed">HBase-trx</a></dt>
+   <dd>HBase-trx provides Transactional (JTA) and indexed extensions of HBase.</dd>
+   <dt><a href="http://github.com/simplegeo/python-hbase-thrift">HBase Thrift Python client Debian package</a></dt>
+   <dd>Debian packages for the HBase Thrift Python client (see the readme for
+    sources.list setup).</dd>
+   <dt><a href="http://github.com/amitrathore/capjure">capjure</a></dt>
+   <dd>capjure is a persistence helper for HBase. It is written in the Clojure
+    language, and supports persisting of native hash-maps.</dd>
+   <dt><a href="http://github.com/sematext/HBaseHUT">HBaseHUT</a></dt>
+   <dd>HBaseHUT (High Update Throughput for HBase) focuses on write performance
+    during record updates, avoiding a Get on every Put used to update a record.</dd>
+   <dt><a href="http://github.com/sematext/HBaseWD">HBaseWD</a></dt>
+   <dd>HBase Writes Distributor spreads records over the cluster even when their
+    keys are sequential, while still allowing fast range scans over them.</dd>
+   <dt><a href="http://code.google.com/p/hbase-jdo/">HBase UI Tool &amp; Util</a></dt>
+   <dd>HBase UI Tool &amp; Util is an HBase UI client and simple util module.
+    It makes working with HBase easier, in a JDO-like style (it is not a persistence API).</dd>
+  </dl>
+  <h3>Example HBase Applications</h3>
+  <ul>
+    <li><a href="http://github.com/andreisavu/feedaggregator">HBase powered feed aggregator</a>
+    by Savu Andrei -- 200909</li>
+  </ul>
+</section>
+</body>
+</document>


[08/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/security.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/security.adoc b/src/main/asciidoc/_chapters/security.adoc
index ef7d6c4..dae6c53 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -662,6 +662,7 @@ You also need to enable the DataBlockEncoder for the column family, for encoding
 You can enable compression of each tag in the WAL, if WAL compression is also enabled, by setting the value of `hbase.regionserver.wal.tags.enablecompression` to `true` in _hbase-site.xml_.
 Tag compression uses dictionary encoding.
 
+Coprocessors that run server-side on RegionServers can perform get and set operations on cell Tags. Tags are stripped out at the RPC layer before the read response is sent back, so clients do not see these tags.
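+
+A minimal, hypothetical sketch of a coprocessor reading tags follows (the class name `TagReadingObserver` is illustrative; it assumes the server-side cell implementations expose their tags via `RawCell`):
+
+[source,java]
+----
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Optional;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.RawCell;
+import org.apache.hadoop.hbase.Tag;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.RegionObserver;
+
+public class TagReadingObserver implements RegionCoprocessor, RegionObserver {
+  @Override
+  public Optional<RegionObserver> getRegionObserver() {
+    return Optional.of(this);
+  }
+
+  @Override
+  public void postGetOp(ObserverContext<RegionCoprocessorEnvironment> c,
+      Get get, List<Cell> results) throws IOException {
+    // Server-side, the cells still carry their tags; clients never see them.
+    for (Cell cell : results) {
+      Iterator<Tag> tags = ((RawCell) cell).getTags();
+      while (tags.hasNext()) {
+        Tag tag = tags.next();
+        // Inspect tag.getType() and the tag value here as needed.
+      }
+    }
+  }
+}
+----
+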
 Tag compression is not supported when using WAL encryption.
 
 [[hbase.accesscontrol.configuration]]
@@ -1086,7 +1087,6 @@ public static void revokeFromTable(final HBaseTestingUtility util, final String
 . Showing a User's Effective Permissions
 +
 .HBase Shell
-====
 ----
 hbase> user_permission 'user'
 
@@ -1094,7 +1094,6 @@ hbase> user_permission '.*'
 
 hbase> user_permission JAVA_REGEX
 ----
-====
 
 .API
 ====
@@ -1234,11 +1233,9 @@ Refer to the official API for usage instructions.
 . Define the List of Visibility Labels
 +
 .HBase Shell
-====
 ----
 hbase> add_labels [ 'admin', 'service', 'developer', 'test' ]
 ----
-====
 +
 .Java API
 ====
@@ -1265,7 +1262,6 @@ public static void addLabels() throws Exception {
 . Associate Labels with Users
 +
 .HBase Shell
-====
 ----
 hbase> set_auths 'service', [ 'service' ]
 ----
@@ -1281,7 +1277,6 @@ hbase> set_auths 'qa', [ 'test', 'developer' ]
 ----
 hbase> set_auths '@qagroup', [ 'test' ]
 ----
-====
 +
 .Java API
 ====
@@ -1305,7 +1300,6 @@ public void testSetAndGetUserAuths() throws Throwable {
 . Clear Labels From Users
 +
 .HBase Shell
-====
 ----
 hbase> clear_auths 'service', [ 'service' ]
 ----
@@ -1321,7 +1315,6 @@ hbase> clear_auths 'qa', [ 'test', 'developer' ]
 ----
 hbase> clear_auths '@qagroup', [ 'test', 'developer' ]
 ----
-====
 +
 .Java API
 ====
@@ -1345,7 +1338,6 @@ The label is only applied when data is written.
 The label is associated with a given version of the cell.
 +
 .HBase Shell
-====
 ----
 hbase> set_visibility 'user', 'admin|service|developer', { COLUMNS => 'i' }
 ----
@@ -1357,7 +1349,6 @@ hbase> set_visibility 'user', 'admin|service', { COLUMNS => 'pii' }
 ----
 hbase> set_visibility 'user', 'test', { COLUMNS => [ 'i', 'pii' ], FILTER => "(PrefixFilter ('test'))" }
 ----
-====
 +
 NOTE: HBase Shell support for applying labels or permissions to cells is for testing and verification support, and should not be employed for production use because it won't apply the labels to cells that don't exist yet.
 The correct way to apply cell level labels is to do so in the application code when storing the values.
@@ -1408,12 +1399,10 @@ set as an additional filter. It will further filter your results, rather than
 giving you additional authorization.
 
 .HBase Shell
-====
 ----
 hbase> get_auths 'myUser'
 hbase> scan 'table1', AUTHORIZATIONS => ['private']
 ----
-====
 
 .Java API
 ====

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/shell.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/shell.adoc b/src/main/asciidoc/_chapters/shell.adoc
index 13b8dd1..5612e1d 100644
--- a/src/main/asciidoc/_chapters/shell.adoc
+++ b/src/main/asciidoc/_chapters/shell.adoc
@@ -145,7 +145,6 @@ For instance, if your script creates a table, but returns a non-zero exit value,
 You can enter HBase Shell commands into a text file, one command per line, and pass that file to the HBase Shell.
 
 .Example Command File
-====
 ----
 create 'test', 'cf'
 list 'test'
@@ -158,7 +157,6 @@ get 'test', 'row1'
 disable 'test'
 enable 'test'
 ----
-====
 
 .Directing HBase Shell to Execute the Commands
 ====
@@ -227,7 +225,7 @@ The table reference can be used to perform data read write operations such as pu
 For example, previously you would always specify a table name:
 
 ----
-hbase(main):000:0> create ‘t’, ‘f’
+hbase(main):000:0> create 't', 'f'
 0 row(s) in 1.0970 seconds
 hbase(main):001:0> put 't', 'rold', 'f', 'v'
 0 row(s) in 0.0080 seconds
@@ -291,7 +289,7 @@ hbase(main):012:0> tab = get_table 't'
 0 row(s) in 0.0010 seconds
 
 => Hbase::Table - t
-hbase(main):013:0> tab.put ‘r1’ ,’f’, ‘v’
+hbase(main):013:0> tab.put 'r1' ,'f', 'v'
 0 row(s) in 0.0100 seconds
 hbase(main):014:0> tab.scan
 ROW                                COLUMN+CELL
@@ -305,7 +303,7 @@ You can then use jruby to script table operations based on these names.
 The list_snapshots command also acts similarly.
 
 ----
-hbase(main):016 > tables = list(‘t.*’)
+hbase(main):016 > tables = list('t.*')
 TABLE
 t
 1 row(s) in 0.1040 seconds

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/tracing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/tracing.adoc b/src/main/asciidoc/_chapters/tracing.adoc
index 8bd1962..7305aa8 100644
--- a/src/main/asciidoc/_chapters/tracing.adoc
+++ b/src/main/asciidoc/_chapters/tracing.adoc
@@ -30,8 +30,10 @@
 :icons: font
 :experimental:
 
-link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449] added support for tracing requests through HBase, using the open source tracing library, link:https://htrace.incubator.apache.org/[HTrace].
-Setting up tracing is quite simple, however it currently requires some very minor changes to your client code (it would not be very difficult to remove this requirement).
+HBase includes facilities for tracing requests using the open source tracing library, link:https://htrace.incubator.apache.org/[Apache HTrace].
+Setting up tracing is quite simple; however, it currently requires some very minor changes to your client code (this requirement may be removed in the future).
+
+Support for this feature using HTrace 3 in HBase was added in link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449]. Starting with HBase 2.0, there was a non-compatible update to HTrace 4 via link:https://issues.apache.org/jira/browse/HBASE-18601[HBASE-18601]. The examples provided in this section will be using HTrace 4 package names, syntax, and conventions. For older examples, please consult previous versions of this guide.
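+
+For orientation, a minimal client-side sketch in the HTrace 4 style follows (it assumes an open `Connection` named `connection` and that span receivers are configured as described below; treat the `TraceUtil.createTrace` helper call as illustrative):
+
+[source,java]
+----
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.trace.TraceUtil;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.htrace.core.TraceScope;
+
+// Operations performed while the scope is open are traced.
+try (TraceScope scope = TraceUtil.createTrace("myGets")) {
+  Table table = connection.getTable(TableName.valueOf("t1"));
+  Get get = new Get(Bytes.toBytes("r1"));
+  Result res = table.get(get);
+}
+----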
 
 [[tracing.spanreceivers]]
 === SpanReceivers

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index eb62b33..0340105 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -102,9 +102,9 @@ To disable, set the logging level back to `INFO` level.
 === JVM Garbage Collection Logs
 
 [NOTE]
-----
+====
 All example Garbage Collection logs in this section are based on Java 8 output. The introduction of Unified Logging in Java 9 and newer will result in very different looking logs.
-----
+====
 
 HBase is memory intensive, and using the default GC you can see long pauses in all threads including the _Juliet Pause_ aka "GC of Death". To help debug this or confirm this is happening GC logging can be turned on in the Java virtual machine.
 
@@ -806,10 +806,12 @@ The HDFS directory structure of HBase tables in the cluster is...
 ----
 
 /hbase
-    /<Table>                    (Tables in the cluster)
-        /<Region>               (Regions for the table)
-            /<ColumnFamily>     (ColumnFamilies for the Region for the table)
-                /<StoreFile>    (StoreFiles for the ColumnFamily for the Regions for the table)
+    /data
+        /<Namespace>                    (Namespaces in the cluster)
+            /<Table>                    (Tables in the cluster)
+                /<Region>               (Regions for the table)
+                    /<ColumnFamily>     (ColumnFamilies for the Region for the table)
+                        /<StoreFile>    (StoreFiles for the ColumnFamily for the Regions for the table)
 ----
 
 The HDFS directory structure of HBase WAL is..
@@ -817,7 +819,7 @@ The HDFS directory structure of HBase WAL is..
 ----
 
 /hbase
-    /.logs
+    /WALs
         /<RegionServer>    (RegionServers)
             /<WAL>         (WAL files for the RegionServer)
 ----
@@ -827,7 +829,7 @@ See the link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hd
 [[trouble.namenode.0size.hlogs]]
 ==== Zero size WALs with data in them
 
-Problem: when getting a listing of all the files in a RegionServer's _.logs_ directory, one file has a size of 0 but it contains data.
+Problem: when getting a listing of all the files in a RegionServer's _WALs_ directory, one file has a size of 0 but it contains data.
 
 Answer: It's an HDFS quirk.
 A file that's currently being written to will appear to have a size of 0 but once it's closed it will show its true size
@@ -941,6 +943,96 @@ java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
 \... then there is a path issue with the compression libraries.
 See the Configuration section on link:[LZO compression configuration].
 
+[[trouble.rs.startup.hsync]]
+==== RegionServer aborts due to lack of hsync for filesystem
+
+In order to provide data durability for writes to the cluster, HBase relies on the ability to durably save state in a write ahead log. When using a version of Apache Hadoop Common's filesystem API that supports checking on the availability of needed calls, HBase will proactively abort the cluster if it finds it can't operate safely.
+
+For RegionServer roles, the failure will show up in logs like this:
+
+----
+2018-04-05 11:36:22,785 ERROR [regionserver/192.168.1.123:16020] wal.AsyncFSWALProvider: The RegionServer async write ahead log provider relies on the ability to call hflush and hsync for proper operation during component failures, but the current FileSystem does not support doing so. Please check the config value of 'hbase.wal.dir' and ensure it points to a FileSystem mount that has suitable capabilities for output streams.
+2018-04-05 11:36:22,799 ERROR [regionserver/192.168.1.123:16020] regionserver.HRegionServer: ***** ABORTING region server 192.168.1.123,16020,1522946074234: Unhandled: cannot get log writer *****
+java.io.IOException: cannot get log writer
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:112)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:612)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:124)
+        at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:759)
+        at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:489)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.<init>(AsyncFSWAL.java:251)
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:69)
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:44)
+        at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:138)
+        at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:57)
+        at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:252)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2105)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1326)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1191)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1007)
+        at java.lang.Thread.run(Thread.java:745)
+Caused by: org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: hflush and hsync
+        at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:69)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:168)
+        at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:167)
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:99)
+        ... 15 more
+
+----
+
+If you are attempting to run in standalone mode and see this error, please walk back through the section <<quickstart>> and ensure you have included *all* the given configuration settings.
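+
+One setting commonly involved here when running standalone over the local filesystem is the stream capability enforcement check; a minimal _hbase-site.xml_ sketch (appropriate only for standalone and testing deployments, never production):
+
+[source,xml]
+----
+<property>
+  <name>hbase.unsafe.stream.capability.enforce</name>
+  <value>false</value>
+</property>
+----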
+
+[[trouble.rs.startup.asyncfs]]
+==== RegionServer aborts due to being unable to initialize access to HDFS
+
+We will try to use _AsyncFSWAL_ for HBase-2.x as it has better performance while consuming fewer resources. The problem with _AsyncFSWAL_ is that it hacks into the internals of the DFSClient implementation, so it can easily be broken when upgrading Hadoop, even by a simple patch release.
+
+If you do not specify the WAL provider, we will try to fall back to the old _FSHLog_ if we fail to initialize _AsyncFSWAL_, but it may not always work. The failure will show up in logs like this:
+
+----
+18/07/02 18:51:06 WARN concurrent.DefaultPromise: An exception was
+thrown by org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete()
+java.lang.Error: Couldn't properly initialize access to HDFS
+internals. Please update your WAL Provider to not make use of the
+'asyncfs' provider. See HBASE-16110 for more information.
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:268)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
+     at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:638)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:676)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:552)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
+     at java.lang.Thread.run(Thread.java:748)
+ Caused by: java.lang.NoSuchMethodException:
+org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
+     at java.lang.Class.getDeclaredMethod(Class.java:2130)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
+     ... 18 more
+----
+
+If you hit this error, please specify _FSHLog_, i.e., _filesystem_, explicitly in your config file.
+
+[source,xml]
+----
+<property>
+  <name>hbase.wal.provider</name>
+  <value>filesystem</value>
+</property>
+----
+
+Please also send an email to user@hbase.apache.org or dev@hbase.apache.org reporting the failure along with your Hadoop version; we will try to fix the problem as soon as possible in the next release.
+
 [[trouble.rs.runtime]]
 === Runtime Errors
 
@@ -1127,6 +1219,29 @@ Sure fire solution is to just use Hadoop dfs to delete the HBase root and let HB
 
 If you have many regions on your cluster and you see an error like that reported above in this sections title in your logs, see link:https://issues.apache.org/jira/browse/HBASE-4246[HBASE-4246 Cluster with too many regions cannot withstand some master failover scenarios].
 
+[[trouble.master.startup.hsync]]
+==== Master fails to become active due to lack of hsync for filesystem
+
+HBase's internal framework for cluster operations requires the ability to durably save state in a write ahead log. When using a version of Apache Hadoop Common's filesystem API that supports checking on the availability of needed calls, HBase will proactively abort the cluster if it finds it can't operate safely.
+
+For Master roles, the failure will show up in logs like this:
+
+----
+2018-04-05 11:18:44,653 ERROR [Thread-21] master.HMaster: Failed to become active master
+java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.
+        at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1034)
+        at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:374)
+        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:530)
+        at org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1267)
+        at org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1173)
+        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:881)
+        at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2048)
+        at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:568)
+        at java.lang.Thread.run(Thread.java:745)
+----
+
+If you are attempting to run in standalone mode and see this error, please walk back through the section <<quickstart>> and ensure you have included *all* the given configuration settings.
+
 [[trouble.master.shutdown]]
 === Shutdown Errors
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/unit_testing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc b/src/main/asciidoc/_chapters/unit_testing.adoc
index e503f81..3329a75 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -327,7 +327,5 @@ A record is inserted, a Get is performed from the same table, and the insertion
 
 NOTE: Starting the mini-cluster takes about 20-30 seconds, but that should be appropriate for integration testing.
 
-To use an HBase mini-cluster on Microsoft Windows, you need to use a Cygwin environment.
-
 See the paper at link:http://blog.sematext.com/2010/08/30/hbase-case-study-using-hbasetestingutility-for-local-testing-development/[HBase Case-Study: Using HBaseTestingUtility for Local Testing and
                 Development] (2010) for more information about HBaseTestingUtility.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index ef20c7d..bc2ec1c 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -314,6 +314,411 @@ Quitting...
 
 == Upgrade Paths
 
+[[upgrade2.0]]
+=== Upgrading from 1.x to 2.x
+
+In this section we will first call out significant changes compared to the prior stable HBase release and then go over the upgrade process. Be sure to read the former with care so you avoid surprises.
+
+==== Changes of Note!
+
+First we'll cover deployment / operational changes that you might hit when upgrading to HBase 2.0+. After that we'll call out changes for downstream applications. Please note that Coprocessors are covered in the operational section. Also note that this section is not meant to convey information about new features that may be of interest to you. For a complete summary of changes, please see the CHANGES.txt file in the source release artifact for the version you are planning to upgrade to.
+
+[[upgrade2.0.basic.requirements]]
+.Update to basic prerequisite minimums in HBase 2.0+
+As noted in the section <<basic.prerequisites>>, HBase 2.0+ requires a minimum of Java 8 and Hadoop 2.6. The HBase community recommends ensuring you have already completed any needed upgrades in prerequisites prior to upgrading your HBase version.
+
+[[upgrade2.0.hbck]]
+.HBCK must match HBase server version
+You *must not* use an HBase 1.x version of HBCK against an HBase 2.0+ cluster. HBCK is strongly tied to the HBase server version. Using the HBCK tool from an earlier release against an HBase 2.0+ cluster will destructively alter said cluster in unrecoverable ways.
+
+As of HBase 2.0, HBCK is a read-only tool that can report the status of some non-public system internals. You should not rely on the format nor content of these internals to remain consistent across HBase releases.
+
+////
+Link to a ref guide section on HBCK in 2.0 that explains use and calls out the inability of clients and server sides to detect version of each other.
+////
+
+[[upgrade2.0.removed.configs]]
+.Configuration settings no longer in HBase 2.0+
+
+The following configuration settings are no longer applicable or available. For details, please see the detailed release notes.
+
+* hbase.config.read.zookeeper.config (see <<upgrade2.0.zkconfig>> for migration details)
+* hbase.zookeeper.useMulti (HBase now always uses ZK's multi functionality)
+* hbase.rpc.client.threads.max
+* hbase.rpc.client.nativetransport
+* hbase.fs.tmp.dir
+// These next two seem worth a call out section?
+* hbase.bucketcache.combinedcache.enabled
+* hbase.bucketcache.ioengine no longer supports the 'heap' value.
+* hbase.bulkload.staging.dir
+* hbase.balancer.tablesOnMaster wasn't removed, strictly speaking, but its meaning has fundamentally changed and users should not set it. See the section <<upgrade2.0.regions.on.master>> for details.
+* hbase.master.distributed.log.replay (see <<upgrade2.0.distributed.log.replay>> for details)
+* hbase.regionserver.disallow.writes.when.recovering (see <<upgrade2.0.distributed.log.replay>> for details)
+* hbase.regionserver.wal.logreplay.batch.size (see <<upgrade2.0.distributed.log.replay>> for details)
+* hbase.master.catalog.timeout
+* hbase.regionserver.catalog.timeout
+* hbase.metrics.exposeOperationTimes
+* hbase.metrics.showTableName
+* hbase.online.schema.update.enable (HBase now always supports this)
+* hbase.thrift.htablepool.size.max
+
+[[upgrade2.0.renamed.configs]]
+.Configuration properties that were renamed in HBase 2.0+
+
+The following properties have been renamed. Attempts to set the old property will be ignored at run time.
+
+.Renamed properties
+[options="header"]
+|============================================================================================================
+|Old name |New name
+|hbase.rpc.server.nativetransport |hbase.netty.nativetransport
+|hbase.netty.rpc.server.worker.count |hbase.netty.worker.count
+|hbase.hfile.compactions.discharger.interval |hbase.hfile.compaction.discharger.interval
+|hbase.hregion.percolumnfamilyflush.size.lower.bound |hbase.hregion.percolumnfamilyflush.size.lower.bound.min
+|============================================================================================================
+
+[[upgrade2.0.changed.defaults]]
+.Configuration settings with different defaults in HBase 2.0+
+
+The following configuration settings changed their default value. Where applicable, the value to set to restore the behavior of HBase 1.2 is given.
+
+* hbase.security.authorization now defaults to false. Set it to true to restore the same behavior as the previous default.
+* hbase.client.retries.number is now set to 10. Previously it was 35. Downstream users are advised to use client timeouts as described in section <<config_timeouts>> instead.
+* hbase.client.serverside.retries.multiplier is now set to 3. Previously it was 10. Downstream users are advised to use client timeouts as described in section <<config_timeouts>> instead.
+* hbase.master.fileSplitTimeout is now set to 10 minutes. Previously it was 30 seconds.
+* hbase.regionserver.logroll.multiplier is now set to 0.5. Previously it was 0.95. This change is tied with the following doubling of block size. Combined, these two configuration changes should make for WALs of about the same size as those in hbase-1.x but there should be less incidence of small blocks because we fail to roll the WAL before we hit the blocksize threshold. See link:https://issues.apache.org/jira/browse/HBASE-19148[HBASE-19148] for discussion.
+* hbase.regionserver.hlog.blocksize defaults to 2x the HDFS default block size for the WAL dir. Previously it was equal to the HDFS default block size for the WAL dir.
+* hbase.client.start.log.errors.counter changed to 5. Previously it was 9.
+* hbase.ipc.server.callqueue.type changed to 'fifo'. In HBase versions 1.0 - 1.2 it was 'deadline'. In prior and later 1.x versions it already defaults to 'fifo'.
+* hbase.hregion.memstore.chunkpool.maxsize is 1.0 by default. Previously it was 0.0. Effectively, this means that previously we would not use a chunk pool when the memstore is on-heap, and now we will. See the section <<gcpause>> for more information about the MSLAB chunk pool.
+* hbase.master.cleaner.interval is now set to 10 minutes. Previously it was 1 minute.
+* hbase.master.procedure.threads will now default to 1/4 of the number of available CPUs, but not less than 16 threads. Previously the number of threads was equal to the number of CPUs.
+* hbase.hstore.blockingStoreFiles is now 16. Previously it was 10.
+* hbase.http.max.threads is now 16. Previously it was 10.
+* hbase.client.max.perserver.tasks is now 2. Previously it was 5.
+* hbase.normalizer.period is now 5 minutes. Previously it was 30 minutes.
+* hbase.regionserver.region.split.policy is now SteppingSplitPolicy. Previously it was IncreasingToUpperBoundRegionSplitPolicy.
+* replication.source.ratio is now 0.5. Previously it was 0.1.
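+
+For example, to restore the HBase 1.2 behavior for two of the settings above, a minimal _hbase-site.xml_ sketch:
+
+[source,xml]
+----
+<property>
+  <name>hbase.security.authorization</name>
+  <value>true</value>
+</property>
+<property>
+  <name>hbase.client.retries.number</name>
+  <value>35</value>
+</property>
+----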
+
+[[upgrade2.0.regions.on.master]]
+."Master hosting regions" feature broken and unsupported
+
+The feature "Master acts as region server" and associated follow-on work available in HBase 1.y is non-functional in HBase 2.y and should not be used in a production setting due to deadlock on Master initialization. Downstream users are advised to treat related configuration settings as experimental and the feature as inappropriate for production settings.
+
+A brief summary of related changes:
+
+* Master no longer carries regions by default
+* hbase.balancer.tablesOnMaster is a boolean, defaulting to false (if it holds an HBase 1.x list of tables, it will default to false)
+* hbase.balancer.tablesOnMaster.systemTablesOnly is a boolean used to keep user tables off the master. Default false
+* Those wishing to replicate the old list-of-servers config should deploy a stand-alone RegionServer process and then rely on Region Server Groups
+
+[[upgrade2.0.distributed.log.replay]]
+."Distributed Log Replay" feature broken and removed
+
+The Distributed Log Replay feature was broken and has been removed from HBase 2.y+. As a consequence all related configs, metrics, RPC fields, and logging have also been removed. Note that this feature was found to be unreliable in the run up to HBase 1.0, defaulted to being unused, and was effectively removed in HBase 1.2.0 when we started ignoring the config that turns it on (link:https://issues.apache.org/jira/browse/HBASE-14465[HBASE-14465]). If you are currently using the feature, be sure to perform a clean shutdown, ensure all DLR work is complete, and disable the feature prior to upgrading.
+
+[[upgrade2.0.prefix-tree.removed]]
+._prefix-tree_ encoding removed
+
+The prefix-tree encoding was removed from HBase 2.0.0 (link:https://issues.apache.org/jira/browse/HBASE-19179[HBASE-19179]).
+It was (late!) deprecated in hbase-1.2.7, hbase-1.4.0, and hbase-1.3.2.
+
+This feature was removed because it was not being actively maintained. If interested in reviving this
+sweet facility, which improved random read latencies at the expense of slower writes,
+write the HBase developers list at _dev at hbase dot apache dot org_.
+
+The prefix-tree encoding needs to be removed from all tables before upgrading to HBase 2.0+.
+To do that, first change the encoding from PREFIX_TREE to something else that is supported in HBase 2.0,
+then major compact the tables that previously used PREFIX_TREE encoding.
+To check which column families are using an incompatible data block encoding you can use the <<ops.pre-upgrade,Pre-Upgrade Validator>>.
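+
+For illustration, a sketch of the shell steps for a hypothetical table `mytable` with family `cf` (`FAST_DIFF` is just one example of an encoding supported in HBase 2.0):
+
+----
+hbase> alter 'mytable', { NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST_DIFF' }
+hbase> major_compact 'mytable'
+----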
+
+[[upgrade2.0.metrics]]
+.Changed metrics
+
+The following metrics have changed names:
+
+* Metrics previously published under the name "AssignmentManger" [sic] are now published under the name "AssignmentManager"
+
+The following metrics have changed their meaning:
+
+* The metric 'blockCacheEvictionCount' published on a per-region server basis no longer includes blocks removed from the cache due to the invalidation of the hfiles they are from (e.g. via compaction).
+* The metric 'totalRequestCount' increments once per request; previously it incremented by the number of `Actions` carried in the request; e.g. if a request was a `multi` made of four Gets and two Puts, we'd increment 'totalRequestCount' by six; now we increment by one regardless. Expect to see lower values for this metric in hbase-2.0.0.
+* The 'readRequestCount' now counts only reads that return a non-empty row, where older versions of HBase incremented 'readRequestCount' whether or not a Result was returned. This change will flatten the profile of the read-requests graphs if many requests are for non-existent rows. A YCSB read-heavy workload can do this, depending on how the database was loaded.
+
+The following metrics have been removed:
+
+* Metrics related to the Distributed Log Replay feature are no longer present. They were previously found in the region server context under the name 'replay'. See the section <<upgrade2.0.distributed.log.replay>> for details.
+
+The following metrics have been added:
+
+* 'totalRowActionRequestCount' is a count of region row actions summing reads and writes.
+
+[[upgrade2.0.logging]]
+.Changed logging
+HBase-2.0.0 now uses link:https://www.slf4j.org/[slf4j] as its logging frontend.
+Previously, we used link:http://logging.apache.org/log4j/1.2/[log4j (1.2)].
+For most users the transition should be seamless; slf4j does a good job interpreting
+_log4j.properties_ logging configuration files such that you should not notice
+any difference in your log system emissions.
+
+That said, your _log4j.properties_ may need freshening. See link:https://issues.apache.org/jira/browse/HBASE-20351[HBASE-20351]
+for example, where a stale log configuration file manifested as netty configuration
+being dumped at DEBUG level as a preamble on every shell command invocation.
+
+[[upgrade2.0.zkconfig]]
+.ZooKeeper configs no longer read from zoo.cfg
+
+HBase no longer optionally reads the 'zoo.cfg' file for ZooKeeper related configuration settings. If you previously relied on the 'hbase.config.read.zookeeper.config' config for this functionality, you should migrate any needed settings to the hbase-site.xml file while adding the prefix 'hbase.zookeeper.property.' to each property name.
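+
+For example, a `clientPort` setting that previously came from _zoo.cfg_ would move into _hbase-site.xml_ as:
+
+[source,xml]
+----
+<property>
+  <name>hbase.zookeeper.property.clientPort</name>
+  <value>2181</value>
+</property>
+----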
+
+[[upgrade2.0.permissions]]
+.Changes in permissions
+The following permission related changes either altered semantics or defaults:
+
+* Permissions granted to a user now merge with existing permissions for that user, rather than over-writing them. (see link:https://issues.apache.org/jira/browse/HBASE-17472[the release note on HBASE-17472] for details)
+* Region Server Group commands (added in 1.4.0) now require admin privileges.
+
+[[upgrade2.0.admin.commands]]
+.Most Admin APIs don't work against an HBase 2.0+ cluster from pre-HBase 2.0 clients
+
+A number of admin commands are known to not work when used from a pre-HBase 2.0 client. This includes an HBase Shell that has the library jars from pre-HBase 2.0. You will need to plan for an outage of admin API and command use until you can also update to the needed client version.
+
+The following client operations do not work against HBase 2.0+ cluster when executed from a pre-HBase 2.0 client:
+
+* list_procedures
+* split
+* merge_region
+* list_quotas
+* enable_table_replication
+* disable_table_replication
+* Snapshot related commands
+
+.Admin commands deprecated in 1.0 have been removed.
+
+The following commands that were deprecated in 1.0 have been removed. Where applicable the replacement command is listed.
+
+* The 'hlog' command has been removed. Downstream users should rely on the 'wal' command instead.
+
+[[upgrade2.0.memory]]
+.Region Server memory consumption changes.
+
+Users upgrading from versions prior to HBase 1.4 should read the instructions in section <<upgrade1.4.memory>>.
+
+Additionally, HBase 2.0 has changed how memstore memory is tracked for flushing decisions. Previously, both the data size and overhead for storage were used to calculate utilization against the flush threshold. Now, only data size is used to make these per-region decisions. Globally, the addition of the storage overhead is used to make decisions about forced flushes.
+
+[[upgrade2.0.ui.splitmerge.by.row]]
+.Web UI for splitting and merging operate on row prefixes
+
+Previously, the Web UI included functionality on table status pages to merge or split based on an encoded region name. In HBase 2.0, this functionality instead works by taking a row prefix.
+
+[[upgrade2.0.replication]]
+.Special upgrading for Replication users from pre-HBase 1.4
+
+Users running versions of HBase prior to the 1.4.0 release that make use of replication should be sure to read the instructions in the section <<upgrade1.4.replication>>.
+
+[[upgrade2.0.shell]]
+.HBase shell changes
+
+The HBase shell command relies on a bundled JRuby instance. This bundled JRuby has been updated from version 1.6.8 to version 9.1.10.0. This represents a change from Ruby 1.8 to Ruby 2.3.3, which introduces non-compatible language changes for user scripts.
+
+The HBase shell command now ignores the '--return-values' flag that was present in early HBase 1.4 releases. Instead the shell always behaves as though that flag were passed. If you wish to avoid having expression results printed in the console you should alter your IRB configuration as noted in the section <<irbrc>>.
+
+[[upgrade2.0.coprocessors]]
+.Coprocessor APIs have changed in HBase 2.0+
+
+All Coprocessor APIs have been refactored to improve supportability around binary API compatibility for future versions of HBase. If you or applications you rely on have custom HBase coprocessors, you should read link:https://issues.apache.org/jira/browse/HBASE-18169[the release notes for HBASE-18169] for details of changes you will need to make prior to upgrading to HBase 2.0+.
+
+For example, if you had a BaseRegionObserver in HBase 1.2, then at a minimum you will need to update it to implement both RegionObserver and RegionCoprocessor and add the `getRegionObserver()` method, as in the sketch below (the class name is illustrative):
+
+[source,java]
+----
+import java.util.Optional;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.RegionObserver;
+
+public class MyObserver implements RegionCoprocessor, RegionObserver {
+  @Override
+  public Optional<RegionObserver> getRegionObserver() {
+    // Expose this instance as the RegionObserver implementation.
+    return Optional.of(this);
+  }
+  // ... existing observer methods ...
+}
+----
+
+////
+This would be a good place to link to a coprocessor migration guide
+////
+
+[[upgrade2.0.hfile3.only]]
+.HBase 2.0+ can no longer write HFile v2 files.
+
+HBase has simplified its internal HFile handling. As a result, we can no longer write HFile versions earlier than the default of version 3. Upgrading users should ensure that hfile.format.version is not set to 2 in hbase-site.xml before upgrading. Failing to do so will cause Region Server failure. HBase can still read HFiles written in the older version 2 format.
+
+[[upgrade2.0.pb.wal.only]]
+.HBase 2.0+ can no longer read Sequence File based WAL files.
+
+HBase can no longer read the deprecated WAL files written in the Apache Hadoop Sequence File format. The hbase.regionserver.hlog.reader.impl and hbase.regionserver.hlog.writer.impl configuration entries should be set to use the Protobuf based WAL reader / writer classes. This implementation has been the default since HBase 0.96, so legacy WAL files should not be a concern for most downstream users.
+
+A clean cluster shutdown should ensure there are no WAL files. If you are unsure of a given WAL file's format you can use the `hbase wal` command to parse files while the HBase cluster is offline. In HBase 2.0+, this command will not be able to read a Sequence File based WAL. For more information on the tool see the section <<hlog_tool.prettyprint>>.
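+
+For instance, a hypothetical offline invocation of the tool (the WAL path is illustrative only; substitute a real RegionServer directory and WAL file):
+
+----
+$ ./bin/hbase wal hdfs://namenode:8020/hbase/WALs/<RegionServer>/<WAL-file>
+----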
+
+[[upgrade2.0.filters]]
+.Change in behavior for filters
+
+The Filter ReturnCode NEXT_ROW has been redefined as skipping to the next row in the current column family, not to the next row across all families. This is more reasonable because ReturnCode is a concept at the store level, not the region level.
+
+[[upgrade2.0.shaded.client.preferred]]
+.Downstream HBase 2.0+ users should use the shaded client
+Downstream users are strongly urged to rely on the Maven coordinates org.apache.hbase:hbase-shaded-client for their runtime use. This artifact contains all the needed implementation details for talking to an HBase cluster while minimizing the number of third party dependencies exposed.
+
+Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g. o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public API. Those classes are included so that they can be altered to use the same relocated third party dependencies as the rest of the HBase client code. In the event that you need to *also* use Hadoop in your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.
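+
+A minimal Maven dependency sketch (the version shown is illustrative):
+
+[source,xml]
+----
+<dependency>
+  <groupId>org.apache.hbase</groupId>
+  <artifactId>hbase-shaded-client</artifactId>
+  <version>2.1.0</version>
+</dependency>
+----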
+
+[[upgrade2.0.mapreduce.module]]
+.Downstream HBase 2.0+ users of MapReduce must switch to new artifact
+Downstream users of HBase's integration for Apache Hadoop MapReduce must switch to relying on the org.apache.hbase:hbase-shaded-mapreduce module for their runtime use. Historically, downstream users relied on either the org.apache.hbase:hbase-server or org.apache.hbase:hbase-shaded-server artifacts for these classes. Both uses are no longer supported and in the vast majority of cases will fail at runtime.
+
+Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g. o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public API. Those classes are included so that they can be altered to use the same relocated third party dependencies as the rest of the HBase client code. In the event that you need to *also* use Hadoop in your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.
+
+[[upgrade2.0.dependencies]]
+.Significant changes to runtime classpath
+A number of internal dependencies for HBase were updated or removed from the runtime classpath. Downstream client users who do not follow the guidance in <<upgrade2.0.shaded.client.preferred>> will have to examine the set of dependencies Maven pulls in for impact. Downstream users of LimitedPrivate Coprocessor APIs will need to examine the runtime environment for impact. For details on our new handling of third party libraries that have historically been a problem with respect to harmonizing compatible runtime versions, see the reference guide section <<thirdparty>>.
+
+[[upgrade2.0.public.api]]
+.Multiple breaking changes to source and binary compatibility for client API
+The Java client API for HBase has a number of changes that break both source and binary compatibility; for details, see the Compatibility Check Report for the release you'll be upgrading to.
+
+[[upgrade2.0.tracing]]
+.Tracing implementation changes
+The backing implementation of HBase's tracing features was updated from Apache HTrace 3 to HTrace 4, which includes several breaking changes. While HTrace 3 and 4 can coexist in the same runtime, they will not integrate with each other, leading to disjoint trace information.
+
+The internal changes to HBase during this upgrade were sufficient for compilation, but it has not been confirmed that there are no regressions in tracing functionality. Please consider this feature experimental for the immediate future.
+
+If you previously relied on client side tracing integrated with HBase operations, it is recommended that you upgrade your usage to HTrace 4 as well.
+
+[[upgrade2.0.perf]]
+.Performance
+
+You will likely see a change in the performance profile on upgrade to hbase-2.0.0, given that the
+read and write paths have undergone significant change. On release, writes may be
+slower with reads about the same or much better, dependent on context. Be prepared
+to spend time re-tuning (See <<performance>>).
+Performance is also an area that is now under active review so look forward to
+improvement in coming releases (See
+link:https://issues.apache.org/jira/browse/HBASE-20188[HBASE-20188 TESTING Performance]).
+
+////
+This would be a good place to link to an appendix on migrating applications
+////
+
+[[upgrade2.0.coprocessors.upgrade]]
+==== Upgrading Coprocessors to 2.0
+Coprocessors have changed substantially in 2.0 ranging from top level design changes in class
+hierarchies to changed/removed methods, interfaces, etc.
+(Parent jira: link:https://issues.apache.org/jira/browse/HBASE-18169[HBASE-18169 Coprocessor fix
+and cleanup before 2.0.0 release]). Some of the reasons for such widespread changes:
+
+. Pass Interfaces instead of Implementations; e.g. TableDescriptor instead of HTableDescriptor and
+Region instead of HRegion (link:https://issues.apache.org/jira/browse/HBASE-18241[HBASE-18241]
+Change client.Table and client.Admin to not use HTableDescriptor).
+. Design refactor so implementers need to fill out less boilerplate and so we can do more
+compile-time checking (link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732])
+. Purge Protocol Buffers from Coprocessor API
+(link:https://issues.apache.org/jira/browse/HBASE-18859[HBASE-18859],
+link:https://issues.apache.org/jira/browse/HBASE-16769[HBASE-16769], etc)
+. Cut back on what we expose to Coprocessors removing hooks on internals that were too private to
+ expose (e.g. link:https://issues.apache.org/jira/browse/HBASE-18453[HBASE-18453]
+ CompactionRequest should not be exposed to user directly;
+ link:https://issues.apache.org/jira/browse/HBASE-18298[HBASE-18298] RegionServerServices Interface
+ cleanup for CP expose; etc)
+
+To use coprocessors in 2.0, they should be rebuilt against the new API; otherwise they will fail to
+load, and HBase processes will die.
+
+Suggested order of changes to upgrade the coprocessors:
+
+. Directly implement observer interfaces instead of extending Base*Observer classes. Change
+ `Foo extends BaseXXXObserver` to `Foo implements XXXObserver`.
+ (link:https://issues.apache.org/jira/browse/HBASE-17312[HBASE-17312]).
+. Adapt to the design change from Inheritance to Composition
+ (link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732]) by following
+ link:https://github.com/apache/hbase/blob/master/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc#migrating-existing-cps-to-new-design[this
+ example].
+. getTable() has been removed from the CoprocessorEnvironment; coprocessors should self-manage
+ Table instances.
+
+Some examples of writing coprocessors with the new API can be found in the hbase-examples module
+link:https://github.com/apache/hbase/tree/branch-2.0/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example[here].
+
+Lastly, if an API has been changed/removed in a way that breaks you irreparably, and if there's a
+good justification to add it back, bring it to our notice (dev@hbase.apache.org).
+
+[[upgrade2.0.rolling.upgrades]]
+==== Rolling Upgrade from 1.x to 2.x
+
+Rolling upgrades are currently an experimental feature.
+They have had limited testing. There are likely corner
+cases as yet uncovered in our
+limited experience, so you should be careful if you go this
+route. The stop/upgrade/start process described in the next section,
+<<upgrade2.0.process>>, is the safest route.
+
+That said, below is a prescription for a
+rolling upgrade of a 1.4 cluster.
+
+.Pre-Requirements
+* Upgrade to the latest 1.4.x release. Pre-1.4 releases may also work, but they are not tested, so please upgrade to 1.4.3+ before upgrading to 2.x unless you are an expert familiar with region assignment and crash processing. See the section <<upgrade1.4>> on how to upgrade to 1.4.x.
+* Make sure that zk-less assignment is enabled, i.e., set `hbase.assignment.usezk` to `false` (a _hbase-site.xml_ sketch follows this list). This is the most important thing. It allows the 1.x master to assign/unassign regions to/from 2.x region servers. See the release note section of link:https://issues.apache.org/jira/browse/HBASE-11059[HBASE-11059] on how to migrate from ZooKeeper-based assignment to zk-less assignment.
+* We have tested rolling upgrades from 1.4.3 to 2.1.0, but it should also work if you want to upgrade to 2.0.x.
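+
+For reference, a minimal sketch of the corresponding _hbase-site.xml_ entry (the property name is
+from the text above; verify the rest of your assignment configuration separately):
+
+[source,xml]
+----
+<property>
+  <name>hbase.assignment.usezk</name>
+  <value>false</value>
+</property>
+----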
+
+.Instructions
+. Unload a region server and upgrade it to 2.1.0. With link:https://issues.apache.org/jira/browse/HBASE-17931[HBASE-17931] in place, the meta region and the regions for other system tables will be moved to this region server immediately. If not, please move them manually to the new region server (a shell sketch follows this list). This is very important because
+** The schema of the meta region is hard-coded. If meta is on an old region server, the new region servers cannot access it, as it is missing some column families (for example, table state).
+** A client with a lower version can communicate with a server with a higher version, but not vice versa. If the meta region is on an old region server, the new region server would be using a higher-version client to talk to a lower-version server, which may introduce strange problems.
+. Rolling upgrade all other region servers.
+. Upgrade the masters.
+
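+A hypothetical shell session for moving the meta region by hand. The encoded name
+`1588230740` is the well-known name of the `hbase:meta` region; the target server name below is
+an assumption, so substitute a 2.x region server from your own cluster:
+
+----
+hbase> move '1588230740', 'new-rs.example.com,16020,1528882410000'
+----
+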
+It is OK if there are region server crashes during the rolling upgrade. The 1.x master can assign regions to both 1.x and 2.x region servers, and link:https://issues.apache.org/jira/browse/HBASE-19166[HBASE-19166] fixed a problem so that a 1.x region server can also read and split the WALs written by a 2.x region server.
+
+NOTE: Please read the <<Changes of Note!,Changes of Note!>> section carefully before rolling upgrading. Make sure that you do not use features removed in 2.0, for example the prefix-tree encoding or the old hfile format. Either could fail the upgrade and leave the cluster in an intermediate state that is hard to recover from.
+
+NOTE: If you have success running this prescription, please notify the dev list with a note on your experience and/or update the above with any deviations you may have taken so others going this route can benefit from your efforts.
+
+[[upgrade2.0.process]]
+==== Upgrade process from 1.x to 2.x
+
+To upgrade an existing HBase 1.x cluster, you should do the following (a command sketch follows this list):
+
+* Clean shutdown of existing 1.x cluster
+* Update coprocessors
+* Upgrade Master roles first
+* Upgrade RegionServers
+* (Eventually) Upgrade Clients
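+
+On a small cluster managed with the bundled scripts, the shutdown and restart steps might look
+like the sketch below (an assumption; most production deployments drive this through a cluster
+manager instead):
+
+[source,bash]
+----
+# On the 1.x cluster: clean shutdown
+$ bin/stop-hbase.sh
+# ...swap in the 2.x binaries and the rebuilt coprocessor jars...
+# Bring the cluster back up on 2.x; masters start before region servers
+$ bin/start-hbase.sh
+----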
+
+[[upgrade1.4]]
+=== Upgrading from pre-1.4 to 1.4+
+
+[[upgrade1.4.memory]]
+==== Region Server memory consumption changes
+
+Users upgrading from versions prior to HBase 1.4 should be aware that the estimates of heap usage by the memstore objects (KeyValue, object and array header sizes, etc.) have been made more accurate for heap sizes up to 32G (using CompressedOops), resulting in the estimates dropping by 10-50% in practice. This also results in fewer flushes and compactions because of "fatter" flushes. YMMV. As a result, the actual heap usage of the memstore before it is flushed may increase by up to 100%. If configured memory limits for the region server had been tuned based on observed usage, this change could result in worse GC behavior or even OutOfMemory errors. To disable this change, set the environment property (not hbase-site.xml) "hbase.memorylayout.use.unsafe" to false.
+
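+One way to pass the property, assuming your deployment forwards JVM system properties through
+`HBASE_OPTS` in _conf/hbase-env.sh_ (a sketch, not the only mechanism):
+
+[source,bash]
+----
+export HBASE_OPTS="$HBASE_OPTS -Dhbase.memorylayout.use.unsafe=false"
+----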
+
+[[upgrade1.4.replication]]
+==== Replication peer's TableCFs config
+
+Before 1.4, the table name could not include the namespace for a replication peer's TableCFs config. This was fixed by adding TableCFs to the ReplicationPeerConfig stored on ZooKeeper. So when upgrading to 1.4, you first have to update the original ReplicationPeerConfig data on ZooKeeper. There are four steps to upgrade when your cluster has a replication peer with a TableCFs config.
+
+* Disable the replication peer.
+* If the master has permission to write the replication peer znode, then rolling-update the master directly. If not, use the TableCFsUpdater tool to update the replication peer's config.
+[source,bash]
+----
+$ bin/hbase org.apache.hadoop.hbase.replication.master.TableCFsUpdater update
+----
+* Rolling-update the region servers.
+* Enable the replication peer.
+
+Notes:
+
+* You can't use an old client (before 1.4) to change the replication peer's config, because the client writes the config to ZooKeeper directly and an old client will miss the TableCFs config. Moreover, an old client writes the TableCFs config to the old tablecfs znode, which will not work for a new-version region server.
+
+[[upgrade1.4.rawscan]]
+==== Raw scan now ignores TTL
+
+Doing a raw scan will now return results that have expired according to TTL settings.
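+
+For example, a raw scan from the shell (the table name is hypothetical) may now include expired cells:
+
+----
+hbase> scan 'example_table', {RAW => true, VERSIONS => 10}
+----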
+
 [[upgrade1.0]]
 === Upgrading to 1.x
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/book.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index 0a21e7b..764d7b4 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -63,7 +63,6 @@ include::_chapters/security.adoc[]
 include::_chapters/architecture.adoc[]
 include::_chapters/hbase_mob.adoc[]
 include::_chapters/inmemory_compaction.adoc[]
-include::_chapters/backup_restore.adoc[]
 include::_chapters/hbase_apis.adoc[]
 include::_chapters/external_apis.adoc[]
 include::_chapters/thrift_filter_language.adoc[]
@@ -75,6 +74,8 @@ include::_chapters/ops_mgt.adoc[]
 include::_chapters/developer.adoc[]
 include::_chapters/unit_testing.adoc[]
 include::_chapters/protobuf.adoc[]
+include::_chapters/pv2.adoc[]
+include::_chapters/amv2.adoc[]
 include::_chapters/zookeeper.adoc[]
 include::_chapters/community.adoc[]
 
@@ -94,3 +95,4 @@ include::_chapters/asf.adoc[]
 include::_chapters/orca.adoc[]
 include::_chapters/tracing.adoc[]
 include::_chapters/rpc.adoc[]
+include::_chapters/appendix_hbase_incompatibilities.adoc[]

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/images
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/images b/src/main/asciidoc/images
index 06d04d0..02e8e94 120000
--- a/src/main/asciidoc/images
+++ b/src/main/asciidoc/images
@@ -1 +1 @@
-../site/resources/images
\ No newline at end of file
+../../site/resources/images/
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/acid-semantics.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/acid-semantics.adoc b/src/main/site/asciidoc/acid-semantics.adoc
deleted file mode 100644
index 0038901..0000000
--- a/src/main/site/asciidoc/acid-semantics.adoc
+++ /dev/null
@@ -1,118 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Apache HBase (TM) ACID Properties
-
-== About this Document
-
-Apache HBase (TM) is not an ACID compliant database. However, it does guarantee certain specific properties.
-
-This specification enumerates the ACID properties of HBase.
-
-== Definitions
-
-For the sake of common vocabulary, we define the following terms:
-Atomicity::
-  An operation is atomic if it either completes entirely or not at all.
-
-Consistency::
-  All actions cause the table to transition from one valid state directly to another (eg a row will not disappear during an update, etc).
-
-Isolation::
-  an operation is isolated if it appears to complete independently of any other concurrent transaction.
-
-Durability::
-  Any update that reports &quot;successful&quot; to the client will not be lost.
-
-Visibility::
-  An update is considered visible if any subsequent read will see the update as having been committed.
-
-
-The terms _must_ and _may_ are used as specified by link:[RFC 2119].
-
-In short, the word &quot;must&quot; implies that, if some case exists where the statement is not true, it is a bug. The word _may_ implies that, even if the guarantee is provided in a current release, users should not rely on it.
-
-== APIs to Consider
-- Read APIs
-* get
-* scan
-- Write APIs
-* put
-* batch put
-* delete
-- Combination (read-modify-write) APIs
-* incrementColumnValue
-* checkAndPut
-
-== Guarantees Provided
-
-.Atomicity
-.  All mutations are atomic within a row. Any put will either wholely succeed or wholely fail.footnoteref[Puts will either wholely succeed or wholely fail, provided that they are actually sent to the RegionServer.  If the writebuffer is used, Puts will not be sent until the writebuffer is filled or it is explicitly flushed.]
-.. An operation that returns a _success_ code has completely succeeded.
-.. An operation that returns a _failure_ code has completely failed.
-.. An operation that times out may have succeeded and may have failed. However, it will not have partially succeeded or failed.
-. This is true even if the mutation crosses multiple column families within a row.
-. APIs that mutate several rows will _not_ be atomic across the multiple rows. For example, a multiput that operates on rows 'a','b', and 'c' may return having mutated some but not all of the rows. In such cases, these APIs will return a list of success codes, each of which may be succeeded, failed, or timed out as described above.
-. The checkAndPut API happens atomically like the typical _compareAndSet (CAS)_ operation found in many hardware architectures.
-. The order of mutations is seen to happen in a well-defined order for each row, with no interleaving. For example, if one writer issues the mutation `a=1,b=1,c=1` and another writer issues the mutation `a=2,b=2,c=`, the row must either be `a=1,b=1,c=1` or `a=2,b=2,c=2` and must *not* be something like `a=1,b=2,c=1`. +
-NOTE:This is not true _across rows_ for multirow batch mutations.
-
-== Consistency and Isolation
-. All rows returned via any access API will consist of a complete row that existed at some point in the table's history.
-. This is true across column families - i.e a get of a full row that occurs concurrent with some mutations 1,2,3,4,5 will return a complete row that existed at some point in time between mutation i and i+1 for some i between 1 and 5.
-. The state of a row will only move forward through the history of edits to it.
-
-== Consistency of Scans
-A scan is *not* a consistent view of a table. Scans do *not* exhibit _snapshot isolation_.
-
-Rather, scans have the following properties:
-. Any row returned by the scan will be a consistent view (i.e. that version of the complete row existed at some point in time)footnoteref[consistency,A consistent view is not guaranteed intra-row scanning -- i.e. fetching a portion of a row in one RPC then going back to fetch another portion of the row in a subsequent RPC. Intra-row scanning happens when you set a limit on how many values to return per Scan#next (See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)"[Scan#setBatch(int)]).]
-. A scan will always reflect a view of the data _at least as new as_ the beginning of the scan. This satisfies the visibility guarantees enumerated below.
-.. For example, if client A writes data X and then communicates via a side channel to client B, any scans started by client B will contain data at least as new as X.
-.. A scan _must_ reflect all mutations committed prior to the construction of the scanner, and _may_ reflect some mutations committed subsequent to the construction of the scanner.
-.. Scans must include _all_ data written prior to the scan (except in the case where data is subsequently mutated, in which case it _may_ reflect the mutation)
-
-Those familiar with relational databases will recognize this isolation level as "read committed".
-
-NOTE: The guarantees listed above regarding scanner consistency are referring to "transaction commit time", not the "timestamp" field of each cell. That is to say, a scanner started at time _t_ may see edits with a timestamp value greater than _t_, if those edits were committed with a "forward dated" timestamp before the scanner was constructed.
-
-== Visibility
-
-. When a client receives a &quot;success&quot; response for any mutation, that mutation is immediately visible to both that client and any client with whom it later communicates through side channels.footnoteref[consistency]
-. A row must never exhibit so-called "time-travel" properties. That is to say, if a series of mutations moves a row sequentially through a series of states, any sequence of concurrent reads will return a subsequence of those states. +
-For example, if a row's cells are mutated using the `incrementColumnValue` API, a client must never see the value of any cell decrease. +
-This is true regardless of which read API is used to read back the mutation.
-. Any version of a cell that has been returned to a read operation is guaranteed to be durably stored.
-
-== Durability
-. All visible data is also durable data. That is to say, a read will never return data that has not been made durable on disk.footnoteref[durability,In the context of Apache HBase, _durably on disk_; implies an `hflush()` call on the transaction log. This does not actually imply an `fsync()` to magnetic media, but rather just that the data has been written to the OS cache on all replicas of the log. In the case of a full datacenter power loss, it is possible that the edits are not truly durable.]
-. Any operation that returns a &quot;success&quot; code (eg does not throw an exception) will be made durable.footnoteref[durability]
-. Any operation that returns a &quot;failure&quot; code will not be made durable (subject to the Atomicity guarantees above).
-. All reasonable failure scenarios will not affect any of the guarantees of this document.
-
-== Tunability
-
-All of the above guarantees must be possible within Apache HBase. For users who would like to trade off some guarantees for performance, HBase may offer several tuning options. For example:
-
-* Visibility may be tuned on a per-read basis to allow stale reads or time travel.
-* Durability may be tuned to only flush data to disk on a periodic basis.
-
-== More Information
-
-For more information, see the link:book.html#client[client architecture] and  link:book.html#datamodel[data model] sections in the Apache HBase Reference Guide. 

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/bulk-loads.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/bulk-loads.adoc b/src/main/site/asciidoc/bulk-loads.adoc
deleted file mode 100644
index fc320d8..0000000
--- a/src/main/site/asciidoc/bulk-loads.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Bulk Loads in Apache HBase (TM)
-
-This page has been retired.  The contents have been moved to the link:book.html#arch.bulk.load[Bulk Loading] section in the Reference Guide.
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/cygwin.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/cygwin.adoc b/src/main/site/asciidoc/cygwin.adoc
deleted file mode 100644
index 11c4df4..0000000
--- a/src/main/site/asciidoc/cygwin.adoc
+++ /dev/null
@@ -1,197 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-
-== Installing Apache HBase (TM) on Windows using Cygwin
-
-== Introduction
-
-link:http://hbase.apache.org[Apache HBase (TM)] is a distributed, column-oriented store, modeled after Google's link:http://research.google.com/archive/bigtable.html[BigTable]. Apache HBase is built on top of link:http://hadoop.apache.org[Hadoop] for its link:http://hadoop.apache.org/mapreduce[MapReduce] link:http://hadoop.apache.org/hdfs[distributed file system] implementations. All these projects are open-source and part of the link:http://www.apache.org[Apache Software Foundation].
-
-== Purpose
-
-This document explains the *intricacies* of running Apache HBase on Windows using Cygwin* as an all-in-one single-node installation for testing and development. The HBase link:http://hbase.apache.org/apidocs/overview-summary.html#overview_description[Overview] and link:book.html#getting_started[QuickStart] guides on the other hand go a long way in explaning how to setup link:http://hadoop.apache.org/hbase[HBase] in more complex deployment scenarios.
-
-== Installation
-
-For running Apache HBase on Windows, 3 technologies are required: 
-* Java
-* Cygwin
-* SSH 
-
-The following paragraphs detail the installation of each of the aforementioned technologies.
-
-=== Java
-
-HBase depends on the link:http://java.sun.com/javase/6/[Java Platform, Standard Edition, 6 Release]. So the target system has to be provided with at least the Java Runtime Environment (JRE); however if the system will also be used for development, the Jave Development Kit (JDK) is preferred. You can download the latest versions for both from link:http://java.sun.com/javase/downloads/index.jsp[Sun's download page]. Installation is a simple GUI wizard that guides you through the process.
-
-=== Cygwin
-
-Cygwin is probably the oddest technology in this solution stack. It provides a dynamic link library that emulates most of a *nix environment on Windows. On top of that a whole bunch of the most common *nix tools are supplied. Combined, the DLL with the tools form a very *nix-alike environment on Windows.
-
-For installation, Cygwin provides the link:http://cygwin.com/setup.exe[`setup.exe` utility] that tracks the versions of all installed components on the target system and provides the mechanism for installing or updating everything from the mirror sites of Cygwin.
-
-To support installation, the `setup.exe` utility uses 2 directories on the target system. The *Root* directory for Cygwin (defaults to _C:\cygwin)_ which will become _/_ within the eventual Cygwin installation; and the *Local Package* directory (e.g. _C:\cygsetup_ that is the cache where `setup.exe`stores the packages before they are installed. The cache must not be the same folder as the Cygwin root.
-
-Perform following steps to install Cygwin, which are elaboratly detailed in the link:http://cygwin.com/cygwin-ug-net/setup-net.html[2nd chapter] of the link:http://cygwin.com/cygwin-ug-net/cygwin-ug-net.html[Cygwin User's Guide].
-
-. Make sure you have `Administrator` privileges on the target system.
-. Choose and create you Root and *Local Package* directories. A good suggestion is to use `C:\cygwin\root` and `C:\cygwin\setup` folders.
-. Download the `setup.exe` utility and save it to the *Local Package* directory. Run the `setup.exe` utility.
-.. Choose  the `Install from Internet` option.
-.. Choose your *Root* and *Local Package* folders.
-.. Select an appropriate mirror.
-.. Don't select any additional packages yet, as we only want to install Cygwin for now.
-.. Wait for download and install.
-.. Finish the installation.
-. Optionally, you can now also add a shortcut to your Start menu pointing to the `setup.exe` utility in the *Local Package *folder.
-. Add `CYGWIN_HOME` system-wide environment variable that points to your *Root* directory.
-. Add `%CYGWIN_HOME%\bin` to the end of your `PATH` environment variable.
-. Reboot the sytem after making changes to the environment variables otherwise the OS will not be able to find the Cygwin utilities.
-. Test your installation by running your freshly created shortcuts or the `Cygwin.bat` command in the *Root* folder. You should end up in a terminal window that is running a link:http://www.gnu.org/software/bash/manual/bashref.html[Bash shell]. Test the shell by issuing following commands:
-.. `cd /` should take you to thr *Root* directory in Cygwin.
-.. The `LS` commands that should list all files and folders in the current directory.
-.. Use the `exit` command to end the terminal.
-. When needed, to *uninstall* Cygwin you can simply delete the *Root* and *Local Package* directory, and the *shortcuts* that were created during installation.
-
-=== SSH
-
-HBase (and Hadoop) rely on link:http://nl.wikipedia.org/wiki/Secure_Shell[*SSH*] for interprocess/-node *communication* and launching* remote commands*. SSH will be provisioned on the target system via Cygwin, which supports running Cygwin programs as *Windows services*!
-
-. Rerun the `*setup.exe*`* utility*.
-. Leave all parameters as is, skipping through the wizard using the `Next` button until the `Select Packages` panel is shown.
-. Maximize the window and click the `View` button to toggle to the list view, which is ordered alfabetically on `Package`, making it easier to find the packages we'll need.
-. Select the following packages by clicking the status word (normally `Skip`) so it's marked for installation. Use the `Next `button to download and install the packages.
-.. `OpenSSH`
-.. `tcp_wrappers`
-.. `diffutils`
-.. `zlib`
-. Wait for the install to complete and finish the installation.
-
-=== HBase
-
-Download the *latest release* of Apache HBase from link:http://www.apache.org/dyn/closer.cgi/hbase/. As the Apache HBase distributable is just a zipped archive, installation is as simple as unpacking the archive so it ends up in its final *installation* directory. Notice that HBase has to be installed in Cygwin and a good directory suggestion is to use `/usr/local/` (or [`*Root* directory]\usr\local` in Windows slang). You should end up with a `/usr/local/hbase-_versi` installation in Cygwin.
-
-This finishes installation. We go on with the configuration.
-
-== Configuration
-
-There are 3 parts left to configure: *Java, SSH and HBase* itself. Following paragraphs explain eacht topic in detail.
-
-=== Java
-
-One important thing to remember in shell scripting in general (i.e. *nix and Windows) is that managing, manipulating and assembling path names that contains spaces can be very hard, due to the need to escape and quote those characters and strings. So we try to stay away from spaces in path names. *nix environments can help us out here very easily by using *symbolic links*.
-
-. Create a link in `/usr/local` to the Java home directory by using the following command and substituting the name of your chosen Java environment: +
-----
-LN -s /cygdrive/c/Program\ Files/Java/*_jre name_*/usr/local/*_jre name_*
-----
-. Test your java installation by changing directories to your Java folder `CD /usr/local/_jre name_` and issueing the command `./bin/java -version`. This should output your version of the chosen JRE.
-
-=== SSH 
-
-Configuring *SSH *is quite elaborate, but primarily a question of launching it by default as a* Windows service*.
-
-. On Windows Vista and above make sure you run the Cygwin shell with *elevated privileges*, by right-clicking on the shortcut an using `Run as Administrator`.
-. First of all, we have to make sure the *rights on some crucial files* are correct. Use the commands underneath. You can verify all rights by using the `LS -L` command on the different files. Also, notice the auto-completion feature in the shell using `TAB` is extremely handy in these situations.
-.. `chmod +r /etc/passwd` to make the passwords file readable for all
-.. `chmod u+w /etc/passwd` to make the passwords file writable for the owner
-.. `chmod +r /etc/group` to make the groups file readable for all
-.. `chmod u+w /etc/group` to make the groups file writable for the owner
-.. `chmod 755 /var` to make the var folder writable to owner and readable and executable to all
-. Edit the */etc/hosts.allow* file using your favorite editor (why not VI in the shell!) and make sure the following two lines are in there before the `PARANOID` line: +
-----
-ALL : localhost 127.0.0.1/32 : allow
-ALL : [::1]/128 : allow
-----
-. Next we have to *configure SSH* by using the script `ssh-host-config`.
-.. If this script asks to overwrite an existing `/etc/ssh_config`, answer `yes`.
-.. If this script asks to overwrite an existing `/etc/sshd_config`, answer `yes`.
-.. If this script asks to use privilege separation, answer `yes`.
-.. If this script asks to install `sshd` as a service, answer `yes`. Make sure you started your shell as Adminstrator!
-.. If this script asks for the CYGWIN value, just `enter` as the default is `ntsec`.
-.. If this script asks to create the `sshd` account, answer `yes`.
-.. If this script asks to use a different user name as service account, answer `no` as the default will suffice.
-.. If this script asks to create the `cyg_server` account, answer `yes`. Enter a password for the account.
-. *Start the SSH service* using `net start sshd` or `cygrunsrv  --start  sshd`. Notice that `cygrunsrv` is the utility that make the process run as a Windows service. Confirm that you see a message stating that `the CYGWIN sshd service  was started succesfully.`
-. Harmonize Windows and Cygwin* user account* by using the commands: +
-----
-mkpasswd -cl > /etc/passwd
-mkgroup --local > /etc/group
-----
-. Test *the installation of SSH:
-.. Open a new Cygwin terminal.
-.. Use the command `whoami` to verify your userID.
-.. Issue an `ssh localhost` to connect to the system itself.
-.. Answer `yes` when presented with the server's fingerprint.
-.. Issue your password when prompted.
-.. Test a few commands in the remote session
-.. The `exit` command should take you back to your first shell in Cygwin.
-. `Exit` should terminate the Cygwin shell.
-
-=== HBase
-
-If all previous configurations are working properly, we just need some tinkering at the *HBase config* files to properly resolve on Windows/Cygwin. All files and paths referenced here start from the HBase `[*installation* directory]` as working directory.
-
-. HBase uses the `./conf/*hbase-env.sh*` to configure its dependencies on the runtime environment. Copy and uncomment following lines just underneath their original, change them to fit your environemnt. They should read something like: +
-----
-export JAVA_HOME=/usr/local/_jre name_
-export HBASE_IDENT_STRING=$HOSTNAME
-----
-. HBase uses the _./conf/`*hbase-default.xml*`_ file for configuration. Some properties do not resolve to existing directories because the JVM runs on Windows. This is the major issue to keep in mind when working with Cygwin: within the shell all paths are *nix-alike, hence relative to the root `/`. However, every parameter that is to be consumed within the windows processes themself, need to be Windows settings, hence `C:\`-alike. Change following propeties in the configuration file, adjusting paths where necessary to conform with your own installation:
-.. `hbase.rootdir` must read e.g. `file:///C:/cygwin/root/tmp/hbase/data`
-.. `hbase.tmp.dir` must read `C:/cygwin/root/tmp/hbase/tmp`
-.. `hbase.zookeeper.quorum` must read `127.0.0.1` because for some reason `localhost` doesn't seem to resolve properly on Cygwin.
-. Make sure the configured `hbase.rootdir` and `hbase.tmp.dir` *directories exist* and have the proper* rights* set up e.g. by issuing a `chmod 777` on them.
-
-== Testing
-
-This should conclude the installation and configuration of Apache HBase on Windows using Cygwin. So it's time *to test it*.
-
-. Start a Cygwin* terminal*, if you haven't already.
-. Change directory to HBase *installation* using `CD /usr/local/hbase-_version_`, preferably using auto-completion.
-. *Start HBase* using the command `./bin/start-hbase.sh`
-.. When prompted to accept the SSH fingerprint, answer `yes`.
-.. When prompted, provide your password. Maybe multiple times.
-.. When the command completes, the HBase server should have started.
-.. However, to be absolutely certain, check the logs in the `./logs` directory for any exceptions.
-. Next we *start the HBase shell* using the command `./bin/hbase shell`
-. We run some simple *test commands*
-.. Create a simple table using command `create 'test', 'data'`
-.. Verify the table exists using the command `list`
-.. Insert data into the table using e.g. +
-----
-put 'test', 'row1', 'data:1', 'value1'
-put 'test', 'row2', 'data:2', 'value2'
-put 'test', 'row3', 'data:3', 'value3'
-----
-.. List all rows in the table using the command `scan 'test'` that should list all the rows previously inserted. Notice how 3 new columns where added without changing the schema!
-.. Finally we get rid of the table by issuing `disable 'test'` followed by `drop 'test'` and verified by `list` which should give an empty listing.
-. *Leave the shell* by `exit`
-. To *stop the HBase server* issue the `./bin/stop-hbase.sh` command. And wait for it to complete!!! Killing the process might corrupt your data on disk.
-. In case of *problems*,
-.. Verify the HBase logs in the `./logs` directory.
-.. Try to fix the problem
-.. Get help on the forums or IRC (`#hbase@freenode.net`). People are very active and keen to help out!
-.. Stop and retest the server.
-
-== Conclusion
-
-Now your *HBase *server is running, *start coding* and build that next killer app on this particular, but scalable datastore!
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/export_control.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/export_control.adoc b/src/main/site/asciidoc/export_control.adoc
deleted file mode 100644
index 1bbefb5..0000000
--- a/src/main/site/asciidoc/export_control.adoc
+++ /dev/null
@@ -1,44 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-
-= Export Control
-
-This distribution uses or includes cryptographic software. The country in
-which you currently reside may have restrictions on the import, possession,
-use, and/or re-export to another country, of encryption software. BEFORE
-using any encryption software, please check your country's laws, regulations
-and policies concerning the import, possession, or use, and re-export of
-encryption software, to see if this is permitted. See the
-link:http://www.wassenaar.org/[Wassenaar Arrangement] for more
-information.
-
-The U.S. Government Department of Commerce, Bureau of Industry and Security 
-(BIS), has classified this software as Export Commodity Control Number (ECCN) 
-5D002.C.1, which includes information security software using or performing 
-cryptographic functions with asymmetric algorithms. The form and manner of this
-Apache Software Foundation distribution makes it eligible for export under the 
-License Exception ENC Technology Software Unrestricted (TSU) exception (see the
-BIS Export Administration Regulations, Section 740.13) for both object code and
-source code.
-
-Apache HBase uses the built-in java cryptography libraries. See Oracle's
-information regarding
-link:http://www.oracle.com/us/products/export/export-regulations-345813.html[Java cryptographic export regulations]
-for more details.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/index.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/index.adoc b/src/main/site/asciidoc/index.adoc
deleted file mode 100644
index 9b31c49..0000000
--- a/src/main/site/asciidoc/index.adoc
+++ /dev/null
@@ -1,75 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Apache HBase&#153; Home
-
-.Welcome to Apache HBase(TM)
-link:http://www.apache.org/[Apache HBase(TM)] is the link:http://hadoop.apache.org[Hadoop] database, a distributed, scalable, big data store.
-
-.When Would I Use Apache HBase?
-Use Apache HBase when you need random, realtime read/write access to your Big Data. +
-This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
-
-Apache HBase is an open-source, distributed, versioned, non-relational database modeled after Google's link:http://research.google.com/archive/bigtable.html[Bigtable: A Distributed Storage System for Structured Data] by Chang et al.
-
-Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
-
-.Features
-- Linear and modular scalability.
-- Strictly consistent reads and writes.
-- Automatic and configurable sharding of tables
-- Automatic failover support between RegionServers.
-- Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables.
-- Easy to use Java API for client access.
-- Block cache and Bloom Filters for real-time queries.
-- Query predicate push down via server side Filters
-- Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options
-- Extensible jruby-based (JIRB) shell
-- Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX
-
-.Where Can I Get More Information?
-See the link:book.html#arch.overview[Architecture Overview], the link:book.html#faq[FAQ] and the other documentation links at the top!
-
-.Export Control
-The HBase distribution includes cryptographic software. See the link:export_control.html[export control notice].
-
-== News
-Feb 17, 2015:: link:http://www.meetup.com/hbaseusergroup/events/219260093/[HBase meetup around Strata+Hadoop World] in San Jose
-
-January 15th, 2015:: link:http://www.meetup.com/hbaseusergroup/events/218744798/[HBase meetup @ AppDynamics] in San Francisco
-
-November 20th, 2014::  link:http://www.meetup.com/hbaseusergroup/events/205219992/[HBase meetup @ WANdisco] in San Ramon
-
-October 27th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/207386102/[HBase Meetup @ Apple] in Cupertino
-
-October 15th, 2014:: link:http://www.meetup.com/HBase-NYC/events/207655552[HBase Meetup @ Google] on the night before Strata/HW in NYC
-
-September 25th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/203173692/[HBase Meetup @ Continuuity] in Palo Alto
-
-August 28th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/197773762/[HBase Meetup @ Sift Science] in San Francisco
-
-July 17th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/190994082/[HBase Meetup @ HP] in Sunnyvale
-
-June 5th, 2014:: link:http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179081342/[HBase BOF at Hadoop Summit], San Jose Convention Center
-
-May 5th, 2014:: link:http://www.hbasecon.com[HBaseCon2014] at the Hilton San Francisco on Union Square
-
-March 12th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/160757912/[HBase Meetup @ Ancestry.com] in San Francisco
-
-View link:old_news.html[Old News]


[09/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/configuration.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/configuration.adoc b/src/main/asciidoc/_chapters/configuration.adoc
index 66fe5dd..174aa80 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -29,7 +29,7 @@
 
 This chapter expands upon the <<getting_started>> chapter to further explain configuration of Apache HBase.
 Please read this chapter carefully, especially the <<basic.prerequisites,Basic Prerequisites>>
-to ensure that your HBase testing and deployment goes smoothly, and prevent data loss.
+to ensure that your HBase testing and deployment goes smoothly.
 Familiarize yourself with <<hbase_supported_tested_definitions>> as well.
 
 == Configuration Files
@@ -92,24 +92,42 @@ This section lists required services and some required system configuration.
 
 [[java]]
 .Java
-[cols="1,1,4", options="header"]
+
+The following table summarizes the recommendations of the HBase community with respect to deploying on various Java versions. An entry of "yes" is meant to indicate a base level of testing and a willingness to help diagnose and address issues you might run into. Similarly, an entry of "no" or "Not Supported" generally means that should you run into an issue the community is likely to ask you to change the Java environment before proceeding to help. In some cases, specific guidance on limitations (e.g. whether compiling or unit tests work, specific operational issues, etc.) will also be noted.
+
+.Long Term Support JDKs are recommended
+[TIP]
+====
+HBase recommends downstream users rely on JDK releases that are marked as Long Term Supported (LTS) either from the OpenJDK project or vendors. As of March 2018 that means Java 8 is the only applicable version and that the next likely version to see testing will be Java 11 near Q3 2018.
+====
+
+.Java support by release line
+[cols="1,1,1,1,1", options="header"]
 |===
 |HBase Version
 |JDK 7
 |JDK 8
+|JDK 9
+|JDK 10
 
 |2.0
 |link:http://search-hadoop.com/m/YGbbsPxZ723m3as[Not Supported]
 |yes
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
 
 |1.3
 |yes
 |yes
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
 
 
 |1.2
 |yes
 |yes
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
+|link:https://issues.apache.org/jira/browse/HBASE-20264[Not Supported]
 
 |===
 
@@ -146,9 +164,9 @@ It is recommended to raise the ulimit to at least 10,000, but more likely 10,240
 +
 For example, assuming that a schema had 3 ColumnFamilies per region with an average of 3 StoreFiles per ColumnFamily, and there are 100 regions per RegionServer, the JVM will open `3 * 3 * 100 = 900` file descriptors, not counting open JAR files, configuration files, and others. Opening a file does not take many resources, and the risk of allowing a user to open too many files is minimal.
 +
-Another related setting is the number of processes a user is allowed to run at once. In Linux and Unix, the number of processes is set using the `ulimit -u` command. This should not be confused with the `nproc` command, which controls the number of CPUs available to a given user. Under load, a `ulimit -u` that is too low can cause OutOfMemoryError exceptions. See Jack Levin's major HDFS issues thread on the hbase-users mailing list, from 2011.
+Another related setting is the number of processes a user is allowed to run at once. In Linux and Unix, the number of processes is set using the `ulimit -u` command. This should not be confused with the `nproc` command, which reports the number of processing units available to a given user. Under load, a `ulimit -u` that is too low can cause OutOfMemoryError exceptions.
 +
-Configuring the maximum number of file descriptors and processes for the user who is running the HBase process is an operating system configuration, rather than an HBase configuration. It is also important to be sure that the settings are changed for the user that actually runs HBase. To see which user started HBase, and that user's ulimit configuration, look at the first line of the HBase log for that instance. A useful read setting config on your hadoop cluster is Aaron Kimball's Configuration Parameters: What can you just ignore?
+Configuring the maximum number of file descriptors and processes for the user who is running the HBase process is an operating system configuration, rather than an HBase configuration. It is also important to be sure that the settings are changed for the user that actually runs HBase. To see which user started HBase, and that user's ulimit configuration, look at the first line of the HBase log for that instance.
 +
 .`ulimit` Settings on Ubuntu
 ====
@@ -183,7 +201,8 @@ See link:https://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Suppo
 .Hadoop 2.x is recommended.
 [TIP]
 ====
-Hadoop 2.x is faster and includes features, such as short-circuit reads, which will help improve your HBase random read profile.
+Hadoop 2.x is faster and includes features, such as short-circuit reads (see <<perf.hdfs.configs.localread>>),
+which will help improve your HBase random read profile.
 Hadoop 2.x also includes important bug fixes that will improve your overall HBase experience. HBase does not support running with
 earlier versions of Hadoop. See the table below for requirements specific to different HBase versions.
 
@@ -211,7 +230,9 @@ Use the following legend to interpret this table:
 |Hadoop-2.8.2 | NT | NT | NT | NT | NT
 |Hadoop-2.8.3+ | NT | NT | NT | S | S
 |Hadoop-2.9.0 | X | X | X | X | X
-|Hadoop-3.0.0 | NT | NT | NT | NT | NT
+|Hadoop-2.9.1+ | NT | NT | NT | NT | NT
+|Hadoop-3.0.x | X | X | X | X | X
+|Hadoop-3.1.0 | X | X | X | X | X
 |===
 
 .Hadoop Pre-2.6.1 and JDK 1.8 Kerberos
@@ -232,27 +253,35 @@ HBase on top of an HDFS Encryption Zone. Failure to do so will result in cluster
 data loss. This patch is present in Apache Hadoop releases 2.6.1+.
 ====
 
-.Hadoop 2.7.x
+.Hadoop 2.y.0 Releases
 [TIP]
 ====
-Hadoop version 2.7.0 is not tested or supported as the Hadoop PMC has explicitly labeled that release as not being stable. (reference the link:https://s.apache.org/hadoop-2.7.0-announcement[announcement of Apache Hadoop 2.7.0].)
+Starting around the time of Hadoop version 2.7.0, the Hadoop PMC got into the habit of calling out new minor releases on their major version 2 release line as not stable / production ready. As such, HBase expressly advises downstream users to avoid running on top of these releases. Note that additionally the 2.8.1 release was given the same caveat by the Hadoop PMC. For reference, see the release announcements for link:https://s.apache.org/hadoop-2.7.0-announcement[Apache Hadoop 2.7.0], link:https://s.apache.org/hadoop-2.8.0-announcement[Apache Hadoop 2.8.0], link:https://s.apache.org/hadoop-2.8.1-announcement[Apache Hadoop 2.8.1], and link:https://s.apache.org/hadoop-2.9.0-announcement[Apache Hadoop 2.9.0].
 ====
 
-.Hadoop 2.8.x
+.Hadoop 3.0.x Releases
 [TIP]
 ====
-Hadoop version 2.8.0 and 2.8.1 are not tested or supported as the Hadoop PMC has explicitly labeled that releases as not being stable. (reference the link:https://s.apache.org/hadoop-2.8.0-announcement[announcement of Apache Hadoop 2.8.0] and link:https://s.apache.org/hadoop-2.8.1-announcement[announcement of Apache Hadoop 2.8.1].)
+Hadoop distributions that include the Application Timeline Service feature may cause unexpected versions of HBase classes to be present in the application classpath. Users planning on running MapReduce applications with HBase should make sure that link:https://issues.apache.org/jira/browse/YARN-7190[YARN-7190] is present in their YARN service (currently fixed in 2.9.1+ and 3.1.0+).
+====
+
+.Hadoop 3.1.0 Release
+[TIP]
+====
+The Hadoop PMC called out the 3.1.0 release as not stable / production ready. As such, HBase expressly advises downstream users to avoid running on top of this release. For reference, see the link:https://s.apache.org/hadoop-3.1.0-announcement[release announcement for Hadoop 3.1.0].
 ====
 
 .Replace the Hadoop Bundled With HBase!
 [NOTE]
 ====
-Because HBase depends on Hadoop, it bundles an instance of the Hadoop jar under its _lib_ directory.
-The bundled jar is ONLY for use in standalone mode.
+Because HBase depends on Hadoop, it bundles Hadoop jars under its _lib_ directory.
+The bundled jars are ONLY for use in standalone mode.
 In distributed mode, it is _critical_ that the version of Hadoop that is out on your cluster match what is under HBase.
-Replace the hadoop jar found in the HBase lib directory with the hadoop jar you are running on your cluster to avoid version mismatch issues.
-Make sure you replace the jar in HBase across your whole cluster.
-Hadoop version mismatch issues have various manifestations but often all look like its hung.
+Replace the hadoop jars found in the HBase lib directory with the equivalent hadoop jars from the version you are running
+on your cluster to avoid version mismatch issues.
+Make sure you replace the jars under HBase across your whole cluster.
+Hadoop version mismatch issues have various manifestations. Check for mismatch if
+HBase appears hung.
 ====
 
 [[dfs.datanode.max.transfer.threads]]
@@ -537,7 +566,6 @@ If you are configuring an IDE to run an HBase client, you should include the _co
 For Java applications using Maven, including the hbase-shaded-client module is the recommended dependency when connecting to a cluster:
 [source,xml]
 ----
-
 <dependency>
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase-shaded-client</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/datamodel.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/datamodel.adoc b/src/main/asciidoc/_chapters/datamodel.adoc
index 3674566..ba4961a 100644
--- a/src/main/asciidoc/_chapters/datamodel.adoc
+++ b/src/main/asciidoc/_chapters/datamodel.adoc
@@ -343,6 +343,7 @@ In particular:
 Below we describe how the version dimension in HBase currently works.
 See link:https://issues.apache.org/jira/browse/HBASE-2406[HBASE-2406] for discussion of HBase versions. link:https://www.ngdata.com/bending-time-in-hbase/[Bending time in HBase] makes for a good read on the version, or time, dimension in HBase.
 It has more detail on versioning than is provided here.
+
 As of this writing, the limitation _Overwriting values at existing timestamps_ mentioned in the article no longer holds in HBase.
 This section is basically a synopsis of this article by Bruno Dumon.
 
@@ -503,8 +504,42 @@ Otherwise, a delete marker with a timestamp in the future is kept until the majo
 NOTE: This behavior represents a fix for an unexpected change that was introduced in HBase 0.94, and was fixed in link:https://issues.apache.org/jira/browse/HBASE-10118[HBASE-10118].
 The change has been backported to HBase 0.94 and newer branches.
 
+[[new.version.behavior]]
+=== Optional New Version and Delete behavior in HBase-2.0.0
+
+In `hbase-2.0.0`, the operator can specify an alternate version and
+delete treatment by setting the column descriptor property
+`NEW_VERSION_BEHAVIOR` to true (To set a property on a column family
+descriptor, you must first disable the table and then alter the
+column family descriptor; see <<cf.keep.deleted>> for an example
+of editing an attribute on a column family descriptor).
+
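+A hypothetical shell session setting the attribute (the table and family names are assumptions):
+
+----
+hbase> disable 'example_table'
+hbase> alter 'example_table', {NAME => 'cf', NEW_VERSION_BEHAVIOR => true}
+hbase> enable 'example_table'
+----
+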
+The 'new version behavior' undoes the limitations listed below
+whereby a `Delete` ALWAYS overshadows a `Put` at the same
+location -- i.e. same row, column family, qualifier and timestamp
+-- regardless of which arrived first. Version accounting is also
+changed, as deleted versions are counted toward the total version count.
+This is done to ensure results are not changed should a major
+compaction intercede. See `HBASE-15968` and linked issues for
+discussion.
+
+Running with this new configuration currently has a cost: we factor
+the Cell MVCC into every compare, so we burn more CPU. The slowdown
+will vary. In testing we've seen between 0% and 25%
+degradation.
+
+If replicating, it is advised that you run with the new
+serial replication feature (See `HBASE-9465`; the serial
+replication feature did NOT make it into `hbase-2.0.0` but
+should arrive in a subsequent hbase-2.x release) as now
+the order in which Mutations arrive is a factor.
+
+
 === Current Limitations
 
+The below limitations are addressed in hbase-2.0.0. See
+the section above, <<new.version.behavior>>.
+
 ==== Deletes mask Puts
 
 Deletes mask puts, even puts that happened after the delete was entered.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index 11ef4ba..6d0a7d1 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -773,15 +773,15 @@ To do this, log in to Apache's Nexus at link:https://repository.apache.org[repos
 Find your artifacts in the staging repository. Click on 'Staging Repositories' and look for a new one ending in "hbase" with a status of 'Open', select it.
 Use the tree view to expand the list of repository contents and inspect if the artifacts you expect are present. Check the POMs.
 As long as the staging repo is open you can re-upload if something is missing or built incorrectly.
-
++
 If something is seriously wrong and you would like to back out the upload, you can use the 'Drop' button to drop and delete the staging repository.
 Sometimes the upload fails in the middle. This is another reason you might have to 'Drop' the upload from the staging repository.
-
++
 If it checks out, close the repo using the 'Close' button. The repository must be closed before a public URL to it becomes available. It may take a few minutes for the repository to close. Once complete you'll see a public URL to the repository in the Nexus UI. You may also receive an email with the URL. Provide the URL to the temporary staging repository in the email that announces the release candidate.
 (Folks will need to add this repo URL to their local poms or to their local _settings.xml_ file to pull the published release candidate artifacts.)
-
++
 When the release vote concludes successfully, return here and click the 'Release' button to release the artifacts to central. The release process will automatically drop and delete the staging repository.
-
++
 .hbase-downstreamer
 [NOTE]
 ====
@@ -792,15 +792,18 @@ Make sure you are pulling from the repository when tests run and that you are no
 ====
 
 See link:https://www.apache.org/dev/publishing-maven-artifacts.html[Publishing Maven Artifacts] for some pointers on this maven staging process.
-
++
 If the HBase version ends in `-SNAPSHOT`, the artifacts go elsewhere.
 They are put into the Apache snapshots repository directly and are immediately available.
 Making a SNAPSHOT release, this is what you want to happen.
-
-At this stage, you have two tarballs in your 'build output directory' and a set of artifacts in a staging area of the maven repository, in the 'closed' state.
-
++
+At this stage, you have two tarballs in your 'build output directory' and a set of artifacts
+in a staging area of the maven repository, in the 'closed' state.
 Next sign, fingerprint and then 'stage' your release candiate build output directory via svnpubsub by committing
-your directory to link:https://dist.apache.org/repos/dist/dev/hbase/[The 'dev' distribution directory] (See comments on link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please delete old releases from mirroring system] but in essence it is an svn checkout of https://dist.apache.org/repos/dist/dev/hbase -- releases are at https://dist.apache.org/repos/dist/release/hbase). In the _version directory_ run the following commands:
+your directory to link:https://dist.apache.org/repos/dist/dev/hbase/[The dev distribution directory]
+(See comments on link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please delete old releases from mirroring system]
+but in essence it is an svn checkout of link:https://dist.apache.org/repos/dist/dev/hbase[dev/hbase] -- releases are at
+link:https://dist.apache.org/repos/dist/release/hbase[release/hbase]). In the _version directory_ run the following commands:
 
 [source,bourne]
 ----
@@ -867,6 +870,50 @@ See link:http://search-hadoop.com/m/DHED4dhFaU[HBase, mail # dev - On
                 recent discussion clarifying ASF release policy].
 for how we arrived at this process.
 
+[[hbase.release.announcement]]
+== Announcing Releases
+
+Once an RC has passed successfully and the needed artifacts have been staged for distribution, you'll need to let everyone know about our shiny new release. It's not a requirement, but to make things easier for release managers we have a template you can start with. Be sure you replace \_version_ and other markers with the relevant version numbers. You should manually verify all links before sending.
+
+[source,email]
+----
+The HBase team is happy to announce the immediate availability of HBase _version_.
+
+Apache HBase™ is an open-source, distributed, versioned, non-relational database.
+Apache HBase gives you low latency random access to billions of rows with
+millions of columns atop non-specialized hardware. To learn more about HBase,
+see https://hbase.apache.org/.
+
+HBase _version_ is the _nth_ minor release in the HBase _major_.x line, which aims to
+improve the stability and reliability of HBase. This release includes roughly
+XXX resolved issues not covered by previous _major_.x releases.
+
+Notable new features include:
+- List text descriptions of features that fit on one line
+- Including if JDK or Hadoop support versions changes
+- If the "stable" pointer changes, call that out
+- For those with obvious JIRA IDs, include them (HBASE-YYYYY)
+
+The full list of issues can be found in the included CHANGES.md and RELEASENOTES.md,
+or via our issue tracker:
+
+    https://s.apache.org/hbase-_version_-jira
+
+To download please follow the links and instructions on our website:
+
+    https://hbase.apache.org/downloads.html
+
+
+Questions, comments, and problems are always welcome at: dev@hbase.apache.org.
+
+Thanks to all who contributed and made this release possible.
+
+Cheers,
+The HBase Dev Team
+----
+
+You should send this message to the following lists: dev@hbase.apache.org, user@hbase.apache.org, announce@apache.org. If you'd like a spot check before sending, feel free to ask via JIRA or the dev list.
+
 [[documentation]]
 == Generating the HBase Reference Guide
 
@@ -909,13 +956,21 @@ For any other module, for example `hbase-common`, the tests must be strict unit
 ==== Testing the HBase Shell
 
 The HBase shell and its tests are predominantly written in jruby.
-In order to make these tests run as a part of the standard build, there is a single JUnit test, `TestShell`, that takes care of loading the jruby implemented tests and running them.
+
+In order to make these tests run as a part of the standard build, there are a few JUnit test classes that take care of loading the jruby implemented tests and running them.
+The tests were split into separate classes to accommodate class-level timeouts (see <<hbase.unittests>> for specifics).
 You can run all of these tests from the top level with:
 
 [source,bourne]
 ----
+      mvn clean test -Dtest=Test*Shell
+----
+
+If you have previously done a `mvn install`, then you can instruct maven to run only the tests in the hbase-shell module with:
 
-      mvn clean test -Dtest=TestShell
+[source,bourne]
+----
+      mvn clean test -pl hbase-shell
 ----
 
 Alternatively, you may limit the shell tests that run using the system variable `shell.test`.
@@ -924,8 +979,7 @@ For example, the tests that cover the shell commands for altering tables are con
 
 [source,bourne]
 ----
-
-      mvn clean test -Dtest=TestShell -Dshell.test=/AdminAlterTableTest/
+      mvn clean test -pl hbase-shell -Dshell.test=/AdminAlterTableTest/
 ----
 
 You may also use a link:http://docs.ruby-doc.com/docs/ProgrammingRuby/html/language.html#UJ[Ruby Regular Expression
@@ -935,14 +989,13 @@ You can run all of the HBase admin related tests, including both the normal admi
 [source,bourne]
 ----
 
-      mvn clean test -Dtest=TestShell -Dshell.test=/.*Admin.*Test/
+      mvn clean test -pl hbase-shell -Dshell.test=/.*Admin.*Test/
 ----
 
 In the event of a test failure, you can see details by examining the XML version of the surefire report results
 
 [source,bourne]
 ----
-
       vim hbase-shell/target/surefire-reports/TEST-org.apache.hadoop.hbase.client.TestShell.xml
 ----
 
@@ -1462,9 +1515,8 @@ HBase ships with several ChaosMonkey policies, available in the
 [[chaos.monkey.properties]]
 ==== Configuring Individual ChaosMonkey Actions
 
-Since HBase version 1.0.0 (link:https://issues.apache.org/jira/browse/HBASE-11348[HBASE-11348]),
 ChaosMonkey integration tests can be configured per test run.
-Create a Java properties file in the HBase classpath and pass it to ChaosMonkey using
+Create a Java properties file in the HBase CLASSPATH and pass it to ChaosMonkey using
 the `-monkeyProps` configuration flag. Configurable properties, along with their default
 values if applicable, are listed in the `org.apache.hadoop.hbase.chaos.factories.MonkeyConstants`
 class. For properties that have defaults, you can override them by including them
@@ -1477,7 +1529,9 @@ The following example uses a properties file called <<monkey.properties,monkey.p
 $ bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic -monkeyProps monkey.properties
 ----
 
-The above command will start the integration tests and chaos monkey passing the properties file _monkey.properties_.
+The above command will start the integration tests and chaos monkey. It will look for the
+properties file _monkey.properties_ on the HBase CLASSPATH; e.g. inside the HBase _conf_ dir.
+
 Here is an example chaos monkey file:
 
 [[monkey.properties]]
@@ -1492,6 +1546,8 @@ move.regions.sleep.time=80000
 batch.restart.rs.ratio=0.4f
 ----
 
+Periods and times are expressed in milliseconds.
+
 HBase 1.0.2 and newer adds the ability to restart HBase's underlying ZooKeeper quorum or
 HDFS nodes. To use these actions, you need to configure some new properties, which
 have no reasonable defaults because they are deployment-specific, in your ChaosMonkey
@@ -1530,35 +1586,6 @@ We use Git for source code management and latest development happens on `master`
 branches for past major/minor/maintenance releases and important features and bug fixes are often
  back-ported to them.
 
-=== Release Managers
-
-Each maintained release branch has a release manager, who volunteers to coordinate new features and bug fixes are backported to that release.
-The release managers are link:https://hbase.apache.org/team-list.html[committers].
-If you would like your feature or bug fix to be included in a given release, communicate with that release manager.
-If this list goes out of date or you can't reach the listed person, reach out to someone else on the list.
-
-NOTE: End-of-life releases are not included in this list.
-
-.Release Managers
-[cols="1,1", options="header"]
-|===
-| Release
-| Release Manager
-
-| 1.2
-| Sean Busbey
-
-| 1.3
-| Mikhail Antonov
-
-| 1.4
-| Andrew Purtell
-
-| 2.0
-| Michael Stack
-
-|===
-
 [[code.standards]]
 === Code Standards
 
@@ -2186,6 +2213,12 @@ When the amending author is different from the original committer, add notice of
                                 - [DISCUSSION] Best practice when amending commits cherry picked
                                 from master to branch].
 
+====== Close related GitHub PRs
+
+As a project we work to ensure there's a JIRA associated with each change, but we don't mandate any particular tool be used for reviews. Due to implementation details of the ASF's integration between hosted git repositories and GitHub, the PMC has no ability to directly close PRs on our GitHub repo. In the event that a contributor makes a Pull Request on GitHub, either because the contributor finds that easier than attaching a patch to JIRA or because a reviewer prefers that UI for examining changes, it's important to make note of the PR in the commit that goes to the master branch so that PRs are kept up to date.
+
+To read more about the details of what kinds of commit messages will work with the GitHub "close via keyword in commit" mechanism see link:https://help.github.com/articles/closing-issues-using-keywords/[the GitHub documentation for "Closing issues using keywords"]. In summary, you should include a line with the phrase "closes #XXX", where the XXX is the pull request id. The pull request id is usually given in the GitHub UI in grey at the end of the subject heading.
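+
+As a sketch, a commit message that would close GitHub pull request 123 might look like the following (the JIRA ID, summary, and PR number are placeholders):
+
+----
+HBASE-XXXXX Summary of the change
+
+closes #123
+----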
+
 [[committer.tests]]
 ====== Committers are responsible for making sure commits do not break the build or tests
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/external_apis.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/external_apis.adoc b/src/main/asciidoc/_chapters/external_apis.adoc
index ffb6ee6..8f65c4e 100644
--- a/src/main/asciidoc/_chapters/external_apis.adoc
+++ b/src/main/asciidoc/_chapters/external_apis.adoc
@@ -186,20 +186,20 @@ creation or mutation, and `DELETE` for deletion.
 
 |/_table_/schema
 |POST
-|Create a new table, or replace an existing table's schema
+|Update an existing table with the provided schema fragment
 |curl -vi -X POST \
   -H "Accept: text/xml" \
   -H "Content-Type: text/xml" \
-  -d '&lt;?xml version="1.0" encoding="UTF-8"?>&lt;TableSchema name="users">&lt;ColumnSchema name="cf" />&lt;/TableSchema>' \
+  -d '&lt;?xml version="1.0" encoding="UTF-8"?>&lt;TableSchema name="users">&lt;ColumnSchema name="cf" KEEP_DELETED_CELLS="true" />&lt;/TableSchema>' \
   "http://example.com:8000/users/schema"
 
 |/_table_/schema
 |PUT
-|Update an existing table with the provided schema fragment
+|Create a new table, or replace an existing table's schema
 |curl -vi -X PUT \
   -H "Accept: text/xml" \
   -H "Content-Type: text/xml" \
-  -d '&lt;?xml version="1.0" encoding="UTF-8"?>&lt;TableSchema name="users">&lt;ColumnSchema name="cf" KEEP_DELETED_CELLS="true" />&lt;/TableSchema>' \
+  -d '&lt;?xml version="1.0" encoding="UTF-8"?>&lt;TableSchema name="users">&lt;ColumnSchema name="cf" />&lt;/TableSchema>' \
   "http://example.com:8000/users/schema"
 
 |/_table_/schema
@@ -851,23 +851,14 @@ println(Bytes.toString(value))
 === Setting the Classpath
 
 To use Jython with HBase, your CLASSPATH must include HBase's classpath as well as
-the Jython JARs required by your code. First, use the following command on a server
-running the HBase RegionServer process, to get HBase's classpath.
+the Jython JARs required by your code.
 
-[source, bash]
-----
-$ ps aux |grep regionserver| awk -F 'java.library.path=' {'print $2'} | awk {'print $1'}
-
-/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64
-----
-
-Set the `$CLASSPATH` environment variable to include the path you found in the previous
-step, plus the path to `jython.jar` and each additional Jython-related JAR needed for
-your project.
+Set the `HBASE_CLASSPATH` environment variable to include the path to `jython.jar`, plus each
+additional Jython-related JAR needed for your project, as in the following example.
 
 [source, bash]
 ----
-$ export CLASSPATH=$CLASSPATH:/usr/lib/hadoop/lib/native:/usr/lib/hbase/lib/native/Linux-amd64-64:/path/to/jython.jar
+$ export HBASE_CLASSPATH=/directory/jython.jar
 ----
 
 Start a Jython shell with HBase and Hadoop JARs in the classpath:
@@ -877,55 +868,52 @@ $ bin/hbase org.python.util.jython
 
 .Table Creation, Population, Get, and Delete with Jython
 ====
-The following Jython code example creates a table, populates it with data, fetches
-the data, and deletes the table.
+The following Jython code example checks for the table and,
+if it exists, deletes it and then creates it. Then it
+populates the table with data and fetches the data.
 
 [source,jython]
 ----
 import java.lang
-from org.apache.hadoop.hbase import HBaseConfiguration, HTableDescriptor, HColumnDescriptor, HConstants, TableName
-from org.apache.hadoop.hbase.client import HBaseAdmin, HTable, Get
-from org.apache.hadoop.hbase.io import Cell, RowResult
+from org.apache.hadoop.hbase import HBaseConfiguration, HTableDescriptor, HColumnDescriptor, TableName
+from org.apache.hadoop.hbase.client import Admin, Connection, ConnectionFactory, Get, Put, Result, Table
+from org.apache.hadoop.conf import Configuration
 
 # First get a conf object.  This will read in the configuration
 # that is out in your hbase-*.xml files such as location of the
 # hbase master node.
-conf = HBaseConfiguration()
+conf = HBaseConfiguration.create()
+connection = ConnectionFactory.createConnection(conf)
+admin = connection.getAdmin()
 
-# Create a table named 'test' that has two column families,
-# one named 'content, and the other 'anchor'.  The colons
-# are required for column family names.
-tablename = TableName.valueOf("test")
+# Create a table named 'test' that has a column family
+# named 'content'.
+tableName = TableName.valueOf("test")
+table = connection.getTable(tableName)
 
-desc = HTableDescriptor(tablename)
-desc.addFamily(HColumnDescriptor("content:"))
-desc.addFamily(HColumnDescriptor("anchor:"))
-admin = HBaseAdmin(conf)
+desc = HTableDescriptor(tableName)
+desc.addFamily(HColumnDescriptor("content"))
 
 # Drop and recreate if it exists
-if admin.tableExists(tablename):
-    admin.disableTable(tablename)
-    admin.deleteTable(tablename)
-admin.createTable(desc)
+if admin.tableExists(tableName):
+    admin.disableTable(tableName)
+    admin.deleteTable(tableName)
 
-tables = admin.listTables()
-table = HTable(conf, tablename)
+admin.createTable(desc)
 
 # Add content to the 'content:qual' column on a row named 'row_x'
 row = 'row_x'
-update = Get(row)
-update.put('content:', 'some content')
-table.commit(update)
+put = Put(row)
+put.addColumn("content", "qual", "some content")
+table.put(put)
 
 # Now fetch the content just added, returns a byte[]
-data_row = table.get(row, "content:")
-data = java.lang.String(data_row.value, "UTF8")
+get = Get(row)
 
-print "The fetched row contains the value '%s'" % data
+result = table.get(get)
+data = java.lang.String(result.getValue("content", "qual"), "UTF8")
 
-# Delete the table.
-admin.disableTable(desc.getName())
-admin.deleteTable(desc.getName())
+print "The fetched row contains the value '%s'" % data
 ----
 ====
 
@@ -935,24 +923,23 @@ This example scans a table and returns the results that match a given family qua
 
 [source, jython]
 ----
-# Print all rows that are members of a particular column family
-# by passing a regex for family qualifier
-
 import java.lang
-
-from org.apache.hadoop.hbase import HBaseConfiguration
-from org.apache.hadoop.hbase.client import HTable
-
-conf = HBaseConfiguration()
-
-table = HTable(conf, "wiki")
-col = "title:.*$"
-
-scanner = table.getScanner([col], "")
+from org.apache.hadoop.hbase import TableName, HBaseConfiguration
+from org.apache.hadoop.hbase.client import Connection, ConnectionFactory, Result, ResultScanner, Table, Admin
+from org.apache.hadoop.conf import Configuration
+conf = HBaseConfiguration.create()
+connection = ConnectionFactory.createConnection(conf)
+admin = connection.getAdmin()
+tableName = TableName.valueOf('wiki')
+table = connection.getTable(tableName)
+
+cf = "title"
+attr = "attr"
+scanner = table.getScanner(cf)
 while 1:
     result = scanner.next()
     if not result:
-        break
-    print java.lang.String(result.row), java.lang.String(result.get('title:').value)
+        break
+    print java.lang.String(result.row), java.lang.String(result.getValue(cf, attr))
 ----
 ====

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/getting_started.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/getting_started.adoc b/src/main/asciidoc/_chapters/getting_started.adoc
index 1cdc0a2..84ebcaa 100644
--- a/src/main/asciidoc/_chapters/getting_started.adoc
+++ b/src/main/asciidoc/_chapters/getting_started.adoc
@@ -52,7 +52,7 @@ See <<java,Java>> for information about supported JDK versions.
 === Get Started with HBase
 
 .Procedure: Download, Configure, and Start HBase in Standalone Mode
-. Choose a download site from this list of link:https://www.apache.org/dyn/closer.cgi/hbase/[Apache Download Mirrors].
+. Choose a download site from this list of link:https://www.apache.org/dyn/closer.lua/hbase/[Apache Download Mirrors].
   Click on the suggested top link.
   This will take you to a mirror of _HBase Releases_.
   Click on the folder named _stable_ and then download the binary file that ends in _.tar.gz_ to your local filesystem.
@@ -82,7 +82,7 @@ JAVA_HOME=/usr
 +
 
 . Edit _conf/hbase-site.xml_, which is the main HBase configuration file.
-  At this time, you only need to specify the directory on the local filesystem where HBase and ZooKeeper write data.
+  At this time, you need to specify the directory on the local filesystem where HBase and ZooKeeper write data and acknowledge some risks.
   By default, a new directory is created under /tmp.
   Many servers are configured to delete the contents of _/tmp_ upon reboot, so you should store the data elsewhere.
   The following configuration will store HBase's data in the _hbase_ directory, in the home directory of the user called `testuser`.
@@ -102,6 +102,21 @@ JAVA_HOME=/usr
     <name>hbase.zookeeper.property.dataDir</name>
     <value>/home/testuser/zookeeper</value>
   </property>
+  <property>
+    <name>hbase.unsafe.stream.capability.enforce</name>
+    <value>false</value>
+    <description>
+      Controls whether HBase will check for stream capabilities (hflush/hsync).
+
+      Disable this if you intend to run on LocalFileSystem, denoted by a rootdir
+      with the 'file://' scheme, but be mindful of the NOTE below.
+
+      WARNING: Setting this to false blinds you to potential data loss and
+      inconsistent system state in the event of process and/or node failures. If
+      HBase is complaining of an inability to use hsync or hflush it's most
+      likely not a false positive.
+    </description>
+  </property>
 </configuration>
 ----
 ====
@@ -111,7 +126,14 @@ HBase will do this for you.  If you create the directory,
 HBase will attempt to do a migration, which is not what you want.
 +
 NOTE: The _hbase.rootdir_ in the above example points to a directory
-in the _local filesystem_. The 'file:/' prefix is how we denote local filesystem.
+in the _local filesystem_. The 'file://' prefix is how we denote local
+filesystem. You should take the WARNING present in the configuration example
+to heart. In standalone mode HBase makes use of the local filesystem abstraction
+from the Apache Hadoop project. That abstraction doesn't provide the durability
+promises that HBase needs to operate safely. This is fine for local development
+and testing use cases where the cost of cluster failure is well contained. It is
+not appropriate for production deployments; eventually you will lose data.
+
 To home HBase on an existing instance of HDFS, set the _hbase.rootdir_ to point at a
 directory up on your instance: e.g. _hdfs://namenode.example.org:8020/hbase_.
 For more on this variant, see the section below on Standalone HBase over HDFS.
@@ -163,7 +185,7 @@ hbase(main):001:0> create 'test', 'cf'
 
 . List Information About your Table
 +
-Use the `list` command to
+Use the `list` command to confirm your table exists.
 +
 ----
 hbase(main):002:0> list 'test'
@@ -174,6 +196,22 @@ test
 => ["test"]
 ----
 
++
+Now use the `describe` command to see details, including configuration defaults.
++
+----
+hbase(main):003:0> describe 'test'
+Table test is ENABLED
+test
+COLUMN FAMILIES DESCRIPTION
+{NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE =>
+'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'f
+alse', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE
+ => '65536'}
+1 row(s)
+Took 0.9998 seconds
+----
+
 . Put data into your table.
 +
 To put data into your table, use the `put` command.
@@ -314,7 +352,7 @@ First, add the following property which directs HBase to run in distributed mode
 ----
 +
 Next, change the `hbase.rootdir` from the local filesystem to the address of your HDFS instance, using the `hdfs://` URI syntax.
-In this example, HDFS is running on the localhost at port 8020.
+In this example, HDFS is running on the localhost at port 8020. Be sure to either remove the entry for `hbase.unsafe.stream.capability.enforce` or set it to true.
 +
 [source,xml]
 ----
@@ -371,7 +409,7 @@ The following command starts 3 backup servers using ports 16002/16012, 16003/160
 +
 ----
 
-$ ./bin/local-master-backup.sh 2 3 5
+$ ./bin/local-master-backup.sh start 2 3 5
 ----
 +
 To kill a backup master without killing the entire cluster, you need to find its process ID (PID). The PID is stored in a file with a name like _/tmp/hbase-USER-X-master.pid_.
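+The only content of the file is the PID. A minimal sketch of killing backup master number 1,
+assuming it was started by user `testuser` (adjust the path to your user and offset):
++
+----
+$ cat /tmp/hbase-testuser-1-master.pid | xargs kill -9
+----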
@@ -566,18 +604,14 @@ On each node of the cluster, run the `jps` command and verify that the correct p
 You may see additional Java processes running on your servers as well, if they are used for other purposes.
 +
 .`node-a` `jps` Output
-====
 ----
-
 $ jps
 20355 Jps
 20071 HQuorumPeer
 20137 HMaster
 ----
-====
 +
 .`node-b` `jps` Output
-====
 ----
 $ jps
 15930 HRegionServer
@@ -585,17 +619,14 @@ $ jps
 15838 HQuorumPeer
 16010 HMaster
 ----
-====
 +
 .`node-c` `jps` Output
-====
 ----
 $ jps
 13901 Jps
 13639 HQuorumPeer
 13737 HRegionServer
 ----
-====
 +
 .ZooKeeper Process Name
 [NOTE]

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/hbase-default.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase-default.adoc b/src/main/asciidoc/_chapters/hbase-default.adoc
index 7798657..f809f28 100644
--- a/src/main/asciidoc/_chapters/hbase-default.adoc
+++ b/src/main/asciidoc/_chapters/hbase-default.adoc
@@ -150,7 +150,7 @@ A comma-separated list of BaseLogCleanerDelegate invoked by
 *`hbase.master.logcleaner.ttl`*::
 +
 .Description
-Maximum time a WAL can stay in the .oldlogdir directory,
+Maximum time a WAL can stay in the oldWALs directory,
     after which it will be cleaned by a Master thread.
 +
 .Default

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/hbase_mob.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/hbase_mob.adoc b/src/main/asciidoc/_chapters/hbase_mob.adoc
index 9730529..8048772 100644
--- a/src/main/asciidoc/_chapters/hbase_mob.adoc
+++ b/src/main/asciidoc/_chapters/hbase_mob.adoc
@@ -61,12 +61,10 @@ an object is considered to be a MOB. Only `IS_MOB` is required. If you do not
 specify the `MOB_THRESHOLD`, the default threshold value of 100 KB is used.
 
 .Configure a Column for MOB Using HBase Shell
-====
 ----
 hbase> create 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400}
 hbase> alter 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400}
 ----
-====
 
 .Configure a Column for MOB Using the Java API
 ====
@@ -91,7 +89,6 @@ weekly policy - compact MOB Files for one week into one large MOB file
 monthly policy - compact MOB Files for one month into one large MOB File
 
 .Configure MOB compaction policy Using HBase Shell
-====
 ----
 hbase> create 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400, MOB_COMPACT_PARTITION_POLICY => 'daily'}
 hbase> create 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400, MOB_COMPACT_PARTITION_POLICY => 'weekly'}
@@ -101,7 +98,6 @@ hbase> alter 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400, MOB_C
 hbase> alter 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400, MOB_COMPACT_PARTITION_POLICY => 'weekly'}
 hbase> alter 't1', {NAME => 'f1', IS_MOB => true, MOB_THRESHOLD => 102400, MOB_COMPACT_PARTITION_POLICY => 'monthly'}
 ----
-====
 
 === Configure MOB Compaction mergeable threshold
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/images
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/images b/src/main/asciidoc/_chapters/images
index 1e0c6c1..dc4cd20 120000
--- a/src/main/asciidoc/_chapters/images
+++ b/src/main/asciidoc/_chapters/images
@@ -1 +1 @@
-../../site/resources/images
\ No newline at end of file
+../../../site/resources/images/
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/ops_mgt.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc b/src/main/asciidoc/_chapters/ops_mgt.adoc
index c7362ac..10508f4 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -68,8 +68,12 @@ Some commands take arguments. Pass no args or -h for usage.
   pe              Run PerformanceEvaluation
   ltt             Run LoadTestTool
   canary          Run the Canary tool
-  regionsplitter  Run the RegionSplitter tool
   version         Print the version
+  backup          Backup tables for recovery
+  restore         Restore tables from existing backup image
+  regionsplitter  Run RegionSplitter tool
+  rowcounter      Run RowCounter tool
+  cellcounter     Run CellCounter tool
   CLASSNAME       Run the class named CLASSNAME
 ----
 
@@ -79,7 +83,7 @@ Others, such as `hbase shell` (<<shell>>), `hbase upgrade` (<<upgrading>>), and
 === Canary
 
 There is a Canary class that can help users canary-test the HBase cluster status, at the granularity of every column family of every region, or of every RegionServer.
-To see the usage, use the `--help` parameter.
+To see the usage, use the `-help` parameter.
 
 ----
 $ ${HBASE_HOME}/bin/hbase canary -help
@@ -108,6 +112,13 @@ Usage: hbase canary [opts] [table1 [table2]...] | [regionserver1 [regionserver2]
    -D<configProperty>=<value> assigning or override the configuration params
 ----
 
+[NOTE]
+The `Sink` class is instantiated using the `hbase.canary.sink.class` configuration property, which
+also determines the Monitor class that is used. If this property is absent, `RegionServerStdOutSink`
+is used. The Sink must match the parameters passed to the _canary_ command.
+For example, to use table parameters you must set the `hbase.canary.sink.class` property to
+`org.apache.hadoop.hbase.tool.Canary$RegionStdOutSink`, as in the example below.
+
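+As a sketch, a canary run against a specific table might then look like the following (the table name is a placeholder):
+
+----
+$ ${HBASE_HOME}/bin/hbase canary -Dhbase.canary.sink.class='org.apache.hadoop.hbase.tool.Canary$RegionStdOutSink' test-table
+----
+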
 This tool will return non-zero error codes to the user for collaborating with other monitoring tools, such as Nagios.
 The error code definitions are:
 
@@ -192,10 +203,10 @@ This daemon will stop itself and return non-zero error code if any error occurs,
 $ ${HBASE_HOME}/bin/hbase canary -daemon
 ----
 
-Run repeatedly with internal 5 seconds and will not stop itself even if errors occur in the test.
+Run repeatedly at 5 second intervals; it will not stop itself even if errors occur in the test.
 
 ----
-$ ${HBASE_HOME}/bin/hbase canary -daemon -interval 50000 -f false
+$ ${HBASE_HOME}/bin/hbase canary -daemon -interval 5 -f false
 ----
 
 ==== Force timeout if canary test stuck
@@ -205,7 +216,7 @@ Because of this we provide a timeout option to kill the canary test and return a
 This run sets the timeout value to 60 seconds, the default value is 600 seconds.
 
 ----
-$ ${HBASE_HOME}/bin/hbase canary -t 600000
+$ ${HBASE_HOME}/bin/hbase canary -t 60000
 ----
 
 ==== Enable write sniffing in canary
@@ -234,7 +245,7 @@ while returning normal exit code. To treat read / write failure as error, you ca
 with the `-treatFailureAsError` option. When enabled, read / write failure would result in error
 exit code.
 ----
-$ ${HBASE_HOME}/bin/hbase canary --treatFailureAsError
+$ ${HBASE_HOME}/bin/hbase canary -treatFailureAsError
 ----
 
 ==== Running Canary in a Kerberos-enabled Cluster
@@ -266,7 +277,7 @@ This example shows each of the properties with valid values.
   <value>/etc/hbase/conf/keytab.krb5</value>
 </property>
 <!-- optional params -->
-property>
+<property>
   <name>hbase.client.dns.interface</name>
   <value>default</value>
 </property>
@@ -381,7 +392,7 @@ directory.
 You can get a textual dump of a WAL file content by doing the following:
 
 ----
- $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --dump hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
+ $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --dump hdfs://example.org:8020/hbase/WALs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
 ----
 
 The return code will be non-zero if there are any issues with the file, so you can test the wholesomeness of a file by redirecting `STDOUT` to `/dev/null` and testing the program return code.
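 
 For example, a quick wholesomeness check might look like the following (a sketch reusing the WAL path from above):
 
 ----
 $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --dump hdfs://example.org:8020/hbase/WALs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012 >/dev/null 2>&1; echo $?
 ----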
@@ -389,7 +400,7 @@ The return code will be non-zero if there are any issues with the file so you ca
 Similarly you can force a split of a log file directory by doing:
 
 ----
- $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --split hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/
+ $ ./bin/hbase org.apache.hadoop.hbase.regionserver.wal.FSHLog --split hdfs://example.org:8020/hbase/WALs/example.org,60020,1283516293161/
 ----
 
 [[hlog_tool.prettyprint]]
@@ -399,7 +410,7 @@ The `WALPrettyPrinter` is a tool with configurable options to print the contents
 You can invoke it via the HBase cli with the 'wal' command.
 
 ----
- $ ./bin/hbase wal hdfs://example.org:8020/hbase/.logs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
+ $ ./bin/hbase wal hdfs://example.org:8020/hbase/WALs/example.org,60020,1283516293161/10.10.21.10%3A60020.1283973724012
 ----
 
 .WAL Printing in older versions of HBase
@@ -677,6 +688,7 @@ Assuming you're running HDFS with permissions enabled, those permissions will ne
 
 For more information about bulk-loading HFiles into HBase, see <<arch.bulk.load,arch.bulk.load>>.
 
+[[walplayer]]
 === WALPlayer
 
 WALPlayer is a utility to replay WAL files into HBase.
@@ -701,25 +713,63 @@ $ bin/hbase org.apache.hadoop.hbase.mapreduce.WALPlayer /backuplogdir oldTable1,
 WALPlayer, by default, runs as a mapreduce job.
 To NOT run WALPlayer as a mapreduce job on your cluster, force it to run all in the local process by adding the flags `-Dmapreduce.jobtracker.address=local` on the command line.
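 
 For example, a local-process run might look like the following (a sketch reusing the input directory and table names from the example above):
 
 ----
 $ bin/hbase org.apache.hadoop.hbase.mapreduce.WALPlayer -Dmapreduce.jobtracker.address=local /backuplogdir oldTable1 newTable1
 ----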
 
+[[walplayer.options]]
+==== WALPlayer Options
+
+Running `WALPlayer` with no arguments prints brief usage information:
+
+----
+Usage: WALPlayer [options] <wal inputdir> <tables> [<tableMappings>]
+Replay all WAL files into HBase.
+<tables> is a comma separated list of tables.
+If no tables ("") are specified, all tables are imported.
+(Be careful, hbase:meta entries will be imported in this case.)
+
+WAL entries can be mapped to new set of tables via <tableMappings>.
+<tableMappings> is a comma separated list of target tables.
+If specified, each table in <tables> must have a mapping.
+
+By default WALPlayer will load data directly into HBase.
+To generate HFiles for a bulk data load instead, pass the following option:
+  -Dwal.bulk.output=/path/for/output
+  (Only one table can be specified, and no mapping is allowed!)
+Time range options:
+  -Dwal.start.time=[date|ms]
+  -Dwal.end.time=[date|ms]
+  (The start and the end date of timerange. The dates can be expressed
+  in milliseconds since epoch or in yyyy-MM-dd'T'HH:mm:ss.SS format.
+  E.g. 1234567890120 or 2009-02-13T23:32:30.12)
+Other options:
+  -Dmapreduce.job.name=jobName
+  Use the specified mapreduce job name for the wal player
+For performance also consider the following options:
+  -Dmapreduce.map.speculative=false
+  -Dmapreduce.reduce.speculative=false
+----
+
 [[rowcounter]]
-=== RowCounter and CellCounter
+=== RowCounter
 
-link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter]        is a mapreduce job to count all the rows of a table.
+link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html[RowCounter] is a mapreduce job to count all the rows of a table.
 This is a good utility to use as a sanity check to ensure that HBase can read all the blocks of a table if there are any concerns of metadata inconsistency.
-It will run the mapreduce all in a single process but it will run faster if you have a MapReduce cluster in place for it to exploit. It is also possible to limit
-the time range of data to be scanned by using the `--starttime=[starttime]` and `--endtime=[endtime]` flags.
+It will run the mapreduce all in a single process but it will run faster if you have a MapReduce cluster in place for it to exploit.
+It is possible to limit the time range of data to be scanned by using the `--starttime=[starttime]` and `--endtime=[endtime]` flags.
+The scanned data can be limited based on keys using the `--range=[startKey],[endKey][;[startKey],[endKey]...]` option.
 
 ----
-$ bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter <tablename> [<column1> <column2>...]
+$ bin/hbase rowcounter [options] <tablename> [--starttime=<start> --endtime=<end>] [--range=[startKey],[endKey][;[startKey],[endKey]...]] [<column1> <column2>...]
 ----
 
 RowCounter only counts one version per cell.
 
-Note: caching for the input Scan is configured via `hbase.client.scanner.caching` in the job configuration.
+For performance, consider using the `-Dhbase.client.scanner.caching=100` and `-Dmapreduce.map.speculative=false` options.
+
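+For example (a sketch; `mytable` is a placeholder table name):
+
+----
+$ bin/hbase rowcounter -Dhbase.client.scanner.caching=100 -Dmapreduce.map.speculative=false mytable
+----
+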
+[[cellcounter]]
+=== CellCounter
 
 HBase ships another diagnostic mapreduce job called link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/CellCounter.html[CellCounter].
 Like RowCounter, it is a diagnostic tool, but it gathers more fine-grained statistics about your table.
-The statistics gathered by RowCounter are more fine-grained and include:
+The statistics gathered by CellCounter include:
 
 * Total number of rows in the table.
 * Total number of CFs across all rows.
@@ -730,12 +780,12 @@ The statistics gathered by RowCounter are more fine-grained and include:
 
 The program allows you to limit the scope of the run.
 Provide a row regex or prefix to limit the rows to analyze.
-Specify a time range to scan the table by using the `--starttime=[starttime]` and `--endtime=[endtime]` flags.
+Specify a time range to scan the table by using the `--starttime=<starttime>` and `--endtime=<endtime>` flags.
 
 Use `hbase.mapreduce.scan.column.family` to specify scanning a single column family.
 
 ----
-$ bin/hbase org.apache.hadoop.hbase.mapreduce.CellCounter <tablename> <outputDir> [regex or prefix]
+$ bin/hbase cellcounter <tablename> <outputDir> [reportSeparator] [regex or prefix] [--starttime=<starttime> --endtime=<endtime>]
 ----
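 
 For example, to count cells in a single column family (a sketch; `mytable` and `cf` are placeholders):
 
 ----
 $ bin/hbase cellcounter -Dhbase.mapreduce.scan.column.family=cf mytable /tmp/cellcounter-out
 ----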
 
 Note: just like RowCounter, caching for the input Scan is configured via `hbase.client.scanner.caching` in the job configuration.
@@ -743,8 +793,7 @@ Note: just like RowCounter, caching for the input Scan is configured via `hbase.
 === mlockall
 
 It is possible to optionally pin your servers in physical memory, making them less likely to be swapped out in oversubscribed environments, by having the servers call link:http://linux.die.net/man/2/mlockall[mlockall] on startup.
-See link:https://issues.apache.org/jira/browse/HBASE-4391[HBASE-4391 Add ability to
-          start RS as root and call mlockall] for how to build the optional library and have it run on startup.
+See link:https://issues.apache.org/jira/browse/HBASE-4391[HBASE-4391 Add ability to start RS as root and call mlockall] for how to build the optional library and have it run on startup.
 
 [[compaction.tool]]
 === Offline Compaction Tool
@@ -1024,13 +1073,10 @@ The script requires you to set some environment variables before running it.
 Examine the script and modify it to suit your needs.
 
 ._rolling-restart.sh_ General Usage
-====
 ----
-
 $ ./bin/rolling-restart.sh --help
 Usage: rolling-restart.sh [--config <hbase-confdir>] [--rs-only] [--master-only] [--graceful] [--maxthreads xx]
 ----
-====
 
 Rolling Restart on RegionServers Only::
   To perform a rolling restart on the RegionServers only, use the `--rs-only` option.
@@ -2645,8 +2691,10 @@ full implications and have a sufficient background in managing HBase clusters.
 It was developed by Yahoo! and they run it at scale on their large grid cluster.
 See link:http://www.slideshare.net/HBaseCon/keynote-apache-hbase-at-yahoo-scale[HBase at Yahoo! Scale].
 
-RSGroups can be defined and managed with shell commands or corresponding Java
-APIs. A server can be added to a group with hostname and port pair and tables
+RSGroups are defined and managed with shell commands. The shell drives a
+Coprocessor Endpoint whose API is marked private given this is an evolving
+feature; the Coprocessor API is not for public consumption.
+A server can be added to a group with hostname and port pair and tables
 can be moved to this group so that only regionservers in the same rsgroup can
 host the regions of the table. RegionServers and tables can only belong to one
 rsgroup at a time. By default, all tables and regionservers belong to the
@@ -2781,6 +2829,48 @@ Viewing the Master log will give you insight on rsgroup operation.
 
 If it appears stuck, restart the Master process.
 
+=== Remove RegionServer Grouping
+Removing the RegionServer Grouping feature from a cluster on which it was enabled involves
+more steps than just removing the relevant properties from `hbase-site.xml`. The extra steps
+clean up the RegionServer grouping related metadata so that, if the feature is re-enabled
+in the future, the old metadata will not affect the functioning of the cluster.
+
+- Move all tables in non-default rsgroups to `default` regionserver group
+[source,bash]
+----
+#Reassigning table t1 from non default group - hbase shell
+hbase(main):005:0> move_tables_rsgroup 'default',['t1']
+----
+- Move all regionservers in non-default rsgroups to `default` regionserver group
+[source, bash]
+----
+#Reassigning all the servers in the non-default rsgroup to default - hbase shell
+hbase(main):008:0> move_servers_rsgroup 'default',['rs1.xxx.com:16206','rs2.xxx.com:16202','rs3.xxx.com:16204']
+----
+- Remove all non-default rsgroups. The `default` rsgroup, created implicitly, doesn't have to be removed
+[source,bash]
+----
+#removing non default rsgroup - hbase shell
+hbase(main):009:0> remove_rsgroup 'group2'
+----
+- Remove the changes made in `hbase-site.xml` and restart the cluster
+- Drop the table `hbase:rsgroup` from `hbase`
+[source, bash]
+----
+#Through hbase shell drop table hbase:rsgroup
+hbase(main):001:0> disable 'hbase:rsgroup'
+0 row(s) in 2.6270 seconds
+
+hbase(main):002:0> drop 'hbase:rsgroup'
+0 row(s) in 1.2730 seconds
+----
+- Remove znode `rsgroup` from the cluster ZooKeeper using zkCli.sh
+[source, bash]
+----
+#From ZK remove the node /hbase/rsgroup through zkCli.sh
+rmr /hbase/rsgroup
+----
+
 === ACL
 To enable ACL, add the following to your hbase-site.xml and restart your Master:
 
@@ -2793,3 +2883,141 @@ To enable ACL, add the following to your hbase-site.xml and restart your Master:
 ----
 
 
+
+[[normalizer]]
+== Region Normalizer
+
+The Region Normalizer tries to make all Regions in a table about the same size.
+It does this by finding a rough average. Any region that is larger than twice this
+size is split. Any region that is much smaller is merged into an adjacent region.
+It is good to run the Normalizer occasionally, during a quiet period after the cluster
+has been running a while, or after a burst of activity such as a large delete.
+
+(The bulk of the below detail was copied wholesale from the blog by Romil Choksi at
+link:https://community.hortonworks.com/articles/54987/hbase-region-normalizer.html[HBase Region Normalizer])
+
+The Region Normalizer is a feature available since HBase 1.2. It runs a set of
+pre-calculated merge/split actions to resize regions that are either too
+large or too small compared to the average region size for a given table. When
+invoked, the Region Normalizer computes a normalization 'plan' for all of the tables in
+HBase. System tables (such as hbase:meta, hbase:namespace, Phoenix system tables,
+etc.) and user tables with normalization disabled are ignored while computing the
+plan. For normalization-enabled tables, the normalization plan is carried out in
+parallel across multiple tables.
+
+The Normalizer can be enabled or disabled globally for the entire cluster using the
+`normalizer_switch` command in the HBase shell. Normalization can also be
+controlled on a per-table basis; it is disabled by default when a table is
+created. Normalization for a table can be enabled or disabled by setting the
+`NORMALIZATION_ENABLED` table attribute to true or false, as shown in the example below.
+
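+For example, from the HBase shell (a sketch; `t1` is a placeholder table name):
+
+----
+hbase(main):006:0> alter 't1', {NORMALIZATION_ENABLED => 'true'}
+----
+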
+To check the normalizer status and enable/disable the normalizer:
+
+[source,bash]
+----
+hbase(main):001:0> normalizer_enabled
+true
+0 row(s) in 0.4870 seconds
+
+hbase(main):002:0> normalizer_switch false
+true
+0 row(s) in 0.0640 seconds
+
+hbase(main):003:0> normalizer_enabled
+false
+0 row(s) in 0.0120 seconds
+
+hbase(main):004:0> normalizer_switch true
+false
+0 row(s) in 0.0200 seconds
+
+hbase(main):005:0> normalizer_enabled
+true
+0 row(s) in 0.0090 seconds
+----
+
+When enabled, the Normalizer is invoked in the background every 5 mins (by default),
+which can be configured using `hbase.normalizer.period` in `hbase-site.xml`.
+The Normalizer can also be invoked manually/programmatically at will using the HBase shell's
+`normalize` command, as shown below. HBase by default uses `SimpleRegionNormalizer`, but users can
+design their own normalizer as long as they implement the RegionNormalizer Interface.
+Details about the logic used by `SimpleRegionNormalizer` to compute its normalization
+plan can be found link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/master/normalizer/SimpleRegionNormalizer.html[here].
+
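+For example, to trigger a normalization run by hand (a sketch; the output shown is illustrative):
+
+----
+hbase(main):007:0> normalize
+true
+0 row(s) in 0.4570 seconds
+----
+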
+The below example shows a normalization plan being computed for a user table, and a
+merge action being taken as a result of the plan computed by SimpleRegionNormalizer.
+
+Consider a user table with some pre-split regions, having 3 equally large regions
+(about 100K rows) and 1 relatively small region (about 25K rows). Following is a
+snippet from an hbase:meta table scan showing each of the pre-split regions for
+the user table.
+
+----
+table_p8ddpd6q5z,,1469494305548.68b9892220865cb6048 column=info:regioninfo, timestamp=1469494306375, value={ENCODED => 68b9892220865cb604809c950d1adf48, NAME => 'table_p8ddpd6q5z,,1469494305548.68b989222 09c950d1adf48.   0865cb604809c950d1adf48.', STARTKEY => '', ENDKEY => '1'}
+....
+table_p8ddpd6q5z,1,1469494317178.867b77333bdc75a028 column=info:regioninfo, timestamp=1469494317848, value={ENCODED => 867b77333bdc75a028bb4c5e4b235f48, NAME => 'table_p8ddpd6q5z,1,1469494317178.867b7733 bb4c5e4b235f48.  3bdc75a028bb4c5e4b235f48.', STARTKEY => '1', ENDKEY => '3'}
+....
+table_p8ddpd6q5z,3,1469494328323.98f019a753425e7977 column=info:regioninfo, timestamp=1469494328486, value={ENCODED => 98f019a753425e7977ab8636e32deeeb, NAME => 'table_p8ddpd6q5z,3,1469494328323.98f019a7 ab8636e32deeeb.  53425e7977ab8636e32deeeb.', STARTKEY => '3', ENDKEY => '7'}
+....
+table_p8ddpd6q5z,7,1469494339662.94c64e748979ecbb16 column=info:regioninfo, timestamp=1469494339859, value={ENCODED => 94c64e748979ecbb166f6cc6550e25c6, NAME => 'table_p8ddpd6q5z,7,1469494339662.94c64e74 6f6cc6550e25c6.   8979ecbb166f6cc6550e25c6.', STARTKEY => '7', ENDKEY => '8'}
+....
+table_p8ddpd6q5z,8,1469494339662.6d2b3f5fd1595ab8e7 column=info:regioninfo, timestamp=1469494339859, value={ENCODED => 6d2b3f5fd1595ab8e7c031876057b1ee, NAME => 'table_p8ddpd6q5z,8,1469494339662.6d2b3f5f c031876057b1ee.   d1595ab8e7c031876057b1ee.', STARTKEY => '8', ENDKEY => ''}
+----
+Invoking the normalizer using `normalize` in the HBase shell, the below log snippet
+from the HMaster log shows the normalization plan computed as per the logic defined for
+SimpleRegionNormalizer. Since the total region size (in MB) of the adjacent smallest
+regions in the table is less than the average region size, the normalizer computes a
+plan to merge these two regions.
+
+----
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] master.HMaster: Skipping normalization for table: hbase:namespace, as it's either system table or doesn't have auto
+normalization turned on
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] master.HMaster: Skipping normalization for table: hbase:backup, as it's either system table or doesn't have auto normalization turned on
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] master.HMaster: Skipping normalization for table: hbase:meta, as it's either system table or doesn't have auto normalization turned on
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] master.HMaster: Skipping normalization for table: table_h2osxu3wat, as it's either system table or doesn't have auto normalization turned on
+2016-07-26 07:08:26,928 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.SimpleRegionNormalizer: Computing normalization plan for table: table_p8ddpd6q5z, number of regions: 5
+2016-07-26 07:08:26,929 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, total aggregated regions size: 12
+2016-07-26 07:08:26,929 DEBUG [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, average region size: 2.4
+2016-07-26 07:08:26,929 INFO  [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, small region size: 0 plus its neighbor size: 0, less than the avg size 2.4, merging them
+2016-07-26 07:08:26,971 INFO  [B.fifo.QRpcServer.handler=20,queue=2,port=20000] normalizer.MergeNormalizationPlan: Executing merging normalization plan: MergeNormalizationPlan{firstRegion={ENCODED=> d51df2c58e9b525206b1325fd925a971, NAME => 'table_p8ddpd6q5z,,1469514755237.d51df2c58e9b525206b1325fd925a971.', STARTKEY => '', ENDKEY => '1'}, secondRegion={ENCODED => e69c6b25c7b9562d078d9ad3994f5330, NAME => 'table_p8ddpd6q5z,1,1469514767669.e69c6b25c7b9562d078d9ad3994f5330.',
+STARTKEY => '1', ENDKEY => '3'}}
+----
+The Region normalizer, as per its computed plan, merged the region with start key ‘’
+and end key ‘1’ with another region having start key ‘1’ and end key ‘3’.
+Now that these regions have been merged, we see a single new region with start key
+‘’ and end key ‘3’:
+----
+table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:mergeA, timestamp=1469516907431,
+value=PBUF\x08\xA5\xD9\x9E\xAF\xE2*\x12\x1B\x0A\x07default\x12\x10table_p8ddpd6q5z\x1A\x00"\x011(\x000\x00 ea74d246741ba.   8\x00
+table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:mergeB, timestamp=1469516907431,
+value=PBUF\x08\xB5\xBA\x9F\xAF\xE2*\x12\x1B\x0A\x07default\x12\x10table_p8ddpd6q5z\x1A\x011"\x013(\x000\x0 ea74d246741ba.   08\x00
+table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:regioninfo, timestamp=1469516907431, value={ENCODED => e06c9b83c4a252b130eea74d246741ba, NAME => 'table_p8ddpd6q5z,,1469516907210.e06c9b83c ea74d246741ba.   4a252b130eea74d246741ba.', STARTKEY => '', ENDKEY => '3'}
+....
+table_p8ddpd6q5z,3,1469514778736.bf024670a847c0adff column=info:regioninfo, timestamp=1469514779417, value={ENCODED => bf024670a847c0adffb74b2e13408b32, NAME => 'table_p8ddpd6q5z,3,1469514778736.bf024670 b74b2e13408b32.  a847c0adffb74b2e13408b32.' STARTKEY => '3', ENDKEY => '7'}
+....
+table_p8ddpd6q5z,7,1469514790152.7c5a67bc755e649db2 column=info:regioninfo, timestamp=1469514790312, value={ENCODED => 7c5a67bc755e649db22f49af6270f1e1, NAME => 'table_p8ddpd6q5z,7,1469514790152.7c5a67bc 2f49af6270f1e1.  755e649db22f49af6270f1e1.', STARTKEY => '7', ENDKEY => '8'}
+....
+table_p8ddpd6q5z,8,1469514790152.58e7503cda69f98f47 column=info:regioninfo, timestamp=1469514790312, value={ENCODED => 58e7503cda69f98f4755178e74288c3a, NAME => 'table_p8ddpd6q5z,8,1469514790152.58e7503c 55178e74288c3a.  da69f98f4755178e74288c3a.', STARTKEY => '8', ENDKEY => ''}
+----
+
+A similar example can be seen for a user table with 3 smaller regions and 1
+relatively large region. For this example, we have a user table with 1 large region containing 100K rows and 3 relatively smaller regions with about 33K rows each. As seen from the normalization plan, since the larger region is more than twice the average region size it ends up being split into two regions: one with start key ‘1’ and end key ‘154717’, and the other with start key '154717' and end key ‘3’.
+----
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] master.HMaster: Skipping normalization for table: hbase:backup, as it's either system table or doesn't have auto normalization turned on
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Computing normalization plan for table: table_p8ddpd6q5z, number of regions: 4
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, total aggregated regions size: 12
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_p8ddpd6q5z, average region size: 3.0
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: No normalization needed, regions look good for table: table_p8ddpd6q5z
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Computing normalization plan for table: table_h2osxu3wat, number of regions: 5
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_h2osxu3wat, total aggregated regions size: 7
+2016-07-26 07:39:45,636 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_h2osxu3wat, average region size: 1.4
+2016-07-26 07:39:45,636 INFO  [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SimpleRegionNormalizer: Table table_h2osxu3wat, large region table_h2osxu3wat,1,1469515926544.27f2fdbb2b6612ea163eb6b40753c3db. has size 4, more than twice avg size, splitting
+2016-07-26 07:39:45,640 INFO [B.fifo.QRpcServer.handler=7,queue=1,port=20000] normalizer.SplitNormalizationPlan: Executing splitting normalization plan: SplitNormalizationPlan{regionInfo={ENCODED => 27f2fdbb2b6612ea163eb6b40753c3db, NAME => 'table_h2osxu3wat,1,1469515926544.27f2fdbb2b6612ea163eb6b40753c3db.', STARTKEY => '1', ENDKEY => '3'}, splitPoint=null}
+2016-07-26 07:39:45,656 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] master.HMaster: Skipping normalization for table: hbase:namespace, as it's either system table or doesn't have auto normalization turned on
+2016-07-26 07:39:45,656 DEBUG [B.fifo.QRpcServer.handler=7,queue=1,port=20000] master.HMaster: Skipping normalization for table: hbase:meta, as it's either system table or doesn't
+have auto normalization turned on …..…..….
+2016-07-26 07:39:46,246 DEBUG [AM.ZK.Worker-pool2-t278] master.RegionStates: Onlined 54de97dae764b864504704c1c8d3674a on hbase-test-rc-5.openstacklocal,16020,1469419333913 {ENCODED => 54de97dae764b864504704c1c8d3674a, NAME => 'table_h2osxu3wat,1,1469518785661.54de97dae764b864504704c1c8d3674a.', STARTKEY => '1', ENDKEY => '154717'}
+2016-07-26 07:39:46,246 INFO  [AM.ZK.Worker-pool2-t278] master.RegionStates: Transition {d6b5625df331cfec84dce4f1122c567f state=SPLITTING_NEW, ts=1469518786246, server=hbase-test-rc-5.openstacklocal,16020,1469419333913} to {d6b5625df331cfec84dce4f1122c567f state=OPEN, ts=1469518786246,
+server=hbase-test-rc-5.openstacklocal,16020,1469419333913}
+2016-07-26 07:39:46,246 DEBUG [AM.ZK.Worker-pool2-t278] master.RegionStates: Onlined d6b5625df331cfec84dce4f1122c567f on hbase-test-rc-5.openstacklocal,16020,1469419333913 {ENCODED => d6b5625df331cfec84dce4f1122c567f, NAME => 'table_h2osxu3wat,154717,1469518785661.d6b5625df331cfec84dce4f1122c567f.', STARTKEY => '154717', ENDKEY => '3'}
+----

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/performance.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/performance.adoc b/src/main/asciidoc/_chapters/performance.adoc
index c917646..866779c 100644
--- a/src/main/asciidoc/_chapters/performance.adoc
+++ b/src/main/asciidoc/_chapters/performance.adoc
@@ -188,11 +188,9 @@ It is useful for tuning the IO impact of prefetching versus the time before all
 To enable prefetching on a given column family, you can use HBase Shell or use the API.
 
 .Enable Prefetch Using HBase Shell
-====
 ----
 hbase> create 'MyTable', { NAME => 'myCF', PREFETCH_BLOCKS_ON_OPEN => 'true' }
 ----
-====
 
 .Enable Prefetch Using the API
 ====

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/pv2.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/pv2.adoc b/src/main/asciidoc/_chapters/pv2.adoc
new file mode 100644
index 0000000..5ecad3f
--- /dev/null
+++ b/src/main/asciidoc/_chapters/pv2.adoc
@@ -0,0 +1,163 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+[[pv2]]
+= Procedure Framework (Pv2): link:https://issues.apache.org/jira/browse/HBASE-12439[HBASE-12439]
+:doctype: book
+:numbered:
+:toc: left
+:icons: font
+:experimental:
+
+
+_Procedure v2 ...aims to provide a unified way to build...multi-step procedures with a rollback/roll-forward ability in case of failure (e.g. create/delete table) -- Matteo Bertozzi, the author of Pv2._
+
+With Pv2 you can build and run state machines. It was built by Matteo to make distributed state transitions in HBase resilient in the face of process failures. Prior to Pv2, state transition handling was spread about the codebase with implementation varying by transition-type and context. Pv2 was inspired by link:https://accumulo.apache.org/1.8/accumulo_user_manual.html#_fault_tolerant_executor_fate[FATE], of Apache Accumulo. +
+
+Early Pv2 aspects have been shipping in HBase for a good while now, but the framework has continued to evolve as it takes on more involved scenarios. What we have now is powerful but intricate in operation and incomplete, in need of cleanup and hardening. In this doc we give an overview of the system so you can make use of it (and help with its polishing).
+
+This system has the awkward name of Pv2 because HBase already had the notion of a Procedure used in snapshots (see hbase-server _org.apache.hadoop.hbase.procedure_ as opposed to hbase-procedure _org.apache.hadoop.hbase.procedure2_). Pv2 supersedes and is to replace Procedure.
+
+== Procedures
+
+A Procedure is a transform made on an HBase entity. Examples of HBase entities would be Regions and Tables. +
+Procedures are run by a ProcedureExecutor instance. A Procedure's current state is kept in the ProcedureStore. +
+The ProcedureExecutor has but a primitive view on what goes on inside a Procedure. From its PoV, Procedures are submitted and then the ProcedureExecutor keeps calling _#execute(Object)_ until the Procedure is done. Execute may be called multiple times in the case of failure or restart, so Procedure code must be idempotent, yielding the same result each time it runs. Procedure code can also implement _rollback_ so steps can be undone in case of failure. A call to _execute()_ can result in one of the following possibilities:
+
+* _execute()_ returns
+** _null_: indicates we are done.
+** _this_: indicates there is more to do, so persist the current procedure state and re-_execute()_.
+** _Array_ of sub-procedures: indicates a set of procedures that need to be run to completion before we can proceed (after which we expect the framework to call our execute again).
+* _execute()_ throws exception
+** _suspend_: indicates execution of procedure is suspended and can be resumed due to some external event. The procedure state is persisted.
+** _yield_: procedure is added back to scheduler. The procedure state is not persisted.
+** _interrupted_: currently same as _yield_.
+** Any _exception_ not listed above: Procedure _state_ is changed to _FAILED_ (after which we expect the framework will attempt rollback).
+
+The ProcedureExecutor stamps the framework's notion of Procedure State into the Procedure itself; e.g. it marks Procedures as INITIALIZING on submit. It moves the state to RUNNABLE when it goes to execute. When done, a Procedure gets marked FAILED or SUCCESS depending on the outcome. Here is the list of all states as of this writing:
+
+* *_INITIALIZING_* Procedure in construction, not yet added to the executor
+* *_RUNNABLE_* Procedure added to the executor, and ready to be executed.
+* *_WAITING_* The procedure is waiting on children (subprocedures) to be completed
+* *_WAITING_TIMEOUT_* The procedure is waiting on a timeout or an external event
+* *_ROLLEDBACK_* The procedure failed and was rolled back.
+* *_SUCCESS_* The procedure execution completed successfully.
+* *_FAILED_* The procedure execution failed, may need to rollback.
+
+After each execute, the Procedure state is persisted to the ProcedureStore. Hooks are invoked on Procedures so they can preserve custom state. Post-fault, the ProcedureExecutor re-hydrates its pre-crash state by replaying the content of the ProcedureStore. This makes the Procedure Framework resilient against process failure.
+
+=== Implementation
+
+In implementation, Procedures tend to divide transforms into finer-grained tasks and while some of these work items are handed off to sub-procedures,
+the bulk are done as processing _steps_ in-Procedure; each invocation of _execute()_ performs a single step, and then the Procedure relinquishes control, returning to the framework. The Procedure does its own tracking of where it is in the processing.
+
+What comprises a sub-task, or _step_, in the execution is up to the Procedure author, but generally it is a small piece of work that cannot be further decomposed and that moves the processing forward toward its end state. Having procedures made of many small steps rather than a few large ones allows the Procedure framework to give insight into where we are in the processing. It also allows the framework to be more fair in its execution. As stated above, each step may be called multiple times (failure/restart) so steps must be implemented to be idempotent. +
+It is easy to confuse the state that the Procedure itself is keeping with that of the Framework itself. Try to keep them distinct. +
+
+=== Rollback
+
+Rollback is called when the procedure or one of the sub-procedures has failed. The rollback step is supposed to cleanup the resources created during the execute() step. In case of failure and restart, rollback() may be called multiple times, so again the code must be idempotent.
+
+=== Metrics
+
+There are hooks for collecting metrics on submit of the procedure and on finish.
+
+* updateMetricsOnSubmit()
+* updateMetricsOnFinish()
+
+Individual procedures can override these methods to collect procedure-specific metrics. The default implementations of these methods try to get an object implementing an interface ProcedureMetrics which encapsulates the following set of generic metrics:
+
+* SubmittedCount (Counter): Total number of procedure instances submitted of a type.
+* Time (Histogram): Histogram of runtime for procedure instances.
+* FailedCount (Counter): Total number of failed procedure instances.
+
+Individual procedures can implement this object and so define this generic set of metrics.
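+
+As a hedged sketch (hook names per the hbase-procedure _Procedure_ class; where the metrics instance comes from is hypothetical), a procedure opts in by returning such an object, and the default submit/finish hooks do the rest:
+
+[source,java]
+----
+// Hypothetical: expose a ProcedureMetrics instance (SubmittedCount, Time,
+// FailedCount) so the default updateMetricsOnSubmit()/updateMetricsOnFinish()
+// implementations can update it. 'demoMetrics' is assumed to be built
+// elsewhere against your metrics system.
+@Override
+protected ProcedureMetrics getProcedureMetrics(Void env) {
+  return demoMetrics;
+}
+----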
+
+=== Baggage
+
+Procedures can carry baggage. One example is the _step_ the procedure last attained (see the previous section); procedures persist the enum that marks where they are currently. Other examples might be the Region or Server name the Procedure is currently working against. After each call to execute, Procedure#serializeStateData is called. Procedures can persist whatever they need.
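+
+A hedged sketch of the persistence hooks (this assumes the 2.x _ProcedureStateSerializer_ form -- earlier versions passed streams -- and _DemoStateData_ is a hypothetical protobuf message):
+
+[source,java]
+----
+@Override
+protected void serializeStateData(ProcedureStateSerializer serializer) throws IOException {
+  // Persist whatever baggage the procedure needs to resume; here, the current step.
+  serializer.serialize(DemoStateData.newBuilder().setStep(step).build());
+}
+
+@Override
+protected void deserializeStateData(ProcedureStateSerializer serializer) throws IOException {
+  step = serializer.deserialize(DemoStateData.class).getStep();
+}
+----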
+
+=== Result/State and Queries
+
+(From Matteo’s https://issues.apache.org/jira/secure/attachment/12693273/Procedurev2Notification-Bus.pdf[ProcedureV2 and Notification Bus] doc) +
+In the case of asynchronous operations, the result must be kept around until the client asks for it. Once we receive a “get” of the result we can schedule the delete of the record. For some operations the result may be “unnecessary”, especially in case of failure (e.g. if the create table fails, we can query the operation result or we can just do a list table to see if it was created), so in some cases we can schedule the delete after a timeout. On the client side the operation will return a “Procedure ID”; this ID can be used to wait until the procedure is completed and get the result/exception. +
+
+[source]
+----
+Admin.doOperation() { long procId = master.doOperation(); master.waitCompletion(procId); }
+----
+
+If the master goes down while performing the operation, the backup master will pick up the half-in-progress operation and complete it. The client will not notice the failure.
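+
+The same pattern shows up in the HBase 2.x client API: the `Async` variants of _Admin_ operations return a Future backed by the master-side procedure. A sketch (`admin`, `tableDescriptor` and `splitKeys` are assumed to be in scope):
+
+[source,java]
+----
+// Submission returns once the procedure is queued on the Master...
+Future<Void> f = admin.createTableAsync(tableDescriptor, splitKeys);
+// ...and get() blocks until the procedure completes (or surfaces its exception).
+f.get();
+----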
+
+== Subprocedures
+
+Subprocedures are _Procedure_ instances created and returned by the _#execute(Object)_ method of a procedure instance (the parent procedure). As subprocedures are of type _Procedure_, they can instantiate their own subprocedures. As this is recursive, a procedure stack is maintained by the framework. The framework makes sure that the parent procedure does not proceed until all sub-procedures and their subprocedures in a procedure stack are successfully finished.
+
+== ProcedureExecutor
+
+_ProcedureExecutor_ uses _ProcedureStore_ and _ProcedureScheduler_ and executes procedures submitted to it. Some of the basic operations supported are:
+
+* _abort(procId)_: aborts the specified procedure if it is not finished
+* _submit(Procedure)_: submits a procedure for execution
+* _retrieve:_ list of get methods to get _Procedure_ instances and results
+* _register/unregister_ listeners: for listening on Procedure related notifications
+
+When _ProcedureExecutor_ starts it loads procedure instances persisted in _ProcedureStore_ from the previous run. All unfinished procedures are resumed from the last stored state.
+
+== Nonces
+
+You can pass the nonce that came in with the RPC to the Procedure on submit at the executor. This nonce will then be serialized along w/ the Procedure on persist. If there is a crash, on reload the nonce will be put back into a map of nonces to procedure IDs so that if a client tries to run the same procedure a second time, it will be rejected. See the base Procedure and how nonce is a base data member.
+
+== Wait/Wake/Suspend/Yield
+
+‘suspend’ means stop processing a procedure because we can make no more progress until a condition changes; i.e. we have sent an RPC and need to wait on the response. The way this works is that a Procedure throws a suspend exception from down in its guts as a GOTO to the end-of-the-current-processing step. Suspend also puts the Procedure back on the scheduler. Problematically, we do some accounting on our way out even on suspend, so exiting can take time (we have to update state in the WAL).
+
+RegionTransitionProcedure#reportTransition is called on receipt of a report from a RS. For Assign and Unassign, this event -- the response from the server to which we sent an RPC -- wakes up suspended Assign/Unassigns.
+
+== Locking
+
+Procedure Locks are not about concurrency! They are about giving a Procedure read/write access to an HBase Entity such as a Table or Region so that it is possible to shut out other Procedures from making modifications to an HBase Entity's state while the current one is running.
+
+Locking is optional, up to the Procedure implementor, but if an entity is being operated on by a Procedure, all transforms need to be done via Procedures using the same locking scheme, or else there will be havoc.
+
+Two ProcedureExecutor Worker threads can actually end up both processing the same Procedure instance. If it happens, the threads are meant to be running different parts of the one Procedure -- changes that do not stamp on each other (This gets awkward around the procedure framework's notion of ‘suspend’. More on this below).
+
+Locks optionally may be held for the life of a Procedure. For example, if moving a Region, you probably want to have exclusive access to the HBase Region until the move completes (or fails). This is used in conjunction with _holdLock(Object)_. If _holdLock(Object)_ returns true, the procedure executor will call acquireLock() once and thereafter not call _releaseLock(Object)_ until the Procedure is done (normally, it calls release/acquire around each invocation of _execute(Object)_).
+
+Locks also may live the life of a procedure; i.e. once an Assign Procedure starts, we do not want another procedure meddling w/ the region under assignment. Procedures that hold the lock for the life of the procedure set Procedure#holdLock to true. AssignProcedure does this, as do Split and Move (if in the middle of a Region move, you do not want it Splitting).
+
+Some locks have a hierarchy. For example, taking a region lock also takes a (read) lock on its containing table and namespace, to prevent another Procedure from obtaining an exclusive lock on the hosting table (or namespace).
+
+== Procedure Types
+
+=== StateMachineProcedure
+
+One can consider each call to the _#execute(Object)_ method as transitioning from one state to another in a state machine. The abstract class _StateMachineProcedure_ is a wrapper around the base _Procedure_ class which provides constructs for implementing a state machine as a _Procedure_. After each state transition the current state is persisted so that, in case of crash/restart, the state transition can be resumed from the state the procedure was in before the crash/restart. Individual procedures need to define initial and terminus states, and hooks _executeFromState()_ and _setNextState()_ are provided for state transitions.
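+
+A hedged sketch (state and helper names are hypothetical; signatures abbreviated from _org.apache.hadoop.hbase.procedure2.StateMachineProcedure_ -- consult the class for the exact API):
+
+[source,java]
+----
+import org.apache.hadoop.hbase.procedure2.StateMachineProcedure;
+
+// Hypothetical two-state machine: PREPARE then COMMIT.
+public class DemoStateMachineProcedure
+    extends StateMachineProcedure<Void, DemoStateMachineProcedure.DemoState> {
+  public enum DemoState { PREPARE, COMMIT }
+
+  @Override
+  protected Flow executeFromState(Void env, DemoState state) {
+    switch (state) {
+      case PREPARE:
+        prepare();                       // idempotent step
+        setNextState(DemoState.COMMIT);  // persisted; a restart resumes from here
+        return Flow.HAS_MORE_STATE;
+      case COMMIT:
+      default:
+        commit();
+        return Flow.NO_MORE_STATE;       // terminus state reached
+    }
+  }
+
+  @Override
+  protected void rollbackState(Void env, DemoState state) { /* undo 'state' */ }
+
+  @Override
+  protected DemoState getState(int stateId) { return DemoState.values()[stateId]; }
+
+  @Override
+  protected int getStateId(DemoState state) { return state.ordinal(); }
+
+  @Override
+  protected DemoState getInitialState() { return DemoState.PREPARE; }
+
+  private void prepare() { /* idempotent work */ }
+  private void commit() { /* idempotent work */ }
+}
+----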
+
+=== RemoteProcedureDispatcher
+
+A new RemoteProcedureDispatcher (+ subclass RSProcedureDispatcher) primitive takes care of running the Procedure-based Assignments' ‘remote’ component. This dispatcher knows about ‘servers’. It aggregates assignments on a time/count basis so it can send procedures in batches rather than one per RPC. Procedure status comes back on the back of the RegionServer heartbeat reporting online/offline regions (no more notifications via ZK). The response is passed to the AMv2 to ‘process’. It will check against the in-memory state. If there is a mismatch, it fences out the RegionServer on the assumption that something went wrong on the RS side. Timeouts trigger retries (Not Yet Implemented!). The Procedure machine ensures only one operation at a time on any one Region/Table using entity _locking_ and smarts about what is serial and what can be run concurrently (Locking was zk-based -- you'd put a znode in zk for a table -- but now has been converted to be procedure-based as part of this project).
+
+== References
+
+* Matteo had a slide deck on what the Procedure Framework would look like and the problems it addresses, initially link:https://issues.apache.org/jira/secure/attachment/12845124/ProcedureV2b.pdf[attached to the Pv2 issue.]
+* link:https://issues.apache.org/jira/secure/attachment/12693273/Procedurev2Notification-Bus.pdf[A good doc by Matteo] on the problem and how Pv2 addresses it, w/ roadmap (from the Pv2 JIRA). We should go back to the roadmap to do the Notification Bus, conversion of log splitting to Pv2, etc.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/schema_design.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc b/src/main/asciidoc/_chapters/schema_design.adoc
index 4cd7656..b7a6936 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -504,11 +504,9 @@ Deleted cells are still subject to TTL and there will never be more than "maximu
 A new "raw" scan options returns all deleted rows and the delete markers.
 
 .Change the Value of `KEEP_DELETED_CELLS` Using HBase Shell
-====
 ----
 hbase> alter 't1', NAME => 'f1', KEEP_DELETED_CELLS => true
 ----
-====
 
 .Change the Value of `KEEP_DELETED_CELLS` Using the API
 ====
@@ -1148,16 +1146,41 @@ Detect regionserver failure as fast as reasonable. Set the following parameters:
 - `dfs.namenode.avoid.read.stale.datanode = true`
 - `dfs.namenode.avoid.write.stale.datanode = true`
 
+[[shortcircuit.reads]]
 ===  Optimize on the Server Side for Low Latency
-
-* Skip the network for local blocks. In `hbase-site.xml`, set the following parameters:
+Skip the network for local blocks when the RegionServer goes to read from HDFS by exploiting HDFS's
+link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html[Short-Circuit Local Reads] facility.
+Note that setup must be done on both the datanode and dfsclient ends of the connection -- i.e. at the RegionServer --
+and that both ends need to have loaded the hadoop native `.so` library.
+After configuring your hadoop setting _dfs.client.read.shortcircuit_ to _true_, configuring
+the _dfs.domain.socket.path_ path for the datanode and dfsclient to share, and restarting, next configure
+the regionserver/dfsclient side.
+
+* In `hbase-site.xml`, set the following parameters:
 - `dfs.client.read.shortcircuit = true`
-- `dfs.client.read.shortcircuit.buffer.size = 131072` (Important to avoid OOME)
+- `dfs.client.read.shortcircuit.skip.checksum = true` so we don't double checksum (HBase does its own checksumming to save on i/os. See <<hbase.regionserver.checksum.verify.performance>> for more on this).
+- `dfs.domain.socket.path` to match what was set for the datanodes.
+- `dfs.client.read.shortcircuit.buffer.size = 131072` Important to avoid OOME -- hbase has a default it uses if unset, see `hbase.dfs.client.read.shortcircuit.buffer.size`; its default is 131072.
 * Ensure data locality. In `hbase-site.xml`, set `hbase.hstore.min.locality.to.skip.major.compact = 0.7` (Meaning that 0.7 \<= n \<= 1)
 * Make sure DataNodes have enough handlers for block transfers. In `hdfs-site.xml`, set the following parameters:
 - `dfs.datanode.max.xcievers >= 8192`
 - `dfs.datanode.handler.count =` number of spindles
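+
+Pulled together, a sketch of the client-side stanza in _hbase-site.xml_ from the list above (the domain socket path is an example value only; it must match what your datanodes were configured with):
+
+[source,xml]
+----
+<property>
+  <name>dfs.client.read.shortcircuit</name>
+  <value>true</value>
+</property>
+<property>
+  <name>dfs.client.read.shortcircuit.skip.checksum</name>
+  <value>true</value>
+</property>
+<property>
+  <!-- example path; must match the datanode-side setting -->
+  <name>dfs.domain.socket.path</name>
+  <value>/var/lib/hadoop-hdfs/dn_socket</value>
+</property>
+<property>
+  <name>dfs.client.read.shortcircuit.buffer.size</name>
+  <value>131072</value>
+</property>
+----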
 
+Check the RegionServer logs after restart. You should see complaints only if there is misconfiguration.
+Otherwise, short-circuit read operates quietly in the background. It does not provide metrics, so
+there are no optics on how effective it is, but read latencies should show a marked improvement, especially
+if you have good data locality, lots of random reads, and a dataset larger than the available cache.
+
+Other advanced configurations that you might play with, especially if short-circuit functionality
+is complaining in the logs, include `dfs.client.read.shortcircuit.streams.cache.size` and
+`dfs.client.socketcache.capacity`. Documentation is sparse on these options. You'll have to
+read the source code.
+
+For more on short-circuit reads, see Colin's old blog on rollout,
+link:http://blog.cloudera.com/blog/2013/08/how-improved-short-circuit-local-reads-bring-better-performance-and-security-to-hadoop/[How Improved Short-Circuit Local Reads Bring Better Performance and Security to Hadoop].
+The link:https://issues.apache.org/jira/browse/HDFS-347[HDFS-347] issue also makes for an
+interesting read showing the HDFS community at its best (caveat a few comments).
+
 ===  JVM Tuning
 
 ====  Tune JVM GC for low collection latencies


[10/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/architecture.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 6d362c7..19a700a 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -643,44 +643,34 @@ Documentation will eventually move to this reference guide, but the blog is the
 [[block.cache]]
 === Block Cache
 
-HBase provides two different BlockCache implementations: the default on-heap `LruBlockCache` and the `BucketCache`, which is (usually) off-heap.
-This section discusses benefits and drawbacks of each implementation, how to choose the appropriate option, and configuration options for each.
+HBase provides two different BlockCache implementations to cache data read from HDFS:
+the default on-heap `LruBlockCache` and the `BucketCache`, which is (usually) off-heap.
+This section discusses benefits and drawbacks of each implementation, how to choose the
+appropriate option, and configuration options for each.
 
 .Block Cache Reporting: UI
 [NOTE]
 ====
 See the RegionServer UI for detail on caching deploy.
-Since HBase 0.98.4, the Block Cache detail has been significantly extended showing configurations, sizings, current usage, time-in-the-cache, and even detail on block counts and types.
+See configurations, sizings, current usage, time-in-the-cache, and even detail on block counts and types.
 ====
 
 ==== Cache Choices
 
-`LruBlockCache` is the original implementation, and is entirely within the Java heap. `BucketCache` is mainly intended for keeping block cache data off-heap, although `BucketCache` can also keep data on-heap and serve from a file-backed cache.
+`LruBlockCache` is the original implementation, and is entirely within the Java heap.
+`BucketCache` is optional and mainly intended for keeping block cache data off-heap, although `BucketCache` can also be a file-backed cache.
 
-.BucketCache is production ready as of HBase 0.98.6
-[NOTE]
-====
-To run with BucketCache, you need HBASE-11678.
-This was included in 0.98.6.
-====
-
-Fetching will always be slower when fetching from BucketCache, as compared to the native on-heap LruBlockCache.
-However, latencies tend to be less erratic across time, because there is less garbage collection when you use BucketCache since it is managing BlockCache allocations, not the GC.
-If the BucketCache is deployed in off-heap mode, this memory is not managed by the GC at all.
-This is why you'd use BucketCache, so your latencies are less erratic and to mitigate GCs and heap fragmentation.
-See Nick Dimiduk's link:http://www.n10k.com/blog/blockcache-101/[BlockCache 101] for comparisons running on-heap vs off-heap tests.
-Also see link:https://people.apache.org/~stack/bc/[Comparing BlockCache Deploys] which finds that if your dataset fits inside your LruBlockCache deploy, use it otherwise if you are experiencing cache churn (or you want your cache to exist beyond the vagaries of java GC), use BucketCache.
-
-When you enable BucketCache, you are enabling a two tier caching system, an L1 cache which is implemented by an instance of LruBlockCache and an off-heap L2 cache which is implemented by BucketCache.
+When you enable BucketCache, you are enabling a two tier caching system. We used to describe the
+tiers as "L1" and "L2" but have deprecated this terminology as of hbase-2.0.0. The "L1" cache referred to an
+instance of LruBlockCache and "L2" to an off-heap BucketCache. Instead, when BucketCache is enabled,
+all DATA blocks are kept in the BucketCache tier and meta blocks -- INDEX and BLOOM blocks -- are on-heap in the `LruBlockCache`.
 Management of these two tiers and the policy that dictates how blocks move between them is done by `CombinedBlockCache`.
-It keeps all DATA blocks in the L2 BucketCache and meta blocks -- INDEX and BLOOM blocks -- on-heap in the L1 `LruBlockCache`.
-See <<offheap.blockcache>> for more detail on going off-heap.
 
 [[cache.configurations]]
 ==== General Cache Configurations
 
 Apart from the cache implementation itself, you can set some general configuration options to control how the cache performs.
-See https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html.
+See link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig].
 After setting any of these options, restart or rolling restart your cluster for the configuration to take effect.
 Check logs for errors or unexpected behavior.
 
@@ -729,13 +719,13 @@ The way to calculate how much memory is available in HBase for caching is:
 number of region servers * heap size * hfile.block.cache.size * 0.99
 ----
 
-The default value for the block cache is 0.25 which represents 25% of the available heap.
+The default value for the block cache is 0.4 which represents 40% of the available heap.
 The last value (99%) is the default acceptable loading factor in the LRU cache after which eviction is started.
 The reason it is included in this equation is that it would be unrealistic to say that it is possible to use 100% of the available memory since this would make the process blocking from the point where it loads new blocks.
 Here are some examples:
 
-* One region server with the heap size set to 1 GB and the default block cache size will have 253 MB of block cache available.
-* 20 region servers with the heap size set to 8 GB and a default block cache size will have 39.6 of block cache.
+* One region server with the heap size set to 1 GB and the default block cache size will have 405 MB of block cache available.
+* 20 region servers with the heap size set to 8 GB and a default block cache size will have 63.3 GB of block cache.
 * 100 region servers with the heap size set to 24 GB and a block cache size of 0.5 will have about 1.16 TB of block cache.
 
 Your data is not the only resident of the block cache.
@@ -789,32 +779,59 @@ Since link:https://issues.apache.org/jira/browse/HBASE-4683[HBASE-4683 Always ca
 [[enable.bucketcache]]
 ===== How to Enable BucketCache
 
-The usual deploy of BucketCache is via a managing class that sets up two caching tiers: an L1 on-heap cache implemented by LruBlockCache and a second L2 cache implemented with BucketCache.
+The usual deploy of BucketCache is via a managing class that sets up two caching tiers:
+an on-heap cache implemented by LruBlockCache and a second cache implemented with BucketCache.
 The managing class is link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.html[CombinedBlockCache] by default.
 The previous link describes the caching 'policy' implemented by CombinedBlockCache.
-In short, it works by keeping meta blocks -- INDEX and BLOOM in the L1, on-heap LruBlockCache tier -- and DATA blocks are kept in the L2, BucketCache tier.
-It is possible to amend this behavior in HBase since version 1.0 and ask that a column family have both its meta and DATA blocks hosted on-heap in the L1 tier by setting `cacheDataInL1` via `(HColumnDescriptor.setCacheDataInL1(true)` or in the shell, creating or amending column families setting `CACHE_DATA_IN_L1` to true: e.g.
+In short, it works by keeping meta blocks -- INDEX and BLOOM in the on-heap LruBlockCache tier -- and DATA blocks are kept in the BucketCache tier.
+
+====
+Pre-hbase-2.0.0 versions::
+Fetching will always be slower when fetching from BucketCache in pre-hbase-2.0.0,
+as compared to the native on-heap LruBlockCache. However, latencies tend to be less
+erratic across time, because there is less garbage collection when you use BucketCache since it is managing BlockCache allocations, not the GC.
+If the BucketCache is deployed in off-heap mode, this memory is not managed by the GC at all.
+This is why you'd use BucketCache in pre-2.0.0, so your latencies are less erratic,
+to mitigate GCs and heap fragmentation, and so you can safely use more memory.
+See Nick Dimiduk's link:http://www.n10k.com/blog/blockcache-101/[BlockCache 101] for comparisons running on-heap vs off-heap tests.
+Also see link:https://people.apache.org/~stack/bc/[Comparing BlockCache Deploys] which finds that if your dataset fits inside your LruBlockCache deploy, use it otherwise if you are experiencing cache churn (or you want your cache to exist beyond the vagaries of java GC), use BucketCache.
++
+In pre-2.0.0,
+one can configure the BucketCache so it receives the `victim` of an LruBlockCache eviction.
+All Data and index blocks are cached in L1 first. When eviction happens from L1, the blocks (or `victims`) will get moved to L2.
+Set `cacheDataInL1` via `HColumnDescriptor.setCacheDataInL1(true)` or in the shell, creating or amending column families setting `CACHE_DATA_IN_L1` to true: e.g.
 [source]
 ----
 hbase(main):003:0> create 't', {NAME => 't', CONFIGURATION => {CACHE_DATA_IN_L1 => 'true'}}
 ----
 
-The BucketCache Block Cache can be deployed on-heap, off-heap, or file based.
+hbase-2.0.0+ versions::
+HBASE-11425 changed the HBase read path so it could hold the read-data off-heap avoiding copying of cached data on to the java heap.
+See <<regionserver.offheap.readpath>>. In hbase-2.0.0, off-heap latencies approach those of on-heap cache latencies with the added
+benefit of NOT provoking GC.
++
+From HBase 2.0.0 onwards, the notions of L1 and L2 have been deprecated. When BucketCache is turned on, the DATA blocks will always go to BucketCache and INDEX/BLOOM blocks go to the on-heap LruBlockCache. `cacheDataInL1` support has been removed.
+====
+
+The BucketCache Block Cache can be deployed in _off-heap_, _file_ or _mmapped_ file mode.
+
 You set which via the `hbase.bucketcache.ioengine` setting.
-Setting it to `heap` will have BucketCache deployed inside the allocated Java heap.
-Setting it to `offheap` will have BucketCache make its allocations off-heap, and an ioengine setting of `file:PATH_TO_FILE` will direct BucketCache to use a file caching (Useful in particular if you have some fast I/O attached to the box such as SSDs).
+Setting it to `offheap` will have BucketCache make its allocations off-heap, and an ioengine setting of `file:PATH_TO_FILE` will direct BucketCache to use file caching (useful in particular if you have some fast I/O attached to the box such as SSDs). From 2.0.0, it is possible to have more than one file backing the BucketCache. This is very useful, especially when the cache size requirement is high. For multiple backing files, configure ioengine as `files:PATH_TO_FILE1,PATH_TO_FILE2,PATH_TO_FILE3`. BucketCache can also be configured to use an mmapped file. Configure ioengine as `mmap:PATH_TO_FILE` for this.
 
-It is possible to deploy an L1+L2 setup where we bypass the CombinedBlockCache policy and have BucketCache working as a strict L2 cache to the L1 LruBlockCache.
-For such a setup, set `CacheConfig.BUCKET_CACHE_COMBINED_KEY` to `false`.
+It is possible to deploy a tiered setup where we bypass the CombinedBlockCache policy and have BucketCache working as a strict L2 cache to the L1 LruBlockCache.
+For such a setup, set `hbase.bucketcache.combinedcache.enabled` to `false`.
 In this mode, on eviction from L1, blocks go to L2.
 When a block is cached, it is cached first in L1.
 When we go to look for a cached block, we look first in L1 and if none found, then search L2.
 Let us call this deploy format, _Raw L1+L2_.
+NOTE: This L1+L2 mode was removed in 2.0.0. When BucketCache is used, it will be strictly the DATA cache and the LruBlockCache will cache INDEX/META blocks.
 
 Other BucketCache configs include: specifying a location to persist cache to across restarts, how many threads to use writing the cache, etc.
 See the link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/io/hfile/CacheConfig.html[CacheConfig.html] class for configuration options and descriptions.
 
-
+To check that it is enabled, look for the log line describing cache setup; it will detail how BucketCache has been deployed.
+Also see the UI. It will detail the cache tiering and its configuration.
 
 ====== BucketCache Example Configuration
 This sample provides a configuration for a 4 GB off-heap BucketCache with a 1 GB on-heap cache.
@@ -876,9 +893,10 @@ The following example configures buckets of size 4096 and 8192.
 [NOTE]
 ====
 The default maximum direct memory varies by JVM.
-Traditionally it is 64M or some relation to allocated heap size (-Xmx) or no limit at all (JDK7 apparently). HBase servers use direct memory, in particular short-circuit reading, the hosted DFSClient will allocate direct memory buffers.
+Traditionally it is 64M or some relation to allocated heap size (-Xmx) or no limit at all (JDK7 apparently). HBase servers use direct memory, in particular for short-circuit reading (see <<perf.hdfs.configs.localread>>); the hosted DFSClient will allocate direct memory buffers. How much the DFSClient uses is not easy to quantify; it is the number of open HFiles * `hbase.dfs.client.read.shortcircuit.buffer.size` where `hbase.dfs.client.read.shortcircuit.buffer.size` is set to 128k in HBase -- see _hbase-default.xml_ default configurations.
 If you do off-heap block caching, you'll be making use of direct memory.
-Starting your JVM, make sure the `-XX:MaxDirectMemorySize` setting in _conf/hbase-env.sh_ is set to some value that is higher than what you have allocated to your off-heap BlockCache (`hbase.bucketcache.size`). It should be larger than your off-heap block cache and then some for DFSClient usage (How much the DFSClient uses is not easy to quantify; it is the number of open HFiles * `hbase.dfs.client.read.shortcircuit.buffer.size` where `hbase.dfs.client.read.shortcircuit.buffer.size` is set to 128k in HBase -- see _hbase-default.xml_ default configurations). Direct memory, which is part of the Java process heap, is separate from the object heap allocated by -Xmx.
+The RPCServer uses a ByteBuffer pool. From 2.0.0, these buffers are off-heap ByteBuffers.
+Starting your JVM, make sure the `-XX:MaxDirectMemorySize` setting in _conf/hbase-env.sh_ considers the off-heap BlockCache (`hbase.bucketcache.size`), DFSClient usage, and the RPC-side ByteBufferPool max size. This has to be a bit higher than the sum of the off-heap BlockCache size and the max ByteBufferPool size. Allocating an extra 1-2 GB for the max direct memory size has worked in tests. Direct memory, which is part of the Java process heap, is separate from the object heap allocated by -Xmx.
 The value allocated by `MaxDirectMemorySize` must not exceed physical RAM, and is likely to be less than the total available RAM due to other memory requirements and system constraints.
 
 You can see how much memory -- on-heap and off-heap/direct -- a RegionServer is configured to use and how much it is using at any one time by looking at the _Server Metrics: Memory_ tab in the UI.
@@ -898,7 +916,7 @@ If the deploy was using CombinedBlockCache, then the LruBlockCache L1 size was c
 where size-of-bucket-cache itself is EITHER the value of the configuration `hbase.bucketcache.size` IF it was specified as Megabytes OR `hbase.bucketcache.size` * `-XX:MaxDirectMemorySize` if `hbase.bucketcache.size` is between 0 and 1.0.
 
 In 1.0, it should be more straight-forward.
-L1 LruBlockCache size is set as a fraction of java heap using `hfile.block.cache.size setting` (not the best name) and L2 is set as above either in absolute Megabytes or as a fraction of allocated maximum direct memory.
+Onheap LruBlockCache size is set as a fraction of the java heap using the `hfile.block.cache.size` setting (not the best name) and BucketCache is set as above in absolute Megabytes.
 ====
 
 ==== Compressed BlockCache
@@ -911,6 +929,54 @@ For a RegionServer hosting data that can comfortably fit into cache, or if your
 
 The compressed BlockCache is disabled by default. To enable it, set `hbase.block.data.cachecompressed` to `true` in _hbase-site.xml_ on all RegionServers.
 
+[[regionserver.offheap]]
+=== RegionServer Offheap Read/Write Path
+
+[[regionserver.offheap.readpath]]
+==== Offheap read-path
+In hbase-2.0.0, link:https://issues.apache.org/jira/browse/HBASE-11425[HBASE-11425] changed the HBase read path so it
+could hold the read-data off-heap, avoiding copying of cached data onto the java heap.
+This reduces GC pauses given there is less garbage made and so less to clear. The off-heap read path has performance
+similar to or better than that of the on-heap LRU cache. This feature is available since HBase 2.0.0.
+If the BucketCache is in `file` mode, fetching will always be slower compared to the native on-heap LruBlockCache.
+Refer to the blogs below for more details and test results on the off-heap read path:
+link:https://blogs.apache.org/hbase/entry/offheaping_the_read_path_in[Offheaping the Read Path in Apache HBase: Part 1 of 2]
+and link:https://blogs.apache.org/hbase/entry/offheap-read-path-in-production[Offheap Read-Path in Production - The Alibaba story].
+
+For an end-to-end off-heaped read-path, first of all there should be an off-heap backed <<offheap.blockcache>> (BC). Configure `hbase.bucketcache.ioengine` to `offheap` in
+_hbase-site.xml_. Also specify the total capacity of the BC using the `hbase.bucketcache.size` config. Please remember to adjust the value of 'HBASE_OFFHEAPSIZE' in
+_hbase-env.sh_. This is how we specify the max possible off-heap memory allocation for the
+RegionServer java process. This should be bigger than the off-heap BC size. Please keep in mind that there is no default for `hbase.bucketcache.ioengine`,
+which means the BC is turned OFF by default (See <<direct.memory>>).
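+
+A sketch of the _hbase-site.xml_ stanza just described (the 4 GB capacity is an example value only; remember to also set 'HBASE_OFFHEAPSIZE' in _hbase-env.sh_ higher than this):
+
+[source,xml]
+----
+<property>
+  <name>hbase.bucketcache.ioengine</name>
+  <value>offheap</value>
+</property>
+<property>
+  <!-- example capacity, in megabytes -->
+  <name>hbase.bucketcache.size</name>
+  <value>4096</value>
+</property>
+----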
+
+The next thing to tune is the ByteBuffer pool on the RPC server side.
+The buffers from this pool will be used to accumulate the cell bytes and create a result cell block to send back to the client side.
+`hbase.ipc.server.reservoir.enabled` can be used to turn this pool ON or OFF. By default this pool is ON and available. HBase will create off-heap ByteBuffers
+and pool them. Please make sure not to turn this OFF if you want an end-to-end off-heaped read path.
+If this pool is turned off, the server will create temp buffers on heap to accumulate the cell bytes and make a result cell block. This can impact the GC on a highly read loaded server.
+The user can tune this pool with respect to how many buffers are in the pool and what the size of each ByteBuffer should be.
+Use the config `hbase.ipc.server.reservoir.initial.buffer.size` to tune each of the buffer sizes. Default is 64 KB.
+
+When the read pattern is a random row read load and each of the rows is smaller in size compared to this 64 KB, try reducing this.
+When the result size is larger than one ByteBuffer size, the server will try to grab more than one buffer and make a result cell block out of these. When the pool is running out of buffers, the server will end up creating temporary on-heap buffers.
+
+The maximum number of ByteBuffers in the pool can be tuned using the config `hbase.ipc.server.reservoir.initial.max`. Its value defaults to 64 * the number of region server handlers configured (see the config `hbase.regionserver.handler.count`). The math is such that by default we consider 2 MB as the result cell block size per read result and each handler will be handling a read. For 2 MB size, we need 32 buffers each of size 64 KB (see the default buffer size in the pool). So per handler, 32 ByteBuffers (BB). We allocate twice this size as the max BBs count such that one handler can be creating the response and handing it to the RPC Responder thread and then handling a new request, creating a new response cell block (using pooled buffers). Even if the responder could not send back the first TCP reply immediately, our count should allow that we should still have enough buffers in our pool without having to make temporary buffers on the heap. Again, for smaller sized random row reads, tune this max count. The buffers are created lazily and the count is the max count to be pooled.
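+
+A sketch of the two pool settings (values are examples only, following the defaults described above for a 30-handler server):
+
+[source,xml]
+----
+<property>
+  <!-- size of each pooled ByteBuffer; default 64 KB -->
+  <name>hbase.ipc.server.reservoir.initial.buffer.size</name>
+  <value>65536</value>
+</property>
+<property>
+  <!-- max pooled buffers; default is 64 * hbase.regionserver.handler.count -->
+  <name>hbase.ipc.server.reservoir.initial.max</name>
+  <value>1920</value>
+</property>
+----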
+
+If you still see GC issues even after making the end-to-end read path off-heap, look for issues in the appropriate buffer pool. Check for the below RegionServer log line at INFO level:
+[source]
+----
+Pool already reached its max capacity : XXX and no free buffers now. Consider increasing the value for 'hbase.ipc.server.reservoir.initial.max' ?
+----
+
+The setting for _HBASE_OFFHEAPSIZE_ in _hbase-env.sh_ should consider this off-heap buffer pool on the RPC side also. We need to configure this max off-heap size for the RegionServer as a bit higher than the sum of this max pool size and the off-heap cache size. The TCP layer will also need to create direct bytebuffers for TCP communication. Also the DFS client will need some off-heap memory to do its workings, especially if short-circuit reads are configured. Allocating an extra 1-2 GB for the max direct memory size has worked in tests.
+
+If you are using coprocessors and reference the Cells in the read results, DO NOT store references to these Cells outside of the scope of the CP hook methods. Sometimes a CP needs to store information about a cell (like its row key) for consideration in a later CP hook call. For such cases, clone the required fields of the Cell using the CellUtil#cloneXXX(Cell) APIs.
+
+[[regionserver.offheap.writepath]]
+==== Offheap write-path
+
+TODO
+
 [[regionserver_splitting_implementation]]
 === RegionServer Splitting Implementation
 
@@ -951,8 +1017,11 @@ However, if a RegionServer crashes or becomes unavailable before the MemStore is
 If writing to the WAL fails, the entire operation to modify the data fails.
 
 HBase uses an implementation of the link:https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/wal/WAL.html[WAL] interface.
-Usually, there is only one instance of a WAL per RegionServer.
-The RegionServer records Puts and Deletes to it, before recording them to the <<store.memstore>> for the affected <<store>>.
+Usually, there is only one instance of a WAL per RegionServer. An exception
+is the RegionServer that is carrying _hbase:meta_; the _meta_ table gets its
+own dedicated WAL.
+The RegionServer records Puts and Deletes to its WAL before recording these
+Mutations to the <<store.memstore>> for the affected <<store>>.
 
 .The HLog
 [NOTE]
@@ -962,9 +1031,33 @@ In 0.94, HLog was the name of the implementation of the WAL.
 You will likely find references to the HLog in documentation tailored to these older versions.
 ====
 
-The WAL resides in HDFS in the _/hbase/WALs/_ directory (prior to HBase 0.94, they were stored in _/hbase/.logs/_), with subdirectories per region.
+The WAL resides in HDFS in the _/hbase/WALs/_ directory, with subdirectories per region.
+
+For more general information about the concept of write ahead logs, see the Wikipedia
+link:http://en.wikipedia.org/wiki/Write-ahead_logging[Write-Ahead Log] article.
+
 
-For more general information about the concept of write ahead logs, see the Wikipedia link:http://en.wikipedia.org/wiki/Write-ahead_logging[Write-Ahead Log] article.
+[[wal.providers]]
+==== WAL Providers
+In HBase, there are a number of WAL implementations (or 'Providers'). Each is known
+by a short name label (that unfortunately is not always descriptive). You set the provider in
+_hbase-site.xml_ passing the WAL provider short-name as the value of the
+_hbase.wal.provider_ property (set the provider for _hbase:meta_ using the
+_hbase.wal.meta_provider_ property).
+
+ * _asyncfs_: The *default*. New since hbase-2.0.0 (HBASE-15536, HBASE-14790). This _AsyncFSWAL_ provider, as it identifies itself in RegionServer logs, is built on a new non-blocking dfsclient implementation. It is currently resident in the hbase codebase but the intent is to move it back up into HDFS itself. WAL edits are written concurrently ("fan-out" style) to each of the WAL-block replicas on each DataNode rather than in a chained pipeline as the default client does. Latencies should be better. See link:https://www.slideshare.net/HBaseCon/apache-hbase-improvements-and-practices-at-xiaomi[Apache HBase Improvements and Practices at Xiaomi] at slide 14 onward for more detail on implementation.
+ * _filesystem_: This was the default in hbase-1.x releases. It is built on the blocking _DFSClient_ and writes to replicas in classic _DFSClient_ pipeline mode. In logs it identifies as _FSHLog_ or _FSHLogProvider_.
+ * _multiwal_: This provider is made of multiple instances of _asyncfs_ or  _filesystem_. See the next section for more on _multiwal_.
+
+Look for lines like the one below in the RegionServer log to see which provider is in place (the below shows the default AsyncFSWALProvider):
+
+----
+2018-04-02 13:22:37,983 INFO  [regionserver/ve0528:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.AsyncFSWALProvider
+----
+
+NOTE: As the _AsyncFSWAL_ hacks into the internals of the DFSClient implementation, it is easily broken by upgrades of the hadoop dependencies, even by a simple patch release. So if you do not specify the wal provider explicitly, we will first try to use _asyncfs_ and, if that fails, we will fall back to _filesystem_. Notice that this may not always work, so if you still have problems starting HBase because of problems starting _AsyncFSWAL_, please specify _filesystem_ explicitly in the config file.
+
+NOTE: EC support has been added to hadoop-3.x, and it is incompatible with WAL as the EC output stream does not support hflush/hsync. In order to create a non-EC file in an EC directory, we need to use the new builder-based create API for _FileSystem_, but it was only introduced in hadoop-2.9+ and for HBase we still need to support hadoop-2.7.x. So please do not enable EC for the WAL directory until we find a way to deal with it.
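+
+For example, to pin the provider explicitly in _hbase-site.xml_ (a sketch; pick whichever short-name suits your deploy):
+
+[source,xml]
+----
+<property>
+  <name>hbase.wal.provider</name>
+  <!-- one of: asyncfs (default), filesystem, multiwal -->
+  <value>filesystem</value>
+</property>
+----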
 
 ==== MultiWAL
 With a single WAL per RegionServer, the RegionServer must write to the WAL serially, because HDFS files must be sequential. This causes the WAL to be a performance bottleneck.
@@ -1090,28 +1183,28 @@ The general process for log splitting, as described in <<log.splitting.step.by.s
 
 . If distributed log processing is enabled, the HMaster creates a _split log manager_ instance when the cluster is started.
   .. The split log manager manages all log files which need to be scanned and split.
-  .. The split log manager places all the logs into the ZooKeeper splitlog node (_/hbase/splitlog_) as tasks.
-  .. You can view the contents of the splitlog by issuing the following `zkCli` command. Example output is shown.
+  .. The split log manager places all the logs into the ZooKeeper splitWAL node (_/hbase/splitWAL_) as tasks.
+  .. You can view the contents of the splitWAL by issuing the following `zkCli` command. Example output is shown.
 +
 [source,bash]
 ----
-ls /hbase/splitlog
-[hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost8.sample.com%2C57020%2C1340474893275-splitting%2Fhost8.sample.com%253A57020.1340474893900,
-hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost3.sample.com%2C57020%2C1340474893299-splitting%2Fhost3.sample.com%253A57020.1340474893931,
-hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost4.sample.com%2C57020%2C1340474893287-splitting%2Fhost4.sample.com%253A57020.1340474893946]
+ls /hbase/splitWAL
+[hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2FWALs%2Fhost8.sample.com%2C57020%2C1340474893275-splitting%2Fhost8.sample.com%253A57020.1340474893900,
+hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2FWALs%2Fhost3.sample.com%2C57020%2C1340474893299-splitting%2Fhost3.sample.com%253A57020.1340474893931,
+hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2FWALs%2Fhost4.sample.com%2C57020%2C1340474893287-splitting%2Fhost4.sample.com%253A57020.1340474893946]
 ----
 +
 The output contains some non-ASCII characters.
 When decoded, it looks much more simple:
 +
 ----
-[hdfs://host2.sample.com:56020/hbase/.logs
+[hdfs://host2.sample.com:56020/hbase/WALs
 /host8.sample.com,57020,1340474893275-splitting
 /host8.sample.com%3A57020.1340474893900,
-hdfs://host2.sample.com:56020/hbase/.logs
+hdfs://host2.sample.com:56020/hbase/WALs
 /host3.sample.com,57020,1340474893299-splitting
 /host3.sample.com%3A57020.1340474893931,
-hdfs://host2.sample.com:56020/hbase/.logs
+hdfs://host2.sample.com:56020/hbase/WALs
 /host4.sample.com,57020,1340474893287-splitting
 /host4.sample.com%3A57020.1340474893946]
 ----
@@ -1122,7 +1215,7 @@ The listing represents WAL file names to be scanned and split, which is a list o
 +
 The split log manager is responsible for the following ongoing tasks:
 +
-* Once the split log manager publishes all the tasks to the splitlog znode, it monitors these task nodes and waits for them to be processed.
+* Once the split log manager publishes all the tasks to the splitWAL znode, it monitors these task nodes and waits for them to be processed.
 * Checks to see if there are any dead split log workers queued up.
   If it finds tasks claimed by unresponsive workers, it will resubmit those tasks.
   If the resubmit fails due to some ZooKeeper exception, the dead worker is queued up again for retry.
@@ -1140,7 +1233,7 @@ The split log manager is responsible for the following ongoing tasks:
   In the example output below, the first line of the output shows that the task is currently unassigned.
 +
 ----
-get /hbase/splitlog/hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2F.logs%2Fhost6.sample.com%2C57020%2C1340474893287-splitting%2Fhost6.sample.com%253A57020.1340474893945
+get /hbase/splitWAL/hdfs%3A%2F%2Fhost2.sample.com%3A56020%2Fhbase%2FWALs%2Fhost6.sample.com%2C57020%2C1340474893287-splitting%2Fhost6.sample.com%253A57020.1340474893945
 
 unassigned host2.sample.com:57000
 cZxid = 0×7115
@@ -1171,12 +1264,12 @@ Based on the state of the task whose data is changed, the split log manager does
 +
 Each RegionServer runs a daemon thread called the _split log worker_, which does the work to split the logs.
 The daemon thread starts when the RegionServer starts, and registers itself to watch HBase znodes.
-If any splitlog znode children change, it notifies a sleeping worker thread to wake up and grab more tasks.
+If any splitWAL znode children change, it notifies a sleeping worker thread to wake up and grab more tasks.
 If a worker's current task's node data is changed,
 the worker checks to see if the task has been taken by another worker.
 If so, the worker thread stops work on the current task.
 +
-The worker monitors the splitlog znode constantly.
+The worker monitors the splitWAL znode constantly.
 When a new task appears, the split log worker retrieves the task paths and checks each one until it finds an unclaimed task, which it attempts to claim.
 If the claim was successful, it attempts to perform the task and updates the task's `state` property based on the splitting outcome.
 At this point, the split log worker scans for another unclaimed task.
 A possible downside to WAL compression is that we lose more data from the last block in the WAL if it is ill-terminated
 mid-write. If entries in this last block were added with new dictionary entries but we failed to persist the amended
 dictionary because of an abrupt termination, a read of this last block may not be able to resolve last-written entries.
 
-[[wal.compression]]
-==== WAL Compression ====
+[[wal.durability]]
+==== Durability
+It is possible to set _durability_ on each Mutation or on a Table basis. Options include:
 
-The content of the WAL can be compressed using LRU Dictionary compression.
-This can be used to speed up WAL replication to different datanodes.
-The dictionary can store up to 2^15^ elements; eviction starts after this number is exceeded.
-
-To enable WAL compression, set the `hbase.regionserver.wal.enablecompression` property to `true`.
-The default value for this property is `false`.
-By default, WAL tag compression is turned on when WAL compression is enabled.
-You can turn off WAL tag compression by setting the `hbase.regionserver.wal.tags.enablecompression` property to 'false'.
+ * _SKIP_WAL_: Do not write Mutations to the WAL (See the next section, <<wal.disable>>).
+ * _ASYNC_WAL_: Write the WAL asynchronously; do not hold up clients waiting on the sync of their write to the filesystem but return immediately. The edit becomes visible. Meanwhile, in the background, the Mutation will be flushed to the WAL at some later time. This option currently may lose data. See HBASE-16689.
+ * _SYNC_WAL_: The *default*. Each edit is sync'd to HDFS before we return success to the client.
+ * _FSYNC_WAL_: Each edit is fsync'd to HDFS and the filesystem before we return success to the client.
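+
+For example, setting durability on a single Mutation via the client API (a sketch; `table` is assumed to be an open _Table_ instance):
+
+[source,java]
+----
+Put p = new Put(Bytes.toBytes("row1"));
+p.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+p.setDurability(Durability.ASYNC_WAL); // or SKIP_WAL, SYNC_WAL (default), FSYNC_WAL
+table.put(p);
+----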
 
-A possible downside to WAL compression is that we lose more data from the last block in the WAL if it ill-terminated
-mid-write. If entries in this last block were added with new dictionary entries but we failed persist the amended
-dictionary because of an abrupt termination, a read of this last block may not be able to resolve last-written entries. 
+Do not confuse the _ASYNC_WAL_ option on a Mutation or Table with the _AsyncFSWAL_ writer; they are distinct
+options that are unfortunately closely named.
 
 [[wal.disable]]
 ==== Disabling the WAL
@@ -1249,6 +1338,7 @@ There is no way to disable the WAL for only a specific table.
 
 WARNING: If you disable the WAL for anything other than bulk loads, your data is at risk.
 
+
 [[regions.arch]]
 == Regions
 
@@ -1605,20 +1695,20 @@ Also see <<hfilev2>> for information about the HFile v2 format that was included
 [[hfile_tool]]
 ===== HFile Tool
 
-To view a textualized version of HFile content, you can use the `org.apache.hadoop.hbase.io.hfile.HFile` tool.
+To view a textualized version of HFile content, you can use the `hbase hfile` tool.
 Type the following to see usage:
 
 [source,bash]
 ----
-$ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile
+$ ${HBASE_HOME}/bin/hbase hfile
 ----
-For example, to view the content of the file _hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475_, type the following:
+For example, to view the content of the file _hdfs://10.81.47.41:8020/hbase/default/TEST/1418428042/DSMP/4759508618286845475_, type the following:
 [source,bash]
 ----
- $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -v -f hdfs://10.81.47.41:8020/hbase/TEST/1418428042/DSMP/4759508618286845475
+ $ ${HBASE_HOME}/bin/hbase hfile -v -f hdfs://10.81.47.41:8020/hbase/default/TEST/1418428042/DSMP/4759508618286845475
 ----
 Leave off the option -v to see just a summary of the HFile.
-See usage for other things to do with the `HFile` tool.
+See usage for other things to do with the `hfile` tool.
 
 [[store.file.dir]]
 ===== StoreFile Directory Structure on HDFS
@@ -1773,9 +1863,20 @@ These parameters will be explained in context, and then will be given in a table
 ====== Being Stuck
 
 When the MemStore gets too large, it needs to flush its contents to a StoreFile.
-However, a Store can only have `hbase.hstore.blockingStoreFiles` files, so the MemStore needs to wait for the number of StoreFiles to be reduced by one or more compactions.
-However, if the MemStore grows larger than `hbase.hregion.memstore.flush.size`, it is not able to flush its contents to a StoreFile.
-If the MemStore is too large and the number of StoreFiles is also too high, the algorithm is said to be "stuck". The compaction algorithm checks for this "stuck" situation and provides mechanisms to alleviate it.
+However, Stores are configured with a bound on the number of StoreFiles,
+`hbase.hstore.blockingStoreFiles`, and if in excess, the MemStore flush must wait
+until the StoreFile count is reduced by one or more compactions. If the MemStore
+is too large and the number of StoreFiles is also too high, the algorithm is said
+to be "stuck". By default we'll wait on compactions up to
+`hbase.hstore.blockingWaitTime` milliseconds. If this period expires, we'll flush
+anyway even though we are in excess of the
+`hbase.hstore.blockingStoreFiles` count.
+
+Upping the `hbase.hstore.blockingStoreFiles` count will allow flushes to happen
+but a Store with many StoreFiles in it will likely have higher read latencies. Try to
+figure out why compactions are not keeping up. Is it a write spurt that is bringing
+about this situation, or is it a regular occurrence and the cluster is under-provisioned
+for the volume of writes?
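+
+A sketch of the two settings discussed (values are examples, not recommendations):
+
+[source,xml]
+----
+<property>
+  <name>hbase.hstore.blockingStoreFiles</name>
+  <value>16</value>
+</property>
+<property>
+  <!-- how long, in ms, a blocked flush waits on compactions before proceeding -->
+  <name>hbase.hstore.blockingWaitTime</name>
+  <value>90000</value>
+</property>
+----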
 
 [[exploringcompaction.policy]]
 ====== The ExploringCompactionPolicy Algorithm
@@ -2439,6 +2540,8 @@ See the above HDFS Architecture link for more information.
 [[arch.timelineconsistent.reads]]
 == Timeline-consistent High Available Reads
 
+NOTE: The current <<amv2, Assignment Manager V2>> does not work well with region replicas, so this feature may be broken. Use it with caution.
+
 [[casestudies.timelineconsistent.intro]]
 === Introduction
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/backup_restore.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/backup_restore.adoc b/src/main/asciidoc/_chapters/backup_restore.adoc
deleted file mode 100644
index c6dac85..0000000
--- a/src/main/asciidoc/_chapters/backup_restore.adoc
+++ /dev/null
@@ -1,912 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[[backuprestore]]
-= Backup and Restore
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-[[br.overview]]
-== Overview
-
-Backup and restore is a standard operation provided by many databases. An effective backup and restore
-strategy helps ensure that users can recover data in case of unexpected failures. The HBase backup and restore
-feature helps ensure that enterprises using HBase as a canonical data repository can recover from catastrophic
-failures. Another important feature is the ability to restore the database to a particular
-point-in-time, commonly referred to as a snapshot.
-
-The HBase backup and restore feature provides the ability to create full backups and incremental backups on
-tables in an HBase cluster. The full backup is the foundation on which incremental backups are applied
-to build iterative snapshots. Incremental backups can be run on a schedule to capture changes over time,
-for example by using a Cron task. Incremental backups are more cost-effective than full backups because they only capture
-the changes since the last backup and they also enable administrators to restore the database to any prior incremental backup. Furthermore, the
-utilities also enable table-level data backup-and-recovery if you do not want to restore the entire dataset
-of the backup.
-
-The backup and restore feature supplements the HBase Replication feature. While HBase replication is ideal for
-creating "hot" copies of the data (where the replicated data is immediately available for query), the backup and
-restore feature is ideal for creating "cold" copies of data (where a manual step must be taken to restore the system).
-Previously, users only had the ability to create full backups via the ExportSnapshot functionality. The incremental
-backup implementation is the novel improvement over the previous "art" provided by ExportSnapshot.
-
-[[br.terminology]]
-== Terminology
-
-The backup and restore feature introduces new terminology which can be used to understand how control flows through the
-system.
-
-* _A backup_: A logical unit of data and metadata which can restore a table to its state at a specific point in time.
-* _Full backup_: a type of backup which wholly encapsulates the contents of the table at a point in time.
-* _Incremental backup_: a type of backup which contains the changes in a table since a full backup.
-* _Backup set_: A user-defined name which references one or more tables over which a backup can be executed.
-* _Backup ID_: A unique name which identifies one backup from the rest, e.g. `backupId_1467823988425`
-
-[[br.planning]]
-== Planning
-
-There are some common strategies which can be used to implement backup and restore in your environment. The following section
-shows how these strategies are implemented and identifies potential tradeoffs with each.
-
-WARNING: These backup and restore tools have not been tested on Transparent Data Encryption (TDE) enabled HDFS clusters.
-This is related to the open issue link:https://issues.apache.org/jira/browse/HBASE-16178[HBASE-16178].
-
-[[br.intracluster.backup]]
-=== Backup within a cluster
-
-This strategy stores the backups on the same cluster as where the backup was taken. This approach is only appropriate for testing
-as it does not provide any additional safety on top of what the software itself already provides.
-
-.Intra-Cluster Backup
-image::backup-intra-cluster.png[]
-
-[[br.dedicated.cluster.backup]]
-=== Backup using a dedicated cluster
-
-This strategy provides greater fault tolerance and provides a path towards disaster recovery. In this setting, you will
-store the backup on a separate HDFS cluster by supplying the backup destination cluster’s HDFS URL to the backup utility.
-You should consider backing up to a different physical location, such as a different data center.
-
-Typically, a backup-dedicated HDFS cluster uses a more economical hardware profile to save money.
-
-.Dedicated HDFS Cluster Backup
-image::backup-dedicated-cluster.png[]
-
-[[br.cloud.or.vendor.backup]]
-=== Backup to the Cloud or a storage vendor appliance
-
-Another approach to safeguarding HBase incremental backups is to store the data on provisioned, secure servers that belong
-to third-party vendors and that are located off-site. The vendor can be a public cloud provider or a storage vendor who uses
-a Hadoop-compatible file system, such as S3 or another HDFS-compatible destination.
-
-.Backup to Cloud or Vendor Storage Solutions
-image::backup-cloud-appliance.png[]
-
-NOTE: The HBase backup utility does not support backup to multiple destinations. A workaround is to manually create copies
-of the backup files from HDFS or S3.
-
-[[br.initial.setup]]
-== First-time configuration steps
-
-This section contains the necessary configuration changes that must be made in order to use the backup and restore feature.
-As this feature makes significant use of YARN's MapReduce framework to parallelize these I/O heavy operations, configuration
-changes extend outside of just `hbase-site.xml`.
-
-=== Allow the "hbase" system user in YARN
-
-The YARN *container-executor.cfg* configuration file must have the following property setting: _allowed.system.users=hbase_. No spaces
-are allowed in entries of this configuration file.
-
-WARNING: Skipping this step will result in runtime errors when executing the first backup tasks.
-
-*Example of a valid container-executor.cfg file for backup and restore:*
-
-[source]
-----
-yarn.nodemanager.log-dirs=/var/log/hadoop/mapred
-yarn.nodemanager.linux-container-executor.group=yarn
-banned.users=hdfs,yarn,mapred,bin
-allowed.system.users=hbase
-min.user.id=500
-----
-
-=== HBase specific changes
-
-Add the following properties to hbase-site.xml and restart HBase if it is already running.
-
-NOTE: The ",..." is an ellipsis meant to imply that this is a comma-separated list of values, not literal text which should be added to hbase-site.xml.
-
-[source]
-----
-<property>
-  <name>hbase.backup.enable</name>
-  <value>true</value>
-</property>
-<property>
-  <name>hbase.master.logcleaner.plugins</name>
-  <value>org.apache.hadoop.hbase.backup.master.BackupLogCleaner,...</value>
-</property>
-<property>
-  <name>hbase.procedure.master.classes</name>
-  <value>org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager,...</value>
-</property>
-<property>
-  <name>hbase.procedure.regionserver.classes</name>
-  <value>org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager,...</value>
-</property>
-<property>
-  <name>hbase.coprocessor.region.classes</name>
-  <value>org.apache.hadoop.hbase.backup.BackupObserver,...</value>
-</property>
-<property>
-  <name>hbase.master.hfilecleaner.plugins</name>
-  <value>org.apache.hadoop.hbase.backup.BackupHFileCleaner,...</value>
-</property>
-----
-
-== Backup and Restore commands
-
-This section covers the command-line utilities that administrators would run to create, restore, and merge backups. Tools to
-inspect details on specific backup sessions are covered in the next section, <<br.administration,Administration of Backup Images>>.
-
-Run the command `hbase backup help <command>` to access the online help that provides basic information about a command
-and its options. The information below is also captured in this help message for each command.
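-
-For example, to display the help for the `create` command described in the next section:
-
-[source]
-----
-$ hbase backup help create
-----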
-
-// hbase backup create
-
-[[br.creating.complete.backup]]
-### Creating a Backup Image
-
-[NOTE]
-====
-For HBase clusters also using Apache Phoenix: include the SQL system catalog tables in the backup. In the event that you
-need to restore the HBase backup, access to the system catalog tables enables you to resume Phoenix interoperability with the
-restored data.
-====
-
-The first step in running the backup and restore utilities is to perform a full backup and to store the data in a separate image
-from the source. At a minimum, you must do this to get a baseline before you can rely on incremental backups.
-
-Run the following command as HBase superuser:
-
-[source]
-----
-hbase backup create <type> <backup_path>
-----
-
-After the command finishes running, the console prints a SUCCESS or FAILURE status message. The SUCCESS message includes a _backup_ ID.
-The backup ID is the Unix time (also known as Epoch time) that the HBase master received the backup request from the client.
-
-[TIP]
-====
-Record the backup ID that appears at the end of a successful backup. In case the source cluster fails and you need to recover the
-dataset with a restore operation, having the backup ID readily available can save time.
-====
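-
-As a minimal sketch of that practice, the following hypothetical wrapper appends each backup's console
-output (including the backup ID) to a dated log file. The table name and log path are illustrative
-placeholders, not prescribed values:
-
-[source]
-----
-# Keep a durable record of every backup run, ideally on storage outside
-# the cluster being backed up. The table name and log path are placeholders.
-hbase backup create full hdfs://backuphost:8020/data/backup -t mytable \
-  | tee -a /var/log/hbase-backups/$(date +%F).log
-----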
-
-[[br.create.positional.cli.arguments]]
-#### Positional Command-Line Arguments
-
-_type_::
-  The type of backup to execute: _full_ or _incremental_. As a reminder, an _incremental_ backup requires a _full_ backup to
-  already exist.
-
-_backup_path_::
-  The _backup_path_ argument specifies the full filesystem URI of where to store the backup image. Valid prefixes
-  are _hdfs:_, _webhdfs:_, _gpfs:_, and _s3fs:_.
-
-[[br.create.named.cli.arguments]]
-#### Named Command-Line Arguments
-
-_-t <table_name[,table_name]>_::
-  A comma-separated list of tables to back up. If no tables are specified, all tables are backed up. No regular-expression or
-  wildcard support is present; all table names must be explicitly listed. See <<br.using.backup.sets,Backup Sets>> for more
-  information about performing operations on collections of tables. Mutually exclusive with the _-s_ option; one of these
-  named options is required.
-
-_-s <backup_set_name>_::
-  Identify tables to back up based on a backup set. See <<br.using.backup.sets,Using Backup Sets>> for the purpose and usage
-  of backup sets. Mutually exclusive with the _-t_ option.
-
-_-w <number_workers>_::
-  (Optional) Specifies the number of parallel workers to copy data to the backup destination. Backups are currently executed by MapReduce jobs,
-  so this value corresponds to the number of Mappers that the job will spawn.
-
-_-b <bandwidth_per_worker>_::
-  (Optional) Specifies the bandwidth of each worker in MB per second.
-
-_-d_::
-  (Optional) Enables "DEBUG" mode which prints additional logging about the backup creation.
-
-_-q <name>_::
-  (Optional) Specifies the name of the YARN queue in which the MapReduce job that creates the backup should execute. This option
-  is useful to prevent backup tasks from stealing resources away from other MapReduce jobs of high importance.
-
-[[br.usage.examples]]
-#### Example usage
-
-[source]
-----
-$ hbase backup create full hdfs://host5:8020/data/backup -t SALES2,SALES3 -w 3
-----
-
-This command creates a full backup image of two tables, SALES2 and SALES3, in the HDFS instance whose NameNode is host5:8020,
-in the path _/data/backup_. The _-w_ option specifies that no more than three parallel workers complete the operation.
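-
-For comparison, a subsequent incremental backup of the same tables to the same destination (once the
-full backup above has completed) would look like this:
-
-[source]
-----
-$ hbase backup create incremental hdfs://host5:8020/data/backup -t SALES2,SALES3
-----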
-
-// hbase backup restore
-
-[[br.restoring.backup]]
-### Restoring a Backup Image
-
-Run the following command as an HBase superuser. You can only restore a backup on a running HBase cluster because the data must be
-redistributed to the RegionServers for the operation to complete successfully.
-
-[source]
-----
-hbase restore <backup_path> <backup_id>
-----
-
-[[br.restore.positional.args]]
-#### Positional Command-Line Arguments
-
-_backup_path_::
-  The _backup_path_ argument specifies the full filesystem URI of where the backup image is stored. Valid prefixes
-  are _hdfs:_, _webhdfs:_, _gpfs:_, and _s3fs:_.
-
-_backup_id_::
-  The backup ID that uniquely identifies the backup image to be restored.
-
-
-[[br.restore.named.args]]
-#### Named Command-Line Arguments
-
-_-t <table_name[,table_name]>_::
-  A comma-separated list of tables to restore. See <<br.using.backup.sets,Backup Sets>> for more
-  information about performing operations on collections of tables. Mutually exclusive with the _-s_ option; one of these
-  named options is required.
-
-_-s <backup_set_name>_::
-  Identify tables to restore based on a backup set. See <<br.using.backup.sets,Using Backup Sets>> for the purpose and usage
-  of backup sets. Mutually exclusive with the _-t_ option.
-
-_-q <name>_::
-  (Optional) Specifies the name of the YARN queue in which the MapReduce job that performs the restore should execute. This option
-  is useful to prevent restore tasks from stealing resources away from other MapReduce jobs of high importance.
-
-_-c_::
-  (Optional) Perform a dry-run of the restore. The actions are checked, but not executed.
-
-_-m <target_tables>_::
-  (Optional) A comma-separated list of tables to restore into. If this option is not provided, the original table name is used. When
-  this option is provided, there must be an equal number of entries provided in the `-t` option.
-
-_-o_::
-  (Optional) Overwrites the target table for the restore if the table already exists.
-
-
-[[br.restore.usage]]
-#### Example of Usage
-
-[source]
-----
-hbase restore /tmp/backup_incremental backupId_1467823988425 -t mytable1,mytable2
-----
-
-This command restores two tables of an incremental backup image. In this example:
-
-* `/tmp/backup_incremental` is the path to the directory containing the backup image.
-* `backupId_1467823988425` is the backup ID.
-* `mytable1` and `mytable2` are the names of the tables in the backup image to be restored.
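-
-As a sketch of the _-m_ option described above, the following hypothetical command restores the same two
-tables under new names; the target table names are illustrative:
-
-[source]
-----
-hbase restore /tmp/backup_incremental backupId_1467823988425 \
-  -t mytable1,mytable2 -m mytable1_restored,mytable2_restored
-----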
-
-// hbase backup merge
-
-[[br.merge.backup]]
-### Merging Incremental Backup Images
-
-This command merges two or more incremental backup images into a single incremental backup image,
-consolidating multiple small incremental backups into one larger image. For example, it could be used
-to merge hourly incremental backups into a daily incremental backup image, or daily incremental backups
-into a weekly incremental backup.
-
-[source]
-----
-$ hbase backup merge <backup_ids>
-----
-
-[[br.merge.backup.positional.cli.arguments]]
-#### Positional Command-Line Arguments
-
-_backup_ids_::
-  A comma-separated list of incremental backup image IDs that are to be combined into a single image.
-
-[[br.merge.backup.named.cli.arguments]]
-#### Named Command-Line Arguments
-
-None.
-
-[[br.merge.backup.example]]
-#### Example usage
-
-[source]
-----
-$ hbase backup merge backupId_1467823988425,backupId_1467827588425
-----
-
-// hbase backup set
-
-[[br.using.backup.sets]]
-### Using Backup Sets
-
-Backup sets can ease the administration of HBase data backups and restores by reducing the amount of repetitive input
-of table names. You can group tables into a named backup set with the `hbase backup set add` command. You can then use
-the _-s_ option to invoke the name of a backup set in the `hbase backup create` or `hbase backup restore` command rather than listing
-every table in the group individually. You can have multiple backup sets.
-
-NOTE: Note the differentiation between the `hbase backup set add` command and the _-s_ option. The `hbase backup set add`
-command must be run before using the `-s` option in a different command because backup sets must be named and defined
-before they can be used as a shortcut.
-
-If you run the `hbase backup set add` command and specify a backup set name that does not yet exist on your system, a new set
-is created. If you run the command with the name of an existing backup set name, then the tables that you specify are added
-to the set.
-
-In this command, the backup set name is case-sensitive.
-
-NOTE: The metadata of backup sets are stored within HBase. If you do not have access to the original HBase cluster with the
-backup set metadata, then you must specify individual table names to restore the data.
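-
-For example, if the cluster holding the backup set definitions is lost, a restore must list the tables
-explicitly instead of using the set name (the path and table names here are illustrative):
-
-[source]
-----
-$ hbase restore hdfs://backuphost:8020/data/backup backupId_1467823988425 -t table1,table2
-----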
-
-To create a backup set, run the following command as the HBase superuser:
-
-[source]
-----
-$ hbase backup set <subcommand> <backup_set_name> <tables>
-----
-
-[[br.set.subcommands]]
-#### Backup Set Subcommands
-
-The following list details the subcommands of the `hbase backup set` command.
-
-NOTE: You must enter one (and no more than one) of the following subcommands after `hbase backup set` to complete an operation.
-Also, the backup set name is case-sensitive in the command-line utility.
-
-_add_::
-  Adds tables to a backup set. Specify a _backup_set_name_ value after this argument to create a backup set.
-
-_remove_::
-  Removes tables from the set. Specify the tables to remove in the tables argument.
-
-_list_::
-  Lists all backup sets.
-
-_describe_::
-  Displays a description of a backup set. The information includes whether the set has full
-  or incremental backups, start and end times of the backups, and a list of the tables in the set. A valid
-  _backup_set_name_ value must follow this subcommand.
-
-_delete_::
-  Deletes a backup set. Enter the value for the _backup_set_name_ option directly after the `hbase backup set delete` command.
-
-[[br.set.positional.cli.arguments]]
-#### Positional Command-Line Arguments
-
-_backup_set_name_::
-  Use to assign or invoke a backup set name. The backup set name must contain only printable characters and cannot have any spaces.
-
-_tables_::
-  List of tables (or a single table) to include in the backup set. Enter the table names as a comma-separated list. If no tables
-  are specified, all tables are included in the set.
-
-TIP: Maintain a log or other record of the case-sensitive backup set names and the corresponding tables in each set on a separate
-or remote cluster as part of your backup strategy. This information can help you in case of failure on the primary cluster.
-
-[[br.set.usage]]
-#### Example of Usage
-
-[source]
-----
-$ hbase backup set add Q1Data TEAM_3,TEAM_4
-----
-
-Depending on the environment, this command results in _one_ of the following actions:
-
-* If the `Q1Data` backup set does not exist, a backup set containing tables `TEAM_3` and `TEAM_4` is created.
-* If the `Q1Data` backup set exists already, the tables `TEAM_3` and `TEAM_4` are added to the `Q1Data` backup set.
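-
-The other subcommands follow the same pattern. A few illustrative invocations against the set created above:
-
-[source]
-----
-$ hbase backup set list                   # list all defined backup sets
-$ hbase backup set describe Q1Data        # show details of the Q1Data set
-$ hbase backup set remove Q1Data TEAM_4   # drop TEAM_4 from the set
-$ hbase backup set delete Q1Data          # delete the set entirely
-----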
-
-[[br.administration]]
-## Administration of Backup Images
-
-The `hbase backup` command has several subcommands that help with administering backup images as they accumulate. Most production
-environments require recurring backups, so it is necessary to have utilities to help manage the data of the backup repository.
-Some subcommands enable you to find information that can help identify backups that are relevant in a search for particular data.
-You can also delete backup images.
-
-The following list details each `hbase backup` subcommand that can help administer backups. Run the full command-subcommand line as
-the HBase superuser.
-
-// hbase backup progress
-
-[[br.managing.backup.progress]]
-### Managing Backup Progress
-
-You can monitor a running backup in another terminal session by running the `hbase backup progress` command and specifying the backup ID as an argument.
-
-For example, run the following command as the HBase superuser to view the progress of a backup:
-
-[source]
-----
-$ hbase backup progress <backup_id>
-----
-
-[[br.progress.positional.cli.arguments]]
-#### Positional Command-Line Arguments
-
-_backup_id_::
-  Specifies the backup whose progress you want to monitor. The backup ID is case-sensitive.
-
-[[br.progress.named.cli.arguments]]
-#### Named Command-Line Arguments
-
-None.
-
-[[br.progress.example]]
-#### Example usage
-
-[source]
-----
-hbase backup progress backupId_1467823988425
-----
-
-// hbase backup history
-
-[[br.managing.backup.history]]
-### Managing Backup History
-
-This command displays a log of backup sessions. The information for each session includes backup ID, type (full or incremental), the tables
-in the backup, status, and start and end time. Specify the number of backup sessions to display with the optional -n argument.
-
-[source]
-----
-$ hbase backup history <backup_id>
-----
-
-[[br.history.positional.cli.arguments]]
-#### Positional Command-Line Arguments
-
-_backup_id_::
-  (Optional) The ID of a specific backup session to display. If omitted, up to _-n_ recent backup records are shown. The backup ID is case-sensitive.
-
-[[br.history.named.cli.arguments]]
-#### Named Command-Line Arguments
-
-_-n <num_records>_::
-  (Optional) The maximum number of backup records (Default: 10).
-
-_-p <backup_root_path>_::
-  The full filesystem URI of where backup images are stored.
-
-_-s <backup_set_name>_::
-  The name of the backup set to obtain history for. Mutually exclusive with the _-t_ option.
-
-_-t <table_name>_::
-  The name of the table to obtain history for. Mutually exclusive with the _-s_ option.
-
-[[br.history.backup.example]]
-#### Example usage
-
-[source]
-----
-$ hbase backup history
-$ hbase backup history -n 20
-$ hbase backup history -t WebIndexRecords
-----
-
-// hbase backup describe
-
-[[br.describe.backup]]
-### Describing a Backup Image
-
-This command can be used to obtain information about a specific backup image.
-
-[source]
-----
-$ hbase backup describe <backup_id>
-----
-
-[[br.describe.backup.positional.cli.arguments]]
-#### Positional Command-Line Arguments
-
-_backup_id_::
-  The ID of the backup image to describe.
-
-[[br.describe.backup.named.cli.arguments]]
-#### Named Command-Line Arguments
-
-None.
-
-[[br.describe.backup.example]]
-#### Example usage
-
-[source]
-----
-$ hbase backup describe backupId_1467823988425
-----
-
-// hbase backup delete
-
-[[br.delete.backup]]
-### Deleting a Backup Image
-
-This command can be used to delete a backup image which is no longer needed.
-
-[source]
-----
-$ hbase backup delete <backup_id>
-----
-
-[[br.delete.backup.positional.cli.arguments]]
-#### Positional Command-Line Arguments
-
-_backup_id_::
-  The ID of the backup image to delete.
-
-[[br.delete.backup.named.cli.arguments]]
-#### Named Command-Line Arguments
-
-None.
-
-[[br.delete.backup.example]]
-#### Example usage
-
-[source]
-----
-$ hbase backup delete backupId_1467823988425
-----
-
-// hbase backup repair
-
-[[br.repair.backup]]
-### Backup Repair Command
-
-This command attempts to correct any inconsistencies in persisted backup metadata which exist as
-the result of software errors or unhandled failure scenarios. While the backup implementation tries
-to correct all errors on its own, this tool may be necessary in cases where the system cannot
-recover automatically.
-
-[source]
-----
-$ hbase backup repair
-----
-
-[[br.repair.backup.positional.cli.arguments]]
-#### Positional Command-Line Arguments
-
-None.
-
-[[br.repair.backup.named.cli.arguments]]
-#### Named Command-Line Arguments
-
-None.
-
-[[br.repair.backup.example]]
-#### Example usage
-
-[source]
-----
-$ hbase backup repair
-----
-
-[[br.backup.configuration]]
-## Configuration keys
-
-The backup and restore feature includes both required and optional configuration keys.
-
-### Required properties
-
-_hbase.backup.enable_: Controls whether or not the feature is enabled (Default: `false`). Set this value to `true`.
-
-_hbase.master.logcleaner.plugins_: A comma-separated list of classes invoked when cleaning logs in the HBase Master. Set
-this value to `org.apache.hadoop.hbase.backup.master.BackupLogCleaner` or append it to the current value.
-
-_hbase.procedure.master.classes_: A comma-separated list of classes invoked with the Procedure framework in the Master. Set
-this value to `org.apache.hadoop.hbase.backup.master.LogRollMasterProcedureManager` or append it to the current value.
-
-_hbase.procedure.regionserver.classes_: A comma-separated list of classes invoked with the Procedure framework in the RegionServer.
-Set this value to `org.apache.hadoop.hbase.backup.regionserver.LogRollRegionServerProcedureManager` or append it to the current value.
-
-_hbase.coprocessor.region.classes_: A comma-separated list of RegionObservers deployed on tables. Set this value to
-`org.apache.hadoop.hbase.backup.BackupObserver` or append it to the current value.
-
-_hbase.master.hfilecleaner.plugins_: A comma-separated list of HFileCleaners deployed on the Master. Set this value
-to `org.apache.hadoop.hbase.backup.BackupHFileCleaner` or append it to the current value.
-
-### Optional properties
-
-_hbase.backup.system.ttl_: The time-to-live in seconds of data in the `hbase:backup` system table (default: forever). This property
-is only relevant prior to the creation of the `hbase:backup` table. Use the `alter` command in the HBase shell to modify the TTL
-once the table already exists. See the <<br.filesystem.growth.warning,below section>> for more details on the impact of this
-configuration property.
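-
-A minimal sketch of adjusting the TTL once the table exists, via the HBase shell. The column family name
-below is a placeholder; check the output of `describe 'hbase:backup'` for the actual family name first:
-
-[source]
-----
-hbase> describe 'hbase:backup'
-hbase> # 'meta' is a placeholder family name taken from the describe output;
-hbase> # 2592000 seconds is a 30-day TTL.
-hbase> alter 'hbase:backup', {NAME => 'meta', TTL => 2592000}
-----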
-
-_hbase.backup.attempts.max_: The number of attempts to perform when taking HBase table snapshots (default: 10).
-
-_hbase.backup.attempts.pause.ms_: The amount of time to wait between failed snapshot attempts in milliseconds (default: 10000).
-
-_hbase.backup.logroll.timeout.millis_: The amount of time (in milliseconds) to wait for RegionServers to execute a WAL rolling
-in the Master's procedure framework (default: 30000).
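-
-These overrides belong in hbase-site.xml alongside the required properties. As an illustrative (not
-prescriptive) sketch, the following raises the snapshot retry count and shortens the pause between attempts:
-
-[source]
-----
-<property>
-  <name>hbase.backup.attempts.max</name>
-  <value>15</value>
-</property>
-<property>
-  <name>hbase.backup.attempts.pause.ms</name>
-  <value>5000</value>
-</property>
-----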
-
-[[br.best.practices]]
-## Best Practices
-
-### Formulate a restore strategy and test it.
-
-Before you rely on a backup and restore strategy for your production environment, identify how backups must be performed,
-and more importantly, how restores must be performed. Test the plan to ensure that it is workable.
-At a minimum, store backup data from a production cluster on a different cluster or server. To further safeguard the data,
-use a backup location that is at a different physical location.
-
-If you have an unrecoverable loss of data on your primary production cluster as a result of computer system issues, you may
-be able to restore the data from a different cluster or server at the same site. However, a disaster that destroys the whole
-site renders locally stored backups useless. Consider storing the backup data and necessary resources (both computing capacity
-and operator expertise) to restore the data at a site sufficiently remote from the production site. In the case of a catastrophe
-at the whole primary site (fire, earthquake, etc.), the remote backup site can be very valuable.
-
-### Secure a full backup image first.
-
-As a baseline, you must complete a full backup of HBase data at least once before you can rely on incremental backups. The full
-backup should be stored outside of the source cluster. To ensure complete dataset recovery, you must run the restore utility
-with the option to restore the baseline full backup. The full backup is the foundation of your dataset. Incremental backup data
-is applied on top of the full backup during the restore operation to return you to the point in time when the last backup was taken.
-
-### Define and use backup sets for groups of tables that are logical subsets of the entire dataset.
-
-You can group tables into an object called a backup set. A backup set can save time when you have a particular group of tables
-that you expect to repeatedly back up or restore.
-
-When you create a backup set, you type table names to include in the group. The backup set includes not only groups of related
-tables, but also retains the HBase backup metadata. Afterwards, you can invoke the backup set name to indicate what tables apply
-to the command execution instead of entering all the table names individually.
-
-### Document the backup and restore strategy, and ideally log information about each backup.
-
-Document the whole process so that the knowledge base can transfer to new administrators after employee turnover. As an extra
-safety precaution, also log the calendar date, time, and other relevant details about the data of each backup. This metadata
-can potentially help locate a particular dataset in case of source cluster failure or primary site disaster. Maintain duplicate
-copies of all documentation: one copy at the production cluster site and another at the backup location or wherever it can be
-accessed by an administrator remotely from the production cluster.
-
-[[br.s3.backup.scenario]]
-## Scenario: Safeguarding Application Datasets on Amazon S3
-
-This scenario describes how a hypothetical retail business uses backups to safeguard application data and then restore the dataset
-after failure.
-
-The HBase administration team uses backup sets to store data from a group of tables that contain interrelated information for an
-application called _green_. In this example, one table contains transaction records and the other contains customer details. The
-two tables need to be backed up and be recoverable as a group.
-
-The admin team also wants to ensure daily backups occur automatically.
-
-.Tables Composing The Backup Set
-image::backup-app-components.png[]
-
-The following is an outline of the steps and examples of commands that are used to backup the data for the _green_ application and
-to recover the data later. All commands are run when logged in as HBase superuser.
-
-1. A backup set called _green_set_ is created as an alias for both the transactions table and the customer table. The backup set can
-be used for all operations to avoid typing each table name. The backup set name is case-sensitive and should be formed with only
-printable characters and without spaces.
-
-[source]
-----
-$ hbase backup set add green_set transactions
-$ hbase backup set add green_set customer
-----
-
-2. The first backup of green_set data must be a full backup. The following command example shows how credentials are passed to Amazon
-S3 and specifies the file system with the `s3a:` prefix.
-
-[source]
-----
-$ ACCESS_KEY=ABCDEFGHIJKLMNOPQRST
-$ SECRET_KEY=123456789abcdefghijklmnopqrstuvwxyzABCD
-$ sudo -u hbase hbase backup create full \
-  s3a://$ACCESS_KEY:$SECRET_KEY@prodhbasebackups/backups -s green_set
-----
-
-3. Incremental backups should be run according to a schedule that ensures essential data recovery in the event of a catastrophe. At
-this retail company, the HBase admin team decides that automated daily backups secure the data sufficiently. The team decides to
-implement this by modifying an existing Cron job that is defined in `/etc/crontab`. Consequently, IT modifies the Cron job
-by adding the following line:
-
-[source]
-----
-@daily hbase hbase backup create incremental s3a://$ACCESS_KEY:$SECRET_KEY@prodhbasebackups/backups -s green_set
-----
-
-4. A catastrophic IT incident disables the production cluster that the green application uses. An HBase system administrator of the
-backup cluster must restore the _green_set_ dataset to the point in time closest to the recovery objective.
-
-NOTE: If the administrator of the backup HBase cluster has the backup ID with relevant details in accessible records, the following
-search with the `hdfs dfs -ls` command and manually scanning the backup ID list can be bypassed. Consider continuously maintaining
-and protecting a detailed log of backup IDs outside the production cluster in your environment.
-
-The HBase administrator runs the following command on the directory where backups are stored to print the list of successful backup
-IDs on the console:
-
-`hdfs dfs -ls -t /prodhbasebackups/backups`
-
-5. The admin scans the list to see which backup was created at a date and time closest to the recovery objective. To do this, the
-admin converts the calendar timestamp of the recovery point in time to Unix time because backup IDs are uniquely identified with
-Unix time. The backup IDs are listed in reverse chronological order, meaning the most recent successful backup appears first.
-
-The admin notices that the following line in the command output corresponds with the _green_set_ backup that needs to be restored:
-
-`/prodhbasebackups/backups/backup_1467823988425`
-
-6. The admin restores green_set invoking the backup ID and the _-o_ (overwrite) option. The _-o_ option truncates all existing data
-in the destination and populates the tables with data from the backup dataset. Without this flag, the backup data is appended to the
-existing data in the destination. In this case, the admin decides to overwrite the data because it is corrupted.
-
-[source]
-----
-$ sudo -u hbase hbase restore -s green_set \
-  s3a://$ACCESS_KEY:$SECRET_KEY@prodhbasebackups/backups backup_1467823988425 -o
-----
-
-[[br.data.security]]
-## Security of Backup Data
-
-Because this feature copies data to remote locations, it is worth taking a moment to clearly state the procedural
-concerns that exist around data security. Like the HBase replication feature, backup and restore provides the constructs to automatically
-copy data from within a corporate boundary to some system outside of that boundary. When storing sensitive data, it is imperative that
-with backup and restore, as with any feature which extracts data from HBase, the locations to which data is sent
-have undergone a security audit to ensure that only authenticated users are allowed to access that data.
-
-For example, with the above example of backing up data to S3, it is of the utmost importance that the proper permissions are assigned
-to the S3 bucket to ensure that only a minimum set of authorized users are allowed to access this data. Because the data is no longer
-being accessed via HBase, and its authentication and authorization controls, we must ensure that the filesystem storing that data is
-providing a comparable level of security. This is a manual step which users *must* implement on their own.
-
-[[br.technical.details]]
-## Technical Details of Incremental Backup and Restore
-
-HBase incremental backups enable more efficient capture of HBase table images than previous attempts at serial backup and restore
-solutions, such as those that only used HBase Export and Import APIs. Incremental backups use Write Ahead Logs (WALs) to capture
-the data changes since the previous backup was created. A WAL roll (create new WALs) is executed across all RegionServers to track
-the WALs that need to be in the backup.
-
-After the incremental backup image is created, the source backup files usually are on the same node as the data source. A process similar
-to the DistCp (distributed copy) tool is used to move the source backup files to the target file systems. When a table restore operation
-starts, a two-step process is initiated. First, the full backup is restored from the full backup image. Second, all WAL files from
-incremental backups between the last full backup and the incremental backup being restored are converted to HFiles, which the HBase
-Bulk Load utility automatically imports as restored data in the table.
-
-You can only restore on a live HBase cluster because the data must be redistributed to complete the restore operation successfully.
-
-[[br.filesystem.growth.warning]]
-## A Warning on File System Growth
-
-As a reminder, incremental backups are implemented via retaining the write-ahead logs which HBase primarily uses for data durability.
-Thus, to ensure that all data needing to be included in a backup is still available in the system, the HBase backup and restore feature
-retains all write-ahead logs since the last backup until the next incremental backup is executed.
-
-Like HBase Snapshots, this can have a significant impact on the HDFS usage of HBase for high-volume tables. Take care in enabling
-and using the backup and restore feature, specifically with a mind to removing backup sessions when they are not actively being used.
-
-The only automated upper bound on retained write-ahead logs for backup and restore is based on the TTL of the `hbase:backup` system table which,
-as of the time this document is written, is infinite (backup table entries are never automatically deleted). This requires that administrators
-perform backups on a schedule whose frequency is relative to the amount of available space on HDFS (e.g. less available HDFS space requires
-more aggressive backup merges and deletions). As a reminder, the TTL can be altered on the `hbase:backup` table using the `alter` command
-in the HBase shell. Modifying the configuration property `hbase.backup.system.ttl` in hbase-site.xml after the system table exists has no effect.
-
-[[br.backup.capacity.planning]]
-## Capacity Planning
-
-When designing a distributed system deployment, it is critical that some basic mathematical rigor is applied to ensure sufficient computational
-capacity is available given the data and software requirements of the system. For this feature, the availability of network capacity is the largest
-bottleneck when estimating the performance of some implementation of backup and restore. The second most costly function is the speed at which
-data can be read/written.
-
-### Full Backups
-
-To estimate the duration of a full backup, we have to understand the general actions which are invoked:
-
-* Write-ahead log roll on each RegionServer: ones to tens of seconds per RegionServer in parallel. Relative to the load on each RegionServer.
-* Take an HBase snapshot of the table(s): tens of seconds. Relative to the number of regions and files that comprise the table.
-* Export the snapshot to the destination: see below. Relative to the size of the data and the network bandwidth to the destination.
-
-[[br.export.snapshot.cost]]
-To approximate how long the final step will take, we have to make some assumptions on hardware. Be aware that these will *not* be accurate for your
-system -- these are numbers that you or your administrator know for your system. Let's say the speed of reading data from HDFS on a single node is
-capped at 80MB/s (across all Mappers that run on that host), a modern network interface controller (NIC) supports 10Gb/s, the top-of-rack switch can
-handle 40Gb/s, and the WAN between your clusters is 10Gb/s. This means that you can only ship data to your remote cluster at a speed of 1.25GB/s -- meaning
-that 16 nodes (`1.25 * 1024 / 80 = 16`) participating in the ExportSnapshot should be able to fully saturate the link between clusters. With more
-nodes in the cluster, we can still saturate the network but at a lesser impact on any one node, which helps ensure local SLAs are met. If the size
-of the snapshot is 10TB, this full backup would take in the ballpark of 2.3 hours (`10 * 1024 / 1.25 / (60 * 60) = 2.28hrs`).
-
-As a general statement, it is very likely that the WAN bandwidth between your local cluster and the remote storage is the largest
-bottleneck to the speed of a full backup.
-
-When the concern is restricting the computational impact of backups to a "production system", the above formulas can be reused with the optional
-command-line arguments to `hbase backup create`: `-b`, `-w`, `-q`. The `-b` option defines the bandwidth at which each worker (Mapper) would
-write data. The `-w` argument limits the number of workers that would be spawned in the DistCp job. The `-q` option allows the user to specify a YARN
-queue which can limit the specific nodes where the workers will be spawned -- this can quarantine the backup workers performing the copy to
-a set of non-critical nodes. Relating the `-b` and `-w` options to our earlier equations: `-b` would be used to restrict each node from reading
-data at the full 80MB/s and `-w` is used to limit the job from spawning 16 worker tasks.
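-
-Tying those options together, a sketch of a throttled full backup might look like the following. The queue
-name and limits are illustrative values, not recommendations:
-
-[source]
-----
-$ hbase backup create full hdfs://backuphost:8020/data/backup -s green_set \
-    -w 8 -b 40 -q backup-queue
-----
-
-Here each of the 8 workers reads at no more than 40MB/s, bounding the aggregate read load at 320MB/s, and
-the workers are scheduled in the hypothetical `backup-queue` YARN queue.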
-
-### Incremental Backup
-
-Like we did for full backups, we have to understand the incremental backup process to approximate its runtime and cost.
-
-* Identify new write-ahead logs since the last full or incremental backup: negligible. A priori knowledge from the backup system table(s).
-* Read, filter, and write "minimized" HFiles equivalent to the WALs: dominated by the speed of writing data. Relative to write speed of HDFS.
-* DistCp the HFiles to the destination: <<br.export.snapshot.cost,see above>>.
-
-For the second step, the dominating cost of this operation would be re-writing the data (under the assumption that a majority of the
-data in the WAL is preserved). In this case, assuming an effective aggregate write speed of 30MB/s for the job, this step would
-require approximately 30 minutes for 50GB of data (`50 * 1024 / 30 / 60 = 28.4` minutes). The amount of time to start the
-DistCp MapReduce job would likely dominate the actual time taken to copy the data (`50 / 1.25 = 40` seconds) and can be ignored.
-
-[[br.limitations]]
-## Limitations of the Backup and Restore Utility
-
-*Serial backup operations*
-
-Backup operations cannot be run concurrently. An operation includes actions like create, delete, restore, and merge. Only one active backup session is supported. link:https://issues.apache.org/jira/browse/HBASE-16391[HBASE-16391]
-will introduce support for multiple backup sessions.
-
-*No means to cancel backups*
-
-Neither backup nor restore operations can be canceled (link:https://issues.apache.org/jira/browse/HBASE-15997[HBASE-15997], link:https://issues.apache.org/jira/browse/HBASE-15998[HBASE-15998]).
-The workaround to cancel a backup is to kill the client-side backup command (`control-C`), ensure all relevant MapReduce jobs have exited, and then
-run the `hbase backup repair` command to ensure the system backup metadata is consistent.
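-
-A minimal sketch of that workaround from a terminal; the YARN command shown is one way to confirm the
-jobs have exited, and any equivalent check works:
-
-[source]
-----
-# 1. Interrupt the running `hbase backup create ...` client with Control-C.
-# 2. Confirm no backup-related MapReduce jobs are still running:
-$ yarn application -list
-# 3. Repair the backup system metadata:
-$ hbase backup repair
-----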
-
-*Backups can only be saved to a single location*
-
-Copying backup information to multiple locations is an exercise left to the user. link:https://issues.apache.org/jira/browse/HBASE-15476[HBASE-15476] will
-introduce the ability to specify multiple backup destinations intrinsically.
-
-*HBase superuser access is required*
-
-Only an HBase superuser (e.g. hbase) is allowed to perform backup/restore, which can pose a problem for shared HBase installations. Current mitigations would require
-coordination with system administrators to build and deploy a backup and restore strategy (link:https://issues.apache.org/jira/browse/HBASE-14138[HBASE-14138]).
-
-*Backup restoration is an online operation*
-
-As a caveat of the current implementation, the HBase cluster must be online to perform a restore from a backup (link:https://issues.apache.org/jira/browse/HBASE-16573[HBASE-16573]).
-
-*Some operations may fail and require re-run*
-
-The HBase backup feature is primarily client driven. While there is the standard HBase retry logic built into the HBase Connection, persistent errors in executing operations
-may propagate back to the client (e.g. snapshot failure due to region splits). The backup implementation should be moved from the client side into the ProcedureV2 framework
-in the future, which would provide additional robustness around transient/retryable failures. The `hbase backup repair` command is meant to correct states which the system
-cannot automatically detect and recover from.
-
-*Avoidance of declaration of public API*
-
-While the Java API to interact with this feature exists and its implementation is separated from an interface, insufficient rigor has been applied to determine if
-it is exactly what we intend to ship to users. As such, it is marked for a `Private` audience with the expectation that, as users begin to try the feature, there
-will be modifications that would necessitate breaking compatibility (link:https://issues.apache.org/jira/browse/HBASE-17517[HBASE-17517]).
-
-*Lack of global metrics for backup and restore*
-
-Individual backup and restore operations contain metrics about the amount of work the operation included, but there is no centralized location (e.g. the Master UI)
-which presents this information for consumption (link:https://issues.apache.org/jira/browse/HBASE-16565[HBASE-16565]).

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/community.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/community.adoc b/src/main/asciidoc/_chapters/community.adoc
index d141dbf..3a896cf 100644
--- a/src/main/asciidoc/_chapters/community.adoc
+++ b/src/main/asciidoc/_chapters/community.adoc
@@ -40,24 +40,6 @@ When the feature is ready for commit, 3 +1s from committers will get your featur
 See link:http://search-hadoop.com/m/asM982C5FkS1[HBase, mail # dev - Thoughts
               about large feature dev branches]
 
-[[patchplusonepolicy]]
-.Patch +1 Policy
-
-The below policy is something we put in place 09/2012.
-It is a suggested policy rather than a hard requirement.
-We want to try it first to see if it works before we cast it in stone.
-
-Apache HBase is made of link:https://issues.apache.org/jira/projects/HBASE?selectedItem=com.atlassian.jira.jira-projects-plugin:components-page[components].
-Components have one or more <<owner,OWNER>>s.
-See the 'Description' field on the link:https://issues.apache.org/jira/projects/HBASE?selectedItem=com.atlassian.jira.jira-projects-plugin:components-page[components] JIRA page for who the current owners are by component.
-
-Patches that fit within the scope of a single Apache HBase component require, at least, a +1 by one of the component's owners before commit.
-If owners are absent -- busy or otherwise -- two +1s by non-owners will suffice.
-
-Patches that span components need at least two +1s before they can be committed, preferably +1s by owners of components touched by the x-component patch (TODO: This needs tightening up but I think fine for first pass).
-
-Any -1 on a patch by anyone vetoes a patch; it cannot be committed until the justification for the -1 is addressed.
-
 [[hbase.fix.version.in.jira]]
 .How to set fix version in JIRA on issue resolve
 
@@ -85,19 +67,37 @@ We also are currently in violation of this basic tenet -- replication at least k
 [[community.roles]]
 == Community Roles
 
-[[owner]]
-.Component Owner/Lieutenant
+=== Release Managers
+
+Each maintained release branch has a release manager, who volunteers to coordinate how new features and bug fixes are backported to that release.
+The release managers are link:https://hbase.apache.org/team-list.html[committers].
+If you would like your feature or bug fix to be included in a given release, communicate with that release manager.
+If this list goes out of date or you can't reach the listed person, reach out to someone else on the list.
+
+NOTE: End-of-life releases are not included in this list.
+
+.Release Managers
+[cols="1,1", options="header"]
+|===
+| Release
+| Release Manager
+
+| 1.2
+| Sean Busbey
+
+| 1.3
+| Mikhail Antonov
 
-Component owners are listed in the description field on this Apache HBase JIRA link:https://issues.apache.org/jira/projects/HBASE?selectedItem=com.atlassian.jira.jira-projects-plugin:components-page[components] page.
-The owners are listed in the 'Description' field rather than in the 'Component Lead' field because the latter only allows us list one individual whereas it is encouraged that components have multiple owners.
+| 1.4
+| Andrew Purtell
 
-Owners or component lieutenants are volunteers who are (usually, but not necessarily) expert in their component domain and may have an agenda on how they think their Apache HBase component should evolve.
+| 2.0
+| Michael Stack
 
-. Owners will try and review patches that land within their component's scope.
-. If applicable, if an owner has an agenda, they will publish their goals or the design toward which they are driving their component
+| 2.1
+| Duo Zhang
 
-If you would like to be volunteer as a component owner, just write the dev list and we'll sign you up.
-Owners do not need to be committers.
+|===
 
 [[hbase.commit.msg.format]]
 == Commit Message format

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/compression.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/compression.adoc b/src/main/asciidoc/_chapters/compression.adoc
index 6fe0d76..b2ff5ce 100644
--- a/src/main/asciidoc/_chapters/compression.adoc
+++ b/src/main/asciidoc/_chapters/compression.adoc
@@ -335,25 +335,18 @@ You do not need to re-create the table or copy data.
 If you are changing codecs, be sure the old codec is still available until all the old StoreFiles have been compacted.
 
 .Enabling Compression on a ColumnFamily of an Existing Table using HBaseShell
-====
 ----
-
 hbase> disable 'test'
 hbase> alter 'test', {NAME => 'cf', COMPRESSION => 'GZ'}
 hbase> enable 'test'
 ----
-====
 
 .Creating a New Table with Compression On a ColumnFamily
-====
 ----
-
 hbase> create 'test2', { NAME => 'cf2', COMPRESSION => 'SNAPPY' }
 ----
-====
 
 .Verifying a ColumnFamily's Compression Settings
-====
 ----
 
 hbase> describe 'test'
@@ -366,7 +359,6 @@ DESCRIPTION                                          ENABLED
  LOCKCACHE => 'true'}
 1 row(s) in 0.1070 seconds
 ----
-====
 
 ==== Testing Compression Performance
 
@@ -374,9 +366,7 @@ HBase includes a tool called LoadTestTool which provides mechanisms to test your
 You must specify either `-write` or `-update-read` as your first parameter, and if you do not specify another parameter, usage advice is printed for each option.
 
 .+LoadTestTool+ Usage
-====
 ----
-
 $ bin/hbase org.apache.hadoop.hbase.util.LoadTestTool -h
 usage: bin/hbase org.apache.hadoop.hbase.util.LoadTestTool <options>
 Options:
@@ -387,7 +377,7 @@ Options:
                               LZ4]
  -data_block_encoding <arg>   Encoding algorithm (e.g. prefix compression) to
                               use for data blocks in the test column family, one
-                              of [NONE, PREFIX, DIFF, FAST_DIFF, PREFIX_TREE].
+                              of [NONE, PREFIX, DIFF, FAST_DIFF, ROW_INDEX_V1].
  -encryption <arg>            Enables transparent encryption on the test table,
                               one of [AES]
  -generator <arg>             The class which generates load for the tool. Any
@@ -429,16 +419,12 @@ Options:
                               port numbers
  -zk_root <arg>               name of parent znode in zookeeper
 ----
-====
 
 .Example Usage of LoadTestTool
-====
 ----
-
 $ hbase org.apache.hadoop.hbase.util.LoadTestTool -write 1:10:100 -num_keys 1000000
           -read 100:30 -num_tables 1 -data_block_encoding NONE -tn load_test_tool_NONE
 ----
-====
 
 [[data.block.encoding.enable]]
 === Enable Data Block Encoding
@@ -449,9 +435,7 @@ Disable the table before altering its DATA_BLOCK_ENCODING setting.
 Following is an example using HBase Shell:
 
 .Enable Data Block Encoding On a Table
-====
 ----
-
 hbase>  disable 'test'
 hbase> alter 'test', { NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST_DIFF' }
 Updating all regions with the new schema...
@@ -462,12 +446,9 @@ Done.
 hbase> enable 'test'
 0 row(s) in 0.1580 seconds
 ----
-====
 
 .Verifying a ColumnFamily's Data Block Encoding
-====
 ----
-
 hbase> describe 'test'
 DESCRIPTION                                          ENABLED
  'test', {NAME => 'cf', DATA_BLOCK_ENCODING => 'FAST true
@@ -478,7 +459,6 @@ DESCRIPTION                                          ENABLED
  e', BLOCKCACHE => 'true'}
 1 row(s) in 0.0650 seconds
 ----
-====
 
 :numbered:
 


[07/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/metrics.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/metrics.adoc b/src/main/site/asciidoc/metrics.adoc
deleted file mode 100644
index be7d9a5..0000000
--- a/src/main/site/asciidoc/metrics.adoc
+++ /dev/null
@@ -1,102 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Apache HBase (TM) Metrics
-
-== Introduction
-Apache HBase (TM) emits Hadoop link:http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html[metrics].
-
-== Setup
-
-First read up on Hadoop link:http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html[metrics].
-
-If you are using ganglia, the link:http://wiki.apache.org/hadoop/GangliaMetrics[GangliaMetrics] wiki page is useful read.
-
-To have HBase emit metrics, edit `$HBASE_HOME/conf/hadoop-metrics.properties` and enable metric 'contexts' per plugin.  As of this writing, hadoop supports *file* and *ganglia* plugins. Yes, the hbase metrics files is named hadoop-metrics rather than _hbase-metrics_ because currently at least the hadoop metrics system has the properties filename hardcoded. Per metrics _context_, comment out the NullContext and enable one or more plugins instead.
-
-If you enable the _hbase_ context, on regionservers you'll see total requests since last
-metric emission, count of regions and storefiles as well as a count of memstore size.
-On the master, you'll see a count of the cluster's requests.
-
-Enabling the _rpc_ context is good if you are interested in seeing
-metrics on each hbase rpc method invocation (counts and time taken).
-
-The _jvm_ context is useful for long-term stats on running hbase jvms -- memory used, thread counts, etc. As of this writing, if more than one jvm is running emitting metrics, at least in ganglia, the stats are aggregated rather than reported per instance.
-
-== Using with JMX
-
-In addition to the standard output contexts supported by the Hadoop 
-metrics package, you can also export HBase metrics via Java Management 
-Extensions (JMX).  This will allow viewing HBase stats in JConsole or 
-any other JMX client.
-
-=== Enable HBase stats collection
-
-To enable JMX support in HBase, first edit `$HBASE_HOME/conf/hadoop-metrics.properties` to support metrics refreshing. (If you've running 0.94.1 and above, or have already configured `hadoop-metrics.properties` for another output context, you can skip this step).
-[source,bash]
-----
-# Configuration of the "hbase" context for null
-hbase.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
-hbase.period=60
-
-# Configuration of the "jvm" context for null
-jvm.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
-jvm.period=60
-
-# Configuration of the "rpc" context for null
-rpc.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
-rpc.period=60
-----
-
-=== Setup JMX Remote Access
-
-For remote access, you will need to configure JMX remote passwords and access profiles.  Create the files:
-`$HBASE_HOME/conf/jmxremote.passwd` (set permissions 
-        to 600):: +
-----
-monitorRole monitorpass
-controlRole controlpass
-----
-
-`$HBASE_HOME/conf/jmxremote.access`:: +
-----
-monitorRole readonly
-controlRole readwrite
-----
-
-=== Configure JMX in HBase startup
-
-Finally, edit the `$HBASE_HOME/conf/hbase-env.sh` script to add JMX support:
-[source,bash]
-----
-HBASE_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false"
-HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.password.file=$HBASE_HOME/conf/jmxremote.passwd"
-HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.access.file=$HBASE_HOME/conf/jmxremote.access"
-
-export HBASE_MASTER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10101"
-export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10102"
-----
-
-After restarting the processes you want to monitor, you should now be able to run JConsole (included with the JDK since JDK 5.0) to view the statistics via JMX.  HBase MBeans are exported under the *`hadoop`* domain in JMX.
-
-
-== Understanding HBase Metrics
-
-For more information on understanding HBase metrics, see the link:book.html#hbase_metrics[metrics section] in the Apache HBase Reference Guide. 
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/old_news.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/old_news.adoc b/src/main/site/asciidoc/old_news.adoc
deleted file mode 100644
index fd0e255..0000000
--- a/src/main/site/asciidoc/old_news.adoc
+++ /dev/null
@@ -1,121 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Old Apache HBase (TM) News
-
-February 10th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/163139322/[HBase Meetup @ Continuuity] in Palo Alto
-
-January 30th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/158491762/[HBase Meetup @ Apple] in Cupertino
-
-January 30th, 2014:: link:http://www.meetup.com/Los-Angeles-HBase-User-group/events/160560282/[Los Angeles HBase User Group] in El Segundo
-
-October 24th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/140759692/[HBase User] and link:http://www.meetup.com/hackathon/events/144366512/[Developer] Meetup at HortonWorksin Palo Alto
-
-September 26, 2013:: link:http://www.meetup.com/hbaseusergroup/events/135862292/[HBase Meetup at Arista Networks] in San Francisco
-
-August 20th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/120534362/[HBase Meetup at Flurry] in San Francisco
-
-July 16th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/119929152/[HBase Meetup at Twitter] in San Francisco
-
-June 25th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/119154442/[Hadoop Summit Meetup].at San Jose Convention Center
-
-June 14th, 2013:: link:http://kijicon.eventbrite.com/[KijiCon: Building Big Data Apps] in San Francisco.
-
-June 13th, 2013:: link:http://www.hbasecon.com/[HBaseCon2013] in San Francisco.  Submit an Abstract!
-
-June 12th, 2013:: link:http://www.meetup.com/hackathon/events/123403802/[HBaseConHackAthon] at the Cloudera office in San Francisco.
-
-April 11th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/103587852/[HBase Meetup at AdRoll] in San Francisco
-
-February 28th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/96584102/[HBase Meetup at Intel Mission Campus]
-
-February 19th, 2013:: link:http://www.meetup.com/hackathon/events/103633042/[Developers PowWow] at HortonWorks' new digs
-
-January 23rd, 2013:: link:http://www.meetup.com/hbaseusergroup/events/91381312/[HBase Meetup at WibiData World HQ!]
-
-December 4th, 2012:: link:http://www.meetup.com/hackathon/events/90536432/[0.96 Bug Squashing and Testing Hackathon] at Cloudera, SF.
-
-October 29th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/82791572/[HBase User Group Meetup] at Wize Commerce in San Mateo.
-
-October 25th, 2012:: link:http://www.meetup.com/HBase-NYC/events/81728932/[Strata/Hadoop World HBase Meetup.] in NYC
-
-September 11th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/80621872/[Contributor's Pow-Wow at HortonWorks HQ.]
-
-August 8th, 2012:: link:http://www.apache.org/dyn/closer.cgi/hbase/[Apache HBase 0.94.1 is available for download]
-
-June 15th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/59829652/[Birds-of-a-feather] in San Jose, day after:: link:http://hadoopsummit.org[Hadoop Summit]
-
-May 23rd, 2012:: link:http://www.meetup.com/hackathon/events/58953522/[HackConAthon] in Palo Alto
-
-May 22nd, 2012:: link:http://www.hbasecon.com[HBaseCon2012] in San Francisco
-
-March 27th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/56021562/[Meetup @ StumbleUpon] in San Francisco
-
-January 19th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/46702842/[Meetup @ EBay]
-
-January 23rd, 2012:: Apache HBase 0.92.0 released. link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
-
-December 23rd, 2011:: Apache HBase 0.90.5 released. link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
-
-November 29th, 2011:: link:http://www.meetup.com/hackathon/events/41025972/[Developer Pow-Wow in SF] at Salesforce HQ
-
-November 7th, 2011:: link:http://www.meetup.com/hbaseusergroup/events/35682812/[HBase Meetup in NYC (6PM)] at the AppNexus office
-
-August 22nd, 2011:: link:http://www.meetup.com/hbaseusergroup/events/28518471/[HBase Hackathon (11AM) and Meetup (6PM)] at FB in PA
-
-June 30th, 2011:: link:http://www.meetup.com/hbaseusergroup/events/20572251/[HBase Contributor Day], the day after the link:http://developer.yahoo.com/events/hadoopsummit2011/[Hadoop Summit] hosted by Y!
-
-June 8th, 2011:: link:http://berlinbuzzwords.de/wiki/hbase-workshop-and-hackathon[HBase Hackathon] in Berlin to coincide with link:http://berlinbuzzwords.de/[Berlin Buzzwords]
-
-May 19th, 2011:: Apache HBase 0.90.3 released. link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
-
-April 12th, 2011:: Apache HBase 0.90.2 released. link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
-
-March 21st, 2011:: link:http://www.meetup.com/hackathon/events/16770852/[HBase 0.92 Hackathon at StumbleUpon, SF]
-February 22nd, 2011:: link:http://www.meetup.com/hbaseusergroup/events/16492913/[HUG12: February HBase User Group at StumbleUpon SF]
-December 13th, 2010:: link:http://www.meetup.com/hackathon/calendar/15597555/[HBase Hackathon: Coprocessor Edition]
-November 19th, 2010:: link:http://huguk.org/[Hadoop HUG in London] is all about Apache HBase
-November 15-19th, 2010:: link:http://www.devoxx.com/display/Devoxx2K10/Home[Devoxx] features HBase Training and multiple HBase presentations
-
-October 12th, 2010:: HBase-related presentations by core contributors and users at link:http://www.cloudera.com/company/press-center/hadoop-world-nyc/[Hadoop World 2010]
-
-October 11th, 2010:: link:http://www.meetup.com/hbaseusergroup/calendar/14606174/[HUG-NYC: HBase User Group NYC Edition] (Night before Hadoop World)
-June 30th, 2010:: link:http://www.meetup.com/hbaseusergroup/calendar/13562846/[Apache HBase Contributor Workshop] (Day after Hadoop Summit)
-May 10th, 2010:: Apache HBase graduates from Hadoop sub-project to Apache Top Level Project 
-
-April 19th, 2010:: Sign up for the link:http://www.meetup.com/hbaseusergroup/calendar/12689490/[HBase User Group Meeting, HUG10] hosted by Trend Micro
-
-March 10th, 2010:: link:http://www.meetup.com/hbaseusergroup/calendar/12689351/[HBase User Group Meeting, HUG9] hosted by Mozilla
-
-January 27th, 2010:: Sign up for the link:http://www.meetup.com/hbaseusergroup/calendar/12241393/[HBase User Group Meeting, HUG8], at StumbleUpon in SF
-
-November 2-6th, 2009:: link:http://dev.us.apachecon.com/c/acus2009/[ApacheCon] in Oakland. The Apache Foundation will be celebrating its 10th anniversary in beautiful Oakland by the Bay. Lots of good talks and meetups including an HBase presentation by a couple of the lads.
-
-October 2nd, 2009:: HBase at Hadoop World in NYC. A few of us will be talking on Practical HBase out east at link:http://www.cloudera.com/hadoop-world-nyc[Hadoop World: NYC].
-
-September 8th, 2009:: Apache HBase 0.20.0 is faster, stronger, slimmer, and sweeter tasting than any previous Apache HBase release.  Get it off the link:http://www.apache.org/dyn/closer.cgi/hbase/[Releases] page.
-
-August 7th-9th, 2009:: HUG7 and HBase Hackathon at StumbleUpon in SF: Sign up for the link:http://www.meetup.com/hbaseusergroup/calendar/10950511/[HBase User Group Meeting, HUG7] or for the link:http://www.meetup.com/hackathon/calendar/10951718/[Hackathon] or for both (all are welcome!).
-
-June, 2009::  HBase at HadoopSummit2009 and at NOSQL: See the link:https://hbase.apache.org/book.html#other.info.pres[presentations]
-
-March 3rd, 2009:: HUG6 -- link:http://www.meetup.com/hbaseusergroup/calendar/9764004/[HBase User Group 6]
-
-January 30th, 2009:: LA Hbackathon: link:http://www.meetup.com/hbasela/calendar/9450876/[HBase January Hackathon Los Angeles] at link:http://streamy.com[Streamy] in Manhattan Beach
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/pseudo-distributed.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/pseudo-distributed.adoc b/src/main/site/asciidoc/pseudo-distributed.adoc
deleted file mode 100644
index d13c63b..0000000
--- a/src/main/site/asciidoc/pseudo-distributed.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-
-= Running Apache HBase (TM) in pseudo-distributed mode
-This page has been retired.  The contents have been moved to the link:book.html#distributed[Distributed Operation: Pseudo- and Fully-distributed modes] section in the Reference Guide.
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/replication.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/replication.adoc b/src/main/site/asciidoc/replication.adoc
deleted file mode 100644
index 9089754..0000000
--- a/src/main/site/asciidoc/replication.adoc
+++ /dev/null
@@ -1,22 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Apache HBase (TM) Replication
-
-This information has been moved to the link:book.html#cluster_replication[Cluster Replication] section of the link:book.html[Apache HBase Reference Guide].

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/resources.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/resources.adoc b/src/main/site/asciidoc/resources.adoc
deleted file mode 100644
index fef217e..0000000
--- a/src/main/site/asciidoc/resources.adoc
+++ /dev/null
@@ -1,27 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-= Other Apache HBase (TM) Resources
-
-== Books
-HBase: The Definitive Guide:: link:http://shop.oreilly.com/product/0636920014348.do[HBase: The Definitive Guide, _Random Access to Your Planet-Size Data_] by Lars George. Publisher: O'Reilly Media, Released: August 2011, Pages: 556.
-
-HBase In Action:: link:http://www.manning.com/dimidukkhurana[HBase In Action] By Nick Dimiduk and Amandeep Khurana.  Publisher: Manning, MEAP Began: January 2012, Softbound print: Fall 2012, Pages: 350.
-
-HBase Administration Cookbook:: link:http://www.packtpub.com/hbase-administration-for-optimum-database-performance-cookbook/book[HBase Administration Cookbook] by Yifeng Jiang.  Publisher: PACKT Publishing, Release: Expected August 2012, Pages: 335.
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/sponsors.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/sponsors.adoc b/src/main/site/asciidoc/sponsors.adoc
deleted file mode 100644
index 4d7ebf3..0000000
--- a/src/main/site/asciidoc/sponsors.adoc
+++ /dev/null
@@ -1,36 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Apache HBase(TM) Sponsors
-
-First off, thanks to link:http://www.apache.org/foundation/thanks.html[all who sponsor] our parent, the Apache Software Foundation.
-
-The companies below have been gracious enough to provide their commercial tool offerings free of charge to the Apache HBase(TM) project.
-
-* The crew at link:http://www.ej-technologies.com/[ej-technologies] have been letting us use link:http://www.ej-technologies.com/products/jprofiler/overview.html[JProfiler] for years now. 
-
-* The lads at link:http://headwaysoftware.com/[Headway Software] have given us a license for link:http://headwaysoftware.com/products/?code=Restructure101[Restructure101] so we can untangle our interdependency mess.
-
-* link:http://www.yourkit.com[YourKit] allows us to use their link:http://www.yourkit.com/overview/index.jsp[Java Profiler].
-* Some of us use link:http://www.jetbrains.com/idea[IntelliJ IDEA] thanks to link:http://www.jetbrains.com/[JetBrains].
-* Thank you to Boris at link:http://www.vectorportal.com/[Vector Portal] for granting us a license on the image on which our logo is based.
-
-== Sponsoring the Apache Software Foundation
-To contribute to the Apache Software Foundation, a good idea in our opinion, see the link:http://www.apache.org/foundation/sponsorship.html[ASF Sponsorship] page.
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/custom/project-info-report.properties
----------------------------------------------------------------------
diff --git a/src/main/site/custom/project-info-report.properties b/src/main/site/custom/project-info-report.properties
deleted file mode 100644
index 912339e..0000000
--- a/src/main/site/custom/project-info-report.properties
+++ /dev/null
@@ -1,303 +0,0 @@
-# Licensed to the Apache Software Foundation (ASF) under one
-# or more contributor license agreements.  See the NOTICE file
-# distributed with this work for additional information
-# regarding copyright ownership.  The ASF licenses this file
-# to you under the Apache License, Version 2.0 (the
-# "License"); you may not use this file except in compliance
-# with the License.  You may obtain a copy of the License at
-#
-#  http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing,
-# software distributed under the License is distributed on an
-# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-# KIND, either express or implied.  See the License for the
-# specific language governing permissions and limitations
-# under the License.
-
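-# This is a custom resource bundle overriding the default strings of the
-# Maven project-info reports so that generated site pages read
-# "Apache HBase&#8482;" rather than the bare project name. (It is presumably
-# referenced from the site plugin configuration in pom.xml; the wiring
-# itself lives outside this file.)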
-report.cim.access                                                  = Access
-report.cim.anthill.intro                                           = Apache HBase&#8482; uses {Anthill, http://www.anthillpro.com/html/products/anthillos/}.
-report.cim.bamboo.intro                                            = Apache HBase&#8482; uses {Bamboo, http://www.atlassian.com/software/bamboo/}.
-report.cim.buildforge.intro                                        = Apache HBase&#8482; uses {Build Forge, http://www-306.ibm.com/software/awdtools/buildforge/enterprise/}.
-report.cim.continuum.intro                                         = Apache HBase&#8482; uses {Continuum, http://continuum.apache.org/}.
-report.cim.cruisecontrol.intro                                     = Apache HBase&#8482; uses {CruiseControl, http://cruisecontrol.sourceforge.net/}.
-report.cim.description                                             = These are the definitions of all continuous integration processes that build and test code on a frequent, regular basis.
-report.cim.general.intro                                           = Apache HBase&#8482; uses a Continuous Integration System.
-report.cim.hudson.intro                                            = Apache HBase&#8482; uses {Hudson, http://hudson-ci.org/}.
-report.cim.jenkins.intro                                           = Apache HBase&#8482; uses {Jenkins, http://jenkins-ci.org/}.
-report.cim.luntbuild.intro                                         = Apache HBase&#8482; uses {Luntbuild, http://luntbuild.javaforge.com/}.
-report.cim.travis.intro                                            = Apache HBase&#8482; uses {Travis CI, https://travis-ci.org/}.
-report.cim.name                                                    = Continuous Integration
-report.cim.nocim                                                   = No continuous integration management system is defined. Please check back at a later date.
-report.cim.notifiers.column.address                                = Address
-report.cim.notifiers.column.configuration                          = Configuration
-report.cim.notifiers.column.type                                   = Type
-report.cim.notifiers.intro                                         = Configuration for notifying developers/users when a build is unsuccessful, including user information and notification mode.
-report.cim.notifiers.nolist                                        = No notifiers are defined. Please check back at a later date.
-report.cim.notifiers.title                                         = Notifiers
-report.cim.nourl                                                   = No url to the continuous integration system is defined.
-report.cim.overview.title                                          = Overview
-report.cim.title                                                   = Continuous Integration
-report.cim.url                                                     = This is a link to the continuous integration system used by the project:
-report.dependencies.column.artifactId                              = ArtifactId
-report.dependencies.column.classifier                              = Classifier
-report.dependencies.column.description                             = Description
-report.dependencies.column.groupId                                 = GroupId
-report.dependencies.column.license                                 = License
-report.dependencies.column.optional                                = Optional
-report.dependencies.column.isOptional                              = Yes
-report.dependencies.column.isNotOptional                           = No
-report.dependencies.column.type                                    = Type
-report.dependencies.column.url                                     = URL
-report.dependencies.column.version                                 = Version
-report.dependencies.description                                    = This document lists the project's dependencies and provides information on each dependency.
-report.dependencies.file.details.cell.debuginformation.yes         = Yes
-report.dependencies.file.details.cell.debuginformation.no          = No
-report.dependencies.file.details.column.classes                    = Classes
-report.dependencies.file.details.column.debuginformation           = Debug Information
-report.dependencies.file.details.column.entries                    = Entries
-report.dependencies.file.details.column.file                       = Filename
-report.dependencies.file.details.column.javaVersion                = Java Version
-report.dependencies.file.details.column.packages                   = Packages
-report.dependencies.file.details.column.sealed                     = Sealed
-report.dependencies.file.details.column.size                       = Size
-report.dependencies.file.details.column.size.gb                    = GB
-report.dependencies.file.details.column.size.mb                    = MB
-report.dependencies.file.details.column.size.kb                    = kB
-report.dependencies.file.details.columntitle.debuginformation      = Indicates whether these dependencies have been compiled with debug information.
-report.dependencies.file.details.title                             = Dependency File Details
-report.dependencies.file.details.total                             = Total
-report.dependencies.graph.tables.licenses                          = Licenses
-report.dependencies.graph.tables.unknown                           = Unknown
-report.dependencies.graph.title                                    = Apache HBase&#8482; Dependency Graph
-report.dependencies.graph.tree.title                               = Dependency Tree
-report.dependencies.intro.compile                                  = This is a list of compile dependencies for Apache HBase&#8482;. These dependencies are required to compile and run the application:
-report.dependencies.intro.provided                                 = This is a list of provided dependencies for Apache HBase&#8482;. These dependencies are required to compile the application, but should be provided by default when using the library:
-report.dependencies.intro.runtime                                  = This is a list of runtime dependencies for Apache HBase&#8482;. These dependencies are required to run the application:
-report.dependencies.intro.system                                   = This is a list of system dependencies for Apache HBase&#8482;. These dependencies are required to compile the application:
-report.dependencies.intro.test                                     = This is a list of test dependencies for Apache HBase&#8482;. These dependencies are only required to compile and run unit tests for the application:
-report.dependencies.name                                           = Dependencies
-report.dependencies.nolist                                         = There are no dependencies for Apache HBase&#8482;. It is a standalone application that does not depend on any other project.
-report.dependencies.repo.locations.artifact.breakdown              = Repository locations for each of the Dependencies.
-report.dependencies.repo.locations.cell.release.disabled           = No
-report.dependencies.repo.locations.cell.release.enabled            = Yes
-report.dependencies.repo.locations.cell.snapshot.disabled          = No
-report.dependencies.repo.locations.cell.snapshot.enabled           = Yes
-report.dependencies.repo.locations.cell.blacklisted.disabled       = No
-report.dependencies.repo.locations.cell.blacklisted.enabled        = Yes
-report.dependencies.repo.locations.column.artifact                 = Artifact
-report.dependencies.repo.locations.column.blacklisted              = Blacklisted
-report.dependencies.repo.locations.column.release                  = Release
-report.dependencies.repo.locations.column.repoid                   = Repo ID
-report.dependencies.repo.locations.column.snapshot                 = Snapshot
-report.dependencies.repo.locations.column.url                      = URL
-report.dependencies.repo.locations.title                           = Dependency Repository Locations
-report.dependencies.title                                          = Apache HBase&#8482; Dependencies
-report.dependencies.unnamed                                        = Unnamed
-report.dependencies.transitive.intro                               = This is a list of transitive dependencies for Apache HBase&#8482;. Transitive dependencies are the dependencies of the project dependencies.
-report.dependencies.transitive.nolist                              = No transitive dependencies are required for Apache HBase&#8482;.
-report.dependencies.transitive.title                               = Apache HBase&#8482; Transitive Dependencies
-report.dependency-convergence.convergence.caption                  = Dependencies used in modules
-report.dependency-convergence.convergence.single.caption           = Dependencies used in Apache HBase&#8482;
-report.dependency-convergence.description                          = This is the convergence of dependency versions across the entire project and its sub-modules.
-report.dependency-convergence.legend                               = Legend:
-report.dependency-convergence.legend.different                     = At least one dependency has a differing version of the dependency or has SNAPSHOT dependencies.
-report.dependency-convergence.legend.shared                        = All modules/dependencies share one version of the dependency.
-report.dependency-convergence.name                                 = Dependency Convergence
-report.dependency-convergence.reactor.name                         = Reactor Dependency Convergence
-report.dependency-convergence.reactor.title                        = Reactor Dependency Convergence
-report.dependency-convergence.stats.artifacts                      = Number of unique artifacts (NOA):
-report.dependency-convergence.stats.caption                        = Statistics:
-report.dependency-convergence.stats.convergence                    = Convergence (NOD/NOA):
-report.dependency-convergence.stats.dependencies                   = Number of dependencies (NOD):
-report.dependency-convergence.stats.readyrelease                   = Ready for release (100 % convergence and no SNAPSHOTS):
-report.dependency-convergence.stats.readyrelease.error             = Error
-report.dependency-convergence.stats.readyrelease.error.convergence = There is less than 100 % convergence.
-report.dependency-convergence.stats.readyrelease.error.snapshots   = There are SNAPSHOT dependencies.
-report.dependency-convergence.stats.readyrelease.success           = Success
-report.dependency-convergence.stats.conflicting                    = Number of version-conflicting artifacts (NOC):
-report.dependency-convergence.stats.snapshots                      = Number of SNAPSHOT artifacts (NOS):
-report.dependency-convergence.stats.modules                        = Number of modules:
-report.dependency-convergence.title                                = Dependency Convergence
-report.dependency-info.name                                        = Dependency Information
-report.dependency-info.title                                       = Dependency Information
-report.dependency-info.description                                 = These are instructions for including Apache HBase&#8482; as a dependency using various dependency management tools.
-report.index.nodescription                                         = There is currently no description associated with Apache HBase&#8482;.
-report.index.title                                                 = About Apache HBase&#8482;
-report.issuetracking.bugzilla.intro                                = Apache HBase&#8482; uses {Bugzilla, http://www.bugzilla.org/}.
-report.issuetracking.custom.intro                                  = Apache HBase&#8482; uses %issueManagementSystem% to manage its issues.
-report.issuetracking.description                                   = Apache HBase&#8482; uses the following issue management system(s).
-report.issuetracking.general.intro                                 = Apache HBase&#8482; uses an Issue Management System to manage its issues.
-report.issuetracking.intro                                         = Issues, bugs, and feature requests should be submitted to the following issue tracking system for Apache HBase&#8482;.
-report.issuetracking.jira.intro                                    = Apache HBase&#8482; uses {JIRA, http://www.atlassian.com/software/jira}.
-report.issuetracking.name                                          = Issue Tracking
-report.issuetracking.noissueManagement                             = No issue management system is defined. Please check back at a later date.
-report.issuetracking.overview.title                                = Overview
-report.issuetracking.scarab.intro                                  = Apache HBase&#8482; uses {Scarab, http://scarab.tigris.org/}.
-report.issuetracking.title                                         = Issue Tracking
-report.license.description                                         = Apache HBase&#8482; uses the following project license(s).
-report.license.multiple                                            = Apache HBase&#8482; is provided under multiple licenses:
-report.license.name                                                = Apache HBase&#8482; License
-report.license.nolicense                                           = No license is defined for Apache HBase&#8482;.
-report.license.overview.intro                                      = This is the license for the Apache HBase project itself, but not necessarily its dependencies.
-report.license.overview.title                                      = Overview
-report.license.originalText                                        = [Original text]
-report.license.copy                                                = Copy of the license follows:
-report.license.title                                               = Apache HBase&#8482; License
-report.license.unnamed                                             = Unnamed
-report.mailing-lists.column.archive                                = Archive
-report.mailing-lists.column.name                                   = Name
-report.mailing-lists.column.otherArchives                          = Other Archives
-report.mailing-lists.column.post                                   = Post
-report.mailing-lists.column.subscribe                              = Subscribe
-report.mailing-lists.column.unsubscribe                            = Unsubscribe
-report.mailing-lists.description                                   = These are Apache HBase&#8482;'s mailing lists.
-report.mailing-lists.intro                                         = For each list, links are provided to subscribe, unsubscribe, and view archives.
-report.mailing-lists.name                                          = Mailing Lists
-report.mailing-lists.nolist                                        = There are no mailing lists currently associated with Apache HBase&#8482;.
-report.mailing-lists.title                                         = Apache HBase&#8482; Mailing Lists
-report.scm.accessbehindfirewall.cvs.intro                          = If you are behind a firewall that blocks HTTP access to the CVS repository, you can use the {CVSGrab, http://cvsgrab.sourceforge.net/} web interface to checkout the source code.
-report.scm.accessbehindfirewall.general.intro                      = Refer to the documentation of the SCM used for more information about access behind a firewall.
-report.scm.accessbehindfirewall.svn.intro                          = If you are behind a firewall that blocks HTTP access to the Subversion repository, you can try to access it via the developer connection:
-report.scm.accessbehindfirewall.title                              = Access from Behind a Firewall
-report.scm.accessthroughtproxy.svn.intro1                          = The Subversion client can go through a proxy, if you configure it to do so. First, edit your "servers" configuration file to indicate which proxy to use. The file's location depends on your operating system. On Linux or Unix it is located in the directory "~/.subversion". On Windows it is in "%APPDATA%\\Subversion". (Try "echo %APPDATA%", note this is a hidden directory.)
-report.scm.accessthroughtproxy.svn.intro2                          = There are comments in the file explaining what to do. If you don't have that file, get the latest Subversion client and run any command; this will cause the configuration directory and template files to be created.
-report.scm.accessthroughtproxy.svn.intro3                          = Example: Edit the 'servers' file and add something like:
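-# Illustration only, not rendered in the report: a minimal proxy section in
-# the Subversion 'servers' file might look like the following (host and port
-# are placeholders):
-#   [global]
-#   http-proxy-host = proxy.example.com
-#   http-proxy-port = 3128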
-report.scm.accessthroughtproxy.title                               = Access Through a Proxy
-report.scm.anonymousaccess.cvs.intro                               = Apache HBase&#8482;'s CVS repository can be checked out through anonymous CVS with the following instruction set. When prompted for a password for anonymous, simply press the Enter key.
-report.scm.anonymousaccess.general.intro                           = Refer to the documentation of the SCM used for more information about anonymous checkout. The connection URL is:
-report.scm.anonymousaccess.git.intro                               = The source can be checked out anonymously from Git with this command (See {http://git-scm.com/docs/git-clone,http://git-scm.com/docs/git-clone}):
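-# (The rendered page appends the concrete command built from the project's
-# SCM connection URL; for HBase that would be along the lines of
-# 'git clone git://git.apache.org/hbase.git'.)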
-report.scm.anonymousaccess.hg.intro                                = The source can be checked out anonymously from Mercurial with this command (See {http://www.selenic.com/mercurial/hg.1.html#clone,http://www.selenic.com/mercurial/hg.1.html#clone}):
-report.scm.anonymousaccess.svn.intro                               = The source can be checked out anonymously from Subversion with this command:
-report.scm.anonymousaccess.title                                   = Anonymous Access
-report.scm.clearcase.intro                                         = Apache HBase&#8482; uses {ClearCase, http://www-306.ibm.com/software/awdtools/clearcase/} to manage its source code. Information on ClearCase use can be found at {http://www.redbooks.ibm.com/redbooks/pdfs/sg246399.pdf, http://www.redbooks.ibm.com/redbooks/pdfs/sg246399.pdf}.
-report.scm.cvs.intro                                               = Apache HBase&#8482; uses {Concurrent Versions System, http://www.cvshome.org/} to manage its source code. Instructions on CVS use can be found at {http://cvsbook.red-bean.com/, http://cvsbook.red-bean.com/}.
-report.scm.description                                             = This document lists ways to access the online source repository.
-report.scm.devaccess.clearcase.intro                               = Only project developers can access the ClearCase tree via this method. Substitute username with the proper value.
-report.scm.devaccess.cvs.intro                                     = Only project developers can access the CVS tree via this method. Substitute username with the proper value.
-report.scm.devaccess.general.intro                                 = Refer to the documentation of the SCM used for more information about developer checkout. The connection URL is:
-report.scm.devaccess.git.intro                                     = Only project developers can access the Git tree via this method (See {http://git-scm.com/docs/git-clone,http://git-scm.com/docs/git-clone}).
-report.scm.devaccess.hg.intro                                      = Only project developers can access the Mercurial tree via this method (See {http://www.selenic.com/mercurial/hg.1.html#clone,http://www.selenic.com/mercurial/hg.1.html#clone}).
-report.scm.devaccess.perforce.intro                                = Only project developers can access the Perforce tree via this method. Substitute username and password with the proper values.
-report.scm.devaccess.starteam.intro                                = Only project developers can access the Starteam tree via this method. Substitute username with the proper value.
-report.scm.devaccess.svn.intro1.https                              = Everyone can access the Subversion repository via HTTP, but committers must check out the Subversion repository via HTTPS.
-report.scm.devaccess.svn.intro1.other                              = Committers must check out the Subversion repository.
-report.scm.devaccess.svn.intro1.svn                                = Committers must check out the Subversion repository via SVN.
-report.scm.devaccess.svn.intro1.svnssh                             = Committers must check out the Subversion repository via SVN+SSH.
-report.scm.devaccess.svn.intro2                                    = To commit changes to the repository, execute the following command (svn will prompt you for your password):
-report.scm.devaccess.title                                         = Developer Access
-report.scm.general.intro                                           = Apache HBase&#8482; uses a Source Content Management System to manage its source code.
-report.scm.name                                                    = Source Repository
-report.scm.noscm                                                   = No source configuration management system is defined. Please check back at a later date.
-report.scm.overview.title                                          = Overview
-report.scm.git.intro                                               = Apache HBase&#8482; uses {Git, http://git-scm.com/} to manage its source code. Instructions on Git use can be found at {http://git-scm.com/documentation,http://git-scm.com/documentation}.
-report.scm.hg.intro                                                = Apache HBase&#8482; uses {Mercurial, http://mercurial.selenic.com/wiki/} to manage its source code. Instructions on Mercurial use can be found at {http://hgbook.red-bean.com/read/, http://hgbook.red-bean.com/read/}.
-report.scm.perforce.intro                                          = Apache HBase&#8482; uses {Perforce, http://www.perforce.com/} to manage its source code. Instructions on Perforce use can be found at {http://www.perforce.com/perforce/doc.051/manuals/cmdref/index.html, http://www.perforce.com/perforce/doc.051/manuals/cmdref/index.html}.
-report.scm.starteam.intro                                          = Apache HBase&#8482; uses {Starteam, http://www.borland.com/us/products/starteam/} to manage its source code.
-report.scm.svn.intro                                               = Apache HBase&#8482; uses {Subversion, http://subversion.apache.org/} to manage its source code. Instructions on Subversion use can be found at {http://svnbook.red-bean.com/, http://svnbook.red-bean.com/}.
-report.scm.title                                                   = Source Repository
-report.scm.webaccess.nourl                                         = There is no browsable version of the source repository listed for Apache HBase&#8482;. Please check back again later.
-report.scm.webaccess.title                                         = Web Browser Access
-report.scm.webaccess.url                                           = The following is a link to a browsable version of the source repository:
-report.summary.build.artifactid                                    = ArtifactId
-report.summary.build.groupid                                       = GroupId
-report.summary.build.javaVersion                                   = Java Version
-report.summary.build.title                                         = Build Information
-report.summary.build.type                                          = Type
-report.summary.build.version                                       = Version
-report.summary.description                                         = This document lists other related information about Apache HBase&#8482;.
-report.summary.field                                               = Field
-report.summary.general.description                                 = Description
-report.summary.general.homepage                                    = Homepage
-report.summary.general.name                                        = Name
-report.summary.general.title                                       = Project Information
-report.summary.name                                                = Project Summary
-report.summary.organization.name                                   = Name
-report.summary.organization.title                                  = Project Organization
-report.summary.organization.url                                    = URL
-report.summary.noorganization                                      = Apache HBase&#8482; does not belong to an organization.
-report.summary.title                                               = Project Summary
-report.summary.value                                               = Value
-report.summary.download                                            = Download
-report.team-list.contributors.actualtime                           = Actual Time (GMT)
-report.team-list.contributors.email                                = Email
-report.team-list.contributors.intro                                = The following additional people have contributed to Apache HBase&#8482; by way of suggestions, patches, or documentation.
-report.team-list.contributors.image                                = Image
-report.team-list.contributors.name                                 = Name
-report.team-list.contributors.organization                         = Organization
-report.team-list.contributors.organizationurl                      = Organization URL
-report.team-list.contributors.properties                           = Properties
-report.team-list.contributors.roles                                = Roles
-report.team-list.contributors.timezone                             = Time Zone
-report.team-list.contributors.title                                = Contributors
-report.team-list.contributors.url                                  = URL
-report.team-list.description                                       = These are the members of the Apache HBase&#8482; project. These are the individuals who have contributed to the project in one form or another.
-report.team-list.developers.actualtime                             = Actual Time (GMT)
-report.team-list.developers.email                                  = Email
-report.team-list.developers.image                                  = Image
-report.team-list.developers.id                                     = Id
-report.team-list.developers.intro                                  = These are the developers with commit privileges who have directly contributed to the project in one way or another.
-report.team-list.developers.name                                   = Name
-report.team-list.developers.organization                           = Organization
-report.team-list.developers.organizationurl                        = Organization URL
-report.team-list.developers.properties                             = Properties
-report.team-list.developers.roles                                  = Roles
-report.team-list.developers.timezone                               = Time Zone
-report.team-list.developers.title                                  = Members
-report.team-list.developers.url                                    = URL
-report.team-list.intro.description1                                = A successful project requires many people to play many roles. Some members write code or documentation, while others are valuable as testers, submitting patches and suggestions.
-report.team-list.intro.description2                                = The team comprises Members and Contributors. Members have direct access to the source of a project and actively evolve the code-base. Contributors improve the project through submission of patches and suggestions to the Members. The number of Contributors to the project is unbounded. Get involved today. All contributions to the project are greatly appreciated.
-report.team-list.intro.title                                       = The Team
-report.team-list.name                                              = Project Team
-report.team-list.nocontributor                                     = Apache HBase&#8482; does not maintain a list of contributors.
-report.team-list.nodeveloper                                       = Apache HBase&#8482; does not maintain a list of developers.
-report.team-list.title                                             = Project Team
-report.dependencyManagement.name                                   = Dependency Management
-report.dependencyManagement.description                            = This document lists the dependencies that are defined through dependencyManagement.
-report.dependencyManagement.title                                  = Project Dependency Management
-report.dependencyManagement.nolist                                 = There are no dependencies in the DependencyManagement of Apache HBase&#8482;.
-report.dependencyManagement.column.groupId                         = GroupId
-report.dependencyManagement.column.artifactId                      = ArtifactId
-report.dependencyManagement.column.version                         = Version
-report.dependencyManagement.column.classifier                      = Classifier
-report.dependencyManagement.column.type                            = Type
-report.dependencyManagement.column.license                         = License
-report.dependencyManagement.intro.compile                          = The following is a list of compile dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to compile and run the submodule:
-report.dependencyManagement.intro.provided                         = The following is a list of provided dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to compile the submodule, but should be provided by default when using the library:
-report.dependencyManagement.intro.runtime                          = The following is a list of runtime dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to run the submodule:
-report.dependencyManagement.intro.system                           = The following is a list of system dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to compile the submodule:
-report.dependencyManagement.intro.test                             = The following is a list of test dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to compile and run unit tests for the submodule:
-report.pluginManagement.nolist                                     = There are no plugins defined in the PluginManagement part of Apache HBase&#8482;.
-report.pluginManagement.name                                       = Plugin Management
-report.pluginManagement.description                                = This document lists the plugins that are defined through pluginManagement.
-report.pluginManagement.title                                      = Project Plugin Management
-report.plugins.name                                                = Project Plugins
-report.plugins.description                                         = This document lists the build plugins and the report plugins used by Apache HBase&#8482;.
-report.plugins.title                                               = Project Build Plugins
-report.plugins.report.title                                        = Project Report Plugins
-report.plugins.nolist                                              = There are no plugins defined in the Build part of Apache HBase&#8482;.
-report.plugins.report.nolist                                       = There are no plugins reports defined in the Reporting part of Apache HBase&#8482;.
-report.modules.nolist                                              = There are no modules declared in Apache HBase&#8482;.
-report.modules.name                                                = Project Modules
-report.modules.description                                         = This document lists the modules (sub-projects) of Apache HBase&#8482;.
-report.modules.title                                               = Project Modules
-report.modules.intro                                               = Apache HBase&#8482; has declared the following modules:
-report.modules.header.name                                         = Name
-report.modules.header.description                                  = Description
-report.distributionManagement.name                                 = Distribution Management
-report.distributionManagement.description                          = This document provides information on the distribution management of Apache HBase&#8482;.
-report.distributionManagement.title                                = Project Distribution Management
-report.distributionManagement.nodistributionmanagement             = No distribution management is defined for Apache HBase&#8482;.
-report.distributionManagement.overview.title                       = Overview
-report.distributionManagement.overview.intro                       = The following is the distribution management information used by Apache HBase&#8482;.
-report.distributionManagement.downloadURL                          = Download URL
-report.distributionManagement.repository                           = Repository
-report.distributionManagement.snapshotRepository                   = Snapshot Repository
-report.distributionManagement.site                                 = Site
-report.distributionManagement.relocation                           = Relocation
-report.distributionManagement.field                                = Field
-report.distributionManagement.value                                = Value
-report.distributionManagement.relocation.groupid                   = GroupId
-report.distributionManagement.relocation.artifactid                = ArtifactId
-report.distributionManagement.relocation.version                   = Version
-report.distributionManagement.relocation.message                   = Message

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/.htaccess
----------------------------------------------------------------------
diff --git a/src/main/site/resources/.htaccess b/src/main/site/resources/.htaccess
deleted file mode 100644
index 20bf651..0000000
--- a/src/main/site/resources/.htaccess
+++ /dev/null
@@ -1,8 +0,0 @@
-
-# Redirect replication URL to the right section of the book
-# Rule added 2015-1-12 -- can be removed in 6 months
-Redirect permanent /replication.html /book.html#_cluster_replication
-
-# Redirect old page-per-chapter book sections to new single file.
-RedirectMatch permanent ^/book/(.*)\.html$ /book.html#$1
-RedirectMatch permanent ^/book/$ /book.html
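-# For example (chapter name illustrative): the rules above send
-#   /book/configuration.html  to  /book.html#configuration
-#   /book/                    to  /book.html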

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/book/.empty
----------------------------------------------------------------------
diff --git a/src/main/site/resources/book/.empty b/src/main/site/resources/book/.empty
deleted file mode 100644
index 5513814..0000000
--- a/src/main/site/resources/book/.empty
+++ /dev/null
@@ -1 +0,0 @@
-# This directory is here so that we can have rewrite rules in our .htaccess to maintain old links. Otherwise we fall under some top-level niceness redirects because we have a file named book.html.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/css/site.css
----------------------------------------------------------------------
diff --git a/src/main/site/resources/css/site.css b/src/main/site/resources/css/site.css
deleted file mode 100644
index 3f42f5a..0000000
--- a/src/main/site/resources/css/site.css
+++ /dev/null
@@ -1,118 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing,
- * software distributed under the License is distributed on an
- * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- * KIND, either express or implied.  See the License for the
- * specific language governing permissions and limitations
- * under the License.
- */
-
-/*@import(https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.2/css/bootstrap.min.css);
-@import(https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.2/css/bootstrap-responsive.css);*/
-html {
-  background-color: #fff;
-}
-body {
-  font-size: 16px;
-}
-li {
-  line-height: 120%;
-}
-
-div#topbar,
-div#banner,
-div#breadcrumbs,
-div#bodyColumn,
-footer {
-  width: initial;
-  padding-left: 20px;
-  padding-right: 20px;
-  clear: both;
-}
-footer {
-  background-color: #e5e5e5;
-}
-footer .row, footer p, footer .pull-right {
-  margin: 5px;
-}
-div#search-form.navbar-search.pull-right {
-  width: 290px;
-  margin-right: 0;
-  margin-top: -5px;
-  margin-left: 0;
-  position: initial;
-}
-li#publishDate.pull-right {
-  list-style: none;
-}
-.container,
-.navbar-static-top .container,
-.navbar-fixed-top .container,
-.navbar-fixed-bottom .container,
-.navbar-inner {
-  width: initial;
-}
-/* Change the color and effect when clicking in menus */
-.dropdown-menu>li>a:hover,
-.dropdown-menu>li>a:focus,
-.dropdown-submenu:hover>a,
-.dropdown-submenu:focus>a {
-  background-color: #e5e5e5;
-  background-image: none;
-  color: #000;
-  font-weight: bolder;
-}
-
-.dropdown-backdrop {
-  position: static;
-}
-
-@media only screen and (max-width: 979px) {
-  body {
-    padding-left: 0;
-    padding-right: 0;
-    width: initial;
-    margin: 0;
-  }
-  /* Without this rule, drop-down divs are a fixed height
-   * the first time they are expanded */
-  .collapse.in {
-      height: auto !important;
-  }
-  div#search-form.navbar-search.pull-right {
-    padding: 0;
-    margin-left: 0; /* no extra left offset on narrow screens */
-    width: initial;
-    clear: both;
-  }
-}
-
-/* Fix Google Custom Search results on very narrow screens */
-@media(max-width: 480px) {
-    .gsc-overflow-hidden .nav-collapse {
-        -webkit-transform: none;
-    }
-}
-
-/* Override weird body padding thing that causes scrolling */
-@media (max-width: 767px) {
-  body {
-    padding-right: 0;
-    padding-left: 0;
-  }
-}
-
-@media (max-width: 767px) {
-  .navbar-fixed-top, .navbar-fixed-bottom, .navbar-static-top {
-    margin-left: 0;
-    margin-right: 0;
-  }
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/doap_Hbase.rdf
----------------------------------------------------------------------
diff --git a/src/main/site/resources/doap_Hbase.rdf b/src/main/site/resources/doap_Hbase.rdf
deleted file mode 100644
index 46082a1..0000000
--- a/src/main/site/resources/doap_Hbase.rdf
+++ /dev/null
@@ -1,57 +0,0 @@
-<?xml version="1.0"?>
-<?xml-stylesheet type="text/xsl"?>
-<rdf:RDF xml:lang="en"
-         xmlns="http://usefulinc.com/ns/doap#" 
-         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" 
-         xmlns:asfext="http://projects.apache.org/ns/asfext#"
-         xmlns:foaf="http://xmlns.com/foaf/0.1/">
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one or more
-    contributor license agreements.  See the NOTICE file distributed with
-    this work for additional information regarding copyright ownership.
-    The ASF licenses this file to You under the Apache License, Version 2.0
-    (the "License"); you may not use this file except in compliance with
-    the License.  You may obtain a copy of the License at
-   
-         http://www.apache.org/licenses/LICENSE-2.0
-   
-    Unless required by applicable law or agreed to in writing, software
-    distributed under the License is distributed on an "AS IS" BASIS,
-    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-    See the License for the specific language governing permissions and
-    limitations under the License.
--->
-  <Project rdf:about="http://hbase.apache.org">
-    <created>2012-04-14</created>
-    <license rdf:resource="http://usefulinc.com/doap/licenses/asl20" />
-    <name>Apache HBase</name>
-    <homepage rdf:resource="http://hbase.apache.org" />
-    <asfext:pmc rdf:resource="http://hbase.apache.org" />
-    <shortdesc>Apache HBase software is the Hadoop database. Think of it as a distributed, scalable, big data store.</shortdesc>
-    <description>Use Apache HBase software when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware. HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop and HDFS. </description>
-    <bug-database rdf:resource="http://issues.apache.org/jira/browse/HBASE" />
-    <mailing-list rdf:resource="http://hbase.apache.org/mail-lists.html" />
-    <download-page rdf:resource="http://www.apache.org/dyn/closer.cgi/hbase/" />
-    <programming-language>Java</programming-language>
-    <category rdf:resource="http://projects.apache.org/category/database" />
-    <release>
-      <Version>
-        <name>Apache HBase</name>
-        <created>2015-07-23</created>
-        <revision>2.0.0-SNAPSHOT</revision>
-      </Version>
-    </release>
-    <repository>
-      <GitRepository>
-        <location rdf:resource="git://git.apache.org/hbase.git"/>
-        <browse rdf:resource="https://git-wip-us.apache.org/repos/asf?p=hbase.git"/>
-      </GitRepository>
-    </repository>
-    <maintainer>
-      <foaf:Person>
-        <foaf:name>Apache HBase PMC</foaf:name>
-          <foaf:mbox rdf:resource="mailto:dev@hbase.apache.org"/>
-      </foaf:Person>
-    </maintainer>
-  </Project>
-</rdf:RDF>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/architecture.gif
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/architecture.gif b/src/main/site/resources/images/architecture.gif
deleted file mode 100644
index 8d84a23..0000000
Binary files a/src/main/site/resources/images/architecture.gif and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/backup-app-components.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/backup-app-components.png b/src/main/site/resources/images/backup-app-components.png
deleted file mode 100644
index 5e403e2..0000000
Binary files a/src/main/site/resources/images/backup-app-components.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/backup-cloud-appliance.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/backup-cloud-appliance.png b/src/main/site/resources/images/backup-cloud-appliance.png
deleted file mode 100644
index 76b6d5a..0000000
Binary files a/src/main/site/resources/images/backup-cloud-appliance.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/backup-dedicated-cluster.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/backup-dedicated-cluster.png b/src/main/site/resources/images/backup-dedicated-cluster.png
deleted file mode 100644
index bca282d..0000000
Binary files a/src/main/site/resources/images/backup-dedicated-cluster.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/backup-intra-cluster.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/backup-intra-cluster.png b/src/main/site/resources/images/backup-intra-cluster.png
deleted file mode 100644
index 113c577..0000000
Binary files a/src/main/site/resources/images/backup-intra-cluster.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/bc_basic.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/bc_basic.png b/src/main/site/resources/images/bc_basic.png
deleted file mode 100644
index 231de93..0000000
Binary files a/src/main/site/resources/images/bc_basic.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/bc_config.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/bc_config.png b/src/main/site/resources/images/bc_config.png
deleted file mode 100644
index 53250cf..0000000
Binary files a/src/main/site/resources/images/bc_config.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/bc_l1.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/bc_l1.png b/src/main/site/resources/images/bc_l1.png
deleted file mode 100644
index 36d7e55..0000000
Binary files a/src/main/site/resources/images/bc_l1.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/bc_l2_buckets.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/bc_l2_buckets.png b/src/main/site/resources/images/bc_l2_buckets.png
deleted file mode 100644
index 5163928..0000000
Binary files a/src/main/site/resources/images/bc_l2_buckets.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/bc_stats.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/bc_stats.png b/src/main/site/resources/images/bc_stats.png
deleted file mode 100644
index d8c6384..0000000
Binary files a/src/main/site/resources/images/bc_stats.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/big_h_logo.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/big_h_logo.png b/src/main/site/resources/images/big_h_logo.png
deleted file mode 100644
index 5256094..0000000
Binary files a/src/main/site/resources/images/big_h_logo.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/big_h_logo.svg
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/big_h_logo.svg b/src/main/site/resources/images/big_h_logo.svg
deleted file mode 100644
index ab24198..0000000
--- a/src/main/site/resources/images/big_h_logo.svg
+++ /dev/null
@@ -1,139 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Generator: Adobe Illustrator 15.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
-   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
-   version="1.1"
-   id="Layer_1"
-   x="0px"
-   y="0px"
-   width="792px"
-   height="612px"
-   viewBox="0 0 792 612"
-   enable-background="new 0 0 792 612"
-   xml:space="preserve"
-   inkscape:version="0.48.4 r9939"
-   sodipodi:docname="big_h_same_font_hbase3_logo.png"
-   inkscape:export-filename="big_h_bitmap.png"
-   inkscape:export-xdpi="90"
-   inkscape:export-ydpi="90"><metadata
-   id="metadata3693"><rdf:RDF><cc:Work
-       rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
-         rdf:resource="http://purl.org/dc/dcmitype/StillImage" /><dc:title></dc:title></cc:Work></rdf:RDF></metadata><defs
-   id="defs3691" /><sodipodi:namedview
-   pagecolor="#000000"
-   bordercolor="#666666"
-   borderopacity="1"
-   objecttolerance="10"
-   gridtolerance="10"
-   guidetolerance="10"
-   inkscape:pageopacity="0"
-   inkscape:pageshadow="2"
-   inkscape:window-width="1440"
-   inkscape:window-height="856"
-   id="namedview3689"
-   showgrid="false"
-   inkscape:zoom="2.1814013"
-   inkscape:cx="415.39305"
-   inkscape:cy="415.72702"
-   inkscape:window-x="1164"
-   inkscape:window-y="22"
-   inkscape:window-maximized="0"
-   inkscape:current-layer="Layer_1" />
-
-
-
-
-
-
-<text
-   xml:space="preserve"
-   style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Bitsumishi;-inkscape-font-specification:Bitsumishi"
-   x="311.18643"
-   y="86.224579"
-   id="text3082"
-   sodipodi:linespacing="125%"><tspan
-     sodipodi:role="line"
-     id="tspan3084"
-     x="311.18643"
-     y="86.224579" /></text>
-<text
-   xml:space="preserve"
-   style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Bitsumishi;-inkscape-font-specification:Bitsumishi"
-   x="283.95764"
-   y="87.845337"
-   id="text3086"
-   sodipodi:linespacing="125%"><tspan
-     sodipodi:role="line"
-     id="tspan3088"
-     x="283.95764"
-     y="87.845337" /></text>
-<g
-   id="g3105"
-   transform="translate(14.669469,-80.682082)"
-   inkscape:export-filename="/Users/stack/Documents/big_h_base.png"
-   inkscape:export-xdpi="90"
-   inkscape:export-ydpi="90"><path
-     sodipodi:nodetypes="ccccccccccccccccccccccccccccc"
-     style="fill:#ba160c"
-     inkscape:connector-curvature="0"
-     id="path3677"
-     d="m 589.08202,499.77746 -40.3716,0 0,-168.36691 40.3716,0 z m -40.20304,-168.35619 -0.1684,-104.30857 40.3716,0 -0.33048,104.26805 z m -0.1684,168.35619 -40.37568,0 0,-104.82988 -259.42272,0 0,104.82988 -79.42128,0 0,-272.66476 79.42128,0 0,104.29785 224.92224,0 34.50456,0 40.37568,0 0,168.36691 z m 0,-272.66476 -40.37568,0 -0.0171,104.30857 40.55802,-0.01 z"
-     inkscape:export-filename="/Users/stack/Documents/polygon3687.png"
-     inkscape:export-xdpi="90"
-     inkscape:export-ydpi="90" /><path
-     sodipodi:nodetypes="cscsccsssccsssccscsccccccccccccccccccccc"
-     style="fill:#ba160c"
-     inkscape:connector-curvature="0"
-     id="path3679"
-     d="m 263.96692,553.27262 c 6.812,4.218 10.219,10.652 10.219,19.303 0,6.272 -2,11.571 -6.002,15.897 -4.325,4.758 -10.165,7.137 -17.519,7.137 l -28.629,0 0,-19.465 28.629,0 c 2.812,0 4.218,-2.109 4.218,-6.327 0,-4.216 -1.406,-6.325 -4.218,-6.325 l -28.629,0 0,-19.303 27.17,0 c 2.811,0 4.217,-2.109 4.217,-6.327 0,-4.216 -1.406,-6.326 -4.217,-6.326 l -27.17,0 0,-19.464 27.17,0 c 7.353,0 13.192,2.379 17.519,7.137 3.892,4.325 5.839,9.625 5.839,15.896 0,7.787 -2.866,13.842 -8.597,18.167 z m -41.931,42.338 -52.312,0 0,-51.42 19.466,0 5.259,0 27.588,0 0,19.303 -32.847,0 0,12.652 32.847,0 0,19.465 z m 0,-64.073 -32.847,0 0.0405,12.76351 -19.466,0.081 -0.0405,-32.30954 52.312,0 0,19.465 z" /><path
-     style="fill:#ba160c"
-     inkscape:connector-curvature="0"
-     id="path3683"
-     d="m 384.35292,595.61062 h -19.465 v -26.602 h -31.094 -0.618 v -19.466 h 0.618 31.094 v -11.68 c 0,-4.216 -1.406,-6.324 -4.218,-6.324 h -27.494 v -19.465 h 27.494 c 7.03,0 12.733,2.541 17.114,7.623 4.379,5.083 6.569,11.139 6.569,18.167 v 57.747 z m -51.177,-26.602 h -19.547 -12.165 v 26.602 h -19.466 v -57.748 c 0,-7.028 2.19,-13.083 6.569,-18.167 4.379,-5.083 10.03,-7.623 16.952,-7.623 h 27.656 v 19.466 h -27.656 c -2.704,0 -4.055,2.108 -4.055,6.324 v 11.68 h 12.165 19.547 v 19.466 z" /><path
-     style="fill:#ba160c"
-     inkscape:connector-curvature="0"
-     id="path3685"
-     d="m 492.35692,569.81862 c 0,7.03 -2.109,13.031 -6.327,18.006 -4.541,5.19 -10.273,7.786 -17.193,7.786 h -72.02 v -19.465 h 72.02 c 2.704,0 4.055,-2.109 4.055,-6.327 0,-4.216 -1.352,-6.325 -4.055,-6.325 h -52.394 c -6.92,0 -12.652,-2.596 -17.193,-7.787 -4.327,-4.865 -6.49,-10.813 -6.49,-17.843 0,-7.028 2.218,-13.083 6.651,-18.167 4.434,-5.083 10.112,-7.623 17.032,-7.623 h 72.021 v 19.464 h -72.021 c -2.703,0 -4.055,2.109 -4.055,6.326 0,4.109 1.352,6.164 4.055,6.164 h 52.394 c 6.92,0 12.652,2.596 17.193,7.787 4.218,4.974 6.327,10.976 6.327,18.004 z" /><polygon
-     style="fill:#ba160c"
-     transform="translate(-71.972085,223.93862)"
-     id="polygon3687"
-     points="656.952,339.555 591.906,339.555 591.906,352.207 661.331,352.207 661.331,371.672 572.44,371.672 572.44,288.135 661.494,288.135 661.494,307.599 591.906,307.599 591.906,320.089 656.952,320.089 "
-     inkscape:export-xdpi="90"
-     inkscape:export-ydpi="90" /><g
-     id="g3349"><g
-       id="g3344"><text
-         transform="scale(0.93350678,1.0712295)"
-         sodipodi:linespacing="125%"
-         id="text3076"
-         y="203.03328"
-         x="181.98402"
-         style="font-size:84.015625px;font-style:italic;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#4d4d4d;fill-opacity:1;stroke:none;font-family:Bitsumishi;-inkscape-font-specification:Bitsumishi Bold Italic"
-         xml:space="preserve"
-         inkscape:export-xdpi="90"
-         inkscape:export-ydpi="90"
-         inkscape:export-filename="/Users/stack/Documents/polygon3687.png"><tspan
-           style="font-size:84.015625px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:25.64349174px;writing-mode:lr-tb;text-anchor:start;fill:#4d4d4d;font-family:Bitsumishi;-inkscape-font-specification:Bitsumishi"
-           y="203.03328"
-           x="181.98402"
-           id="tspan3080"
-           sodipodi:role="line">APACHE</tspan></text>
-<rect
-         y="191.93103"
-         x="178.85117"
-         height="10.797735"
-         width="7.7796612"
-         id="rect3090"
-         style="fill:#4d4d4d" /></g><rect
-       style="fill:#4d4d4d"
-       id="rect3103"
-       width="8.1443329"
-       height="10.787481"
-       x="334.64697"
-       y="191.93881" /></g></g></svg>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/data_block_diff_encoding.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/data_block_diff_encoding.png b/src/main/site/resources/images/data_block_diff_encoding.png
deleted file mode 100644
index 0bd03a4..0000000
Binary files a/src/main/site/resources/images/data_block_diff_encoding.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/data_block_no_encoding.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/data_block_no_encoding.png b/src/main/site/resources/images/data_block_no_encoding.png
deleted file mode 100644
index 56498b4..0000000
Binary files a/src/main/site/resources/images/data_block_no_encoding.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/data_block_prefix_encoding.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/data_block_prefix_encoding.png b/src/main/site/resources/images/data_block_prefix_encoding.png
deleted file mode 100644
index 4271847..0000000
Binary files a/src/main/site/resources/images/data_block_prefix_encoding.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/favicon.ico
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/favicon.ico b/src/main/site/resources/images/favicon.ico
deleted file mode 100644
index 6e4d0f7..0000000
Binary files a/src/main/site/resources/images/favicon.ico and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hadoop-logo.jpg
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hadoop-logo.jpg b/src/main/site/resources/images/hadoop-logo.jpg
deleted file mode 100644
index 809525d..0000000
Binary files a/src/main/site/resources/images/hadoop-logo.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbase_logo.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbase_logo.png b/src/main/site/resources/images/hbase_logo.png
deleted file mode 100644
index e962ce0..0000000
Binary files a/src/main/site/resources/images/hbase_logo.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbase_logo.svg
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbase_logo.svg b/src/main/site/resources/images/hbase_logo.svg
deleted file mode 100644
index 2cc26d9..0000000
--- a/src/main/site/resources/images/hbase_logo.svg
+++ /dev/null
@@ -1,78 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Generator: Adobe Illustrator 15.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
-
-<svg
-   xmlns:dc="http://purl.org/dc/elements/1.1/"
-   xmlns:cc="http://creativecommons.org/ns#"
-   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-   xmlns:svg="http://www.w3.org/2000/svg"
-   xmlns="http://www.w3.org/2000/svg"
-   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
-   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
-   version="1.1"
-   id="Layer_1"
-   x="0px"
-   y="0px"
-   width="792px"
-   height="612px"
-   viewBox="0 0 792 612"
-   enable-background="new 0 0 792 612"
-   xml:space="preserve"
-   inkscape:version="0.48.4 r9939"
-   sodipodi:docname="hbase_banner_logo.png"
-   inkscape:export-filename="hbase_logo_filledin.png"
-   inkscape:export-xdpi="90"
-   inkscape:export-ydpi="90"><metadata
-   id="metadata3285"><rdf:RDF><cc:Work
-       rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
-         rdf:resource="http://purl.org/dc/dcmitype/StillImage" /><dc:title></dc:title></cc:Work></rdf:RDF></metadata><defs
-   id="defs3283" /><sodipodi:namedview
-   pagecolor="#ffffff"
-   bordercolor="#666666"
-   borderopacity="1"
-   objecttolerance="10"
-   gridtolerance="10"
-   guidetolerance="10"
-   inkscape:pageopacity="0"
-   inkscape:pageshadow="2"
-   inkscape:window-width="1131"
-   inkscape:window-height="715"
-   id="namedview3281"
-   showgrid="false"
-   inkscape:zoom="4.3628026"
-   inkscape:cx="328.98554"
-   inkscape:cy="299.51695"
-   inkscape:window-x="752"
-   inkscape:window-y="456"
-   inkscape:window-maximized="0"
-   inkscape:current-layer="Layer_1" />
-<path
-   d="m 233.586,371.672 -9.895,0 0,-51.583 9.895,0 0,51.583 z m -9.77344,-51.59213 -0.12156,-31.94487 9.895,0 -0.0405,31.98539 z m -0.12156,51.59213 -9.896,0 0,-32.117 -63.584,0 0,32.117 -19.466,0 0,-83.537 19.466,0 0,31.954 55.128,0 8.457,0 9.896,0 0,51.583 z m 0,-83.537 -9.896,0 0,31.98539 10.01756,-0.0405 z"
-   id="path3269"
-   inkscape:connector-curvature="0"
-   style="fill:#ba160c"
-   sodipodi:nodetypes="cccccccccccccccccccccccccccccc" />
-<path
-   d="m 335.939,329.334 c 6.812,4.218 10.219,10.652 10.219,19.303 0,6.272 -2,11.571 -6.002,15.897 -4.325,4.758 -10.165,7.137 -17.519,7.137 l -28.629,0 0,-19.465 28.629,0 c 2.812,0 4.218,-2.109 4.218,-6.327 0,-4.216 -1.406,-6.325 -4.218,-6.325 l -28.629,0 0,-19.303 27.17,0 c 2.811,0 4.217,-2.109 4.217,-6.327 0,-4.216 -1.406,-6.326 -4.217,-6.326 l -27.17,0 0,-19.464 27.17,0 c 7.353,0 13.192,2.379 17.519,7.137 3.892,4.325 5.839,9.625 5.839,15.896 0,7.787 -2.866,13.842 -8.597,18.167 z m -41.931,42.338 -52.312,0 0,-51.42 19.466,0 5.259,0 27.588,0 0,19.303 -32.847,0 0,12.652 32.847,0 0,19.465 z m 0,-64.073 -32.847,0 0.0405,13.24974 -19.466,-0.48623 -0.0405,-32.22851 52.312,0 0,19.465 z"
-   id="path3271"
-   inkscape:connector-curvature="0"
-   style="fill:#ba160c"
-   sodipodi:nodetypes="cscsccsssccsssccscsccccccccccccccccccccc" />
-<path
-   d="M355.123,266.419v-8.92h14.532v-5.353c0-1.932-0.644-2.899-1.933-2.899h-12.6v-8.919h12.6  c3.223,0,5.836,1.164,7.842,3.494c2.007,2.33,3.011,5.104,3.011,8.325v26.463h-8.921v-12.19H355.123L355.123,266.419z   M473.726,278.61h-29.587c-3.469,0-6.417-1.152-8.845-3.458c-2.429-2.304-3.642-5.191-3.642-8.659v-14.049  c0-3.47,1.213-6.356,3.642-8.662c2.428-2.304,5.376-3.455,8.845-3.455h29.587v8.919h-29.587c-2.378,0-3.567,1.066-3.567,3.197  v14.049c0,2.131,1.189,3.196,3.567,3.196h29.587V278.61L473.726,278.61z M567.609,278.61h-8.996v-14.718h-22.895v14.718h-8.92  v-38.282h8.92v14.644h22.895v-14.644h8.996V278.61L567.609,278.61z M661.494,249.247h-31.889v5.725h29.807v8.92h-29.807v5.797  h31.814v8.92h-40.735v-38.282h40.809V249.247z M355.123,240.328v8.919h-12.674c-1.239,0-1.858,0.967-1.858,2.899v5.353h5.575h2.435  h6.522v8.92h-6.522h-2.435h-5.575v12.19h-8.92v-26.463c0-3.221,1.004-5.996,3.011-8.325c2.006-2.33,4.596-3.494,7.768-3.494H355.123  L355.123,240.328z M254.661,266.122v-8.92h13.083c1.288,0,1.
 933-1.313,1.933-3.939c0-2.676-0.645-4.015-1.933-4.015h-13.083v-8.919  h13.083c3.32,0,5.995,1.363,8.028,4.088c1.883,2.478,2.825,5.425,2.825,8.846c0,3.419-0.942,6.342-2.825,8.771  c-2.033,2.725-4.708,4.088-8.028,4.088H254.661z M177.649,278.61h-8.92v-12.19h-14.532v-8.92h14.532v-5.353  c0-1.932-0.644-2.899-1.932-2.899h-12.6v-8.919h12.6c3.222,0,5.835,1.164,7.842,3.494c2.007,2.33,3.01,5.104,3.01,8.325V278.61  L177.649,278.61z M254.661,240.328v8.919h-15.016v7.954h15.016v8.92h-15.016v12.488h-8.92v-38.282H254.661z M154.198,266.419h-7.604  h-1.354h-5.575v12.19h-8.92v-26.463c0-3.221,1.004-5.996,3.01-8.325c2.007-2.33,4.597-3.494,7.768-3.494h12.674v8.919h-12.674  c-1.239,0-1.858,0.967-1.858,2.899v5.353h5.575h1.354h7.604V266.419z"
-   id="path3273"
-   style="fill:#666666"
-   fill="#878888" />
-<path
-   fill="#BA160C"
-   d="M456.325,371.672H436.86V345.07h-31.094h-0.618v-19.466h0.618h31.094v-11.68  c0-4.216-1.406-6.324-4.218-6.324h-27.494v-19.465h27.494c7.03,0,12.733,2.541,17.114,7.623c4.379,5.083,6.569,11.139,6.569,18.167  V371.672z M405.148,345.07h-19.547h-12.165v26.602h-19.466v-57.748c0-7.028,2.19-13.083,6.569-18.167  c4.379-5.083,10.03-7.623,16.952-7.623h27.656V307.6h-27.656c-2.704,0-4.055,2.108-4.055,6.324v11.68h12.165h19.547V345.07z"
-   id="path3275" />
-<path
-   fill="#BA160C"
-   d="M564.329,345.88c0,7.03-2.109,13.031-6.327,18.006c-4.541,5.19-10.273,7.786-17.193,7.786h-72.02v-19.465  h72.02c2.704,0,4.055-2.109,4.055-6.327c0-4.216-1.352-6.325-4.055-6.325h-52.394c-6.92,0-12.652-2.596-17.193-7.787  c-4.327-4.865-6.49-10.813-6.49-17.843c0-7.028,2.218-13.083,6.651-18.167c4.434-5.083,10.112-7.623,17.032-7.623h72.021v19.464  h-72.021c-2.703,0-4.055,2.109-4.055,6.326c0,4.109,1.352,6.164,4.055,6.164h52.394c6.92,0,12.652,2.596,17.193,7.787  C562.22,332.85,564.329,338.852,564.329,345.88z"
-   id="path3277" />
-<polygon
-   fill="#BA160C"
-   points="661.494,307.599 591.906,307.599 591.906,320.089 656.952,320.089 656.952,339.555 591.906,339.555   591.906,352.207 661.331,352.207 661.331,371.672 572.44,371.672 572.44,288.135 661.494,288.135 "
-   id="polygon3279" />
-</svg>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbase_logo_with_orca.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbase_logo_with_orca.png b/src/main/site/resources/images/hbase_logo_with_orca.png
deleted file mode 100644
index 7ed60e2..0000000
Binary files a/src/main/site/resources/images/hbase_logo_with_orca.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbase_logo_with_orca.xcf
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbase_logo_with_orca.xcf b/src/main/site/resources/images/hbase_logo_with_orca.xcf
deleted file mode 100644
index 8d88da2..0000000
Binary files a/src/main/site/resources/images/hbase_logo_with_orca.xcf and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbase_logo_with_orca_large.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbase_logo_with_orca_large.png b/src/main/site/resources/images/hbase_logo_with_orca_large.png
deleted file mode 100644
index e91eb8d..0000000
Binary files a/src/main/site/resources/images/hbase_logo_with_orca_large.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbase_replication_diagram.jpg
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbase_replication_diagram.jpg b/src/main/site/resources/images/hbase_replication_diagram.jpg
deleted file mode 100644
index c110309..0000000
Binary files a/src/main/site/resources/images/hbase_replication_diagram.jpg and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbasecon2015.30percent.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbasecon2015.30percent.png b/src/main/site/resources/images/hbasecon2015.30percent.png
deleted file mode 100644
index 26896a4..0000000
Binary files a/src/main/site/resources/images/hbasecon2015.30percent.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbasecon2016-stack-logo.jpg
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbasecon2016-stack-logo.jpg b/src/main/site/resources/images/hbasecon2016-stack-logo.jpg
deleted file mode 100644
index b59280d..0000000
Binary files a/src/main/site/resources/images/hbasecon2016-stack-logo.jpg and /dev/null differ


[04/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/cygwin.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/cygwin.adoc b/src/site/asciidoc/cygwin.adoc
new file mode 100644
index 0000000..5b6d5b4
--- /dev/null
+++ b/src/site/asciidoc/cygwin.adoc
@@ -0,0 +1,196 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+
+== Installing Apache HBase (TM) on Windows using Cygwin
+
+== Introduction
+
+link:http://hbase.apache.org[Apache HBase (TM)] is a distributed, column-oriented store, modeled after Google's link:http://research.google.com/archive/bigtable.html[BigTable]. Apache HBase is built on top of link:http://hadoop.apache.org[Hadoop] for its link:http://hadoop.apache.org/mapreduce[MapReduce] link:http://hadoop.apache.org/hdfs[distributed file system] implementations. All these projects are open-source and part of the link:http://www.apache.org[Apache Software Foundation].
+
+== Purpose
+
+This document explains the *intricacies* of running Apache HBase on Windows using Cygwin as an all-in-one single-node installation for testing and development. The HBase link:http://hbase.apache.org/apidocs/overview-summary.html#overview_description[Overview] and link:book.html#getting_started[QuickStart] guides, on the other hand, go a long way in explaining how to set up link:http://hadoop.apache.org/hbase[HBase] in more complex deployment scenarios.
+
+== Installation
+
+Running Apache HBase on Windows requires three technologies:
+
+* Java
+* Cygwin
+* SSH
+
+The following paragraphs detail the installation of each of the aforementioned technologies.
+
+=== Java
+
+HBase depends on the link:http://java.sun.com/javase/6/[Java Platform, Standard Edition, 6 Release]. So the target system has to be provided with at least the Java Runtime Environment (JRE); however, if the system will also be used for development, the Java Development Kit (JDK) is preferred. You can download the latest versions of both from link:http://java.sun.com/javase/downloads/index.jsp[Sun's download page]. Installation is a simple GUI wizard that guides you through the process.
+
+=== Cygwin
+
+Cygwin is probably the oddest technology in this solution stack. It provides a dynamic link library that emulates most of a *nix environment on Windows. On top of that, a whole bunch of the most common *nix tools are supplied. Combined, the DLL and the tools form a very *nix-alike environment on Windows.
+
+For installation, Cygwin provides the link:http://cygwin.com/setup.exe[`setup.exe` utility] that tracks the versions of all installed components on the target system and provides the mechanism for installing or updating everything from the mirror sites of Cygwin.
+
+To support installation, the `setup.exe` utility uses 2 directories on the target system: the *Root* directory for Cygwin (defaults to _C:\cygwin_), which will become _/_ within the eventual Cygwin installation, and the *Local Package* directory (e.g. _C:\cygsetup_), which is the cache where `setup.exe` stores the packages before they are installed. The cache must not be the same folder as the Cygwin root.
+
+Perform the following steps to install Cygwin; they are elaborately detailed in the link:http://cygwin.com/cygwin-ug-net/setup-net.html[2nd chapter] of the link:http://cygwin.com/cygwin-ug-net/cygwin-ug-net.html[Cygwin User's Guide].
+
+. Make sure you have `Administrator` privileges on the target system.
+. Choose and create your *Root* and *Local Package* directories. A good suggestion is to use `C:\cygwin\root` and `C:\cygwin\setup` folders.
+. Download the `setup.exe` utility and save it to the *Local Package* directory. Run the `setup.exe` utility.
+.. Choose the `Install from Internet` option.
+.. Choose your *Root* and *Local Package* folders.
+.. Select an appropriate mirror.
+.. Don't select any additional packages yet, as we only want to install Cygwin for now.
+.. Wait for download and install.
+.. Finish the installation.
+. Optionally, you can now also add a shortcut to your Start menu pointing to the `setup.exe` utility in the *Local Package* folder.
+. Add a `CYGWIN_HOME` system-wide environment variable that points to your *Root* directory.
+. Add `%CYGWIN_HOME%\bin` to the end of your `PATH` environment variable.
+. Reboot the system after making changes to the environment variables; otherwise the OS will not be able to find the Cygwin utilities.
+. Test your installation by running your freshly created shortcuts or the `Cygwin.bat` command in the *Root* folder. You should end up in a terminal window that is running a link:http://www.gnu.org/software/bash/manual/bashref.html[Bash shell]. Test the shell by issuing the following commands:
+.. `cd /` should take you to the *Root* directory in Cygwin.
+.. The `ls` command should list all files and folders in the current directory.
+.. Use the `exit` command to end the terminal.
+. When needed, to *uninstall* Cygwin you can simply delete the *Root* and *Local Package* directories, and the *shortcuts* that were created during installation.
+
+=== SSH
+
+HBase (and Hadoop) rely on link:http://nl.wikipedia.org/wiki/Secure_Shell[*SSH*] for interprocess/inter-node *communication* and launching *remote commands*. SSH will be provisioned on the target system via Cygwin, which supports running Cygwin programs as *Windows services*!
+
+. Rerun the `setup.exe` utility.
+. Leave all parameters as is, skipping through the wizard using the `Next` button until the `Select Packages` panel is shown.
+. Maximize the window and click the `View` button to toggle to the list view, which is ordered alphabetically on `Package`, making it easier to find the packages we'll need.
+. Select the following packages by clicking the status word (normally `Skip`) so each is marked for installation. Use the `Next` button to download and install the packages.
+.. `OpenSSH`
+.. `tcp_wrappers`
+.. `diffutils`
+.. `zlib`
+. Wait for the install to complete and finish the installation.
+
+=== HBase
+
+Download the *latest release* of Apache HBase from link:http://www.apache.org/dyn/closer.cgi/hbase/. As the Apache HBase distributable is just a zipped archive, installation is as simple as unpacking the archive so it ends up in its final *installation* directory. Notice that HBase has to be installed in Cygwin; a good directory suggestion is to use `/usr/local/` (or `[*Root* directory]\usr\local` in Windows slang). You should end up with a `/usr/local/hbase-_version_` installation in Cygwin.
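+
+For example, a minimal sketch of the unpacking step from a Cygwin terminal (the download location and _version_ placeholder are illustrative; substitute those of the release you actually downloaded):
+
+----
+cd /usr/local
+# Unpack the downloaded tarball; Cygwin ships GNU tar
+tar -xzf /cygdrive/c/Users/_you_/Downloads/hbase-_version_-bin.tar.gz
+# Verify the final installation directory
+ls /usr/local/hbase-_version_
+----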
+
+This finishes the installation. Next we go on with the configuration.
+
+== Configuration
+
+There are 3 parts left to configure: *Java, SSH and HBase* itself. The following paragraphs explain each topic in detail.
+
+=== Java
+
+One important thing to remember in shell scripting in general (i.e. *nix and Windows) is that managing, manipulating and assembling path names that contain spaces can be very hard, due to the need to escape and quote those characters and strings. So we try to stay away from spaces in path names. *nix environments can help us out here very easily by using *symbolic links*.
+
+. Create a link in `/usr/local` to the Java home directory by using the following command and substituting the name of your chosen Java environment: +
+----
+ln -s /cygdrive/c/Program\ Files/Java/_jre name_ /usr/local/_jre name_
+----
+. Test your Java installation by changing directories to your Java folder via `cd /usr/local/_jre name_` and issuing the command `./bin/java -version`. This should output the version of the chosen JRE.
+
+=== SSH
+
+Configuring *SSH* is quite elaborate, but primarily a question of launching it by default as a *Windows service*.
+
+. On Windows Vista and above, make sure you run the Cygwin shell with *elevated privileges*, by right-clicking on the shortcut and using `Run as Administrator`.
+. First of all, we have to make sure the *rights on some crucial files* are correct. Use the commands underneath. You can verify all rights by using the `ls -l` command on the different files. Also, the shell's `TAB` auto-completion is extremely handy in these situations.
+.. `chmod +r /etc/passwd` to make the passwords file readable for all
+.. `chmod u+w /etc/passwd` to make the passwords file writable for the owner
+.. `chmod +r /etc/group` to make the groups file readable for all
+.. `chmod u+w /etc/group` to make the groups file writable for the owner
+.. `chmod 755 /var` to make the var folder writable to owner and readable and executable to all
+. Edit the */etc/hosts.allow* file using your favorite editor (why not `vi` in the shell!) and make sure the following two lines are in there before the `PARANOID` line: +
+----
+ALL : localhost 127.0.0.1/32 : allow
+ALL : [::1]/128 : allow
+----
+. Next we have to *configure SSH* by using the script `ssh-host-config`.
+.. If this script asks to overwrite an existing `/etc/ssh_config`, answer `yes`.
+.. If this script asks to overwrite an existing `/etc/sshd_config`, answer `yes`.
+.. If this script asks to use privilege separation, answer `yes`.
+.. If this script asks to install `sshd` as a service, answer `yes`. Make sure you started your shell as Administrator!
+.. If this script asks for the CYGWIN value, just `enter` as the default is `ntsec`.
+.. If this script asks to create the `sshd` account, answer `yes`.
+.. If this script asks to use a different user name as service account, answer `no` as the default will suffice.
+.. If this script asks to create the `cyg_server` account, answer `yes`. Enter a password for the account.
+. *Start the SSH service* using `net start sshd` or `cygrunsrv --start sshd`. Notice that `cygrunsrv` is the utility that makes the process run as a Windows service. Confirm that you see a message stating that `the CYGWIN sshd service was started successfully.`
+. Harmonize the Windows and Cygwin *user accounts* by using the commands: +
+----
+mkpasswd -cl > /etc/passwd
+mkgroup --local > /etc/group
+----
+. Test the *installation of SSH*:
+.. Open a new Cygwin terminal.
+.. Use the command `whoami` to verify your userID.
+.. Issue an `ssh localhost` to connect to the system itself.
+.. Answer `yes` when presented with the server's fingerprint.
+.. Issue your password when prompted.
+.. Test a few commands in the remote session.
+.. The `exit` command should take you back to your first shell in Cygwin.
+. Issuing `exit` again should terminate the Cygwin shell.
+
+=== HBase
+
+If all previous configurations are working properly, we just need some tinkering at the *HBase config* files to properly resolve on Windows/Cygwin. All files and paths referenced here start from the HBase [*installation* directory] as the working directory.
+
+. HBase uses the `./conf/hbase-env.sh` file to configure its dependencies on the runtime environment. Copy and uncomment the following lines just underneath their originals, changing them to fit your environment. They should read something like: +
+----
+export JAVA_HOME=/usr/local/_jre name_
+export HBASE_IDENT_STRING=$HOSTNAME
+----
+. HBase uses the `./conf/hbase-default.xml` file for configuration. Some properties do not resolve to existing directories because the JVM runs on Windows. This is the major issue to keep in mind when working with Cygwin: within the shell all paths are *nix-alike, hence relative to the root `/`. However, every parameter that is to be consumed within the Windows processes themselves needs to be a Windows setting, hence `C:\`-alike. Change the following properties in the configuration file, adjusting paths where necessary to conform with your own installation:
+.. `hbase.rootdir` must read e.g. `file:///C:/cygwin/root/tmp/hbase/data`
+.. `hbase.tmp.dir` must read `C:/cygwin/root/tmp/hbase/tmp`
+.. `hbase.zookeeper.quorum` must read `127.0.0.1` because for some reason `localhost` doesn't seem to resolve properly on Cygwin.
+. Make sure the configured `hbase.rootdir` and `hbase.tmp.dir` *directories exist* and have the proper *rights* set up, e.g. by issuing a `chmod 777` on them, as in the sketch below.
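+
+A minimal sketch of that last step, assuming the example paths above (with the *Root* directory at _C:\cygwin\root_ as suggested earlier, the Windows path `C:/cygwin/root/tmp/hbase` is simply `/tmp/hbase` inside the Cygwin shell):
+
+----
+mkdir -p /tmp/hbase/data /tmp/hbase/tmp
+chmod 777 /tmp/hbase/data /tmp/hbase/tmp
+----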
+
+== Testing
+
+This should conclude the installation and configuration of Apache HBase on Windows using Cygwin. So it's time *to test it*.
+
+. Start a Cygwin *terminal*, if you haven't already.
+. Change directory to the HBase *installation* using `cd /usr/local/hbase-_version_`, preferably using auto-completion.
+. *Start HBase* using the command `./bin/start-hbase.sh`.
+.. When prompted to accept the SSH fingerprint, answer `yes`.
+.. When prompted, provide your password. Maybe multiple times.
+.. When the command completes, the HBase server should have started.
+.. However, to be absolutely certain, check the logs in the `./logs` directory for any exceptions.
+. Next we *start the HBase shell* using the command `./bin/hbase shell`.
+. We run some simple *test commands*:
+.. Create a simple table using the command `create 'test', 'data'`.
+.. Verify the table exists using the command `list`.
+.. Insert data into the table using e.g. +
+----
+put 'test', 'row1', 'data:1', 'value1'
+put 'test', 'row2', 'data:2', 'value2'
+put 'test', 'row3', 'data:3', 'value3'
+----
+.. List all rows in the table using the command `scan 'test'`, which should list all the rows previously inserted. Notice how 3 new columns were added without changing the schema!
+.. Finally we get rid of the table by issuing `disable 'test'` followed by `drop 'test'`, verified by `list`, which should give an empty listing.
+. *Leave the shell* by issuing `exit`.
+. To *stop the HBase server* issue the `./bin/stop-hbase.sh` command. And wait for it to complete! Killing the process might corrupt your data on disk.
+. In case of *problems*,
+.. Verify the HBase logs in the `./logs` directory.
+.. Try to fix the problem.
+.. Get help on the forums or IRC (`#hbase@freenode.net`). People are very active and keen to help out!
+.. Stop and retest the server.
+
+== Conclusion
+
+Now your *HBase* server is running; *start coding* and build that next killer app on this particular, but scalable, datastore!

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/export_control.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/export_control.adoc b/src/site/asciidoc/export_control.adoc
new file mode 100644
index 0000000..f6e5e18
--- /dev/null
+++ b/src/site/asciidoc/export_control.adoc
@@ -0,0 +1,44 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+
+= Export Control
+
+This distribution uses or includes cryptographic software. The country in
+which you currently reside may have restrictions on the import, possession,
+use, and/or re-export to another country, of encryption software. BEFORE
+using any encryption software, please check your country's laws, regulations
+and policies concerning the import, possession, or use, and re-export of
+encryption software, to see if this is permitted. See the
+link:http://www.wassenaar.org/[Wassenaar Arrangement] for more
+information.
+
+The U.S. Government Department of Commerce, Bureau of Industry and Security
+(BIS), has classified this software as Export Commodity Control Number (ECCN)
+5D002.C.1, which includes information security software using or performing
+cryptographic functions with asymmetric algorithms. The form and manner of this
+Apache Software Foundation distribution makes it eligible for export under the
+License Exception ENC Technology Software Unrestricted (TSU) exception (see the
+BIS Export Administration Regulations, Section 740.13) for both object code and
+source code.
+
+Apache HBase uses the built-in Java cryptography libraries. See Oracle's
+information regarding
+link:http://www.oracle.com/us/products/export/export-regulations-345813.html[Java cryptographic export regulations]
+for more details.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/index.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/index.adoc b/src/site/asciidoc/index.adoc
new file mode 100644
index 0000000..9b31c49
--- /dev/null
+++ b/src/site/asciidoc/index.adoc
@@ -0,0 +1,75 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+= Apache HBase&#153; Home
+
+.Welcome to Apache HBase(TM)
+link:http://www.apache.org/[Apache HBase(TM)] is the link:http://hadoop.apache.org[Hadoop] database, a distributed, scalable, big data store.
+
+.When Would I Use Apache HBase?
+Use Apache HBase when you need random, realtime read/write access to your Big Data. +
+This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
+
+Apache HBase is an open-source, distributed, versioned, non-relational database modeled after Google's link:http://research.google.com/archive/bigtable.html[Bigtable: A Distributed Storage System for Structured Data] by Chang et al.
+
+Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
+
+.Features
+- Linear and modular scalability.
+- Strictly consistent reads and writes.
+- Automatic and configurable sharding of tables.
+- Automatic failover support between RegionServers.
+- Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables.
+- Easy to use Java API for client access.
+- Block cache and Bloom Filters for real-time queries.
+- Query predicate push down via server-side Filters.
+- Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options.
+- Extensible JRuby-based (JIRB) shell.
+- Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX.
+
+.Where Can I Get More Information?
+See the link:book.html#arch.overview[Architecture Overview], the link:book.html#faq[FAQ] and the other documentation links at the top!
+
+.Export Control
+The HBase distribution includes cryptographic software. See the link:export_control.html[export control notice].
+
+== News
+Feb 17, 2015:: link:http://www.meetup.com/hbaseusergroup/events/219260093/[HBase meetup around Strata+Hadoop World] in San Jose
+
+January 15th, 2015:: link:http://www.meetup.com/hbaseusergroup/events/218744798/[HBase meetup @ AppDynamics] in San Francisco
+
+November 20th, 2014::  link:http://www.meetup.com/hbaseusergroup/events/205219992/[HBase meetup @ WANdisco] in San Ramon
+
+October 27th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/207386102/[HBase Meetup @ Apple] in Cupertino
+
+October 15th, 2014:: link:http://www.meetup.com/HBase-NYC/events/207655552[HBase Meetup @ Google] on the night before Strata/HW in NYC
+
+September 25th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/203173692/[HBase Meetup @ Continuuity] in Palo Alto
+
+August 28th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/197773762/[HBase Meetup @ Sift Science] in San Francisco
+
+July 17th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/190994082/[HBase Meetup @ HP] in Sunnyvale
+
+June 5th, 2014:: link:http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179081342/[HBase BOF at Hadoop Summit], San Jose Convention Center
+
+May 5th, 2014:: link:http://www.hbasecon.com[HBaseCon2014] at the Hilton San Francisco on Union Square
+
+March 12th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/160757912/[HBase Meetup @ Ancestry.com] in San Francisco
+
+View link:old_news.html[Old News]

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/metrics.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/metrics.adoc b/src/site/asciidoc/metrics.adoc
new file mode 100644
index 0000000..41db2a0
--- /dev/null
+++ b/src/site/asciidoc/metrics.adoc
@@ -0,0 +1,101 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+= Apache HBase (TM) Metrics
+
+== Introduction
+Apache HBase (TM) emits Hadoop link:http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html[metrics].
+
+== Setup
+
+First read up on Hadoop link:http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html[metrics].
+
+If you are using Ganglia, the link:http://wiki.apache.org/hadoop/GangliaMetrics[GangliaMetrics] wiki page is a useful read.
+
+To have HBase emit metrics, edit `$HBASE_HOME/conf/hadoop-metrics.properties` and enable metric 'contexts' per plugin. As of this writing, Hadoop supports *file* and *ganglia* plugins. Yes, the HBase metrics file is named _hadoop-metrics_ rather than _hbase-metrics_ because, currently at least, the Hadoop metrics system has the properties filename hardcoded. Per metrics _context_, comment out the NullContext and enable one or more plugins instead, as in the sketch below.
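+
+For instance, a minimal sketch that switches the _hbase_ context from the null plugin to the *file* plugin (class and property names as in the stock Hadoop metrics v1 package; the output path is illustrative):
+
+[source,bash]
+----
+# Comment out the null context for the "hbase" context...
+# hbase.class=org.apache.hadoop.metrics.spi.NullContext
+# ...and write metrics to a local file every 60 seconds instead
+hbase.class=org.apache.hadoop.metrics.file.FileContext
+hbase.period=60
+hbase.fileName=/tmp/metrics_hbase.log
+----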
+
+If you enable the _hbase_ context, on regionservers you'll see total requests since last
+metric emission, count of regions and storefiles as well as a count of memstore size.
+On the master, you'll see a count of the cluster's requests.
+
+Enabling the _rpc_ context is good if you are interested in seeing
+metrics on each HBase RPC method invocation (counts and time taken).
+
+The _jvm_ context is useful for long-term stats on running HBase JVMs -- memory used, thread counts, etc. As of this writing, if more than one JVM is running and emitting metrics, at least in Ganglia, the stats are aggregated rather than reported per instance.
+
+== Using with JMX
+
+In addition to the standard output contexts supported by the Hadoop
+metrics package, you can also export HBase metrics via Java Management
+Extensions (JMX).  This will allow viewing HBase stats in JConsole or
+any other JMX client.
+
+=== Enable HBase stats collection
+
+To enable JMX support in HBase, first edit `$HBASE_HOME/conf/hadoop-metrics.properties` to support metrics refreshing. (If you're running 0.94.1 and above, or have already configured `hadoop-metrics.properties` for another output context, you can skip this step.)
+[source,bash]
+----
+# Configuration of the "hbase" context for null
+hbase.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+hbase.period=60
+
+# Configuration of the "jvm" context for null
+jvm.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+jvm.period=60
+
+# Configuration of the "rpc" context for null
+rpc.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+rpc.period=60
+----
+
+=== Setup JMX Remote Access
+
+For remote access, you will need to configure JMX remote passwords and access profiles. Create the files:
+
+`$HBASE_HOME/conf/jmxremote.passwd` (set permissions to 600):: +
+----
+monitorRole monitorpass
+controlRole controlpass
+----
+
+`$HBASE_HOME/conf/jmxremote.access`:: +
+----
+monitorRole readonly
+controlRole readwrite
+----
+
+=== Configure JMX in HBase startup
+
+Finally, edit the `$HBASE_HOME/conf/hbase-env.sh` script to add JMX support:
+[source,bash]
+----
+HBASE_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false"
+HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.password.file=$HBASE_HOME/conf/jmxremote.passwd"
+HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.access.file=$HBASE_HOME/conf/jmxremote.access"
+
+export HBASE_MASTER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10101"
+export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10102"
+----
+
+After restarting the processes you want to monitor, you should now be able to run JConsole (included with the JDK since JDK 5.0) to view the statistics via JMX.  HBase MBeans are exported under the *`hadoop`* domain in JMX.
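+
+For example, with the ports configured above, JConsole can be pointed straight at a local master from any shell (`localhost` and the port numbers are simply the values assumed on this page):
+
+[source,bash]
+----
+jconsole localhost:10101   # master; use port 10102 for a regionserver
+----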
+
+
+== Understanding HBase Metrics
+
+For more information on understanding HBase metrics, see the link:book.html#hbase_metrics[metrics section] in the Apache HBase Reference Guide.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/old_news.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/old_news.adoc b/src/site/asciidoc/old_news.adoc
new file mode 100644
index 0000000..75179e0
--- /dev/null
+++ b/src/site/asciidoc/old_news.adoc
@@ -0,0 +1,120 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+= Old Apache HBase (TM) News
+
+February 10th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/163139322/[HBase Meetup @ Continuuity] in Palo Alto
+
+January 30th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/158491762/[HBase Meetup @ Apple] in Cupertino
+
+January 30th, 2014:: link:http://www.meetup.com/Los-Angeles-HBase-User-group/events/160560282/[Los Angeles HBase User Group] in El Segundo
+
+October 24th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/140759692/[HBase User] and link:http://www.meetup.com/hackathon/events/144366512/[Developer] Meetup at HortonWorks in Palo Alto
+
+September 26, 2013:: link:http://www.meetup.com/hbaseusergroup/events/135862292/[HBase Meetup at Arista Networks] in San Francisco
+
+August 20th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/120534362/[HBase Meetup at Flurry] in San Francisco
+
+July 16th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/119929152/[HBase Meetup at Twitter] in San Francisco
+
+June 25th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/119154442/[Hadoop Summit Meetup] at San Jose Convention Center
+
+June 14th, 2013:: link:http://kijicon.eventbrite.com/[KijiCon: Building Big Data Apps] in San Francisco.
+
+June 13th, 2013:: link:http://www.hbasecon.com/[HBaseCon2013] in San Francisco.  Submit an Abstract!
+
+June 12th, 2013:: link:http://www.meetup.com/hackathon/events/123403802/[HBaseConHackAthon] at the Cloudera office in San Francisco.
+
+April 11th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/103587852/[HBase Meetup at AdRoll] in San Francisco
+
+February 28th, 2013:: link:http://www.meetup.com/hbaseusergroup/events/96584102/[HBase Meetup at Intel Mission Campus]
+
+February 19th, 2013:: link:http://www.meetup.com/hackathon/events/103633042/[Developers PowWow] at HortonWorks' new digs
+
+January 23rd, 2013:: link:http://www.meetup.com/hbaseusergroup/events/91381312/[HBase Meetup at WibiData World HQ!]
+
+December 4th, 2012:: link:http://www.meetup.com/hackathon/events/90536432/[0.96 Bug Squashing and Testing Hackathon] at Cloudera, SF.
+
+October 29th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/82791572/[HBase User Group Meetup] at Wize Commerce in San Mateo.
+
+October 25th, 2012:: link:http://www.meetup.com/HBase-NYC/events/81728932/[Strata/Hadoop World HBase Meetup.] in NYC
+
+September 11th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/80621872/[Contributor's Pow-Wow at HortonWorks HQ.]
+
+August 8th, 2012:: link:http://www.apache.org/dyn/closer.cgi/hbase/[Apache HBase 0.94.1 is available for download]
+
+June 15th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/59829652/[Birds-of-a-feather] in San Jose, the day after the link:http://hadoopsummit.org[Hadoop Summit]
+
+May 23rd, 2012:: link:http://www.meetup.com/hackathon/events/58953522/[HackConAthon] in Palo Alto
+
+May 22nd, 2012:: link:http://www.hbasecon.com[HBaseCon2012] in San Francisco
+
+March 27th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/56021562/[Meetup @ StumbleUpon] in San Francisco
+
+January 19th, 2012:: link:http://www.meetup.com/hbaseusergroup/events/46702842/[Meetup @ EBay]
+
+January 23rd, 2012:: Apache HBase 0.92.0 released. link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+
+December 23rd, 2011:: Apache HBase 0.90.5 released. link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+
+November 29th, 2011:: link:http://www.meetup.com/hackathon/events/41025972/[Developer Pow-Wow in SF] at Salesforce HQ
+
+November 7th, 2011:: link:http://www.meetup.com/hbaseusergroup/events/35682812/[HBase Meetup in NYC (6PM)] at the AppNexus office
+
+August 22nd, 2011:: link:http://www.meetup.com/hbaseusergroup/events/28518471/[HBase Hackathon (11AM) and Meetup (6PM)] at FB in PA
+
+June 30th, 2011:: link:http://www.meetup.com/hbaseusergroup/events/20572251/[HBase Contributor Day], the day after the link:http://developer.yahoo.com/events/hadoopsummit2011/[Hadoop Summit] hosted by Y!
+
+June 8th, 2011:: link:http://berlinbuzzwords.de/wiki/hbase-workshop-and-hackathon[HBase Hackathon] in Berlin, to coincide with link:http://berlinbuzzwords.de/[Berlin Buzzwords]
+
+May 19th, 2011:: Apache HBase 0.90.3 released. link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+
+April 12th, 2011:: Apache HBase 0.90.2 released. link:http://www.apache.org/dyn/closer.cgi/hbase/[Download it!]
+
+March 21st, 2011:: link:http://www.meetup.com/hackathon/events/16770852/[HBase 0.92 Hackathon at StumbleUpon, SF]
+
+February 22nd, 2011:: link:http://www.meetup.com/hbaseusergroup/events/16492913/[HUG12: February HBase User Group at StumbleUpon SF]
+
+December 13th, 2010:: link:http://www.meetup.com/hackathon/calendar/15597555/[HBase Hackathon: Coprocessor Edition]
+
+November 19th, 2010:: link:http://huguk.org/[Hadoop HUG in London] is all about Apache HBase
+
+November 15-19th, 2010:: link:http://www.devoxx.com/display/Devoxx2K10/Home[Devoxx] features HBase Training and multiple HBase presentations
+
+October 12th, 2010:: HBase-related presentations by core contributors and users at link:http://www.cloudera.com/company/press-center/hadoop-world-nyc/[Hadoop World 2010]
+
+October 11th, 2010:: link:http://www.meetup.com/hbaseusergroup/calendar/14606174/[HUG-NYC: HBase User Group NYC Edition] (Night before Hadoop World)
+
+June 30th, 2010:: link:http://www.meetup.com/hbaseusergroup/calendar/13562846/[Apache HBase Contributor Workshop] (Day after Hadoop Summit)
+
+May 10th, 2010:: Apache HBase graduates from Hadoop sub-project to Apache Top Level Project
+
+April 19, 2010:: Sign up for the link:http://www.meetup.com/hbaseusergroup/calendar/12689490/[HBase User Group Meeting, HUG10] hosted by Trend Micro
+
+March 10th, 2010:: link:http://www.meetup.com/hbaseusergroup/calendar/12689351/[HBase User Group Meeting, HUG9] hosted by Mozilla
+
+January 27th, 2010:: Sign up for the link:http://www.meetup.com/hbaseusergroup/calendar/12241393/[HBase User Group Meeting, HUG8], at StumbleUpon in SF
+
+November 2-6th, 2009:: link:http://dev.us.apachecon.com/c/acus2009/[ApacheCon] in Oakland. The Apache Foundation will be celebrating its 10th anniversary in beautiful Oakland by the Bay. Lots of good talks and meetups, including an HBase presentation by a couple of the lads.
+
+October 2nd, 2009:: HBase at Hadoop World in NYC. A few of us will be talking on Practical HBase out east at link:http://www.cloudera.com/hadoop-world-nyc[Hadoop World: NYC].
+
+September 8th, 2009:: Apache HBase 0.20.0 is faster, stronger, slimmer, and sweeter tasting than any previous Apache HBase release.  Get it off the link:http://www.apache.org/dyn/closer.cgi/hbase/[Releases] page.
+
+August 7th-9th, 2009:: HUG7 and HBase Hackathon at StumbleUpon in SF: Sign up for the link:http://www.meetup.com/hbaseusergroup/calendar/10950511/[HBase User Group Meeting, HUG7] or for the link:http://www.meetup.com/hackathon/calendar/10951718/[Hackathon] or for both (all are welcome!).
+
+June 2009:: HBase at HadoopSummit2009 and at NOSQL: See the link:https://hbase.apache.org/book.html#other.info.pres[presentations]
+
+March 3rd, 2009:: HUG6 -- link:http://www.meetup.com/hbaseusergroup/calendar/9764004/[HBase User Group 6]
+
+January 30th, 2009:: LA Hackathon: link:http://www.meetup.com/hbasela/calendar/9450876/[HBase January Hackathon Los Angeles] at link:http://streamy.com[Streamy] in Manhattan Beach

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/pseudo-distributed.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/pseudo-distributed.adoc b/src/site/asciidoc/pseudo-distributed.adoc
new file mode 100644
index 0000000..ec6f53d
--- /dev/null
+++ b/src/site/asciidoc/pseudo-distributed.adoc
@@ -0,0 +1,22 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+
+= Running Apache HBase (TM) in pseudo-distributed mode
+
+This page has been retired.  The contents have been moved to the link:book.html#distributed[Distributed Operation: Pseudo- and Fully-distributed modes] section in the Reference Guide.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/replication.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/replication.adoc b/src/site/asciidoc/replication.adoc
new file mode 100644
index 0000000..9089754
--- /dev/null
+++ b/src/site/asciidoc/replication.adoc
@@ -0,0 +1,22 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+= Apache HBase (TM) Replication
+
+This information has been moved to link:book.html#cluster_replication[the Cluster Replication] section of the link:book.html[Apache HBase Reference Guide].

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/resources.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/resources.adoc b/src/site/asciidoc/resources.adoc
new file mode 100644
index 0000000..5f2d5d4
--- /dev/null
+++ b/src/site/asciidoc/resources.adoc
@@ -0,0 +1,26 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+= Other Apache HBase (TM) Resources
+
+== Books
+HBase: The Definitive Guide:: link:http://shop.oreilly.com/product/0636920014348.do[HBase: The Definitive Guide, _Random Access to Your Planet-Size Data_] by Lars George. Publisher: O'Reilly Media, Released: August 2011, Pages: 556.
+
+HBase In Action:: link:http://www.manning.com/dimidukkhurana[HBase In Action] by Nick Dimiduk and Amandeep Khurana.  Publisher: Manning, MEAP Began: January 2012, Softbound print: Fall 2012, Pages: 350.
+
+HBase Administration Cookbook:: link:http://www.packtpub.com/hbase-administration-for-optimum-database-performance-cookbook/book[HBase Administration Cookbook] by Yifeng Jiang.  Publisher: PACKT Publishing, Release: Expected August 2012, Pages: 335.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/sponsors.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/sponsors.adoc b/src/site/asciidoc/sponsors.adoc
new file mode 100644
index 0000000..bf93557
--- /dev/null
+++ b/src/site/asciidoc/sponsors.adoc
@@ -0,0 +1,35 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+= Apache HBase(TM) Sponsors
+
+First off, thanks to link:http://www.apache.org/foundation/thanks.html[all who sponsor] our parent, the Apache Software Foundation.
+
+The companies below have been gracious enough to provide their commercial tool offerings free of charge to the Apache HBase(TM) project.
+
+* The crew at link:http://www.ej-technologies.com/[ej-technologies] have been letting us use link:http://www.ej-technologies.com/products/jprofiler/overview.html[JProfiler] for years now.
+
+* The lads at link:http://headwaysoftware.com/[headway software] have given us a license for link:http://headwaysoftware.com/products/?code=Restructure101[Restructure101] so we can untangle our interdependency mess.
+
+* link:http://www.yourkit.com[YourKit] allows us to use their link:http://www.yourkit.com/overview/index.jsp[Java Profiler].
+* Some of us use link:http://www.jetbrains.com/idea[IntelliJ IDEA] thanks to link:http://www.jetbrains.com/[JetBrains].
+* Thank you to Boris at link:http://www.vectorportal.com/[Vector Portal] for granting us a license on the image on which our logo is based.
+
+== Sponsoring the Apache Software Foundation
+To contribute to the Apache Software Foundation, a good idea in our opinion, see the link:http://www.apache.org/foundation/sponsorship.html[ASF Sponsorship] page.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/custom/project-info-report.properties
----------------------------------------------------------------------
diff --git a/src/site/custom/project-info-report.properties b/src/site/custom/project-info-report.properties
new file mode 100644
index 0000000..912339e
--- /dev/null
+++ b/src/site/custom/project-info-report.properties
@@ -0,0 +1,303 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#  http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
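+# These keys override the default labels that the Maven
+# project-info-reports plugin uses when generating the project site
+# reports. A minimal sketch of how a bundle like this one is typically
+# wired up in the POM (assuming the plugin's customBundle parameter;
+# treat the snippet as illustrative, not as this project's actual
+# configuration):
+#
+#   <plugin>
+#     <groupId>org.apache.maven.plugins</groupId>
+#     <artifactId>maven-project-info-reports-plugin</artifactId>
+#     <configuration>
+#       <customBundle>${project.basedir}/src/site/custom/project-info-report.properties</customBundle>
+#     </configuration>
+#   </plugin>
+#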
+report.cim.access                                                  = Access
+report.cim.anthill.intro                                           = Apache HBase&#8482; uses {Anthill, http://www.anthillpro.com/html/products/anthillos/}.
+report.cim.bamboo.intro                                            = Apache HBase&#8482; uses {Bamboo, http://www.atlassian.com/software/bamboo/}.
+report.cim.buildforge.intro                                        = Apache HBase&#8482; uses {Build Forge, http://www-306.ibm.com/software/awdtools/buildforge/enterprise/}.
+report.cim.continuum.intro                                         = Apache HBase&#8482; uses {Continuum, http://continuum.apache.org/}.
+report.cim.cruisecontrol.intro                                     = Apache HBase&#8482; uses {CruiseControl, http://cruisecontrol.sourceforge.net/}.
+report.cim.description                                             = These are the definitions of all continuous integration processes that build and test code on a frequent, regular basis.
+report.cim.general.intro                                           = Apache HBase&#8482; uses a Continuous Integration System.
+report.cim.hudson.intro                                            = Apache HBase&#8482; uses {Hudson, http://hudson-ci.org/}.
+report.cim.jenkins.intro                                           = Apache HBase&#8482; uses {Jenkins, http://jenkins-ci.org/}.
+report.cim.luntbuild.intro                                         = Apache HBase&#8482; uses {Luntbuild, http://luntbuild.javaforge.com/}.
+report.cim.travis.intro                                            = Apache HBase&#8482; uses {Travis CI, https://travis-ci.org/}.
+report.cim.name                                                    = Continuous Integration
+report.cim.nocim                                                   = No continuous integration management system is defined. Please check back at a later date.
+report.cim.notifiers.column.address                                = Address
+report.cim.notifiers.column.configuration                          = Configuration
+report.cim.notifiers.column.type                                   = Type
+report.cim.notifiers.intro                                         = Configuration for notifying developers/users when a build is unsuccessful, including user information and notification mode.
+report.cim.notifiers.nolist                                        = No notifiers are defined. Please check back at a later date.
+report.cim.notifiers.title                                         = Notifiers
+report.cim.nourl                                                   = No URL for the continuous integration system is defined.
+report.cim.overview.title                                          = Overview
+report.cim.title                                                   = Continuous Integration
+report.cim.url                                                     = This is a link to the continuous integration system used by the project:
+report.dependencies.column.artifactId                              = ArtifactId
+report.dependencies.column.classifier                              = Classifier
+report.dependencies.column.description                             = Description
+report.dependencies.column.groupId                                 = GroupId
+report.dependencies.column.license                                 = License
+report.dependencies.column.optional                                = Optional
+report.dependencies.column.isOptional                              = Yes
+report.dependencies.column.isNotOptional                           = No
+report.dependencies.column.type                                    = Type
+report.dependencies.column.url                                     = URL
+report.dependencies.column.version                                 = Version
+report.dependencies.description                                    = This is a list of the project's dependencies, with information on each dependency.
+report.dependencies.file.details.cell.debuginformation.yes         = Yes
+report.dependencies.file.details.cell.debuginformation.no          = No
+report.dependencies.file.details.column.classes                    = Classes
+report.dependencies.file.details.column.debuginformation           = Debug Information
+report.dependencies.file.details.column.entries                    = Entries
+report.dependencies.file.details.column.file                       = Filename
+report.dependencies.file.details.column.javaVersion                = Java Version
+report.dependencies.file.details.column.packages                   = Packages
+report.dependencies.file.details.column.sealed                     = Sealed
+report.dependencies.file.details.column.size                       = Size
+report.dependencies.file.details.column.size.gb                    = GB
+report.dependencies.file.details.column.size.mb                    = MB
+report.dependencies.file.details.column.size.kb                    = kB
+report.dependencies.file.details.columntitle.debuginformation      = Indicates whether these dependencies have been compiled with debug information.
+report.dependencies.file.details.title                             = Dependency File Details
+report.dependencies.file.details.total                             = Total
+report.dependencies.graph.tables.licenses                          = Licenses
+report.dependencies.graph.tables.unknown                           = Unknown
+report.dependencies.graph.title                                    = Apache HBase&#8482; Dependency Graph
+report.dependencies.graph.tree.title                               = Dependency Tree
+report.dependencies.intro.compile                                  = This is a list of compile dependencies for Apache HBase&#8482;. These dependencies are required to compile and run the application:
+report.dependencies.intro.provided                                 = This is a list of provided dependencies for Apache HBase&#8482;. These dependencies are required to compile the application, but should be provided by default when using the library:
+report.dependencies.intro.runtime                                  = This is a list of runtime dependencies for Apache HBase&#8482;. These dependencies are required to run the application:
+report.dependencies.intro.system                                   = This is a list of system dependencies for Apache HBase&#8482;. These dependencies are required to compile the application:
+report.dependencies.intro.test                                     = This is a list of test dependencies for Apache HBase&#8482;. These dependencies are only required to compile and run unit tests for the application:
+report.dependencies.name                                           = Dependencies
+report.dependencies.nolist                                         = There are no dependencies for Apache HBase&#8482;. It is a standalone application that does not depend on any other project.
+report.dependencies.repo.locations.artifact.breakdown              = Repository locations for each of the dependencies.
+report.dependencies.repo.locations.cell.release.disabled           = No
+report.dependencies.repo.locations.cell.release.enabled            = Yes
+report.dependencies.repo.locations.cell.snapshot.disabled          = No
+report.dependencies.repo.locations.cell.snapshot.enabled           = Yes
+report.dependencies.repo.locations.cell.blacklisted.disabled       = No
+report.dependencies.repo.locations.cell.blacklisted.enabled        = Yes
+report.dependencies.repo.locations.column.artifact                 = Artifact
+report.dependencies.repo.locations.column.blacklisted              = Blacklisted
+report.dependencies.repo.locations.column.release                  = Release
+report.dependencies.repo.locations.column.repoid                   = Repo ID
+report.dependencies.repo.locations.column.snapshot                 = Snapshot
+report.dependencies.repo.locations.column.url                      = URL
+report.dependencies.repo.locations.title                           = Dependency Repository Locations
+report.dependencies.title                                          = Apache HBase&#8482; Dependencies
+report.dependencies.unnamed                                        = Unnamed
+report.dependencies.transitive.intro                               = This is a list of transitive dependencies for Apache HBase&#8482;. Transitive dependencies are the dependencies of the project dependencies.
+report.dependencies.transitive.nolist                              = No transitive dependencies are required for Apache HBase&#8482;.
+report.dependencies.transitive.title                               = Apache HBase&#8482; Transitive Dependencies
+report.dependency-convergence.convergence.caption                  = Dependencies used in modules
+report.dependency-convergence.convergence.single.caption           = Dependencies used in Apache HBase&#8482;
+report.dependency-convergence.description                          = This is the convergence of dependency versions across the entire project and its sub-modules.
+report.dependency-convergence.legend                               = Legend:
+report.dependency-convergence.legend.different                     = At least one module has a differing version of the dependency or has SNAPSHOT dependencies.
+report.dependency-convergence.legend.shared                        = All modules/dependencies share one version of the dependency.
+report.dependency-convergence.name                                 = Dependency Convergence
+report.dependency-convergence.reactor.name                         = Reactor Dependency Convergence
+report.dependency-convergence.reactor.title                        = Reactor Dependency Convergence
+report.dependency-convergence.stats.artifacts                      = Number of unique artifacts (NOA):
+report.dependency-convergence.stats.caption                        = Statistics:
+report.dependency-convergence.stats.convergence                    = Convergence (NOD/NOA):
+report.dependency-convergence.stats.dependencies                   = Number of dependencies (NOD):
+report.dependency-convergence.stats.readyrelease                   = Ready for release (100 % convergence and no SNAPSHOTS):
+report.dependency-convergence.stats.readyrelease.error             = Error
+report.dependency-convergence.stats.readyrelease.error.convergence = There is less than 100 % convergence.
+report.dependency-convergence.stats.readyrelease.error.snapshots   = There are SNAPSHOT dependencies.
+report.dependency-convergence.stats.readyrelease.success           = Success
+report.dependency-convergence.stats.conflicting                    = Number of version-conflicting artifacts (NOC):
+report.dependency-convergence.stats.snapshots                      = Number of SNAPSHOT artifacts (NOS):
+report.dependency-convergence.stats.modules                        = Number of modules:
+report.dependency-convergence.title                                = Dependency Convergence
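+# Illustrative arithmetic for the statistics above (hypothetical numbers):
+# if NOD = 100 declared dependencies resolve to NOA = 105 unique artifacts,
+# convergence = NOD/NOA = 100/105, roughly 95 %, and the project is not
+# considered ready for release until the conflicting versions are aligned.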
+report.dependency-info.name                                        = Dependency Information
+report.dependency-info.title                                       = Dependency Information
+report.dependency-info.description                                 = These are instructions for including Apache HBase&#8482; as a dependency using various dependency management tools.
+report.index.nodescription                                         = There is currently no description associated with Apache HBase&#8482;.
+report.index.title                                                 = About Apache HBase&#8482;
+report.issuetracking.bugzilla.intro                                = Apache HBase&#8482; uses {Bugzilla, http://www.bugzilla.org/}.
+report.issuetracking.custom.intro                                  = Apache HBase&#8482; uses %issueManagementSystem% to manage its issues.
+report.issuetracking.description                                   = Apache HBase&#8482; uses the following issue management system(s).
+report.issuetracking.general.intro                                 = Apache HBase&#8482; uses an Issue Management System to manage its issues.
+report.issuetracking.intro                                         = Issues, bugs, and feature requests should be submitted to the following issue tracking system for Apache HBase&#8482;.
+report.issuetracking.jira.intro                                    = Apache HBase&#8482; uses {JIRA, http://www.atlassian.com/software/jira}.
+report.issuetracking.name                                          = Issue Tracking
+report.issuetracking.noissueManagement                             = No issue management system is defined. Please check back at a later date.
+report.issuetracking.overview.title                                = Overview
+report.issuetracking.scarab.intro                                  = Apache HBase&#8482; uses {Scarab, http://scarab.tigris.org/}.
+report.issuetracking.title                                         = Issue Tracking
+report.license.description                                         = Apache HBase&#8482; uses the following project license(s).
+report.license.multiple                                            = Apache HBase&#8482; is provided under multiple licenses:
+report.license.name                                                = Apache HBase&#8482; License
+report.license.nolicense                                           = No license is defined for Apache HBase&#8482;.
+report.license.overview.intro                                      = This is the license for the Apache HBase project itself, but not necessarily its dependencies.
+report.license.overview.title                                      = Overview
+report.license.originalText                                        = [Original text]
+report.license.copy                                                = Copy of the license follows:
+report.license.title                                               = Apache HBase&#8482; License
+report.license.unnamed                                             = Unnamed
+report.mailing-lists.column.archive                                = Archive
+report.mailing-lists.column.name                                   = Name
+report.mailing-lists.column.otherArchives                          = Other Archives
+report.mailing-lists.column.post                                   = Post
+report.mailing-lists.column.subscribe                              = Subscribe
+report.mailing-lists.column.unsubscribe                            = Unsubscribe
+report.mailing-lists.description                                   = These are Apache HBase&#8482;'s mailing lists.
+report.mailing-lists.intro                                         = For each list, links are provided to subscribe, unsubscribe, and view archives.
+report.mailing-lists.name                                          = Mailing Lists
+report.mailing-lists.nolist                                        = There are no mailing lists currently associated with Apache HBase&#8482;.
+report.mailing-lists.title                                         = Apache HBase&#8482; Mailing Lists
+report.scm.accessbehindfirewall.cvs.intro                          = If you are behind a firewall that blocks HTTP access to the CVS repository, you can use the {CVSGrab, http://cvsgrab.sourceforge.net/} web interface to check out the source code.
+report.scm.accessbehindfirewall.general.intro                      = Refer to the documentation of the SCM used for more information about access behind a firewall.
+report.scm.accessbehindfirewall.svn.intro                          = If you are behind a firewall that blocks HTTP access to the Subversion repository, you can try to access it via the developer connection:
+report.scm.accessbehindfirewall.title                              = Access from Behind a Firewall
+report.scm.accessthroughtproxy.svn.intro1                          = The Subversion client can go through a proxy, if you configure it to do so. First, edit your "servers" configuration file to indicate which proxy to use. The file's location depends on your operating system. On Linux or Unix it is located in the directory "~/.subversion". On Windows it is in "%APPDATA%\\Subversion". (Try "echo %APPDATA%", note this is a hidden directory.)
+report.scm.accessthroughtproxy.svn.intro2                          = There are comments in the file explaining what to do. If you don't have that file, get the latest Subversion client and run any command; this will cause the configuration directory and template files to be created.
+report.scm.accessthroughtproxy.svn.intro3                          = Example: Edit the 'servers' file and add something like:
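+# A minimal sketch of the proxy settings referred to above (hypothetical
+# host and port; the keys are standard Subversion "servers" file options):
+#
+#   [global]
+#   http-proxy-host = proxy.example.com
+#   http-proxy-port = 3128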
+report.scm.accessthroughtproxy.title                               = Access Through a Proxy
+report.scm.anonymousaccess.cvs.intro                               = Apache HBase&#8482;'s CVS repository can be checked out through anonymous CVS with the following instruction set. When prompted for a password for anonymous, simply press the Enter key.
+report.scm.anonymousaccess.general.intro                           = Refer to the documentation of the SCM used for more information about anonymous checkout. The connection URL is:
+report.scm.anonymousaccess.git.intro                               = The source can be checked out anonymously from Git with this command (See {http://git-scm.com/docs/git-clone,http://git-scm.com/docs/git-clone}):
+report.scm.anonymousaccess.hg.intro                                = The source can be checked out anonymously from Mercurial with this command (See {http://www.selenic.com/mercurial/hg.1.html#clone,http://www.selenic.com/mercurial/hg.1.html#clone}):
+report.scm.anonymousaccess.svn.intro                               = The source can be checked out anonymously from Subversion with this command:
+report.scm.anonymousaccess.title                                   = Anonymous Access
+report.scm.clearcase.intro                                         = Apache HBase&#8482; uses {ClearCase, http://www-306.ibm.com/software/awdtools/clearcase/} to manage its source code. Information on ClearCase use can be found at {http://www.redbooks.ibm.com/redbooks/pdfs/sg246399.pdf, http://www.redbooks.ibm.com/redbooks/pdfs/sg246399.pdf}.
+report.scm.cvs.intro                                               = Apache HBase&#8482; uses {Concurrent Versions System, http://www.cvshome.org/} to manage its source code. Instructions on CVS use can be found at {http://cvsbook.red-bean.com/, http://cvsbook.red-bean.com/}.
+report.scm.description                                             = This document lists ways to access the online source repository.
+report.scm.devaccess.clearcase.intro                               = Only project developers can access the ClearCase tree via this method. Substitute username with the proper value.
+report.scm.devaccess.cvs.intro                                     = Only project developers can access the CVS tree via this method. Substitute username with the proper value.
+report.scm.devaccess.general.intro                                 = Refer to the documentation of the SCM used for more information about developer checkout. The connection URL is:
+report.scm.devaccess.git.intro                                     = Only project developers can access the Git tree via this method (See {http://git-scm.com/docs/git-clone,http://git-scm.com/docs/git-clone}).
+report.scm.devaccess.hg.intro                                      = Only project developers can access the Mercurial tree via this method (See {http://www.selenic.com/mercurial/hg.1.html#clone,http://www.selenic.com/mercurial/hg.1.html#clone}).
+report.scm.devaccess.perforce.intro                                = Only project developers can access the Perforce tree via this method. Substitute username and password with the proper values.
+report.scm.devaccess.starteam.intro                                = Only project developers can access the Starteam tree via this method. Substitute username with the proper value.
+report.scm.devaccess.svn.intro1.https                              = Everyone can access the Subversion repository via HTTP, but committers must check out the Subversion repository via HTTPS.
+report.scm.devaccess.svn.intro1.other                              = Committers must check out the Subversion repository.
+report.scm.devaccess.svn.intro1.svn                                = Committers must check out the Subversion repository via SVN.
+report.scm.devaccess.svn.intro1.svnssh                             = Committers must check out the Subversion repository via SVN+SSH.
+report.scm.devaccess.svn.intro2                                    = To commit changes to the repository, execute the following command (svn will prompt you for your password):
+report.scm.devaccess.title                                         = Developer Access
+report.scm.general.intro                                           = Apache HBase&#8482; uses a Source Content Management System to manage its source code.
+report.scm.name                                                    = Source Repository
+report.scm.noscm                                                   = No source configuration management system is defined. Please check back at a later date.
+report.scm.overview.title                                          = Overview
+report.scm.git.intro                                               = Apache HBase&#8482; uses {Git, http://git-scm.com/} to manage its source code. Instructions on Git use can be found at {http://git-scm.com/documentation,http://git-scm.com/documentation}.
+report.scm.hg.intro                                                = Apache HBase&#8482; uses {Mercurial, http://mercurial.selenic.com/wiki/} to manage its source code. Instructions on Mercurial use can be found at {http://hgbook.red-bean.com/read/, http://hgbook.red-bean.com/read/}.
+report.scm.perforce.intro                                          = Apache HBase&#8482; uses {Perforce, http://www.perforce.com/} to manage its source code. Instructions on Perforce use can be found at {http://www.perforce.com/perforce/doc.051/manuals/cmdref/index.html, http://www.perforce.com/perforce/doc.051/manuals/cmdref/index.html}.
+report.scm.starteam.intro                                          = Apache HBase&#8482; uses {Starteam, http://www.borland.com/us/products/starteam/} to manage its source code.
+report.scm.svn.intro                                               = Apache HBase&#8482; uses {Subversion, http://subversion.apache.org/} to manage its source code. Instructions on Subversion use can be found at {http://svnbook.red-bean.com/, http://svnbook.red-bean.com/}.
+report.scm.title                                                   = Source Repository
+report.scm.webaccess.nourl                                         = There is no browsable version of the source repository listed for Apache HBase&#8482;. Please check back again later.
+report.scm.webaccess.title                                         = Web Browser Access
+report.scm.webaccess.url                                           = The following is a link to a browsable version of the source repository:
+report.summary.build.artifactid                                    = ArtifactId
+report.summary.build.groupid                                       = GroupId
+report.summary.build.javaVersion                                   = Java Version
+report.summary.build.title                                         = Build Information
+report.summary.build.type                                          = Type
+report.summary.build.version                                       = Version
+report.summary.description                                         = This document lists other related information about Apache HBase&#8482;.
+report.summary.field                                               = Field
+report.summary.general.description                                 = Description
+report.summary.general.homepage                                    = Homepage
+report.summary.general.name                                        = Name
+report.summary.general.title                                       = Project Information
+report.summary.name                                                = Project Summary
+report.summary.organization.name                                   = Name
+report.summary.organization.title                                  = Project Organization
+report.summary.organization.url                                    = URL
+report.summary.noorganization                                      = Apache HBase&#8482; does not belong to an organization.
+report.summary.title                                               = Project Summary
+report.summary.value                                               = Value
+report.summary.download                                            = Download
+report.team-list.contributors.actualtime                           = Actual Time (GMT)
+report.team-list.contributors.email                                = Email
+report.team-list.contributors.intro                                = The following additional people have contributed to Apache HBase&#8482; by way of suggestions, patches, or documentation.
+report.team-list.contributors.image                                = Image
+report.team-list.contributors.name                                 = Name
+report.team-list.contributors.organization                         = Organization
+report.team-list.contributors.organizationurl                      = Organization URL
+report.team-list.contributors.properties                           = Properties
+report.team-list.contributors.roles                                = Roles
+report.team-list.contributors.timezone                             = Time Zone
+report.team-list.contributors.title                                = Contributors
+report.team-list.contributors.url                                  = URL
+report.team-list.description                                       = These are the members of the Apache HBase&#8482; project, the individuals who have contributed to the project in one form or another.
+report.team-list.developers.actualtime                             = Actual Time (GMT)
+report.team-list.developers.email                                  = Email
+report.team-list.developers.image                                  = Image
+report.team-list.developers.id                                     = Id
+report.team-list.developers.intro                                  = These are the developers with commit privileges that have directly contributed to the project in one way or another.
+report.team-list.developers.name                                   = Name
+report.team-list.developers.organization                           = Organization
+report.team-list.developers.organizationurl                        = Organization URL
+report.team-list.developers.properties                             = Properties
+report.team-list.developers.roles                                  = Roles
+report.team-list.developers.timezone                               = Time Zone
+report.team-list.developers.title                                  = Members
+report.team-list.developers.url                                    = URL
+report.team-list.intro.description1                                = A successful project requires many people to play many roles. Some members write code or documentation, while others are valuable as testers, submitting patches and suggestions.
+report.team-list.intro.description2                                = The team is composed of Members and Contributors. Members have direct access to the source of a project and actively evolve the code-base. Contributors improve the project through submission of patches and suggestions to the Members. The number of Contributors to the project is unbounded. Get involved today. All contributions to the project are greatly appreciated.
+report.team-list.intro.title                                       = The Team
+report.team-list.name                                              = Project Team
+report.team-list.nocontributor                                     = Apache HBase&#8482; does not maintain a list of contributors.
+report.team-list.nodeveloper                                       = Apache HBase&#8482; does not maintain a list of developers.
+report.team-list.title                                             = Project Team
+report.dependencyManagement.name                                   = Dependency Management
+report.dependencyManagement.description                            = This document lists the dependencies that are defined through dependencyManagement.
+report.dependencyManagement.title                                  = Project Dependency Management
+report.dependencyManagement.nolist                                 = There are no dependencies in the DependencyManagement of Apache HBase&#8482;.
+report.dependencyManagement.column.groupId                         = GroupId
+report.dependencyManagement.column.artifactId                      = ArtifactId
+report.dependencyManagement.column.version                         = Version
+report.dependencyManagement.column.classifier                      = Classifier
+report.dependencyManagement.column.type                            = Type
+report.dependencyManagement.column.license                         = License
+report.dependencyManagement.intro.compile                          = The following is a list of compile dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to compile and run the submodule:
+report.dependencyManagement.intro.provided                         = The following is a list of provided dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to compile the submodule, but should be provided by default when using the library:
+report.dependencyManagement.intro.runtime                          = The following is a list of runtime dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to run the submodule:
+report.dependencyManagement.intro.system                           = The following is a list of system dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to compile the submodule:
+report.dependencyManagement.intro.test                             = The following is a list of test dependencies in the DependencyManagement of Apache HBase&#8482;. These dependencies can be included in the submodules to compile and run unit tests for the submodule:
+report.pluginManagement.nolist                                     = There are no plugins defined in the PluginManagement part of Apache HBase&#8482;.
+report.pluginManagement.name                                       = Plugin Management
+report.pluginManagement.description                                = This document lists the plugins that are defined through pluginManagement.
+report.pluginManagement.title                                      = Project Plugin Management
+report.plugins.name                                                = Project Plugins
+report.plugins.description                                         = This document lists the build plugins and the report plugins used by Apache HBase&#8482;.
+report.plugins.title                                               = Project Build Plugins
+report.plugins.report.title                                        = Project Report Plugins
+report.plugins.nolist                                              = There are no plugins defined in the Build part of Apache HBase&#8482;.
+report.plugins.report.nolist                                       = There are no plugin reports defined in the Reporting part of Apache HBase&#8482;.
+report.modules.nolist                                              = There are no modules declared in Apache HBase&#8482;.
+report.modules.name                                                = Project Modules
+report.modules.description                                         = This document lists the modules (sub-projects) of Apache HBase&#8482;.
+report.modules.title                                               = Project Modules
+report.modules.intro                                               = Apache HBase&#8482; has declared the following modules:
+report.modules.header.name                                         = Name
+report.modules.header.description                                  = Description
+report.distributionManagement.name                                 = Distribution Management
+report.distributionManagement.description                          = This document provides information on the distribution management of Apache HBase&#8482;.
+report.distributionManagement.title                                = Project Distribution Management
+report.distributionManagement.nodistributionmanagement             = No distribution management is defined for Apache HBase&#8482;.
+report.distributionManagement.overview.title                       = Overview
+report.distributionManagement.overview.intro                       = The following is the distribution management information used by Apache HBase&#8482;.
+report.distributionManagement.downloadURL                          = Download URL
+report.distributionManagement.repository                           = Repository
+report.distributionManagement.snapshotRepository                   = Snapshot Repository
+report.distributionManagement.site                                 = Site
+report.distributionManagement.relocation                           = Relocation
+report.distributionManagement.field                                = Field
+report.distributionManagement.value                                = Value
+report.distributionManagement.relocation.groupid                   = GroupId
+report.distributionManagement.relocation.artifactid                = ArtifactId
+report.distributionManagement.relocation.version                   = Version
+report.distributionManagement.relocation.message                   = Message

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/.htaccess
----------------------------------------------------------------------
diff --git a/src/site/resources/.htaccess b/src/site/resources/.htaccess
new file mode 100644
index 0000000..20bf651
--- /dev/null
+++ b/src/site/resources/.htaccess
@@ -0,0 +1,8 @@
+
+# Redirect replication URL to the right section of the book
+# Rule added 2015-1-12 -- can be removed in 6 months
+Redirect permanent /replication.html /book.html#_cluster_replication
+
+# Redirect old page-per-chapter book sections to new single file.
+RedirectMatch permanent ^/book/(.*)\.html$ /book.html#$1
+RedirectMatch permanent ^/book/$ /book.html
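+
+# Illustrative effect of the rules above (hypothetical chapter anchor):
+#   /book/configuration.html  ->  /book.html#configuration
+#   /book/                    ->  /book.html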

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/book/.empty
----------------------------------------------------------------------
diff --git a/src/site/resources/book/.empty b/src/site/resources/book/.empty
new file mode 100644
index 0000000..5513814
--- /dev/null
+++ b/src/site/resources/book/.empty
@@ -0,0 +1 @@
+# This directory is here so that we can have rewrite rules in our .htaccess to maintain old links. Otherwise we fall under some top-level niceness redirects because we have a file named book.html.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/css/site.css
----------------------------------------------------------------------
diff --git a/src/site/resources/css/site.css b/src/site/resources/css/site.css
new file mode 100644
index 0000000..3f42f5a
--- /dev/null
+++ b/src/site/resources/css/site.css
@@ -0,0 +1,118 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*@import(https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.2/css/bootstrap.min.css);
+@import(https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.2/css/bootstrap-responsive.css);*/
+html {
+  background-color: #fff;
+}
+body {
+  font-size: 16px;
+}
+li {
+  line-height: 120%;
+}
+
+div#topbar,
+div#banner,
+div#breadcrumbs,
+div#bodyColumn,
+footer {
+  width: initial;
+  padding-left: 20px;
+  padding-right: 20px;
+  clear: both;
+}
+footer {
+  background-color: #e5e5e5;
+}
+footer .row, footer p, footer .pull-right {
+  margin: 5px;
+}
+div#search-form.navbar-search.pull-right {
+  width: 290px;
+  margin-right: 0;
+  margin-top: -5px;
+  margin-left: 0;
+  position: initial;
+}
+li#publishDate.pull-right {
+  list-style: none;
+}
+.container,
+.navbar-static-top .container,
+.navbar-fixed-top .container,
+.navbar-fixed-bottom .container,
+.navbar-inner {
+  width: initial;
+}
+/* Change the color and effect when clicking in menus */
+.dropdown-menu>li>a:hover,
+.dropdown-menu>li>a:focus,
+.dropdown-submenu:hover>a,
+.dropdown-submenu:focus>a {
+  background-color: #e5e5e5;
+  background-image: none;
+  color: #000;
+  font-weight: bolder;
+}
+
+.dropdown-backdrop {
+  position: static;
+}
+
+@media only screen and (max-width: 979px) {
+  body {
+    padding-left: 0;
+    padding-right: 0;
+    width: initial;
+    margin: 0;
+  }
+  /* Without this rule, drop-down divs are a fixed height
+   * the first time they are expanded */
+  .collapse.in {
+      height: auto !important;
+  }
+  div#search-form.navbar-search.pull-right {
+    padding: 0;
+    margin-left: 0;
+    width: initial;
+    clear: both;
+  }
+}
+
+/* Fix Google Custom Search results on very narrow screens */
+@media(max-width: 480px) {
+    .gsc-overflow-hidden .nav-collapse {
+        -webkit-transform: none;
+    }
+}
+
+/* Override weird body padding thing that causes scrolling */
+@media (max-width: 767px) {
+  body {
+    padding-right: 0;
+    padding-left: 0;
+  }
+}
+
+@media (max-width: 767px) {
+  .navbar-fixed-top, .navbar-fixed-bottom, .navbar-static-top {
+    margin-left: 0;
+    margin-right: 0;
+  }
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/doap_Hbase.rdf
----------------------------------------------------------------------
diff --git a/src/site/resources/doap_Hbase.rdf b/src/site/resources/doap_Hbase.rdf
new file mode 100644
index 0000000..86e22bd
--- /dev/null
+++ b/src/site/resources/doap_Hbase.rdf
@@ -0,0 +1,57 @@
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl"?>
+<rdf:RDF xml:lang="en"
+         xmlns="http://usefulinc.com/ns/doap#"
+         xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+         xmlns:asfext="http://projects.apache.org/ns/asfext#"
+         xmlns:foaf="http://xmlns.com/foaf/0.1/">
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one or more
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership.
+    The ASF licenses this file to You under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with
+    the License.  You may obtain a copy of the License at
+
+         http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software
+    distributed under the License is distributed on an "AS IS" BASIS,
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and
+    limitations under the License.
+-->
+  <Project rdf:about="http://hbase.apache.org">
+    <created>2012-04-14</created>
+    <license rdf:resource="http://usefulinc.com/doap/licenses/asl20" />
+    <name>Apache HBase</name>
+    <homepage rdf:resource="http://hbase.apache.org" />
+    <asfext:pmc rdf:resource="http://hbase.apache.org" />
+    <shortdesc>Apache HBase software is the Hadoop database. Think of it as a distributed, scalable, big data store.</shortdesc>
+    <description>Use Apache HBase software when you need random, realtime read/write access to your Big Data. This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware. HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Hadoop and HDFS. </description>
+    <bug-database rdf:resource="http://issues.apache.org/jira/browse/HBASE" />
+    <mailing-list rdf:resource="http://hbase.apache.org/mail-lists.html" />
+    <download-page rdf:resource="http://www.apache.org/dyn/closer.cgi/hbase/" />
+    <programming-language>Java</programming-language>
+    <category rdf:resource="http://projects.apache.org/category/database" />
+    <release>
+      <Version>
+        <name>Apache HBase</name>
+        <created>2015-07-23</created>
+        <revision>2.0.0-SNAPSHOT</revision>
+      </Version>
+    </release>
+    <repository>
+      <GitRepository>
+        <location rdf:resource="git://git.apache.org/hbase.git"/>
+        <browse rdf:resource="https://git-wip-us.apache.org/repos/asf?p=hbase.git"/>
+      </GitRepository>
+    </repository>
+    <maintainer>
+      <foaf:Person>
+        <foaf:name>Apache HBase PMC</foaf:name>
+          <foaf:mbox rdf:resource="mailto:dev@hbase.apache.org"/>
+      </foaf:Person>
+    </maintainer>
+  </Project>
+</rdf:RDF>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/architecture.gif
----------------------------------------------------------------------
diff --git a/src/site/resources/images/architecture.gif b/src/site/resources/images/architecture.gif
new file mode 100644
index 0000000..8d84a23
Binary files /dev/null and b/src/site/resources/images/architecture.gif differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/backup-app-components.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/backup-app-components.png b/src/site/resources/images/backup-app-components.png
new file mode 100644
index 0000000..5e403e2
Binary files /dev/null and b/src/site/resources/images/backup-app-components.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/backup-cloud-appliance.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/backup-cloud-appliance.png b/src/site/resources/images/backup-cloud-appliance.png
new file mode 100644
index 0000000..76b6d5a
Binary files /dev/null and b/src/site/resources/images/backup-cloud-appliance.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/backup-dedicated-cluster.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/backup-dedicated-cluster.png b/src/site/resources/images/backup-dedicated-cluster.png
new file mode 100644
index 0000000..bca282d
Binary files /dev/null and b/src/site/resources/images/backup-dedicated-cluster.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/backup-intra-cluster.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/backup-intra-cluster.png b/src/site/resources/images/backup-intra-cluster.png
new file mode 100644
index 0000000..113c577
Binary files /dev/null and b/src/site/resources/images/backup-intra-cluster.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/bc_basic.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/bc_basic.png b/src/site/resources/images/bc_basic.png
new file mode 100644
index 0000000..231de93
Binary files /dev/null and b/src/site/resources/images/bc_basic.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/bc_config.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/bc_config.png b/src/site/resources/images/bc_config.png
new file mode 100644
index 0000000..53250cf
Binary files /dev/null and b/src/site/resources/images/bc_config.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/bc_l1.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/bc_l1.png b/src/site/resources/images/bc_l1.png
new file mode 100644
index 0000000..36d7e55
Binary files /dev/null and b/src/site/resources/images/bc_l1.png differ


[06/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbasecon2016-stacked.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbasecon2016-stacked.png b/src/main/site/resources/images/hbasecon2016-stacked.png
deleted file mode 100644
index 4ff181e..0000000
Binary files a/src/main/site/resources/images/hbasecon2016-stacked.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbasecon2017.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbasecon2017.png b/src/main/site/resources/images/hbasecon2017.png
deleted file mode 100644
index 4b25f89..0000000
Binary files a/src/main/site/resources/images/hbasecon2017.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hbaseconasia2017.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hbaseconasia2017.png b/src/main/site/resources/images/hbaseconasia2017.png
deleted file mode 100644
index 8548870..0000000
Binary files a/src/main/site/resources/images/hbaseconasia2017.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hfile.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hfile.png b/src/main/site/resources/images/hfile.png
deleted file mode 100644
index 5762970..0000000
Binary files a/src/main/site/resources/images/hfile.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/hfilev2.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/hfilev2.png b/src/main/site/resources/images/hfilev2.png
deleted file mode 100644
index 54cc0cf..0000000
Binary files a/src/main/site/resources/images/hfilev2.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/jumping-orca_rotated.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/jumping-orca_rotated.png b/src/main/site/resources/images/jumping-orca_rotated.png
deleted file mode 100644
index 4c2c72e..0000000
Binary files a/src/main/site/resources/images/jumping-orca_rotated.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/jumping-orca_rotated.xcf
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/jumping-orca_rotated.xcf b/src/main/site/resources/images/jumping-orca_rotated.xcf
deleted file mode 100644
index 01be6ff..0000000
Binary files a/src/main/site/resources/images/jumping-orca_rotated.xcf and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/jumping-orca_rotated_12percent.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/jumping-orca_rotated_12percent.png b/src/main/site/resources/images/jumping-orca_rotated_12percent.png
deleted file mode 100644
index 1942f9a..0000000
Binary files a/src/main/site/resources/images/jumping-orca_rotated_12percent.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/jumping-orca_rotated_25percent.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/jumping-orca_rotated_25percent.png b/src/main/site/resources/images/jumping-orca_rotated_25percent.png
deleted file mode 100644
index 219c657..0000000
Binary files a/src/main/site/resources/images/jumping-orca_rotated_25percent.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/jumping-orca_transparent_rotated.xcf
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/jumping-orca_transparent_rotated.xcf b/src/main/site/resources/images/jumping-orca_transparent_rotated.xcf
deleted file mode 100644
index be9e3d9..0000000
Binary files a/src/main/site/resources/images/jumping-orca_transparent_rotated.xcf and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/region_split_process.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/region_split_process.png b/src/main/site/resources/images/region_split_process.png
deleted file mode 100644
index 2717617..0000000
Binary files a/src/main/site/resources/images/region_split_process.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/region_states.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/region_states.png b/src/main/site/resources/images/region_states.png
deleted file mode 100644
index ba69e97..0000000
Binary files a/src/main/site/resources/images/region_states.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/replication_overview.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/replication_overview.png b/src/main/site/resources/images/replication_overview.png
deleted file mode 100644
index 47d7b4c..0000000
Binary files a/src/main/site/resources/images/replication_overview.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/images/timeline_consistency.png
----------------------------------------------------------------------
diff --git a/src/main/site/resources/images/timeline_consistency.png b/src/main/site/resources/images/timeline_consistency.png
deleted file mode 100644
index 94c47e0..0000000
Binary files a/src/main/site/resources/images/timeline_consistency.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar
----------------------------------------------------------------------
diff --git a/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar b/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar
deleted file mode 100644
index 5b93209..0000000
Binary files a/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar and /dev/null differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom
----------------------------------------------------------------------
diff --git a/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom b/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom
deleted file mode 100644
index d12092b..0000000
--- a/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom
+++ /dev/null
@@ -1,718 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-
-  <parent>
-    <groupId>org.apache.maven.skins</groupId>
-    <artifactId>maven-skins</artifactId>
-    <version>10</version>
-    <relativePath>../maven-skins/pom.xml</relativePath>
-  </parent>
-
-  <artifactId>maven-fluido-skin</artifactId>
-  <version>1.5-HBASE</version>
-
-  <name>Apache Maven Fluido Skin</name>
-  <description>The Apache Maven Fluido Skin is an Apache Maven site skin
-    built on top of Twitter's bootstrap.</description>
-  <inceptionYear>2011</inceptionYear>
-
-  <scm>
-    <connection>scm:svn:http://svn.apache.org/repos/asf/maven/skins/trunk/maven-fluido-skin/</connection>
-    <developerConnection>scm:svn:https://svn.apache.org/repos/asf/maven/skins/trunk/maven-fluido-skin/</developerConnection>
-    <url>http://svn.apache.org/viewvc/maven/skins/trunk/maven-fluido-skin/</url>
-  </scm>
-  <issueManagement>
-    <system>jira</system>
-    <url>https://issues.apache.org/jira/browse/MSKINS/component/12326474</url>
-  </issueManagement>
-  <distributionManagement>
-    <site>
-      <id>apache.website</id>
-      <url>scm:svn:https://svn.apache.org/repos/infra/websites/production/maven/components/${maven.site.path}</url>
-    </site>
-  </distributionManagement>
-
-  <contributors>
-    <!-- in alphabetical order -->
-    <contributor>
-      <name>Bruno P. Kinoshita</name>
-      <email>brunodepaulak AT yahoo DOT com DOT br</email>
-    </contributor>
-    <contributor>
-      <name>Carlos Villaronga</name>
-      <email>cvillaronga AT gmail DOT com</email>
-    </contributor>
-    <contributor>
-      <name>Christian Grobmeier</name>
-      <email>grobmeier AT apache DOT org</email>
-    </contributor>
-    <contributor>
-      <name>Conny Kreyssel</name>
-      <email>dev AT kreyssel DOT org</email>
-    </contributor>
-    <contributor>
-      <name>Michael Koch</name>
-      <email>tensberg AT gmx DOT net</email>
-    </contributor>
-    <contributor>
-      <name>Emmanuel Hugonnet</name>
-      <email>emmanuel DOT hugonnet AT gmail DOT com</email>
-    </contributor>
-    <contributor>
-      <name>Ivan Habunek</name>
-      <email>ihabunek AT apache DOT org</email>
-    </contributor>
-    <contributor>
-      <name>Eric Barboni</name>
-    </contributor>
-    <contributor>
-      <name>Michael Osipov</name>
-      <email>michaelo AT apache DOT org</email>
-    </contributor>
-  </contributors>
-
-  <properties>
-    <bootstrap.version>2.3.2</bootstrap.version>
-    <jquery.version>1.11.2</jquery.version>
-  </properties>
-
-  <build>
-    <resources>
-      <resource>
-        <directory>.</directory>
-        <targetPath>META-INF</targetPath>
-        <includes>
-          <include>NOTICE</include>
-          <include>LICENSE</include>
-        </includes>
-      </resource>
-
-      <!-- exclude css and js since will include the minified version -->
-      <resource>
-        <directory>${basedir}/src/main/resources</directory>
-        <excludes>
-          <exclude>css/**</exclude>
-          <exclude>js/**</exclude>
-        </excludes>
-        <filtering>true</filtering> <!-- add skin-info -->
-      </resource>
-
-      <!-- include the print.css -->
-      <resource>
-        <directory>${basedir}/src/main/resources</directory>
-        <includes>
-          <include>css/print.css</include>
-        </includes>
-      </resource>
-
-      <!-- include minified only -->
-      <resource>
-        <directory>${project.build.directory}/${project.build.finalName}</directory>
-        <includes>
-          <include>css/apache-maven-fluido-${project.version}.min.css</include>
-          <include>js/apache-maven-fluido-${project.version}.min.js</include>
-        </includes>
-      </resource>
-    </resources>
-
-    <pluginManagement>
-      <plugins>
-        <plugin>
-          <groupId>org.apache.rat</groupId>
-          <artifactId>apache-rat-plugin</artifactId>
-          <configuration>
-            <excludes combine.children="append">
-              <exclude>src/main/resources/fonts/glyphicons-halflings-regular.svg</exclude>
-              <exclude>src/main/resources/js/prettify.js</exclude>
-              <exclude>src/main/resources/js/jquery-*.js</exclude>
-            </excludes>
-          </configuration>
-        </plugin>
-      </plugins>
-    </pluginManagement>
-    <plugins>
-      <plugin>
-        <groupId>org.apache.maven.plugins</groupId>
-        <artifactId>maven-resources-plugin</artifactId>
-        <dependencies><!-- TODO remove when upgrading to version 2.8: see MSHARED-325 / MRESOURCES-192 -->
-          <dependency>
-              <groupId>org.apache.maven.shared</groupId>
-              <artifactId>maven-filtering</artifactId>
-              <version>1.3</version>
-          </dependency>
-        </dependencies>
-        <configuration>
-          <delimiters>
-            <delimiter>@</delimiter>
-          </delimiters>
-          <useDefaultDelimiters>false</useDefaultDelimiters>
-        </configuration>
-      </plugin>
-      <plugin>
-        <groupId>com.samaxes.maven</groupId>
-        <artifactId>maven-minify-plugin</artifactId>
-        <version>1.3.5</version>
-        <executions>
-          <execution>
-            <id>default-minify</id>
-            <phase>generate-resources</phase>
-            <configuration>
-              <webappSourceDir>${basedir}/src/main/resources</webappSourceDir>
-              <cssSourceDir>css</cssSourceDir>
-              <cssSourceFiles>
-                <cssSourceFile>bootstrap-${bootstrap.version}.css</cssSourceFile>
-                <cssSourceFile>maven-base.css</cssSourceFile>
-                <cssSourceFile>maven-theme.css</cssSourceFile>
-                <cssSourceFile>prettify.css</cssSourceFile>
-              </cssSourceFiles>
-              <cssFinalFile>apache-maven-fluido-${project.version}.css</cssFinalFile>
-              <jsSourceDir>js</jsSourceDir>
-              <jsSourceFiles>
-                <jsSourceFile>jquery-${jquery.version}.js</jsSourceFile>
-                <jsSourceFile>bootstrap-${bootstrap.version}.js</jsSourceFile>
-                <jsSourceFile>prettify.js</jsSourceFile>
-                <jsSourceFile>fluido.js</jsSourceFile>
-              </jsSourceFiles>
-              <jsFinalFile>apache-maven-fluido-${project.version}.js</jsFinalFile>
-            </configuration>
-            <goals>
-              <goal>minify</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-    </plugins>
-  </build>
-
-  <profiles>
-    <profile>
-      <id>run-its</id>
-      <build>
-        <plugins>
-          <plugin>
-            <groupId>org.apache.maven.plugins</groupId>
-            <artifactId>maven-invoker-plugin</artifactId>
-            <configuration>
-              <debug>true</debug>
-              <projectsDirectory>src/it</projectsDirectory>
-              <cloneProjectsTo>${project.build.directory}/it</cloneProjectsTo>
-              <preBuildHookScript>setup</preBuildHookScript>
-              <postBuildHookScript>verify</postBuildHookScript>
-              <localRepositoryPath>${project.build.directory}/local-repo</localRepositoryPath>
-              <settingsFile>src/it/settings.xml</settingsFile>
-              <pomIncludes>
-                <pomInclude>*/pom.xml</pomInclude>
-              </pomIncludes>
-              <goals>
-                <goal>site</goal>
-              </goals>
-            </configuration>
-            <executions>
-              <execution>
-                <id>integration-test</id>
-                <goals>
-                  <goal>install</goal>
-                  <goal>integration-test</goal>
-                  <goal>verify</goal>
-                </goals>
-              </execution>
-            </executions>
-          </plugin>
-        </plugins>
-      </build>
-    </profile>
-    <profile>
-      <id>reporting</id>
-      <build>
-        <plugins>
-          <plugin>
-            <groupId>org.apache.maven.plugins</groupId>
-            <artifactId>maven-resources-plugin</artifactId>
-            <executions>
-              <execution>
-                <id>copy-sidebar</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/sidebar/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/sidebar/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-topbar</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/topbar/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/topbar/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-topbar-inverse</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/topbar-inverse/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/topbar-inverse/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-10</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-10/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-10/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-13</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-13/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-13/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-14</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-14/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-14/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-14_sitesearch</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-14_sitesearch/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-14_sitesearch/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-15</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-15/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-15/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-16</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-16/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-16/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-17</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-17/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-17/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-21</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-21/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-21/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-22</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-22/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-22/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-22_default</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-22_default/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-22_default/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-22_topbar</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-22_topbar/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-22_topbar/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-23</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-23/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-23/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-24</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-24/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-24/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-24_topbar</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-24_topbar/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-24_topbar/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-25</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-25/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-25/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-28</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-28/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-28/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-31</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-31/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-31/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-33</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-33/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-33/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-33_topbar</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-33_topbar/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-33_topbar/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-34</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-34/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-34/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-34_topbar</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-34_topbar/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-34_topbar/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-41</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-41/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-41/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-72</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-72/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-72/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-75</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-75/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-75/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-76</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-76/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-76/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-76_topbar</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-76_topbar/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-76_topbar/</outputDirectory>
-                </configuration>
-              </execution>
-              <execution>
-                <id>copy-mskins-85</id>
-                <phase>site</phase>
-                <goals>
-                  <goal>copy-resources</goal>
-                </goals>
-                <configuration>
-                  <resources>
-                    <resource>
-                      <directory>${project.build.directory}/it/mskins-85/target/site/</directory>
-                    </resource>
-                  </resources>
-                  <outputDirectory>${project.build.directory}/site/mskins-85/</outputDirectory>
-                </configuration>
-              </execution>
-            </executions>
-          </plugin>
-        </plugins>
-      </build>
-      <reporting>
-        <plugins>
-          <plugin>
-            <groupId>org.apache.maven.plugins</groupId>
-            <artifactId>maven-invoker-plugin</artifactId>
-            <version>1.8</version>
-          </plugin>
-        </plugins>
-      </reporting>
-    </profile>
-  </profiles>
-</project>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/maven-metadata-local.xml
----------------------------------------------------------------------
diff --git a/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/maven-metadata-local.xml b/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/maven-metadata-local.xml
deleted file mode 100644
index 65791e8..0000000
--- a/src/main/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/maven-metadata-local.xml
+++ /dev/null
@@ -1,12 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<metadata>
-  <groupId>org.apache.maven.skins</groupId>
-  <artifactId>maven-fluido-skin</artifactId>
-  <versioning>
-    <release>1.5-HBASE</release>
-    <versions>
-      <version>1.5-HBASE</version>
-    </versions>
-    <lastUpdated>20151111033340</lastUpdated>
-  </versioning>
-</metadata>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/site.xml
----------------------------------------------------------------------
diff --git a/src/main/site/site.xml b/src/main/site/site.xml
deleted file mode 100644
index f036702..0000000
--- a/src/main/site/site.xml
+++ /dev/null
@@ -1,131 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-
-<project xmlns="http://maven.apache.org/DECORATION/1.0.0"
-    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-    xsi:schemaLocation="http://maven.apache.org/DECORATION/1.0.0 http://maven.apache.org/xsd/decoration-1.0.0.xsd">
-  <skin>
-    <groupId>org.apache.maven.skins</groupId>
-    <artifactId>maven-fluido-skin</artifactId>
-    <version>1.5-HBASE</version>
-  </skin>
-  <custom>
-    <fluidoSkin>
-      <topBarEnabled>true</topBarEnabled>
-      <sideBarEnabled>false</sideBarEnabled>
-      <googleSearch>
-        <!-- The ID of the Google custom search engine to use.
-             This one searches hbase.apache.org, issues.apache.org/browse/HBASE-*,
-             and user and dev mailing list archives. -->
-        <customSearch>000385458301414556862:sq1bb0xugjg</customSearch>
-      </googleSearch>
-      <sourceLineNumbersEnabled>false</sourceLineNumbersEnabled>
-      <skipGenerationDate>true</skipGenerationDate>
-      <breadcrumbDivider>»</breadcrumbDivider>
-    </fluidoSkin>
-  </custom>
-  <bannerLeft>
-    <name />
-    <src />
-    <href />
-    <!--
-    <name/>
-    <height>0</height>
-    <width>0</width>
--->
-  </bannerLeft>
-  <bannerRight>
-    <name>Apache HBase</name>
-    <src>images/hbase_logo_with_orca_large.png</src>
-    <href>http://hbase.apache.org/</href>
-  </bannerRight>
-  <publishDate position="bottom"/>
-  <version position="none"/>
-  <body>
-    <head>
-      <meta name="viewport" content="width=device-width, initial-scale=1.0"></meta>
-      <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.2/css/bootstrap-responsive.min.css"/>
-      <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.9.1/styles/github.min.css"/>
-      <link rel="stylesheet" href="css/site.css"/>
-      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.9.1/highlight.min.js"></script>
-    </head>
-    <menu name="Apache HBase Project">
-      <item name="Overview" href="index.html"/>
-      <item name="License" href="license.html"/>
-      <item name="Downloads" href="http://www.apache.org/dyn/closer.cgi/hbase/"/>
-      <item name="Release Notes" href="https://issues.apache.org/jira/browse/HBASE?report=com.atlassian.jira.plugin.system.project:changelog-panel#selectedTab=com.atlassian.jira.plugin.system.project%3Achangelog-panel" />
-      <item name="Code Of Conduct" href="coc.html"/>
-      <item name="Blog" href="http://blogs.apache.org/hbase/"/>
-      <item name="Mailing Lists" href="mail-lists.html"/>
-      <item name="Team" href="team-list.html"/>
-      <item name="ReviewBoard" href="https://reviews.apache.org/"/>
-      <item name="Thanks" href="sponsors.html"/>
-      <item name="Powered by HBase" href="poweredbyhbase.html"/>
-      <item name="Other resources" href="resources.html"/>
-    </menu>
-    <menu name="Project Information">
-      <item name="Project Summary" href="project-summary.html"/>
-      <item name="Dependency Information" href="dependency-info.html"/>
-      <item name="Team" href="team-list.html"/>
-      <item name="Source Repository" href="source-repository.html"/>
-      <item name="Issue Tracking" href="issue-tracking.html"/>
-      <item name="Dependency Management" href="dependency-management.html"/>
-      <item name="Dependencies" href="dependencies.html"/>
-      <item name="Dependency Convergence" href="dependency-convergence.html"/>
-      <item name="Continuous Integration" href="integration.html"/>
-      <item name="Plugin Management" href="plugin-management.html"/>
-      <item name="Plugins" href="plugins.html"/>
-    </menu>
-    <menu name="Documentation and API">
-      <item name="Reference Guide" href="book.html" target="_blank" />
-      <item name="Reference Guide (PDF)" href="apache_hbase_reference_guide.pdf" target="_blank" />
-      <item name="Getting Started" href="book.html#quickstart" target="_blank" />
-      <item name="User API" href="apidocs/index.html" target="_blank" />
-      <item name="User API (Test)" href="testapidocs/index.html" target="_blank" />
-      <item name="Developer API" href="https://hbase.apache.org/2.0/devapidocs/index.html" target="_blank" />
-      <item name="Developer API (Test)" href="https://hbase.apache.org/2.0/testdevapidocs/index.html" target="_blank" />
-      <item name="中文参考指南(单页)" href="http://abloz.com/hbase/book.html" target="_blank" />
-      <item name="FAQ" href="book.html#faq" target="_blank" />
-      <item name="Videos/Presentations" href="book.html#other.info" target="_blank" />
-      <item name="Wiki" href="http://wiki.apache.org/hadoop/Hbase" target="_blank" />
-      <item name="ACID Semantics" href="acid-semantics.html" target="_blank" />
-      <item name="Bulk Loads" href="book.html#arch.bulk.load" target="_blank" />
-      <item name="Metrics" href="metrics.html" target="_blank" />
-      <item name="HBase on Windows" href="cygwin.html" target="_blank" />
-      <item name="Cluster replication" href="book.html#replication" target="_blank" />
-      <item name="1.2 Documentation">
-        <item name="API" href="1.2/apidocs/index.html" target="_blank" />
-        <item name="X-Ref" href="1.2/xref/index.html" target="_blank" />
-        <item name="Ref Guide (single-page)" href="1.2/book.html" target="_blank" />
-      </item>
-      <item name="1.1 Documentation">
-        <item name="API" href="1.1/apidocs/index.html" target="_blank" />
-        <item name="X-Ref" href="1.1/xref/index.html" target="_blank" />
-        <item name="Ref Guide (single-page)" href="1.1/book.html" target="_blank" />
-      </item>
-    </menu>
-    <menu name="ASF">
-      <item name="Apache Software Foundation" href="http://www.apache.org/foundation/" target="_blank" />
-      <item name="How Apache Works" href="http://www.apache.org/foundation/how-it-works.html" target="_blank" />
-      <item name="Sponsoring Apache" href="http://www.apache.org/foundation/sponsorship.html" target="_blank" />
-    </menu>
-    </body>
-</project>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/acid-semantics.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/acid-semantics.xml b/src/main/site/xdoc/acid-semantics.xml
deleted file mode 100644
index 2d4eb6a..0000000
--- a/src/main/site/xdoc/acid-semantics.xml
+++ /dev/null
@@ -1,235 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
-          "http://forrest.apache.org/dtd/document-v20.dtd">
-
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title> 
-      Apache HBase (TM) ACID Properties
-    </title>
-  </properties>
-
-  <body>
-    <section name="About this Document">
-      <p>Apache HBase (TM) is not an ACID-compliant database. However, it does guarantee certain specific
-      properties.</p>
-      <p>This specification enumerates the ACID properties of HBase.</p>
-    </section>
-    <section name="Definitions">
-      <p>For the sake of common vocabulary, we define the following terms:</p>
-      <dl>
-        <dt>Atomicity</dt>
-        <dd>an operation is atomic if it either completes entirely or not at all</dd>
-
-        <dt>Consistency</dt>
-        <dd>
-          all actions cause the table to transition from one valid state directly to another
-          (e.g. a row will not disappear during an update, etc.)
-        </dd>
-
-        <dt>Isolation</dt>
-        <dd>
-          an operation is isolated if it appears to complete independently of any other concurrent transaction
-        </dd>
-
-        <dt>Durability</dt>
-        <dd>any update that reports &quot;successful&quot; to the client will not be lost</dd>
-
-        <dt>Visibility</dt>
-        <dd>an update is considered visible if any subsequent read will see the update as having been committed</dd>
-      </dl>
-      <p>
-        The terms <em>must</em> and <em>may</em> are used as specified by RFC 2119.
-        In short, the word &quot;must&quot; implies that, if some case exists where the statement
-        is not true, it is a bug. The word &quot;may&quot; implies that, even if the guarantee
-        is provided in a current release, users should not rely on it.
-      </p>
-    </section>
-    <section name="APIs to consider">
-      <ul>
-        <li>Read APIs
-        <ul>
-          <li>get</li>
-          <li>scan</li>
-        </ul>
-        </li>
-        <li>Write APIs
-        <ul>
-          <li>put</li>
-          <li>batch put</li>
-          <li>delete</li>
-        </ul></li>
-        <li>Combination (read-modify-write) APIs
-        <ul>
-          <li>incrementColumnValue</li>
-          <li>checkAndPut</li>
-        </ul></li>
-      </ul>
-    </section>
-
-    <section name="Guarantees Provided">
-
-      <section name="Atomicity">
-
-        <ol>
-          <li>All mutations are atomic within a row. Any put will either wholly succeed or wholly fail.[3]</li>
-          <ol>
-            <li>An operation that returns a &quot;success&quot; code has completely succeeded.</li>
-            <li>An operation that returns a &quot;failure&quot; code has completely failed.</li>
-            <li>An operation that times out may have succeeded or may have failed. However,
-            it will not have partially succeeded or failed.</li>
-          </ol>
-          <li> This is true even if the mutation crosses multiple column families within a row.</li>
-          <li> APIs that mutate several rows will _not_ be atomic across the multiple rows.
-          For example, a multiput that operates on rows 'a','b', and 'c' may return having
-          mutated some but not all of the rows. In such cases, these APIs will return a list
-          of success codes, each of which may have succeeded, failed, or timed out as described above.</li>
-          <li> The checkAndPut API happens atomically like the typical compareAndSet (CAS) operation
-          found in many hardware architectures.</li>
-          <li> Mutations are seen to happen in a well-defined order for each row, with no
-          interleaving. For example, if one writer issues the mutation &quot;a=1,b=1,c=1&quot; and
-          another writer issues the mutation &quot;a=2,b=2,c=2&quot;, the row must either
-          be &quot;a=1,b=1,c=1&quot; or &quot;a=2,b=2,c=2&quot; and must <em>not</em> be something
-          like &quot;a=1,b=2,c=1&quot;.</li>
-          <ol>
-            <li>Please note that this is not true _across rows_ for multirow batch mutations.</li>
-          </ol>
-        </ol>
-      </section>
-      <section name="Consistency and Isolation">
-        <ol>
-          <li>Any row returned via any access API will be a complete row that existed at
-          some point in the table's history.</li>
-          <li>This is true across column families - i.e. a get of a full row that occurs concurrently
-          with some mutations 1,2,3,4,5 will return a complete row that existed at some point in time
-          between mutation i and i+1 for some i between 1 and 5.</li>
-          <li>The state of a row will only move forward through the history of edits to it.</li>
-        </ol>
-
-        <section name="Consistency of Scans">
-        <p>
-          A scan is <strong>not</strong> a consistent view of a table. Scans do
-          <strong>not</strong> exhibit <em>snapshot isolation</em>.
-        </p>
-        <p>
-          Rather, scans have the following properties:
-        </p>
-
-        <ol>
-          <li>
-            Any row returned by the scan will be a consistent view (i.e. that version
-            of the complete row existed at some point in time) [1]
-          </li>
-          <li>
-            A scan will always reflect a view of the data <em>at least as new as</em>
-            the beginning of the scan. This satisfies the visibility guarantees
-          enumerated below.</li>
-          <ol>
-            <li>For example, if client A writes data X and then communicates via a side
-            channel to client B, any scans started by client B will contain data at least
-            as new as X.</li>
-            <li>A scan _must_ reflect all mutations committed prior to the construction
-            of the scanner, and _may_ reflect some mutations committed subsequent to the
-            construction of the scanner.</li>
-            <li>Scans must include <em>all</em> data written prior to the scan (except in
-            the case where data is subsequently mutated, in which case it _may_ reflect
-            the mutation)</li>
-          </ol>
-        </ol>
-        <p>
-          Those familiar with relational databases will recognize this isolation level as &quot;read committed&quot;.
-        </p>
-        <p>
-          Please note that the guarantees listed above regarding scanner consistency
-          are referring to &quot;transaction commit time&quot;, not the &quot;timestamp&quot;
-          field of each cell. That is to say, a scanner started at time <em>t</em> may see edits
-          with a timestamp value greater than <em>t</em>, if those edits were committed with a
-          &quot;forward dated&quot; timestamp before the scanner was constructed.
-        </p>
-        </section>
-      </section>
-      <section name="Visibility">
-        <ol>
-          <li> When a client receives a &quot;success&quot; response for any mutation, that
-          mutation is immediately visible to both that client and any client with whom it
-          later communicates through side channels. [3]</li>
-          <li> A row must never exhibit so-called &quot;time-travel&quot; properties. That
-          is to say, if a series of mutations moves a row sequentially through a series of
-          states, any sequence of concurrent reads will return a subsequence of those states.</li>
-          <ol>
-            <li>For example, if a row's cells are mutated using the &quot;incrementColumnValue&quot;
-            API, a client must never see the value of any cell decrease.</li>
-            <li>This is true regardless of which read API is used to read back the mutation.</li>
-          </ol>
-          <li> Any version of a cell that has been returned to a read operation is guaranteed to
-          be durably stored.</li>
-        </ol>
-
-      </section>
-      <section name="Durability">
-        <ol>
-          <li> All visible data is also durable data. That is to say, a read will never return
-          data that has not been made durable on disk[2]</li>
-          <li> Any operation that returns a &quot;success&quot; code (e.g. does not throw an exception)
-          will be made durable.[3]</li>
-          <li> Any operation that returns a &quot;failure&quot; code will not be made durable
-          (subject to the Atomicity guarantees above)</li>
-          <li> All reasonable failure scenarios will not affect any of the guarantees of this document.</li>
-
-        </ol>
-      </section>
-      <section name="Tunability">
-        <p>All of the above guarantees must be possible within Apache HBase. For users who would like to trade
-        off some guarantees for performance, HBase may offer several tuning options. For example:</p>
-        <ul>
-          <li>Visibility may be tuned on a per-read basis to allow stale reads or time travel.</li>
-          <li>Durability may be tuned to only flush data to disk on a periodic basis</li>
-        </ul>
-      </section>
-    </section>
-    <section name="More Information">
-      <p>
-      For more information, see the <a href="book.html#client">client architecture</a> or <a href="book.html#datamodel">data model</a> sections in the Apache HBase Reference Guide. 
-      </p>
-    </section>
-    
-    <section name="Footnotes">
-      <p>[1] A consistent view is not guaranteed for intra-row scanning -- i.e. fetching a portion of
-          a row in one RPC then going back to fetch another portion of the row in a subsequent RPC.
-          Intra-row scanning happens when you set a limit on how many values to return per Scan#next
-          (See <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)">Scan#setBatch(int)</a>).
-      </p>
-
-      <p>[2] In the context of Apache HBase, &quot;durably on disk&quot; implies an hflush() call on the transaction
-      log. This does not actually imply an fsync() to magnetic media, but rather just that the data has been
-      written to the OS cache on all replicas of the log. In the case of a full datacenter power loss, it is
-      possible that the edits are not truly durable.</p>
-      <p>[3] Puts will either wholly succeed or wholly fail, provided that they are actually sent
-      to the RegionServer.  If the writebuffer is used, Puts will not be sent until the writebuffer is filled
-      or it is explicitly flushed.</p>
-      
-    </section>
-
-  </body>
-</document>

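The guarantees spelled out in the removed acid-semantics.xml above map onto concrete calls in the
HBase client API. The following minimal Java sketch is illustrative only and is not part of this
patch; it assumes a reachable cluster and an existing table 't1' with column family 'f' (both
hypothetical names), and exercises Table#put, Table#checkAndPut, and Table#incrementColumnValue to
show the row-level atomicity, compare-and-set, and visibility behaviour that document describes.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AcidSemanticsSketch {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("t1"))) {  // "t1"/"f" are assumed names
      byte[] row = Bytes.toBytes("row1");
      byte[] f = Bytes.toBytes("f");

      // Atomicity: a multi-column Put on a single row wholly succeeds or wholly
      // fails, even when the mutation spans multiple column families.
      Put p = new Put(row);
      p.addColumn(f, Bytes.toBytes("a"), Bytes.toBytes("1"));
      p.addColumn(f, Bytes.toBytes("b"), Bytes.toBytes("1"));
      table.put(p);

      // checkAndPut behaves like a compare-and-set (CAS): the new Put is applied
      // only if column f:a still holds the expected value "1".
      Put p2 = new Put(row);
      p2.addColumn(f, Bytes.toBytes("a"), Bytes.toBytes("2"));
      boolean applied = table.checkAndPut(row, f, Bytes.toBytes("a"),
          Bytes.toBytes("1"), p2);

      // Visibility: once a "success" response is received, subsequent reads must
      // never see this counter move backwards.
      long counter = table.incrementColumnValue(row, f, Bytes.toBytes("c"), 1L);
      System.out.println("casApplied=" + applied + " counter=" + counter);

      // Scans reflect at least all mutations committed before the scanner was
      // constructed ("read committed"), but are not a point-in-time snapshot.
      try (ResultScanner scanner = table.getScanner(new Scan())) {
        for (Result r : scanner) {
          System.out.println("scanned row: " + Bytes.toString(r.getRow()));
        }
      }
    }
  }
}

Note that in HBase 2.x Table#checkAndPut is deprecated in favour of the Table#checkAndMutate
builder, but the atomicity semantics enumerated in the document above are unchanged.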
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/bulk-loads.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/bulk-loads.xml b/src/main/site/xdoc/bulk-loads.xml
deleted file mode 100644
index 2195003..0000000
--- a/src/main/site/xdoc/bulk-loads.xml
+++ /dev/null
@@ -1,34 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title> 
-      Bulk Loads in Apache HBase (TM)
-    </title>
-  </properties>
-  <body>
-       <p>This page has been retired.  The contents have been moved to the 
-      <a href="http://hbase.apache.org/book.html#arch.bulk.load">Bulk Loading</a> section
- in the Reference Guide.
- </p>
-  </body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/coc.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/coc.xml b/src/main/site/xdoc/coc.xml
deleted file mode 100644
index fc2b549..0000000
--- a/src/main/site/xdoc/coc.xml
+++ /dev/null
@@ -1,92 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
-          "http://forrest.apache.org/dtd/document-v20.dtd">
-
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>
-      Code of Conduct Policy
-    </title>
-  </properties>
-  <body>
-  <section name="Code of Conduct Policy">
-<p>
-We expect participants in discussions on the HBase project mailing lists, IRC
-channels, and JIRA issues to abide by the Apache Software Foundation's
-<a href="http://apache.org/foundation/policies/conduct.html">Code of Conduct</a>.
-</p>
-<p>
-If you feel there has been a violation of this code, please point out your
-concerns publicly in a friendly and matter-of-fact manner. Nonverbal
-communication is prone to misinterpretation and misunderstanding. Everyone has
-bad days and sometimes says things they regret later. Someone else's
-communication style may clash with yours, but the difference can be amicably
-resolved. After pointing out your concerns please be generous upon receiving an
-apology.
-</p>
-<p>
-Should there be repeated instances of code of conduct violations, or if there is
-an obvious and severe violation, the HBase PMC may become involved. When this
-happens the PMC will openly discuss the matter, most likely on the dev@hbase
-mailing list, and will consider taking the following actions, in order, if there
-is a continuing problem with an individual:
-<ol>
-<li>A friendly off-list warning;</li>
-<li>A friendly public warning, if the communication at issue was on list, otherwise another off-list warning;</li>
-<li>A three month suspension from the public mailing lists and possible operator action in the IRC channels;</li>
-<li>A permanent ban from the public mailing lists, IRC channels, and project JIRA.</li>
-</ol>
-</p>
-<p>
-For flagrant violations requiring a firm response the PMC may opt to skip early
-steps. No action will be taken before public discussion leading to consensus or
-a successful majority vote.
-</p>
-  </section>
-  <section name="Diversity Statement">
-<p>
-As a project and a community, we encourage you to participate in the HBase project
-in whatever capacity suits you, whether it involves development, documentation,
-answering questions on mailing lists, issue triage and patch review, managing
-releases, or any other way that you want to help. We appreciate your
-contributions and the time you dedicate to the HBase project. We strive to
-recognize the work of participants publicly. Please let us know if we can
-improve in this area.
-</p>
-<p>
-We value diversity and strive to support participation by people with all
-different backgrounds. Rich projects grow from groups with different points of
-view and different backgrounds. We welcome your suggestions about how we can
-welcome participation by people at all skill levels and with all aspects of the
-project.
-</p>
-<p>
-If you can think of something we are doing that we shouldn't, or something that
-we should do but aren't, please let us know. If you feel comfortable doing so,
-use the public mailing lists. Otherwise, reach out to a PMC member or send an
-email to <a href="mailto:private@hbase.apache.org">the private PMC mailing list</a>.
-</p>
-  </section>
-  </body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/cygwin.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/cygwin.xml b/src/main/site/xdoc/cygwin.xml
deleted file mode 100644
index 406c0a9..0000000
--- a/src/main/site/xdoc/cygwin.xml
+++ /dev/null
@@ -1,245 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>Installing Apache HBase (TM) on Windows using Cygwin</title>
-  </properties>
-
-<body>
-<section name="Introduction">
-<p><a title="HBase project" href="http://hbase.apache.org" target="_blank">Apache HBase (TM)</a> is a distributed, column-oriented store, modeled after Google's <a title="Google's BigTable" href="http://research.google.com/archive/bigtable.html" target="_blank">BigTable</a>. Apache HBase is built on top of <a title="Hadoop project" href="http://hadoop.apache.org">Hadoop</a> for its <a title="Hadoop MapReduce project" href="http://hadoop.apache.org/mapreduce" target="_blank">MapReduce </a>and <a title="Hadoop DFS project" href="http://hadoop.apache.org/hdfs">distributed file system</a> implementation. All these projects are open-source and part of the <a title="The Apache Software Foundation" href="http://www.apache.org/" target="_blank">Apache Software Foundation</a>.</p>
-
-<p style="text-align: justify; ">As being distributed, large scale platforms, the Hadoop and HBase projects mainly focus on <em><strong>*nix</strong></em><strong> environments</strong> for production installations. However, being developed in <strong>Java</strong>, both projects are fully <strong>portable</strong> across platforms and, hence, also to the <strong>Windows operating system</strong>. For ease of development the projects rely on <a title="Cygwin site" href="http://www.cygwin.com/" target="_blank">Cygwin</a> to have a *nix-like environment on Windows to run the shell scripts.</p>
-</section>
-<section name="Purpose">
-<p style="text-align: justify; ">This document explains the <strong>intricacies of running Apache HBase on Windows using Cygwin</strong> as an all-in-one single-node installation for testing and development. The HBase <a title="HBase Overview" href="http://hbase.apache.org/apidocs/overview-summary.html#overview_description" target="_blank">Overview</a> and <a title="HBase QuickStart" href="http://hbase.apache.org/book/quickstart.html" target="_blank">QuickStart</a> guides on the other hand go a long way in explaning how to setup <a title="HBase project" href="http://hadoop.apache.org/hbase" target="_blank">HBase</a> in more complex deployment scenario's.</p>
-</section>
-
-<section name="Installation">
-<p style="text-align: justify; ">For running Apache HBase on Windows, 3 technologies are required: <strong>Java, Cygwin and SSH</strong>. The following paragraphs detail the installation of each of the aforementioned technologies.</p>
-<section name="Java">
-<p style="text-align: justify; ">HBase depends on the <a title="Java Platform, Standard Edition, 6 Release" href="http://java.sun.com/javase/6/" target="_blank">Java Platform, Standard Edition, 6 Release</a>. So the target system has to be provided with at least the Java Runtime Environment (JRE); however if the system will also be used for development, the Jave Development Kit (JDK) is preferred. You can download the latest versions for both from <a title="Java SE Downloads" href="http://java.sun.com/javase/downloads/index.jsp" target="_blank">Sun's download page</a>. Installation is a simple GUI wizard that guides you through the process.</p>
-</section>
-<section name="Cygwin">
-<p style="text-align: justify; ">Cygwin is probably the oddest technology in this solution stack. It provides a dynamic link library that emulates most of a *nix environment on Windows. On top of that a whole bunch of the most common *nix tools are supplied. Combined, the DLL with the tools form a very *nix-alike environment on Windows.</p>
-
-<p style="text-align: justify; ">For installation, Cygwin provides the <a title="Cygwin Setup Utility" href="http://cygwin.com/setup.exe" target="_blank"><strong><code>setup.exe</code> utility</strong></a> that tracks the versions of all installed components on the target system and provides the mechanism for <strong>installing</strong> or <strong>updating </strong>everything from the mirror sites of Cygwin.</p>
-
-<p style="text-align: justify; ">To support installation, the <code>setup.exe</code> utility uses 2 directories on the target system. The <strong>Root</strong> directory for Cygwin (defaults to <code>C:\cygwin)</code> which will become <code>/</code> within the eventual Cygwin installation; and the <strong>Local Package </strong>directory (e.g. <code>C:\cygsetup</code> that is the cache where <code>setup.exe</code> stores the packages before they are installed. The cache must not be the same folder as the Cygwin root.</p>
-
-<p style="text-align: justify; ">Perform following steps to install Cygwin, which are elaboratly detailed in the <a title="Setting Up Cygwin" href="http://cygwin.com/cygwin-ug-net/setup-net.html" target="_self">2nd chapter</a> of the <a title="Cygwin User's Guide" href="http://cygwin.com/cygwin-ug-net/cygwin-ug-net.html" target="_blank">Cygwin User's Guide</a>:</p>
-
-<ol style="text-align: justify; ">
-	<li>Make sure you have <code>Administrator</code> privileges on the target system.</li>
-	<li>Choose and create your <strong>Root</strong> and <strong>Local Package</strong> directories. A good suggestion is to use <code>C:\cygwin\root</code> and <code>C:\cygwin\setup</code> folders.</li>
-	<li>Download the <code>setup.exe</code> utility and save it to the <strong>Local Package</strong> directory.</li>
-	<li>Run the <code>setup.exe</code> utility,
-<ol>
-	<li>Choose  the <code>Install from Internet</code> option,</li>
-	<li>Choose your <strong>Root</strong> and <strong>Local Package</strong> folders</li>
-	<li>and select an appropriate mirror.</li>
-	<li>Don't select any additional packages yet, as we only want to install Cygwin for now.</li>
-	<li>Wait for download and install</li>
-	<li>Finish the installation</li>
-</ol>
-</li>
-	<li>Optionally, you can now also add a shortcut to your Start menu pointing to the <code>setup.exe</code> utility in the <strong>Local Package </strong>folder.</li>
-	<li>Add a <code>CYGWIN_HOME</code> system-wide environment variable that points to your <strong>Root </strong>directory.</li>
-	<li>Add <code>%CYGWIN_HOME%\bin</code> to the end of your <code>PATH</code> environment variable.</li>
-	<li>Reboot the system after making changes to the environment variables; otherwise the OS will not be able to find the Cygwin utilities.</li>
-	<li>Test your installation by running your freshly created shortcuts or the <code>Cygwin.bat</code> command in the <strong>Root</strong> folder. You should end up in a terminal window that is running a <a title="Bash Reference Manual" href="http://www.gnu.org/software/bash/manual/bashref.html" target="_blank">Bash shell</a>. Test the shell by issuing following commands:
-<ol>
-	<li><code>cd /</code> should take you to the <strong>Root</strong> directory in Cygwin;</li>
-	<li>the <code>ls</code> command should list all files and folders in the current directory.</li>
-	<li>Use the <code>exit</code> command to end the terminal.</li>
-</ol>
-</li>
-	<li>When needed, to <strong>uninstall</strong> Cygwin you can simply delete the <strong>Root</strong> and <strong>Local Package</strong> directory, and the <strong>shortcuts</strong> that were created during installation.</li>
-</ol>
-</section>
-<section name="SSH">
-<p style="text-align: justify; ">HBase (and Hadoop) rely on <a title="Secure Shell" href="http://nl.wikipedia.org/wiki/Secure_Shell" target="_blank"><strong>SSH</strong></a> for interprocess/-node <strong>communication</strong> and launching<strong> remote commands</strong>. SSH will be provisioned on the target system via Cygwin, which supports running Cygwin programs as <strong>Windows services</strong>!</p>
-
-<ol style="text-align: justify; ">
-	<li>Rerun the <code><strong>setup.exe</strong></code><strong> utility</strong>.</li>
-	<li>Leave all parameters as is, skipping through the wizard using the <code>Next</code> button until the <code>Select Packages</code> panel is shown.</li>
-	<li>Maximize the window and click the <code>View</code> button to toggle to the list view, which is ordered alphabetically by <code>Package</code>, making it easier to find the packages we'll need.</li>
-	<li>Select the following packages by clicking the status word (normally <code>Skip</code>) so it's marked for installation. Use the <code>Next </code>button to download and install the packages.
-<ol>
-	<li>OpenSSH</li>
-	<li>tcp_wrappers</li>
-	<li>diffutils</li>
-	<li>zlib</li>
-</ol>
-</li>
-	<li>Wait for the install to complete and finish the installation.</li>
-</ol>
-</section>
-<section name="HBase">
-<p style="text-align: justify; ">Download the <strong>latest release </strong>of Apache HBase from the <a title="HBase Releases" href="http://www.apache.org/dyn/closer.cgi/hbase/" target="_blank">website</a>. As the Apache HBase distributable is just a zipped archive, installation is as simple as unpacking the archive so it ends up in its final <strong>installation</strong> directory. Notice that HBase has to be installed in Cygwin and a good directory suggestion is to use <code>/usr/local/</code> (or [<code><strong>Root</strong> directory]\usr\local</code> in Windows slang). You should end up with a <code>/usr/local/hbase-<em>&lt;version&gt;</em></code> installation in Cygwin.</p>
-
-This finishes the installation. We go on with the configuration.
-</section>
-</section>
-<section name="Configuration">
-<p style="text-align: justify; ">There are 3 parts left to configure: <strong>Java, SSH and HBase</strong> itself. Following paragraphs explain eacht topic in detail.</p>
-<section name="Java">
-<p style="text-align: justify; ">One important thing to remember in shell scripting in general (i.e. *nix and Windows) is that managing, manipulating and assembling path names that contains spaces can be very hard, due to the need to escape and quote those characters and strings. So we try to stay away from spaces in path names. *nix environments can help us out here very easily by using <strong>symbolic links</strong>.</p>
-
-<ol style="text-align: justify; ">
-	<li style="text-align: justify; ">Create a link in <code>/usr/local</code> to the Java home directory by using the following command and substituting the name of your chosen Java environment:
-<pre>ln -s /cygdrive/c/Program\ Files/Java/<em>&lt;jre name&gt; </em>/usr/local/<em>&lt;jre name&gt;</em></pre>
-</li>
-	<li>Test your Java installation by changing directories to your Java folder with <code>cd /usr/local/<em>&lt;jre name&gt;</em></code> and issuing the command <code>./bin/java -version</code>. This should output your version of the chosen JRE.</li>
-</ol>
-</section>
-<section name="SSH">
-<p style="text-align: justify; ">Configuring <strong>SSH </strong>is quite elaborate, but primarily a question of launching it by default as a<strong> Windows service</strong>.</p>
-
-<ol style="text-align: justify; ">
-	<li style="text-align: justify; ">On Windows Vista and above make sure you run the Cygwin shell with <strong>elevated privileges</strong>, by right-clicking on the shortcut an using <code>Run as Administrator</code>.</li>
-	<li style="text-align: justify; ">First of all, we have to make sure the <strong>rights on some crucial files</strong> are correct. Use the commands underneath. You can verify all rights by using the <code>LS -L</code> command on the different files. Also, notice the auto-completion feature in the shell using <code>&lt;TAB&gt;</code> is extremely handy in these situations.
-<ol>
-	<li><code>chmod +r /etc/passwd</code> to make the passwords file readable for all</li>
-	<li><code>chmod u+w /etc/passwd</code> to make the passwords file writable for the owner</li>
-	<li><code>chmod +r /etc/group</code> to make the groups file readable for all</li>
-</ol>
-<ol>
-	<li><code>chmod u+w /etc/group</code> to make the groups file writable for the owner</li>
-</ol>
-<ol>
-	<li><code>chmod 755 /var</code> to make the var folder writable to owner and readable and executable to all</li>
-</ol>
-</li>
-	<li>Edit the <strong>/etc/hosts.allow</strong> file using your favorite editor (why not VI in the shell!) and make sure the following two lines are in there before the <code>PARANOID</code> line:
-<ol>
-	<li><code>ALL : localhost 127.0.0.1/32 : allow</code></li>
-	<li><code>ALL : [::1]/128 : allow</code></li>
-</ol>
-</li>
-	<li>Next we have to <strong>configure SSH</strong> by using the script <code>ssh-host-config</code>
-<ol>
-	<li>If this script asks to overwrite an existing <code>/etc/ssh_config</code>, answer <code>yes</code>.</li>
-	<li>If this script asks to overwrite an existing <code>/etc/sshd_config</code>, answer <code>yes</code>.</li>
-	<li>If this script asks to use privilege separation, answer <code>yes</code>.</li>
-	<li>If this script asks to install <code>sshd</code> as a service, answer <code>yes</code>. Make sure you started your shell as Administrator!</li>
-	<li>If this script asks for the CYGWIN value, just <code>&lt;enter&gt;</code> as the default is <code>ntsec</code>.</li>
-	<li>If this script asks to create the <code>sshd</code> account, answer <code>yes</code>.</li>
-	<li>If this script asks to use a different user name as service account, answer <code>no</code> as the default will suffice.</li>
-	<li>If this script asks to create the <code>cyg_server</code> account, answer <code>yes</code>. Enter a password for the account.</li>
-</ol>
-</li>
-	<li><strong>Start the SSH service</strong> using <code>net start sshd</code> or <code>cygrunsrv  --start  sshd</code>. Notice that <code>cygrunsrv</code> is the utility that makes the process run as a Windows service. Confirm that you see a message stating that <code>the CYGWIN sshd service  was started successfully.</code></li>
-	<li>Harmonize Windows and Cygwin<strong> user accounts</strong> by using the commands:
-<ol>
-	<li><code>mkpasswd -cl &gt; /etc/passwd</code></li>
-	<li><code>mkgroup --local &gt; /etc/group</code></li>
-</ol>
-</li>
-	<li><strong>Test </strong>the installation of SSH:
-<ol>
-	<li>Open a new Cygwin terminal</li>
-	<li>Use the command <code>whoami</code> to verify your userID</li>
-	<li>Issue an <code>ssh localhost</code> to connect to the system itself
-<ol>
-	<li>Answer <code>yes</code> when presented with the server's fingerprint</li>
-	<li>Issue your password when prompted</li>
-	<li>test a few commands in the remote session</li>
-	<li>The <code>exit</code> command should take you back to your first shell in Cygwin</li>
-</ol>
-</li>
-	<li><code>Exit</code> should terminate the Cygwin shell.</li>
-</ol>
-</li>
-</ol>
-</section>
-<section name="HBase">
-If all previous configurations are working properly, we just need some tinkering with the <strong>HBase config</strong> files so that all paths resolve properly on Windows/Cygwin. All files and paths referenced here start from the HBase <code>[<strong>installation</strong> directory]</code> as working directory.
-<ol>
-	<li>HBase uses the <code>./conf/<strong>hbase-env.sh</strong></code> to configure its dependencies on the runtime environment. Copy and uncomment the following lines just underneath their originals, and change them to fit your environment. They should read something like:
-<ol>
-	<li><code>export JAVA_HOME=/usr/local/<em>&lt;jre name&gt;</em></code></li>
-	<li><code>export HBASE_IDENT_STRING=$HOSTNAME</code> as this most likely does not include spaces.</li>
-</ol>
-</li>
-	<li>HBase uses the ./conf/<code><strong>hbase-default.xml</strong></code> file for configuration. Some properties do not resolve to existing directories because the JVM runs on Windows. This is the major issue to keep in mind when working with Cygwin: within the shell all paths are *nix-alike, hence relative to the root <code>/</code>. However, every parameter that is to be consumed within the Windows processes themselves needs to be a Windows setting, hence <code>C:\</code>-alike. Change the following properties in the configuration file, adjusting paths where necessary to conform with your own installation (a sample snippet follows this list):
-<ol>
-	<li><code>hbase.rootdir</code> must read e.g. <code>file:///C:/cygwin/root/tmp/hbase/data</code></li>
-	<li><code>hbase.tmp.dir</code> must read <code>C:/cygwin/root/tmp/hbase/tmp</code></li>
-	<li><code>hbase.zookeeper.quorum</code> must read <code>127.0.0.1</code> because for some reason <code>localhost</code> doesn't seem to resolve properly on Cygwin.</li>
-</ol>
-</li>
-	<li>Make sure the configured <code>hbase.rootdir</code> and <code>hbase.tmp.dir</code> <strong>directories exist</strong> and have the proper<strong> rights</strong> set up e.g. by issuing a <code>chmod 777</code> on them.</li>
-</ol>
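-For illustration, the relevant snippet of the configuration file would then look something
-like the following (a sketch; adjust the paths to match your own installation):
-<pre>&lt;property&gt;
-  &lt;name&gt;hbase.rootdir&lt;/name&gt;
-  &lt;value&gt;file:///C:/cygwin/root/tmp/hbase/data&lt;/value&gt;
-&lt;/property&gt;
-&lt;property&gt;
-  &lt;name&gt;hbase.tmp.dir&lt;/name&gt;
-  &lt;value&gt;C:/cygwin/root/tmp/hbase/tmp&lt;/value&gt;
-&lt;/property&gt;
-&lt;property&gt;
-  &lt;name&gt;hbase.zookeeper.quorum&lt;/name&gt;
-  &lt;value&gt;127.0.0.1&lt;/value&gt;
-&lt;/property&gt;</pre>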
-</section>
-</section>
-<section name="Testing">
-<p>
-This should conclude the installation and configuration of Apache HBase on Windows using Cygwin. So it's time <strong>to test it</strong>.
-<ol>
-	<li>Start a Cygwin<strong> terminal</strong>, if you haven't already.</li>
-	<li>Change directory to the HBase <strong>installation</strong> using <code>cd /usr/local/hbase-<em>&lt;version&gt;</em></code>, preferably using auto-completion.</li>
-	<li><strong>Start HBase</strong> using the command <code>./bin/start-hbase.sh</code>
-<ol>
-	<li>When prompted to accept the SSH fingerprint, answer <code>yes</code>.</li>
-	<li>When prompted, provide your password (maybe multiple times).</li>
-	<li>When the command completes, the HBase server should have started.</li>
-	<li>However, to be absolutely certain, check the logs in the <code>./logs</code> directory for any exceptions.</li>
-</ol>
-</li>
-	<li>Next we <strong>start the HBase shell</strong> using the command <code>./bin/hbase shell</code></li>
-	<li>We run some simple <strong>test commands</strong>
-<ol>
-	<li>Create a simple table using command <code>create 'test', 'data'</code></li>
-	<li>Verify the table exists using the command <code>list</code></li>
-	<li>Insert data into the table using e.g.
-<pre>put 'test', 'row1', 'data:1', 'value1'
-put 'test', 'row2', 'data:2', 'value2'
-put 'test', 'row3', 'data:3', 'value3'</pre>
-</li>
-	<li>List all rows in the table using the command <code>scan 'test'</code> that should list all the rows previously inserted. Notice how 3 new columns were added without changing the schema!</li>
-	<li>Finally we get rid of the table by issuing <code>disable 'test'</code> followed by <code>drop 'test'</code>, and verify with <code>list</code>, which should give an empty listing.</li>
-</ol>
-</li>
-	<li><strong>Leave the shell</strong> by <code>exit</code></li>
-	<li>To <strong>stop the HBase server</strong> issue the <code>./bin/stop-hbase.sh</code> command. And wait for it to complete!!! Killing the process might corrupt your data on disk.</li>
-	<li>In case of <strong>problems</strong>,
-<ol>
-	<li>verify the HBase logs in the <code>./logs</code> directory.</li>
-	<li>Try to fix the problem</li>
-	<li>Get help on the forums or IRC (<code>#hbase@freenode.net</code>). People are very active and keen to help out!</li>
-	<li>Stop, restart, and retest the server.</li>
-</ol>
-</li>
-</ol>
-</p>
-</section>
-
-<section name="Conclusion">
-<p>
-Now your <strong>HBase </strong>server is running, <strong>start coding</strong> and build that next killer app on this particular, but scalable datastore!
-</p>
-</section>
-</body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/export_control.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/export_control.xml b/src/main/site/xdoc/export_control.xml
deleted file mode 100644
index 0fd5c4f..0000000
--- a/src/main/site/xdoc/export_control.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
-          "http://forrest.apache.org/dtd/document-v20.dtd">
-
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>
-      Export Control
-    </title>
-  </properties>
-  <body>
-  <section name="Export Control">
-<p>
-This distribution uses or includes cryptographic software. The country in
-which you currently reside may have restrictions on the import, possession,
-use, and/or re-export to another country, of encryption software. BEFORE
-using any encryption software, please check your country's laws, regulations
-and policies concerning the import, possession, or use, and re-export of
-encryption software, to see if this is permitted. See the
-<a href="http://www.wassenaar.org/">Wassenaar Arrangement</a> for more
-information.</p>
-<p>
-The U.S. Government Department of Commerce, Bureau of Industry and Security 
-(BIS), has classified this software as Export Commodity Control Number (ECCN) 
-5D002.C.1, which includes information security software using or performing 
-cryptographic functions with asymmetric algorithms. The form and manner of this
-Apache Software Foundation distribution makes it eligible for export under the 
-License Exception ENC Technology Software Unrestricted (TSU) exception (see the
-BIS Export Administration Regulations, Section 740.13) for both object code and
-source code.</p>
-<p>
-Apache HBase uses the built-in java cryptography libraries. See Oracle's
-information regarding
-<a href="http://www.oracle.com/us/products/export/export-regulations-345813.html">Java cryptographic export regulations</a>
-for more details.</p>
-  </section>
-  </body>
-</document>


[05/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/index.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/index.xml b/src/main/site/xdoc/index.xml
deleted file mode 100644
index 1848d40..0000000
--- a/src/main/site/xdoc/index.xml
+++ /dev/null
@@ -1,109 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>Apache HBase&#8482; Home</title>
-    <link rel="shortcut icon" href="/images/favicon.ico" />
-  </properties>
-
-  <body>
-    <section name="Welcome to Apache HBase&#8482;">
-        <p><a href="http://www.apache.org/">Apache</a> HBase&#8482; is the <a href="http://hadoop.apache.org/">Hadoop</a> database, a distributed, scalable, big data store.
-    </p>
-    <p>Use Apache HBase&#8482; when you need random, realtime read/write access to your Big Data.
-    This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
-Apache HBase is an open-source, distributed, versioned, non-relational database modeled after Google's <a href="http://research.google.com/archive/bigtable.html">Bigtable: A Distributed Storage System for Structured Data</a> by Chang et al.
- Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
-    </p>
-  </section>
-    <section name="Download">
-    <p>
-    Click <b><a href="http://www.apache.org/dyn/closer.cgi/hbase/">here</a></b> to download Apache HBase&#8482;.
-    </p>
-    </section>
-    <section name="Features">
-    <p>
-<ul>
-    <li>Linear and modular scalability.</li>
-    <li>Strictly consistent reads and writes.</li>
-    <li>Automatic and configurable sharding of tables.</li>
-    <li>Automatic failover support between RegionServers.</li>
-    <li>Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables.</li>
-    <li>Easy to use Java API for client access.</li>
-    <li>Block cache and Bloom Filters for real-time queries.</li>
-    <li>Query predicate push down via server-side Filters.</li>
-    <li>Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options.</li>
-    <li>Extensible JRuby-based (JIRB) shell.</li>
-    <li>Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia; or via JMX.</li>
-</ul>
-</p>
-</section>
-     <section name="More Info">
-   <p>See the <a href="http://hbase.apache.org/book.html#arch.overview">Architecture Overview</a>, the <a href="http://hbase.apache.org/book.html#faq">Apache HBase Reference Guide FAQ</a>,
-    and the other documentation links.
-   </p>
-   <dl>
-     <dt>Export Control</dt>
-   <dd><p>The HBase distribution includes cryptographic software. See the export control notice <a href="export_control.html">here</a>.
-   </p></dd>
-     <dt>Code Of Conduct</dt>
-   <dd><p>We expect participants in discussions on the HBase project mailing lists, Slack and IRC channels, and JIRA issues to abide by the Apache Software Foundation's <a href="http://apache.org/foundation/policies/conduct.html">Code of Conduct</a>. More information can be found <a href="coc.html">here</a>.
-   </p></dd>
- </dl>
-</section>
-
-     <section name="News">
-       <p>August 4th, 2017 <a href="https://easychair.org/cfp/HBaseConAsia2017">HBaseCon Asia 2017</a> @ the Huawei Campus in Shenzhen, China</p>
-       <p>June 12th, 2017 <a href="https://easychair.org/cfp/hbasecon2017">HBaseCon2017</a> at the Crittenden Buildings on the Google Mountain View Campus</p>
-       <p>April 25th, 2017 <a href="https://www.meetup.com/hbaseusergroup/events/239291716/">Meetup</a> @ Visa in Palo Alto</p>
-        <p>December 8th, 2016 <a href="https://www.meetup.com/hbaseusergroup/events/235542241/">Meetup@Splice</a> in San Francisco</p>
-       <p>September 26th, 2016 <a href="http://www.meetup.com/HBase-NYC/events/233024937/">HBaseConEast2016</a> at Google in Chelsea, NYC</p>
-         <p>May 24th, 2016 <a href="http://www.hbasecon.com/">HBaseCon2016</a> at The Village, 969 Market, San Francisco</p>
-       <p>June 25th, 2015 <a href="http://www.zusaar.com/event/14057003">HBase Summer Meetup 2015</a> in Tokyo</p>
-       <p>May 7th, 2015 <a href="http://hbasecon.com/">HBaseCon2015</a> in San Francisco</p>
-       <p>February 17th, 2015 <a href="http://www.meetup.com/hbaseusergroup/events/219260093/">HBase meetup around Strata+Hadoop World</a> in San Jose</p>
-       <p>January 15th, 2015 <a href="http://www.meetup.com/hbaseusergroup/events/218744798/">HBase meetup @ AppDynamics</a> in San Francisco</p>
-       <p>November 20th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/205219992/">HBase meetup @ WANdisco</a> in San Ramon</p>
-       <p>October 27th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/207386102/">HBase Meetup @ Apple</a> in Cupertino</p>
-       <p>October 15th, 2014 <a href="http://www.meetup.com/HBase-NYC/events/207655552/">HBase Meetup @ Google</a> on the night before Strata/HW in NYC</p>
-       <p>September 25th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/203173692/">HBase Meetup @ Continuuity</a> in Palo Alto</p>
-         <p>August 28th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/197773762/">HBase Meetup @ Sift Science</a> in San Francisco</p>
-         <p>July 17th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/190994082/">HBase Meetup @ HP</a> in Sunnyvale</p>
-         <p>June 5th, 2014 <a href="http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179081342/">HBase BOF at Hadoop Summit</a>, San Jose Convention Center</p>
-         <p>May 5th, 2014 <a href="http://www.hbasecon.com/">HBaseCon2014</a> at the Hilton San Francisco on Union Square</p>
-         <p>March 12th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/160757912/">HBase Meetup @ Ancestry.com</a> in San Francisco</p>
-      <p><small><a href="old_news.html">Old News</a></small></p>
-    </section>
-  </body>
-
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/metrics.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/metrics.xml b/src/main/site/xdoc/metrics.xml
deleted file mode 100644
index f3ab7d7..0000000
--- a/src/main/site/xdoc/metrics.xml
+++ /dev/null
@@ -1,150 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title> 
-      Apache HBase (TM) Metrics
-    </title>
-  </properties>
-
-  <body>
-    <section name="Introduction">
-      <p>
-      Apache HBase (TM) emits Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
-      </p>
-      </section>
-      <section name="Setup">
-      <p>First read up on Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
-      If you are using ganglia, the <a href="http://wiki.apache.org/hadoop/GangliaMetrics">GangliaMetrics</a>
-      wiki page is a useful read.</p>
-      <p>To have HBase emit metrics, edit <code>$HBASE_HOME/conf/hadoop-metrics.properties</code>
-      and enable metric 'contexts' per plugin.  As of this writing, Hadoop supports
-      <strong>file</strong> and <strong>ganglia</strong> plugins.
-      Yes, the hbase metrics file is named hadoop-metrics rather than
-      <em>hbase-metrics</em> because, currently at least, the Hadoop metrics system has the
-      properties filename hardcoded. Per metrics <em>context</em>,
-      comment out the NullContext and enable one or more plugins instead.
-      </p>
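-      <p>
-      For example, to send the <em>hbase</em> context to ganglia instead of the NullContext,
-      the corresponding stanza of <code>hadoop-metrics.properties</code> would look roughly
-      like this (a sketch; the gmetad host and port are placeholders for your own setup):
-      </p>
-      <source>
-# Configuration of the "hbase" context for ganglia
-# hbase.class=org.apache.hadoop.metrics.spi.NullContext
-hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
-hbase.period=60
-hbase.servers=GMETAD_HOST:8649
-      </source>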
-      <p>
-      If you enable the <em>hbase</em> context, on regionservers you'll see total requests since last
-      metric emission, count of regions and storefiles as well as a count of memstore size.
-      On the master, you'll see a count of the cluster's requests.
-      </p>
-      <p>
-      Enabling the <em>rpc</em> context is good if you are interested in seeing
-      metrics on each hbase rpc method invocation (counts and time taken).
-      </p>
-      <p>
-      The <em>jvm</em> context is
-      useful for long-term stats on running hbase jvms -- memory used, thread counts, etc.
-      As of this writing, if more than one JVM is running and emitting metrics, at least
-      in ganglia, the stats are aggregated rather than reported per instance.
-      </p>
-    </section>
-
-    <section name="Using with JMX">
-      <p>
-      In addition to the standard output contexts supported by the Hadoop 
-      metrics package, you can also export HBase metrics via Java Management 
-      Extensions (JMX).  This will allow viewing HBase stats in JConsole or 
-      any other JMX client.
-      </p>
-      <section name="Enable HBase stats collection">
-      <p>
-      To enable JMX support in HBase, first edit 
-      <code>$HBASE_HOME/conf/hadoop-metrics.properties</code> to support 
-      metrics refreshing. (If you're running 0.94.1 or above, or have already configured 
-      <code>hadoop-metrics.properties</code> for another output context,
-      you can skip this step).
-      </p>
-      <source>
-# Configuration of the "hbase" context for null
-hbase.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
-hbase.period=60
-
-# Configuration of the "jvm" context for null
-jvm.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
-jvm.period=60
-
-# Configuration of the "rpc" context for null
-rpc.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
-rpc.period=60
-      </source>
-      </section>
-      <section name="Setup JMX remote access">
-      <p>
-      For remote access, you will need to configure JMX remote passwords 
-      and access profiles.  Create the files:
-      </p>
-      <dl>
-        <dt><code>$HBASE_HOME/conf/jmxremote.passwd</code> (set permissions 
-        to 600)</dt>
-        <dd>
-        <source>
-monitorRole monitorpass
-controlRole controlpass
-        </source>
-        </dd>
-        
-        <dt><code>$HBASE_HOME/conf/jmxremote.access</code></dt>
-        <dd>
-        <source>
-monitorRole readonly
-controlRole readwrite
-        </source>
-        </dd>
-      </dl>
-      </section>
-      <section name="Configure JMX in HBase startup">
-      <p>
-      Finally, edit the <code>$HBASE_HOME/conf/hbase-env.sh</code>
-      script to add JMX support: 
-      </p>
-      <dl>
-        <dt><code>$HBASE_HOME/conf/hbase-env.sh</code></dt>
-        <dd>
-        <p>Add the lines:</p>
-        <source>
-HBASE_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false"
-HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.password.file=$HBASE_HOME/conf/jmxremote.passwd"
-HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.access.file=$HBASE_HOME/conf/jmxremote.access"
-
-export HBASE_MASTER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10101"
-export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10102"
-        </source>
-        </dd>
-      </dl>
-      <p>
-      After restarting the processes you want to monitor, you should now be 
-      able to run JConsole (included with the JDK since JDK 5.0) to view 
-      the statistics via JMX.  HBase MBeans are exported under the 
-      <strong><code>hadoop</code></strong> domain in JMX.
-      </p>
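-      <p>
-      For example, to attach JConsole to the master process configured above from another
-      machine (a sketch; substitute your own host name and use the credentials from
-      <code>jmxremote.passwd</code>):
-      </p>
-      <source>
-jconsole MASTER_HOST:10101
-      </source>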
-      </section>
-      <section name="Understanding HBase Metrics">
-      <p>
-      For more information on understanding HBase metrics, see the <a href="book.html#hbase_metrics">metrics section</a> in the Apache HBase Reference Guide. 
-      </p>
-      </section>
-    </section>
-  </body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/old_news.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/old_news.xml b/src/main/site/xdoc/old_news.xml
deleted file mode 100644
index 94e1882..0000000
--- a/src/main/site/xdoc/old_news.xml
+++ /dev/null
@@ -1,92 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
-          "http://forrest.apache.org/dtd/document-v20.dtd">
-
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>
-      Old Apache HBase (TM) News
-    </title>
-  </properties>
-  <body>
-  <section name="Old News">
-         <p>February 10th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/163139322/">HBase Meetup @ Continuuity</a> in Palo Alto</p>
-         <p>January 30th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/158491762/">HBase Meetup @ Apple</a> in Cupertino</p>
-         <p>January 30th, 2014 <a href="http://www.meetup.com/Los-Angeles-HBase-User-group/events/160560282/">Los Angeles HBase User Group</a> in El Segundo</p>
-         <p>October 24th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/140759692/">HBase User</a> and <a href="http://www.meetup.com/hackathon/events/144366512/">Developer</a> Meetup at HortonWorks in Palo Alto</p>
-         <p>September 26, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/135862292/">HBase Meetup at Arista Networks</a> in San Francisco</p>
-         <p>August 20th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/120534362/">HBase Meetup at Flurry</a> in San Francisco</p>
-         <p>July 16th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/119929152/">HBase Meetup at Twitter</a> in San Francisco</p>
-         <p>June 25th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/119154442/">Hadoop Summit Meetup</a> at San Jose Convention Center</p>
-         <p>June 14th, 2013 <a href="http://kijicon.eventbrite.com/">KijiCon: Building Big Data Apps</a> in San Francisco.</p>
-         <p>June 13th, 2013 <a href="http://www.hbasecon.com/">HBaseCon2013</a> in San Francisco.  Submit an Abstract!</p>
-         <p>June 12th, 2013 <a href="http://www.meetup.com/hackathon/events/123403802/">HBaseConHackAthon</a> at the Cloudera office in San Francisco.</p>
-         <p>April 11th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/103587852/">HBase Meetup at AdRoll</a> in San Francisco</p>
-         <p>February 28th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/96584102/">HBase Meetup at Intel Mission Campus</a></p>
-         <p>February 19th, 2013 <a href="http://www.meetup.com/hackathon/events/103633042/">Developers PowWow</a> at HortonWorks' new digs</p>
-         <p>January 23rd, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/91381312/">HBase Meetup at WibiData World HQ!</a></p>
-            <p>December 4th, 2012 <a href="http://www.meetup.com/hackathon/events/90536432/">0.96 Bug Squashing and Testing Hackathon</a> at Cloudera, SF.</p>
-            <p>October 29th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/82791572/">HBase User Group Meetup</a> at Wize Commerce in San Mateo.</p>
-            <p>October 25th, 2012 <a href="http://www.meetup.com/HBase-NYC/events/81728932/">Strata/Hadoop World HBase Meetup.</a> in NYC</p>
-            <p>September 11th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/80621872/">Contributor's Pow-Wow at HortonWorks HQ.</a></p>
-            <p>August 8th, 2012 <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Apache HBase 0.94.1 is available for download</a></p>
-            <p>June 15th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/59829652/">Birds-of-a-feather</a> in San Jose, day after <a href="http://hadoopsummit.org">Hadoop Summit</a></p>
-            <p>May 23rd, 2012 <a href="http://www.meetup.com/hackathon/events/58953522/">HackConAthon</a> in Palo Alto</p>
-            <p>May 22nd, 2012 <a href="http://www.hbasecon.com">HBaseCon2012</a> in San Francisco</p>
-            <p>March 27th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/56021562/">Meetup @ StumbleUpon</a> in San Francisco</p>
-
-            <p>January 19th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/46702842/">Meetup @ EBay</a></p>
-            <p>January 23rd, 2012 Apache HBase 0.92.0 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
-            <p>December 23rd, 2011 Apache HBase 0.90.5 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
-            <p>November 29th, 2011 <a href="http://www.meetup.com/hackathon/events/41025972/">Developer Pow-Wow in SF</a> at Salesforce HQ</p>
-            <p>November 7th, 2011 <a href="http://www.meetup.com/hbaseusergroup/events/35682812/">HBase Meetup in NYC (6PM)</a> at the AppNexus office</p>
-            <p>August 22nd, 2011 <a href="http://www.meetup.com/hbaseusergroup/events/28518471/">HBase Hackathon (11AM) and Meetup (6PM)</a> at FB in PA</p>
-            <p>June 30th, 2011 <a href="http://www.meetup.com/hbaseusergroup/events/20572251/">HBase Contributor Day</a>, the day after the <a href="http://developer.yahoo.com/events/hadoopsummit2011/">Hadoop Summit</a> hosted by Y!</p>
-            <p>June 8th, 2011 <a href="http://berlinbuzzwords.de/wiki/hbase-workshop-and-hackathon">HBase Hackathon</a> in Berlin to coincide with <a href="http://berlinbuzzwords.de/">Berlin Buzzwords</a></p>
-            <p>May 19th, 2011 Apache HBase 0.90.3 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
-            <p>April 12th, 2011 Apache HBase 0.90.2 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
-            <p>March 21st, <a href="http://www.meetup.com/hackathon/events/16770852/">HBase 0.92 Hackathon at StumbleUpon, SF</a></p>
-            <p>February 22nd, <a href="http://www.meetup.com/hbaseusergroup/events/16492913/">HUG12: February HBase User Group at StumbleUpon SF</a></p>
-            <p>December 13th, <a href="http://www.meetup.com/hackathon/calendar/15597555/">HBase Hackathon: Coprocessor Edition</a></p>
-      <p>November 19th, <a href="http://huguk.org/">Hadoop HUG in London</a> is all about Apache HBase</p>
-      <p>November 15-19th, <a href="http://www.devoxx.com/display/Devoxx2K10/Home">Devoxx</a> features HBase Training and multiple HBase presentations</p>
-      <p>October 12th, HBase-related presentations by core contributors and users at <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/">Hadoop World 2010</a></p>
-      <p>October 11th, <a href="http://www.meetup.com/hbaseusergroup/calendar/14606174/">HUG-NYC: HBase User Group NYC Edition</a> (Night before Hadoop World)</p>
-      <p>June 30th, <a href="http://www.meetup.com/hbaseusergroup/calendar/13562846/">Apache HBase Contributor Workshop</a> (Day after Hadoop Summit)</p>
-      <p>May 10th, 2010: Apache HBase graduates from Hadoop sub-project to Apache Top Level Project </p>
-      <p>Sign up for <a href="http://www.meetup.com/hbaseusergroup/calendar/12689490/">HBase User Group Meeting, HUG10</a> hosted by Trend Micro, April 19th, 2010</p>
-
-      <p><a href="http://www.meetup.com/hbaseusergroup/calendar/12689351/">HBase User Group Meeting, HUG9</a> hosted by Mozilla, March 10th, 2010</p>
-      <p>Sign up for the <a href="http://www.meetup.com/hbaseusergroup/calendar/12241393/">HBase User Group Meeting, HUG8</a>, January 27th, 2010 at StumbleUpon in SF</p>
-      <p>September 8th, 2009: Apache HBase 0.20.0 is faster, stronger, slimmer, and sweeter tasting than any previous Apache HBase release.  Get it off the <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Releases</a> page.</p>
-      <p><a href="http://dev.us.apachecon.com/c/acus2009/">ApacheCon</a> in Oakland: November 2-6th, 2009:
-      The Apache Foundation will be celebrating its 10th anniversary in beautiful Oakland by the Bay. Lots of good talks and meetups including an HBase presentation by a couple of the lads.</p>
-      <p>HBase at Hadoop World in NYC: October 2nd, 2009: A few of us will be talking on Practical HBase out east at <a href="http://www.cloudera.com/hadoop-world-nyc">Hadoop World: NYC</a>.</p>
-      <p>HUG7 and HBase Hackathon: August 7th-9th, 2009 at StumbleUpon in SF: Sign up for the <a href="http://www.meetup.com/hbaseusergroup/calendar/10950511/">HBase User Group Meeting, HUG7</a> or for the <a href="http://www.meetup.com/hackathon/calendar/10951718/">Hackathon</a> or for both (all are welcome!).</p>
-      <p>June, 2009 -- HBase at HadoopSummit2009 and at NOSQL: See the <a href="http://wiki.apache.org/hadoop/HBase/HBasePresentations">presentations</a></p>
-      <p>March 3rd, 2009 -- HUG6: <a href="http://www.meetup.com/hbaseusergroup/calendar/9764004/">HBase User Group 6</a></p>
-      <p>January 30th, 2009 -- LA Hackathon: <a href="http://www.meetup.com/hbasela/calendar/9450876/">HBase January Hackathon Los Angeles</a> at <a href="http://streamy.com">Streamy</a> in Manhattan Beach</p>
-  </section>
-  </body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/poweredbyhbase.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/poweredbyhbase.xml b/src/main/site/xdoc/poweredbyhbase.xml
deleted file mode 100644
index ff1ba59..0000000
--- a/src/main/site/xdoc/poweredbyhbase.xml
+++ /dev/null
@@ -1,398 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>Powered By Apache HBase&#153;</title>
-  </properties>
-
-<body>
-<section name="Powered By Apache HBase&#153;">
-  <p>This page lists some institutions and projects which are using HBase. To
-    have your organization added, file a documentation JIRA or email
-    <a href="mailto:dev@hbase.apache.org">hbase-dev</a> with the relevant
-    information. If you notice out-of-date information, use the same avenues to
-    report it.
-  </p>
-  <p><b>These items are user-submitted and the HBase team assumes no responsibility for their accuracy.</b></p>
-  <dl>
-  <dt><a href="http://www.adobe.com">Adobe</a></dt>
-  <dd>We currently have about 30 nodes running HDFS, Hadoop and HBase in clusters
-    ranging from 5 to 14 nodes on both production and development. We plan a
-    deployment on an 80-node cluster. We are using HBase in several areas from
-    social services to structured data and processing for internal use. We constantly
-    write data to HBase and run MapReduce jobs to process it, then store it back to
-    HBase or external systems. Our production cluster has been running since Oct 2008.</dd>
-
-  <dt><a href="http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase">Project Astro</a></dt>
-  <dd>
-    Astro provides fast Spark SQL/DataFrame capabilities to HBase data,
-    featuring super-efficient access to multi-dimensional HBase rows through
-    native Spark execution in HBase coprocessor plus systematic and accurate
-    partition pruning and predicate pushdown from arbitrarily complex data
-    filtering logic. The batch load is optimized to run on the Spark execution
-    engine. Note that <a href="http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase">Spark-SQL-on-HBase</a>
-    is the release site. Interested parties are free to make clones and claim
-    to be "latest(and active)", but they are not endorsed by the owner.
-  </dd>
-
-  <dt><a href="http://axibase.com/products/axibase-time-series-database/">Axibase
-    Time Series Database (ATSD)</a></dt>
-  <dd>ATSD runs on top of HBase to collect, analyze and visualize time series
-    data at scale. ATSD capabilities include optimized storage schema, built-in
-    rule engine, forecasting algorithms (Holt-Winters and ARIMA) and next-generation
-    graphics designed for high-frequency data. Primary use cases: IT infrastructure
-    monitoring, data consolidation, operational historian in OPC environments.</dd>
-
-  <dt><a href="http://www.benipaltechnologies.com">Benipal Technologies</a></dt>
-  <dd>We have a 35-node cluster used for HBase and MapReduce with Lucene / SOLR
-    and Katta integration to create and fine-tune our search databases. Currently,
-    our HBase installation has over 10 billion rows with hundreds of datapoints per row.
-    We compute over 10<sup>18</sup> calculations daily using MapReduce directly on HBase. We
-    heart HBase.</dd>
-
-  <dt><a href="https://github.com/ermanpattuk/BigSecret">BigSecret</a></dt>
-  <dd>BigSecret is a security framework that is designed to secure Key-Value data,
-    while preserving efficient processing capabilities. It achieves cell-level
-    security, using combinations of different cryptographic techniques, in an
-    efficient and secure manner. It provides a wrapper library around HBase.</dd>
-
-  <dt><a href="http://caree.rs">Caree.rs</a></dt>
-  <dd>Accelerated hiring platform for HiTech companies. We use HBase and Hadoop
-    for all aspects of our backend - job and company data storage, analytics
-    processing, machine learning algorithms for our hire recommendation engine.
-    Our live production site is directly served from HBase. We use Cascading for
-    running offline data processing jobs.</dd>
-
-  <dt><a href="http://www.celer-tech.com/">Celer Technologies</a></dt>
-  <dd>Celer Technologies is a global financial software company that creates
-    modular-based systems that have the flexibility to meet tomorrow's business
-    environment, today.  The Celer framework uses Hadoop/HBase for storing all
-    financial data for trading, risk, clearing in a single data store. With our
-    flexible framework and all the data in Hadoop/HBase, clients can build new
-    features to quickly extract data based on their trading, risk and clearing
-    activities from one single location.</dd>
-
-  <dt><a href="http://www.explorys.net">Explorys</a></dt>
-  <dd>Explorys uses an HBase cluster containing over a billion anonymized clinical
-    records, to enable subscribers to search and analyze patient populations,
-    treatment protocols, and clinical outcomes.</dd>
-
-  <dt><a href="http://www.facebook.com/notes/facebook-engineering/the-underlying-technology-of-messages/454991608919">Facebook</a></dt>
-  <dd>Facebook uses HBase to power their Messages infrastructure.</dd>
-
-  <dt><a href="http://www.filmweb.pl">Filmweb</a></dt>
-  <dd>Filmweb is a film web portal with a large dataset of films, persons and
-    movie-related entities. We have just started a small cluster of 3 HBase nodes
-    to handle our web cache persistence layer. We plan to increase the cluster
-    size, and also to start migrating some of the data from our databases which
-    have some demanding scalability requirements.</dd>
-
-  <dt><a href="http://www.flurry.com">Flurry</a></dt>
-  <dd>Flurry provides mobile application analytics. We use HBase and Hadoop for
-    all of our analytics processing, and serve all of our live requests directly
-    out of HBase on our 50 node production cluster with tens of billions of rows
-    over several tables.</dd>
-
-  <dt><a href="http://gumgum.com">GumGum</a></dt>
-  <dd>GumGum is an In-Image Advertising Platform. We use HBase on a 15-node
-    Amazon EC2 High-CPU Extra Large (c1.xlarge) cluster for both real-time data
-    and analytics. Our production cluster has been running since June 2010.</dd>
-
-  <dt><a href="http://helprace.com/help-desk/">Helprace</a></dt>
-  <dd>Helprace is a customer service platform which uses Hadoop for analytics
-    and internal searching and filtering. Being on HBase we can share our HBase
-    and Hadoop cluster with other Hadoop processes - this particularly helps in
-    keeping community speeds up. We use Hadoop and HBase on a small cluster whose
-    machines have 4 cores and 32 GB RAM each.</dd>
-
-  <dt><a href="http://hubspot.com">HubSpot</a></dt>
-  <dd>HubSpot is an online marketing platform, providing analytics, email, and
-    segmentation of leads/contacts.  HBase is our primary datastore for our customers'
-    customer data, with multiple HBase clusters powering the majority of our
-    product.  We have nearly 200 RegionServers across the various clusters, and
-    2 Hadoop clusters also with nearly 200 TaskTrackers.  We use c1.xlarge in EC2
-    for both, but are starting to move some of that to baremetal hardware.  We've
-    been running HBase for over 2 years.</dd>
-
-  <dt><a href="http://www.infolinks.com/">Infolinks</a></dt>
-  <dd>Infolinks is an In-Text ad provider. We use HBase to process advertisement
-    selection and user events for our In-Text ad network. The reports generated
-    from HBase are used as feedback for our production system to optimize ad
-    selection.</dd>
-
-  <dt><a href="http://www.kalooga.com">Kalooga</a></dt>
-  <dd>Kalooga is a discovery service for image galleries. We use Hadoop, HBase
-    and Pig on a 20-node cluster for our crawling, analysis and events
-    processing.</dd>
-
-  <dt><a href="http://www.leanxcale.com/">LeanXcale</a></dt>
-  <dd>LeanXcale provides an ultra-scalable transactional &amp; SQL database that
-  stores its data on HBase and is able to scale to thousands of nodes. It
-  also provides a standalone full-ACID HBase with transactions across
-  arbitrary sets of rows and tables.</dd>
-
-
-  <dt><a href="http://www.mahalo.com">Mahalo</a></dt>
-  <dd>Mahalo, "...the world's first human-powered search engine". All the markup
-    that powers the wiki is stored in HBase. It's been in use for a few months now.
-    MediaWiki - the same software that powers Wikipedia - has version/revision control.
-    Mahalo's in-house editors produce a lot of revisions per day, which was not
-    working well in an RDBMS. An HBase-based solution for this was built and tested,
-    and the data migrated out of MySQL and into HBase. Right now it's at something
-    like 6 million items in HBase. The upload tool runs every hour from a shell
-    script to back up that data, and on 6 nodes takes about 5-10 minutes to run -
-    and does not slow down production at all.</dd>
-
-  <dt><a href="http://www.meetup.com">Meetup</a></dt>
-  <dd>Meetup is on a mission to help the world’s people self-organize into local
-    groups.  We use Hadoop and HBase to power a site-wide, real-time activity
-    feed system for all of our members and groups.  Group activity is written
-    directly to HBase, and indexed per member, with the member's custom feed
-    served directly from HBase for incoming requests.  We're running HBase
-    0.20.0 on an 11-node cluster.</dd>
-
-  <dt><a href="http://www.mendeley.com">Mendeley</a></dt>
-  <dd>Mendeley is creating a platform for researchers to collaborate and share
-    their research online. HBase is helping us to create the world's largest
-    research paper collection and is being used to store all our raw imported data.
-    We use a lot of MapReduce jobs to process these papers into pages displayed
-    on the site. We also use HBase with Pig to do analytics and produce the article
-    statistics shown on the web site. You can find out more about how we use HBase
-    in the <a href="http://www.slideshare.net/danharvey/hbase-at-mendeley">HBase
-    At Mendeley</a> slide presentation.</dd>
-
-  <dt><a href="http://www.ngdata.com">NGDATA</a></dt>
-  <dd>NGDATA delivers <a href="http://www.ngdata.com/site/products/lily.html">Lily</a>,
-    the consumer intelligence solution that delivers a unique combination of Big
-    Data management, machine learning technologies and consumer intelligence
-    applications in one integrated solution to allow better, and more dynamic,
-    consumer insights. Lily allows companies to process and analyze massive structured
-    and unstructured data, scale storage elastically and locate actionable data
-    quickly from large data sources in near real time.</dd>
-
-  <dt><a href="http://ning.com">Ning</a></dt>
-  <dd>Ning uses HBase to store and serve the results of processing user events
-    and log files, which allows us to provide near-real time analytics and
-    reporting. We use a small cluster of commodity machines with 4 cores and 16GB
-    of RAM per machine to handle all our analytics and reporting needs.</dd>
-
-  <dt><a href="http://www.worldcat.org">OCLC</a></dt>
-  <dd>OCLC uses HBase as the main data store for WorldCat, a union catalog which
-    aggregates the collections of 72,000 libraries in 112 countries and territories.
-    WorldCat currently comprises nearly 1 billion records with nearly 2
-    billion library ownership indications. We're running a 50-node HBase cluster
-    and a separate offline map-reduce cluster.</dd>
-
-  <dt><a href="http://olex.openlogic.com">OpenLogic</a></dt>
-  <dd>OpenLogic stores all the world's Open Source packages, versions, files,
-    and lines of code in HBase for both near-real-time access and analytical
-    purposes. The production cluster has well over 100TB of disk spread across
-    nodes with 32GB+ RAM and dual-quad or dual-hex core CPUs.</dd>
-
-  <dt><a href="http://www.openplaces.org">Openplaces</a></dt>
-  <dd>Openplaces is a search engine for travel that uses HBase to store terabytes
-    of web pages and travel-related entity records (countries, cities, hotels,
-    etc.). We have dozens of MapReduce jobs that crunch data on a daily basis.
-    We use a 20-node cluster for development, a 40-node cluster for offline
-    production processing and an EC2 cluster for the live web site.</dd>
-
-  <dt><a href="http://www.pnl.gov">Pacific Northwest National Laboratory</a></dt>
-  <dd>Hadoop and HBase (Cloudera distribution) are being used within PNNL's
-    Computational Biology &amp; Bioinformatics Group for a systems biology data
-    warehouse project that integrates high throughput proteomics and transcriptomics
-    data sets coming from instruments in the Environmental  Molecular Sciences
-    Laboratory, a US Department of Energy national user facility located at PNNL.
-    The data sets are being merged and annotated with other public genomics
-    information in the data warehouse environment, with Hadoop analysis programs
-    operating on the annotated data in the HBase tables. This work is hosted by
-    <a href="http://www.pnl.gov/news/release.aspx?id=908">olympus</a>, a large PNNL
-    institutional computing cluster, with the HBase tables being stored in olympus's
-    Lustre file system.</dd>
-
-  <dt><a href="http://www.readpath.com/">ReadPath</a></dt>
-  <dd>ReadPath uses HBase to store several hundred million RSS items and a dictionary
-    for its RSS newsreader. ReadPath is currently running on an 8-node cluster.</dd>
-
-  <dt><a href="http://resu.me/">resu.me</a></dt>
-  <dd>Career network for the net generation. We use HBase and Hadoop for all
-    aspects of our backend - user and resume data storage, analytics processing,
-    machine learning algorithms for our job recommendation engine. Our live
-    production site is directly served from HBase. We use Cascading for running
-    offline data processing jobs.</dd>
-
-  <dt><a href="http://www.runa.com/">Runa Inc.</a></dt>
-  <dd>Runa Inc. offers a SaaS that enables online merchants to offer dynamic
-    per-consumer, per-product promotions embedded in their website. To implement
-    this we collect the click streams of all their visitors to determine along
-    with the rules of the merchant what promotion to offer the visitor at different
-    points of their browsing the Merchant website. So we have lots of data and have
-    to do lots of off-line and real-time analytics. HBase is the core for us.
-    We also use Clojure and our own open sourced distributed processing framework,
-    Swarmiji. The HBase Community has been key to our forward movement with HBase.
-    We're looking for experienced developers to join us to help make things go even
-    faster!</dd>
-
-  <dt><a href="http://www.sematext.com/">Sematext</a></dt>
-  <dd>Sematext runs
-    <a href="http://www.sematext.com/search-analytics/index.html">Search Analytics</a>,
-    a service that uses HBase to store search activity and MapReduce to produce
-    reports showing user search behaviour and experience. Sematext runs
-    <a href="http://www.sematext.com/spm/index.html">Scalable Performance Monitoring (SPM)</a>,
-    a service that uses HBase to store performance data over time, crunch it with
-    the help of MapReduce, and display it in a visually rich browser-based UI.
-    Interestingly, SPM features
-    <a href="http://www.sematext.com/spm/hbase-performance-monitoring/index.html">SPM for HBase</a>,
-    which is specifically designed to monitor all HBase performance metrics.</dd>
-
-  <dt><a href="http://www.socialmedia.com/">SocialMedia</a></dt>
-  <dd>SocialMedia uses HBase to store and process user events which allows us to
-    provide near-realtime user metrics and reporting. HBase forms the heart of
-    our Advertising Network data storage and management system. We use HBase as
-    a data source and sink for both realtime request cycle queries and as a
-    backend for mapreduce analysis.</dd>
-
-  <dt><a href="http://www.splicemachine.com/">Splice Machine</a></dt>
-  <dd>Splice Machine is built on top of HBase.  Splice Machine is a full-featured
-    ANSI SQL database that provides real-time updates, secondary indices, ACID
-    transactions, optimized joins, triggers, and UDFs.</dd>
-
-  <dt><a href="http://www.streamy.com/">Streamy</a></dt>
-  <dd>Streamy is a recently launched realtime social news site.  We use HBase
-    for all of our data storage, query, and analysis needs, replacing an existing
-    SQL-based system.  This includes hundreds of millions of documents, sparse
-    matrices, logs, and everything else once done in the relational system. We
-    perform significant in-memory caching of query results similar to a traditional
-    Memcached/SQL setup as well as other external components to perform joining
-    and sorting.  We also run thousands of daily MapReduce jobs using HBase tables
-    for log analysis, attention data processing, and feed crawling.  HBase has
-    helped us scale and distribute in ways we could not otherwise, and the
-    community has provided consistent and invaluable assistance.</dd>
-
-  <dt><a href="http://www.stumbleupon.com/">Stumbleupon</a></dt>
-  <dd>Stumbleupon and <a href="http://su.pr">Su.pr</a> use HBase as a real time
-    data storage and analytics platform. Serving directly out of HBase, various site
-    features and statistics are kept up to date in a real time fashion. We also
-    use HBase as a map-reduce data source to overcome traditional query speed limits
-    in MySQL.</dd>
-
-  <dt><a href="http://www.tokenizer.org">Shopping Engine at Tokenizer</a></dt>
-  <dd>Shopping Engine at Tokenizer is a web crawler; it uses HBase to store URLs
-    and Outlinks (AnchorText + LinkedURL): more than a billion. It was initially
-    designed as a Nutch-Hadoop extension, then (due to a very specific 'shopping'
-    scenario) moved to SOLR + MySQL (InnoDB) (tens of thousands of queries per second),
-    and now to HBase. HBase is significantly faster due to: no need for huge
-    transaction logs, a column-oriented design that exactly matches the 'lazy' business
-    logic, data compression, and MapReduce support. The number of mutable 'indexes'
-    (in the RDBMS sense) is significantly reduced because each 'row::column' structure
-    is physically sorted by 'row'. The MySQL InnoDB engine is the best DB choice for
-    highly concurrent updates. However, the necessity to flush a block of data to
-    the hard drive even if only a few bytes changed is an obvious bottleneck. HBase
-    greatly helps: the 'delete-insert', 'mutable primary key', and 'natural primary
-    key' patterns, not so popular in modern DBMSs, become a big advantage with HBase.</dd>
-
-  <dt><a href="http://traackr.com/">Traackr</a></dt>
-  <dd>Traackr uses HBase to store and serve online influencer data in real-time.
-    We use MapReduce to frequently re-score our entire data set as we keep updating
-    influencer metrics on a daily basis.</dd>
-
-  <dt><a href="http://trendmicro.com/">Trend Micro</a></dt>
-  <dd>Trend Micro uses HBase as a foundation for cloud scale storage for a variety
-    of applications. We have been developing with HBase since version 0.1 and in
-    production since version 0.20.0.</dd>
-
-  <dt><a href="http://www.twitter.com">Twitter</a></dt>
-  <dd>Twitter runs HBase across its entire Hadoop cluster. HBase provides a
-    distributed, read/write backup of all MySQL tables in Twitter's production
-    backend, allowing engineers to run MapReduce jobs over the data while maintaining
-    the ability to apply periodic row updates (something that is more difficult
-    to do with vanilla HDFS).  A number of applications including people search
-    rely on HBase internally for data generation. Additionally, the operations
-    team uses HBase as a timeseries database for cluster-wide monitoring/performance
-    data.</dd>
-
-  <dt><a href="http://www.udanax.org">Udanax.org</a></dt>
-  <dd>Udanax.org is a URL shortener which uses a 10-node HBase cluster to store
-    URLs and web log data, and to serve real-time requests on its web server. This
-    application is now used by some Twitter clients and a number of web sites.
-    Currently API requests are almost 30 per second and web redirection requests
-    are about 300 per second.</dd>
-
-  <dt><a href="http://www.veoh.com/">Veoh Networks</a></dt>
-  <dd>Veoh Networks uses HBase to store and process visitor (human) and entity
-    (non-human) profiles which are used for behavioral targeting, demographic
-    detection, and personalization services.  Our site reads this data in
-    real-time (heavily cached) and submits updates via various batch map/reduce
-    jobs. With 25 million unique visitors a month storing this data in a traditional
-    RDBMS is not an option. We currently have a 24 node Hadoop/HBase cluster and
-    our profiling system is sharing this cluster with our other Hadoop data
-    pipeline processes.</dd>
-
-  <dt><a href="http://www.videosurf.com/">VideoSurf</a></dt>
-  <dd>VideoSurf - "The video search engine that has taught computers to see".
-    We're using HBase to persist various large graphs of data and other statistics.
-    HBase was a real win for us because it let us store substantially larger
-    datasets without the need for manually partitioning the data and its
-    column-oriented nature allowed us to create schemas that were substantially
-    more efficient for storing and retrieving data.</dd>
-
-  <dt><a href="http://www.visibletechnologies.com/">Visible Technologies</a></dt>
-  <dd>Visible Technologies uses Hadoop, HBase, Katta, and more to collect, parse,
-    store, and search hundreds of millions of Social Media content. We get incredibly
-    fast throughput and very low latency on commodity hardware. HBase enables our
-    business to exist.</dd>
-
-  <dt><a href="http://www.worldlingo.com/">WorldLingo</a></dt>
-  <dd>The WorldLingo Multilingual Archive. We use HBase to store millions of
-    documents that we scan using Map/Reduce jobs to machine translate them into
-    all or selected target languages from our set of available machine translation
-    languages. We currently store 12 million documents but plan to eventually
-    reach the 450 million mark. HBase allows us to scale out as we need to grow
-    our storage capacities. Combined with Hadoop to keep the data replicated and
-    therefore fail-safe we have the backbone our service can rely on now and in
-    the future. WorldLingo has been using HBase since December 2007 and is, along with
-    a few others, one of the longest-running HBase installations. Currently we are
-    running the latest HBase 0.20 and serving directly from it at
-    <a href="http://www.worldlingo.com/ma/enwiki/en/HBase">MultilingualArchive</a>.</dd>
-
-  <dt><a href="http://www.yahoo.com/">Yahoo!</a></dt>
-  <dd>Yahoo! uses HBase to store document fingerprints for detecting near-duplicates.
-    We have a cluster of a few nodes that runs HDFS, MapReduce, and HBase. The table
-    contains millions of rows. We use this for querying duplicated documents with
-    realtime traffic.</dd>
-
-  <dt><a href="http://h50146.www5.hp.com/products/software/security/icewall/eng/">HP IceWall SSO</a></dt>
-  <dd>HP IceWall SSO is a web-based single sign-on solution and uses HBase to store
-    user data to authenticate users. We have supported RDB and LDAP previously but
-    have newly supported HBase with a view to authenticate over tens of millions
-    of users and devices.</dd>
-
-  <dt><a href="http://www.ymc.ch/en/big-data-analytics-en?utm_source=hadoopwiki&amp;utm_medium=poweredbypage&amp;utm_campaign=ymc.ch">YMC AG</a></dt>
-  <dd><ul>
-    <li>operating a Cloudera Hadoop/HBase cluster for media monitoring purposes</li>
-    <li>offering technical and operative consulting for the Hadoop stack + ecosystem</li>
-    <li>editor of <a href="http://www.ymc.ch/en/hbase-split-visualisation-introducing-hannibal?utm_source=hadoopwiki&amp;utm_medium=poweredbypage&amp;utm_campaign=ymc.ch">Hannibal</a>, an open-source tool
-    to visualize HBase region sizes and splits that helps running HBase in production</li>
-  </ul></dd>
-  </dl>
-</section>
-</body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/pseudo-distributed.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/pseudo-distributed.xml b/src/main/site/xdoc/pseudo-distributed.xml
deleted file mode 100644
index 670f1e7..0000000
--- a/src/main/site/xdoc/pseudo-distributed.xml
+++ /dev/null
@@ -1,42 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
-          "http://forrest.apache.org/dtd/document-v20.dtd">
-
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title> 
-Running Apache HBase (TM) in pseudo-distributed mode
-    </title>
-  </properties>
-
-  <body>
-      <p>This page has been retired.  The contents have been moved to the 
-      <a href="http://hbase.apache.org/book.html#distributed">Distributed Operation: Pseudo- and Fully-distributed modes</a> section
- in the Reference Guide.
- </p>
-
- </body>
-
-</document>
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/replication.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/replication.xml b/src/main/site/xdoc/replication.xml
deleted file mode 100644
index a2fcfcb..0000000
--- a/src/main/site/xdoc/replication.xml
+++ /dev/null
@@ -1,35 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
-          "http://forrest.apache.org/dtd/document-v20.dtd">
-
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>
-      Apache HBase (TM) Replication
-    </title>
-  </properties>
-  <body>
-    <p>This information has been moved to <a href="http://hbase.apache.org/book.html#cluster_replication">the Cluster Replication</a> section of the <a href="http://hbase.apache.org/book.html">Apache HBase Reference Guide</a>.</p>
-  </body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/resources.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/resources.xml b/src/main/site/xdoc/resources.xml
deleted file mode 100644
index 19548b6..0000000
--- a/src/main/site/xdoc/resources.xml
+++ /dev/null
@@ -1,45 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>Other Apache HBase (TM) Resources</title>
-  </properties>
-
-<body>
-<section name="Other Apache HBase Resources">
-<section name="Books">
-<section name="HBase: The Definitive Guide">
-<p><a href="http://shop.oreilly.com/product/0636920014348.do">HBase: The Definitive Guide <i>Random Access to Your Planet-Size Data</i></a> by Lars George. Publisher: O'Reilly Media, Released: August 2011, Pages: 556.</p>
-</section>
-<section name="HBase In Action">
-<p><a href="http://www.manning.com/dimidukkhurana/">HBase In Action</a> By Nick Dimiduk and Amandeep Khurana.  Publisher: Manning, MEAP Began: January 2012, Softbound print: Fall 2012, Pages: 350.</p>
-</section>
-<section name="HBase Administration Cookbook">
-<p><a href="http://www.packtpub.com/hbase-administration-for-optimum-database-performance-cookbook/book">HBase Administration Cookbook</a> by Yifeng Jiang.  Publisher: PACKT Publishing, Release: Expected August 2012, Pages: 335.</p>
-</section>
-<section name="HBase High Performance Cookbook">
-  <p><a href="https://www.packtpub.com/big-data-and-business-intelligence/hbase-high-performance-cookbook">HBase High Performance Cookbook</a> by Ruchir Choudhry.  Publisher: PACKT Publishing, Release: January 2017, Pages: 350.</p>
-</section>
-</section>
-</section>
-</body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/sponsors.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/sponsors.xml b/src/main/site/xdoc/sponsors.xml
deleted file mode 100644
index 332f56a..0000000
--- a/src/main/site/xdoc/sponsors.xml
+++ /dev/null
@@ -1,50 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>Apache HBase&#153; Sponsors</title>
-  </properties>
-
-<body>
-<section name="Sponsors">
-    <p>First off, thanks to <a href="http://www.apache.org/foundation/thanks.html">all who sponsor</a>
-       our parent, the Apache Software Foundation.
-    </p>
-<p>The below companies have been gracious enough to provide their commerical tool offerings free of charge to the Apache HBase&#153; project.
-<ul>
-	<li>The crew at <a href="http://www.ej-technologies.com/">ej-technologies</a> have
-        been let us use <a href="http://www.ej-technologies.com/products/jprofiler/overview.html">JProfiler</a> for years now.</li>
-	<li>The lads at <a href="http://headwaysoftware.com/">headway software</a> have
-        given us a license for <a href="http://headwaysoftware.com/products/?code=Restructure101">Restructure101</a>
-        so we can untangle our interdependency mess.</li>
-	<li><a href="http://www.yourkit.com">YourKit</a> allows us to use their <a href="http://www.yourkit.com/overview/index.jsp">Java Profiler</a>.</li>
-	<li>Some of us use <a href="http://www.jetbrains.com/idea">IntelliJ IDEA</a> thanks to <a href="http://www.jetbrains.com/">JetBrains</a>.</li>
-  <li>Thank you to Boris at <a href="http://www.vectorportal.com/">Vector Portal</a> for granting us a license on the <a href="http://www.vectorportal.com/subcategory/205/KILLER-WHALE-FREE-VECTOR.eps/ifile/9136/detailtest.asp">image</a> on which our logo is based.</li>
-</ul>
-</p>
-</section>
-<section name="Sponsoring the Apache Software Foundation">
-<p>To contribute to the Apache Software Foundation, a good idea in our opinion, see the <a href="http://www.apache.org/foundation/sponsorship.html">ASF Sponsorship</a> page.
-</p>
-</section>
-</body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/xdoc/supportingprojects.xml
----------------------------------------------------------------------
diff --git a/src/main/site/xdoc/supportingprojects.xml b/src/main/site/xdoc/supportingprojects.xml
deleted file mode 100644
index f949a57..0000000
--- a/src/main/site/xdoc/supportingprojects.xml
+++ /dev/null
@@ -1,161 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-<document xmlns="http://maven.apache.org/XDOC/2.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
-  <properties>
-    <title>Supporting Projects</title>
-  </properties>
-
-<body>
-<section name="Supporting Projects">
-  <p>This page is a list of projects that are related to HBase. To
-    have your project added, file a documentation JIRA or email
-    <a href="mailto:dev@hbase.apache.org">hbase-dev</a> with the relevant
-    information. If you notice out-of-date information, use the same avenues to
-    report it.
-  </p>
-  <p><b>These items are user-submitted and the HBase team assumes no responsibility for their accuracy.</b></p>
-  <h3>Projects that add new features to HBase</h3>
-  <dl>
-   <dt><a href="https://github.com/XiaoMi/themis/">Themis</a></dt>
-   <dd>Themis provides cross-row/cross-table transaction on HBase based on
-    Google's Percolator.</dd>
-   <dt><a href="https://github.com/caskdata/tephra">Tephra</a></dt>
-   <dd>Cask Tephra provides globally consistent transactions on top of Apache
-    HBase.</dd>
-   <dt><a href="https://github.com/VCNC/haeinsa">Haeinsa</a></dt>
-   <dd>Haeinsa is linearly scalable multi-row, multi-table transaction library
-    for HBase.</dd>
-   <dt><a href="https://github.com/juwi/HBase-TAggregator">HBase TAggregator</a></dt>
-   <dd>An HBase coprocessor for timeseries-based aggregations.</dd>
-   <dt><a href="http://trafodion.incubator.apache.org/">Apache Trafodion</a></dt>
-   <dd>Apache Trafodion is a webscale SQL-on-Hadoop solution enabling
-    transactional or operational workloads on Hadoop.</dd>
-   <dt><a href="http://phoenix.apache.org/">Apache Phoenix</a></dt>
-   <dd>Apache Phoenix is a relational database layer over HBase delivered as a
-    client-embedded JDBC driver targeting low latency queries over HBase data.</dd>
-   <dt><a href="https://github.com/cloudera/hue/tree/master/apps/hbase">Hue HBase Browser</a></dt>
-   <dd>An Easy &amp; Powerful WebUI for HBase, distributed with <a href="https://www.gethue.com">Hue</a>.</dd>
-   <dt><a href="https://github.com/NGDATA/hbase-indexer/tree/master/hbase-sep">HBase SEP</a></dt>
-   <dd>the HBase Side Effect Processor, a system for asynchronously and reliably listening to HBase
-    mutation events, based on HBase replication.</dd>
-   <dt><a href="https://github.com/ngdata/hbase-indexer">Lily HBase Indexer</a></dt>
-   <dd>indexes HBase content to Solr by listening to the replication stream
-    (uses the HBase SEP).</dd>
-   <dt><a href="https://github.com/sonalgoyal/crux/">Crux</a></dt>
-   <dd>HBase Reporting and Analysis with support for simple and composite keys,
-    get and range scans, column-based filtering, and charting.</dd>
-   <dt><a href="https://github.com/yahoo/omid/">Omid</a></dt>
-   <dd>Lock-free transactional support on top of HBase providing Snapshot
-    Isolation.</dd>
-   <dt><a href="http://dev.tailsweep.com/projects/parhely">Parhely</a></dt>
-   <dd>ORM for HBase</dd>
-   <dt><a href="http://code.google.com/p/hbase-writer/">HBase-Writer</a></dt>
-   <dd>A Heritrix2 Processor for writing crawls to HBase.</dd>
-   <dt><a href="http://www.pigi-project.org/">Pigi Project</a></dt>
-   <dd>The Pigi Project is an ORM-like framework. It includes a configurable
-    index system and a simple object to HBase mapping framework (or indexing for
-    HBase if you like).  Designed for use by web applications.</dd>
-   <dt><a href="http://code.google.com/p/hbase-thrift/">hbase-thrift</a></dt>
-   <dd>hbase-thrift generates and installs Perl and Python Thrift bindings for
-    HBase.</dd>
-   <dt><a href="http://belowdeck.kissintelligentsystems.com/ohm">OHM</a></dt>
-   <dd>OHM is a weakly relational ORM for HBase which provides Object Mapping and
-    Column indexing. It has its own compiler capable of generating interface
-    code for multiple languages. Currently C# (via the Thrift API), with support
-    for Java currently in development. The compiler is easily extensible to add
-    support for other languages.</dd>
-   <dt><a href="http://datastore.googlecode.com">datastore</a></dt>
-   <dd>Aims to be an implementation of the
-    <a href="http://code.google.com/appengine/docs/python/datastore/">Google app-engine datastore</a>
-    in Java using HBase instead of bigtable.</dd>
-   <dt><a href="http://datanucleus.org">DataNucleus</a></dt>
-   <dd>DataNucleus is a Java JDO/JPA/REST implementation. It supports HBase and
-    many other datastores.</dd>
-   <dt><a href="http://github.com/impetus-opensource/Kundera">Kundera</a></dt>
-   <dd>Kundera is a JPA 2.0 based object-datastore mapping library for HBase,
-    Cassandra and MongoDB.</dd>
-   <dt><a href="http://github.com/zohmg/zohmg/tree/master">Zohmg</a></dt>
-   <dd>Zohmg is a time-series data store that uses HBase as its backing store.</dd>
-   <dt><a href="http://grails.org/plugin/gorm-hbase">Grails Support</a></dt>
-   <dd>Grails HBase plug-in.</dd>
-   <dt><a href="http://www.bigrecord.org">BigRecord</a></dt>
-   <dd>BigRecord is an active_record-based object mapping layer for Ruby on Rails.</dd>
-   <dt><a href="http://github.com/greglu/hbase-stargate">hbase-stargate</a></dt>
-   <dd>Ruby client for HBase Stargate.</dd>
-   <dt><a href="http://github.com/ghelmling/meetup.beeno">Meetup.Beeno</a></dt>
-   <dd>Meetup.Beeno is a simple HBase Java "beans" mapping framework based on
-    annotations. It includes a rudimentary high level query API that generates
-    the appropriate server-side filters.</dd>
-   <dt><a href="http://www.springsource.org/spring-data/hadoop">Spring Hadoop</a></dt>
-   <dd>The Spring Hadoop project provides support for writing Apache Hadoop
-    applications that benefit from the features of Spring, Spring Batch and
-    Spring Integration.</dd>
-   <dt><a href="https://jira.springsource.org/browse/SPR-5950">Spring Framework HBase Template</a></dt>
-   <dd>Spring Framework HBase Template provides HBase data access templates
-    similar to what is provided in Spring for JDBC, Hibernate, iBatis, etc.
-    If you find this useful, please vote for its inclusion in the Spring Framework.</dd>
-   <dt><a href="http://github.com/davidsantiago/clojure-hbase">Clojure-HBase</a></dt>
-   <dd>A library for convenient access to HBase from Clojure.</dd>
-   <dt><a href="http://www.lilyproject.org/lily/about/playground/hbaseindexes.html">HBase indexing library</a></dt>
-   <dd>A library for building and querying HBase-table-based indexes.</dd>
-   <dt><a href="http://github.com/akkumar/hbasene">HBasene</a></dt>
-   <dd>Lucene+HBase - using HBase as the backing store for the TF-IDF
-    representations needed by Lucene. Also contains a library for constructing
-    Lucene indices from an HBase schema.</dd>
-   <dt><a href="http://github.com/larsgeorge/jmxtoolkit">JMXToolkit</a></dt>
-   <dd>An HBase-tailored JMX toolkit enabling monitoring with Cacti and checking
-    with Nagios or similar.</dd>
-   <dt><a href="http://github.com/ykulbak/ihbase">IHBASE</a></dt>
-   <dd>IHBASE provides faster scans by indexing regions; each region has its own
-    index. The indexed columns are user-defined and indexes can be intersected or
-    joined in a single query.</dd>
-   <dt><a href="http://github.com/apurtell/hbase-ec2">HBASE EC2 scripts</a></dt>
-   <dd>This collection of bash scripts allows you to run HBase clusters on
-    Amazon's Elastic Compute Cloud (EC2) service with best practices baked in.</dd>
-   <dt><a href="http://github.com/apurtell/hbase-stargate">Stargate</a></dt>
-   <dd>Stargate provides an enhanced RESTful interface.</dd>
-   <dt><a href="http://github.com/hbase-trx/hbase-transactional-tableindexed">HBase-trx</a></dt>
-   <dd>HBase-trx provides Transactional (JTA) and indexed extensions of HBase.</dd>
-   <dt><a href="http://github.com/simplegeo/python-hbase-thrift">HBase Thrift Python client Debian package</a></dt>
-   <dd>Debian packages for the HBase Thrift Python client (see readme for
-    sources.list setup)</dd>
-   <dt><a href="http://github.com/amitrathore/capjure">capjure</a></dt>
-   <dd>capjure is a persistence helper for HBase. It is written in the Clojure
-    language, and supports persisting of native hash-maps.</dd>
-   <dt><a href="http://github.com/sematext/HBaseHUT">HBaseHUT</a></dt>
-   <dd>(High Update Throughput for HBase) It focuses on write performance during
-    record updates (by avoiding a Get on every Put used to update a record).</dd>
-   <dt><a href="http://github.com/sematext/HBaseWD">HBaseWD</a></dt>
-   <dd>HBase Writes Distributor spreads records over the cluster even when their
-    keys are sequential, while still allowing fast range scans over them.</dd>
-   <dt><a href="http://code.google.com/p/hbase-jdo/">HBase UI Tool &amp; Util</a></dt>
-   <dd>HBase UI Tool &amp; Util is an HBase UI client and simple util module.
-    It can handle HBase more easily, in a JDO-like fashion (not the persistence API).</dd>
-  </dl>
-  <h3>Example HBase Applications</h3>
-  <ul>
-    <li><a href="http://github.com/andreisavu/feedaggregator">HBase powered feed aggregator</a>
-    by Savu Andrei -- 200909</li>
-  </ul>
-</section>
-</body>
-</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/acid-semantics.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/acid-semantics.adoc b/src/site/asciidoc/acid-semantics.adoc
new file mode 100644
index 0000000..0b56aa8
--- /dev/null
+++ b/src/site/asciidoc/acid-semantics.adoc
@@ -0,0 +1,118 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+= Apache HBase (TM) ACID Properties
+
+== About this Document
+
+Apache HBase (TM) is not an ACID compliant database. However, it does guarantee certain specific properties.
+
+This specification enumerates the ACID properties of HBase.
+
+== Definitions
+
+For the sake of common vocabulary, we define the following terms:
+Atomicity::
+  An operation is atomic if it either completes entirely or not at all.
+
+Consistency::
+  All actions cause the table to transition from one valid state directly to another (e.g. a row will not disappear during an update).
+
+Isolation::
+  An operation is isolated if it appears to complete independently of any other concurrent transaction.
+
+Durability::
+  Any update that reports "successful" to the client will not be lost.
+
+Visibility::
+  An update is considered visible if any subsequent read will see the update as having been committed.
+
+
+The terms _must_ and _may_ are used as specified in link:https://www.ietf.org/rfc/rfc2119.txt[RFC 2119].
+
+In short, the word _must_ implies that, if some case exists where the statement is not true, it is a bug. The word _may_ implies that, even if the guarantee is provided in a current release, users should not rely on it.
+
+== APIs to Consider
+- Read APIs
+* get
+* scan
+- Write APIs
+* put
+* batch put
+* delete
+- Combination (read-modify-write) APIs
+* incrementColumnValue
+* checkAndPut
+
+== Guarantees Provided
+
+.Atomicity
+. All mutations are atomic within a row. Any put will either wholly succeed or wholly fail.footnote:[Puts will either wholly succeed or wholly fail, provided that they are actually sent to the RegionServer.  If the writebuffer is used, Puts will not be sent until the writebuffer is filled or it is explicitly flushed.]
+.. An operation that returns a _success_ code has completely succeeded.
+.. An operation that returns a _failure_ code has completely failed.
+.. An operation that times out may have succeeded or may have failed; however, it will not have partially succeeded or failed.
+. This is true even if the mutation crosses multiple column families within a row.
+. APIs that mutate several rows will _not_ be atomic across the multiple rows. For example, a multiput that operates on rows 'a','b', and 'c' may return having mutated some but not all of the rows. In such cases, these APIs will return a list of success codes, each of which may indicate success, failure, or timeout, as described above.
+. The checkAndPut API happens atomically like the typical _compareAndSet (CAS)_ operation found in many hardware architectures.
+. Mutations are seen to happen in a well-defined order for each row, with no interleaving. For example, if one writer issues the mutation `a=1,b=1,c=1` and another writer issues the mutation `a=2,b=2,c=2`, the row must either be `a=1,b=1,c=1` or `a=2,b=2,c=2` and must *not* be something like `a=1,b=2,c=1`. +
+NOTE: This is not true _across rows_ for multirow batch mutations.
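+
+To make the single-row guarantee concrete, below is a minimal sketch using the Java client API. It is illustrative only: the table, family, and qualifier names are hypothetical, and it assumes the `Table#checkAndPut` overload available (though deprecated) in the 2.x client.
+
+[source,java]
+----
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class CheckAndPutSketch {
+  public static void main(String[] args) throws Exception {
+    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
+         Table table = conn.getTable(TableName.valueOf("t1"))) { // hypothetical table
+      byte[] row = Bytes.toBytes("row1");
+      byte[] cf = Bytes.toBytes("cf");
+      Put put = new Put(row);
+      put.addColumn(cf, Bytes.toBytes("state"), Bytes.toBytes("locked"));
+      // The compare and the write happen as one atomic action on the row:
+      // set cf:state=locked only if cf:state currently equals "free". No
+      // concurrent writer can interleave between the check and the put.
+      boolean applied = table.checkAndPut(row, cf, Bytes.toBytes("state"),
+          Bytes.toBytes("free"), put);
+      System.out.println(applied ? "CAS applied" : "CAS lost the race");
+    }
+  }
+}
+----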
+
+== Consistency and Isolation
+. All rows returned via any access API will consist of a complete row that existed at some point in the table's history.
+. This is true across column families - i.e. a get of a full row that occurs concurrently with some mutations 1,2,3,4,5 will return a complete row that existed at some point in time between mutation i and i+1, for some i between 1 and 5.
+. The state of a row will only move forward through the history of edits to it.
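+
+As an illustration of the cross-family guarantee, here is a sketch of a whole-row read (assuming the `Connection`/`Table` setup from the sketch above; all names are hypothetical):
+
+[source,java]
+----
+// A whole-row Get returns one complete row state that existed at a single
+// point in the row's history -- never a mix of versions across families.
+Get get = new Get(Bytes.toBytes("row1"));
+Result result = table.get(get);
+byte[] a = result.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("a"));
+byte[] b = result.getValue(Bytes.toBytes("cf2"), Bytes.toBytes("b"));
+// 'a' and 'b' come from the same consistent row version, even if another
+// client was concurrently mutating both column families.
+----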
+
+== Consistency of Scans
+A scan is *not* a consistent view of a table. Scans do *not* exhibit _snapshot isolation_.
+
+Rather, scans have the following properties:
+
+. Any row returned by the scan will be a consistent view (i.e. that version of the complete row existed at some point in time)footnoteref:[consistency,A consistent view is not guaranteed for intra-row scanning -- i.e. fetching a portion of a row in one RPC then going back to fetch another portion of the row in a subsequent RPC. Intra-row scanning happens when you set a limit on how many values to return per Scan#next (See link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)[Scan#setBatch(int)]).]
+. A scan will always reflect a view of the data _at least as new as_ the beginning of the scan. This satisfies the visibility guarantees enumerated below.
+.. For example, if client A writes data X and then communicates via a side channel to client B, any scans started by client B will contain data at least as new as X.
+.. A scan _must_ reflect all mutations committed prior to the construction of the scanner, and _may_ reflect some mutations committed subsequent to the construction of the scanner.
+.. Scans must include _all_ data written prior to the scan (except in the case where data is subsequently mutated, in which case it _may_ reflect the mutation)
+
+Those familiar with relational databases will recognize this isolation level as "read committed".
+
+NOTE: The guarantees listed above regarding scanner consistency are referring to "transaction commit time", not the "timestamp" field of each cell. That is to say, a scanner started at time _t_ may see edits with a timestamp value greater than _t_, if those edits were committed with a "forward dated" timestamp before the scanner was constructed.
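+
+A sketch of the scanner guarantee (again assuming the setup from the first sketch; names are hypothetical):
+
+[source,java]
+----
+// A scan reflects at least everything committed before the scanner was
+// constructed ("read committed"); it is not a frozen snapshot.
+Put before = new Put(Bytes.toBytes("row-a"));
+before.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("x"));
+table.put(before); // committed before the scanner exists
+
+try (ResultScanner scanner = table.getScanner(new Scan())) {
+  for (Result r : scanner) {
+    // Guaranteed to include row-a. Rows committed by other clients after
+    // this scanner was constructed may or may not appear.
+    System.out.println(Bytes.toString(r.getRow()));
+  }
+}
+----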
+
+== Visibility
+
+. When a client receives a "success" response for any mutation, that mutation is immediately visible to both that client and any client with whom it later communicates through side channels.footnoteref:[consistency]
+. A row must never exhibit so-called "time-travel" properties. That is to say, if a series of mutations moves a row sequentially through a series of states, any sequence of concurrent reads will return a subsequence of those states. +
+For example, if a row's cells are mutated using the `incrementColumnValue` API, a client must never see the value of any cell decrease. +
+This is true regardless of which read API is used to read back the mutation.
+. Any version of a cell that has been returned to a read operation is guaranteed to be durably stored.
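+
+The "no time-travel" property can be seen with the increment API (a sketch, under the same assumed setup):
+
+[source,java]
+----
+// A counter mutated only by increments must never appear to go backwards,
+// either to this client or to any reader it later talks to via a side channel.
+byte[] row = Bytes.toBytes("counter-row");
+byte[] cf = Bytes.toBytes("cf");
+byte[] q = Bytes.toBytes("hits");
+long v1 = table.incrementColumnValue(row, cf, q, 1);
+long v2 = table.incrementColumnValue(row, cf, q, 1);
+assert v2 > v1; // values move forward only
+// Any concurrent Get or Scan of cf:hits observes a subsequence of the
+// increasing states -- never a value smaller than one already seen.
+----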
+
+== Durability
+. All visible data is also durable data. That is to say, a read will never return data that has not been made durable on disk.footnoteref:[durability,In the context of Apache HBase, _durably on disk_ implies an `hflush()` call on the transaction log. This does not actually imply an `fsync()` to magnetic media, but rather just that the data has been written to the OS cache on all replicas of the log. In the case of a full datacenter power loss, it is possible that the edits are not truly durable.]
+. Any operation that returns a "success" code (e.g. does not throw an exception) will be made durable.footnoteref:[durability]
+. Any operation that returns a "failure" code will not be made durable (subject to the Atomicity guarantees above).
+. All reasonable failure scenarios will not affect any of the guarantees of this document.
+
+== Tunability
+
+All of the above guarantees must be possible within Apache HBase. For users who would like to trade off some guarantees for performance, HBase may offer several tuning options. For example:
+
+* Visibility may be tuned on a per-read basis to allow stale reads or time travel.
+* Durability may be tuned to only flush data to disk on a periodic basis.
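+
+For instance, the 2.x client exposes both knobs as per-operation settings. A hedged sketch (assumes the setup from the first sketch plus imports of `Durability` and `Consistency` from `org.apache.hadoop.hbase.client`):
+
+[source,java]
+----
+// Trade some durability for write latency: the WAL edit is written
+// asynchronously instead of being hflush()'d before the call returns.
+Put p = new Put(Bytes.toBytes("row1"));
+p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+p.setDurability(Durability.ASYNC_WAL);
+table.put(p);
+
+// Allow possibly-stale reads served by secondary region replicas.
+Get g = new Get(Bytes.toBytes("row1"));
+g.setConsistency(Consistency.TIMELINE);
+Result r = table.get(g);
+if (r.isStale()) {
+  // served by a replica; may lag the primary
+}
+----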
+
+== More Information
+
+For more information, see the link:book.html#client[client architecture] and  link:book.html#datamodel[data model] sections in the Apache HBase Reference Guide.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/asciidoc/bulk-loads.adoc
----------------------------------------------------------------------
diff --git a/src/site/asciidoc/bulk-loads.adoc b/src/site/asciidoc/bulk-loads.adoc
new file mode 100644
index 0000000..8fc9a1a
--- /dev/null
+++ b/src/site/asciidoc/bulk-loads.adoc
@@ -0,0 +1,22 @@
+////
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+////
+
+= Bulk Loads in Apache HBase (TM)
+
+This page has been retired.  The contents have been moved to the link:book.html#arch.bulk.load[Bulk Loading] section in the Reference Guide.


[02/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/cygwin.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/cygwin.xml b/src/site/xdoc/cygwin.xml
new file mode 100644
index 0000000..406c0a9
--- /dev/null
+++ b/src/site/xdoc/cygwin.xml
@@ -0,0 +1,245 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Installing Apache HBase (TM) on Windows using Cygwin</title>
+  </properties>
+
+<body>
+<section name="Introduction">
+<p><a title="HBase project" href="http://hbase.apache.org" target="_blank">Apache HBase (TM)</a> is a distributed, column-oriented store, modeled after Google's <a title="Google's BigTable" href="http://research.google.com/archive/bigtable.html" target="_blank">BigTable</a>. Apache HBase is built on top of <a title="Hadoop project" href="http://hadoop.apache.org">Hadoop</a> for its <a title="Hadoop MapReduce project" href="http://hadoop.apache.org/mapreduce" target="_blank">MapReduce </a>and <a title="Hadoop DFS project" href="http://hadoop.apache.org/hdfs">distributed file system</a> implementation. All these projects are open-source and part of the <a title="The Apache Software Foundation" href="http://www.apache.org/" target="_blank">Apache Software Foundation</a>.</p>
+
+<p style="text-align: justify; ">As being distributed, large scale platforms, the Hadoop and HBase projects mainly focus on <em><strong>*nix</strong></em><strong> environments</strong> for production installations. However, being developed in <strong>Java</strong>, both projects are fully <strong>portable</strong> across platforms and, hence, also to the <strong>Windows operating system</strong>. For ease of development the projects rely on <a title="Cygwin site" href="http://www.cygwin.com/" target="_blank">Cygwin</a> to have a *nix-like environment on Windows to run the shell scripts.</p>
+</section>
+<section name="Purpose">
+<p style="text-align: justify; ">This document explains the <strong>intricacies of running Apache HBase on Windows using Cygwin</strong> as an all-in-one single-node installation for testing and development. The HBase <a title="HBase Overview" href="http://hbase.apache.org/apidocs/overview-summary.html#overview_description" target="_blank">Overview</a> and <a title="HBase QuickStart" href="http://hbase.apache.org/book/quickstart.html" target="_blank">QuickStart</a> guides on the other hand go a long way in explaning how to setup <a title="HBase project" href="http://hadoop.apache.org/hbase" target="_blank">HBase</a> in more complex deployment scenario's.</p>
+</section>
+
+<section name="Installation">
+<p style="text-align: justify; ">For running Apache HBase on Windows, 3 technologies are required: <strong>Java, Cygwin and SSH</strong>. The following paragraphs detail the installation of each of the aforementioned technologies.</p>
+<section name="Java">
+<p style="text-align: justify; ">HBase depends on the <a title="Java Platform, Standard Edition, 6 Release" href="http://java.sun.com/javase/6/" target="_blank">Java Platform, Standard Edition, 6 Release</a>. So the target system has to be provided with at least the Java Runtime Environment (JRE); however if the system will also be used for development, the Jave Development Kit (JDK) is preferred. You can download the latest versions for both from <a title="Java SE Downloads" href="http://java.sun.com/javase/downloads/index.jsp" target="_blank">Sun's download page</a>. Installation is a simple GUI wizard that guides you through the process.</p>
+</section>
+<section name="Cygwin">
+<p style="text-align: justify; ">Cygwin is probably the oddest technology in this solution stack. It provides a dynamic link library that emulates most of a *nix environment on Windows. On top of that a whole bunch of the most common *nix tools are supplied. Combined, the DLL with the tools form a very *nix-alike environment on Windows.</p>
+
+<p style="text-align: justify; ">For installation, Cygwin provides the <a title="Cygwin Setup Utility" href="http://cygwin.com/setup.exe" target="_blank"><strong><code>setup.exe</code> utility</strong></a> that tracks the versions of all installed components on the target system and provides the mechanism for <strong>installing</strong> or <strong>updating </strong>everything from the mirror sites of Cygwin.</p>
+
+<p style="text-align: justify; ">To support installation, the <code>setup.exe</code> utility uses 2 directories on the target system. The <strong>Root</strong> directory for Cygwin (defaults to <code>C:\cygwin)</code> which will become <code>/</code> within the eventual Cygwin installation; and the <strong>Local Package </strong>directory (e.g. <code>C:\cygsetup</code> that is the cache where <code>setup.exe</code> stores the packages before they are installed. The cache must not be the same folder as the Cygwin root.</p>
+
+<p style="text-align: justify; ">Perform following steps to install Cygwin, which are elaboratly detailed in the <a title="Setting Up Cygwin" href="http://cygwin.com/cygwin-ug-net/setup-net.html" target="_self">2nd chapter</a> of the <a title="Cygwin User's Guide" href="http://cygwin.com/cygwin-ug-net/cygwin-ug-net.html" target="_blank">Cygwin User's Guide</a>:</p>
+
+<ol style="text-align: justify; ">
+	<li>Make sure you have <code>Administrator</code> privileges on the target system.</li>
+	<li>Choose and create your <strong>Root</strong> and <strong>Local Package</strong> directories. A good suggestion is to use <code>C:\cygwin\root</code> and <code>C:\cygwin\setup</code> folders.</li>
+	<li>Download the <code>setup.exe</code> utility and save it to the <strong>Local Package</strong> directory.</li>
+	<li>Run the <code>setup.exe</code> utility:
+<ol>
+	<li>Choose the <code>Install from Internet</code> option,</li>
+	<li>choose your <strong>Root</strong> and <strong>Local Package</strong> folders,</li>
+	<li>and select an appropriate mirror.</li>
+	<li>Don't select any additional packages yet, as we only want to install Cygwin for now.</li>
+	<li>Wait for the download and installation to complete,</li>
+	<li>then finish the installation.</li>
+</ol>
+</li>
+	<li>Optionally, you can now also add a shortcut to your Start menu pointing to the <code>setup.exe</code> utility in the <strong>Local Package</strong> folder.</li>
+	<li>Add a <code>CYGWIN_HOME</code> system-wide environment variable that points to your <strong>Root</strong> directory.</li>
+	<li>Add <code>%CYGWIN_HOME%\bin</code> to the end of your <code>PATH</code> environment variable (a command-line sketch for both variables follows this list).</li>
+	<li>Reboot the system after making changes to the environment variables; otherwise the OS will not be able to find the Cygwin utilities.</li>
+	<li>Test your installation by running your freshly created shortcuts or the <code>Cygwin.bat</code> command in the <strong>Root</strong> folder. You should end up in a terminal window that is running a <a title="Bash Reference Manual" href="http://www.gnu.org/software/bash/manual/bashref.html" target="_blank">Bash shell</a>. Test the shell by issuing the following commands:
+<ol>
+	<li><code>cd /</code> should take you to the <strong>Root</strong> directory in Cygwin;</li>
+	<li>the <code>ls</code> command should list all files and folders in the current directory;</li>
+	<li>use the <code>exit</code> command to end the terminal.</li>
+</ol>
+</li>
+	<li>When needed, to <strong>uninstall</strong> Cygwin you can simply delete the <strong>Root</strong> and <strong>Local Package</strong> directories, and the <strong>shortcuts</strong> that were created during installation.</li>
+</ol>
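+<p style="text-align: justify; ">If you prefer the command line over the System Properties dialog, the two environment variables above can also be set from a Windows command prompt. This is a minimal sketch assuming the suggested <code>C:\cygwin\root</code> location; note that <code>setx</code> only affects newly opened command prompts and may truncate very long <code>PATH</code> values, so verify the result afterwards:</p>
+<pre>setx CYGWIN_HOME "C:\cygwin\root"
+setx PATH "%PATH%;C:\cygwin\root\bin"</pre>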
+</section>
+<section name="SSH">
+<p style="text-align: justify; ">HBase (and Hadoop) rely on <a title="Secure Shell" href="http://nl.wikipedia.org/wiki/Secure_Shell" target="_blank"><strong>SSH</strong></a> for interprocess/-node <strong>communication</strong> and launching<strong> remote commands</strong>. SSH will be provisioned on the target system via Cygwin, which supports running Cygwin programs as <strong>Windows services</strong>!</p>
+
+<ol style="text-align: justify; ">
+	<li>Rerun the <code><strong>setup.exe</strong></code><strong> utility</strong>.</li>
+	<li>Leave all parameters as is, skipping through the wizard using the <code>Next</code> button until the <code>Select Packages</code> panel is shown.</li>
+	<li>Maximize the window and click the <code>View</code> button to toggle to the list view, which is ordered alphabetically on <code>Package</code>, making it easier to find the packages we'll need.</li>
+	<li>Select the following packages by clicking the status word (normally <code>Skip</code>) so each is marked for installation. Use the <code>Next</code> button to download and install the packages.
+<ol>
+	<li>OpenSSH</li>
+	<li>tcp_wrappers</li>
+	<li>diffutils</li>
+	<li>zlib</li>
+</ol>
+</li>
+	<li>Wait for the installation to complete and finish the wizard; a quick verification is sketched below.</li>
+</ol>
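+<p style="text-align: justify; ">To verify that the packages landed correctly, a quick check from a freshly opened Cygwin terminal could look like the sketch below; <code>cygcheck -c</code> reports the installation status of a named package:</p>
+<pre>cygcheck -c openssh
+ssh -V</pre>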
+</section>
+<section name="HBase">
+<p style="text-align: justify; ">Download the <strong>latest release </strong>of Apache HBase from the <a title="HBase Releases" href="http://www.apache.org/dyn/closer.cgi/hbase/" target="_blank">website</a>. As the Apache HBase distributable is just a zipped archive, installation is as simple as unpacking the archive so it ends up in its final <strong>installation</strong> directory. Notice that HBase has to be installed in Cygwin and a good directory suggestion is to use <code>/usr/local/</code> (or [<code><strong>Root</strong> directory]\usr\local</code> in Windows slang). You should end up with a <code>/usr/local/hbase-<em>&lt;version&gt;</em></code> installation in Cygwin.</p>
+
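+<p style="text-align: justify; ">As an illustration, unpacking the archive from a Cygwin terminal could look like the following sketch, assuming the <code>.tar.gz</code> form of the release archive and with the download location and <em>&lt;version&gt;</em> as placeholders for your own:</p>
+<pre>cd /usr/local
+tar -xzf /cygdrive/c/Users/<em>&lt;you&gt;</em>/Downloads/hbase-<em>&lt;version&gt;</em>-bin.tar.gz</pre>
+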
+This finishes the installation. We move on to the configuration.
+</section>
+</section>
+<section name="Configuration">
+<p style="text-align: justify; ">There are 3 parts left to configure: <strong>Java, SSH and HBase</strong> itself. Following paragraphs explain eacht topic in detail.</p>
+<section name="Java">
+<p style="text-align: justify; ">One important thing to remember in shell scripting in general (i.e. *nix and Windows) is that managing, manipulating and assembling path names that contains spaces can be very hard, due to the need to escape and quote those characters and strings. So we try to stay away from spaces in path names. *nix environments can help us out here very easily by using <strong>symbolic links</strong>.</p>
+
+<ol style="text-align: justify; ">
+	<li style="text-align: justify; ">Create a link in <code>/usr/local</code> to the Java home directory by using the following command and substituting the name of your chosen Java environment:
+<pre>ln -s /cygdrive/c/Program\ Files/Java/<em>&lt;jre name&gt;</em> /usr/local/<em>&lt;jre name&gt;</em></pre>
+</li>
+	<li>Test your Java installation by changing directories to your Java folder <code>cd /usr/local/<em>&lt;jre name&gt;</em></code> and issuing the command <code>./bin/java -version</code>. This should output the version of the chosen JRE.</li>
+</ol>
+</section>
+<section name="SSH">
+<p style="text-align: justify; ">Configuring <strong>SSH </strong>is quite elaborate, but primarily a question of launching it by default as a<strong> Windows service</strong>.</p>
+
+<ol style="text-align: justify; ">
+	<li style="text-align: justify; ">On Windows Vista and above make sure you run the Cygwin shell with <strong>elevated privileges</strong>, by right-clicking on the shortcut and using <code>Run as Administrator</code>.</li>
+	<li style="text-align: justify; ">First of all, we have to make sure the <strong>rights on some crucial files</strong> are correct. Use the commands underneath. You can verify all rights by using the <code>ls -l</code> command on the different files. Also, notice the auto-completion feature in the shell using <code>&lt;TAB&gt;</code> is extremely handy in these situations.
+<ol>
+	<li><code>chmod +r /etc/passwd</code> to make the passwords file readable for all</li>
+	<li><code>chmod u+w /etc/passwd</code> to make the passwords file writable for the owner</li>
+	<li><code>chmod +r /etc/group</code> to make the groups file readable for all</li>
+	<li><code>chmod u+w /etc/group</code> to make the groups file writable for the owner</li>
+	<li><code>chmod 755 /var</code> to make the var folder writable to owner and readable and executable to all</li>
+</ol>
+</li>
+	<li>Edit the <strong>/etc/hosts.allow</strong> file using your favorite editor (why not VI in the shell!) and make sure the following two lines are in there before the <code>PARANOID</code> line:
+<ol>
+	<li><code>ALL : localhost 127.0.0.1/32 : allow</code></li>
+	<li><code>ALL : [::1]/128 : allow</code></li>
+</ol>
+</li>
+	<li>Next we have to <strong>configure SSH</strong> by using the script <code>ssh-host-config</code>:
+<ol>
+	<li>If this script asks to overwrite an existing <code>/etc/ssh_config</code>, answer <code>yes</code>.</li>
+	<li>If this script asks to overwrite an existing <code>/etc/sshd_config</code>, answer <code>yes</code>.</li>
+	<li>If this script asks to use privilege separation, answer <code>yes</code>.</li>
+	<li>If this script asks to install <code>sshd</code> as a service, answer <code>yes</code>. Make sure you started your shell as Administrator!</li>
+	<li>If this script asks for the CYGWIN value, just press <code>&lt;enter&gt;</code> as the default is <code>ntsec</code>.</li>
+	<li>If this script asks to create the <code>sshd</code> account, answer <code>yes</code>.</li>
+	<li>If this script asks to use a different user name as service account, answer <code>no</code> as the default will suffice.</li>
+	<li>If this script asks to create the <code>cyg_server</code> account, answer <code>yes</code>. Enter a password for the account.</li>
+</ol>
+</li>
+	<li><strong>Start the SSH service</strong> using <code>net start sshd</code> or <code>cygrunsrv --start sshd</code>. Notice that <code>cygrunsrv</code> is the utility that makes the process run as a Windows service. Confirm that you see a message stating that <code>the CYGWIN sshd service was started successfully.</code></li>
+	<li>Harmonize Windows and Cygwin <strong>user accounts</strong> by using the commands:
+<ol>
+	<li><code>mkpasswd -cl &gt; /etc/passwd</code></li>
+	<li><code>mkgroup --local &gt; /etc/group</code></li>
+</ol>
+</li>
+	<li><strong>Test </strong>the installation of SSH:
+<ol>
+	<li>Open a new Cygwin terminal</li>
+	<li>Use the command <code>whoami</code> to verify your userID</li>
+	<li>Issue an <code>ssh localhost</code> to connect to the system itself
+<ol>
+	<li>Answer <code>yes</code> when presented with the server's fingerprint</li>
+	<li>Issue your password when prompted</li>
+	<li>Test a few commands in the remote session</li>
+	<li>The <code>exit</code> command should take you back to your first shell in Cygwin</li>
+</ol>
+</li>
+	<li>Issuing <code>exit</code> once more should terminate the Cygwin shell.</li>
+</ol>
+</li>
+</ol>
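+<p style="text-align: justify; ">Optionally, if you would rather not type your password every time the HBase scripts ssh into the machine, you can set up key-based authentication. A minimal sketch, assuming the default Cygwin home directory and an empty passphrase:</p>
+<pre>ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
+cat ~/.ssh/id_rsa.pub &gt;&gt; ~/.ssh/authorized_keys
+chmod 600 ~/.ssh/authorized_keys
+ssh localhost</pre>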
+</section>
+<section name="HBase">
+If all previous configurations are working properly, we just need some tinkering with the <strong>HBase config</strong> files to make everything resolve properly on Windows/Cygwin. All files and paths referenced here start from the HBase <code>[<strong>installation</strong> directory]</code> as working directory.
+<ol>
+	<li>HBase uses the <code>./conf/<strong>hbase-env.sh</strong></code> file to configure its dependencies on the runtime environment. Copy and uncomment the following lines just underneath their originals, and change them to fit your environment. They should read something like:
+<ol>
+	<li><code>export JAVA_HOME=/usr/local/<em>&lt;jre name&gt;</em></code></li>
+	<li><code>export HBASE_IDENT_STRING=$HOSTNAME</code> as this most likely does not include spaces.</li>
+</ol>
+</li>
+	<li>HBase uses the <code>./conf/<strong>hbase-default.xml</strong></code> file for configuration. Some properties do not resolve to existing directories because the JVM runs on Windows. This is the major issue to keep in mind when working with Cygwin: within the shell all paths are *nix-alike, hence relative to the root <code>/</code>. However, every parameter that is to be consumed by the Windows processes themselves needs to be a Windows setting, hence <code>C:\</code>-alike. Change the following properties in the configuration file, adjusting paths where necessary to conform with your own installation (a full sketch of these edits follows this list):
+<ol>
+	<li><code>hbase.rootdir</code> must read e.g. <code>file:///C:/cygwin/root/tmp/hbase/data</code></li>
+	<li><code>hbase.tmp.dir</code> must read <code>C:/cygwin/root/tmp/hbase/tmp</code></li>
+	<li><code>hbase.zookeeper.quorum</code> must read <code>127.0.0.1</code> because for some reason <code>localhost</code> doesn't seem to resolve properly on Cygwin.</li>
+</ol>
+</li>
+	<li>Make sure the configured <code>hbase.rootdir</code> and <code>hbase.tmp.dir</code> <strong>directories exist</strong> and have the proper <strong>rights</strong> set up, e.g. by issuing a <code>chmod 777</code> on them.</li>
+</ol>
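+<p style="text-align: justify; ">As a concrete illustration of the edits above, the relevant fragment of <code>./conf/hbase-env.sh</code> could read as follows (a sketch only; the JRE name is a placeholder for your own installation):</p>
+<pre>export JAVA_HOME=/usr/local/<em>&lt;jre name&gt;</em>
+export HBASE_IDENT_STRING=$HOSTNAME</pre>
+<p style="text-align: justify; ">And the three property overrides, in the XML format the configuration file uses (the values assume the suggested <code>C:\cygwin\root</code> installation; adjust the paths to your own):</p>
+<pre>&lt;property&gt;
+  &lt;name&gt;hbase.rootdir&lt;/name&gt;
+  &lt;value&gt;file:///C:/cygwin/root/tmp/hbase/data&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.tmp.dir&lt;/name&gt;
+  &lt;value&gt;C:/cygwin/root/tmp/hbase/tmp&lt;/value&gt;
+&lt;/property&gt;
+&lt;property&gt;
+  &lt;name&gt;hbase.zookeeper.quorum&lt;/name&gt;
+  &lt;value&gt;127.0.0.1&lt;/value&gt;
+&lt;/property&gt;</pre>
+<p style="text-align: justify; ">From the Cygwin shell, <code>mkdir -p /tmp/hbase/data /tmp/hbase/tmp</code> followed by <code>chmod 777 /tmp/hbase/data /tmp/hbase/tmp</code> then creates those directories with the rights described above.</p>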
+</section>
+</section>
+<section name="Testing">
+<p>
+This should conclude the installation and configuration of Apache HBase on Windows using Cygwin. So it's time <strong>to test it</strong>.
+<ol>
+	<li>Start a Cygwin<strong> terminal</strong>, if you haven't already.</li>
+	<li>Change directory to the HBase <strong>installation</strong> using <code>cd /usr/local/hbase-<em>&lt;version&gt;</em></code>, preferably using auto-completion.</li>
+	<li><strong>Start HBase</strong> using the command <code>./bin/start-hbase.sh</code>
+<ol>
+	<li>When prompted to accept the SSH fingerprint, answer <code>yes</code>.</li>
+	<li>When prompted, provide your password (possibly multiple times).</li>
+	<li>When the command completes, the HBase server should have started.</li>
+	<li>However, to be absolutely certain, check the logs in the <code>./logs</code> directory for any exceptions.</li>
+</ol>
+</li>
+	<li>Next we <strong>start the HBase shell</strong> using the command <code>./bin/hbase shell</code>.</li>
+	<li>We run some simple <strong>test commands</strong>:
+<ol>
+	<li>Create a simple table using the command <code>create 'test', 'data'</code>.</li>
+	<li>Verify the table exists using the command <code>list</code>.</li>
+	<li>Insert data into the table using e.g.
+<pre>put 'test', 'row1', 'data:1', 'value1'
+put 'test', 'row2', 'data:2', 'value2'
+put 'test', 'row3', 'data:3', 'value3'</pre>
+</li>
+	<li>List all rows in the table using the command <code>scan 'test'</code>, which should list all the rows previously inserted. Notice how 3 new columns were added without changing the schema! A sample of the output is sketched after these steps.</li>
+	<li>Finally we get rid of the table by issuing <code>disable 'test'</code> followed by <code>drop 'test'</code>, and verify with <code>list</code>, which should give an empty listing.</li>
+</ol>
+</li>
+	<li><strong>Leave the shell</strong> by issuing <code>exit</code>.</li>
+	<li>To <strong>stop the HBase server</strong> issue the <code>./bin/stop-hbase.sh</code> command and wait for it to complete! Killing the process might corrupt your data on disk.</li>
+	<li>In case of <strong>problems</strong>,
+<ol>
+	<li>Verify the HBase logs in the <code>./logs</code> directory.</li>
+	<li>Try to fix the problem.</li>
+	<li>Get help on the forums or IRC (<code>#hbase@freenode.net</code>). People are very active and keen to help out!</li>
+	<li>Stop, restart and retest the server.</li>
+</ol>
+</li>
+</ol>
+</p>
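+<p style="text-align: justify; ">For reference, the <code>scan 'test'</code> step above produces output shaped roughly like the sketch below; the timestamps and exact formatting will differ on your system and HBase version:</p>
+<pre>ROW                          COLUMN+CELL
+ row1                        column=data:1, timestamp=1234567890123, value=value1
+ row2                        column=data:2, timestamp=1234567890456, value=value2
+ row3                        column=data:3, timestamp=1234567890789, value=value3
+3 row(s) in 0.0500 seconds</pre>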
+</section>
+
+<section name="Conclusion">
+<p>
+Now that your <strong>HBase</strong> server is running, <strong>start coding</strong> and build that next killer app on this distinctive, but scalable, datastore!
+</p>
+</section>
+</body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/export_control.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/export_control.xml b/src/site/xdoc/export_control.xml
new file mode 100644
index 0000000..e57660a
--- /dev/null
+++ b/src/site/xdoc/export_control.xml
@@ -0,0 +1,59 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+      Export Control
+    </title>
+  </properties>
+  <body>
+  <section name="Export Control">
+<p>
+This distribution uses or includes cryptographic software. The country in
+which you currently reside may have restrictions on the import, possession,
+use, and/or re-export to another country, of encryption software. BEFORE
+using any encryption software, please check your country's laws, regulations
+and policies concerning the import, possession, or use, and re-export of
+encryption software, to see if this is permitted. See the
+<a href="http://www.wassenaar.org/">Wassenaar Arrangement</a> for more
+information.</p>
+<p>
+The U.S. Government Department of Commerce, Bureau of Industry and Security
+(BIS), has classified this software as Export Commodity Control Number (ECCN)
+5D002.C.1, which includes information security software using or performing
+cryptographic functions with asymmetric algorithms. The form and manner of this
+Apache Software Foundation distribution makes it eligible for export under the
+License Exception ENC Technology Software Unrestricted (TSU) exception (see the
+BIS Export Administration Regulations, Section 740.13) for both object code and
+source code.</p>
+<p>
+Apache HBase uses the built-in java cryptography libraries. See Oracle's
+information regarding
+<a href="http://www.oracle.com/us/products/export/export-regulations-345813.html">Java cryptographic export regulations</a>
+for more details.</p>
+  </section>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/index.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/index.xml b/src/site/xdoc/index.xml
new file mode 100644
index 0000000..1848d40
--- /dev/null
+++ b/src/site/xdoc/index.xml
@@ -0,0 +1,109 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Apache HBase&#8482; Home</title>
+    <link rel="shortcut icon" href="/images/favicon.ico" />
+  </properties>
+
+  <body>
+    <section name="Welcome to Apache HBase&#8482;">
+        <p><a href="http://www.apache.org/">Apache</a> HBase&#8482; is the <a href="http://hadoop.apache.org/">Hadoop</a> database, a distributed, scalable, big data store.
+    </p>
+    <p>Use Apache HBase&#8482; when you need random, realtime read/write access to your Big Data.
+    This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
+Apache HBase is an open-source, distributed, versioned, non-relational database modeled after Google's <a href="http://research.google.com/archive/bigtable.html">Bigtable: A Distributed Storage System for Structured Data</a> by Chang et al.
+ Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
+    </p>
+  </section>
+    <section name="Download">
+    <p>
+    Click <b><a href="http://www.apache.org/dyn/closer.cgi/hbase/">here</a></b> to download Apache HBase&#8482;.
+    </p>
+    </section>
+    <section name="Features">
+    <p>
+<ul>
+    <li>Linear and modular scalability.</li>
+    <li>Strictly consistent reads and writes.</li>
+    <li>Automatic and configurable sharding of tables.</li>
+    <li>Automatic failover support between RegionServers.</li>
+    <li>Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables.</li>
+    <li>Easy to use Java API for client access.</li>
+    <li>Block cache and Bloom Filters for real-time queries.</li>
+    <li>Query predicate push down via server side Filters.</li>
+    <li>Thrift gateway and a REST-ful Web service that supports XML, Protobuf, and binary data encoding options.</li>
+    <li>Extensible jruby-based (JIRB) shell.</li>
+    <li>Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX.</li>
+</ul>
+</p>
+</section>
+     <section name="More Info">
+   <p>See the <a href="http://hbase.apache.org/book.html#arch.overview">Architecture Overview</a>, the <a href="http://hbase.apache.org/book.html#faq">Apache HBase Reference Guide FAQ</a>,
+    and the other documentation links.
+   </p>
+   <dl>
+     <dt>Export Control</dt>
+   <dd><p>The HBase distribution includes cryptographic software. See the export control notice <a href="export_control.html">here</a>.
+   </p></dd>
+     <dt>Code Of Conduct</dt>
+   <dd><p>We expect participants in discussions on the HBase project mailing lists, Slack and IRC channels, and JIRA issues to abide by the Apache Software Foundation's <a href="http://apache.org/foundation/policies/conduct.html">Code of Conduct</a>. More information can be found <a href="coc.html">here</a>.
+   </p></dd>
+ </dl>
+</section>
+
+     <section name="News">
+       <p>August 4th, 2017 <a href="https://easychair.org/cfp/HBaseConAsia2017">HBaseCon Asia 2017</a> @ the Huawei Campus in Shenzhen, China</p>
+       <p>June 12th, 2017 <a href="https://easychair.org/cfp/hbasecon2017">HBaseCon2017</a> at the Crittenden Buildings on the Google Mountain View Campus</p>
+       <p>April 25th, 2017 <a href="https://www.meetup.com/hbaseusergroup/events/239291716/">Meetup</a> @ Visa in Palo Alto</p>
+        <p>December 8th, 2016 <a href="https://www.meetup.com/hbaseusergroup/events/235542241/">Meetup@Splice</a> in San Francisco</p>
+       <p>September 26th, 2016 <a href="http://www.meetup.com/HBase-NYC/events/233024937/">HBaseConEast2016</a> at Google in Chelsea, NYC</p>
+         <p>May 24th, 2016 <a href="http://www.hbasecon.com/">HBaseCon2016</a> at The Village, 969 Market, San Francisco</p>
+       <p>June 25th, 2015 <a href="http://www.zusaar.com/event/14057003">HBase Summer Meetup 2015</a> in Tokyo</p>
+       <p>May 7th, 2015 <a href="http://hbasecon.com/">HBaseCon2015</a> in San Francisco</p>
+       <p>February 17th, 2015 <a href="http://www.meetup.com/hbaseusergroup/events/219260093/">HBase meetup around Strata+Hadoop World</a> in San Jose</p>
+       <p>January 15th, 2015 <a href="http://www.meetup.com/hbaseusergroup/events/218744798/">HBase meetup @ AppDynamics</a> in San Francisco</p>
+       <p>November 20th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/205219992/">HBase meetup @ WANdisco</a> in San Ramon</p>
+       <p>October 27th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/207386102/">HBase Meetup @ Apple</a> in Cupertino</p>
+       <p>October 15th, 2014 <a href="http://www.meetup.com/HBase-NYC/events/207655552/">HBase Meetup @ Google</a> on the night before Strata/HW in NYC</p>
+       <p>September 25th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/203173692/">HBase Meetup @ Continuuity</a> in Palo Alto</p>
+         <p>August 28th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/197773762/">HBase Meetup @ Sift Science</a> in San Francisco</p>
+         <p>July 17th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/190994082/">HBase Meetup @ HP</a> in Sunnyvale</p>
+         <p>June 5th, 2014 <a href="http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179081342/">HBase BOF at Hadoop Summit</a>, San Jose Convention Center</p>
+         <p>May 5th, 2014 <a href="http://www.hbasecon.com/">HBaseCon2014</a> at the Hilton San Francisco on Union Square</p>
+         <p>March 12th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/160757912/">HBase Meetup @ Ancestry.com</a> in San Francisco</p>
+      <p><small><a href="old_news.html">Old News</a></small></p>
+    </section>
+  </body>
+
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/metrics.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/metrics.xml b/src/site/xdoc/metrics.xml
new file mode 100644
index 0000000..a029269
--- /dev/null
+++ b/src/site/xdoc/metrics.xml
@@ -0,0 +1,150 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+      Apache HBase (TM) Metrics
+    </title>
+  </properties>
+
+  <body>
+    <section name="Introduction">
+      <p>
+      Apache HBase (TM) emits Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+      </p>
+      </section>
+      <section name="Setup">
+      <p>First read up on Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">metrics</a>.
+      If you are using ganglia, the <a href="http://wiki.apache.org/hadoop/GangliaMetrics">GangliaMetrics</a>
+      wiki page is a useful read.</p>
+      <p>To have HBase emit metrics, edit <code>$HBASE_HOME/conf/hadoop-metrics.properties</code>
+      and enable metric 'contexts' per plugin.  As of this writing, hadoop supports
+      <strong>file</strong> and <strong>ganglia</strong> plugins.
+      Yes, the hbase metrics file is named hadoop-metrics rather than
+      <em>hbase-metrics</em> because, currently at least, the hadoop metrics system has the
+      properties filename hardcoded. Per metrics <em>context</em>,
+      comment out the NullContext and enable one or more plugins instead.
+      </p>
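+      <p>
+      As an illustration, switching the <em>hbase</em> context from the
+      NullContext to the ganglia plugin could look roughly like the sketch
+      below; the host and port are placeholders for your own gmond/gmetad
+      setup.
+      </p>
+      <source>
+# Configuration of the "hbase" context for ganglia
+# hbase.class=org.apache.hadoop.metrics.spi.NullContext
+hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext
+hbase.period=60
+hbase.servers=ganglia-collector.example.com:8649
+      </source>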
+      <p>
+      If you enable the <em>hbase</em> context, on regionservers you'll see total requests since last
+      metric emission, a count of regions and storefiles, as well as the memstore size.
+      On the master, you'll see a count of the cluster's requests.
+      </p>
+      <p>
+      Enabling the <em>rpc</em> context is good if you are interested in seeing
+      metrics on each hbase rpc method invocation (counts and time taken).
+      </p>
+      <p>
+      The <em>jvm</em> context is
+      useful for long-term stats on running hbase jvms -- memory used, thread counts, etc.
+      As of this writing, if more than one jvm is running emitting metrics, at least
+      in ganglia, the stats are aggregated rather than reported per instance.
+      </p>
+    </section>
+
+    <section name="Using with JMX">
+      <p>
+      In addition to the standard output contexts supported by the Hadoop
+      metrics package, you can also export HBase metrics via Java Management
+      Extensions (JMX).  This will allow viewing HBase stats in JConsole or
+      any other JMX client.
+      </p>
+      <section name="Enable HBase stats collection">
+      <p>
+      To enable JMX support in HBase, first edit
+      <code>$HBASE_HOME/conf/hadoop-metrics.properties</code> to support
+      metrics refreshing. (If you're running 0.94.1 or above, or have already configured
+      <code>hadoop-metrics.properties</code> for another output context,
+      you can skip this step).
+      </p>
+      <source>
+# Configuration of the "hbase" context for null
+hbase.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+hbase.period=60
+
+# Configuration of the "jvm" context for null
+jvm.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+jvm.period=60
+
+# Configuration of the "rpc" context for null
+rpc.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
+rpc.period=60
+      </source>
+      </section>
+      <section name="Setup JMX remote access">
+      <p>
+      For remote access, you will need to configure JMX remote passwords
+      and access profiles.  Create the files:
+      </p>
+      <dl>
+        <dt><code>$HBASE_HOME/conf/jmxremote.passwd</code> (set permissions
+        to 600)</dt>
+        <dd>
+        <source>
+monitorRole monitorpass
+controlRole controlpass
+        </source>
+        </dd>
+
+        <dt><code>$HBASE_HOME/conf/jmxremote.access</code></dt>
+        <dd>
+        <source>
+monitorRole readonly
+controlRole readwrite
+        </source>
+        </dd>
+      </dl>
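+      <p>
+      From a shell, the permissions required on the password file can be
+      applied with, for example:
+      </p>
+      <source>
+chmod 600 $HBASE_HOME/conf/jmxremote.passwd
+      </source>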
+      </section>
+      <section name="Configure JMX in HBase startup">
+      <p>
+      Finally, edit the <code>$HBASE_HOME/conf/hbase-env.sh</code>
+      script to add JMX support:
+      </p>
+      <dl>
+        <dt><code>$HBASE_HOME/conf/hbase-env.sh</code></dt>
+        <dd>
+        <p>Add the lines:</p>
+        <source>
+HBASE_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.ssl=false"
+HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.password.file=$HBASE_HOME/conf/jmxremote.passwd"
+HBASE_JMX_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.access.file=$HBASE_HOME/conf/jmxremote.access"
+
+export HBASE_MASTER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10101"
+export HBASE_REGIONSERVER_OPTS="$HBASE_JMX_OPTS -Dcom.sun.management.jmxremote.port=10102"
+        </source>
+        </dd>
+      </dl>
+      <p>
+      After restarting the processes you want to monitor, you should now be
+      able to run JConsole (included with the JDK since JDK 5.0) to view
+      the statistics via JMX.  HBase MBeans are exported under the
+      <strong><code>hadoop</code></strong> domain in JMX.
+      </p>
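+      <p>
+      For example, to attach JConsole to the master from the same machine
+      (a sketch; substitute your own host and the port configured above, and
+      authenticate with a user from <code>jmxremote.passwd</code>, such as
+      <code>monitorRole</code>):
+      </p>
+      <source>
+jconsole localhost:10101
+      </source>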
+      </section>
+      <section name="Understanding HBase Metrics">
+      <p>
+      For more information on understanding HBase metrics, see the <a href="book.html#hbase_metrics">metrics section</a> in the Apache HBase Reference Guide.
+      </p>
+      </section>
+    </section>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/old_news.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/old_news.xml b/src/site/xdoc/old_news.xml
new file mode 100644
index 0000000..94e1882
--- /dev/null
+++ b/src/site/xdoc/old_news.xml
@@ -0,0 +1,92 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+      Old Apache HBase (TM) News
+    </title>
+  </properties>
+  <body>
+  <section name="Old News">
+         <p>February 10th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/163139322/">HBase Meetup @ Continuuity</a> in Palo Alto</p>
+         <p>January 30th, 2014 <a href="http://www.meetup.com/hbaseusergroup/events/158491762/">HBase Meetup @ Apple</a> in Cupertino</p>
+         <p>January 30th, 2014 <a href="http://www.meetup.com/Los-Angeles-HBase-User-group/events/160560282/">Los Angeles HBase User Group</a> in El Segundo</p>
+         <p>October 24th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/140759692/">HBase User</a> and <a href="http://www.meetup.com/hackathon/events/144366512/">Developer</a> Meetup at HortonWorks in Palo Alto</p>
+         <p>September 26, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/135862292/">HBase Meetup at Arista Networks</a> in San Francisco</p>
+         <p>August 20th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/120534362/">HBase Meetup at Flurry</a> in San Francisco</p>
+         <p>July 16th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/119929152/">HBase Meetup at Twitter</a> in San Francisco</p>
+         <p>June 25th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/119154442/">Hadoop Summit Meetup</a> at San Jose Convention Center</p>
+         <p>June 14th, 2013 <a href="http://kijicon.eventbrite.com/">KijiCon: Building Big Data Apps</a> in San Francisco.</p>
+         <p>June 13th, 2013 <a href="http://www.hbasecon.com/">HBaseCon2013</a> in San Francisco.  Submit an Abstract!</p>
+         <p>June 12th, 2013 <a href="http://www.meetup.com/hackathon/events/123403802/">HBaseConHackAthon</a> at the Cloudera office in San Francisco.</p>
+         <p>April 11th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/103587852/">HBase Meetup at AdRoll</a> in San Francisco</p>
+         <p>February 28th, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/96584102/">HBase Meetup at Intel Mission Campus</a></p>
+         <p>February 19th, 2013 <a href="http://www.meetup.com/hackathon/events/103633042/">Developers PowWow</a> at HortonWorks' new digs</p>
+         <p>January 23rd, 2013 <a href="http://www.meetup.com/hbaseusergroup/events/91381312/">HBase Meetup at WibiData World HQ!</a></p>
+            <p>December 4th, 2012 <a href="http://www.meetup.com/hackathon/events/90536432/">0.96 Bug Squashing and Testing Hackathon</a> at Cloudera, SF.</p>
+            <p>October 29th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/82791572/">HBase User Group Meetup</a> at Wize Commerce in San Mateo.</p>
+            <p>October 25th, 2012 <a href="http://www.meetup.com/HBase-NYC/events/81728932/">Strata/Hadoop World HBase Meetup.</a> in NYC</p>
+            <p>September 11th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/80621872/">Contributor's Pow-Wow at HortonWorks HQ.</a></p>
+            <p>August 8th, 2012 <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Apache HBase 0.94.1 is available for download</a></p>
+            <p>June 15th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/59829652/">Birds-of-a-feather</a> in San Jose, day after <a href="http://hadoopsummit.org">Hadoop Summit</a></p>
+            <p>May 23rd, 2012 <a href="http://www.meetup.com/hackathon/events/58953522/">HackConAthon</a> in Palo Alto</p>
+            <p>May 22nd, 2012 <a href="http://www.hbasecon.com">HBaseCon2012</a> in San Francisco</p>
+            <p>March 27th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/56021562/">Meetup @ StumbleUpon</a> in San Francisco</p>
+
+            <p>January 19th, 2012 <a href="http://www.meetup.com/hbaseusergroup/events/46702842/">Meetup @ EBay</a></p>
+            <p>January 23rd, 2012 Apache HBase 0.92.0 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
+            <p>December 23rd, 2011 Apache HBase 0.90.5 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
+            <p>November 29th, 2011 <a href="http://www.meetup.com/hackathon/events/41025972/">Developer Pow-Wow in SF</a> at Salesforce HQ</p>
+            <p>November 7th, 2011 <a href="http://www.meetup.com/hbaseusergroup/events/35682812/">HBase Meetup in NYC (6PM)</a> at the AppNexus office</p>
+            <p>August 22nd, 2011 <a href="http://www.meetup.com/hbaseusergroup/events/28518471/">HBase Hackathon (11AM) and Meetup (6PM)</a> at FB in PA</p>
+            <p>June 30th, 2011 <a href="http://www.meetup.com/hbaseusergroup/events/20572251/">HBase Contributor Day</a>, the day after the <a href="http://developer.yahoo.com/events/hadoopsummit2011/">Hadoop Summit</a> hosted by Y!</p>
+            <p>June 8th, 2011 <a href="http://berlinbuzzwords.de/wiki/hbase-workshop-and-hackathon">HBase Hackathon</a> in Berlin to coincide with <a href="http://berlinbuzzwords.de/">Berlin Buzzwords</a></p>
+            <p>May 19th, 2011 Apache HBase 0.90.3 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
+            <p>April 12th, 2011 Apache HBase 0.90.2 released. <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Download it!</a></p>
+            <p>March 21st, <a href="http://www.meetup.com/hackathon/events/16770852/">HBase 0.92 Hackathon at StumbleUpon, SF</a></p>
+            <p>February 22nd, <a href="http://www.meetup.com/hbaseusergroup/events/16492913/">HUG12: February HBase User Group at StumbleUpon SF</a></p>
+            <p>December 13th, <a href="http://www.meetup.com/hackathon/calendar/15597555/">HBase Hackathon: Coprocessor Edition</a></p>
+      <p>November 19th, <a href="http://huguk.org/">Hadoop HUG in London</a> is all about Apache HBase</p>
+      <p>November 15-19th, <a href="http://www.devoxx.com/display/Devoxx2K10/Home">Devoxx</a> features HBase Training and multiple HBase presentations</p>
+      <p>October 12th, HBase-related presentations by core contributors and users at <a href="http://www.cloudera.com/company/press-center/hadoop-world-nyc/">Hadoop World 2010</a></p>
+      <p>October 11th, <a href="http://www.meetup.com/hbaseusergroup/calendar/14606174/">HUG-NYC: HBase User Group NYC Edition</a> (Night before Hadoop World)</p>
+      <p>June 30th, <a href="http://www.meetup.com/hbaseusergroup/calendar/13562846/">Apache HBase Contributor Workshop</a> (Day after Hadoop Summit)</p>
+      <p>May 10th, 2010: Apache HBase graduates from Hadoop sub-project to Apache Top Level Project </p>
+      <p>Signup for <a href="http://www.meetup.com/hbaseusergroup/calendar/12689490/">HBase User Group Meeting, HUG10</a> hosted by Trend Micro, April 19th, 2010</p>
+
+      <p><a href="http://www.meetup.com/hbaseusergroup/calendar/12689351/">HBase User Group Meeting, HUG9</a> hosted by Mozilla, March 10th, 2010</p>
+      <p>Sign up for the <a href="http://www.meetup.com/hbaseusergroup/calendar/12241393/">HBase User Group Meeting, HUG8</a>, January 27th, 2010 at StumbleUpon in SF</p>
+      <p>September 8th, 2009: Apache HBase 0.20.0 is faster, stronger, slimmer, and sweeter tasting than any previous Apache HBase release.  Get it off the <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Releases</a> page.</p>
+      <p><a href="http://dev.us.apachecon.com/c/acus2009/">ApacheCon</a> in Oakland: November 2-6th, 2009:
+      The Apache Foundation will be celebrating its 10th anniversary in beautiful Oakland by the Bay. Lots of good talks and meetups including an HBase presentation by a couple of the lads.</p>
+      <p>HBase at Hadoop World in NYC: October 2nd, 2009: A few of us will be talking on Practical HBase out east at <a href="http://www.cloudera.com/hadoop-world-nyc">Hadoop World: NYC</a>.</p>
+      <p>HUG7 and HBase Hackathon: August 7th-9th, 2009 at StumbleUpon in SF: Sign up for the <a href="http://www.meetup.com/hbaseusergroup/calendar/10950511/">HBase User Group Meeting, HUG7</a> or for the <a href="http://www.meetup.com/hackathon/calendar/10951718/">Hackathon</a> or for both (all are welcome!).</p>
+      <p>June, 2009 -- HBase at HadoopSummit2009 and at NOSQL: See the <a href="http://wiki.apache.org/hadoop/HBase/HBasePresentations">presentations</a></p>
+      <p>March 3rd, 2009 -- HUG6: <a href="http://www.meetup.com/hbaseusergroup/calendar/9764004/">HBase User Group 6</a></p>
+      <p>January 30th, 2009 -- LA Hbackathon:<a href="http://www.meetup.com/hbasela/calendar/9450876/">HBase January Hackathon Los Angeles</a> at <a href="http://streamy.com" >Streamy</a> in Manhattan Beach</p>
+  </section>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/poweredbyhbase.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/poweredbyhbase.xml b/src/site/xdoc/poweredbyhbase.xml
new file mode 100644
index 0000000..ff1ba59
--- /dev/null
+++ b/src/site/xdoc/poweredbyhbase.xml
@@ -0,0 +1,398 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Powered By Apache HBase&#153;</title>
+  </properties>
+
+<body>
+<section name="Powered By Apache HBase&#153;">
+  <p>This page lists some institutions and projects which are using HBase. To
+    have your organization added, file a documentation JIRA or email
+    <a href="mailto:dev@hbase.apache.org">hbase-dev</a> with the relevant
+    information. If you notice out-of-date information, use the same avenues to
+    report it.
+  </p>
+  <p><b>These items are user-submitted and the HBase team assumes no responsibility for their accuracy.</b></p>
+  <dl>
+  <dt><a href="http://www.adobe.com">Adobe</a></dt>
+  <dd>We currently have about 30 nodes running HDFS, Hadoop and HBase  in clusters
+    ranging from 5 to 14 nodes on both production and development. We plan a
+    deployment on an 80 nodes cluster. We are using HBase in several areas from
+    social services to structured data and processing for internal use. We constantly
+    write data to HBase and run mapreduce jobs to process then store it back to
+    HBase or external systems. Our production cluster has been running since Oct 2008.</dd>
+
+  <dt><a href="http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase">Project Astro</a></dt>
+  <dd>
+    Astro provides fast Spark SQL/DataFrame capabilities to HBase data,
+    featuring super-efficient access to multi-dimensional HBase rows through
+    native Spark execution in HBase coprocessor plus systematic and accurate
+    partition pruning and predicate pushdown from arbitrarily complex data
+    filtering logic. The batch load is optimized to run on the Spark execution
+    engine. Note that <a href="http://spark-packages.org/package/Huawei-Spark/Spark-SQL-on-HBase">Spark-SQL-on-HBase</a>
+    is the release site. Interested parties are free to make clones and claim
+    to be "latest(and active)", but they are not endorsed by the owner.
+  </dd>
+
+  <dt><a href="http://axibase.com/products/axibase-time-series-database/">Axibase
+    Time Series Database (ATSD)</a></dt>
+  <dd>ATSD runs on top of HBase to collect, analyze and visualize time series
+    data at scale. ATSD capabilities include optimized storage schema, built-in
+    rule engine, forecasting algorithms (Holt-Winters and ARIMA) and next-generation
+    graphics designed for high-frequency data. Primary use cases: IT infrastructure
+    monitoring, data consolidation, operational historian in OPC environments.</dd>
+
+  <dt><a href="http://www.benipaltechnologies.com">Benipal Technologies</a></dt>
+  <dd>We have a 35 node cluster used for HBase and Mapreduce with Lucene / SOLR
+    and katta integration to create and finetune our search databases. Currently,
+    our HBase installation has over 10 Billion rows with 100s of datapoints per row.
+    We compute over 10<sup>18</sup> calculations daily using MapReduce directly on HBase. We
+    heart HBase.</dd>
+
+  <dt><a href="https://github.com/ermanpattuk/BigSecret">BigSecret</a></dt>
+  <dd>BigSecret is a security framework that is designed to secure Key-Value data,
+    while preserving efficient processing capabilities. It achieves cell-level
+    security, using combinations of different cryptographic techniques, in an
+    efficient and secure manner. It provides a wrapper library around HBase.</dd>
+
+  <dt><a href="http://caree.rs">Caree.rs</a></dt>
+  <dd>Accelerated hiring platform for HiTech companies. We use HBase and Hadoop
+    for all aspects of our backend - job and company data storage, analytics
+    processing, machine learning algorithms for our hire recommendation engine.
+    Our live production site is directly served from HBase. We use cascading for
+    running offline data processing jobs.</dd>
+
+  <dt><a href="http://www.celer-tech.com/">Celer Technologies</a></dt>
+  <dd>Celer Technologies is a global financial software company that creates
+    modular-based systems that have the flexibility to meet tomorrow's business
+    environment, today.  The Celer framework uses Hadoop/HBase for storing all
+    financial data for trading, risk, clearing in a single data store. With our
+    flexible framework and all the data in Hadoop/HBase, clients can build new
+    features to quickly extract data based on their trading, risk and clearing
+    activities from one single location.</dd>
+
+  <dt><a href="http://www.explorys.net">Explorys</a></dt>
+  <dd>Explorys uses an HBase cluster containing over a billion anonymized clinical
+    records, to enable subscribers to search and analyze patient populations,
+    treatment protocols, and clinical outcomes.</dd>
+
+  <dt><a href="http://www.facebook.com/notes/facebook-engineering/the-underlying-technology-of-messages/454991608919">Facebook</a></dt>
+  <dd>Facebook uses HBase to power their Messages infrastructure.</dd>
+
+  <dt><a href="http://www.filmweb.pl">Filmweb</a></dt>
+  <dd>Filmweb is a film web portal with a large dataset of films, persons and
+    movie-related entities. We have just started a small cluster of 3 HBase nodes
+    to handle our web cache persistency layer. We plan to increase the cluster
+    size, and also to start migrating some of the data from our databases which
+    have some demanding scalability requirements.</dd>
+
+  <dt><a href="http://www.flurry.com">Flurry</a></dt>
+  <dd>Flurry provides mobile application analytics. We use HBase and Hadoop for
+    all of our analytics processing, and serve all of our live requests directly
+    out of HBase on our 50 node production cluster with tens of billions of rows
+    over several tables.</dd>
+
+  <dt><a href="http://gumgum.com">GumGum</a></dt>
+  <dd>GumGum is an In-Image Advertising Platform. We use HBase on an 15-node
+    Amazon EC2 High-CPU Extra Large (c1.xlarge) cluster for both real-time data
+    and analytics. Our production cluster has been running since June 2010.</dd>
+
+  <dt><a href="http://helprace.com/help-desk/">Helprace</a></dt>
+  <dd>Helprace is a customer service platform which uses Hadoop for analytics
+    and internal searching and filtering. Being on HBase we can share our HBase
+    and Hadoop cluster with other Hadoop processes - this particularly helps in
+    keeping community speeds up. We use Hadoop and HBase on small cluster with 4
+    cores and 32 GB RAM each.</dd>
+
+  <dt><a href="http://hubspot.com">HubSpot</a></dt>
+  <dd>HubSpot is an online marketing platform, providing analytics, email, and
+    segmentation of leads/contacts.  HBase is our primary datastore for our customers'
+    customer data, with multiple HBase clusters powering the majority of our
+    product.  We have nearly 200 regionservers across the various clusters, and
+    2 hadoop clusters also with nearly 200 tasktrackers.  We use c1.xlarge in EC2
+    for both, but are starting to move some of that to baremetal hardware.  We've
+    been running HBase for over 2 years.</dd>
+
+  <dt><a href="http://www.infolinks.com/">Infolinks</a></dt>
+  <dd>Infolinks is an In-Text ad provider. We use HBase to process advertisement
+    selection and user events for our In-Text ad network. The reports generated
+    from HBase are used as feedback for our production system to optimize ad
+    selection.</dd>
+
+  <dt><a href="http://www.kalooga.com">Kalooga</a></dt>
+  <dd>Kalooga is a discovery service for image galleries. We use Hadoop, HBase
+    and Pig on a 20-node cluster for our crawling, analysis and events
+    processing.</dd>
+
+  <dt><a href="http://www.leanxcale.com/">LeanXcale</a></dt>
+  <dd>LeanXcale provides an ultra-scalable transactional &amp; SQL database that
+  stores its data on HBase and it is able to scale to 1000s of nodes. It
+  also provides a standalone full ACID HBase with transactions across
+  arbitrary sets of rows and tables.</dd>
+
+
+  <dt><a href="http://www.mahalo.com">Mahalo</a></dt>
+  <dd>Mahalo, "...the world's first human-powered search engine". All the markup
+    that powers the wiki is stored in HBase. It's been in use for a few months now.
+    MediaWiki - the same software that power Wikipedia - has version/revision control.
+    Mahalo's in-house editors produce a lot of revisions per day, which was not
+    working well in a RDBMS. An hbase-based solution for this was built and tested,
+    and the data migrated out of MySQL and into HBase. Right now it's at something
+    like 6 million items in HBase. The upload tool runs every hour from a shell
+    script to back up that data, and on 6 nodes takes about 5-10 minutes to run -
+    and does not slow down production at all.</dd>
+
+  <dt><a href="http://www.meetup.com">Meetup</a></dt>
+  <dd>Meetup is on a mission to help the world’s people self-organize into local
+    groups.  We use Hadoop and HBase to power a site-wide, real-time activity
+    feed system for all of our members and groups.  Group activity is written
+    directly to HBase, and indexed per member, with the member's custom feed
+    served directly from HBase for incoming requests.  We're running HBase
+    0.20.0 on a 11 node cluster.</dd>
+
+  <dt><a href="http://www.mendeley.com">Mendeley</a></dt>
+  <dd>Mendeley is creating a platform for researchers to collaborate and share
+    their research online. HBase is helping us to create the world's largest
+    research paper collection and is being used to store all our raw imported data.
+    We use a lot of map reduce jobs to process these papers into pages displayed
+    on the site. We also use HBase with Pig to do analytics and produce the article
+    statistics shown on the web site. You can find out more about how we use HBase
+    in the <a href="http://www.slideshare.net/danharvey/hbase-at-mendeley">HBase
+    At Mendeley</a> slide presentation.</dd>
+
+  <dt><a href="http://www.ngdata.com">NGDATA</a></dt>
+  <dd>NGDATA delivers <a href="http://www.ngdata.com/site/products/lily.html">Lily</a>,
+    the consumer intelligence solution that delivers a unique combination of Big
+    Data management, machine learning technologies and consumer intelligence
+    applications in one integrated solution to allow better, and more dynamic,
+    consumer insights. Lily allows companies to process and analyze massive structured
+    and unstructured data, scale storage elastically and locate actionable data
+    quickly from large data sources in near real time.</dd>
+
+  <dt><a href="http://ning.com">Ning</a></dt>
+  <dd>Ning uses HBase to store and serve the results of processing user events
+    and log files, which allows us to provide near-real time analytics and
+    reporting. We use a small cluster of commodity machines with 4 cores and 16GB
+    of RAM per machine to handle all our analytics and reporting needs.</dd>
+
+  <dt><a href="http://www.worldcat.org">OCLC</a></dt>
+  <dd>OCLC uses HBase as the main data store for WorldCat, a union catalog which
+    aggregates the collections of 72,000 libraries in 112 countries and territories.
+    WorldCat is currently comprised of nearly 1 billion records with nearly 2
+    billion library ownership indications. We're running a 50 Node HBase cluster
+    and a separate offline map-reduce cluster.</dd>
+
+  <dt><a href="http://olex.openlogic.com">OpenLogic</a></dt>
+  <dd>OpenLogic stores all the world's Open Source packages, versions, files,
+    and lines of code in HBase for both near-real-time access and analytical
+    purposes. The production cluster has well over 100TB of disk spread across
+    nodes with 32GB+ RAM and dual-quad or dual-hex core CPU's.</dd>
+
+  <dt><a href="http://www.openplaces.org">Openplaces</a></dt>
+  <dd>Openplaces is a search engine for travel that uses HBase to store terabytes
+    of web pages and travel-related entity records (countries, cities, hotels,
+    etc.). We have dozens of MapReduce jobs that crunch data on a daily basis.
+    We use a 20-node cluster for development, a 40-node cluster for offline
+    production processing and an EC2 cluster for the live web site.</dd>
+
+  <dt><a href="http://www.pnl.gov">Pacific Northwest National Laboratory</a></dt>
+  <dd>Hadoop and HBase (Cloudera distribution) are being used within PNNL's
+    Computational Biology &amp; Bioinformatics Group for a systems biology data
+    warehouse project that integrates high throughput proteomics and transcriptomics
+    data sets coming from instruments in the Environmental  Molecular Sciences
+    Laboratory, a US Department of Energy national user facility located at PNNL.
+    The data sets are being merged and annotated with other public genomics
+    information in the data warehouse environment, with Hadoop analysis programs
+    operating on the annotated data in the HBase tables. This work is hosted by
+    <a href="http://www.pnl.gov/news/release.aspx?id=908">olympus</a>, a large PNNL
+    institutional computing cluster, with the HBase tables being stored in olympus's
+    Lustre file system.</dd>
+
+  <dt><a href="http://www.readpath.com/">ReadPath</a></dt>
+  <dd>ReadPath uses HBase to store several hundred million RSS items and a dictionary
+    for its RSS newsreader. ReadPath is currently running on an 8 node cluster.</dd>
+
+  <dt><a href="http://resu.me/">resu.me</a></dt>
+  <dd>Career network for the net generation. We use HBase and Hadoop for all
+    aspects of our backend: user and resume data storage, analytics processing,
+    and machine learning algorithms for our job recommendation engine. Our live
+    production site is served directly from HBase. We use Cascading for running
+    offline data processing jobs.</dd>
+
+  <dt><a href="http://www.runa.com/">Runa Inc.</a></dt>
+  <dd>Runa Inc. offers a SaaS that enables online merchants to offer dynamic
+    per-consumer, per-product promotions embedded in their website. To implement
+    this we collect the click streams of all their visitors and, together with
+    the merchant's rules, determine what promotion to offer the visitor at
+    different points as they browse the merchant's website. So we have lots of
+    data and have to do lots of off-line and real-time analytics. HBase is the
+    core for us. We also use Clojure and our own open-sourced distributed
+    processing framework, Swarmiji. The HBase community has been key to our
+    forward movement with HBase. We're looking for experienced developers to
+    join us to help make things go even faster!</dd>
+
+  <dt><a href="http://www.sematext.com/">Sematext</a></dt>
+  <dd>Sematext runs
+    <a href="http://www.sematext.com/search-analytics/index.html">Search Analytics</a>,
+    a service that uses HBase to store search activity and MapReduce to produce
+    reports showing user search behaviour and experience. Sematext runs
+    <a href="http://www.sematext.com/spm/index.html">Scalable Performance Monitoring (SPM)</a>,
+    a service that uses HBase to store performance data over time, crunch it with
+    the help of MapReduce, and display it in a visually rich browser-based UI.
+    Interestingly, SPM features
+    <a href="http://www.sematext.com/spm/hbase-performance-monitoring/index.html">SPM for HBase</a>,
+    which is specifically designed to monitor all HBase performance metrics.</dd>
+
+  <dt><a href="http://www.socialmedia.com/">SocialMedia</a></dt>
+  <dd>SocialMedia uses HBase to store and process user events, which allows us to
+    provide near-realtime user metrics and reporting. HBase forms the heart of
+    our Advertising Network data storage and management system. We use HBase as
+    a data source and sink, both for realtime request-cycle queries and as a
+    backend for MapReduce analysis.</dd>
+
+  <dt><a href="http://www.splicemachine.com/">Splice Machine</a></dt>
+  <dd>Splice Machine is built on top of HBase.  Splice Machine is a full-featured
+    ANSI SQL database that provides real-time updates, secondary indices, ACID
+    transactions, optimized joins, triggers, and UDFs.</dd>
+
+  <dt><a href="http://www.streamy.com/">Streamy</a></dt>
+  <dd>Streamy is a recently launched realtime social news site.  We use HBase
+    for all of our data storage, query, and analysis needs, replacing an existing
+    SQL-based system.  This includes hundreds of millions of documents, sparse
+    matrices, logs, and everything else once done in the relational system. We
+    perform significant in-memory caching of query results, similar to a
+    traditional Memcached/SQL setup, and use other external components to perform
+    joining and sorting.  We also run thousands of daily MapReduce jobs using HBase tables
+    for log analysis, attention data processing, and feed crawling.  HBase has
+    helped us scale and distribute in ways we could not otherwise, and the
+    community has provided consistent and invaluable assistance.</dd>
+
+  <dt><a href="http://www.stumbleupon.com/">Stumbleupon</a></dt>
+  <dd>Stumbleupon and <a href="http://su.pr">Su.pr</a> use HBase as a real time
+    data storage and analytics platform. Serving directly out of HBase, various site
+    features and statistics are kept up to date in a real time fashion. We also
+    use HBase a map-reduce data source to overcome traditional query speed limits
+    in MySQL.</dd>
+
+  <dt><a href="http://www.tokenizer.org">Shopping Engine at Tokenizer</a></dt>
+  <dd>Shopping Engine at Tokenizer is a web crawler; it uses HBase to store more
+    than a billion URLs and outlinks (anchor text + linked URL). It was initially
+    designed as a Nutch-Hadoop extension, then (due to a very specific 'shopping'
+    scenario) moved to SOLR + MySQL (InnoDB) at tens of thousands of queries per
+    second, and now to HBase. HBase is significantly faster because it needs no
+    huge transaction logs, its column-oriented design exactly matches our 'lazy'
+    business logic, it compresses data, and it supports MapReduce. The number of
+    mutable 'indexes' (in the RDBMS sense) is significantly reduced because each
+    'row::column' structure is physically sorted by 'row'. The MySQL InnoDB
+    engine is the best DB choice for highly-concurrent updates; however, having
+    to flush a whole block of data to disk even when only a few bytes changed is
+    an obvious bottleneck. HBase helps greatly here: the 'delete-insert',
+    'mutable primary key', and 'natural primary key' patterns, not so popular in
+    modern DBMSs, become a big advantage with HBase (see the client-API sketch of
+    the delete-insert pattern below).</dd>
+
+  <dt><a href="http://traackr.com/">Traackr</a></dt>
+  <dd>Traackr uses HBase to store and serve online influencer data in real-time.
+    We use MapReduce to frequently re-score our entire data set as we keep updating
+    influencer metrics on a daily basis.</dd>
+
+  <dt><a href="http://trendmicro.com/">Trend Micro</a></dt>
+  <dd>Trend Micro uses HBase as a foundation for cloud-scale storage for a variety
+    of applications. We have been developing with HBase since version 0.1 and in
+    production since version 0.20.0.</dd>
+
+  <dt><a href="http://www.twitter.com">Twitter</a></dt>
+  <dd>Twitter runs HBase across its entire Hadoop cluster. HBase provides a
+    distributed, read/write backup of all MySQL tables in Twitter's production
+    backend, allowing engineers to run MapReduce jobs over the data while
+    maintaining the ability to apply periodic row updates, something that is
+    more difficult to do with vanilla HDFS (see the TableMapper sketch below for
+    the shape of such jobs). A number of applications, including people search,
+    rely on HBase internally for data generation. Additionally, the operations
+    team uses HBase as a timeseries database for cluster-wide monitoring/performance
+    data.</dd>
+
+  <dt><a href="http://www.udanax.org">Udanax.org</a></dt>
+  <dd>Udanax.org is a URL shortener which uses a 10-node HBase cluster to store
+    URLs and web log data and to serve real-time requests on its web server. This
+    application is now used by some Twitter clients and a number of web sites.
+    Currently API requests run at almost 30 per second and web redirection
+    requests at about 300 per second.</dd>
+
+  <dt><a href="http://www.veoh.com/">Veoh Networks</a></dt>
+  <dd>Veoh Networks uses HBase to store and process visitor (human) and entity
+    (non-human) profiles which are used for behavioral targeting, demographic
+    detection, and personalization services.  Our site reads this data in
+    real-time (heavily cached) and submits updates via various batch map/reduce
+    jobs. With 25 million unique visitors a month storing this data in a traditional
+    RDBMS is not an option. We currently have a 24 node Hadoop/HBase cluster and
+    our profiling system is sharing this cluster with our other Hadoop data
+    pipeline processes.</dd>
+
+  <dt><a href="http://www.videosurf.com/">VideoSurf</a></dt>
+  <dd>VideoSurf - "The video search engine that has taught computers to see".
+    We're using HBase to persist various large graphs of data and other statistics.
+    HBase was a real win for us because it let us store substantially larger
+    datasets without the need to manually partition the data, and its
+    column-oriented nature allowed us to create schemas that were substantially
+    more efficient for storing and retrieving data.</dd>
+
+  <dt><a href="http://www.visibletechnologies.com/">Visible Technologies</a></dt>
+  <dd>Visible Technologies uses Hadoop, HBase, Katta, and more to collect, parse,
+    store, and search hundreds of millions of pieces of social media content. We
+    get incredibly fast throughput and very low latency on commodity hardware.
+    HBase enables our business to exist.</dd>
+
+  <dt><a href="http://www.worldlingo.com/">WorldLingo</a></dt>
+  <dd>The WorldLingo Multilingual Archive. We use HBase to store millions of
+    documents that we scan using Map/Reduce jobs to machine translate them into
+    all or selected target languages from our set of available machine translation
+    languages. We currently store 12 million documents but plan to eventually
+    reach the 450 million mark. HBase allows us to scale out as we need to grow
+    our storage capacities. Combined with Hadoop to keep the data replicated and
+    therefore fail-safe, we have the backbone our service can rely on now and in
+    the future. WorldLingo has been using HBase since December 2007 and is, along
+    with a few others, one of the longest-running HBase installations. Currently
+    we are running the latest HBase 0.20 and serving directly from it at
+    <a href="http://www.worldlingo.com/ma/enwiki/en/HBase">MultilingualArchive</a>.</dd>
+
+  <dt><a href="http://www.yahoo.com/">Yahoo!</a></dt>
+  <dd>Yahoo! uses HBase to store document fingerprints for detecting near-duplicates.
+    We have a cluster of a few nodes that runs HDFS, MapReduce, and HBase. The table
+    contains millions of rows. We use this for querying duplicated documents with
+    realtime traffic.</dd>
+
+  <dt><a href="http://h50146.www5.hp.com/products/software/security/icewall/eng/">HP IceWall SSO</a></dt>
+  <dd>HP IceWall SSO is a web-based single sign-on solution and uses HBase to store
+    user data to authenticate users. We previously supported RDBMSs and LDAP, and
+    have newly added HBase support with a view to authenticating tens of millions
+    of users and devices.</dd>
+
+  <dt><a href="http://www.ymc.ch/en/big-data-analytics-en?utm_source=hadoopwiki&amp;utm_medium=poweredbypage&amp;utm_campaign=ymc.ch">YMC AG</a></dt>
+  <dd><ul>
+    <li>operating a Cloudera Hadoop/HBase cluster for media monitoring purposes</li>
+    <li>offering technical and operational consulting for the Hadoop stack and ecosystem</li>
+    <li>editor of <a href="http://www.ymc.ch/en/hbase-split-visualisation-introducing-hannibal?utm_source=hadoopwiki&amp;utm_medium=poweredbypage&amp;utm_campaign=ymc.ch">Hannibal</a>, an open-source tool
+    to visualize HBase region sizes and splits that helps with running HBase in production</li>
+  </ul></dd>
+  </dl>
+</section>
+</body>
+</document>
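
Note on the 'delete-insert' / 'natural primary key' pattern called out in the
Tokenizer entry above: because HBase rows are physically sorted by row key,
re-keying a row is handled client-side as a Put under the new key followed by a
Delete of the old one. A minimal sketch against the standard HBase Java client;
the table name, column family, and keys are hypothetical (not part of this
patch), and note the two mutations are not atomic:

  import java.io.IOException;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Delete;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;

  public class ReKeySketch {
    public static void main(String[] args) throws IOException {
      Configuration conf = HBaseConfiguration.create();
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Table table = conn.getTable(TableName.valueOf("urls"))) {
        byte[] family = Bytes.toBytes("f");
        byte[] qual   = Bytes.toBytes("anchor");
        byte[] oldKey = Bytes.toBytes("http://example.com/old");
        byte[] newKey = Bytes.toBytes("http://example.com/new");

        // Read the value stored under the old natural key.
        Result r = table.get(new Get(oldKey));
        byte[] value = r.getValue(family, qual);
        if (value != null) {
          // Re-key as delete-insert: Put under the new key, then Delete
          // the old row. No secondary index needs rewriting because rows
          // are simply kept sorted by key.
          table.put(new Put(newKey).addColumn(family, qual, value));
          table.delete(new Delete(oldKey));
        }
      }
    }
  }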

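Note on running MapReduce over HBase tables, as in the Twitter and WorldLingo
entries above: a job reads table rows through a TableMapper and is wired up
with TableMapReduceUtil from the hbase-mapreduce module. A minimal map-only,
row-counting sketch; the "events" table name is hypothetical, not part of this
patch:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
  import org.apache.hadoop.hbase.mapreduce.TableMapper;
  import org.apache.hadoop.io.NullWritable;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

  public class RowCountSketch {
    /** Counts rows via a job counter; emits no map output. */
    public static class CountMapper
        extends TableMapper<NullWritable, NullWritable> {
      @Override
      protected void map(ImmutableBytesWritable key, Result value, Context ctx) {
        ctx.getCounter("sketch", "rows").increment(1);
      }
    }

    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      Job job = Job.getInstance(conf, "row-count-sketch");
      job.setJarByClass(RowCountSketch.class);

      Scan scan = new Scan();
      scan.setCaching(500);        // fetch rows from region servers in batches
      scan.setCacheBlocks(false);  // don't churn the block cache from a scan job

      TableMapReduceUtil.initTableMapperJob("events", scan, CountMapper.class,
          NullWritable.class, NullWritable.class, job);
      job.setOutputFormatClass(NullOutputFormat.class);
      job.setNumReduceTasks(0);    // map-only
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }
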
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/pseudo-distributed.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/pseudo-distributed.xml b/src/site/xdoc/pseudo-distributed.xml
new file mode 100644
index 0000000..fa1ad80
--- /dev/null
+++ b/src/site/xdoc/pseudo-distributed.xml
@@ -0,0 +1,41 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+Running Apache HBase (TM) in pseudo-distributed mode
+    </title>
+  </properties>
+
+  <body>
+      <p>This page has been retired.  The contents have been moved to the
+      <a href="http://hbase.apache.org/book.html#distributed">Distributed Operation: Pseudo- and Fully-distributed modes</a> section
+ in the Reference Guide.
+ </p>
+
+ </body>
+
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/replication.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/replication.xml b/src/site/xdoc/replication.xml
new file mode 100644
index 0000000..a2fcfcb
--- /dev/null
+++ b/src/site/xdoc/replication.xml
@@ -0,0 +1,35 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+      Apache HBase (TM) Replication
+    </title>
+  </properties>
+  <body>
+    <p>This information has been moved to <a href="http://hbase.apache.org/book.html#cluster_replication">the Cluster Replication</a> section of the <a href="http://hbase.apache.org/book.html">Apache HBase Reference Guide</a>.</p>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/resources.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/resources.xml b/src/site/xdoc/resources.xml
new file mode 100644
index 0000000..19548b6
--- /dev/null
+++ b/src/site/xdoc/resources.xml
@@ -0,0 +1,45 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Other Apache HBase (TM) Resources</title>
+  </properties>
+
+<body>
+<section name="Other Apache HBase Resources">
+<section name="Books">
+<section name="HBase: The Definitive Guide">
+<p><a href="http://shop.oreilly.com/product/0636920014348.do">HBase: The Definitive Guide <i>Random Access to Your Planet-Size Data</i></a> by Lars George. Publisher: O'Reilly Media, Released: August 2011, Pages: 556.</p>
+</section>
+<section name="HBase In Action">
+<p><a href="http://www.manning.com/dimidukkhurana/">HBase In Action</a> By Nick Dimiduk and Amandeep Khurana.  Publisher: Manning, MEAP Began: January 2012, Softbound print: Fall 2012, Pages: 350.</p>
+</section>
+<section name="HBase Administration Cookbook">
+<p><a href="http://www.packtpub.com/hbase-administration-for-optimum-database-performance-cookbook/book">HBase Administration Cookbook</a> by Yifeng Jiang.  Publisher: PACKT Publishing, Release: Expected August 2012, Pages: 335.</p>
+</section>
+<section name="HBase High Performance Cookbook">
+  <p><a href="https://www.packtpub.com/big-data-and-business-intelligence/hbase-high-performance-cookbook">HBase High Performance Cookbook</a> by Ruchir Choudhry.  Publisher: PACKT Publishing, Release: January 2017, Pages: 350.</p>
+</section>
+</section>
+</section>
+</body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/sponsors.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/sponsors.xml b/src/site/xdoc/sponsors.xml
new file mode 100644
index 0000000..332f56a
--- /dev/null
+++ b/src/site/xdoc/sponsors.xml
@@ -0,0 +1,50 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>Apache HBase&#153; Sponsors</title>
+  </properties>
+
+<body>
+<section name="Sponsors">
+    <p>First off, thanks to <a href="http://www.apache.org/foundation/thanks.html">all who sponsor</a>
+       our parent, the Apache Software Foundation.
+    </p>
+<p>The companies below have been gracious enough to provide their commercial tool offerings free of charge to the Apache HBase&#153; project.
+<ul>
+	<li>The crew at <a href="http://www.ej-technologies.com/">ej-technologies</a> have
+        let us use <a href="http://www.ej-technologies.com/products/jprofiler/overview.html">JProfiler</a> for years now.</li>
+	<li>The lads at <a href="http://headwaysoftware.com/">Headway Software</a> have
+        given us a license for <a href="http://headwaysoftware.com/products/?code=Restructure101">Restructure101</a>
+        so we can untangle our interdependency mess.</li>
+	<li><a href="http://www.yourkit.com">YourKit</a> allows us to use their <a href="http://www.yourkit.com/overview/index.jsp">Java Profiler</a>.</li>
+	<li>Some of us use <a href="http://www.jetbrains.com/idea">IntelliJ IDEA</a> thanks to <a href="http://www.jetbrains.com/">JetBrains</a>.</li>
+  <li>Thank you to Boris at <a href="http://www.vectorportal.com/">Vector Portal</a> for granting us a license on the <a href="http://www.vectorportal.com/subcategory/205/KILLER-WHALE-FREE-VECTOR.eps/ifile/9136/detailtest.asp">image</a> on which our logo is based.</li>
+</ul>
+</p>
+</section>
+<section name="Sponsoring the Apache Software Foundation">
+<p>To contribute to the Apache Software Foundation, a good idea in our opinion, see the <a href="http://www.apache.org/foundation/sponsorship.html">ASF Sponsorship</a> page.
+</p>
+</section>
+</body>
+</document>


[03/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/bc_l2_buckets.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/bc_l2_buckets.png b/src/site/resources/images/bc_l2_buckets.png
new file mode 100644
index 0000000..5163928
Binary files /dev/null and b/src/site/resources/images/bc_l2_buckets.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/bc_stats.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/bc_stats.png b/src/site/resources/images/bc_stats.png
new file mode 100644
index 0000000..d8c6384
Binary files /dev/null and b/src/site/resources/images/bc_stats.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/big_h_logo.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/big_h_logo.png b/src/site/resources/images/big_h_logo.png
new file mode 100644
index 0000000..5256094
Binary files /dev/null and b/src/site/resources/images/big_h_logo.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/big_h_logo.svg
----------------------------------------------------------------------
diff --git a/src/site/resources/images/big_h_logo.svg b/src/site/resources/images/big_h_logo.svg
new file mode 100644
index 0000000..ab24198
--- /dev/null
+++ b/src/site/resources/images/big_h_logo.svg
@@ -0,0 +1,139 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Generator: Adobe Illustrator 15.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   version="1.1"
+   id="Layer_1"
+   x="0px"
+   y="0px"
+   width="792px"
+   height="612px"
+   viewBox="0 0 792 612"
+   enable-background="new 0 0 792 612"
+   xml:space="preserve"
+   inkscape:version="0.48.4 r9939"
+   sodipodi:docname="big_h_same_font_hbase3_logo.png"
+   inkscape:export-filename="big_h_bitmap.png"
+   inkscape:export-xdpi="90"
+   inkscape:export-ydpi="90"><metadata
+   id="metadata3693"><rdf:RDF><cc:Work
+       rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+         rdf:resource="http://purl.org/dc/dcmitype/StillImage" /><dc:title></dc:title></cc:Work></rdf:RDF></metadata><defs
+   id="defs3691" /><sodipodi:namedview
+   pagecolor="#000000"
+   bordercolor="#666666"
+   borderopacity="1"
+   objecttolerance="10"
+   gridtolerance="10"
+   guidetolerance="10"
+   inkscape:pageopacity="0"
+   inkscape:pageshadow="2"
+   inkscape:window-width="1440"
+   inkscape:window-height="856"
+   id="namedview3689"
+   showgrid="false"
+   inkscape:zoom="2.1814013"
+   inkscape:cx="415.39305"
+   inkscape:cy="415.72702"
+   inkscape:window-x="1164"
+   inkscape:window-y="22"
+   inkscape:window-maximized="0"
+   inkscape:current-layer="Layer_1" />
+
+
+
+
+
+
+<text
+   xml:space="preserve"
+   style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Bitsumishi;-inkscape-font-specification:Bitsumishi"
+   x="311.18643"
+   y="86.224579"
+   id="text3082"
+   sodipodi:linespacing="125%"><tspan
+     sodipodi:role="line"
+     id="tspan3084"
+     x="311.18643"
+     y="86.224579" /></text>
+<text
+   xml:space="preserve"
+   style="font-size:40px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;font-family:Bitsumishi;-inkscape-font-specification:Bitsumishi"
+   x="283.95764"
+   y="87.845337"
+   id="text3086"
+   sodipodi:linespacing="125%"><tspan
+     sodipodi:role="line"
+     id="tspan3088"
+     x="283.95764"
+     y="87.845337" /></text>
+<g
+   id="g3105"
+   transform="translate(14.669469,-80.682082)"
+   inkscape:export-filename="/Users/stack/Documents/big_h_base.png"
+   inkscape:export-xdpi="90"
+   inkscape:export-ydpi="90"><path
+     sodipodi:nodetypes="ccccccccccccccccccccccccccccc"
+     style="fill:#ba160c"
+     inkscape:connector-curvature="0"
+     id="path3677"
+     d="m 589.08202,499.77746 -40.3716,0 0,-168.36691 40.3716,0 z m -40.20304,-168.35619 -0.1684,-104.30857 40.3716,0 -0.33048,104.26805 z m -0.1684,168.35619 -40.37568,0 0,-104.82988 -259.42272,0 0,104.82988 -79.42128,0 0,-272.66476 79.42128,0 0,104.29785 224.92224,0 34.50456,0 40.37568,0 0,168.36691 z m 0,-272.66476 -40.37568,0 -0.0171,104.30857 40.55802,-0.01 z"
+     inkscape:export-filename="/Users/stack/Documents/polygon3687.png"
+     inkscape:export-xdpi="90"
+     inkscape:export-ydpi="90" /><path
+     sodipodi:nodetypes="cscsccsssccsssccscsccccccccccccccccccccc"
+     style="fill:#ba160c"
+     inkscape:connector-curvature="0"
+     id="path3679"
+     d="m 263.96692,553.27262 c 6.812,4.218 10.219,10.652 10.219,19.303 0,6.272 -2,11.571 -6.002,15.897 -4.325,4.758 -10.165,7.137 -17.519,7.137 l -28.629,0 0,-19.465 28.629,0 c 2.812,0 4.218,-2.109 4.218,-6.327 0,-4.216 -1.406,-6.325 -4.218,-6.325 l -28.629,0 0,-19.303 27.17,0 c 2.811,0 4.217,-2.109 4.217,-6.327 0,-4.216 -1.406,-6.326 -4.217,-6.326 l -27.17,0 0,-19.464 27.17,0 c 7.353,0 13.192,2.379 17.519,7.137 3.892,4.325 5.839,9.625 5.839,15.896 0,7.787 -2.866,13.842 -8.597,18.167 z m -41.931,42.338 -52.312,0 0,-51.42 19.466,0 5.259,0 27.588,0 0,19.303 -32.847,0 0,12.652 32.847,0 0,19.465 z m 0,-64.073 -32.847,0 0.0405,12.76351 -19.466,0.081 -0.0405,-32.30954 52.312,0 0,19.465 z" /><path
+     style="fill:#ba160c"
+     inkscape:connector-curvature="0"
+     id="path3683"
+     d="m 384.35292,595.61062 h -19.465 v -26.602 h -31.094 -0.618 v -19.466 h 0.618 31.094 v -11.68 c 0,-4.216 -1.406,-6.324 -4.218,-6.324 h -27.494 v -19.465 h 27.494 c 7.03,0 12.733,2.541 17.114,7.623 4.379,5.083 6.569,11.139 6.569,18.167 v 57.747 z m -51.177,-26.602 h -19.547 -12.165 v 26.602 h -19.466 v -57.748 c 0,-7.028 2.19,-13.083 6.569,-18.167 4.379,-5.083 10.03,-7.623 16.952,-7.623 h 27.656 v 19.466 h -27.656 c -2.704,0 -4.055,2.108 -4.055,6.324 v 11.68 h 12.165 19.547 v 19.466 z" /><path
+     style="fill:#ba160c"
+     inkscape:connector-curvature="0"
+     id="path3685"
+     d="m 492.35692,569.81862 c 0,7.03 -2.109,13.031 -6.327,18.006 -4.541,5.19 -10.273,7.786 -17.193,7.786 h -72.02 v -19.465 h 72.02 c 2.704,0 4.055,-2.109 4.055,-6.327 0,-4.216 -1.352,-6.325 -4.055,-6.325 h -52.394 c -6.92,0 -12.652,-2.596 -17.193,-7.787 -4.327,-4.865 -6.49,-10.813 -6.49,-17.843 0,-7.028 2.218,-13.083 6.651,-18.167 4.434,-5.083 10.112,-7.623 17.032,-7.623 h 72.021 v 19.464 h -72.021 c -2.703,0 -4.055,2.109 -4.055,6.326 0,4.109 1.352,6.164 4.055,6.164 h 52.394 c 6.92,0 12.652,2.596 17.193,7.787 4.218,4.974 6.327,10.976 6.327,18.004 z" /><polygon
+     style="fill:#ba160c"
+     transform="translate(-71.972085,223.93862)"
+     id="polygon3687"
+     points="656.952,339.555 591.906,339.555 591.906,352.207 661.331,352.207 661.331,371.672 572.44,371.672 572.44,288.135 661.494,288.135 661.494,307.599 591.906,307.599 591.906,320.089 656.952,320.089 "
+     inkscape:export-xdpi="90"
+     inkscape:export-ydpi="90" /><g
+     id="g3349"><g
+       id="g3344"><text
+         transform="scale(0.93350678,1.0712295)"
+         sodipodi:linespacing="125%"
+         id="text3076"
+         y="203.03328"
+         x="181.98402"
+         style="font-size:84.015625px;font-style:italic;font-variant:normal;font-weight:bold;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#4d4d4d;fill-opacity:1;stroke:none;font-family:Bitsumishi;-inkscape-font-specification:Bitsumishi Bold Italic"
+         xml:space="preserve"
+         inkscape:export-xdpi="90"
+         inkscape:export-ydpi="90"
+         inkscape:export-filename="/Users/stack/Documents/polygon3687.png"><tspan
+           style="font-size:84.015625px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:start;line-height:125%;letter-spacing:25.64349174px;writing-mode:lr-tb;text-anchor:start;fill:#4d4d4d;font-family:Bitsumishi;-inkscape-font-specification:Bitsumishi"
+           y="203.03328"
+           x="181.98402"
+           id="tspan3080"
+           sodipodi:role="line">APACHE</tspan></text>
+<rect
+         y="191.93103"
+         x="178.85117"
+         height="10.797735"
+         width="7.7796612"
+         id="rect3090"
+         style="fill:#4d4d4d" /></g><rect
+       style="fill:#4d4d4d"
+       id="rect3103"
+       width="8.1443329"
+       height="10.787481"
+       x="334.64697"
+       y="191.93881" /></g></g></svg>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/data_block_diff_encoding.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/data_block_diff_encoding.png b/src/site/resources/images/data_block_diff_encoding.png
new file mode 100644
index 0000000..0bd03a4
Binary files /dev/null and b/src/site/resources/images/data_block_diff_encoding.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/data_block_no_encoding.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/data_block_no_encoding.png b/src/site/resources/images/data_block_no_encoding.png
new file mode 100644
index 0000000..56498b4
Binary files /dev/null and b/src/site/resources/images/data_block_no_encoding.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/data_block_prefix_encoding.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/data_block_prefix_encoding.png b/src/site/resources/images/data_block_prefix_encoding.png
new file mode 100644
index 0000000..4271847
Binary files /dev/null and b/src/site/resources/images/data_block_prefix_encoding.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/favicon.ico
----------------------------------------------------------------------
diff --git a/src/site/resources/images/favicon.ico b/src/site/resources/images/favicon.ico
new file mode 100644
index 0000000..6e4d0f7
Binary files /dev/null and b/src/site/resources/images/favicon.ico differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hadoop-logo.jpg
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hadoop-logo.jpg b/src/site/resources/images/hadoop-logo.jpg
new file mode 100644
index 0000000..809525d
Binary files /dev/null and b/src/site/resources/images/hadoop-logo.jpg differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbase_logo.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbase_logo.png b/src/site/resources/images/hbase_logo.png
new file mode 100644
index 0000000..e962ce0
Binary files /dev/null and b/src/site/resources/images/hbase_logo.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbase_logo.svg
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbase_logo.svg b/src/site/resources/images/hbase_logo.svg
new file mode 100644
index 0000000..2cc26d9
--- /dev/null
+++ b/src/site/resources/images/hbase_logo.svg
@@ -0,0 +1,78 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Generator: Adobe Illustrator 15.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0)  -->
+
+<svg
+   xmlns:dc="http://purl.org/dc/elements/1.1/"
+   xmlns:cc="http://creativecommons.org/ns#"
+   xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+   xmlns:svg="http://www.w3.org/2000/svg"
+   xmlns="http://www.w3.org/2000/svg"
+   xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+   xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+   version="1.1"
+   id="Layer_1"
+   x="0px"
+   y="0px"
+   width="792px"
+   height="612px"
+   viewBox="0 0 792 612"
+   enable-background="new 0 0 792 612"
+   xml:space="preserve"
+   inkscape:version="0.48.4 r9939"
+   sodipodi:docname="hbase_banner_logo.png"
+   inkscape:export-filename="hbase_logo_filledin.png"
+   inkscape:export-xdpi="90"
+   inkscape:export-ydpi="90"><metadata
+   id="metadata3285"><rdf:RDF><cc:Work
+       rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+         rdf:resource="http://purl.org/dc/dcmitype/StillImage" /><dc:title></dc:title></cc:Work></rdf:RDF></metadata><defs
+   id="defs3283" /><sodipodi:namedview
+   pagecolor="#ffffff"
+   bordercolor="#666666"
+   borderopacity="1"
+   objecttolerance="10"
+   gridtolerance="10"
+   guidetolerance="10"
+   inkscape:pageopacity="0"
+   inkscape:pageshadow="2"
+   inkscape:window-width="1131"
+   inkscape:window-height="715"
+   id="namedview3281"
+   showgrid="false"
+   inkscape:zoom="4.3628026"
+   inkscape:cx="328.98554"
+   inkscape:cy="299.51695"
+   inkscape:window-x="752"
+   inkscape:window-y="456"
+   inkscape:window-maximized="0"
+   inkscape:current-layer="Layer_1" />
+<path
+   d="m 233.586,371.672 -9.895,0 0,-51.583 9.895,0 0,51.583 z m -9.77344,-51.59213 -0.12156,-31.94487 9.895,0 -0.0405,31.98539 z m -0.12156,51.59213 -9.896,0 0,-32.117 -63.584,0 0,32.117 -19.466,0 0,-83.537 19.466,0 0,31.954 55.128,0 8.457,0 9.896,0 0,51.583 z m 0,-83.537 -9.896,0 0,31.98539 10.01756,-0.0405 z"
+   id="path3269"
+   inkscape:connector-curvature="0"
+   style="fill:#ba160c"
+   sodipodi:nodetypes="cccccccccccccccccccccccccccccc" />
+<path
+   d="m 335.939,329.334 c 6.812,4.218 10.219,10.652 10.219,19.303 0,6.272 -2,11.571 -6.002,15.897 -4.325,4.758 -10.165,7.137 -17.519,7.137 l -28.629,0 0,-19.465 28.629,0 c 2.812,0 4.218,-2.109 4.218,-6.327 0,-4.216 -1.406,-6.325 -4.218,-6.325 l -28.629,0 0,-19.303 27.17,0 c 2.811,0 4.217,-2.109 4.217,-6.327 0,-4.216 -1.406,-6.326 -4.217,-6.326 l -27.17,0 0,-19.464 27.17,0 c 7.353,0 13.192,2.379 17.519,7.137 3.892,4.325 5.839,9.625 5.839,15.896 0,7.787 -2.866,13.842 -8.597,18.167 z m -41.931,42.338 -52.312,0 0,-51.42 19.466,0 5.259,0 27.588,0 0,19.303 -32.847,0 0,12.652 32.847,0 0,19.465 z m 0,-64.073 -32.847,0 0.0405,13.24974 -19.466,-0.48623 -0.0405,-32.22851 52.312,0 0,19.465 z"
+   id="path3271"
+   inkscape:connector-curvature="0"
+   style="fill:#ba160c"
+   sodipodi:nodetypes="cscsccsssccsssccscsccccccccccccccccccccc" />
+<path
+   d="M355.123,266.419v-8.92h14.532v-5.353c0-1.932-0.644-2.899-1.933-2.899h-12.6v-8.919h12.6  c3.223,0,5.836,1.164,7.842,3.494c2.007,2.33,3.011,5.104,3.011,8.325v26.463h-8.921v-12.19H355.123L355.123,266.419z   M473.726,278.61h-29.587c-3.469,0-6.417-1.152-8.845-3.458c-2.429-2.304-3.642-5.191-3.642-8.659v-14.049  c0-3.47,1.213-6.356,3.642-8.662c2.428-2.304,5.376-3.455,8.845-3.455h29.587v8.919h-29.587c-2.378,0-3.567,1.066-3.567,3.197  v14.049c0,2.131,1.189,3.196,3.567,3.196h29.587V278.61L473.726,278.61z M567.609,278.61h-8.996v-14.718h-22.895v14.718h-8.92  v-38.282h8.92v14.644h22.895v-14.644h8.996V278.61L567.609,278.61z M661.494,249.247h-31.889v5.725h29.807v8.92h-29.807v5.797  h31.814v8.92h-40.735v-38.282h40.809V249.247z M355.123,240.328v8.919h-12.674c-1.239,0-1.858,0.967-1.858,2.899v5.353h5.575h2.435  h6.522v8.92h-6.522h-2.435h-5.575v12.19h-8.92v-26.463c0-3.221,1.004-5.996,3.011-8.325c2.006-2.33,4.596-3.494,7.768-3.494H355.123  L355.123,240.328z M254.661,266.122v-8.92h13.083c1.288,0,1.
 933-1.313,1.933-3.939c0-2.676-0.645-4.015-1.933-4.015h-13.083v-8.919  h13.083c3.32,0,5.995,1.363,8.028,4.088c1.883,2.478,2.825,5.425,2.825,8.846c0,3.419-0.942,6.342-2.825,8.771  c-2.033,2.725-4.708,4.088-8.028,4.088H254.661z M177.649,278.61h-8.92v-12.19h-14.532v-8.92h14.532v-5.353  c0-1.932-0.644-2.899-1.932-2.899h-12.6v-8.919h12.6c3.222,0,5.835,1.164,7.842,3.494c2.007,2.33,3.01,5.104,3.01,8.325V278.61  L177.649,278.61z M254.661,240.328v8.919h-15.016v7.954h15.016v8.92h-15.016v12.488h-8.92v-38.282H254.661z M154.198,266.419h-7.604  h-1.354h-5.575v12.19h-8.92v-26.463c0-3.221,1.004-5.996,3.01-8.325c2.007-2.33,4.597-3.494,7.768-3.494h12.674v8.919h-12.674  c-1.239,0-1.858,0.967-1.858,2.899v5.353h5.575h1.354h7.604V266.419z"
+   id="path3273"
+   style="fill:#666666"
+   fill="#878888" />
+<path
+   fill="#BA160C"
+   d="M456.325,371.672H436.86V345.07h-31.094h-0.618v-19.466h0.618h31.094v-11.68  c0-4.216-1.406-6.324-4.218-6.324h-27.494v-19.465h27.494c7.03,0,12.733,2.541,17.114,7.623c4.379,5.083,6.569,11.139,6.569,18.167  V371.672z M405.148,345.07h-19.547h-12.165v26.602h-19.466v-57.748c0-7.028,2.19-13.083,6.569-18.167  c4.379-5.083,10.03-7.623,16.952-7.623h27.656V307.6h-27.656c-2.704,0-4.055,2.108-4.055,6.324v11.68h12.165h19.547V345.07z"
+   id="path3275" />
+<path
+   fill="#BA160C"
+   d="M564.329,345.88c0,7.03-2.109,13.031-6.327,18.006c-4.541,5.19-10.273,7.786-17.193,7.786h-72.02v-19.465  h72.02c2.704,0,4.055-2.109,4.055-6.327c0-4.216-1.352-6.325-4.055-6.325h-52.394c-6.92,0-12.652-2.596-17.193-7.787  c-4.327-4.865-6.49-10.813-6.49-17.843c0-7.028,2.218-13.083,6.651-18.167c4.434-5.083,10.112-7.623,17.032-7.623h72.021v19.464  h-72.021c-2.703,0-4.055,2.109-4.055,6.326c0,4.109,1.352,6.164,4.055,6.164h52.394c6.92,0,12.652,2.596,17.193,7.787  C562.22,332.85,564.329,338.852,564.329,345.88z"
+   id="path3277" />
+<polygon
+   fill="#BA160C"
+   points="661.494,307.599 591.906,307.599 591.906,320.089 656.952,320.089 656.952,339.555 591.906,339.555   591.906,352.207 661.331,352.207 661.331,371.672 572.44,371.672 572.44,288.135 661.494,288.135 "
+   id="polygon3279" />
+</svg>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbase_logo_with_orca.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbase_logo_with_orca.png b/src/site/resources/images/hbase_logo_with_orca.png
new file mode 100644
index 0000000..7ed60e2
Binary files /dev/null and b/src/site/resources/images/hbase_logo_with_orca.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbase_logo_with_orca.xcf
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbase_logo_with_orca.xcf b/src/site/resources/images/hbase_logo_with_orca.xcf
new file mode 100644
index 0000000..8d88da2
Binary files /dev/null and b/src/site/resources/images/hbase_logo_with_orca.xcf differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbase_logo_with_orca_large.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbase_logo_with_orca_large.png b/src/site/resources/images/hbase_logo_with_orca_large.png
new file mode 100644
index 0000000..e91eb8d
Binary files /dev/null and b/src/site/resources/images/hbase_logo_with_orca_large.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbase_replication_diagram.jpg
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbase_replication_diagram.jpg b/src/site/resources/images/hbase_replication_diagram.jpg
new file mode 100644
index 0000000..c110309
Binary files /dev/null and b/src/site/resources/images/hbase_replication_diagram.jpg differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbasecon2015.30percent.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbasecon2015.30percent.png b/src/site/resources/images/hbasecon2015.30percent.png
new file mode 100644
index 0000000..26896a4
Binary files /dev/null and b/src/site/resources/images/hbasecon2015.30percent.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbasecon2016-stack-logo.jpg
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbasecon2016-stack-logo.jpg b/src/site/resources/images/hbasecon2016-stack-logo.jpg
new file mode 100644
index 0000000..b59280d
Binary files /dev/null and b/src/site/resources/images/hbasecon2016-stack-logo.jpg differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbasecon2016-stacked.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbasecon2016-stacked.png b/src/site/resources/images/hbasecon2016-stacked.png
new file mode 100644
index 0000000..4ff181e
Binary files /dev/null and b/src/site/resources/images/hbasecon2016-stacked.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbasecon2017.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbasecon2017.png b/src/site/resources/images/hbasecon2017.png
new file mode 100644
index 0000000..4b25f89
Binary files /dev/null and b/src/site/resources/images/hbasecon2017.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hbaseconasia2017.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hbaseconasia2017.png b/src/site/resources/images/hbaseconasia2017.png
new file mode 100644
index 0000000..8548870
Binary files /dev/null and b/src/site/resources/images/hbaseconasia2017.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hfile.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hfile.png b/src/site/resources/images/hfile.png
new file mode 100644
index 0000000..5762970
Binary files /dev/null and b/src/site/resources/images/hfile.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/hfilev2.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/hfilev2.png b/src/site/resources/images/hfilev2.png
new file mode 100644
index 0000000..54cc0cf
Binary files /dev/null and b/src/site/resources/images/hfilev2.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/jumping-orca_rotated.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/jumping-orca_rotated.png b/src/site/resources/images/jumping-orca_rotated.png
new file mode 100644
index 0000000..4c2c72e
Binary files /dev/null and b/src/site/resources/images/jumping-orca_rotated.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/jumping-orca_rotated.xcf
----------------------------------------------------------------------
diff --git a/src/site/resources/images/jumping-orca_rotated.xcf b/src/site/resources/images/jumping-orca_rotated.xcf
new file mode 100644
index 0000000..01be6ff
Binary files /dev/null and b/src/site/resources/images/jumping-orca_rotated.xcf differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/jumping-orca_rotated_12percent.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/jumping-orca_rotated_12percent.png b/src/site/resources/images/jumping-orca_rotated_12percent.png
new file mode 100644
index 0000000..1942f9a
Binary files /dev/null and b/src/site/resources/images/jumping-orca_rotated_12percent.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/jumping-orca_rotated_25percent.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/jumping-orca_rotated_25percent.png b/src/site/resources/images/jumping-orca_rotated_25percent.png
new file mode 100644
index 0000000..219c657
Binary files /dev/null and b/src/site/resources/images/jumping-orca_rotated_25percent.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/jumping-orca_transparent_rotated.xcf
----------------------------------------------------------------------
diff --git a/src/site/resources/images/jumping-orca_transparent_rotated.xcf b/src/site/resources/images/jumping-orca_transparent_rotated.xcf
new file mode 100644
index 0000000..be9e3d9
Binary files /dev/null and b/src/site/resources/images/jumping-orca_transparent_rotated.xcf differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/region_split_process.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/region_split_process.png b/src/site/resources/images/region_split_process.png
new file mode 100644
index 0000000..2717617
Binary files /dev/null and b/src/site/resources/images/region_split_process.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/region_states.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/region_states.png b/src/site/resources/images/region_states.png
new file mode 100644
index 0000000..ba69e97
Binary files /dev/null and b/src/site/resources/images/region_states.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/replication_overview.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/replication_overview.png b/src/site/resources/images/replication_overview.png
new file mode 100644
index 0000000..47d7b4c
Binary files /dev/null and b/src/site/resources/images/replication_overview.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/images/timeline_consistency.png
----------------------------------------------------------------------
diff --git a/src/site/resources/images/timeline_consistency.png b/src/site/resources/images/timeline_consistency.png
new file mode 100644
index 0000000..94c47e0
Binary files /dev/null and b/src/site/resources/images/timeline_consistency.png differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar
----------------------------------------------------------------------
diff --git a/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar b/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar
new file mode 100644
index 0000000..5b93209
Binary files /dev/null and b/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar differ

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom
----------------------------------------------------------------------
diff --git a/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom b/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom
new file mode 100644
index 0000000..d12092b
--- /dev/null
+++ b/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom
@@ -0,0 +1,718 @@
+<?xml version="1.0" encoding="UTF-8"?>
+
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+
+  <parent>
+    <groupId>org.apache.maven.skins</groupId>
+    <artifactId>maven-skins</artifactId>
+    <version>10</version>
+    <relativePath>../maven-skins/pom.xml</relativePath>
+  </parent>
+
+  <artifactId>maven-fluido-skin</artifactId>
+  <version>1.5-HBASE</version>
+
+  <name>Apache Maven Fluido Skin</name>
+  <description>The Apache Maven Fluido Skin is an Apache Maven site skin
+    built on top of Twitter's bootstrap.</description>
+  <inceptionYear>2011</inceptionYear>
+
+  <scm>
+    <connection>scm:svn:http://svn.apache.org/repos/asf/maven/skins/trunk/maven-fluido-skin/</connection>
+    <developerConnection>scm:svn:https://svn.apache.org/repos/asf/maven/skins/trunk/maven-fluido-skin/</developerConnection>
+    <url>http://svn.apache.org/viewvc/maven/skins/trunk/maven-fluido-skin/</url>
+  </scm>
+  <issueManagement>
+    <system>jira</system>
+    <url>https://issues.apache.org/jira/browse/MSKINS/component/12326474</url>
+  </issueManagement>
+  <distributionManagement>
+    <site>
+      <id>apache.website</id>
+      <url>scm:svn:https://svn.apache.org/repos/infra/websites/production/maven/components/${maven.site.path}</url>
+    </site>
+  </distributionManagement>
+
+  <contributors>
+    <!-- in alphabetical order -->
+    <contributor>
+      <name>Bruno P. Kinoshita</name>
+      <email>brunodepaulak AT yahoo DOT com DOT br</email>
+    </contributor>
+    <contributor>
+      <name>Carlos Villaronga</name>
+      <email>cvillaronga AT gmail DOT com</email>
+    </contributor>
+    <contributor>
+      <name>Christian Grobmeier</name>
+      <email>grobmeier AT apache DOT org</email>
+    </contributor>
+    <contributor>
+      <name>Conny Kreyssel</name>
+      <email>dev AT kreyssel DOT org</email>
+    </contributor>
+    <contributor>
+      <name>Michael Koch</name>
+      <email>tensberg AT gmx DOT net</email>
+    </contributor>
+    <contributor>
+      <name>Emmanuel Hugonnet</name>
+      <email>emmanuel DOT hugonnet AT gmail DOT com</email>
+    </contributor>
+    <contributor>
+      <name>Ivan Habunek</name>
+      <email>ihabunek AT apache DOT org</email>
+    </contributor>
+    <contributor>
+      <name>Eric Barboni</name>
+    </contributor>
+    <contributor>
+      <name>Michael Osipov</name>
+      <email>michaelo AT apache DOT org</email>
+    </contributor>
+  </contributors>
+
+  <properties>
+    <bootstrap.version>2.3.2</bootstrap.version>
+    <jquery.version>1.11.2</jquery.version>
+  </properties>
+
+  <build>
+    <resources>
+      <resource>
+        <directory>.</directory>
+        <targetPath>META-INF</targetPath>
+        <includes>
+          <include>NOTICE</include>
+          <include>LICENSE</include>
+        </includes>
+      </resource>
+
+      <!-- exclude css and js since will include the minified version -->
+      <resource>
+        <directory>${basedir}/src/main/resources</directory>
+        <excludes>
+          <exclude>css/**</exclude>
+          <exclude>js/**</exclude>
+        </excludes>
+        <filtering>true</filtering> <!-- add skin-info -->
+      </resource>
+
+      <!-- include the print.css -->
+      <resource>
+        <directory>${basedir}/src/main/resources</directory>
+        <includes>
+          <include>css/print.css</include>
+        </includes>
+      </resource>
+
+      <!-- include minified only -->
+      <resource>
+        <directory>${project.build.directory}/${project.build.finalName}</directory>
+        <includes>
+          <include>css/apache-maven-fluido-${project.version}.min.css</include>
+          <include>js/apache-maven-fluido-${project.version}.min.js</include>
+        </includes>
+      </resource>
+    </resources>
+
+    <pluginManagement>
+      <plugins>
+        <plugin>
+          <groupId>org.apache.rat</groupId>
+          <artifactId>apache-rat-plugin</artifactId>
+          <configuration>
+            <excludes combine.children="append">
+              <exclude>src/main/resources/fonts/glyphicons-halflings-regular.svg</exclude>
+              <exclude>src/main/resources/js/prettify.js</exclude>
+              <exclude>src/main/resources/js/jquery-*.js</exclude>
+            </excludes>
+          </configuration>
+        </plugin>
+      </plugins>
+    </pluginManagement>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-resources-plugin</artifactId>
+        <dependencies><!-- TODO remove when upgrading to version 2.8: see MSHARED-325 / MRESOURCES-192 -->
+          <dependency>
+              <groupId>org.apache.maven.shared</groupId>
+              <artifactId>maven-filtering</artifactId>
+              <version>1.3</version>
+          </dependency>
+        </dependencies>
+        <configuration>
+          <delimiters>
+            <delimiter>@</delimiter>
+          </delimiters>
+          <useDefaultDelimiters>false</useDefaultDelimiters>
+        </configuration>
+      </plugin>
+      <plugin>
+        <groupId>com.samaxes.maven</groupId>
+        <artifactId>maven-minify-plugin</artifactId>
+        <version>1.3.5</version>
+        <executions>
+          <execution>
+            <id>default-minify</id>
+            <phase>generate-resources</phase>
+            <configuration>
+              <webappSourceDir>${basedir}/src/main/resources</webappSourceDir>
+              <cssSourceDir>css</cssSourceDir>
+              <cssSourceFiles>
+                <cssSourceFile>bootstrap-${bootstrap.version}.css</cssSourceFile>
+                <cssSourceFile>maven-base.css</cssSourceFile>
+                <cssSourceFile>maven-theme.css</cssSourceFile>
+                <cssSourceFile>prettify.css</cssSourceFile>
+              </cssSourceFiles>
+              <cssFinalFile>apache-maven-fluido-${project.version}.css</cssFinalFile>
+              <jsSourceDir>js</jsSourceDir>
+              <jsSourceFiles>
+                <jsSourceFile>jquery-${jquery.version}.js</jsSourceFile>
+                <jsSourceFile>bootstrap-${bootstrap.version}.js</jsSourceFile>
+                <jsSourceFile>prettify.js</jsSourceFile>
+                <jsSourceFile>fluido.js</jsSourceFile>
+              </jsSourceFiles>
+              <jsFinalFile>apache-maven-fluido-${project.version}.js</jsFinalFile>
+            </configuration>
+            <goals>
+              <goal>minify</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+    </plugins>
+  </build>
+
+  <profiles>
+    <profile>
+      <id>run-its</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-invoker-plugin</artifactId>
+            <configuration>
+              <debug>true</debug>
+              <projectsDirectory>src/it</projectsDirectory>
+              <cloneProjectsTo>${project.build.directory}/it</cloneProjectsTo>
+              <preBuildHookScript>setup</preBuildHookScript>
+              <postBuildHookScript>verify</postBuildHookScript>
+              <localRepositoryPath>${project.build.directory}/local-repo</localRepositoryPath>
+              <settingsFile>src/it/settings.xml</settingsFile>
+              <pomIncludes>
+                <pomInclude>*/pom.xml</pomInclude>
+              </pomIncludes>
+              <goals>
+                <goal>site</goal>
+              </goals>
+            </configuration>
+            <executions>
+              <execution>
+                <id>integration-test</id>
+                <goals>
+                  <goal>install</goal>
+                  <goal>integration-test</goal>
+                  <goal>verify</goal>
+                </goals>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+    </profile>
+    <profile>
+      <id>reporting</id>
+      <build>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-resources-plugin</artifactId>
+            <executions>
+              <execution>
+                <id>copy-sidebar</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/sidebar/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/sidebar/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-topbar</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/topbar/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/topbar/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-topbar-inverse</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/topbar-inverse/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/topbar-inverse/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-10</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-10/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-10/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-13</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-13/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-13/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-14</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-14/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-14/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-14_sitesearch</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-14_sitesearch/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-14_sitesearch/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-15</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-15/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-15/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-16</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-16/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-16/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-17</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-17/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-17/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-21</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-21/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-21/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-22</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-22/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-22/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-22_default</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-22_default/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-22_default/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-22_topbar</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-22_topbar/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-22_topbar/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-23</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-23/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-23/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-24</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-24/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-24/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-24_topbar</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-24_topbar/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-24_topbar/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-25</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-25/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-25/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-28</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-28/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-28/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-31</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-31/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-31/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-33</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-33/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-33/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-33_topbar</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-33_topbar/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-33_topbar/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-34</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-34/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-34/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-34_topbar</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-34_topbar/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-34_topbar/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-41</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-41/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-41/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-72</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-72/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-72/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-75</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-75/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-75/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-76</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-76/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-76/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-76_topbar</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-76_topbar/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-76_topbar/</outputDirectory>
+                </configuration>
+              </execution>
+              <execution>
+                <id>copy-mskins-85</id>
+                <phase>site</phase>
+                <goals>
+                  <goal>copy-resources</goal>
+                </goals>
+                <configuration>
+                  <resources>
+                    <resource>
+                      <directory>${project.build.directory}/it/mskins-85/target/site/</directory>
+                    </resource>
+                  </resources>
+                  <outputDirectory>${project.build.directory}/site/mskins-85/</outputDirectory>
+                </configuration>
+              </execution>
+            </executions>
+          </plugin>
+        </plugins>
+      </build>
+      <reporting>
+        <plugins>
+          <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-invoker-plugin</artifactId>
+            <version>1.8</version>
+          </plugin>
+        </plugins>
+      </reporting>
+    </profile>
+  </profiles>
+</project>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/maven-metadata-local.xml
----------------------------------------------------------------------
diff --git a/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/maven-metadata-local.xml b/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/maven-metadata-local.xml
new file mode 100644
index 0000000..65791e8
--- /dev/null
+++ b/src/site/resources/repo/org/apache/maven/skins/maven-fluido-skin/maven-metadata-local.xml
@@ -0,0 +1,12 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<metadata>
+  <groupId>org.apache.maven.skins</groupId>
+  <artifactId>maven-fluido-skin</artifactId>
+  <versioning>
+    <release>1.5-HBASE</release>
+    <versions>
+      <version>1.5-HBASE</version>
+    </versions>
+    <lastUpdated>20151111033340</lastUpdated>
+  </versioning>
+</metadata>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/site.xml
----------------------------------------------------------------------
diff --git a/src/site/site.xml b/src/site/site.xml
new file mode 100644
index 0000000..f036702
--- /dev/null
+++ b/src/site/site.xml
@@ -0,0 +1,131 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+-->
+
+<project xmlns="http://maven.apache.org/DECORATION/1.0.0"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://maven.apache.org/DECORATION/1.0.0 http://maven.apache.org/xsd/decoration-1.0.0.xsd">
+  <skin>
+    <groupId>org.apache.maven.skins</groupId>
+    <artifactId>maven-fluido-skin</artifactId>
+    <version>1.5-HBASE</version>
+  </skin>
+  <custom>
+    <fluidoSkin>
+      <topBarEnabled>true</topBarEnabled>
+      <sideBarEnabled>false</sideBarEnabled>
+      <googleSearch>
+        <!-- The ID of the Google custom search engine to use.
+             This one searches hbase.apache.org, issues.apache.org/browse/HBASE-*,
+             and user and dev mailing list archives. -->
+        <customSearch>000385458301414556862:sq1bb0xugjg</customSearch>
+      </googleSearch>
+      <sourceLineNumbersEnabled>false</sourceLineNumbersEnabled>
+      <skipGenerationDate>true</skipGenerationDate>
+      <breadcrumbDivider>»</breadcrumbDivider>
+    </fluidoSkin>
+  </custom>
+  <bannerLeft>
+    <name />
+    <src />
+    <href />
+    <!--
+    <name/>
+    <height>0</height>
+    <width>0</width>
+-->
+  </bannerLeft>
+  <bannerRight>
+    <name>Apache HBase</name>
+    <src>images/hbase_logo_with_orca_large.png</src>
+    <href>http://hbase.apache.org/</href>
+  </bannerRight>
+  <publishDate position="bottom"/>
+  <version position="none"/>
+  <body>
+    <head>
+      <meta name="viewport" content="width=device-width, initial-scale=1.0"></meta>
+      <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/2.3.2/css/bootstrap-responsive.min.css"/>
+      <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.9.1/styles/github.min.css"/>
+      <link rel="stylesheet" href="css/site.css"/>
+      <script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/8.9.1/highlight.min.js"></script>
+    </head>
+    <menu name="Apache HBase Project">
+      <item name="Overview" href="index.html"/>
+      <item name="License" href="license.html"/>
+      <item name="Downloads" href="http://www.apache.org/dyn/closer.cgi/hbase/"/>
+      <item name="Release Notes" href="https://issues.apache.org/jira/browse/HBASE?report=com.atlassian.jira.plugin.system.project:changelog-panel#selectedTab=com.atlassian.jira.plugin.system.project%3Achangelog-panel" />
+      <item name="Code Of Conduct" href="coc.html"/>
+      <item name="Blog" href="http://blogs.apache.org/hbase/"/>
+      <item name="Mailing Lists" href="mail-lists.html"/>
+      <item name="Team" href="team-list.html"/>
+      <item name="ReviewBoard" href="https://reviews.apache.org/"/>
+      <item name="Thanks" href="sponsors.html"/>
+      <item name="Powered by HBase" href="poweredbyhbase.html"/>
+      <item name="Other resources" href="resources.html"/>
+    </menu>
+    <menu name="Project Information">
+      <item name="Project Summary" href="project-summary.html"/>
+      <item name="Dependency Information" href="dependency-info.html"/>
+      <item name="Team" href="team-list.html"/>
+      <item name="Source Repository" href="source-repository.html"/>
+      <item name="Issue Tracking" href="issue-tracking.html"/>
+      <item name="Dependency Management" href="dependency-management.html"/>
+      <item name="Dependencies" href="dependencies.html"/>
+      <item name="Dependency Convergence" href="dependency-convergence.html"/>
+      <item name="Continuous Integration" href="integration.html"/>
+      <item name="Plugin Management" href="plugin-management.html"/>
+      <item name="Plugins" href="plugins.html"/>
+    </menu>
+    <menu name="Documentation and API">
+      <item name="Reference Guide" href="book.html" target="_blank" />
+      <item name="Reference Guide (PDF)" href="apache_hbase_reference_guide.pdf" target="_blank" />
+      <item name="Getting Started" href="book.html#quickstart" target="_blank" />
+      <item name="User API" href="apidocs/index.html" target="_blank" />
+      <item name="User API (Test)" href="testapidocs/index.html" target="_blank" />
+      <item name="Developer API" href="https://hbase.apache.org/2.0/devapidocs/index.html" target="_blank" />
+      <item name="Developer API (Test)" href="https://hbase.apache.org/2.0/testdevapidocs/index.html" target="_blank" />
+      <item name="中文参考指南(单页)" href="http://abloz.com/hbase/book.html" target="_blank" />
+      <item name="FAQ" href="book.html#faq" target="_blank" />
+      <item name="Videos/Presentations" href="book.html#other.info" target="_blank" />
+      <item name="Wiki" href="http://wiki.apache.org/hadoop/Hbase" target="_blank" />
+      <item name="ACID Semantics" href="acid-semantics.html" target="_blank" />
+      <item name="Bulk Loads" href="book.html#arch.bulk.load" target="_blank" />
+      <item name="Metrics" href="metrics.html" target="_blank" />
+      <item name="HBase on Windows" href="cygwin.html" target="_blank" />
+      <item name="Cluster replication" href="book.html#replication" target="_blank" />
+      <item name="1.2 Documentation">
+        <item name="API" href="1.2/apidocs/index.html" target="_blank" />
+        <item name="X-Ref" href="1.2/xref/index.html" target="_blank" />
+        <item name="Ref Guide (single-page)" href="1.2/book.html" target="_blank" />
+      </item>
+      <item name="1.1 Documentation">
+        <item name="API" href="1.1/apidocs/index.html" target="_blank" />
+        <item name="X-Ref" href="1.1/xref/index.html" target="_blank" />
+        <item name="Ref Guide (single-page)" href="1.1/book.html" target="_blank" />
+      </item>
+    </menu>
+    <menu name="ASF">
+      <item name="Apache Software Foundation" href="http://www.apache.org/foundation/" target="_blank" />
+      <item name="How Apache Works" href="http://www.apache.org/foundation/how-it-works.html" target="_blank" />
+      <item name="Sponsoring Apache" href="http://www.apache.org/foundation/sponsorship.html" target="_blank" />
+    </menu>
+  </body>
+</project>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/acid-semantics.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/acid-semantics.xml b/src/site/xdoc/acid-semantics.xml
new file mode 100644
index 0000000..d3f0dd9
--- /dev/null
+++ b/src/site/xdoc/acid-semantics.xml
@@ -0,0 +1,235 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+      Apache HBase (TM) ACID Properties
+    </title>
+  </properties>
+
+  <body>
+    <section name="About this Document">
+      <p>Apache HBase (TM) is not an ACID-compliant database. However, it does guarantee certain specific
+      properties.</p>
+      <p>This specification enumerates the ACID properties of HBase.</p>
+    </section>
+    <section name="Definitions">
+      <p>For the sake of common vocabulary, we define the following terms:</p>
+      <dl>
+        <dt>Atomicity</dt>
+        <dd>an operation is atomic if it either completes entirely or not at all</dd>
+
+        <dt>Consistency</dt>
+        <dd>
+          all actions cause the table to transition from one valid state directly to another
+          (e.g., a row will not disappear during an update, etc.)
+        </dd>
+
+        <dt>Isolation</dt>
+        <dd>
+          an operation is isolated if it appears to complete independently of any other concurrent transaction
+        </dd>
+
+        <dt>Durability</dt>
+        <dd>any update that reports &quot;successful&quot; to the client will not be lost</dd>
+
+        <dt>Visibility</dt>
+        <dd>an update is considered visible if any subsequent read will see the update as having been committed</dd>
+      </dl>
+      <p>
+        The terms <em>must</em> and <em>may</em> are used as specified by RFC 2119.
+        In short, the word &quot;must&quot; implies that, if some case exists where the statement
+        is not true, it is a bug. The word &quot;may&quot; implies that, even if the guarantee
+        is provided in a current release, users should not rely on it.
+      </p>
+    </section>
+    <section name="APIs to consider">
+      <ul>
+        <li>Read APIs
+        <ul>
+          <li>get</li>
+          <li>scan</li>
+        </ul>
+        </li>
+        <li>Write APIs
+        <ul>
+          <li>put</li>
+          <li>batch put</li>
+          <li>delete</li>
+        </ul>
+        </li>
+        <li>Combination (read-modify-write) APIs
+        <ul>
+          <li>incrementColumnValue</li>
+          <li>checkAndPut</li>
+        </ul>
+        </li>
+      </ul>
+    </section>
+
+    <section name="Guarantees Provided">
+
+      <section name="Atomicity">
+
+        <ol>
+          <li>All mutations are atomic within a row. Any put will either wholly succeed or wholly fail. [3]
+          <ol>
+            <li>An operation that returns a &quot;success&quot; code has completely succeeded.</li>
+            <li>An operation that returns a &quot;failure&quot; code has completely failed.</li>
+            <li>An operation that times out may have succeeded or may have failed. However,
+            it will not have partially succeeded or failed.</li>
+          </ol>
+          </li>
+          <li> This is true even if the mutation crosses multiple column families within a row.</li>
+          <li> APIs that mutate several rows will <em>not</em> be atomic across the multiple rows.
+          For example, a multiput that operates on rows 'a', 'b', and 'c' may return having
+          mutated some but not all of the rows. In such cases, these APIs will return a list
+          of success codes, each of which may have succeeded, failed, or timed out as described above.</li>
+          <li> The checkAndPut API executes atomically, like the typical compareAndSet (CAS) operation
+          found in many hardware architectures; see the sketch after this list.</li>
+          <li> Mutations are seen to happen in a well-defined order for each row, with no
+          interleaving. For example, if one writer issues the mutation &quot;a=1,b=1,c=1&quot; and
+          another writer issues the mutation &quot;a=2,b=2,c=2&quot;, the row must either
+          be &quot;a=1,b=1,c=1&quot; or &quot;a=2,b=2,c=2&quot; and must <em>not</em> be something
+          like &quot;a=1,b=2,c=1&quot;.
+          <ol>
+            <li>Please note that this is not true <em>across rows</em> for multirow batch mutations.</li>
+          </ol>
+          </li>
+        </ol>
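+        <p>
+          As a minimal, non-normative sketch of the CAS-style guarantee above: assuming an
+          open Table handle named &quot;table&quot;, illustrative row and column names, and
+          the classic five-argument checkAndPut signature, a Java client might do:
+        </p>
+        <source>
+// Classes are from org.apache.hadoop.hbase.client and
+// org.apache.hadoop.hbase.util.Bytes; all names here are illustrative.
+byte[] row  = Bytes.toBytes("row1");
+byte[] fam  = Bytes.toBytes("cf");
+byte[] qual = Bytes.toBytes("balance");
+
+Put put = new Put(row);
+put.addColumn(fam, qual, Bytes.toBytes(200L));
+
+// Atomically apply the Put only if the cell still holds 100; of two
+// callers racing on the same row, at most one can see 'applied == true'.
+boolean applied = table.checkAndPut(row, fam, qual, Bytes.toBytes(100L), put);
+        </source>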
+      </section>
+      <section name="Consistency and Isolation">
+        <ol>
+          <li>All rows returned via any access API will consist of a complete row that existed at
+          some point in the table's history.</li>
+          <li>This is true across column families - i.e., a get of a full row that occurs concurrently
+          with some mutations 1,2,3,4,5 will return a complete row that existed at some point in time
+          between mutation i and i+1 for some i between 1 and 5.</li>
+          <li>The state of a row will only move forward through the history of edits to it.</li>
+        </ol>
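+        <p>
+          A minimal sketch of the cross-family guarantee above, assuming an open Table
+          named &quot;table&quot; with illustrative families &quot;cf1&quot; and &quot;cf2&quot;:
+        </p>
+        <source>
+// Whole-row Get: no columns specified, so all column families are fetched.
+Get get = new Get(Bytes.toBytes("row1"));
+Result result = table.get(get);
+
+// Per the guarantee above, these two values together form one row state that
+// existed at some point in the row's history, even under concurrent mutation.
+byte[] v1 = result.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("q"));
+byte[] v2 = result.getValue(Bytes.toBytes("cf2"), Bytes.toBytes("q"));
+        </source>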
+
+        <section name="Consistency of Scans">
+        <p>
+          A scan is <strong>not</strong> a consistent view of a table. Scans do
+          <strong>not</strong> exhibit <em>snapshot isolation</em>.
+        </p>
+        <p>
+          Rather, scans have the following properties:
+        </p>
+
+        <ol>
+          <li>
+            Any row returned by the scan will be a consistent view (i.e. that version
+            of the complete row existed at some point in time) [1]
+          </li>
+          <li>
+            A scan will always reflect a view of the data <em>at least as new as</em>
+            the beginning of the scan. This satisfies the visibility guarantees
+            enumerated below.
+          <ol>
+            <li>For example, if client A writes data X and then communicates via a side
+            channel to client B, any scans started by client B will contain data at least
+            as new as X.</li>
+            <li>A scan <em>must</em> reflect all mutations committed prior to the construction
+            of the scanner, and <em>may</em> reflect some mutations committed subsequent to the
+            construction of the scanner.</li>
+            <li>Scans must include <em>all</em> data written prior to the scan (except in
+            the case where data is subsequently mutated, in which case it <em>may</em> reflect
+            the mutation).</li>
+          </ol>
+          </li>
+        </ol>
+        <p>
+          Those familiar with relational databases will recognize this isolation level as &quot;read committed&quot;.
+        </p>
+        <p>
+          Please note that the guarantees listed above regarding scanner consistency
+          are referring to &quot;transaction commit time&quot;, not the &quot;timestamp&quot;
+          field of each cell. That is to say, a scanner started at time <em>t</em> may see edits
+          with a timestamp value greater than <em>t</em>, if those edits were committed with a
+          &quot;forward dated&quot; timestamp before the scanner was constructed.
+        </p>
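+        <p>
+          A minimal sketch of the scanner-visibility rules above, assuming an open Table
+          named &quot;table&quot;; all row and column names are illustrative:
+        </p>
+        <source>
+Put put = new Put(Bytes.toBytes("row-x"));
+put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("X"));
+table.put(put);  // committed before the scanner below is constructed
+
+// This scanner must reflect the Put above, and may additionally reflect
+// mutations committed after this point.
+ResultScanner scanner = table.getScanner(new Scan());
+for (Result r : scanner) {
+  // each Result is a consistent view of one complete row
+}
+scanner.close();
+        </source>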
+        </section>
+      </section>
+      <section name="Visibility">
+        <ol>
+          <li> When a client receives a &quot;success&quot; response for any mutation, that
+          mutation is immediately visible to both that client and any client with whom it
+          later communicates through side channels. [3]</li>
+          <li> A row must never exhibit so-called &quot;time-travel&quot; properties. That
+          is to say, if a series of mutations moves a row sequentially through a series of
+          states, any sequence of concurrent reads will return a subsequence of those states;
+          a sketch follows this list.
+          <ol>
+            <li>For example, if a row's cells are mutated using the &quot;incrementColumnValue&quot;
+            API, a client must never see the value of any cell decrease.</li>
+            <li>This is true regardless of which read API is used to read back the mutation.</li>
+          </ol>
+          </li>
+          <li> Any version of a cell that has been returned to a read operation is guaranteed to
+          be durably stored.</li>
+        </ol>
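+        <p>
+          A minimal sketch of the monotonicity rule above, assuming an open Table named
+          &quot;table&quot;; row and column names are illustrative:
+        </p>
+        <source>
+byte[] row  = Bytes.toBytes("counter-row");
+byte[] fam  = Bytes.toBytes("cf");
+byte[] qual = Bytes.toBytes("hits");
+
+// Atomic read-modify-write; returns the post-increment value.
+long after = table.incrementColumnValue(row, fam, qual, 1L);
+
+// Per the time-travel rule, no subsequent read of this cell, whether via
+// Get or Scan, may observe a value lower than 'after'.
+        </source>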
+
+      </section>
+      <section name="Durability">
+        <ol>
+          <li> All visible data is also durable data. That is to say, a read will never return
+          data that has not been made durable on disk. [2]</li>
+          <li> Any operation that returns a &quot;success&quot; code (e.g., does not throw an exception)
+          will be made durable. [3]</li>
+          <li> Any operation that returns a &quot;failure&quot; code will not be made durable
+          (subject to the Atomicity guarantees above).</li>
+          <li> All reasonable failure scenarios will not affect any of the guarantees of this document.</li>
+
+        </ol>
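+        <p>
+          A minimal sketch of the client-side write buffering that footnote [3] below
+          refers to, assuming an open Connection named &quot;conn&quot; and illustrative names:
+        </p>
+        <source>
+BufferedMutator mutator = conn.getBufferedMutator(TableName.valueOf("t"));
+
+Put put = new Put(Bytes.toBytes("row1"));
+put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+mutator.mutate(put);  // may sit in the client-side buffer, not yet durable
+
+mutator.flush();      // pushes all buffered Puts to the RegionServers
+mutator.close();
+        </source>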
+      </section>
+      <section name="Tunability">
+        <p>All of the above guarantees must be possible within Apache HBase. For users who would like to trade
+        off some guarantees for performance, HBase may offer several tuning options. For example:</p>
+        <ul>
+          <li>Visibility may be tuned on a per-read basis to allow stale reads or time travel.</li>
+          <li>Durability may be tuned to only flush data to disk on a periodic basis.</li>
+        </ul>
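+        <p>
+          A minimal sketch of the kind of knobs meant above; whether each is available,
+          and its exact semantics, depends on the HBase version and on cluster
+          configuration (e.g., timeline-consistent reads require region replication):
+        </p>
+        <source>
+Put put = new Put(Bytes.toBytes("row1"));
+put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+put.setDurability(Durability.ASYNC_WAL);   // trade durability for write latency
+table.put(put);
+
+Get get = new Get(Bytes.toBytes("row1"));
+get.setConsistency(Consistency.TIMELINE);  // allow possibly-stale replica reads
+Result result = table.get(get);
+        </source>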
+      </section>
+    </section>
+    <section name="More Information">
+      <p>
+      For more information, see the <a href="book.html#client">client architecture</a> or <a href="book.html#datamodel">data model</a> sections in the Apache HBase Reference Guide.
+      </p>
+    </section>
+
+    <section name="Footnotes">
+      <p>[1] A consistent view is not guaranteed for intra-row scanning -- i.e., fetching a portion of
+          a row in one RPC and then going back to fetch another portion of the row in a subsequent RPC.
+          Intra-row scanning happens when you set a limit on how many values to return per Scan#next
+          (see <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)">Scan#setBatch(int)</a>).
+      </p>
+
+      <p>[2] In the context of Apache HBase, &quot;durably on disk&quot; implies an hflush() call on the transaction
+      log. This does not actually imply an fsync() to magnetic media, but rather just that the data has been
+      written to the OS cache on all replicas of the log. In the case of a full datacenter power loss, it is
+      possible that the edits are not truly durable.</p>
+      <p>[3] Puts will either wholly succeed or wholly fail, provided that they are actually sent
+      to the RegionServer.  If the writebuffer is used, Puts will not be sent until the writebuffer is filled
+      or it is explicitly flushed.</p>
+
+    </section>
+
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/bulk-loads.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/bulk-loads.xml b/src/site/xdoc/bulk-loads.xml
new file mode 100644
index 0000000..2cbec1f
--- /dev/null
+++ b/src/site/xdoc/bulk-loads.xml
@@ -0,0 +1,34 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+      Bulk Loads in Apache HBase (TM)
+    </title>
+  </properties>
+  <body>
+    <p>This page has been retired. The contents have been moved to the
+      <a href="http://hbase.apache.org/book.html#arch.bulk.load">Bulk Loading</a> section
+      in the Reference Guide.
+    </p>
+  </body>
+</document>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/site/xdoc/coc.xml
----------------------------------------------------------------------
diff --git a/src/site/xdoc/coc.xml b/src/site/xdoc/coc.xml
new file mode 100644
index 0000000..fc2b549
--- /dev/null
+++ b/src/site/xdoc/coc.xml
@@ -0,0 +1,92 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<document xmlns="http://maven.apache.org/XDOC/2.0"
+  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+  xsi:schemaLocation="http://maven.apache.org/XDOC/2.0 http://maven.apache.org/xsd/xdoc-2.0.xsd">
+  <properties>
+    <title>
+      Code of Conduct Policy
+    </title>
+  </properties>
+  <body>
+  <section name="Code of Conduct Policy">
+<p>
+We expect participants in discussions on the HBase project mailing lists, IRC
+channels, and JIRA issues to abide by the Apache Software Foundation's
+<a href="http://apache.org/foundation/policies/conduct.html">Code of Conduct</a>.
+</p>
+<p>
+If you feel there has been a violation of this code, please point out your
+concerns publicly in a friendly and matter-of-fact manner. Nonverbal
+communication is prone to misinterpretation and misunderstanding. Everyone has
+bad days and sometimes says things they regret later. Someone else's
+communication style may clash with yours, but the difference can be amicably
+resolved. After pointing out your concerns please be generous upon receiving an
+apology.
+</p>
+<p>
+Should there be repeated instances of code of conduct violations, or if there is
+an obvious and severe violation, the HBase PMC may become involved. When this
+happens the PMC will openly discuss the matter, most likely on the dev@hbase
+mailing list, and will consider taking the following actions, in order, if there
+is a continuing problem with an individual:
+<ol>
+<li>A friendly off-list warning;</li>
+<li>A friendly public warning, if the communication at issue was on list, otherwise another off-list warning;</li>
+<li>A three-month suspension from the public mailing lists and possible operator action in the IRC channels;</li>
+<li>A permanent ban from the public mailing lists, IRC channels, and project JIRA.</li>
+</ol>
+</p>
+<p>
+For flagrant violations requiring a firm response the PMC may opt to skip early
+steps. No action will be taken before public discussion leading to consensus or
+a successful majority vote.
+</p>
+  </section>
+  <section name="Diversity Statement">
+<p>
+As a project and a community, we encourage you to participate in the HBase project
+in whatever capacity suits you, whether it involves development, documentation,
+answering questions on mailing lists, triaging issues, reviewing patches, managing
+releases, or any other way that you want to help. We appreciate your
+contributions and the time you dedicate to the HBase project. We strive to
+recognize the work of participants publicly. Please let us know if we can
+improve in this area.
+</p>
+<p>
+We value diversity and strive to support participation by people with all
+different backgrounds. Rich projects grow from groups with different points of
+view and different backgrounds. We welcome your suggestions about how we can
+encourage participation by people at all skill levels and in all aspects of the
+project.
+</p>
+<p>
+If you can think of something we are doing that we shouldn't, or something that
+we should do but aren't, please let us know. If you feel comfortable doing so,
+use the public mailing lists. Otherwise, reach out to a PMC member or send an
+email to <a href="mailto:private@hbase.apache.org">the private PMC mailing list</a>.
+</p>
+  </section>
+  </body>
+</document>


[11/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

Posted by zh...@apache.org.
HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/61d70604
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/61d70604
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/61d70604

Branch: refs/heads/branch-2
Commit: 61d706044e674866f96aee8d5cf74d0e72dddfc1
Parents: 4653d4a
Author: zhangduo <zh...@apache.org>
Authored: Wed Jul 4 21:40:52 2018 +0800
Committer: zhangduo <zh...@apache.org>
Committed: Thu Jul 5 15:01:43 2018 +0800

----------------------------------------------------------------------
 pom.xml                                         |  14 +-
 src/main/asciidoc/_chapters/amv2.adoc           | 173 ++++
 .../asciidoc/_chapters/appendix_acl_matrix.adoc |   1 +
 .../appendix_contributing_to_documentation.adoc |   6 +-
 .../appendix_hbase_incompatibilities.adoc       | 714 +++++++++++++++
 .../_chapters/appendix_hfile_format.adoc        |   2 +-
 src/main/asciidoc/_chapters/architecture.adoc   | 253 +++--
 src/main/asciidoc/_chapters/backup_restore.adoc | 912 -------------------
 src/main/asciidoc/_chapters/community.adoc      |  54 +-
 src/main/asciidoc/_chapters/compression.adoc    |  22 +-
 src/main/asciidoc/_chapters/configuration.adoc  |  60 +-
 src/main/asciidoc/_chapters/datamodel.adoc      |  35 +
 src/main/asciidoc/_chapters/developer.adoc      | 127 ++-
 src/main/asciidoc/_chapters/external_apis.adoc  | 109 +--
 .../asciidoc/_chapters/getting_started.adoc     |  57 +-
 src/main/asciidoc/_chapters/hbase-default.adoc  |   2 +-
 src/main/asciidoc/_chapters/hbase_mob.adoc      |   4 -
 src/main/asciidoc/_chapters/images              |   2 +-
 src/main/asciidoc/_chapters/ops_mgt.adoc        | 280 +++++-
 src/main/asciidoc/_chapters/performance.adoc    |   2 -
 src/main/asciidoc/_chapters/pv2.adoc            | 163 ++++
 src/main/asciidoc/_chapters/schema_design.adoc  |  33 +-
 src/main/asciidoc/_chapters/security.adoc       |  13 +-
 src/main/asciidoc/_chapters/shell.adoc          |   8 +-
 src/main/asciidoc/_chapters/tracing.adoc        |   6 +-
 .../asciidoc/_chapters/troubleshooting.adoc     | 131 ++-
 src/main/asciidoc/_chapters/unit_testing.adoc   |   2 -
 src/main/asciidoc/_chapters/upgrading.adoc      | 405 ++++++++
 src/main/asciidoc/book.adoc                     |   4 +-
 src/main/asciidoc/images                        |   2 +-
 src/main/site/asciidoc/acid-semantics.adoc      | 118 ---
 src/main/site/asciidoc/bulk-loads.adoc          |  23 -
 src/main/site/asciidoc/cygwin.adoc              | 197 ----
 src/main/site/asciidoc/export_control.adoc      |  44 -
 src/main/site/asciidoc/index.adoc               |  75 --
 src/main/site/asciidoc/metrics.adoc             | 102 ---
 src/main/site/asciidoc/old_news.adoc            | 121 ---
 src/main/site/asciidoc/pseudo-distributed.adoc  |  23 -
 src/main/site/asciidoc/replication.adoc         |  22 -
 src/main/site/asciidoc/resources.adoc           |  27 -
 src/main/site/asciidoc/sponsors.adoc            |  36 -
 .../site/custom/project-info-report.properties  | 303 ------
 src/main/site/resources/.htaccess               |   8 -
 src/main/site/resources/book/.empty             |   1 -
 src/main/site/resources/css/site.css            | 118 ---
 src/main/site/resources/doap_Hbase.rdf          |  57 --
 src/main/site/resources/images/architecture.gif | Bin 15461 -> 0 bytes
 .../resources/images/backup-app-components.png  | Bin 24366 -> 0 bytes
 .../resources/images/backup-cloud-appliance.png | Bin 30114 -> 0 bytes
 .../images/backup-dedicated-cluster.png         | Bin 24950 -> 0 bytes
 .../resources/images/backup-intra-cluster.png   | Bin 19348 -> 0 bytes
 src/main/site/resources/images/bc_basic.png     | Bin 239294 -> 0 bytes
 src/main/site/resources/images/bc_config.png    | Bin 124066 -> 0 bytes
 src/main/site/resources/images/bc_l1.png        | Bin 91603 -> 0 bytes
 .../site/resources/images/bc_l2_buckets.png     | Bin 143801 -> 0 bytes
 src/main/site/resources/images/bc_stats.png     | Bin 111566 -> 0 bytes
 src/main/site/resources/images/big_h_logo.png   | Bin 2286 -> 0 bytes
 src/main/site/resources/images/big_h_logo.svg   | 139 ---
 .../images/data_block_diff_encoding.png         | Bin 54479 -> 0 bytes
 .../resources/images/data_block_no_encoding.png | Bin 46836 -> 0 bytes
 .../images/data_block_prefix_encoding.png       | Bin 35271 -> 0 bytes
 src/main/site/resources/images/favicon.ico      | Bin 1150 -> 0 bytes
 src/main/site/resources/images/hadoop-logo.jpg  | Bin 9443 -> 0 bytes
 src/main/site/resources/images/hbase_logo.png   | Bin 2997 -> 0 bytes
 src/main/site/resources/images/hbase_logo.svg   |  78 --
 .../resources/images/hbase_logo_with_orca.png   | Bin 11618 -> 0 bytes
 .../resources/images/hbase_logo_with_orca.xcf   | Bin 84265 -> 0 bytes
 .../images/hbase_logo_with_orca_large.png       | Bin 21196 -> 0 bytes
 .../images/hbase_replication_diagram.jpg        | Bin 52298 -> 0 bytes
 .../resources/images/hbasecon2015.30percent.png | Bin 8684 -> 0 bytes
 .../images/hbasecon2016-stack-logo.jpg          | Bin 32105 -> 0 bytes
 .../resources/images/hbasecon2016-stacked.png   | Bin 24924 -> 0 bytes
 src/main/site/resources/images/hbasecon2017.png | Bin 3982 -> 0 bytes
 .../site/resources/images/hbaseconasia2017.png  | Bin 23656 -> 0 bytes
 src/main/site/resources/images/hfile.png        | Bin 33661 -> 0 bytes
 src/main/site/resources/images/hfilev2.png      | Bin 57858 -> 0 bytes
 .../resources/images/jumping-orca_rotated.png   | Bin 52812 -> 0 bytes
 .../resources/images/jumping-orca_rotated.xcf   | Bin 77560 -> 0 bytes
 .../images/jumping-orca_rotated_12percent.png   | Bin 2401 -> 0 bytes
 .../images/jumping-orca_rotated_25percent.png   | Bin 4780 -> 0 bytes
 .../images/jumping-orca_transparent_rotated.xcf | Bin 135399 -> 0 bytes
 .../resources/images/region_split_process.png   | Bin 338255 -> 0 bytes
 .../site/resources/images/region_states.png     | Bin 99146 -> 0 bytes
 .../resources/images/replication_overview.png   | Bin 207537 -> 0 bytes
 .../resources/images/timeline_consistency.png   | Bin 88301 -> 0 bytes
 .../1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar   | Bin 344936 -> 0 bytes
 .../1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom   | 718 ---------------
 .../maven-fluido-skin/maven-metadata-local.xml  |  12 -
 src/main/site/site.xml                          | 131 ---
 src/main/site/xdoc/acid-semantics.xml           | 235 -----
 src/main/site/xdoc/bulk-loads.xml               |  34 -
 src/main/site/xdoc/coc.xml                      |  92 --
 src/main/site/xdoc/cygwin.xml                   | 245 -----
 src/main/site/xdoc/export_control.xml           |  59 --
 src/main/site/xdoc/index.xml                    | 109 ---
 src/main/site/xdoc/metrics.xml                  | 150 ---
 src/main/site/xdoc/old_news.xml                 |  92 --
 src/main/site/xdoc/poweredbyhbase.xml           | 398 --------
 src/main/site/xdoc/pseudo-distributed.xml       |  42 -
 src/main/site/xdoc/replication.xml              |  35 -
 src/main/site/xdoc/resources.xml                |  45 -
 src/main/site/xdoc/sponsors.xml                 |  50 -
 src/main/site/xdoc/supportingprojects.xml       | 161 ----
 src/site/asciidoc/acid-semantics.adoc           | 118 +++
 src/site/asciidoc/bulk-loads.adoc               |  22 +
 src/site/asciidoc/cygwin.adoc                   | 196 ++++
 src/site/asciidoc/export_control.adoc           |  44 +
 src/site/asciidoc/index.adoc                    |  75 ++
 src/site/asciidoc/metrics.adoc                  | 101 ++
 src/site/asciidoc/old_news.adoc                 | 120 +++
 src/site/asciidoc/pseudo-distributed.adoc       |  22 +
 src/site/asciidoc/replication.adoc              |  22 +
 src/site/asciidoc/resources.adoc                |  26 +
 src/site/asciidoc/sponsors.adoc                 |  35 +
 src/site/custom/project-info-report.properties  | 303 ++++++
 src/site/resources/.htaccess                    |   8 +
 src/site/resources/book/.empty                  |   1 +
 src/site/resources/css/site.css                 | 118 +++
 src/site/resources/doap_Hbase.rdf               |  57 ++
 src/site/resources/images/architecture.gif      | Bin 0 -> 15461 bytes
 .../resources/images/backup-app-components.png  | Bin 0 -> 24366 bytes
 .../resources/images/backup-cloud-appliance.png | Bin 0 -> 30114 bytes
 .../images/backup-dedicated-cluster.png         | Bin 0 -> 24950 bytes
 .../resources/images/backup-intra-cluster.png   | Bin 0 -> 19348 bytes
 src/site/resources/images/bc_basic.png          | Bin 0 -> 239294 bytes
 src/site/resources/images/bc_config.png         | Bin 0 -> 124066 bytes
 src/site/resources/images/bc_l1.png             | Bin 0 -> 91603 bytes
 src/site/resources/images/bc_l2_buckets.png     | Bin 0 -> 143801 bytes
 src/site/resources/images/bc_stats.png          | Bin 0 -> 111566 bytes
 src/site/resources/images/big_h_logo.png        | Bin 0 -> 2286 bytes
 src/site/resources/images/big_h_logo.svg        | 139 +++
 .../images/data_block_diff_encoding.png         | Bin 0 -> 54479 bytes
 .../resources/images/data_block_no_encoding.png | Bin 0 -> 46836 bytes
 .../images/data_block_prefix_encoding.png       | Bin 0 -> 35271 bytes
 src/site/resources/images/favicon.ico           | Bin 0 -> 1150 bytes
 src/site/resources/images/hadoop-logo.jpg       | Bin 0 -> 9443 bytes
 src/site/resources/images/hbase_logo.png        | Bin 0 -> 2997 bytes
 src/site/resources/images/hbase_logo.svg        |  78 ++
 .../resources/images/hbase_logo_with_orca.png   | Bin 0 -> 11618 bytes
 .../resources/images/hbase_logo_with_orca.xcf   | Bin 0 -> 84265 bytes
 .../images/hbase_logo_with_orca_large.png       | Bin 0 -> 21196 bytes
 .../images/hbase_replication_diagram.jpg        | Bin 0 -> 52298 bytes
 .../resources/images/hbasecon2015.30percent.png | Bin 0 -> 8684 bytes
 .../images/hbasecon2016-stack-logo.jpg          | Bin 0 -> 32105 bytes
 .../resources/images/hbasecon2016-stacked.png   | Bin 0 -> 24924 bytes
 src/site/resources/images/hbasecon2017.png      | Bin 0 -> 3982 bytes
 src/site/resources/images/hbaseconasia2017.png  | Bin 0 -> 23656 bytes
 src/site/resources/images/hfile.png             | Bin 0 -> 33661 bytes
 src/site/resources/images/hfilev2.png           | Bin 0 -> 57858 bytes
 .../resources/images/jumping-orca_rotated.png   | Bin 0 -> 52812 bytes
 .../resources/images/jumping-orca_rotated.xcf   | Bin 0 -> 77560 bytes
 .../images/jumping-orca_rotated_12percent.png   | Bin 0 -> 2401 bytes
 .../images/jumping-orca_rotated_25percent.png   | Bin 0 -> 4780 bytes
 .../images/jumping-orca_transparent_rotated.xcf | Bin 0 -> 135399 bytes
 .../resources/images/region_split_process.png   | Bin 0 -> 338255 bytes
 src/site/resources/images/region_states.png     | Bin 0 -> 99146 bytes
 .../resources/images/replication_overview.png   | Bin 0 -> 207537 bytes
 .../resources/images/timeline_consistency.png   | Bin 0 -> 88301 bytes
 .../1.5-HBASE/maven-fluido-skin-1.5-HBASE.jar   | Bin 0 -> 344936 bytes
 .../1.5-HBASE/maven-fluido-skin-1.5-HBASE.pom   | 718 +++++++++++++++
 .../maven-fluido-skin/maven-metadata-local.xml  |  12 +
 src/site/site.xml                               | 131 +++
 src/site/xdoc/acid-semantics.xml                | 235 +++++
 src/site/xdoc/bulk-loads.xml                    |  34 +
 src/site/xdoc/coc.xml                           |  92 ++
 src/site/xdoc/cygwin.xml                        | 245 +++++
 src/site/xdoc/export_control.xml                |  59 ++
 src/site/xdoc/index.xml                         | 109 +++
 src/site/xdoc/metrics.xml                       | 150 +++
 src/site/xdoc/old_news.xml                      |  92 ++
 src/site/xdoc/poweredbyhbase.xml                | 398 ++++++++
 src/site/xdoc/pseudo-distributed.xml            |  41 +
 src/site/xdoc/replication.xml                   |  35 +
 src/site/xdoc/resources.xml                     |  45 +
 src/site/xdoc/sponsors.xml                      |  50 +
 src/site/xdoc/supportingprojects.xml            | 161 ++++
 176 files changed, 6435 insertions(+), 5353 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index b0a53c8..4db89bd 100755
--- a/pom.xml
+++ b/pom.xml
@@ -856,7 +856,7 @@
               <exclude>.svn/**</exclude>
               <exclude>**/.settings/**</exclude>
               <exclude>**/patchprocess/**</exclude>
-              <exclude>src/main/site/resources/repo/**</exclude>
+              <exclude>src/site/resources/repo/**</exclude>
               <exclude>**/dependency-reduced-pom.xml</exclude>
               <exclude>**/rat.txt</exclude>
               <!-- exclude the shaded protobuf files -->
@@ -1136,8 +1136,8 @@
           </dependency>
         </dependencies>
         <configuration>
-          <siteDirectory>${basedir}/src/main/site</siteDirectory>
-          <customBundle>${basedir}/src/main/site/custom/project-info-report.properties</customBundle>
+          <siteDirectory>${basedir}/src/site</siteDirectory>
+          <customBundle>${basedir}/src/site/custom/project-info-report.properties</customBundle>
           <inputEncoding>UTF-8</inputEncoding>
           <outputEncoding>UTF-8</outputEncoding>
         </configuration>
@@ -1217,7 +1217,7 @@
               <outputDirectory>${project.reporting.outputDirectory}/</outputDirectory>
               <resources>
                 <resource>
-                  <directory>${basedir}/src/main/site/resources/</directory>
+                  <directory>${basedir}/src/site/resources/</directory>
                   <includes>
                     <include>.htaccess</include>
                   </includes>
@@ -1236,7 +1236,7 @@
               <outputDirectory>${project.reporting.outputDirectory}/</outputDirectory>
               <resources>
                 <resource>
-                  <directory>${basedir}/src/main/site/resources/</directory>
+                  <directory>${basedir}/src/site/resources/</directory>
                   <includes>
                     <include>book/**</include>
                   </includes>
@@ -3442,7 +3442,7 @@
             </reports>
           </reportSet>
         </reportSets>
-        <!-- see src/main/site/site.xml for selected reports -->
+        <!-- see src/site/site.xml for selected reports -->
         <configuration>
           <dependencyLocationsEnabled>false</dependencyLocationsEnabled>
         </configuration>
@@ -3677,7 +3677,7 @@
     <repository>
         <id>project.local</id>
         <name>project</name>
-        <url>file:${project.basedir}/src/main/site/resources/repo</url>
+        <url>file:${project.basedir}/src/site/resources/repo</url>
     </repository>
 </repositories>
 </project>

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/amv2.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/amv2.adoc b/src/main/asciidoc/_chapters/amv2.adoc
new file mode 100644
index 0000000..49841ce
--- /dev/null
+++ b/src/main/asciidoc/_chapters/amv2.adoc
@@ -0,0 +1,173 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+[[amv2]]
+= AMv2 Description for Devs
+:doctype: book
+:numbered:
+:toc: left
+:icons: font
+:experimental:
+
+The AssignmentManager (AM) in HBase Master manages assignment of Regions over a cluster of RegionServers.
+
+The AMv2 project is a redo of Assignment in an attempt at addressing the root cause of many of our operational issues in production, namely slow assignment and problematic accounting that left Regions misplaced or stuck offline in the notorious _Regions-In-Transition (RIT)_ limbo state.
+
+Below are notes for devs on key aspects of AMv2 in no particular order.
+
+== Background
+
+Assignment in HBase 1.x has been problematic in operation. It is not hard to see why. Region state is kept at the other end of an RPC in ZooKeeper (terminal states -- i.e. OPEN or CLOSED -- are published to the _hbase:meta_ table). In HBase 1.x, state has multiple writers, with the Master and RegionServers all able to make state edits concurrently (in the _hbase:meta_ table and out on ZooKeeper). If clocks are awry or watchers are missed, state changes can be skipped or overwritten. Locking of HBase Entities -- tables, regions -- is not comprehensive, so a table operation -- disable/enable -- could clash with a region-level operation such as a split or merge. Region state is distributed and hard to reason about and test. Assignment is slow in operation because each assign involves moving remote znodes through transitions. Cluster size tends to top out at a couple of hundred thousand regions; beyond this, cluster start/stop takes hours and is prone to corruption.
+
+AMv2 (AssignmentManager Version 2) is a refactor (https://issues.apache.org/jira/browse/HBASE-14350[HBASE-14350]) of the hbase-1.x AssignmentManager, putting it up on a https://issues.apache.org/jira/browse/HBASE-12439[ProcedureV2 (HBASE-12439)] basis. ProcedureV2 (Pv2) is an awkwardly named system that allows describing and running multi-step state machines. It is performant and persists all state to a Store which is recoverable post crash. See the companion chapter on <<pv2>> to learn more about the ProcedureV2 system.
+
+In AMv2, all assignment, crash handling, splits and merges are recast as Procedures (v2). ZooKeeper is purged from the mix. As before, the final assignment state gets published to _hbase:meta_ for non-Master participants (all clients) to read, with intermediate state kept in the local Pv2 WAL-based ‘store’; only the active Master, a single writer, evolves state. The Master’s in-memory cluster image is the authority, and on disagreement, RegionServers are forced to comply. Pv2 adds shared/exclusive locking of all core HBase Entities -- namespace, tables, and regions -- to ensure one-actor-at-a-time access and to prevent operations contending over resources (move/split, disable/assign, etc.).
+
+This redo of the AM atop a purpose-built, performant state machine, with all operations taking the common Procedure form and a single state writer, moves our AM to a new level of resilience and scale.
+
+== New System
+
+Each Assign or Unassign of a Region is now a Procedure. A Move (Region) Procedure is a compound of Procedures; it is the running of an Unassign Procedure followed by an Assign Procedure. The Move Procedure spawns the Assign and Unassign in series and then waits on their completions; a hypothetical sketch of this sequencing follows below.
+
+And so on. A ServerCrashProcedure spawns the WAL splitting tasks and then, as subprocedures, the reassigns of all regions that were hosted on the crashed server.
+
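+To make the sequencing concrete, here is a hypothetical sketch in plain Java -- it does not reproduce the actual Procedure classes -- of a compound operation running its subprocedures in series:
+
+[source,java]
+----
+import java.util.concurrent.Callable;
+
+// Hypothetical sketch only; NOT the actual MoveRegionProcedure code.
+public class MoveSketch {
+  private final Callable<Void> unassign; // stands in for an Unassign Procedure
+  private final Callable<Void> assign;   // stands in for an Assign Procedure
+
+  public MoveSketch(Callable<Void> unassign, Callable<Void> assign) {
+    this.unassign = unassign;
+    this.assign = assign;
+  }
+
+  public void execute() throws Exception {
+    unassign.call(); // close the region on the current server first
+    assign.call();   // only then open it on the target server
+  }
+}
+----
+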
+AMv2 Procedures are run by the Master in a ProcedureExecutor instance. All Procedures make use of utilities provided by the Pv2 framework.
+
+For example, Procedures persist each state transition to the framework’s Procedure Store. The default implementation is done as a WAL kept on HDFS. On crash, we reopen the Store and rerun all WALs of Procedure transitions to put the Assignment State Machine back into the attitude it had just before crash. We then continue Procedure execution.
+
+In the new system, the Master is the Authority on all things Assign. Previously, ownership was ambiguous; e.g. the RegionServer was in charge of Split operations. The Master keeps an in-memory image of Region states and servers. On disagreement, the Master always prevails; at an extreme it will kill the RegionServer that is in disagreement.
+
+A new RegionStateStore class takes care of publishing the terminal Region state, whether OPEN or CLOSED, out to the _hbase:meta_ table.
+
+RegionServers now report their run version on Connection. This version is available inside the AM for use when running migrating/rolling restarts.
+
+== Procedures Detail
+
+=== Assign/Unassign
+
+Assign and Unassign subclass a common RegionTransitionProcedure. There can only be one RegionTransitionProcedure per region running at a time, since the RTP instance takes a lock on the region. The RTP base Procedure has three steps: a step that stores the procedure (REGION_TRANSITION_QUEUE); a dispatch of the open or close request to the remote RegionServer, followed by a suspend waiting on that server to report a successful open or a failure (REGION_TRANSITION_DISPATCH), or notification that the server fielding the request crashed; and finally registration of the successful open/close in hbase:meta (REGION_TRANSITION_FINISH).
+
+Here is how the assign of region 56f985a727afe80a184dac75fbf6860c looks in the logs. The assign was provoked by a Server Crash (Process ID 1176, or pid=1176; when it is the parent of a procedure, it is identified as ppid=1176). The assign is pid=1179, the second of the two regions being assigned by this Server Crash.
+
+[source]
+----
+2017-05-23 12:04:24,175 INFO  [ProcExecWrkr-30] procedure2.ProcedureExecutor: Initialized subprocedures=[{pid=1178, ppid=1176, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=IntegrationTestBigLinkedList, region=bfd57f0b72fd3ca77e9d3c5e3ae48d76, target=ve0540.halxg.example.org,16020,1495525111232}, {pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232}]
+----
+
+Next we start the assign by queuing (‘registering’) the Procedure with the framework.
+
+[source]
+----
+2017-05-23 12:04:24,241 INFO  [ProcExecWrkr-30] assignment.AssignProcedure: Start pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232; rit=OFFLINE, location=ve0540.halxg.example.org,16020,1495525111232; forceNewPlan=false, retain=false
+----
+
+Track the running of Procedures in logs by tracing their process id -- here pid=1179.
+
+Next we move to the dispatch phase, where we update the hbase:meta table, setting the region state to OPENING on server ve0540. We then dispatch an RPC to ve0540 asking it to open the region. Thereafter we suspend the Assign until we get a message back from ve0540 on whether it has opened the region successfully (or not).
+
+[source]
+----
+2017-05-23 12:04:24,494 INFO  [ProcExecWrkr-38] assignment.RegionStateStore: pid=1179 updating hbase:meta row=IntegrationTestBigLinkedList,H\xE3@\x8D\x964\x9D\xDF\x8F@9\x0F\xC8\xCC\xC2,1495566261066.56f985a727afe80a184dac75fbf6860c., regionState=OPENING, regionLocation=ve0540.halxg.example.org,16020,1495525111232
+2017-05-23 12:04:24,498 INFO  [ProcExecWrkr-38] assignment.RegionTransitionProcedure: Dispatch pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232; rit=OPENING, location=ve0540.halxg.example.org,16020,1495525111232
+----
+
+Below we log the incoming report that the region opened successfully on ve0540. The Procedure is woken up (you can tell the procedure is running by the name of the thread; it is a ProcedureExecutor thread, ProcExecWrkr-9). The woken Procedure updates state in hbase:meta to denote the region as open on ve0540. It then reports finished and exits.
+
+[source]
+----
+2017-05-23 12:04:26,643 DEBUG [RpcServer.default.FPBQ.Fifo.handler=46,queue=1,port=16000] assignment.RegionTransitionProcedure: Received report OPENED seqId=11984985, pid=1179, ppid=1176, state=RUNNABLE:REGION_TRANSITION_DISPATCH; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232; rit=OPENING, location=ve0540.halxg.example.org,16020,1495525111232
+2017-05-23 12:04:26,643 INFO  [ProcExecWrkr-9] assignment.RegionStateStore: pid=1179 updating hbase:meta row=IntegrationTestBigLinkedList,H\xE3@\x8D\x964\x9D\xDF\x8F@9\x0F\xC8\xCC\xC2,1495566261066.56f985a727afe80a184dac75fbf6860c., regionState=OPEN, openSeqNum=11984985, regionLocation=ve0540.halxg.example.org,16020,1495525111232
+2017-05-23 12:04:26,836 INFO  [ProcExecWrkr-9] procedure2.ProcedureExecutor: Finish suprocedure pid=1179, ppid=1176, state=SUCCESS; AssignProcedure table=IntegrationTestBigLinkedList, region=56f985a727afe80a184dac75fbf6860c, target=ve0540.halxg.example.org,16020,1495525111232
+----
+Unassign looks similar, given it is based on the same base RegionTransitionProcedure. It has the same state transitions and does basically the same steps, but with different state names (CLOSING, CLOSED).
+
+Most other procedures are subclasses of a Pv2 StateMachine implementation. We have both Table- and Region-focused StateMachine types.
+
+== UI
+
+Along the top bar on the Master UI, you can now find a ‘Procedures&Locks’ tab which takes you to a page that is ugly but useful. It dumps currently running procedures and framework locks. Look at this when you can’t figure out what is stuck; it will at least identify problematic procedures (take the pid and grep the logs…). Look for ROLLEDBACK or pids that have been RUNNING for a long time.
+
+== Logging
+
+Procedures log their process ids as pid= and their parent ids as ppid= everywhere. Work has been done so you can grep a pid and see the whole history of a procedure operation; for example (the Master log file name below is illustrative):
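+
+----
+$ grep "pid=1179" hbase-master.log
+----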
+
+== Implementation Notes
+
+In this section we note some idiosyncrasies of operation as an attempt at saving you some head-scratching.
+
+=== Region Transition RPC and RS Heartbeat can arrive at ~same time on Master
+
+Reporting a Region Transition from a RegionServer is now an RPC distinct from RS heartbeating (the ‘RegionServerServices’ Service). A heartbeat and a status update can arrive at the Master at about the same time. The Master updates its internal state for a Region, but this same state is checked during heartbeat processing. We may find the unexpected; i.e. a Region just reported as CLOSED, so the heartbeat is surprised to find the region OPEN on the back of the RS report. In the new system, all slaves must defer to the Master’s understanding of cluster state; the Master will kill/close any misaligned entities.
+
+To address the above, we added a lastUpdate to the in-memory Master state. A region state must have some vintage before we act on it (one second currently).
+
+=== Master as RegionServer or as RegionServer that just does system tables
+
+AMv2 enforces the current master-branch default of the HMaster carrying system tables only; i.e. the Master in an HBase cluster also acts as a RegionServer, only it is the exclusive host for core system tables such as _hbase:meta_ and _hbase:namespace_. This causes a couple of test failures, as AMv1, though it is not supposed to, allows moving hbase:meta off the Master while AMv2 does not.
+
+== New Configs
+
+These configs all need documentation on when you’d change them. Pending that, a minimal sketch of setting them programmatically follows the list below.
+
+=== hbase.procedure.remote.dispatcher.threadpool.size
+
+Default: 128.
+
+=== hbase.procedure.remote.dispatcher.delay.msec
+
+Default: 150 msec.
+
+=== hbase.procedure.remote.dispatcher.max.queue.size
+
+Default: 32.
+
+=== hbase.regionserver.rpc.startup.waittime
+
+Default: 60 seconds.
+
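+A minimal sketch of setting the dispatcher keys programmatically (the same keys
+can go in _hbase-site.xml_); the values shown are just the defaults listed
+above, not tuning recommendations:
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+public class DispatcherTuning {
+  public static Configuration tuned() {
+    Configuration conf = HBaseConfiguration.create();
+    // Values shown are the defaults listed above, not recommendations.
+    conf.setInt("hbase.procedure.remote.dispatcher.threadpool.size", 128);
+    conf.setInt("hbase.procedure.remote.dispatcher.delay.msec", 150);
+    conf.setInt("hbase.procedure.remote.dispatcher.max.queue.size", 32);
+    return conf;
+  }
+}
+----
+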
+== Tools
+
+https://issues.apache.org/jira/browse/HBASE-15592[HBASE-15592] Print Procedure WAL Content
+
+Patch in https://issues.apache.org/jira/browse/HBASE-18152[HBASE-18152] ([AMv2] Corrupt Procedure WAL file; procedure data stored out of order): https://issues.apache.org/jira/secure/attachment/12871066/reading_bad_wal.patch[reading_bad_wal.patch]
+
+=== MasterProcedureSchedulerPerformanceEvaluation
+
+Tool to test performance of locks and queues in the procedure scheduler independently from other framework components. Run this after any substantial changes in the proc system. Prints nice output:
+
+----
+******************************************
+Time - addBack     : 5.0600sec
+Ops/sec - addBack  : 1.9M
+Time - poll        : 19.4590sec
+Ops/sec - poll     : 501.9K
+Num Operations     : 10000000
+
+Completed          : 10000006
+Yield              : 22025876
+
+Num Tables         : 5
+Regions per table  : 10
+Operations type    : both
+Threads            : 10
+******************************************
+Raw format for scripts
+
+RESULT [num_ops=10000000, ops_type=both, num_table=5, regions_per_table=10, threads=10, num_yield=22025876, time_addback_ms=5060, time_poll_ms=19459]
+----

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
index d5ea076..cb17346 100644
--- a/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
+++ b/src/main/asciidoc/_chapters/appendix_acl_matrix.adoc
@@ -160,6 +160,7 @@ In case the table goes out of date, the unit tests which check for accuracy of p
 |                  | getUserPermissions(global level) | global(A)
 |                  | getUserPermissions(namespace level) | global(A)\|NS(A)
 |                  | getUserPermissions(table level) | global(A)\|NS(A)\|TableOwner\|table(A)\|CF(A)\|CQ(A)
+|                  | hasPermission(table level) | global(A)\|SelfUserCheck
 | RegionServer | stopRegionServer | superuser\|global(A)
 |              | mergeRegions | superuser\|global(A)
 |              | rollWALWriterRequest | superuser\|global(A)

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc b/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
index 6570c9c..a603c16 100644
--- a/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
+++ b/src/main/asciidoc/_chapters/appendix_contributing_to_documentation.adoc
@@ -119,7 +119,7 @@ JIRA and add a version number to the name of the new patch.
 
 === Editing the HBase Website
 
-The source for the HBase website is in the HBase source, in the _src/main/site/_ directory.
+The source for the HBase website is in the HBase source, in the _src/site/_ directory.
 Within this directory, source for the individual pages is in the _xdocs/_ directory,
 and images referenced in those pages are in the _resources/images/_ directory.
 This directory also stores images used in the HBase Reference Guide.
@@ -216,7 +216,7 @@ link:http://www.google.com[Google]
 ----
 image::sunset.jpg[Alt Text]
 ----
-(put the image in the src/main/site/resources/images directory)
+(put the image in the src/site/resources/images directory)
 | An inline image | The image with alt text, as part of the text flow |
 ----
 image:sunset.jpg [Alt Text]
@@ -389,7 +389,7 @@ Inline images cannot have titles. They are generally small images like GUI butto
 image:sunset.jpg[Alt Text]
 ----
 
-When doing a local build, save the image to the _src/main/site/resources/images/_ directory.
+When doing a local build, save the image to the _src/site/resources/images/_ directory.
 When you link to the image, do not include the directory portion of the path.
 The image will be copied to the appropriate target location during the build of the output.
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/appendix_hbase_incompatibilities.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_hbase_incompatibilities.adoc b/src/main/asciidoc/_chapters/appendix_hbase_incompatibilities.adoc
new file mode 100644
index 0000000..d450f04
--- /dev/null
+++ b/src/main/asciidoc/_chapters/appendix_hbase_incompatibilities.adoc
@@ -0,0 +1,714 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+
+[appendix]
+== Known Incompatibilities Among HBase Versions
+:doctype: book
+:numbered:
+:toc: left
+:icons: font
+:experimental:
+:toc: left
+:source-language: java
+
+== HBase 2.0 Incompatible Changes
+
+This appendix describes incompatible changes from earlier versions of HBase against HBase 2.0.
+This list is not meant to be wholly encompassing of all possible incompatibilities.
+Instead, this content is intended to give insight into some obvious incompatibilities which most
+users will face coming from HBase 1.x releases.
+
+=== List of Major Changes for HBase 2.0
+* HBASE-1912 - HBCK is an HBase database checking tool that captures inconsistencies. As an HBase administrator, you should not use the HBase 1.0 hbck tool against an HBase 2.0 database; doing so will break the database and throw exceptions.
+* HBASE-16189 and HBASE-18945 - HBase 1.x cannot open HFiles written by HBase 2.0. If you are an admin or an HBase user on HBase 1.x, you must first do a rolling upgrade to the latest version of HBase 1.x and then upgrade to HBase 2.0.
+* HBASE-18240 - Changed the ReplicationEndpoint interface. HBase 2.0 also introduces a new hbase-thirdparty 1.0 artifact that packages the third-party utilities expected to run in the HBase cluster.
+
+=== Coprocessor API changes
+
+* HBASE-16769 - Deprecated PB references from MasterObserver and RegionServerObserver.
+* HBASE-17312 - [JDK8] Use default method for Observer Coprocessors. The interface classes BaseMasterAndRegionObserver, BaseMasterObserver, BaseRegionObserver, BaseRegionServerObserver and BaseWALObserver use JDK 8's 'default' keyword to provide empty and no-op implementations.
+* Interface HTableInterface:
+  HBase 2.0 introduces the following changes to the methods listed below.
+
+==== [−] interface CoprocessorEnvironment changes (2)
+
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method getTable ( TableName ) has been removed. | A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method getTable ( TableName, ExecutorService ) has been removed. | A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+* Public Audience
+
+The following tables describe the coprocessor changes.
+
+===== [−] class CoprocessorRpcChannel  (1)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| This class has become interface.| A client program may be interrupted by IncompatibleClassChangeError or InstantiationError exception depending on the usage of this class.
+|===
+
+===== Class CoprocessorHost<E>
+Changes to classes that were Audience Private.
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Type of field coprocessors has been changed from java.util.SortedSet<E> to org.apache.hadoop.hbase.util.SortedList<E>.| A client program may be interrupted by NoSuchFieldError exception.
+|===
+
+
+==== MasterObserver
+HBase 2.0 introduces the following changes to the MasterObserver interface.
+
+===== [−] interface MasterObserver  (14)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method void postCloneSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postCreateTable ( ObserverContext<MasterCoprocessorEnvironment>, HTableDescriptor, HRegionInfo[ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postDeleteSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postGetTableDescriptors ( ObserverContext<MasterCoprocessorEnvironment>, List<HTableDescriptor> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postModifyTable ( ObserverContext<MasterCoprocessorEnvironment>, TableName, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postRestoreSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void preCloneSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void preCreateTable ( ObserverContext<MasterCoprocessorEnvironment>, HTableDescriptor, HRegionInfo[ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void preDeleteSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void preGetTableDescriptors ( ObserverContext<MasterCoprocessorEnvironment>, List<TableName>, List<HTableDescriptor> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void preModifyTable ( ObserverContext<MasterCoprocessorEnvironment>, TableName, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void preRestoreSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void preSnapshot ( ObserverContext<MasterCoprocessorEnvironment>, HBaseProtos.SnapshotDescription, HTableDescriptor ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== RegionObserver
+HBase 2.0 introduces the following changes to the RegionObserver interface.
+
+===== [−] interface RegionObserver  (13)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method void postCloseRegionOperation ( ObserverContext<RegionCoprocessorEnvironment>, HRegion.Operation ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postCompactSelection ( ObserverContext<RegionCoprocessorEnvironment>, Store, ImmutableList<StoreFile> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postCompactSelection ( ObserverContext<RegionCoprocessorEnvironment>, Store, ImmutableList<StoreFile>, CompactionRequest ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postGetClosestRowBefore ( ObserverContext<RegionCoprocessorEnvironment>, byte[ ], byte[ ], Result ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method DeleteTracker postInstantiateDeleteTracker ( ObserverContext<RegionCoprocessorEnvironment>, DeleteTracker ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postSplit ( ObserverContext<RegionCoprocessorEnvironment>, HRegion, HRegion ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postStartRegionOperation ( ObserverContext<RegionCoprocessorEnvironment>, HRegion.Operation ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method StoreFile.Reader postStoreFileReaderOpen ( ObserverContext<RegionCoprocessorEnvironment>, FileSystem, Path, FSDataInputStreamWrapper, long, CacheConfig, Reference, StoreFile.Reader ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void postWALRestore ( ObserverContext<RegionCoprocessorEnvironment>, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method InternalScanner preFlushScannerOpen ( ObserverContext<RegionCoprocessorEnvironment>, Store, KeyValueScanner, InternalScanner ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void preGetClosestRowBefore ( ObserverContext<RegionCoprocessorEnvironment>, byte[ ], byte[ ], Result ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method StoreFile.Reader preStoreFileReaderOpen ( ObserverContext<RegionCoprocessorEnvironment>, FileSystem, Path, FSDataInputStreamWrapper, long, CacheConfig, Reference, StoreFile.Reader ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method void preWALRestore ( ObserverContext<RegionCoprocessorEnvironment>, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== WALObserver
+HBase 2.0 introduces the following changes to the WALObserver interface.
+
+===== [−] interface WALObserver
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method void postWALWrite ( ObserverContext<WALCoprocessorEnvironment>, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method boolean preWALWrite ( ObserverContext<WALCoprocessorEnvironment>, HRegionInfo, HLogKey, WALEdit ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== Miscellaneous
+HBase 2.0 introduces changes to the following classes:
+
+hbase-server-1.0.0.jar, OnlineRegions.class package org.apache.hadoop.hbase.regionserver
+===== [−] OnlineRegions.getFromOnlineRegions ( String p1 ) [abstract]  :  HRegion
+org/apache/hadoop/hbase/regionserver/OnlineRegions.getFromOnlineRegions:(Ljava/lang/String;)Lorg/apache/hadoop/hbase/regionserver/HRegion;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from HRegion to Region.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+hbase-server-1.0.0.jar, RegionCoprocessorEnvironment.class package org.apache.hadoop.hbase.coprocessor
+
+===== [−] RegionCoprocessorEnvironment.getRegion ( ) [abstract]  : HRegion
+org/apache/hadoop/hbase/coprocessor/RegionCoprocessorEnvironment.getRegion:()Lorg/apache/hadoop/hbase/regionserver/HRegion;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.hadoop.hbase.regionserver.HRegion to org.apache.hadoop.hbase.regionserver.Region.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+hbase-server-1.0.0.jar, RegionCoprocessorHost.class package org.apache.hadoop.hbase.regionserver
+
+===== [−] RegionCoprocessorHost.postAppend ( Append append, Result result )  : void
+org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.postAppend:(Lorg/apache/hadoop/hbase/client/Append;Lorg/apache/hadoop/hbase/client/Result;)V
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from void to org.apache.hadoop.hbase.client.Result.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== [−] RegionCoprocessorHost.preStoreFileReaderOpen ( FileSystem fs, Path p, FSDataInputStreamWrapper in, long size, CacheConfig cacheConf, Reference r )  :  StoreFile.Reader
+org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.preStoreFileReaderOpen:(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/hbase/io/FSDataInputStreamWrapper;JLorg/apache/hadoop/hbase/io/hfile/CacheConfig;Lorg/apache/hadoop/hbase/io/Reference;)Lorg/apache/hadoop/hbase/regionserver/StoreFile$Reader;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from StoreFile.Reader to StoreFileReader.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== IPC
+==== Scheduler changes:
+1. The following methods became abstract:
+
+package org.apache.hadoop.hbase.ipc
+
+===== [−]class RpcScheduler (1)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method void dispatch ( CallRunner ) has been removed from this class.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+hbase-server-1.0.0.jar, RpcScheduler.class package org.apache.hadoop.hbase.ipc
+
+===== [−] RpcScheduler.dispatch ( CallRunner p1 ) [abstract]  :  void  (1)
+org/apache/hadoop/hbase/ipc/RpcScheduler.dispatch:(Lorg/apache/hadoop/hbase/ipc/CallRunner;)V
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from void to boolean.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+2. The following abstract methods have been removed:
+
+===== [−]interface PriorityFunction  (2)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method long getDeadline ( RPCProtos.RequestHeader, Message ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method int getPriority ( RPCProtos.RequestHeader, Message ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== Server API changes:
+
+===== [−] class RpcServer  (12)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Type of field CurCall has been changed from java.lang.ThreadLocal<RpcServer.Call> to java.lang.ThreadLocal<RpcCall>.| A client program may be interrupted by NoSuchFieldError exception.
+| This class became abstract.| A client program may be interrupted by InstantiationError exception.
+| Abstract method int getNumOpenConnections ( ) has been added to this class.| This class became abstract and a client program may be interrupted by InstantiationError exception.
+| Field callQueueSize of type org.apache.hadoop.hbase.util.Counter has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field connectionList of type java.util.List<RpcServer.Connection> has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field maxIdleTime of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field numConnections of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field port of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field purgeTimeout of type long has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field responder of type RpcServer.Responder has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field socketSendBufferSize of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field thresholdIdleConnections of type int has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+|===
+
+The following abstract method has been removed:
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method Pair<Message,CellScanner> call ( BlockingService, Descriptors.MethodDescriptor, Message, CellScanner, long, MonitoredRPCHandler ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== Replication and WAL changes:
+HBASE-18733: WALKey has been purged completely in HBase 2.0.
+The following are the changes to WALKey:
+
+===== [−] class WALKey (8)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Access level of field clusterIds has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
+| Access level of field compressionContext has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
+| Access level of field encodedRegionName has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
+| Access level of field tablename has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
+| Access level of field writeTime has been changed from protected to private.| A client program may be interrupted by IllegalAccessError exception.
+|===
+
+The following fields have been removed:
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Field LOG of type org.apache.commons.logging.Log has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field VERSION of type WALKey.Version has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field logSeqNum of type long has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+|===
+
+The following are the changes to WALEdit.class:
+hbase-server-1.0.0.jar, WALEdit.class package org.apache.hadoop.hbase.regionserver.wal
+
+===== WALEdit.getCompaction ( Cell kv ) [static]  :  WALProtos.CompactionDescriptor  (1)
+org/apache/hadoop/hbase/regionserver/wal/WALEdit.getCompaction:(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$CompactionDescriptor;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.CompactionDescriptor to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.CompactionDescriptor.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== WALEdit.getFlushDescriptor ( Cell cell ) [static]  :  WALProtos.FlushDescriptor  (1)
+org/apache/hadoop/hbase/regionserver/wal/WALEdit.getFlushDescriptor:(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$FlushDescriptor;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.FlushDescriptor to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.FlushDescriptor.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== WALEdit.getRegionEventDescriptor ( Cell cell ) [static]  :  WALProtos.RegionEventDescriptor  (1)
+org/apache/hadoop/hbase/regionserver/wal/WALEdit.getRegionEventDescriptor:(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$RegionEventDescriptor;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.RegionEventDescriptor to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.RegionEventDescriptor.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+The following is the change to WALKey.class:
+package org.apache.hadoop.hbase.wal
+
+===== WALKey.getBuilder ( WALCellCodec.ByteStringCompressor compressor )  :  WALProtos.WALKey.Builder  (1)
+org/apache/hadoop/hbase/wal/WALKey.getBuilder:(Lorg/apache/hadoop/hbase/regionserver/wal/WALCellCodec$ByteStringCompressor;)Lorg/apache/hadoop/hbase/protobuf/generated/WALProtos$WALKey$Builder;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.WALProtos.WALKey.Builder to org.apache.hadoop.hbase.shaded.protobuf.generated.WALProtos.WALKey.Builder.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== Deprecated APIs or coprocessor:
+
+HBASE-16769 - PB references from MasterObserver and RegionServerObserver have been removed.
+
+==== Admin Interface API changes:
+You cannot administer an HBase 2.0 cluster with an HBase 1.0 client; this includes ReplicationAdmin, ACC, and the Thrift and REST usages of Admin ops. Methods returning protobufs have been changed to return POJOs instead; protobufs are no longer used in the APIs. Returns have changed from void to Future for async methods (see the sketch below).
+HBASE-18106 - Admin.listProcedures and Admin.listLocks were renamed to getProcedures and getLocks.
+MapReduce makes use of Admin, calling admin.getClusterStatus() to calculate splits.
+
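+For example, a 1.x client that called the void-returning disableTableAsync now
+has to handle the returned Future. A minimal sketch against the 2.0 client API
+(the table name is illustrative):
+
+[source,java]
+----
+import java.util.concurrent.Future;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+
+public class AsyncAdminExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    try (Connection connection = ConnectionFactory.createConnection(conf);
+         Admin admin = connection.getAdmin()) {
+      // In 2.0 the async admin methods return a Future instead of void.
+      Future<Void> f = admin.disableTableAsync(TableName.valueOf("t1"));
+      f.get(); // Block until the disable completes, as the old void call did.
+    }
+  }
+}
+----
+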
+* Thrift usage of Admin API:
+
+----
+compact(ByteBuffer)
+createTable(ByteBuffer, List<ColumnDescriptor>)
+deleteTable(ByteBuffer)
+disableTable(ByteBuffer)
+enableTable(ByteBuffer)
+getTableNames()
+majorCompact(ByteBuffer)
+----
+
+* REST usage of Admin API (hbase-rest, package org.apache.hadoop.hbase.rest):
+
+----
+RootResource
+  getTableList()
+    TableName[] tableNames = servlet.getAdmin().listTableNames();
+SchemaResource
+  delete(UriInfo)
+    Admin admin = servlet.getAdmin();
+  update(TableSchemaModel, boolean, UriInfo)
+    Admin admin = servlet.getAdmin();
+StorageClusterStatusResource
+  get(UriInfo)
+    ClusterStatus status = servlet.getAdmin().getClusterStatus();
+StorageClusterVersionResource
+  get(UriInfo)
+    model.setVersion(servlet.getAdmin().getClusterStatus().getHBaseVersion());
+TableResource
+  exists()
+    return servlet.getAdmin().tableExists(TableName.valueOf(table));
+----
+
+The following are the changes to the Admin interface:
+
+===== [−] interface Admin (9)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method createTableAsync ( HTableDescriptor, byte[ ][ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method disableTableAsync ( TableName ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method enableTableAsync ( TableName ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method getCompactionState ( TableName ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method getCompactionStateForRegion ( byte[ ] ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method isSnapshotFinished ( HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method snapshot ( String, TableName, HBaseProtos.SnapshotDescription.Type ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method snapshot ( HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method takeSnapshotAsync ( HBaseProtos.SnapshotDescription ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+The following are the changes to Admin.class:
+hbase-client-1.0.0.jar, Admin.class package org.apache.hadoop.hbase.client
+
+===== [−] Admin.createTableAsync ( HTableDescriptor p1, byte[ ][ ] p2 ) [abstract]  :  void  (1)
+org/apache/hadoop/hbase/client/Admin.createTableAsync:(Lorg/apache/hadoop/hbase/HTableDescriptor;[[B)V
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from void to java.util.concurrent.Future<java.lang.Void>.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== [−] Admin.disableTableAsync ( TableName p1 ) [abstract]  :  void  (1)
+org/apache/hadoop/hbase/client/Admin.disableTableAsync:(Lorg/apache/hadoop/hbase/TableName;)V
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from void to java.util.concurrent.Future<java.lang.Void>.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== [−] Admin.enableTableAsync ( TableName p1 ) [abstract]  :  void  (1)
+org/apache/hadoop/hbase/client/Admin.enableTableAsync:(Lorg/apache/hadoop/hbase/TableName;)V
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from void to java.util.concurrent.Future<java.lang.Void>.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== [−] Admin.getCompactionState ( TableName p1 ) [abstract]  :  AdminProtos.GetRegionInfoResponse.CompactionState  (1)
+org/apache/hadoop/hbase/client/Admin.getCompactionState:(Lorg/apache/hadoop/hbase/TableName;)Lorg/apache/hadoop/hbase/protobuf/generated/AdminProtos$GetRegionInfoResponse$CompactionState;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState to CompactionState.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== [−] Admin.getCompactionStateForRegion ( byte[ ] p1 ) [abstract]  :  AdminProtos.GetRegionInfoResponse.CompactionState  (1)
+org/apache/hadoop/hbase/client/Admin.getCompactionStateForRegion:([B)Lorg/apache/hadoop/hbase/protobuf/generated/AdminProtos$GetRegionInfoResponse$CompactionState;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.AdminProtos.GetRegionInfoResponse.CompactionState to CompactionState.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== HTableDescriptor and HColumnDescriptor changes
+HTableDescriptor and HColumnDescriptor have become interfaces and you create them through builders. HColumnDescriptor (HCD) has become ColumnFamilyDescriptor (CFD). They no longer implement the Writable interface. A sketch of the builder style follows below.
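+
+A minimal sketch of the builder style (the table and family names are
+illustrative):
+
+[source,java]
+----
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class DescriptorBuilderExample {
+  public static TableDescriptor build() {
+    // Column family descriptors are built, not constructed, in 2.0.
+    ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
+        .newBuilder(Bytes.toBytes("cf"))
+        .setMaxVersions(3)
+        .build();
+    // Likewise the table descriptor itself.
+    return TableDescriptorBuilder
+        .newBuilder(TableName.valueOf("t1"))
+        .setColumnFamily(cf)
+        .build();
+  }
+}
+----
+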
+package org.apache.hadoop.hbase
+
+===== [−] class HColumnDescriptor  (1)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Removed super-interface org.apache.hadoop.io.WritableComparable<HColumnDescriptor>.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+HColumnDescriptor in 1.0.0:
+
+[source,java]
+----
+@InterfaceAudience.Public
+@InterfaceStability.Evolving
+public class HColumnDescriptor implements WritableComparable<HColumnDescriptor> {
+----
+
+HColumnDescriptor in 2.0:
+
+[source,java]
+----
+@InterfaceAudience.Public
+@Deprecated // remove it in 3.0
+public class HColumnDescriptor implements ColumnFamilyDescriptor, Comparable<HColumnDescriptor> {
+----
+
+For META_TABLEDESC, the maker method had already been deprecated in HTD in 1.0.0. OWNER_KEY is still in HTD.
+
+===== class HTableDescriptor  (3)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Removed super-interface org.apache.hadoop.io.WritableComparable<HTableDescriptor>.| A client program may be interrupted by NoSuchMethodError exception.
+| Field META_TABLEDESC of type HTableDescriptor has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+|===
+
+hbase-client-1.0.0.jar, HTableDescriptor.class package org.apache.hadoop.hbase
+
+===== [−] HTableDescriptor.getColumnFamilies ( )  :  HColumnDescriptor[ ]  (1)
+org/apache/hadoop/hbase/HTableDescriptor.getColumnFamilies:()[Lorg/apache/hadoop/hbase/HColumnDescriptor;
+
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from HColumnDescriptor[ ] to client.ColumnFamilyDescriptor[ ].| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== [−] HTableDescriptor.getCoprocessors ( )  :  List<String>  (1)
+org/apache/hadoop/hbase/HTableDescriptor.getCoprocessors:()Ljava/util/List;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from java.util.List<java.lang.String> to java.util.Collection.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+* HBASE-12990 MetaScanner is removed and it is replaced by MetaTableAccessor.
+
+===== HTableWrapper changes:
+hbase-server-1.0.0.jar, HTableWrapper.class package org.apache.hadoop.hbase.client
+
+===== [−] HTableWrapper.createWrapper ( List<HTableInterface> openTables, TableName tableName, CoprocessorHost.Environment env, ExecutorService pool ) [static]  :  HTableInterface  (1)
+org/apache/hadoop/hbase/client/HTableWrapper.createWrapper:(Ljava/util/List;Lorg/apache/hadoop/hbase/TableName;Lorg/apache/hadoop/hbase/coprocessor/CoprocessorHost$Environment;Ljava/util/concurrent/ExecutorService;)Lorg/apache/hadoop/hbase/client/HTableInterface;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from HTableInterface to Table.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+* HBASE-12586: Delete all public HTable constructors and delete ConnectionManager#{delete,get}Connection.
+* HBASE-9117: Remove HTablePool and all HConnection pooling related APIs.
+* HBASE-13214: Remove deprecated and unused methods from HTable class
+
+The following are the changes to the Table interface:
+
+===== [−] interface Table  (4)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method batch ( List<?> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method batchCallback ( List<?>, Batch.Callback<R> ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method getWriteBufferSize ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method setWriteBufferSize ( long ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== Deprecated buffer methods in Table (in 1.0.1) and removed in 2.0.0
+
+* HBASE-13298 - Clarify if Table.{set|get}WriteBufferSize() is deprecated or not. The write buffer moved to BufferedMutator; see the sketch below.
+
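+A minimal sketch of the BufferedMutator replacement for the removed
+Table.{set|get}WriteBufferSize() methods (table name, family and buffer size
+are illustrative):
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.BufferedMutator;
+import org.apache.hadoop.hbase.client.BufferedMutatorParams;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.util.Bytes;
+
+public class BufferedMutatorExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = HBaseConfiguration.create();
+    // The write buffer is configured on BufferedMutatorParams, not on Table.
+    BufferedMutatorParams params = new BufferedMutatorParams(TableName.valueOf("t1"))
+        .writeBufferSize(4 * 1024 * 1024);
+    try (Connection connection = ConnectionFactory.createConnection(conf);
+         BufferedMutator mutator = connection.getBufferedMutator(params)) {
+      Put put = new Put(Bytes.toBytes("row1"));
+      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
+      mutator.mutate(put); // buffered client-side
+      mutator.flush();     // explicit flush of the write buffer
+    }
+  }
+}
+----
+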
+* LockTimeoutException and OperationConflictException classes have been removed.
+
+==== class OperationConflictException  (1)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| This class has been removed.| A client program may be interrupted by NoClassDefFoundError exception.
+|===
+
+==== class LockTimeoutException  (1)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| This class has been removed.| A client program may be interrupted by NoClassDefFoundError exception.
+|===
+
+==== Filter API changes:
+The following methods have been removed:
+package org.apache.hadoop.hbase.filter
+
+===== [−] class Filter  (2)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method getNextKeyHint ( KeyValue ) has been removed from this class.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method transform ( KeyValue ) has been removed from this class.| A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+*  HBASE-12296 Filters should work with ByteBufferedCell.
+*  HConnection is removed in HBase 2.0.
+*  RegionLoad and ServerLoad internally moved to shaded PB.
+
+===== [−] class RegionLoad (1)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Type of field regionLoadPB has been changed from protobuf.generated.ClusterStatusProtos.RegionLoad to shaded.protobuf.generated.ClusterStatusProtos.RegionLoad.| A client program may be interrupted by NoSuchFieldError exception.
+|===
+
+* HBASE-15783: AccessControlConstants#OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST is not used anymore.
+package org.apache.hadoop.hbase.security.access
+
+===== [−] interface AccessControlConstants (3)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Field OP_ATTRIBUTE_ACL_STRATEGY of type java.lang.String has been removed from this interface.| A client program may be interrupted by NoSuchFieldError exception.
+| Field OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST of type byte[] has been removed from this interface.| A client program may be interrupted by NoSuchFieldError exception.
+| Field OP_ATTRIBUTE_ACL_STRATEGY_DEFAULT of type byte[] has been removed from this interface.| A client program may be interrupted by NoSuchFieldError exception.
+|===
+
+===== ServerLoad returns long instead of int
+hbase-client-1.0.0.jar, ServerLoad.class package org.apache.hadoop.hbase
+
+===== [−] ServerLoad.getNumberOfRequests ( )  :  int  1
+org/apache/hadoop/hbase/ServerLoad.getNumberOfRequests:()I
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== [−] ServerLoad.getReadRequestsCount ( )  :  int  1
+org/apache/hadoop/hbase/ServerLoad.getReadRequestsCount:()I
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== [−] ServerLoad.getTotalNumberOfRequests ( )  :  int  1
+org/apache/hadoop/hbase/ServerLoad.getTotalNumberOfRequests:()I
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+===== [−] ServerLoad.getWriteRequestsCount ( )  :  int  1
+org/apache/hadoop/hbase/ServerLoad.getWriteRequestsCount:()I
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from int to long.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
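+
+Call sites only need to widen the receiving type; a sketch (assuming an already-connected Admin `admin`):
+
+[source,java]
+----
+import java.io.IOException;
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.ServerLoad;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.client.Admin;
+
+public class ServerLoadExample {
+  static void printRequestCounts(Admin admin) throws IOException {
+    ClusterStatus status = admin.getClusterStatus();
+    for (ServerName server : status.getServers()) {
+      ServerLoad load = status.getLoad(server);
+      long reads = load.getReadRequestsCount();   // was int in 1.x
+      long writes = load.getWriteRequestsCount(); // was int in 1.x
+      System.out.println(server + ": reads=" + reads + ", writes=" + writes);
+    }
+  }
+}
+----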
+
+* HBASE-13636: Remove deprecation for HBASE-4072 (reading of zoo.cfg).
+* Several HConstants fields have been removed. HBASE-16040: Remove configuration "hbase.replication".
+
+===== [−] class HConstants (6)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Field DEFAULT_HBASE_CONFIG_READ_ZOOKEEPER_CONFIG of type boolean has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field HBASE_CONFIG_READ_ZOOKEEPER_CONFIG of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field REPLICATION_ENABLE_DEFAULT of type boolean has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field REPLICATION_ENABLE_KEY of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field ZOOKEEPER_CONFIG_NAME of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+| Field ZOOKEEPER_USEMULTI of type java.lang.String has been removed from this class.| A client program may be interrupted by NoSuchFieldError exception.
+|===
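+
+A sketch of the replication-related migration (the peer id "1" is hypothetical): code that set HConstants.REPLICATION_ENABLE_KEY can simply be deleted, since replication is always available in 2.0 and is controlled per peer.
+
+[source,java]
+----
+import java.io.IOException;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.client.Admin;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+
+public class ReplicationToggleExample {
+  public static void main(String[] args) throws IOException {
+    Configuration conf = HBaseConfiguration.create();
+    // 1.x: conf.setBoolean(HConstants.REPLICATION_ENABLE_KEY, true);
+    // 2.0: the key is gone; enable or disable individual peers instead.
+    try (Connection conn = ConnectionFactory.createConnection(conf);
+         Admin admin = conn.getAdmin()) {
+      admin.enableReplicationPeer("1"); // hypothetical peer id
+    }
+  }
+}
+----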
+
+* HBASE-18732: [compat 1-2] HBASE-14047 removed Cell methods without a deprecation cycle.
+
+===== [−] interface Cell (5)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method getFamily ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method getMvccVersion ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method getQualifier ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method getRow ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+| Abstract method getValue ( ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+|===
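+
+A sketch of the replacements, which copy bytes out via CellUtil (assuming a Cell `cell` in hand):
+
+[source,java]
+----
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+
+public class CellAccessExample {
+  static void readCell(Cell cell) {
+    // getRow/getFamily/getQualifier/getValue are gone; clone bytes instead.
+    byte[] row = CellUtil.cloneRow(cell);
+    byte[] family = CellUtil.cloneFamily(cell);
+    byte[] qualifier = CellUtil.cloneQualifier(cell);
+    byte[] value = CellUtil.cloneValue(cell);
+    // getMvccVersion is gone; the sequence id accessor is getSequenceId().
+    long seqId = cell.getSequenceId();
+    System.out.println(row.length + family.length + qualifier.length
+        + value.length + " bytes, seqId=" + seqId);
+  }
+}
+----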
+
+* HBASE-18795: Expose KeyValue.getBuffer() for tests alone. The previously deprecated KeyValue#getBuffer is now allowed in tests only.
+
+==== Region scanner changes:
+===== [−] interface RegionScanner (1)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Abstract method boolean nextRaw ( List<Cell>, int ) has been removed from this interface.| A client program may be interrupted by NoSuchMethodError exception.
+|===
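+
+A coprocessor-side sketch of the surviving overload (assuming a RegionScanner `scanner` already opened):
+
+[source,java]
+----
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+
+public class RegionScannerExample {
+  static void drain(RegionScanner scanner) throws IOException {
+    List<Cell> cells = new ArrayList<>();
+    boolean more;
+    do {
+      cells.clear();
+      // The nextRaw(List<Cell>, int) overload is gone; use nextRaw(List<Cell>),
+      // or nextRaw(List<Cell>, ScannerContext) when batching must be limited.
+      more = scanner.nextRaw(cells);
+    } while (more);
+  }
+}
+----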
+
+==== StoreFile changes:
+===== [−] class StoreFile (1)
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| This class became an interface.| A client program may be interrupted by IncompatibleClassChangeError or InstantiationError exception depending on the usage of this class.
+|===
+
+==== Mapreduce changes:
+The HFile*Format classes have been removed in HBase 2.0.
+
+==== ClusterStatus changes:
+HBASE-15843: Replace RegionState.getRegionInTransition() Map with a Set
+hbase-client-1.0.0.jar, ClusterStatus.class package org.apache.hadoop.hbase
+
+===== [−] ClusterStatus.getRegionsInTransition ( )  :  Map<String,RegionState>  1
+org/apache/hadoop/hbase/ClusterStatus.getRegionsInTransition:()Ljava/util/Map;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from java.util.Map<java.lang.String,master.RegionState> to java.util.List<master.RegionState>.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+Other changes in ClusterStatus include removal of the convert methods that were no longer necessary after the purge of PB from the API.
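+
+A sketch of the call-site change (assuming a ClusterStatus `status` in hand):
+
+[source,java]
+----
+import java.util.List;
+import org.apache.hadoop.hbase.ClusterStatus;
+import org.apache.hadoop.hbase.master.RegionState;
+
+public class RegionsInTransitionExample {
+  static void printRegionsInTransition(ClusterStatus status) {
+    // 1.x: Map<String, RegionState> rit = status.getRegionsInTransition();
+    // 2.0: the same call returns a List.
+    List<RegionState> rit = status.getRegionsInTransition();
+    for (RegionState state : rit) {
+      System.out.println(state);
+    }
+  }
+}
+----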
+
+==== Purge of PBs from API
+Protocol Buffers (PBs) have been deprecated in APIs in HBase 2.0.
+
+===== [−] HBaseSnapshotException.getSnapshotDescription ( )  :  HBaseProtos.SnapshotDescription  1
+org/apache/hadoop/hbase/snapshot/HBaseSnapshotException.getSnapshotDescription:()Lorg/apache/hadoop/hbase/protobuf/generated/HBaseProtos$SnapshotDescription;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription to org.apache.hadoop.hbase.client.SnapshotDescription.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+* HBASE-15609: Remove PB references from Result, DoubleColumnInterpreter and any such public-facing class for 2.0.
+hbase-client-1.0.0.jar, Result.class package org.apache.hadoop.hbase.client
+
+===== [−] Result.getStats ( )  :  ClientProtos.RegionLoadStats  1
+org/apache/hadoop/hbase/client/Result.getStats:()Lorg/apache/hadoop/hbase/protobuf/generated/ClientProtos$RegionLoadStats;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.hadoop.hbase.protobuf.generated.ClientProtos.RegionLoadStats to RegionLoadStats.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== REST changes:
+hbase-rest-1.0.0.jar, Client.class package org.apache.hadoop.hbase.rest.client
+
+===== [−] Client.getHttpClient ( )  :  HttpClient  1
+org/apache/hadoop/hbase/rest/client/Client.getHttpClient:()Lorg/apache/commons/httpclient/HttpClient;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.commons.httpclient.HttpClient to org.apache.http.client.HttpClient.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+hbase-rest-1.0.0.jar, Response.class package org.apache.hadoop.hbase.rest.client
+
+===== [−] Response.getHeaders ( )  :  Header[ ]  1
+org/apache/hadoop/hbase/rest/client/Response.getHeaders:()[Lorg/apache/commons/httpclient/Header;
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from org.apache.commons.httpclient.Header[] to org.apache.http.Header[].| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
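+
+A sketch against the 2.0 REST client (host and port are hypothetical; /version/cluster is the standard cluster version resource):
+
+[source,java]
+----
+import java.io.IOException;
+import org.apache.hadoop.hbase.rest.client.Client;
+import org.apache.hadoop.hbase.rest.client.Cluster;
+import org.apache.hadoop.hbase.rest.client.Response;
+import org.apache.http.Header;
+
+public class RestHeadersExample {
+  public static void main(String[] args) throws IOException {
+    Client client = new Client(new Cluster().add("localhost", 8080));
+    Response response = client.get("/version/cluster");
+    // getHeaders() now returns org.apache.http.Header[] (HttpComponents)
+    // instead of org.apache.commons.httpclient.Header[].
+    for (Header header : response.getHeaders()) {
+      System.out.println(header.getName() + ": " + header.getValue());
+    }
+  }
+}
+----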
+
+==== PrettyPrinter changes:
+hbase-server-1.0.0.jar, HFilePrettyPrinter.class package org.apache.hadoop.hbase.io.hfile
+
+===== [−] HFilePrettyPrinter.processFile ( Path file )  :  void  1
+org/apache/hadoop/hbase/io/hfile/HFilePrettyPrinter.processFile:(Lorg/apache/hadoop/fs/Path;)V
+[cols="1,1", frame="all"]
+|===
+| Change | Result
+| Return value type has been changed from void to int.| This method has been removed because the return type is part of the method signature. A client program may be interrupted by NoSuchMethodError exception.
+|===
+
+==== AccessControlClient changes:
+HBASE-13171: Change AccessControlClient methods to accept connection object to reduce setup time. Parameters have been changed in the following methods (a migration sketch follows this list):
+
+* hbase-client-1.2.7-SNAPSHOT.jar, AccessControlClient.class
+package org.apache.hadoop.hbase.security.access
+AccessControlClient.getUserPermissions ( Configuration conf, String tableRegex ) [static]  :  List<UserPermission> *DEPRECATED*
+org/apache/hadoop/hbase/security/access/AccessControlClient.getUserPermissions:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;)Ljava/util/List;
+
+* AccessControlClient.grant ( Configuration conf, String namespace, String userName, Permission.Action... actions )[static]  :  void *DEPRECATED*
+org/apache/hadoop/hbase/security/access/AccessControlClient.grant:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
+
+* AccessControlClient.grant ( Configuration conf, String userName, Permission.Action... actions ) [static]  :  void *DEPRECATED*
+org/apache/hadoop/hbase/security/access/AccessControlClient.grant:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
+
+* AccessControlClient.grant ( Configuration conf, TableName tableName, String userName, byte[ ] family, byte[ ] qual,Permission.Action... actions ) [static]  :  void *DEPRECATED*
+org/apache/hadoop/hbase/security/access/AccessControlClient.grant:(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/hbase/TableName;Ljava/lang/String;[B[B[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
+
+* AccessControlClient.isAccessControllerRunning ( Configuration conf ) [static]  :  boolean *DEPRECATED*
+org/apache/hadoop/hbase/security/access/AccessControlClient.isAccessControllerRunning:(Lorg/apache/hadoop/conf/Configuration;)Z
+
+* AccessControlClient.revoke ( Configuration conf, String namespace, String userName, Permission.Action... actions )[static]  :  void *DEPRECATED*
+org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
+
+* AccessControlClient.revoke ( Configuration conf, String userName, Permission.Action... actions ) [static]  :  void *DEPRECATED*
+org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
+
+* AccessControlClient.revoke ( Configuration conf, TableName tableName, String username, byte[ ] family, byte[ ] qualifier,Permission.Action... actions ) [static]  :  void *DEPRECATED*
+org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/hadoop/conf/Configuration;Lorg/apache/hadoop/hbase/TableName;Ljava/lang/String;[B[B[Lorg/apache/hadoop/hbase/security/access/Permission$Action;)V
+* HBASE-18731: [compat 1-2] Mark protected methods of QuotaSettings that touch Protobuf internals as IA.Private
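+
+A migration sketch for the AccessControlClient changes above (user "bob" and table "t1" are hypothetical; note that these methods declare `throws Throwable`):
+
+[source,java]
+----
+import java.util.List;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.security.access.AccessControlClient;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.hadoop.hbase.security.access.UserPermission;
+
+public class AclMigrationExample {
+  public static void main(String[] args) throws Throwable {
+    Configuration conf = HBaseConfiguration.create();
+    // The Configuration-taking overloads are deprecated; pass a Connection.
+    try (Connection conn = ConnectionFactory.createConnection(conf)) {
+      AccessControlClient.grant(conn, TableName.valueOf("t1"), "bob",
+          null, null, Permission.Action.READ);
+      List<UserPermission> perms =
+          AccessControlClient.getUserPermissions(conn, ".*");
+      System.out.println(perms);
+    }
+  }
+}
+----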

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/appendix_hfile_format.adoc b/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
index ba82499..0f37beb 100644
--- a/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
+++ b/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
@@ -94,7 +94,7 @@ The version of HBase introducing the above features reads both version 1 and 2 H
 A version 2 HFile is structured as follows:
 
 .HFile Version 2 Structure
-image:hfilev2.png[HFile Version 2]
+image::hfilev2.png[HFile Version 2]
 
 ==== Unified version 2 block format