Posted to commits@hbase.apache.org by zh...@apache.org on 2018/07/05 07:20:15 UTC

[08/11] hbase git commit: HBASE-20831 Copy master doc into branch-2.1 and edit to make it suit 2.1.0

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/security.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/security.adoc b/src/main/asciidoc/_chapters/security.adoc
index ef7d6c4..dae6c53 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -662,6 +662,7 @@ You also need to enable the DataBlockEncoder for the column family, for encoding
 You can enable compression of each tag in the WAL, if WAL compression is also enabled, by setting the value of `hbase.regionserver.wal.tags.enablecompression` to `true` in _hbase-site.xml_.
 Tag compression uses dictionary encoding.
 
+Coprocessors that run server-side on RegionServers can perform get and set operations on cell Tags. Tags are stripped out at the RPC layer before the read response is sent back, so clients do not see these tags.
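+
+As a minimal sketch of reading tags from a coprocessor (assuming the `RawCell` and `Tag` interfaces exposed to coprocessors in HBase 2.x; verify exact signatures against the release you run):
+
+[source,java]
+----
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Optional;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.RawCell;
+import org.apache.hadoop.hbase.Tag;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.RegionObserver;
+
+public class TagReadingObserver implements RegionCoprocessor, RegionObserver {
+  @Override
+  public Optional<RegionObserver> getRegionObserver() {
+    return Optional.of(this);
+  }
+
+  @Override
+  public void postGetOp(ObserverContext<RegionCoprocessorEnvironment> ctx,
+      Get get, List<Cell> results) throws IOException {
+    for (Cell cell : results) {
+      if (cell instanceof RawCell) {
+        // Tags are visible here, server-side; they never reach the client.
+        Iterator<Tag> tags = ((RawCell) cell).getTags();
+        while (tags.hasNext()) {
+          Tag tag = tags.next();
+          byte[] value = Tag.cloneValue(tag); // inspect type/value as needed
+        }
+      }
+    }
+  }
+}
+----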
 Tag compression is not supported when using WAL encryption.
 
 [[hbase.accesscontrol.configuration]]
@@ -1086,7 +1087,6 @@ public static void revokeFromTable(final HBaseTestingUtility util, final String
 . Showing a User's Effective Permissions
 +
 .HBase Shell
-====
 ----
 hbase> user_permission 'user'
 
@@ -1094,7 +1094,6 @@ hbase> user_permission '.*'
 
 hbase> user_permission JAVA_REGEX
 ----
-====
 
 .API
 ====
@@ -1234,11 +1233,9 @@ Refer to the official API for usage instructions.
 . Define the List of Visibility Labels
 +
 .HBase Shell
-====
 ----
 hbase> add_labels [ 'admin', 'service', 'developer', 'test' ]
 ----
-====
 +
 .Java API
 ====
@@ -1265,7 +1262,6 @@ public static void addLabels() throws Exception {
 . Associate Labels with Users
 +
 .HBase Shell
-====
 ----
 hbase> set_auths 'service', [ 'service' ]
 ----
@@ -1281,7 +1277,6 @@ hbase> set_auths 'qa', [ 'test', 'developer' ]
 ----
 hbase> set_auths '@qagroup', [ 'test' ]
 ----
-====
 +
 .Java API
 ====
@@ -1305,7 +1300,6 @@ public void testSetAndGetUserAuths() throws Throwable {
 . Clear Labels From Users
 +
 .HBase Shell
-====
 ----
 hbase> clear_auths 'service', [ 'service' ]
 ----
@@ -1321,7 +1315,6 @@ hbase> clear_auths 'qa', [ 'test', 'developer' ]
 ----
 hbase> clear_auths '@qagroup', [ 'test', 'developer' ]
 ----
-====
 +
 .Java API
 ====
@@ -1345,7 +1338,6 @@ The label is only applied when data is written.
 The label is associated with a given version of the cell.
 +
 .HBase Shell
-====
 ----
 hbase> set_visibility 'user', 'admin|service|developer', { COLUMNS => 'i' }
 ----
@@ -1357,7 +1349,6 @@ hbase> set_visibility 'user', 'admin|service', { COLUMNS => 'pii' }
 ----
 hbase> set_visibility 'user', 'test', { COLUMNS => [ 'i', 'pii' ], FILTER => "(PrefixFilter ('test'))" }
 ----
-====
 +
 NOTE: HBase Shell support for applying labels or permissions to cells is for testing and verification support, and should not be employed for production use because it won't apply the labels to cells that don't exist yet.
 The correct way to apply cell level labels is to do so in the application code when storing the values.
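+
+As a minimal sketch of the application-code approach (assuming the `user` table and `pii` family from the shell examples above):
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.security.visibility.CellVisibility;
+import org.apache.hadoop.hbase.util.Bytes;
+
+Configuration conf = HBaseConfiguration.create();
+try (Connection conn = ConnectionFactory.createConnection(conf);
+    Table table = conn.getTable(TableName.valueOf("user"))) {
+  Put put = new Put(Bytes.toBytes("row1"));
+  put.addColumn(Bytes.toBytes("pii"), Bytes.toBytes("q"), Bytes.toBytes("value"));
+  // the label expression travels with the cell at write time
+  put.setCellVisibility(new CellVisibility("admin|service"));
+  table.put(put);
+}
+----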
@@ -1408,12 +1399,10 @@ set as an additional filter. It will further filter your results, rather than
 giving you additional authorization.
 
 .HBase Shell
-====
 ----
 hbase> get_auths 'myUser'
 hbase> scan 'table1', AUTHORIZATIONS => ['private']
 ----
-====
 
 .Java API
 ====

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/shell.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/shell.adoc b/src/main/asciidoc/_chapters/shell.adoc
index 13b8dd1..5612e1d 100644
--- a/src/main/asciidoc/_chapters/shell.adoc
+++ b/src/main/asciidoc/_chapters/shell.adoc
@@ -145,7 +145,6 @@ For instance, if your script creates a table, but returns a non-zero exit value,
 You can enter HBase Shell commands into a text file, one command per line, and pass that file to the HBase Shell.
 
 .Example Command File
-====
 ----
 create 'test', 'cf'
 list 'test'
@@ -158,7 +157,6 @@ get 'test', 'row1'
 disable 'test'
 enable 'test'
 ----
-====
 
 .Directing HBase Shell to Execute the Commands
 ====
@@ -227,7 +225,7 @@ The table reference can be used to perform data read write operations such as pu
 For example, previously you would always specify a table name:
 
 ----
-hbase(main):000:0> create ‘t’, ‘f’
+hbase(main):000:0> create 't', 'f'
 0 row(s) in 1.0970 seconds
 hbase(main):001:0> put 't', 'rold', 'f', 'v'
 0 row(s) in 0.0080 seconds
@@ -291,7 +289,7 @@ hbase(main):012:0> tab = get_table 't'
 0 row(s) in 0.0010 seconds
 
 => Hbase::Table - t
-hbase(main):013:0> tab.put ‘r1’ ,’f’, ‘v’
+hbase(main):013:0> tab.put 'r1', 'f', 'v'
 0 row(s) in 0.0100 seconds
 hbase(main):014:0> tab.scan
 ROW                                COLUMN+CELL
@@ -305,7 +303,7 @@ You can then use jruby to script table operations based on these names.
 The list_snapshots command also acts similarly.
 
 ----
-hbase(main):016 > tables = list(‘t.*’)
+hbase(main):016 > tables = list('t.*')
 TABLE
 t
 1 row(s) in 0.1040 seconds

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/tracing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/tracing.adoc b/src/main/asciidoc/_chapters/tracing.adoc
index 8bd1962..7305aa8 100644
--- a/src/main/asciidoc/_chapters/tracing.adoc
+++ b/src/main/asciidoc/_chapters/tracing.adoc
@@ -30,8 +30,10 @@
 :icons: font
 :experimental:
 
-link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449] added support for tracing requests through HBase, using the open source tracing library, link:https://htrace.incubator.apache.org/[HTrace].
-Setting up tracing is quite simple, however it currently requires some very minor changes to your client code (it would not be very difficult to remove this requirement).
+HBase includes facilities for tracing requests using the open source tracing library, link:https://htrace.incubator.apache.org/[Apache HTrace].
+Setting up tracing is quite simple; however, it currently requires some minor changes to your client code (this requirement may be removed in the future).
+
+Support for this feature using HTrace 3 in HBase was added in link:https://issues.apache.org/jira/browse/HBASE-6449[HBASE-6449]. Starting with HBase 2.0, there was a non-compatible update to HTrace 4 via link:https://issues.apache.org/jira/browse/HBASE-18601[HBASE-18601]. The examples in this section use HTrace 4 package names, syntax, and conventions. For older examples, please consult previous versions of this guide.
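+
+For example, a client might set up and use an HTrace 4 tracer along these lines (a sketch using the htrace-core4 API; adjust the span receiver to your environment):
+
+[source,java]
+----
+import java.util.Collections;
+
+import org.apache.htrace.core.HTraceConfiguration;
+import org.apache.htrace.core.TraceScope;
+import org.apache.htrace.core.Tracer;
+
+Tracer tracer = new Tracer.Builder("MyHBaseClient")
+    .conf(HTraceConfiguration.fromMap(Collections.singletonMap(
+        "span.receiver.classes", "org.apache.htrace.core.LocalFileSpanReceiver")))
+    .build();
+try (TraceScope scope = tracer.newScope("myOperation")) {
+  // HBase client calls made while the scope is open are traced
+}
+----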
 
 [[tracing.spanreceivers]]
 === SpanReceivers

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index eb62b33..0340105 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -102,9 +102,9 @@ To disable, set the logging level back to `INFO` level.
 === JVM Garbage Collection Logs
 
 [NOTE]
-----
+====
 All example Garbage Collection logs in this section are based on Java 8 output. The introduction of Unified Logging in Java 9 and newer will result in very different looking logs.
-----
+====
 
 HBase is memory intensive, and using the default GC you can see long pauses in all threads including the _Juliet Pause_ aka "GC of Death". To help debug this or confirm this is happening GC logging can be turned on in the Java virtual machine.
 
@@ -806,10 +806,12 @@ The HDFS directory structure of HBase tables in the cluster is...
 ----
 
 /hbase
-    /<Table>                    (Tables in the cluster)
-        /<Region>               (Regions for the table)
-            /<ColumnFamily>     (ColumnFamilies for the Region for the table)
-                /<StoreFile>    (StoreFiles for the ColumnFamily for the Regions for the table)
+    /data
+        /<Namespace>                    (Namespaces in the cluster)
+            /<Table>                    (Tables in the cluster)
+                /<Region>               (Regions for the table)
+                    /<ColumnFamily>     (ColumnFamilies for the Region for the table)
+                        /<StoreFile>    (StoreFiles for the ColumnFamily for the Regions for the table)
 ----
 
 The HDFS directory structure of HBase WAL is..
@@ -817,7 +819,7 @@ The HDFS directory structure of HBase WAL is..
 ----
 
 /hbase
-    /.logs
+    /WALs
         /<RegionServer>    (RegionServers)
             /<WAL>         (WAL files for the RegionServer)
 ----
@@ -827,7 +829,7 @@ See the link:https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hd
 [[trouble.namenode.0size.hlogs]]
 ==== Zero size WALs with data in them
 
-Problem: when getting a listing of all the files in a RegionServer's _.logs_ directory, one file has a size of 0 but it contains data.
+Problem: when getting a listing of all the files in a RegionServer's _WALs_ directory, one file has a size of 0 but it contains data.
 
 Answer: It's an HDFS quirk.
 A file that's currently being written to will appear to have a size of 0 but once it's closed it will show its true size
@@ -941,6 +943,96 @@ java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
 \... then there is a path issue with the compression libraries.
 See the Configuration section on link:[LZO compression configuration].
 
+[[trouble.rs.startup.hsync]]
+==== RegionServer aborts due to lack of hsync for filesystem
+
+In order to provide data durability for writes to the cluster, HBase relies on the ability to durably save state in a write ahead log. When using a version of Apache Hadoop Common's filesystem API that supports checking on the availability of needed calls, HBase will proactively abort the cluster if it finds it can't operate safely.
+
+For RegionServer roles, the failure will show up in logs like this:
+
+----
+2018-04-05 11:36:22,785 ERROR [regionserver/192.168.1.123:16020] wal.AsyncFSWALProvider: The RegionServer async write ahead log provider relies on the ability to call hflush and hsync for proper operation during component failures, but the current FileSystem does not support doing so. Please check the config value of 'hbase.wal.dir' and ensure it points to a FileSystem mount that has suitable capabilities for output streams.
+2018-04-05 11:36:22,799 ERROR [regionserver/192.168.1.123:16020] regionserver.HRegionServer: ***** ABORTING region server 192.168.1.123,16020,1522946074234: Unhandled: cannot get log writer *****
+java.io.IOException: cannot get log writer
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:112)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:612)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.createWriterInstance(AsyncFSWAL.java:124)
+        at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:759)
+        at org.apache.hadoop.hbase.regionserver.wal.AbstractFSWAL.rollWriter(AbstractFSWAL.java:489)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncFSWAL.<init>(AsyncFSWAL.java:251)
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:69)
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createWAL(AsyncFSWALProvider.java:44)
+        at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:138)
+        at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getWAL(AbstractFSWALProvider.java:57)
+        at org.apache.hadoop.hbase.wal.WALFactory.getWAL(WALFactory.java:252)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.getWAL(HRegionServer.java:2105)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1326)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1191)
+        at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1007)
+        at java.lang.Thread.run(Thread.java:745)
+Caused by: org.apache.hadoop.hbase.util.CommonFSUtils$StreamLacksCapabilityException: hflush and hsync
+        at org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutputHelper.createOutput(AsyncFSOutputHelper.java:69)
+        at org.apache.hadoop.hbase.regionserver.wal.AsyncProtobufLogWriter.initOutput(AsyncProtobufLogWriter.java:168)
+        at org.apache.hadoop.hbase.regionserver.wal.AbstractProtobufLogWriter.init(AbstractProtobufLogWriter.java:167)
+        at org.apache.hadoop.hbase.wal.AsyncFSWALProvider.createAsyncWriter(AsyncFSWALProvider.java:99)
+        ... 15 more
+
+----
+
+If you are attempting to run in standalone mode and see this error, please walk back through the section <<quickstart>> and ensure you have included *all* the given configuration settings.
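+
+If you want to check a filesystem's capabilities yourself, a quick probe similar in spirit to the check HBase performs (a sketch, assuming Hadoop's `StreamCapabilities` API from 2.9+; the path is illustrative):
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+
+Configuration conf = new Configuration();
+try (FileSystem fs = FileSystem.get(conf);
+    FSDataOutputStream out = fs.create(new Path("/tmp/capability-probe"))) {
+  // the same capabilities the WAL providers require
+  boolean walSafe = out.hasCapability("hflush") && out.hasCapability("hsync");
+  System.out.println("hflush/hsync supported: " + walSafe);
+}
+----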
+
+[[trouble.rs.startup.asyncfs]]
+==== RegionServer aborts due to a failure initializing access to HDFS
+
+We try to use _AsyncFSWAL_ for HBase-2.x as it has better performance while consuming fewer resources. The problem with _AsyncFSWAL_ is that it hooks into the internals of the DFSClient implementation, so it can easily be broken when upgrading Hadoop, even by a simple patch release.
+
+If you do not specify the WAL provider, we will try to fall back to the old _FSHLog_ if we fail to initialize _AsyncFSWAL_, but this fallback may not always work. The failure will show up in logs like this:
+
+----
+18/07/02 18:51:06 WARN concurrent.DefaultPromise: An exception was
+thrown by org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete()
+java.lang.Error: Couldn't properly initialize access to HDFS
+internals. Please update your WAL Provider to not make use of the
+'asyncfs' provider. See HBASE-16110 for more information.
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:268)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
+     at org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.fulfillConnectPromise(AbstractEpollChannel.java:638)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.finishConnect(AbstractEpollChannel.java:676)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe.epollOutReady(AbstractEpollChannel.java:552)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:394)
+     at org.apache.hbase.thirdparty.io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:304)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
+     at org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:138)
+     at java.lang.Thread.run(Thread.java:748)
+ Caused by: java.lang.NoSuchMethodException:
+org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
+     at java.lang.Class.getDeclaredMethod(Class.java:2130)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
+     at org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.<clinit>(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
+     ... 18 more
+----
+
+If you hit this error, please specify _FSHLog_, i.e., _filesystem_, explicitly in your config file.
+
+[source,xml]
+----
+<property>
+  <name>hbase.wal.provider</name>
+  <value>filesystem</value>
+</property>
+----
+
+Please also send an email to user@hbase.apache.org or dev@hbase.apache.org reporting the failure, along with your Hadoop version; we will try to fix the problem as soon as possible in an upcoming release.
+
 [[trouble.rs.runtime]]
 === Runtime Errors
 
@@ -1127,6 +1219,29 @@ Sure fire solution is to just use Hadoop dfs to delete the HBase root and let HB
 
 If you have many regions on your cluster and you see an error like that reported above in this sections title in your logs, see link:https://issues.apache.org/jira/browse/HBASE-4246[HBASE-4246 Cluster with too many regions cannot withstand some master failover scenarios].
 
+[[trouble.master.startup.hsync]]
+==== Master fails to become active due to lack of hsync for filesystem
+
+HBase's internal framework for cluster operations requires the ability to durably save state in a write ahead log. When using a version of Apache Hadoop Common's filesystem API that supports checking on the availability of needed calls, HBase will proactively abort the cluster if it finds it can't operate safely.
+
+For Master roles, the failure will show up in logs like this:
+
+----
+2018-04-05 11:18:44,653 ERROR [Thread-21] master.HMaster: Failed to become active master
+java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.
+        at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1034)
+        at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:374)
+        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:530)
+        at org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1267)
+        at org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1173)
+        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:881)
+        at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2048)
+        at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:568)
+        at java.lang.Thread.run(Thread.java:745)
+----
+
+If you are attempting to run in standalone mode and see this error, please walk back through the section <<quickstart>> and ensure you have included *all* the given configuration settings.
+
 [[trouble.master.shutdown]]
 === Shutdown Errors
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/unit_testing.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/unit_testing.adoc b/src/main/asciidoc/_chapters/unit_testing.adoc
index e503f81..3329a75 100644
--- a/src/main/asciidoc/_chapters/unit_testing.adoc
+++ b/src/main/asciidoc/_chapters/unit_testing.adoc
@@ -327,7 +327,5 @@ A record is inserted, a Get is performed from the same table, and the insertion
 
 NOTE: Starting the mini-cluster takes about 20-30 seconds, but that should be appropriate for integration testing.
 
-To use an HBase mini-cluster on Microsoft Windows, you need to use a Cygwin environment.
-
 See the paper at link:http://blog.sematext.com/2010/08/30/hbase-case-study-using-hbasetestingutility-for-local-testing-development/[HBase Case-Study: Using HBaseTestingUtility for Local Testing and
                 Development] (2010) for more information about HBaseTestingUtility.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index ef20c7d..bc2ec1c 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -314,6 +314,411 @@ Quitting...
 
 == Upgrade Paths
 
+[[upgrade2.0]]
+=== Upgrading from 1.x to 2.x
+
+In this section we will first call out significant changes compared to the prior stable HBase release and then go over the upgrade process. Be sure to read the former with care so you avoid surprises.
+
+==== Changes of Note!
+
+First we'll cover deployment / operational changes that you might hit when upgrading to HBase 2.0+. After that we'll call out changes for downstream applications. Please note that Coprocessors are covered in the operational section. Also note that this section is not meant to convey information about new features that may be of interest to you. For a complete summary of changes, please see the CHANGES.txt file in the source release artifact for the version you are planning to upgrade to.
+
+[[upgrade2.0.basic.requirements]]
+.Update to basic prerequisite minimums in HBase 2.0+
+As noted in the section <<basic.prerequisites>>, HBase 2.0+ requires a minimum of Java 8 and Hadoop 2.6. The HBase community recommends ensuring you have already completed any needed upgrades in prerequisites prior to upgrading your HBase version.
+
+[[upgrade2.0.hbck]]
+.HBCK must match HBase server version
+You *must not* use an HBase 1.x version of HBCK against an HBase 2.0+ cluster. HBCK is strongly tied to the HBase server version. Using the HBCK tool from an earlier release against an HBase 2.0+ cluster will destructively alter said cluster in unrecoverable ways.
+
+As of HBase 2.0, HBCK is a read-only tool that can report the status of some non-public system internals. You should not rely on the format nor content of these internals to remain consistent across HBase releases.
+
+////
+Link to a ref guide section on HBCK in 2.0 that explains use and calls out the inability of clients and server sides to detect version of each other.
+////
+
+[[upgrade2.0.removed.configs]]
+.Configuration settings no longer in HBase 2.0+
+
+The following configuration settings are no longer applicable or available. For details, please see the detailed release notes.
+
+* hbase.config.read.zookeeper.config (see <<upgrade2.0.zkconfig>> for migration details)
+* hbase.zookeeper.useMulti (HBase now always uses ZK's multi functionality)
+* hbase.rpc.client.threads.max
+* hbase.rpc.client.nativetransport
+* hbase.fs.tmp.dir
+// These next two seem worth a call out section?
+* hbase.bucketcache.combinedcache.enabled
+* hbase.bucketcache.ioengine no longer supports the 'heap' value.
+* hbase.bulkload.staging.dir
+* hbase.balancer.tablesOnMaster wasn't removed, strictly speaking, but its meaning has fundamentally changed and users should not set it. See the section <<upgrade2.0.regions.on.master>> for details.
+* hbase.master.distributed.log.replay See the section <<upgrade2.0.distributed.log.replay>> for details
+* hbase.regionserver.disallow.writes.when.recovering See the section <<upgrade2.0.distributed.log.replay>> for details
+* hbase.regionserver.wal.logreplay.batch.size See the section <<upgrade2.0.distributed.log.replay>> for details
+* hbase.master.catalog.timeout
+* hbase.regionserver.catalog.timeout
+* hbase.metrics.exposeOperationTimes
+* hbase.metrics.showTableName
+* hbase.online.schema.update.enable (HBase now always supports this)
+* hbase.thrift.htablepool.size.max
+
+[[upgrade2.0.renamed.configs]]
+.Configuration properties that were renamed in HBase 2.0+
+
+The following properties have been renamed. Attempts to set the old property will be ignored at run time.
+
+.Renamed properties
+[options="header"]
+|============================================================================================================
+|Old name |New name
+|hbase.rpc.server.nativetransport |hbase.netty.nativetransport
+|hbase.netty.rpc.server.worker.count |hbase.netty.worker.count
+|hbase.hfile.compactions.discharger.interval |hbase.hfile.compaction.discharger.interval
+|hbase.hregion.percolumnfamilyflush.size.lower.bound |hbase.hregion.percolumnfamilyflush.size.lower.bound.min
+|============================================================================================================
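+
+As a quick aid, the following sketch flags any of the old names from the table above that are still present in a loaded configuration (the mapping mirrors the table; extend it as needed):
+
+[source,java]
+----
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+Map<String, String> renamed = new HashMap<>();
+renamed.put("hbase.rpc.server.nativetransport", "hbase.netty.nativetransport");
+renamed.put("hbase.netty.rpc.server.worker.count", "hbase.netty.worker.count");
+renamed.put("hbase.hfile.compactions.discharger.interval",
+    "hbase.hfile.compaction.discharger.interval");
+renamed.put("hbase.hregion.percolumnfamilyflush.size.lower.bound",
+    "hbase.hregion.percolumnfamilyflush.size.lower.bound.min");
+
+Configuration conf = HBaseConfiguration.create();
+renamed.forEach((oldKey, newKey) -> {
+  if (conf.get(oldKey) != null) {
+    System.out.printf("'%s' is ignored in 2.0+; set '%s' instead%n", oldKey, newKey);
+  }
+});
+----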
+
+[[upgrade2.0.changed.defaults]]
+.Configuration settings with different defaults in HBase 2.0+
+
+The following configuration settings changed their default value. Where applicable, the value to set to restore the behavior of HBase 1.2 is given.
+
+* hbase.security.authorization now defaults to false. Set it to true to restore the same behavior as the previous default.
+* hbase.client.retries.number is now set to 10. Previously it was 35. Downstream users are advised to use client timeouts as described in section <<config_timeouts>> instead.
+* hbase.client.serverside.retries.multiplier is now set to 3. Previously it was 10. Downstream users are advised to use client timeouts as described in section <<config_timeouts>> instead.
+* hbase.master.fileSplitTimeout is now set to 10 minutes. Previously it was 30 seconds.
+* hbase.regionserver.logroll.multiplier is now set to 0.5. Previously it was 0.95. This change is tied with the following doubling of block size. Combined, these two configuration changes should make for WALs of about the same size as those in hbase-1.x, but there should be less incidence of small blocks caused by failing to roll the WAL before hitting the blocksize threshold. See link:https://issues.apache.org/jira/browse/HBASE-19148[HBASE-19148] for discussion.
+* hbase.regionserver.hlog.blocksize defaults to 2x the HDFS default block size for the WAL dir. Previously it was equal to the HDFS default block size for the WAL dir.
+* hbase.client.start.log.errors.counter changed to 5. Previously it was 9.
+* hbase.ipc.server.callqueue.type changed to 'fifo'. In HBase versions 1.0 - 1.2 it was 'deadline'. In prior and later 1.x versions it already defaults to 'fifo'.
+* hbase.hregion.memstore.chunkpool.maxsize is 1.0 by default. Previously it was 0.0. Effectively, this means that previously we would not use a chunk pool when the memstore is onheap, and now we will. See the section <<gcpause>> for more information about the MSLAB chunk pool.
+* hbase.master.cleaner.interval is now set to 10 minutes. Previously it was 1 minute.
+* hbase.master.procedure.threads will now default to 1/4 of the number of available CPUs, but not less than 16 threads. Previously it was a number of threads equal to the number of CPUs.
+* hbase.hstore.blockingStoreFiles is now 16. Previously it was 10.
+* hbase.http.max.threads is now 16. Previously it was 10.
+* hbase.client.max.perserver.tasks is now 2. Previously it was 5.
+* hbase.normalizer.period is now 5 minutes. Previously it was 30 minutes.
+* hbase.regionserver.region.split.policy is now SteppingSplitPolicy. Previously it was IncreasingToUpperBoundRegionSplitPolicy.
+* replication.source.ratio is now 0.5. Previously it was 0.1.
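+
+If you prefer the earlier behavior, a short sketch restoring two of the 1.2-era defaults called out above (whether these values suit you depends on your workload):
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+Configuration conf = HBaseConfiguration.create();
+conf.setBoolean("hbase.security.authorization", true); // 2.0 default: false
+conf.setInt("hbase.client.retries.number", 35);        // 2.0 default: 10
+----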
+
+[[upgrade2.0.regions.on.master]]
+."Master hosting regions" feature broken and unsupported
+
+The feature "Master acts as region server" and associated follow-on work available in HBase 1.y is non-functional in HBase 2.y and should not be used in a production setting due to deadlock on Master initialization. Downstream users are advised to treat related configuration settings as experimental and the feature as inappropriate for production settings.
+
+A brief summary of related changes:
+
+* Master no longer carries regions by default
+* hbase.balancer.tablesOnMaster is a boolean, default false (if it holds an HBase 1.x list of tables, will default to false)
+* hbase.balancer.tablesOnMaster.systemTablesOnly is a boolean that keeps user tables off the master; default false
+* those wishing to replicate the old list-of-servers config should deploy a stand-alone RegionServer process and then rely on Region Server Groups
+
+[[upgrade2.0.distributed.log.replay]]
+."Distributed Log Replay" feature broken and removed
+
+The Distributed Log Replay feature was broken and has been removed from HBase 2.y+. As a consequence all related configs, metrics, RPC fields, and logging have also been removed. Note that this feature was found to be unreliable in the run up to HBase 1.0, defaulted to being unused, and was effectively removed in HBase 1.2.0 when we started ignoring the config that turns it on (link:https://issues.apache.org/jira/browse/HBASE-14465[HBASE-14465]). If you are currently using the feature, be sure to perform a clean shutdown, ensure all DLR work is complete, and disable the feature prior to upgrading.
+
+[[upgrade2.0.prefix-tree.removed]]
+._prefix-tree_ encoding removed
+
+The prefix-tree encoding was removed from HBase 2.0.0 (link:https://issues.apache.org/jira/browse/HBASE-19179[HBASE-19179]).
+It was (late!) deprecated in hbase-1.2.7, hbase-1.4.0, and hbase-1.3.2.
+
+This feature was removed because it was not being actively maintained. If interested in reviving this
+sweet facility, which improved random read latencies at the expense of slower writes,
+write the HBase developers list at _dev at hbase dot apache dot org_.
+
+The prefix-tree encoding needs to be removed from all tables before upgrading to HBase 2.0+.
+To do that, first change the encoding from PREFIX_TREE to something else that is supported in HBase 2.0,
+then major compact the tables that previously used PREFIX_TREE encoding.
+To check which column families are using an incompatible data block encoding, you can use the <<ops.pre-upgrade,Pre-Upgrade Validator>>.
+
+[[upgrade2.0.metrics]]
+.Changed metrics
+
+The following metrics have changed names:
+
+* Metrics previously published under the name "AssignmentManger" [sic] are now published under the name "AssignmentManager"
+
+The following metrics have changed their meaning:
+
+* The metric 'blockCacheEvictionCount' published on a per-region server basis no longer includes blocks removed from the cache due to the invalidation of the hfiles they are from (e.g. via compaction).
+* The metric 'totalRequestCount' increments once per request; previously it incremented by the number of `Actions` carried in the request; e.g. if a request was a `multi` made of four Gets and two Puts, we'd increment 'totalRequestCount' by six; now we increment by one regardless. Expect to see lower values for this metric in hbase-2.0.0.
+* The metric 'readRequestCount' now counts only reads that return a non-empty row, whereas older versions of HBase incremented 'readRequestCount' whether or not the read returned a Result. This change will flatten the profile of read-request graphs when many requests are for non-existent rows; a YCSB read-heavy workload can behave this way depending on how the database was loaded.
+
+The following metrics have been removed:
+
+* Metrics related to the Distributed Log Replay feature are no longer present. They were previously found in the region server context under the name 'replay'. See the section <<upgrade2.0.distributed.log.replay>> for details.
+
+The following metrics have been added:
+
+* 'totalRowActionRequestCount' is a count of region row actions summing reads and writes.
+
+[[upgrade2.0.logging]]
+.Changed logging
+HBase-2.0.0 now uses link:https://www.slf4j.org/[slf4j] as its logging frontend.
+Previously, we used link:http://logging.apache.org/log4j/1.2/[log4j (1.2)].
+For most users the transition should be seamless; slf4j does a good job interpreting
+_log4j.properties_ logging configuration files such that you should not notice
+any difference in your log system emissions.
+
+That said, your _log4j.properties_ may need freshening. See link:https://issues.apache.org/jira/browse/HBASE-20351[HBASE-20351]
+for an example, where a stale log configuration file manifested as netty configuration
+being dumped at DEBUG level as a preamble on every shell command invocation.
+
+[[upgrade2.0.zkconfig]]
+.ZooKeeper configs no longer read from zoo.cfg
+
+HBase no longer optionally reads the 'zoo.cfg' file for ZooKeeper related configuration settings. If you previously relied on the 'hbase.config.read.zookeeper.config' config for this functionality, you should migrate any needed settings to the hbase-site.xml file while adding the prefix 'hbase.zookeeper.property.' to each property name.
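+
+For example, a `zoo.cfg` entry like `tickTime=2000` maps to the prefixed property shown in this sketch (the key is illustrative of the prefixing rule):
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+Configuration conf = HBaseConfiguration.create();
+// zoo.cfg: tickTime=2000  -->  prefixed equivalent in HBase configuration:
+conf.set("hbase.zookeeper.property.tickTime", "2000");
+----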
+
+[[upgrade2.0.permissions]]
+.Changes in permissions
+The following permission related changes either altered semantics or defaults:
+
+* Permissions granted to a user now merge with existing permissions for that user, rather than over-writing them. (see link:https://issues.apache.org/jira/browse/HBASE-17472[the release note on HBASE-17472] for details)
+* Region Server Group commands (added in 1.4.0) now require admin privileges.
+
+[[upgrade2.0.admin.commands]]
+.Most Admin APIs don't work against an HBase 2.0+ cluster from pre-HBase 2.0 clients
+
+A number of admin commands are known not to work when used from a pre-HBase 2.0 client. This includes an HBase Shell that has the library jars from pre-HBase 2.0. You will need to plan for an outage of admin API and command use until you can also update to the needed client version.
+
+The following client operations do not work against HBase 2.0+ cluster when executed from a pre-HBase 2.0 client:
+
+* list_procedures
+* split
+* merge_region
+* list_quotas
+* enable_table_replication
+* disable_table_replication
+* Snapshot related commands
+
+.Admin commands deprecated in 1.0 have been removed.
+
+The following commands that were deprecated in 1.0 have been removed. Where applicable the replacement command is listed.
+
+* The 'hlog' command has been removed. Downstream users should rely on the 'wal' command instead.
+
+[[upgrade2.0.memory]]
+.Region Server memory consumption changes.
+
+Users upgrading from versions prior to HBase 1.4 should read the instructions in section <<upgrade1.4.memory>>.
+
+Additionally, HBase 2.0 has changed how memstore memory is tracked for flushing decisions. Previously, both the data size and overhead for storage were used to calculate utilization against the flush threshold. Now, only data size is used to make these per-region decisions. Globally, the addition of the storage overhead is used to make decisions about forced flushes.
+
+[[upgrade2.0.ui.splitmerge.by.row]]
+.Web UI for splitting and merging operate on row prefixes
+
+Previously, the Web UI included functionality on table status pages to merge or split based on an encoded region name. In HBase 2.0, this functionality instead works by taking a row prefix.
+
+[[upgrade2.0.replication]]
+.Special upgrading for Replication users from pre-HBase 1.4
+
+Users running versions of HBase prior to the 1.4.0 release that make use of replication should be sure to read the instructions in the section <<upgrade1.4.replication>>.
+
+[[upgrade2.0.shell]]
+.HBase shell changes
+
+The HBase shell command relies on a bundled JRuby instance. This bundled JRuby has been updated from version 1.6.8 to version 9.1.10.0, which represents a change from Ruby 1.8 to Ruby 2.3.3 and introduces non-compatible language changes for user scripts.
+
+The HBase shell command now ignores the '--return-values' flag that was present in early HBase 1.4 releases. Instead the shell always behaves as though that flag were passed. If you wish to avoid having expression results printed in the console you should alter your IRB configuration as noted in the section <<irbrc>>.
+
+[[upgrade2.0.coprocessors]]
+.Coprocessor APIs have changed in HBase 2.0+
+
+All Coprocessor APIs have been refactored to improve supportability around binary API compatibility for future versions of HBase. If you or applications you rely on have custom HBase coprocessors, you should read link:https://issues.apache.org/jira/browse/HBASE-18169[the release notes for HBASE-18169] for details of changes you will need to make prior to upgrading to HBase 2.0+.
+
+For example, if you had a BaseRegionObserver in HBase 1.2 then at a minimum you will need to update it to implement both RegionObserver and RegionCoprocessor and add the method
+
+[source,java]
+----
+...
+  @Override
+  public Optional<RegionObserver> getRegionObserver() {
+    return Optional.of(this);
+  }
+...
+----
+
+////
+This would be a good place to link to a coprocessor migration guide
+////
+
+[[upgrade2.0.hfile3.only]]
+.HBase 2.0+ can no longer write HFile v2 files.
+
+HBase has simplified its internal HFile handling. As a result, we can no longer write HFile versions earlier than the default of version 3. Upgrading users should ensure that hfile.format.version is not set to 2 in hbase-site.xml before upgrading. Failing to do so will cause Region Server failure. HBase can still read HFiles written in the older version 2 format.
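+
+A simple pre-upgrade guard along these lines may help (a sketch using the property name above; HBase 2.0's default HFile version is 3):
+
+[source,java]
+----
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+
+Configuration conf = HBaseConfiguration.create();
+if (conf.getInt("hfile.format.version", 3) < 3) {
+  throw new IllegalStateException(
+      "Remove hfile.format.version=2 from hbase-site.xml before upgrading to 2.0+");
+}
+----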
+
+[[upgrade2.0.pb.wal.only]]
+.HBase 2.0+ can no longer read Sequence File based WAL files.
+
+HBase can no longer read the deprecated WAL files written in the Apache Hadoop Sequence File format. The hbase.regionserver.hlog.reader.impl and hbase.regionserver.hlog.writer.impl configuration entries should be set to use the Protobuf based WAL reader / writer classes. This implementation has been the default since HBase 0.96, so legacy WAL files should not be a concern for most downstream users.
+
+A clean cluster shutdown should ensure there are no WAL files. If you are unsure of a given WAL file's format you can use the `hbase wal` command to parse files while the HBase cluster is offline. In HBase 2.0+, this command will not be able to read a Sequence File based WAL. For more information on the tool see the section <<hlog_tool.prettyprint>>.
+
+[[upgrade2.0.filters]]
+.Change in behavior for filters
+
+The Filter ReturnCode NEXT_ROW has been redefined as skipping to the next row in the current column family, not to the next row across all families. This is more reasonable, because ReturnCode is a concept at the store level, not the region level.
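+
+To illustrate, a trivial custom filter that returns NEXT_ROW (a sketch against the 2.0 Filter API; real custom filters also need serialization support to be deployed server-side) now skips within the current store only:
+
+[source,java]
+----
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.filter.FilterBase;
+
+public class FirstCellPerStoreFilter extends FilterBase {
+  @Override
+  public ReturnCode filterCell(Cell cell) {
+    // In 2.0+, NEXT_ROW advances to the next row within the current
+    // column family (store), not across all families of the row.
+    return ReturnCode.NEXT_ROW;
+  }
+}
+----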
+
+[[upgrade2.0.shaded.client.preferred]]
+.Downstream HBase 2.0+ users should use the shaded client
+Downstream users are strongly urged to rely on the Maven coordinates org.apache.hbase:hbase-shaded-client for their runtime use. This artifact contains all the needed implementation details for talking to an HBase cluster while minimizing the number of third party dependencies exposed.
+
+Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g. o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public API. Those classes are included so that they can be altered to use the same relocated third party dependencies as the rest of the HBase client code. In the event that you need to *also* use Hadoop in your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.
+
+[[upgrade2.0.mapreduce.module]]
+.Downstream HBase 2.0+ users of MapReduce must switch to new artifact
+Downstream users of HBase's integration for Apache Hadoop MapReduce must switch to relying on the org.apache.hbase:hbase-shaded-mapreduce module for their runtime use. Historically, downstream users relied on either the org.apache.hbase:hbase-server or org.apache.hbase:hbase-shaded-server artifacts for these classes. Both uses are no longer supported and in the vast majority of cases will fail at runtime.
+
+Note that this artifact exposes some classes in the org.apache.hadoop package space (e.g. o.a.h.configuration.Configuration) so that we can maintain source compatibility with our public API. Those classes are included so that they can be altered to use the same relocated third party dependencies as the rest of the HBase client code. In the event that you need to *also* use Hadoop in your code, you should ensure all Hadoop related jars precede the HBase client jar in your classpath.
+
+[[upgrade2.0.dependencies]]
+.Significant changes to runtime classpath
+A number of internal dependencies for HBase were updated or removed from the runtime classpath. Downstream client users who do not follow the guidance in <<upgrade2.0.shaded.client.preferred>> will have to examine the set of dependencies Maven pulls in for impact. Downstream users of LimitedPrivate Coprocessor APIs will need to examine the runtime environment for impact. For details on our new handling of third party libraries that have historically been a problem with respect to harmonizing compatible runtime versions, see the reference guide section <<thirdparty>>.
+
+[[upgrade2.0.public.api]]
+.Multiple breaking changes to source and binary compatibility for client API
+The Java client API for HBase has a number of changes that break both source and binary compatibility; for details, see the Compatibility Check Report for the release you'll be upgrading to.
+
+[[upgrade2.0.tracing]]
+.Tracing implementation changes
+The backing implementation of HBase's tracing features was updated from Apache HTrace 3 to HTrace 4, which includes several breaking changes. While HTrace 3 and 4 can coexist in the same runtime, they will not integrate with each other, leading to disjoint trace information.
+
+The internal changes to HBase during this upgrade were sufficient for compilation, but it has not been confirmed that there are no regressions in tracing functionality. Please consider this feature experimental for the immediate future.
+
+If you previously relied on client side tracing integrated with HBase operations, it is recommended that you upgrade your usage to HTrace 4 as well.
+
+[[upgrade2.0.perf]]
+.Performance
+
+You will likely see a change in the performance profile on upgrade to hbase-2.0.0 given
+read and write paths have undergone significant change. On release, writes may be
+slower with reads about the same or much better, dependent on context. Be prepared
+to spend time re-tuning (See <<performance>>).
+Performance is also an area that is now under active review so look forward to
+improvement in coming releases (See
+link:https://issues.apache.org/jira/browse/HBASE-20188[HBASE-20188 TESTING Performance]).
+
+////
+This would be a good place to link to an appendix on migrating applications
+////
+
+[[upgrade2.0.coprocessors.upgrade]]
+==== Upgrading Coprocessors to 2.0
+Coprocessors have changed substantially in 2.0 ranging from top level design changes in class
+hierarchies to changed/removed methods, interfaces, etc.
+(Parent jira: link:https://issues.apache.org/jira/browse/HBASE-18169[HBASE-18169 Coprocessor fix
+and cleanup before 2.0.0 release]). Some of the reasons for such widespread changes:
+
+. Pass Interfaces instead of Implementations; e.g. TableDescriptor instead of HTableDescriptor and
+Region instead of HRegion (link:https://issues.apache.org/jira/browse/HBASE-18241[HBASE-18241]
+Change client.Table and client.Admin to not use HTableDescriptor).
+. Design refactor so implementers need to fill out less boilerplate and so we can do more
+compile-time checking (link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732])
+. Purge Protocol Buffers from Coprocessor API
+(link:https://issues.apache.org/jira/browse/HBASE-18859[HBASE-18859],
+link:https://issues.apache.org/jira/browse/HBASE-16769[HBASE-16769], etc)
+. Cut back on what we expose to Coprocessors removing hooks on internals that were too private to
+ expose (e.g. link:https://issues.apache.org/jira/browse/HBASE-18453[HBASE-18453]
+ CompactionRequest should not be exposed to user directly;
+ link:https://issues.apache.org/jira/browse/HBASE-18298[HBASE-18298] RegionServerServices Interface
+ cleanup for CP expose; etc)
+
+To use coprocessors in 2.0, they should be rebuilt against the new API; otherwise they will fail to
+load and HBase processes will die.
+
+Suggested order of changes to upgrade the coprocessors:
+
+. Directly implement observer interfaces instead of extending Base*Observer classes. Change
+ `Foo extends BaseXXXObserver` to `Foo implements XXXObserver`.
+ (link:https://issues.apache.org/jira/browse/HBASE-17312[HBASE-17312]).
+. Adapt to the design change from Inheritance to Composition
+ (link:https://issues.apache.org/jira/browse/HBASE-17732[HBASE-17732]) by following
+ link:https://github.com/apache/hbase/blob/master/dev-support/design-docs/Coprocessor_Design_Improvements-Use_composition_instead_of_inheritance-HBASE-17732.adoc#migrating-existing-cps-to-new-design[this
+ example].
+. getTable() has been removed from the CoprocessorEnvironment; coprocessors should self-manage
+ Table instances, as in the sketch below.
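+
+A sketch of that self-management (assuming the 2.x `RegionCoprocessorEnvironment#getConnection()` accessor; `ctx` is the ObserverContext passed to the hook, and the table name is illustrative):
+
+[source,java]
+----
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Table;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+
+RegionCoprocessorEnvironment env = ctx.getEnvironment();
+try (Table table = env.getConnection().getTable(TableName.valueOf("my_table"))) {
+  // use the table, and close it yourself when done
+}
+----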
+
+Some examples of writing coprocessors with the new API can be found in the hbase-examples module
+link:https://github.com/apache/hbase/tree/branch-2.0/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example[here].
+
+Lastly, if an API has been changed or removed in a way that breaks you irreparably, and if there's a
+good justification to add it back, bring it to our notice (dev@hbase.apache.org).
+
+[[upgrade2.0.rolling.upgrades]]
+==== Rolling Upgrade from 1.x to 2.x
+
+Rolling upgrades are currently an experimental feature.
+They have had limited testing. There are likely corner
+cases as yet uncovered in our
+limited experience so you should be careful if you go this
+route. The stop/upgrade/start described in the next section,
+<<upgrade2.0.process>>, is the safest route.
+
+That said, the below is a prescription for a
+rolling upgrade of a 1.4 cluster.
+
+.Pre-Requirements
+* Upgrade to the latest 1.4.x release. Pre-1.4 releases may also work but are not tested, so please upgrade to 1.4.3+ before upgrading to 2.x, unless you are an expert familiar with region assignment and crash processing. See the section <<upgrade1.4>> on how to upgrade to 1.4.x.
+* Make sure that the zk-less assignment is enabled, i.e., set `hbase.assignment.usezk` to `false`. This is the most important thing. It allows the 1.x master to assign/unassign regions to/from 2.x region servers. See the release note section of link:https://issues.apache.org/jira/browse/HBASE-11059[HBASE-11059] on how to migrate from zk based assignment to zk less assignment.
+* We have tested rolling upgrading from 1.4.3 to 2.1.0, but it should also work if you want to upgrade to 2.0.x.
+
+.Instructions
+. Unload a region server and upgrade it to 2.1.0. With link:https://issues.apache.org/jira/browse/HBASE-17931[HBASE-17931] in place, the meta region and regions for other system tables will be moved to this region server immediately. If not, please move them manually to the new region server. This is very important because
+** The schema of the meta region is hard coded. If meta is on an old region server, the new region servers cannot access it, as the old meta does not have some column families (for example, table state).
+** A client with a lower version can communicate with a server with a higher version, but not vice versa. If the meta region is on an old region server, a new region server would have to use a higher-version client to communicate with a lower-version server, which may introduce strange problems.
+. Rolling upgrade all other region servers.
+. Upgrade the masters.
+
+It is OK if region servers crash during the rolling upgrade. The 1.x master can assign regions to both 1.x and 2.x region servers, and link:https://issues.apache.org/jira/browse/HBASE-19166[HBASE-19166] fixed a problem so that a 1.x region server can also read the WALs written by a 2.x region server and split them.
+
+NOTE: Please read the <<Changes of Note!,Changes of Note!>> section carefully before rolling upgrading. Make sure that you do not use features removed in 2.0, for example the prefix-tree encoding or the old hfile format. These could fail the upgrade and leave the cluster in an intermediate state that is hard to recover from.
+
+NOTE: If you have success running this prescription, please notify the dev list with a note on your experience and/or update the above with any deviations you may have taken so others going this route can benefit from your efforts.
+
+[[upgrade2.0.process]]
+==== Upgrade process from 1.x to 2.x
+
+To upgrade an existing HBase 1.x cluster, you should:
+
+* Clean shutdown of existing 1.x cluster
+* Update coprocessors
+* Upgrade Master roles first
+* Upgrade RegionServers
+* (Eventually) Upgrade Clients
+
+[[upgrade1.4]]
+=== Upgrading from pre-1.4 to 1.4+
+
+[[upgrade1.4.memory]]
+==== Region Server memory consumption changes.
+
+Users upgrading from versions prior to HBase 1.4 should be aware that the estimates of heap usage by the memstore objects (KeyValue, object and array header sizes, etc) have been made more accurate for heap sizes up to 32G (using CompressedOops), resulting in them dropping by 10-50% in practice. This also results in fewer flushes and compactions due to "fatter" flushes. YMMV. As a result, the actual heap usage of the memstore before being flushed may increase by up to 100%. If configured memory limits for the region server had been tuned based on observed usage, this change could result in worse GC behavior or even OutOfMemory errors. Set the environment property (not hbase-site.xml) "hbase.memorylayout.use.unsafe" to false to disable.
+
+
+[[upgrade1.4.replication]]
+==== Replication peer's TableCFs config
+
+Before 1.4, the table name could not include a namespace in a replication peer's TableCFs config. This was fixed by adding TableCFs to the ReplicationPeerConfig stored on Zookeeper. So when upgrading to 1.4, you first have to update the original ReplicationPeerConfig data on Zookeeper. There are four steps to upgrade when your cluster has a replication peer with a TableCFs config.
+
+* Disable the replication peer.
+* If the master has permission to write the replication peer znode, then rolling-update the master directly. If not, use the TableCFsUpdater tool to update the replication peer's config.
+[source,bash]
+----
+$ bin/hbase org.apache.hadoop.hbase.replication.master.TableCFsUpdater update
+----
+* Rolling update regionservers.
+* Enable the replication peer.
+
+Notes:
+
+* Do not use an old client (before 1.4) to change the replication peer's config. Because the client writes the config to Zookeeper directly, an old client will miss the TableCFs config. Moreover, an old client writes its TableCFs config to the old tablecfs znode, which will not work for a new-version regionserver.
+
+[[upgrade1.4.rawscan]]
+==== Raw scan now ignores TTL
+
+Doing a raw scan will now return results that have expired according to TTL settings.
+
 [[upgrade1.0]]
 === Upgrading to 1.x
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/book.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index 0a21e7b..764d7b4 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -63,7 +63,6 @@ include::_chapters/security.adoc[]
 include::_chapters/architecture.adoc[]
 include::_chapters/hbase_mob.adoc[]
 include::_chapters/inmemory_compaction.adoc[]
-include::_chapters/backup_restore.adoc[]
 include::_chapters/hbase_apis.adoc[]
 include::_chapters/external_apis.adoc[]
 include::_chapters/thrift_filter_language.adoc[]
@@ -75,6 +74,8 @@ include::_chapters/ops_mgt.adoc[]
 include::_chapters/developer.adoc[]
 include::_chapters/unit_testing.adoc[]
 include::_chapters/protobuf.adoc[]
+include::_chapters/pv2.adoc[]
+include::_chapters/amv2.adoc[]
 include::_chapters/zookeeper.adoc[]
 include::_chapters/community.adoc[]
 
@@ -94,3 +95,4 @@ include::_chapters/asf.adoc[]
 include::_chapters/orca.adoc[]
 include::_chapters/tracing.adoc[]
 include::_chapters/rpc.adoc[]
+include::_chapters/appendix_hbase_incompatibilities.adoc[]

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/asciidoc/images
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/images b/src/main/asciidoc/images
index 06d04d0..02e8e94 120000
--- a/src/main/asciidoc/images
+++ b/src/main/asciidoc/images
@@ -1 +1 @@
-../site/resources/images
\ No newline at end of file
+../../site/resources/images/
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/acid-semantics.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/acid-semantics.adoc b/src/main/site/asciidoc/acid-semantics.adoc
deleted file mode 100644
index 0038901..0000000
--- a/src/main/site/asciidoc/acid-semantics.adoc
+++ /dev/null
@@ -1,118 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Apache HBase (TM) ACID Properties
-
-== About this Document
-
-Apache HBase (TM) is not an ACID compliant database. However, it does guarantee certain specific properties.
-
-This specification enumerates the ACID properties of HBase.
-
-== Definitions
-
-For the sake of common vocabulary, we define the following terms:
-Atomicity::
-  An operation is atomic if it either completes entirely or not at all.
-
-Consistency::
-  All actions cause the table to transition from one valid state directly to another (eg a row will not disappear during an update, etc).
-
-Isolation::
-  an operation is isolated if it appears to complete independently of any other concurrent transaction.
-
-Durability::
-  Any update that reports &quot;successful&quot; to the client will not be lost.
-
-Visibility::
-  An update is considered visible if any subsequent read will see the update as having been committed.
-
-
-The terms _must_ and _may_ are used as specified by link:[RFC 2119].
-
-In short, the word &quot;must&quot; implies that, if some case exists where the statement is not true, it is a bug. The word _may_ implies that, even if the guarantee is provided in a current release, users should not rely on it.
-
-== APIs to Consider
-- Read APIs
-* get
-* scan
-- Write APIs
-* put
-* batch put
-* delete
-- Combination (read-modify-write) APIs
-* incrementColumnValue
-* checkAndPut
-
-== Guarantees Provided
-
-.Atomicity
-.  All mutations are atomic within a row. Any put will either wholely succeed or wholely fail.footnoteref[Puts will either wholely succeed or wholely fail, provided that they are actually sent to the RegionServer.  If the writebuffer is used, Puts will not be sent until the writebuffer is filled or it is explicitly flushed.]
-.. An operation that returns a _success_ code has completely succeeded.
-.. An operation that returns a _failure_ code has completely failed.
-.. An operation that times out may have succeeded and may have failed. However, it will not have partially succeeded or failed.
-. This is true even if the mutation crosses multiple column families within a row.
-. APIs that mutate several rows will _not_ be atomic across the multiple rows. For example, a multiput that operates on rows 'a','b', and 'c' may return having mutated some but not all of the rows. In such cases, these APIs will return a list of success codes, each of which may be succeeded, failed, or timed out as described above.
-. The checkAndPut API happens atomically, like the typical _compareAndSet (CAS)_ operation found in many hardware architectures (see the sketch after this list).
-. Mutations are applied in a well-defined order for each row, with no interleaving. For example, if one writer issues the mutation `a=1,b=1,c=1` and another writer issues the mutation `a=2,b=2,c=2`, the row must either be `a=1,b=1,c=1` or `a=2,b=2,c=2` and must *not* be something like `a=1,b=2,c=1`. +
-NOTE: This is not true _across rows_ for multirow batch mutations.
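-
-A minimal sketch of the checkAndPut CAS behavior, using the older `HTable` client API to match the APIs named in this document; the table, row, and column names are hypothetical:
-
-[source,java]
-----
-import java.io.IOException;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.hbase.client.HTable;
-import org.apache.hadoop.hbase.client.Put;
-import org.apache.hadoop.hbase.util.Bytes;
-
-public static boolean casExample(Configuration conf) throws IOException {
-  HTable table = new HTable(conf, "myTable"); // hypothetical table
-  try {
-    Put put = new Put(Bytes.toBytes("row1"));
-    put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("newValue"));
-    // Atomically apply the Put only if cf:q currently equals "oldValue".
-    // Of several concurrent callers racing on the same row, exactly one wins.
-    // Returns true if the compare matched and the Put was committed;
-    // false if the current value differed and nothing was written.
-    return table.checkAndPut(Bytes.toBytes("row1"), Bytes.toBytes("cf"),
-        Bytes.toBytes("q"), Bytes.toBytes("oldValue"), put);
-  } finally {
-    table.close();
-  }
-}
-----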
-
-== Consistency and Isolation
-. All rows returned via any access API will consist of a complete row that existed at some point in the table's history.
-. This is true across column families - i.e. a get of a full row that occurs concurrent with some mutations 1,2,3,4,5 will return a complete row that existed at some point in time between mutation i and i+1 for some i between 1 and 5.
-. The state of a row will only move forward through the history of edits to it.
-
-== Consistency of Scans
-A scan is *not* a consistent view of a table. Scans do *not* exhibit _snapshot isolation_.
-
-Rather, scans have the following properties:
-
-. Any row returned by the scan will be a consistent view (i.e. that version of the complete row existed at some point in time)footnoteref:[consistency,A consistent view is not guaranteed for intra-row scanning -- i.e. fetching a portion of a row in one RPC then going back to fetch another portion of the row in a subsequent RPC. Intra-row scanning happens when you set a limit on how many values to return per Scan#next (see link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html#setBatch(int)[Scan#setBatch(int)]).]
-. A scan will always reflect a view of the data _at least as new as_ the beginning of the scan. This satisfies the visibility guarantees enumerated below.
-.. For example, if client A writes data X and then communicates via a side channel to client B, any scans started by client B will contain data at least as new as X.
-.. A scan _must_ reflect all mutations committed prior to the construction of the scanner, and _may_ reflect some mutations committed subsequent to the construction of the scanner.
-.. Scans must include _all_ data written prior to the scan (except in the case where data is subsequently mutated, in which case it _may_ reflect the mutation).
-
-Those familiar with relational databases will recognize this isolation level as "read committed".
-
-NOTE: The guarantees listed above regarding scanner consistency are referring to "transaction commit time", not the "timestamp" field of each cell. That is to say, a scanner started at time _t_ may see edits with a timestamp value greater than _t_, if those edits were committed with a "forward dated" timestamp before the scanner was constructed.
-
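-A minimal sketch of this read-committed behavior, again using the older `HTable` API with hypothetical names and an already-opened `HTable` called `table`:
-
-[source,java]
-----
-// A scanner must reflect all mutations committed before its construction,
-// and may reflect mutations committed after it ("read committed").
-Put put = new Put(Bytes.toBytes("row1"));
-put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("x"));
-table.put(put); // committed once this call returns successfully
-
-ResultScanner scanner = table.getScanner(new Scan());
-try {
-  for (Result result : scanner) {
-    System.out.println(result); // must include row1 as written above
-  }
-} finally {
-  scanner.close();
-}
-----
-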
-== Visibility
-
-. When a client receives a "success" response for any mutation, that mutation is immediately visible to both that client and any client with whom it later communicates through side channels.footnoteref:[consistency]
-. A row must never exhibit so-called "time-travel" properties. That is to say, if a series of mutations moves a row sequentially through a series of states, any sequence of concurrent reads will return a subsequence of those states. +
-For example, if a row's cells are mutated using the `incrementColumnValue` API, a client must never see the value of any cell decrease (see the sketch after this list). +
-This is true regardless of which read API is used to read back the mutation.
-. Any version of a cell that has been returned to a read operation is guaranteed to be durably stored.
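-
-A sketch of the no-time-travel guarantee around `incrementColumnValue`, with hypothetical names and an already-opened `HTable` called `table`:
-
-[source,java]
-----
-// Each call returns the post-increment value; the counter only moves forward.
-long v1 = table.incrementColumnValue(Bytes.toBytes("row1"),
-    Bytes.toBytes("cf"), Bytes.toBytes("counter"), 1L);
-long v2 = table.incrementColumnValue(Bytes.toBytes("row1"),
-    Bytes.toBytes("cf"), Bytes.toBytes("counter"), 1L);
-// v2 > v1 always holds, and any concurrent reader observes some
-// subsequence of the counter's states -- never a decrease.
-----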
-
-== Durability
-. All visible data is also durable data. That is to say, a read will never return data that has not been made durable on disk.footnoteref:[durability,In the context of Apache HBase, _durably on disk_ implies an `hflush()` call on the transaction log. This does not actually imply an `fsync()` to magnetic media, but rather just that the data has been written to the OS cache on all replicas of the log. In the case of a full datacenter power loss, it is possible that the edits are not truly durable.]
-. Any operation that returns a "success" code (e.g. does not throw an exception) will be made durable.footnoteref:[durability]
-. Any operation that returns a "failure" code will not be made durable (subject to the Atomicity guarantees above).
-. All reasonable failure scenarios will not affect any of the guarantees of this document.
-
-== Tunability
-
-All of the above guarantees must be possible within Apache HBase. For users who would like to trade off some guarantees for performance, HBase may offer several tuning options (see the sketch after this list). For example:
-
-* Visibility may be tuned on a per-read basis to allow stale reads or time travel.
-* Durability may be tuned to only flush data to disk on a periodic basis.
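-
-A sketch of what such per-operation tuning looks like in the Java client API. `Durability.ASYNC_WAL` and `Consistency.TIMELINE` are real client enums in HBase 1.0 and later, but timeline (possibly stale) reads additionally require region replicas to be enabled on the cluster:
-
-[source,java]
-----
-// Trade some durability for write latency: acknowledge before the WAL syncs.
-Put put = new Put(Bytes.toBytes("row1"));
-put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
-put.setDurability(Durability.ASYNC_WAL);
-
-// Allow possibly-stale reads to be served from secondary region replicas.
-Get get = new Get(Bytes.toBytes("row1"));
-get.setConsistency(Consistency.TIMELINE);
-----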
-
-== More Information
-
-For more information, see the link:book.html#client[client architecture] and link:book.html#datamodel[data model] sections in the Apache HBase Reference Guide.

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/bulk-loads.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/bulk-loads.adoc b/src/main/site/asciidoc/bulk-loads.adoc
deleted file mode 100644
index fc320d8..0000000
--- a/src/main/site/asciidoc/bulk-loads.adoc
+++ /dev/null
@@ -1,23 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Bulk Loads in Apache HBase (TM)
-
-This page has been retired.  The contents have been moved to the link:book.html#arch.bulk.load[Bulk Loading] section in the Reference Guide.
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/cygwin.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/cygwin.adoc b/src/main/site/asciidoc/cygwin.adoc
deleted file mode 100644
index 11c4df4..0000000
--- a/src/main/site/asciidoc/cygwin.adoc
+++ /dev/null
@@ -1,197 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-
-= Installing Apache HBase (TM) on Windows using Cygwin
-
-== Introduction
-
-link:http://hbase.apache.org[Apache HBase (TM)] is a distributed, column-oriented store, modeled after Google's link:http://research.google.com/archive/bigtable.html[BigTable]. Apache HBase is built on top of link:http://hadoop.apache.org[Hadoop] for its link:http://hadoop.apache.org/mapreduce[MapReduce] and link:http://hadoop.apache.org/hdfs[distributed file system] implementations. All these projects are open-source and part of the link:http://www.apache.org[Apache Software Foundation].
-
-== Purpose
-
-This document explains the *intricacies* of running Apache HBase on Windows using Cygwin as an all-in-one single-node installation for testing and development. The HBase link:http://hbase.apache.org/apidocs/overview-summary.html#overview_description[Overview] and link:book.html#getting_started[QuickStart] guides, on the other hand, go a long way in explaining how to set up link:http://hadoop.apache.org/hbase[HBase] in more complex deployment scenarios.
-
-== Installation
-
-For running Apache HBase on Windows, 3 technologies are required:
-
-* Java
-* Cygwin
-* SSH
-
-The following paragraphs detail the installation of each of the aforementioned technologies.
-
-=== Java
-
-HBase depends on the link:http://java.sun.com/javase/6/[Java Platform, Standard Edition, 6 Release]. So the target system has to be provided with at least the Java Runtime Environment (JRE); however, if the system will also be used for development, the Java Development Kit (JDK) is preferred. You can download the latest versions of both from link:http://java.sun.com/javase/downloads/index.jsp[Sun's download page]. Installation is a simple GUI wizard that guides you through the process.
-
-=== Cygwin
-
-Cygwin is probably the oddest technology in this solution stack. It provides a dynamic link library that emulates most of a *nix environment on Windows, along with a large set of the most common *nix tools. Combined, the DLL and the tools form a very *nix-alike environment on Windows.
-
-For installation, Cygwin provides the link:http://cygwin.com/setup.exe[`setup.exe` utility] that tracks the versions of all installed components on the target system and provides the mechanism for installing or updating everything from the mirror sites of Cygwin.
-
-To support installation, the `setup.exe` utility uses 2 directories on the target system: the *Root* directory for Cygwin (defaults to _C:\cygwin_), which will become _/_ within the eventual Cygwin installation, and the *Local Package* directory (e.g. _C:\cygsetup_), which is the cache where `setup.exe` stores the packages before they are installed. The cache must not be the same folder as the Cygwin root.
-
-Perform the following steps to install Cygwin, which are elaborately detailed in the link:http://cygwin.com/cygwin-ug-net/setup-net.html[2nd chapter] of the link:http://cygwin.com/cygwin-ug-net/cygwin-ug-net.html[Cygwin User's Guide].
-
-. Make sure you have `Administrator` privileges on the target system.
-. Choose and create your *Root* and *Local Package* directories. A good suggestion is to use the `C:\cygwin\root` and `C:\cygwin\setup` folders.
-. Download the `setup.exe` utility and save it to the *Local Package* directory. Run the `setup.exe` utility.
-.. Choose  the `Install from Internet` option.
-.. Choose your *Root* and *Local Package* folders.
-.. Select an appropriate mirror.
-.. Don't select any additional packages yet, as we only want to install Cygwin for now.
-.. Wait for download and install.
-.. Finish the installation.
-. Optionally, you can now also add a shortcut to your Start menu pointing to the `setup.exe` utility in the *Local Package* folder.
-. Add a `CYGWIN_HOME` system-wide environment variable that points to your *Root* directory.
-. Add `%CYGWIN_HOME%\bin` to the end of your `PATH` environment variable.
-. Reboot the system after making changes to the environment variables; otherwise the OS will not be able to find the Cygwin utilities.
-. Test your installation by running your freshly created shortcuts or the `Cygwin.bat` command in the *Root* folder. You should end up in a terminal window that is running a link:http://www.gnu.org/software/bash/manual/bashref.html[Bash shell]. Test the shell by issuing the following commands:
-.. `cd /` should take you to the *Root* directory in Cygwin.
-.. The `ls` command should list all files and folders in the current directory.
-.. Use the `exit` command to end the terminal.
-. When needed, to *uninstall* Cygwin you can simply delete the *Root* and *Local Package* directory, and the *shortcuts* that were created during installation.
-
-=== SSH
-
-HBase (and Hadoop) rely on link:http://nl.wikipedia.org/wiki/Secure_Shell[*SSH*] for inter-process and inter-node *communication* and for launching *remote commands*. SSH will be provisioned on the target system via Cygwin, which supports running Cygwin programs as *Windows services*!
-
-. Rerun the `setup.exe` utility.
-. Leave all parameters as is, skipping through the wizard using the `Next` button until the `Select Packages` panel is shown.
-. Maximize the window and click the `View` button to toggle to the list view, which is ordered alphabetically on `Package`, making it easier to find the packages we'll need.
-. Select the following packages by clicking the status word (normally `Skip`) so each is marked for installation. Use the `Next` button to download and install the packages.
-.. `OpenSSH`
-.. `tcp_wrappers`
-.. `diffutils`
-.. `zlib`
-. Wait for the install to complete and finish the installation.
-
-=== HBase
-
-Download the *latest release* of Apache HBase from the link:http://www.apache.org/dyn/closer.cgi/hbase/[Apache download mirrors]. As the Apache HBase distributable is just a zipped archive, installation is as simple as unpacking the archive so it ends up in its final *installation* directory. Notice that HBase has to be installed in Cygwin; a good directory suggestion is to use `/usr/local/` (or `[Root directory]\usr\local` in Windows slang). You should end up with a `/usr/local/hbase-_version_` installation in Cygwin.
-
-This finishes the installation. We go on with the configuration.
-
-== Configuration
-
-There are 3 parts left to configure: *Java, SSH and HBase* itself. The following paragraphs explain each topic in detail.
-
-=== Java
-
-One important thing to remember in shell scripting in general (i.e. *nix and Windows) is that managing, manipulating and assembling path names that contain spaces can be very hard, due to the need to escape and quote those characters and strings. So we try to stay away from spaces in path names. *nix environments can help us out here very easily by using *symbolic links*.
-
-. Create a link in `/usr/local` to the Java home directory by using the following command and substituting the name of your chosen Java environment: +
-----
-ln -s /cygdrive/c/Program\ Files/Java/_jre name_ /usr/local/_jre name_
-----
-. Test your Java installation by changing directories to your Java folder with `cd /usr/local/_jre name_` and issuing the command `./bin/java -version`. This should output the version of your chosen JRE.
-
-=== SSH 
-
-Configuring *SSH* is quite elaborate, but primarily a question of launching it by default as a *Windows service*.
-
-. On Windows Vista and above make sure you run the Cygwin shell with *elevated privileges*, by right-clicking on the shortcut and using `Run as Administrator`.
-. First of all, we have to make sure the *rights on some crucial files* are correct. Use the commands underneath. You can verify all rights by using the `ls -l` command on the different files. Also, notice that the auto-completion feature in the shell using `TAB` is extremely handy in these situations.
-.. `chmod +r /etc/passwd` to make the passwords file readable for all
-.. `chmod u+w /etc/passwd` to make the passwords file writable for the owner
-.. `chmod +r /etc/group` to make the groups file readable for all
-.. `chmod u+w /etc/group` to make the groups file writable for the owner
-.. `chmod 755 /var` to make the var folder writable to owner and readable and executable to all
-. Edit the */etc/hosts.allow* file using your favorite editor (why not `vi` in the shell!) and make sure the following two lines are in there before the `PARANOID` line: +
-----
-ALL : localhost 127.0.0.1/32 : allow
-ALL : [::1]/128 : allow
-----
-. Next we have to *configure SSH* by using the script `ssh-host-config`.
-.. If this script asks to overwrite an existing `/etc/ssh_config`, answer `yes`.
-.. If this script asks to overwrite an existing `/etc/sshd_config`, answer `yes`.
-.. If this script asks to use privilege separation, answer `yes`.
-.. If this script asks to install `sshd` as a service, answer `yes`. Make sure you started your shell as Administrator!
-.. If this script asks for the CYGWIN value, just press `Enter`, as the default is `ntsec`.
-.. If this script asks to create the `sshd` account, answer `yes`.
-.. If this script asks to use a different user name as service account, answer `no` as the default will suffice.
-.. If this script asks to create the `cyg_server` account, answer `yes`. Enter a password for the account.
-. *Start the SSH service* using `net start sshd` or `cygrunsrv --start sshd`. Notice that `cygrunsrv` is the utility that makes the process run as a Windows service. Confirm that you see a message stating that `the CYGWIN sshd service was started successfully.`
-. Harmonize the Windows and Cygwin *user accounts* by using the commands: +
-----
-mkpasswd -cl > /etc/passwd
-mkgroup --local > /etc/group
-----
-. *Test the installation of SSH*:
-.. Open a new Cygwin terminal.
-.. Use the command `whoami` to verify your userID.
-.. Issue an `ssh localhost` to connect to the system itself.
-.. Answer `yes` when presented with the server's fingerprint.
-.. Issue your password when prompted.
-.. Test a few commands in the remote session.
-.. The `exit` command should take you back to your first shell in Cygwin.
-. Another `exit` should terminate the Cygwin shell.
-
-=== HBase
-
-If all previous configurations are working properly, we just need some tinkering with the *HBase config* files so that paths properly resolve on Windows/Cygwin. All files and paths referenced here start from the HBase `[installation directory]` as the working directory.
-
-. HBase uses the `./conf/hbase-env.sh` file to configure its dependencies on the runtime environment. Copy and uncomment the following lines just underneath their originals, changing them to fit your environment. They should read something like: +
-----
-export JAVA_HOME=/usr/local/_jre name_
-export HBASE_IDENT_STRING=$HOSTNAME
-----
-. HBase uses the _./conf/hbase-default.xml_ file for configuration. Some properties do not resolve to existing directories because the JVM runs on Windows. This is the major issue to keep in mind when working with Cygwin: within the shell all paths are *nix-alike, hence relative to the root `/`. However, every parameter that is to be consumed by the Windows processes themselves needs to be a Windows setting, hence `C:\`-alike. Change the following properties in the configuration file, adjusting paths where necessary to conform with your own installation (a sketch of the resulting overrides follows this list):
-.. `hbase.rootdir` must read e.g. `file:///C:/cygwin/root/tmp/hbase/data`
-.. `hbase.tmp.dir` must read `C:/cygwin/root/tmp/hbase/tmp`
-.. `hbase.zookeeper.quorum` must read `127.0.0.1` because for some reason `localhost` doesn't seem to resolve properly on Cygwin.
-. Make sure the configured `hbase.rootdir` and `hbase.tmp.dir` *directories exist* and have the proper *rights* set up, e.g. by issuing a `chmod 777` on them.
-
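-As a sketch, the three overrides above might look as follows in the configuration file (paths assume the suggested `C:\cygwin\root` layout; adjust them to your own installation):
-
-----
-<property>
-  <name>hbase.rootdir</name>
-  <value>file:///C:/cygwin/root/tmp/hbase/data</value>
-</property>
-<property>
-  <name>hbase.tmp.dir</name>
-  <value>C:/cygwin/root/tmp/hbase/tmp</value>
-</property>
-<property>
-  <name>hbase.zookeeper.quorum</name>
-  <value>127.0.0.1</value>
-</property>
-----
-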
-== Testing
-
-This should conclude the installation and configuration of Apache HBase on Windows using Cygwin. So it's time *to test it*.
-
-. Start a Cygwin *terminal*, if you haven't already.
-. Change directory to the HBase *installation* using `cd /usr/local/hbase-_version_`, preferably using auto-completion.
-. *Start HBase* using the command `./bin/start-hbase.sh`
-.. When prompted to accept the SSH fingerprint, answer `yes`.
-.. When prompted, provide your password. Maybe multiple times.
-.. When the command completes, the HBase server should have started.
-.. However, to be absolutely certain, check the logs in the `./logs` directory for any exceptions.
-. Next we *start the HBase shell* using the command `./bin/hbase shell`.
-. We run some simple *test commands*:
-.. Create a simple table using command `create 'test', 'data'`
-.. Verify the table exists using the command `list`
-.. Insert data into the table using e.g. +
-----
-put 'test', 'row1', 'data:1', 'value1'
-put 'test', 'row2', 'data:2', 'value2'
-put 'test', 'row3', 'data:3', 'value3'
-----
-.. List all rows in the table using the command `scan 'test'`, which should list all the rows previously inserted. Notice how 3 new columns were added without changing the schema!
-.. Finally we get rid of the table by issuing `disable 'test'` followed by `drop 'test'`, verified by `list`, which should give an empty listing.
-. *Leave the shell* using `exit`.
-. To *stop the HBase server*, issue the `./bin/stop-hbase.sh` command and wait for it to complete! Killing the process might corrupt your data on disk.
-. In case of *problems*,
-.. Verify the HBase logs in the `./logs` directory.
-.. Try to fix the problem.
-.. Get help on the forums or IRC (`#hbase@freenode.net`). People are very active and keen to help out!
-.. Stop and retest the server.
-
-== Conclusion
-
-Now that your *HBase* server is running, *start coding* and build that next killer app on this particular, but scalable, datastore!
-

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/export_control.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/export_control.adoc b/src/main/site/asciidoc/export_control.adoc
deleted file mode 100644
index 1bbefb5..0000000
--- a/src/main/site/asciidoc/export_control.adoc
+++ /dev/null
@@ -1,44 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-
-= Export Control
-
-This distribution uses or includes cryptographic software. The country in
-which you currently reside may have restrictions on the import, possession,
-use, and/or re-export to another country, of encryption software. BEFORE
-using any encryption software, please check your country's laws, regulations
-and policies concerning the import, possession, or use, and re-export of
-encryption software, to see if this is permitted. See the
-link:http://www.wassenaar.org/[Wassenaar Arrangement] for more
-information.
-
-The U.S. Government Department of Commerce, Bureau of Industry and Security 
-(BIS), has classified this software as Export Commodity Control Number (ECCN) 
-5D002.C.1, which includes information security software using or performing 
-cryptographic functions with asymmetric algorithms. The form and manner of this
-Apache Software Foundation distribution makes it eligible for export under the 
-License Exception ENC Technology Software Unrestricted (TSU) exception (see the
-BIS Export Administration Regulations, Section 740.13) for both object code and
-source code.
-
-Apache HBase uses the built-in Java cryptography libraries. See Oracle's
-information regarding
-link:http://www.oracle.com/us/products/export/export-regulations-345813.html[Java cryptographic export regulations]
-for more details.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/61d70604/src/main/site/asciidoc/index.adoc
----------------------------------------------------------------------
diff --git a/src/main/site/asciidoc/index.adoc b/src/main/site/asciidoc/index.adoc
deleted file mode 100644
index 9b31c49..0000000
--- a/src/main/site/asciidoc/index.adoc
+++ /dev/null
@@ -1,75 +0,0 @@
-////
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
-////
-
-= Apache HBase(TM) Home
-
-.Welcome to Apache HBase(TM)
-link:http://www.apache.org/[Apache HBase(TM)] is the link:http://hadoop.apache.org[Hadoop] database, a distributed, scalable, big data store.
-
-.When Would I Use Apache HBase?
-Use Apache HBase when you need random, realtime read/write access to your Big Data. +
-This project's goal is the hosting of very large tables -- billions of rows X millions of columns -- atop clusters of commodity hardware.
-
-Apache HBase is an open-source, distributed, versioned, non-relational database modeled after Google's link:http://research.google.com/archive/bigtable.html[Bigtable: A Distributed Storage System for Structured Data] by Chang et al.
-
-Just as Bigtable leverages the distributed data storage provided by the Google File System, Apache HBase provides Bigtable-like capabilities on top of Hadoop and HDFS.
-
-.Features
-- Linear and modular scalability.
-- Strictly consistent reads and writes.
-- Automatic and configurable sharding of tables.
-- Automatic failover support between RegionServers.
-- Convenient base classes for backing Hadoop MapReduce jobs with Apache HBase tables.
-- Easy-to-use Java API for client access.
-- Block cache and Bloom Filters for real-time queries.
-- Query predicate push-down via server-side Filters.
-- Thrift gateway and a RESTful Web service that supports XML, Protobuf, and binary data encoding options.
-- Extensible JRuby-based (JIRB) shell.
-- Support for exporting metrics via the Hadoop metrics subsystem to files or Ganglia, or via JMX.
-
-.Where Can I Get More Information?
-See the link:book.html#arch.overview[Architecture Overview], the link:book.html#faq[FAQ] and the other documentation links at the top!
-
-.Export Control
-The HBase distribution includes cryptographic software. See the link:export_control.html[export control notice].
-
-== News
-Feb 17, 2015:: link:http://www.meetup.com/hbaseusergroup/events/219260093/[HBase meetup around Strata+Hadoop World] in San Jose
-
-January 15th, 2015:: link:http://www.meetup.com/hbaseusergroup/events/218744798/[HBase meetup @ AppDynamics] in San Francisco
-
-November 20th, 2014::  link:http://www.meetup.com/hbaseusergroup/events/205219992/[HBase meetup @ WANdisco] in San Ramon
-
-October 27th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/207386102/[HBase Meetup @ Apple] in Cupertino
-
-October 15th, 2014:: link:http://www.meetup.com/HBase-NYC/events/207655552[HBase Meetup @ Google] on the night before Strata/HW in NYC
-
-September 25th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/203173692/[HBase Meetup @ Continuuity] in Palo Alto
-
-August 28th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/197773762/[HBase Meetup @ Sift Science] in San Francisco
-
-July 17th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/190994082/[HBase Meetup @ HP] in Sunnyvale
-
-June 5th, 2014:: link:http://www.meetup.com/Hadoop-Summit-Community-San-Jose/events/179081342/[HBase BOF at Hadoop Summit], San Jose Convention Center
-
-May 5th, 2014:: link:http://www.hbasecon.com[HBaseCon2014] at the Hilton San Francisco on Union Square
-
-March 12th, 2014:: link:http://www.meetup.com/hbaseusergroup/events/160757912/[HBase Meetup @ Ancestry.com] in San Francisco
-
-View link:old_news.html[Old News]