You are viewing a plain text version of this content. The canonical link for it is here.
Posted to commits@hbase.apache.org by nd...@apache.org on 2015/08/19 00:35:54 UTC

[02/15] hbase git commit: HBASE-14066 clean out old docbook docs from branch-1.

http://git-wip-us.apache.org/repos/asf/hbase/blob/0acbff24/src/main/docbkx/unit_testing.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/unit_testing.xml b/src/main/docbkx/unit_testing.xml
deleted file mode 100644
index 8d8c756..0000000
--- a/src/main/docbkx/unit_testing.xml
+++ /dev/null
@@ -1,330 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<chapter version="5.0" xml:id="unit.tests" xmlns="http://docbook.org/ns/docbook"
-    xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xi="http://www.w3.org/2001/XInclude"
-    xmlns:svg="http://www.w3.org/2000/svg" xmlns:m="http://www.w3.org/1998/Math/MathML"
-    xmlns:html="http://www.w3.org/1999/xhtml" xmlns:db="http://docbook.org/ns/docbook">
-    <!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-
-    <title>Unit Testing HBase Applications</title>
-    <para>This chapter discusses unit testing your HBase application using JUnit, Mockito, MRUnit,
-        and HBaseTestingUtility. Much of the information comes from <link
-            xlink:href="http://blog.cloudera.com/blog/2013/09/how-to-test-hbase-applications-using-popular-tools/"
-            >a community blog post about testing HBase applications</link>. For information on unit
-        tests for HBase itself, see <xref linkend="hbase.tests"/>.</para>
-
-    <section>
-        <title>JUnit</title>
-        <para>HBase uses <link xlink:href="http://junit.org">JUnit</link> 4 for its unit tests.</para>
-        <para>This example will add unit tests to the following example class:</para>
-        <programlisting language="java">
-public class MyHBaseDAO {
-
-    public static void insertRecord(HTableInterface table, HBaseTestObj obj)
-    throws Exception {
-        Put put = createPut(obj);
-        table.put(put);
-    }
-    
-    // Package-private (not private) so the unit test below can call it directly.
-    static Put createPut(HBaseTestObj obj) {
-        Put put = new Put(Bytes.toBytes(obj.getRowKey()));
-        put.add(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1"),
-                    Bytes.toBytes(obj.getData1()));
-        put.add(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2"),
-                    Bytes.toBytes(obj.getData2()));
-        return put;
-    }
-}                
-            </programlisting>
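-        <para>The examples in this chapter also refer to a simple <code>HBaseTestObj</code> data
-            object. It is not shown in the original example; a minimal sketch, assuming a plain
-            POJO with getters and setters, could look like the following:</para>
-        <programlisting language="java">
-public class HBaseTestObj {
-    private String rowKey;
-    private String data1;
-    private String data2;
-
-    public String getRowKey() { return rowKey; }
-    public void setRowKey(String rowKey) { this.rowKey = rowKey; }
-
-    public String getData1() { return data1; }
-    public void setData1(String data1) { this.data1 = data1; }
-
-    public String getData2() { return data2; }
-    public void setData2(String data2) { this.data2 = data2; }
-}
-            </programlisting>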
-        <para>The first step is to add JUnit dependencies to your Maven POM file:</para>
-        <programlisting language="xml"><![CDATA[
-<dependency>
-    <groupId>junit</groupId>
-    <artifactId>junit</artifactId>
-    <version>4.11</version>
-    <scope>test</scope>
-</dependency>                
-                ]]></programlisting>
-        <para>Next, add some unit tests to your code. Tests are annotated with
-                <literal>@Test</literal>. Here, the unit tests are in bold.</para>
-        <programlisting language="java">
-public class TestMyHBaseDAOData {
-  @Test
-  public void testCreatePut() throws Exception {
-    HBaseTestObj obj = new HBaseTestObj();
-    obj.setRowKey("ROWKEY-1");
-    obj.setData1("DATA-1");
-    obj.setData2("DATA-2");
-    Put put = MyHBaseDAO.createPut(obj);
-    <userinput>assertEquals(obj.getRowKey(), Bytes.toString(put.getRow()));
-    assertEquals(obj.getData1(), Bytes.toString(put.get(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1")).get(0).getValue()));
-    assertEquals(obj.getData2(), Bytes.toString(put.get(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2")).get(0).getValue()));</userinput>
-  }
-}                
-            </programlisting>
-        <para>These tests ensure that your <code>createPut</code> method creates, populates, and
-            returns a <code>Put</code> object with expected values. Of course, JUnit can do much
-            more than this. For an introduction to JUnit, see <link
-                xlink:href="https://github.com/junit-team/junit/wiki/Getting-started"
-                >https://github.com/junit-team/junit/wiki/Getting-started</link>. </para>
-    </section>
-
-    <section xml:id="mockito">
-        <title>Mockito</title>
-        <para>Mockito is a mocking framework. It goes further than JUnit by allowing you to test the
-            interactions between objects without having to replicate the entire environment. You can
-            read more about Mockito at its project site, <link
-                xlink:href="https://code.google.com/p/mockito/"
-                >https://code.google.com/p/mockito/</link>.</para>
-        <para>You can use Mockito to do unit testing on smaller units. For instance, you can mock a
-                <classname>org.apache.hadoop.hbase.Server</classname> instance or a
-                <classname>org.apache.hadoop.hbase.master.MasterServices</classname> interface
-            reference rather than a full-blown
-                <classname>org.apache.hadoop.hbase.master.HMaster</classname>.</para>
-        <para>This example builds upon the example code in <xref linkend="unit.tests"/>, to test the
-                <code>insertRecord</code> method.</para>
-        <para>First, add a dependency for Mockito to your Maven POM file.</para>
-        <programlisting language="xml"><![CDATA[
-<dependency>
-    <groupId>org.mockito</groupId>
-    <artifactId>mockito-all</artifactId>
-    <version>1.9.5</version>
-    <scope>test</scope>
-</dependency>                   
-                   ]]></programlisting>
-        <para>Next, add a <code>@RunWith</code> annotation to your test class, to direct it to use
-            Mockito.</para>
-        <programlisting language="java">
-<userinput>@RunWith(MockitoJUnitRunner.class)</userinput>
-public class TestMyHBaseDAO {
-  @Mock 
-  private HTableInterface table;
-  @Mock
-  private HTablePool hTablePool;
-  @Captor
-  private ArgumentCaptor&lt;Put&gt; putCaptor;
-
-  @Test
-  public void testInsertRecord() throws Exception {
-    //return mock table when getTable is called
-    when(hTablePool.getTable("tablename")).thenReturn(table);
-    //create test object and make a call to the DAO that needs testing
-    HBaseTestObj obj = new HBaseTestObj();
-    obj.setRowKey("ROWKEY-1");
-    obj.setData1("DATA-1");
-    obj.setData2("DATA-2");
-    MyHBaseDAO.insertRecord(table, obj);
-    verify(table).put(putCaptor.capture());
-    Put put = putCaptor.getValue();
-  
-    assertEquals(Bytes.toString(put.getRow()), obj.getRowKey());
-    assert(put.has(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1")));
-    assert(put.has(Bytes.toBytes("CF"), Bytes.toBytes("CQ-2")));
-    assertEquals(Bytes.toString(put.get(Bytes.toBytes("CF"),Bytes.toBytes("CQ-1")).get(0).getValue()), "DATA-1");
-    assertEquals(Bytes.toString(put.get(Bytes.toBytes("CF"),Bytes.toBytes("CQ-2")).get(0).getValue()), "DATA-2");
-  }
-}                   
-               </programlisting>
-        <para>This code populates <code>HBaseTestObj</code> with “ROWKEY-1”, “DATA-1”, “DATA-2” as
-            values. It then inserts the record into the mocked table. The Put that the DAO would
-            have inserted is captured, and values are tested to verify that they are what you
-            expected them to be.</para>
-        <para>The key here is to manage the <code>HTablePool</code> and
-            <code>HTableInterface</code> instance creation outside the DAO. This allows you to mock
-            them cleanly and test Puts as shown above. Similarly, you can now expand into other
-            operations such as Get, Scan, or Delete, as sketched below.</para>
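-        <para>For example, a read-side test might stub the <code>Result</code> returned by the
-            mocked table. The <code>getData1</code> DAO method below is hypothetical and not part
-            of the example above; this is only a sketch of the approach:</para>
-        <programlisting language="java">
-  @Test
-  public void testGetData1() throws Exception {
-    // Hypothetical DAO method under test:
-    //   static String getData1(HTableInterface table, String rowKey) -- issues a Get
-    //   and returns the value stored in CF:CQ-1 as a String.
-    Result result = mock(Result.class);
-    when(result.getValue(Bytes.toBytes("CF"), Bytes.toBytes("CQ-1")))
-        .thenReturn(Bytes.toBytes("DATA-1"));
-    // Any Get issued against the mocked table returns the canned Result.
-    when(table.get(any(Get.class))).thenReturn(result);
-
-    assertEquals("DATA-1", MyHBaseDAO.getData1(table, "ROWKEY-1"));
-  }
-               </programlisting>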
-
-    </section>
-    <section>
-        <title>MRUnit</title>
-        <para><link xlink:href="http://mrunit.apache.org/">Apache MRUnit</link> is a library that
-            allows you to unit-test MapReduce jobs. You can use it to test HBase jobs in the same
-            way as other MapReduce jobs.</para>
-        <para>Given a MapReduce job that writes to an HBase table called <literal>MyTest</literal>,
-            which has one column family called <literal>CF</literal>, the reducer of such a job
-            could look like the following:</para>
-        <programlisting language="java"><![CDATA[
-public class MyReducer extends TableReducer<Text, Text, ImmutableBytesWritable> {
-   public static final byte[] CF = "CF".getBytes();
-   public static final byte[] QUALIFIER = "CQ-1".getBytes();
-   public void reduce(Text key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
-     //bunch of processing to extract data to be inserted, in our case, lets say we are simply
-     //appending all the records we receive from the mapper for this particular
-     //key and insert one record into HBase
-     StringBuffer data = new StringBuffer();
-     Put put = new Put(Bytes.toBytes(key.toString()));
-     for (Text val : values) {
-         data = data.append(val);
-     }
-     put.add(CF, QUALIFIER, Bytes.toBytes(data.toString()));
-     //write to HBase
-     context.write(new ImmutableBytesWritable(Bytes.toBytes(key.toString())), put);
-   }
- }  ]]>                  
-                </programlisting>
-        <para>To test this code, the first step is to add a dependency to MRUnit to your Maven POM
-            file. </para>
-        <programlisting language="xml"><![CDATA[
-<dependency>
-   <groupId>org.apache.mrunit</groupId>
-   <artifactId>mrunit</artifactId>
-   <version>1.0.0</version>
-   <scope>test</scope>
-</dependency>                    
-                    ]]></programlisting>
-        <para>Next, use the <code>ReduceDriver</code> provided by MRUnit to test your reducer.</para>
-        <programlisting language="java"><![CDATA[
-public class MyReducerTest {
-    ReduceDriver<Text, Text, ImmutableBytesWritable, Writable> reduceDriver;
-    byte[] CF = "CF".getBytes();
-    byte[] QUALIFIER = "CQ-1".getBytes();
-
-    @Before
-    public void setUp() {
-      MyReducer reducer = new MyReducer();
-      reduceDriver = ReduceDriver.newReduceDriver(reducer);
-    }
-  
-   @Test
-   public void testHBaseInsert() throws IOException {
-      String strKey = "RowKey-1", strValue = "DATA", strValue1 = "DATA1", 
-strValue2 = "DATA2";
-      List<Text> list = new ArrayList<Text>();
-      list.add(new Text(strValue));
-      list.add(new Text(strValue1));
-      list.add(new Text(strValue2));
-      //since in our case all that the reducer is doing is appending the records that the mapper   
-      //sends it, we should get the following back
-      String expectedOutput = strValue + strValue1 + strValue2;
-     //Setup Input, mimic what mapper would have passed
-      //to the reducer and run test
-      reduceDriver.withInput(new Text(strKey), list);
-      //run the reducer and get its output
-      List<Pair<ImmutableBytesWritable, Writable>> result = reduceDriver.run();
-    
-      //extract key from result and verify
-      assertEquals(Bytes.toString(result.get(0).getFirst().get()), strKey);
-    
-      //extract value for CF/QUALIFIER and verify
-      Put a = (Put)result.get(0).getSecond();
-      String c = Bytes.toString(a.get(CF, QUALIFIER).get(0).getValue());
-      assertEquals(expectedOutput,c );
-   }
-
-}                    
-                    ]]></programlisting>
-        <para>Your MRUnit test verifies that the output is as expected, the Put that is inserted
-            into HBase has the correct value, and the ColumnFamily and ColumnQualifier have the
-            correct values.</para>
-        <para>MRUnit includes a <code>MapDriver</code> to test mappers, and you can use MRUnit to
-            test other operations, including reading from HBase, processing data, or writing to
-            HDFS.</para>
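-        <para>As a sketch, a mapper test follows the same pattern as the reducer test. The
-            <code>MyMapper</code> class here is hypothetical (it is not defined in this chapter)
-            and is assumed to split a comma-separated line into a rowkey and a value:</para>
-        <programlisting language="java"><![CDATA[
-public class MyMapperTest {
-    MapDriver<LongWritable, Text, Text, Text> mapDriver;
-
-    @Before
-    public void setUp() {
-      // MyMapper is a hypothetical mapper feeding MyReducer.
-      mapDriver = MapDriver.newMapDriver(new MyMapper());
-    }
-
-    @Test
-    public void testMap() throws IOException {
-      mapDriver.withInput(new LongWritable(1), new Text("RowKey-1,DATA"))
-               .withOutput(new Text("RowKey-1"), new Text("DATA"))
-               .runTest();
-    }
-}]]></programlisting>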
-    </section>
-
-    <section>
-        <title>Integration Testing with an HBase Mini-Cluster</title>
-        <para>HBase ships with HBaseTestingUtility, which makes it easy to write integration tests
-            using a <firstterm>mini-cluster</firstterm>. The first step is to add some dependencies
-            to your Maven POM file. Check the versions to be sure they are appropriate.</para>
-        <programlisting language="xml"><![CDATA[
-<dependency>
-    <groupId>org.apache.hadoop</groupId>
-    <artifactId>hadoop-common</artifactId>
-    <version>2.0.0</version>
-    <type>test-jar</type>
-    <scope>test</scope>
-</dependency>
-
-<dependency>
-    <groupId>org.apache.hbase</groupId>
-    <artifactId>hbase</artifactId>
-    <version>0.98.3</version>
-    <type>test-jar</type>
-    <scope>test</scope>
-</dependency>
-        
-<dependency>
-    <groupId>org.apache.hadoop</groupId>
-    <artifactId>hadoop-hdfs</artifactId>
-    <version>2.0.0</version>
-    <type>test-jar</type>
-    <scope>test</scope>
-</dependency>
-
-<dependency>
-    <groupId>org.apache.hadoop</groupId>
-    <artifactId>hadoop-hdfs</artifactId>
-    <version>2.0.0</version>
-    <scope>test</scope>
-</dependency>                    
-                    ]]></programlisting>
-        <para>This code represents an integration test for the MyHBaseDAO insert shown in <xref
-                linkend="unit.tests"/>.</para>
-        <programlisting language="java">
-public class MyHBaseIntegrationTest {
-    private static HBaseTestingUtility utility;
-    byte[] CF = "CF".getBytes();
-    byte[] QUALIFIER = "CQ-1".getBytes();
-    
-    @Before
-    public void setup() throws Exception {
-    	utility = new HBaseTestingUtility();
-    	utility.startMiniCluster();
-    }
-
-    @Test
-    public void testInsert() throws Exception {
-        HTableInterface table = utility.createTable(Bytes.toBytes("MyTest"),
-                Bytes.toBytes("CF"));
-        HBaseTestObj obj = new HBaseTestObj();
-        obj.setRowKey("ROWKEY-1");
-        obj.setData1("DATA-1");
-        obj.setData2("DATA-2");
-        MyHBaseDAO.insertRecord(table, obj);
-        Get get1 = new Get(Bytes.toBytes(obj.getRowKey()));
-        get1.addColumn(CF, CQ1);
-        Result result1 = table.get(get1);
-        assertEquals(Bytes.toString(result1.getRow()), obj.getRowKey());
-        assertEquals(Bytes.toString(result1.value()), obj.getData1());
-        Get get2 = new Get(Bytes.toBytes(obj.getRowKey()));
-        get2.addColumn(CF, CQ2);
-        Result result2 = table.get(get2);
-        assertEquals(Bytes.toString(result2.getRow()), obj.getRowKey());
-        assertEquals(Bytes.toString(result2.value()), obj.getData2());
-    }
-}                    
-                </programlisting>
-        <para>This code creates an HBase mini-cluster and starts it. Next, it creates a table called
-                <literal>MyTest</literal> with one column family, <literal>CF</literal>. A record is
-            inserted, a Get is performed from the same table, and the insertion is verified.</para>
-        <note>
-                <para>Starting the mini-cluster takes about 20-30 seconds, which is usually
-                    acceptable for integration testing. </para>
-        </note>
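-        <para>Because cluster startup is expensive, you may prefer to start the mini-cluster once
-            per test class rather than before every test. A sketch, assuming JUnit 4 class-level
-            lifecycle annotations, restructures the example above like this:</para>
-        <programlisting language="java">
-    @BeforeClass
-    public static void setUpCluster() throws Exception {
-        utility = new HBaseTestingUtility();
-        utility.startMiniCluster();
-    }
-
-    @AfterClass
-    public static void tearDownCluster() throws Exception {
-        // Stop the mini-cluster once all tests in the class have run.
-        utility.shutdownMiniCluster();
-    }
-                </programlisting>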
-        <para>To use an HBase mini-cluster on Microsoft Windows, you need to use a Cygwin
-            environment.</para>
-        <para>See the blog post at <link
-                xlink:href="http://blog.sematext.com/2010/08/30/hbase-case-study-using-hbasetestingutility-for-local-testing-development/"
-                >HBase Case-Study: Using HBaseTestingUtility for Local Testing and
-                Development</link> (2010) for more information about HBaseTestingUtility.</para>
-    </section>
-
-</chapter>
-
-                      
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hbase/blob/0acbff24/src/main/docbkx/upgrading.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/upgrading.xml b/src/main/docbkx/upgrading.xml
deleted file mode 100644
index d5708a4..0000000
--- a/src/main/docbkx/upgrading.xml
+++ /dev/null
@@ -1,833 +0,0 @@
-<?xml version="1.0"?>
-<chapter
-    xml:id="upgrading"
-    version="5.0"
-    xmlns="http://docbook.org/ns/docbook"
-    xmlns:xlink="http://www.w3.org/1999/xlink"
-    xmlns:xi="http://www.w3.org/2001/XInclude"
-    xmlns:svg="http://www.w3.org/2000/svg"
-    xmlns:m="http://www.w3.org/1998/Math/MathML"
-    xmlns:html="http://www.w3.org/1999/xhtml"
-    xmlns:db="http://docbook.org/ns/docbook">
-    <!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-    <title>Upgrading</title>
-
-
-    <para>You cannot skip major versions when upgrading. If you are upgrading from version 0.90.x
-        to 0.94.x, you must first go from 0.90.x to 0.92.x and then go from 0.92.x to 0.94.x.</para>
-    <note>
-        <para>It may be possible to skip across versions -- for example, go from 0.92.2 straight to
-            0.98.0 by just following the 0.96.x upgrade instructions -- but we have not tried it, so
-            we cannot say whether it works or not.</para>
-    </note>
-    <para> Review <xref
-            linkend="configuration" />, in particular the section on Hadoop version. </para>
-    <section
-        xml:id="hbase.versioning">
-        <title>HBase version number and compatibility</title>
-        <para>HBase has two versioning schemes, pre-1.0 and post-1.0. Both are detailed below. </para>
-		    
-	    <section xml:id="hbase.versioning.post10">
-		  <title>Post 1.0 versions</title>
-		  <para>Starting with the 1.0.0 release, HBase uses <link xlink:href="http://semver.org/">Semantic Versioning</link> for its release versioning.
-		In summary:
-		<blockquote>
-	    <para>
-		Given a version number MAJOR.MINOR.PATCH, increment the:
-        <itemizedlist>
-          <listitem>MAJOR version when you make incompatible API changes,</listitem>
-          <listitem>MINOR version when you add functionality in a backwards-compatible manner, and</listitem>
-          <listitem>PATCH version when you make backwards-compatible bug fixes.</listitem>
-          <listitem>Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.</listitem>
-	    </itemizedlist>
-        </para>
-        </blockquote>
-		</para>
-
-	    <section xml:id="hbase.versioning.compat">
-		  <title>Compatibility Dimensions</title>	
-		<para>In addition to the usual API versioning considerations, HBase has other compatibility dimensions that we need to consider.</para>
-
-	    <section>
-		  <title>Client-Server wire protocol compatibility</title>	
-            <para><itemizedlist>
-              <listitem>Allows updating client and server out of sync.</listitem>
-              <listitem>We may only allow upgrading the server first; that is, the server is backward compatible with an old client, so new APIs are OK.</listitem>
-              <listitem>Example: A user should be able to use an old client to connect to an upgraded cluster.</listitem>
-            </itemizedlist></para>
-        </section>
-	    <section>
-		  <title>Server-Server protocol compatibility</title>	
-          <para><itemizedlist>
-	        <listitem>Servers of different versions can co-exist in the same cluster.</listitem>
-	        <listitem>The wire protocol between servers is compatible.</listitem>
-	        <listitem>Workers for distributed tasks, such as replication and log splitting, can co-exist in the same cluster.</listitem>
-	        <listitem>Dependent protocols (such as using ZK for coordination) will also not be changed.</listitem>
-	        <listitem>Example: A user can perform a rolling upgrade.</listitem>
-          </itemizedlist></para> 
-        </section>
-	    <section>
-		  <title>File format compatibility</title>
-          <para><itemizedlist>
-	        <listitem>Support file formats backward and forward compatible</listitem>
-	        <listitem>Example: File, ZK encoding, directory layout is upgraded automatically as part of an HBase upgrade. User can rollback to the older version and everything will continue to work.</listitem>
-          </itemizedlist></para> 
-        </section>
-	    <section>
-		  <title>Client API compatibility</title>	
-          <para><itemizedlist>
-	        <listitem>Allow changing or removing existing client APIs.</listitem>
-	        <listitem>An API needs to be deprecated for a major version before we will change or remove it.</listitem>
-	        <listitem>Example: A user using a newly deprecated API does not need to modify application code that makes HBase API calls until the next major version.</listitem>
-          </itemizedlist></para> 
-        </section>
-	    <section>
-		  <title>Client Binary compatibility</title>	
-          <para><itemizedlist>
-	        <listitem>Old client code can run unchanged (no recompilation needed) against new jars.</listitem>
-	        <listitem>Example: Old compiled client code will work unchanged with the new jars.</listitem>
-          </itemizedlist></para> 
-        </section>
-	    <section>
-		  <title>Server-Side Limited API compatibility (taken from Hadoop)</title>	
-          <para><itemizedlist>
-	        <listitem>Internal APIs are marked as Stable, Evolving, or Unstable</listitem>
-	        <listitem>This implies binary compatibility for coprocessors and plugins (pluggable classes, including replication) as long as these are only using marked interfaces/classes.</listitem>
-	        <listitem>Example: Old compiled Coprocessor, Filter, or Plugin code will work unchanged with the new jars.</listitem>
-          </itemizedlist></para> 
-        </section>
-	    <section>
-		  <title>Dependency Compatibility</title>	
-          <para><itemizedlist>
-	        <listitem>An upgrade of HBase will not require an incompatible upgrade of a dependent project, including the Java runtime.</listitem>
-	        <listitem>Example: An upgrade of Hadoop will not invalidate any of the compatibilities guarantees we made.</listitem>
-          </itemizedlist></para> 
-        </section>
-	    <section>
-		  <title>Operational Compatibility</title>	
-          <para><itemizedlist>
-	        <listitem>Metric changes</listitem>
-	        <listitem>Behavioral changes of services</listitem>
-	        <listitem>Web page APIs</listitem>
-          </itemizedlist></para> 
-        </section>
-	    <section>
-		  <title>Summary</title>	
-            <para><itemizedlist>
-	          <listitem>A patch upgrade is a drop-in replacement. Any change that is not Java binary compatible would not be allowed.<footnote><link xlink:href="http://docs.oracle.com/javase/specs/jls/se7/html/jls-13.html"/></footnote></listitem>
-	          <listitem>A minor upgrade requires no application/client code modification. Ideally it would be a drop-in replacement but client code, coprocessors, filters, etc might have to be recompiled if new jars are used.</listitem>
-	          <listitem>A major upgrade allows the HBase community to make breaking changes.</listitem> 
-          </itemizedlist></para> 
-        </section>
-   	    <section>
-		  <title>Compatibility Matrix <footnote><para>Note that this indicates what could break, not that it will break. We will/should add specifics in our release notes.</para></footnote></title>	
-           <para> (Y means we support the compatibility. N means we can break it.) </para>
-      <table>
-        <title>Compatibility Matrix</title>
-        <tgroup
-          cols="4"
-          align="left"
-          colsep="1"
-          rowsep="1">
-          <colspec
-            colname="c1"
-            align="left" />
-          <colspec
-            colname="c2"
-            align="center" />
-          <colspec
-            colname="c3"
-            align="center" />
-          <colspec
-            colname="c4"
-            align="center" />
-          <thead>
-            <row>
-              <entry> </entry>
-              <entry>Major</entry>
-              <entry>Minor</entry>
-              <entry>Patch</entry>
-            </row>
-          </thead>
-          <tbody>
-            <row>
-              <entry>Client-Server wire Compatibility</entry>
-              <entry>N</entry>
-              <entry>Y</entry>
-              <entry>Y</entry>
-            </row>
-            <row>
-              <entry>Server-Server Compatibility</entry>
-              <entry>N</entry>
-              <entry>Y</entry>
-              <entry>Y</entry>
-            </row>
-            <row>
-              <entry>File Format Compatibility</entry>
-              <entry>N<footnote><para>Running an offline upgrade tool without rollback might be needed. We will typically only support migrating data from major version X to major version X+1.
-</para></footnote></entry>
-              <entry>Y</entry>
-              <entry>Y</entry>
-            </row>
-            <row>
-              <entry>Client API Compatibility</entry>
-              <entry>N</entry>
-              <entry>Y</entry>
-              <entry>Y</entry>
-            </row>
-            <row>
-              <entry>Client Binary Compatibility</entry>
-              <entry>N</entry>
-              <entry>N</entry>
-              <entry>Y</entry>
-            </row>
-            <row>
-              <entry>Server-Side Limited API Compatibility</entry>
-              <entry></entry>
-              <entry></entry>
-              <entry></entry>
-            </row>
-            <row>
-              <entry><itemizedlist><listitem>Stable</listitem></itemizedlist></entry>
-              <entry>N</entry>
-              <entry>Y</entry>
-              <entry>Y</entry>
-            </row>
-            <row>
-              <entry><itemizedlist><listitem>Evolving</listitem></itemizedlist></entry>
-              <entry>N</entry>
-              <entry>N</entry>
-              <entry>Y</entry>
-            </row>
-            <row>
-              <entry><itemizedlist><listitem>Unstable</listitem></itemizedlist></entry>
-              <entry>N</entry>
-              <entry>N</entry>
-              <entry>N</entry>
-            </row>
-            <row>
-              <entry>Dependency Compatibility</entry>
-              <entry>N</entry>
-              <entry>Y</entry>
-              <entry>Y</entry>
-            </row>
-            <row>
-              <entry>Operational Compatibility</entry>
-              <entry>N</entry>
-              <entry>N</entry>
-              <entry>Y</entry>
-            </row>
-          </tbody>
-          </tgroup>
-        </table>
-      </section>
-
-	    <section xml:id="hbase.api.surface">
-		  <title>HBase API surface</title>
-		  <para> HBase has a lot of API points, but for the compatibility matrix above, we differentiate between Client API, Limited Private API, and Private API. HBase uses a version of 
-		  <link xlink:href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html">Hadoop's Interface classification</link>. HBase's Interface classification classes can be found <link xlink:href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/classification/package-summary.html"> here</link>. 
-		<itemizedlist>
-		<listitem>InterfaceAudience: captures the intended audience, possible values are Public (for end users and external projects), LimitedPrivate (for other Projects, Coprocessors or other plugin points), and Private (for internal use).</listitem>
-        <listitem>InterfaceStability: describes what types of interface changes are permitted. Possible values are Stable, Evolving, Unstable, and Deprecated.</listitem>
-        </itemizedlist>
-		</para>
-		
-       <section xml:id="hbase.client.api">
-		  <title>HBase Client API</title>
-		  <para>The HBase Client API consists of all the classes or methods that are marked with the InterfaceAudience.Public interface. All main classes in hbase-client and dependent modules have either the InterfaceAudience.Public, InterfaceAudience.LimitedPrivate, or InterfaceAudience.Private marker. Not all classes in other modules (hbase-server, etc) have the marker. If a class is not annotated with one of these, it is assumed to be an InterfaceAudience.Private class. </para>
-        </section>
-
-       <section xml:id="hbase.limitetprivate.api">
-		  <title>HBase LimitedPrivate API</title>
-		  <para>The LimitedPrivate annotation comes with a set of target consumers for the interfaces. Those consumers are coprocessors, Phoenix, replication endpoint implementations, or similar. At this point, HBase only guarantees source and binary compatibility for these interfaces between patch versions. </para>
-        </section>
-
-        <section xml:id="hbase.private.api">
-		  <title>HBase Private API</title>
-		  <para>All classes annotated with InterfaceAudience.Private or all classes that do not have the annotation are for HBase internal use only. The interfaces and method signatures can change at any point in time. If you are relying on a particular interface that is marked Private, you should open a jira to propose changing the interface to be Public or LimitedPrivate, or an interface exposed for this purpose. </para>
-        </section>
-
-        </section>
-        </section>
-		
-	    </section>
-	
-	    <section xml:id="hbase.versioning.pre10">
-		  <title>Pre 1.0 versions</title>
-		  <para></para>
-
-        <para> Before the semantic versioning scheme, pre-1.0 HBase tracked either Hadoop's versions (0.2x)
-            or 0.9x versions. If you are into the arcane, check out our old wiki page on <link
-                xlink:href="http://wiki.apache.org/hadoop/Hbase/HBaseVersions">HBase
-                Versioning</link>, which tries to connect the HBase version dots. The sections below cover ONLY
-                the releases before 1.0.</para>
-        <section
-            xml:id="hbase.development.series">
-            <title>Odd/Even Versioning or "Development" Series Releases</title>
-            <para>Ahead of big releases, we have been putting up preview versions to start the
-                feedback cycle turning over earlier. These "Development" Series releases, always
-                odd-numbered, come with no guarantees, not even regarding being able to upgrade
-                between two sequential releases (we reserve the right to break compatibility across
-                "Development" Series releases). Needless to say, these releases are not for
-                production deploys. They are a preview of what is coming, in the hope that interested
-                parties will take the release for a test drive and flag issues we've missed early,
-                ahead of our rolling a production-worthy release. </para>
-            <para>Our first "Development" Series was the 0.89 set that came out ahead of HBase
-                0.90.0. HBase 0.95 is another "Development" Series that portends HBase 0.96.0.
-                0.99.x is the last series in "developer preview" mode before 1.0. Afterwards,
-                we will use the semantic versioning scheme described above.
-            </para>
-        </section>
-        <section
-            xml:id="hbase.binary.compatibility">
-            <title>Binary Compatibility</title>
-            <para>When we say two HBase versions are compatible, we mean that the versions are wire
-                and binary compatible. Compatible HBase versions mean that clients can talk to
-                compatible but differently versioned servers. It means too that you can just swap
-                out the jars of one version and replace them with the jars of another, compatible
-                version and all will just work. Unless otherwise specified, HBase point versions are
-                (mostly) binary compatible. You can safely do rolling upgrades between binary compatible
-                versions; i.e. across point versions: e.g. from 0.94.5 to 0.94.6. See <link
-                xlink:href="http://search-hadoop.com/m/bOOvwHGW981/Does+compatibility+between+versions+also+mean+binary+compatibility%253F&amp;subj=Re+">Does
-                            compatibility between versions also mean binary compatibility?</link>
-                        discussion on the HBase dev mailing list. </para>
-        </section>
-
- 
-	    </section>
-        <section xml:id="hbase.rolling.upgrade">
-          <title><firstterm>Rolling Upgrades</firstterm></title>
-          <para>A rolling upgrade is the process by which you update the servers
-            in your cluster a server at a time. You can rolling upgrade across HBase versions
-            if they are binary or wire compatible.
-            See <xref linkend="hbase.rolling.restart" /> for more on what this means.
-            Coarsely, a rolling upgrade is a graceful stop of each server,
-            an update of the software, and then a restart.  You do this for each server in the cluster.
-            Usually you upgrade the Master first and then the regionservers.
-            See <xref linkend="rolling" /> for tools that can help with the rolling upgrade process.
-          </para>
-          <para>For example, in the below, hbase was symlinked to the actual hbase install.
-            On upgrade, before running a rolling restart over the cluster, we changed the symlink
-            to point at the new HBase software version and then ran 
-            <programlisting>$ HADOOP_HOME=~/hadoop-2.6.0-CRC-SNAPSHOT ~/hbase/bin/rolling-restart.sh --config ~/conf_hbase</programlisting>
-            The rolling-restart script will first gracefully stop and restart the master, and then
-            each of the regionservers in turn. Because the symlink was changed, on restart the
-            server will come up using the new hbase version.  Check logs for errors as the
-            rolling upgrade proceeds.
-          </para>
-        <section
-            xml:id="hbase.rolling.restart">
-            <title>Rolling Upgrade between versions that are Binary/Wire compatible</title>
-            <para>Unless otherwise specified, HBase point versions are binary compatible. You can do
-              a <xlink href="hbase.rolling.upgrade" /> between hbase point versions.
-                For example, you can go to 0.94.6 from 0.94.5 by doing a rolling upgrade
-                across the cluster replacing the 0.94.5 binary with a 0.94.6 binary.</para>
-              <para>In the minor version-particular sections below, we call out where the versions
-                are wire/protocol compatible and, in this case, it is also possible to do a
-                <xref linkend="hbase.rolling.upgrade" />. For example, in
-            <xref linkend="upgrade1.0.rolling.upgrade" />, we
-              state that it is possible to do a rolling upgrade between hbase-0.98.x and hbase-1.0.0.</para>
-        </section>
-        </section>
-    </section>
-    <section xml:id="upgrade1.0">
-        <title>Upgrading from 0.98.x to 1.0.x</title>
-        <para>In this section we first note the significant changes that come in with 1.0.0 HBase and then
-          we go over the upgrade process.  Be sure to read the significant changes section with care
-          so you avoid surprises.
-        </para>
-        <section xml:id="upgrade1.0.changes">
-            <title>Changes of Note!</title>
-            <para>Here we list important changes that are in 1.0.0 since 0.98.x, changes you should
-            be aware of that will go into effect once you upgrade.</para>
-            <section xml:id="zookeeper.3.4"><title>ZooKeeper 3.4 is required in HBase 1.0.0</title>
-              <para>See <xref linkend="zookeeper.requirements" />.</para>
-            </section>
-            <section xml:id="default.ports.changed"><title>HBase Default Ports Changed</title>
-              <para>The ports used by HBase changed.  They used to be in the 600XX range.  In
-                hbase-1.0.0 they have been moved up out of the ephemeral port range and are
-                160XX instead (the Master web UI was 60010 and is now 16010; the RegionServer
-                web UI was 60030 and is now 16030, and so on). If you want to keep the old port
-                locations, copy the port setting configs from <filename>hbase-default.xml</filename>
-                into <filename>hbase-site.xml</filename>, change them back to the old values
-                from hbase-0.98.x era, and ensure you've distributed your configurations before
-              you restart.</para>
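-              <para>For example, to keep the old web UI ports, a sketch of the
-                  <filename>hbase-site.xml</filename> entries (assuming the standard port property
-                  names; verify the values against your <filename>hbase-default.xml</filename>)
-                  might be:</para>
-              <programlisting language="xml"><![CDATA[
-<property>
-  <name>hbase.master.info.port</name>
-  <value>60010</value>
-</property>
-<property>
-  <name>hbase.regionserver.info.port</name>
-  <value>60030</value>
-</property>]]></programlisting>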
-            </section>
-            <section xml:id="upgrade1.0.hbase.bucketcache.percentage.in.combinedcache">
-                <title>hbase.bucketcache.percentage.in.combinedcache configuration has been REMOVED</title>
-                <para>You may have made use of this configuration if you are using BucketCache.
-                    If NOT using BucketCache, this change does not affect you.
-                    Its removal means that your L1 LruBlockCache is now sized
-                    using <varname>hfile.block.cache.size</varname> -- i.e. the way you
-                    would size the onheap L1 LruBlockCache if you were NOT doing
-                    BucketCache -- and the BucketCache size is now whatever the
-                    setting for hbase.bucketcache.size is.  You may need to adjust
-                    configs to get the LruBlockCache and BucketCache sizes set to
-                    what they were in 0.98.x and previous.  If you did not set this
-                    config, its default value was 0.9.  If you do nothing, your
-                    BucketCache will increase in size by 10%.  Your L1 LruBlockCache will
-                    become <varname>hfile.block.cache.size</varname> times your java
-                    heap size (hfile.block.cache.size is a float between 0.0 and 1.0).
-                    To read more, see
-                    <link xlink:href="https://issues.apache.org/jira/browse/HBASE-11520">HBASE-11520 Simplify offheap cache config by removing the confusing "hbase.bucketcache.percentage.in.combinedcache"</link>.
-                </para>
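-                <para>As a sketch only, sizing the two caches explicitly in
-                    <filename>hbase-site.xml</filename> might look like the following; how
-                    <varname>hbase.bucketcache.size</varname> is interpreted (megabytes versus a
-                    fraction) depends on your version, so check your
-                    <filename>hbase-default.xml</filename>:</para>
-                <programlisting language="xml"><![CDATA[
-<!-- Fraction of the Java heap given to the on-heap L1 LruBlockCache -->
-<property>
-  <name>hfile.block.cache.size</name>
-  <value>0.4</value>
-</property>
-<!-- BucketCache capacity -->
-<property>
-  <name>hbase.bucketcache.size</name>
-  <value>4096</value>
-</property>]]></programlisting>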
-          </section>
-          <section xml:id="hbase-12068"><title>If you have your own customer filters....</title>
-            <para>See the release notes on the issue <link xlink:href="https://issues.apache.org/jira/browse/HBASE-12068">HBASE-12068 [Branch-1] Avoid need to always do KeyValueUtil#ensureKeyValue for Filter transformCell</link>;
-              be sure to follow the recommendations therein.
-            </para>
-          </section>
-          <section xml:id="dlr"><title>Distributed Log Replay</title>
-            <para>
-              <xref linkend="distributed.log.replay" /> is off by default in hbase-1.0.
-                Enabling it can make a big difference improving HBase MTTR. Enable this
-                feature if you are doing a clean stop/start when you are upgrading.
-                You cannot rolling upgrade on to this feature (caveat if you are running
-                on a version of hbase in excess of hbase-0.98.4 -- see
-                <link xlink:href="https://issues.apache.org/jira/browse/HBASE-12577">HBASE-12577 Disable distributed log replay by default</link> for more).
-            </para>
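-            <para>A minimal sketch of enabling it in <filename>hbase-site.xml</filename>, assuming
-                the <varname>hbase.master.distributed.log.replay</varname> property used by this
-                release:</para>
-            <programlisting language="xml"><![CDATA[
-<property>
-  <name>hbase.master.distributed.log.replay</name>
-  <value>true</value>
-</property>]]></programlisting>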
-          </section>
-        </section>
-        <section xml:id="upgrade1.0.rolling.upgrade">
-          <title>Rolling upgrade from 0.98.x to HBase 1.0.0</title>
-          <note><title>From 0.96.x to 1.0.0</title>
-            <para>You cannot do a <xlink href="rolling.upgrade" /> from 0.96.x to 1.0.0 without
-              first doing a rolling upgrade to 0.98.x. See comment in
-              <link xlink:href="https://issues.apache.org/jira/browse/HBASE-11164?focusedCommentId=14182330&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&#35;comment-14182330">HBASE-11164 Document and test rolling updates from 0.98 -> 1.0</link> for the why.
-              Also because hbase-1.0.0 enables hfilev3 by default,
-              <link xlink:href="https://issues.apache.org/jira/browse/HBASE-9801">HBASE-9801 Change the default HFile version to V3</link>,
-              and support for hfilev3 only arrives in 0.98, this is another reason you cannot rolling upgrade from hbase-0.96.x;
-              if the rolling upgrade stalls, the 0.96.x servers cannot open files written by the
-              newer hbase-1.0.0 servers, which write hfilev3.
-            </para>
-          </note>
-          <para>There are no known issues running a <xref linkend="hbase.rolling.upgrade" /> from hbase-0.98.x to hbase-1.0.0.
-          
-          </para>
-        </section>
-        <section xml:id="upgrade1.0.from.0.94">
-          <title>Upgrading to 1.0 from 0.94</title>
-          <para>You cannot rolling upgrade from 0.94.x to 1.x.x.  You must stop your cluster,
-            install the 1.x.x software, run the migration described at <xref linkend="executing.the.0.96.upgrade" />
-            (substituting 1.x.x wherever we make mention of 0.96.x in the section below),
-            and then restart.  Be sure to upgrade your ZooKeeper if it is a version earlier than the required 3.4.x.
-          </para>
-
-        </section>
-    </section>
-
-    <section
-        xml:id="upgrade0.98">
-        <title>Upgrading from 0.96.x to 0.98.x</title>
-        <para>A rolling upgrade from 0.96.x to 0.98.x works. The two versions are not binary
-            compatible.</para>
-        <para>Additional steps are required to take advantage of some of the new features of 0.98.x,
-            including cell visibility labels, cell ACLs, and transparent server side encryption. See
-            the <xref
-                linkend="security" /> chapter of this guide for more information. Significant
-            performance improvements include a change to the write ahead log threading model that
-            provides higher transaction throughput under high load, reverse scanners, MapReduce over
-            snapshot files, and striped compaction.</para>
-        <para>Clients and servers can run with 0.98.x and 0.96.x versions. However, applications may
-            need to be recompiled due to changes in the Java API.</para>
-    </section>
-    <section>
-        <title>Upgrading from 0.94.x to 0.98.x</title>
-        <para> A rolling upgrade from 0.94.x directly to 0.98.x does not work. The upgrade path
-            follows the same procedures as <xref
-                linkend="upgrade0.96" />. Additional steps are required to use some of the new
-            features of 0.98.x. See <xref
-                linkend="upgrade0.98" /> for an abbreviated list of these features. </para>
-    </section>
-    <section
-        xml:id="upgrade0.96">
-        <title>Upgrading from 0.94.x to 0.96.x</title>
-        <subtitle>The "Singularity"</subtitle>
-        <note><title>HBase 0.96.x was EOL'd, September 1st, 2014</title><para>
-            Do not deploy 0.96.x. Deploy at least 0.98.x.
-            See <link xlink:href="https://issues.apache.org/jira/browse/HBASE-11642">EOL 0.96</link>.
-        </para></note>
-
-        <para>You will have to stop your old 0.94.x cluster completely to upgrade. If you are
-            replicating between clusters, both clusters will have to go down to upgrade. Make sure
-            it is a clean shutdown. The fewer WAL files around, the faster the upgrade will run (the
-            upgrade will split any log files it finds in the filesystem as part of the upgrade
-            process). All clients must be upgraded to 0.96 too. </para>
-        <para>The API has changed. You will need to recompile your code against 0.96 and you may
-            need to adjust applications to go against new APIs (TODO: List of changes). </para>
-        <section xml:id="executing.the.0.96.upgrade">
-            <title>Executing the 0.96 Upgrade</title>
-            <note>
-              <title>HDFS and ZooKeeper must be up!</title>
-                <para>HDFS and ZooKeeper should be up and running during the upgrade process.</para>
-            </note>
-            <para>hbase-0.96.0 comes with an upgrade script. Run
-                <programlisting language="bourne">$ bin/hbase upgrade</programlisting> to see its usage. The script
-                has two main modes: -check, and -execute. </para>
-            <section>
-                <title>check</title>
-                <para>The <emphasis>check</emphasis> step is run against a running 0.94 cluster. Run
-                    it from a downloaded 0.96.x binary. The <emphasis>check</emphasis> step is
-                    looking for the presence of <filename>HFileV1</filename> files. These are
-                    unsupported in hbase-0.96.0. To purge them -- have them rewritten as HFileV2 --
-                    you must run a compaction. </para>
-                <para>The <emphasis>check</emphasis> step prints stats at the end of its run (grep
-                    for “Result:” in the log), printing the absolute paths of the tables it scanned, any
-                    HFileV1 files found, the regions containing said files (the regions we need to
-                    major compact to purge the HFileV1s), and any corrupted files found. A
-                    corrupt file is unreadable, and so its format is undefined (neither HFileV1 nor HFileV2). </para>
-                <para>To run the check step, run <command>$ bin/hbase upgrade -check</command>. Here
-                    is sample output:</para>
-                <screen>
-Tables Processed:
-hdfs://localhost:41020/myHBase/.META.
-hdfs://localhost:41020/myHBase/usertable
-hdfs://localhost:41020/myHBase/TestTable
-hdfs://localhost:41020/myHBase/t
-
-Count of HFileV1: 2
-HFileV1:
-hdfs://localhost:41020/myHBase/usertable    /fa02dac1f38d03577bd0f7e666f12812/family/249450144068442524
-hdfs://localhost:41020/myHBase/usertable    /ecdd3eaee2d2fcf8184ac025555bb2af/family/249450144068442512
-
-Count of corrupted files: 1
-Corrupted Files:
-hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/1
-Count of Regions with HFileV1: 2
-Regions to Major Compact:
-hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812
-hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af
-
-There are some HFileV1, or corrupt files (files with incorrect major version)
-                </screen>
-                <para>In the above sample output, there are two HFileV1 in two regions, and one
-                    corrupt file. Corrupt files should probably be removed. The regions that have
-                    HFileV1s need to be major compacted. To major compact, start up the hbase shell
-                    and review how to compact an individual region. After the major compaction is
-                    done, rerun the check step and the HFileV1s should be gone, replaced by HFileV2
-                    instances. </para>
-                <para>By default, the check step scans the hbase root directory (defined as
-                    hbase.rootdir in the configuration). To scan a specific directory only, pass the
-                        <emphasis>-dir</emphasis> option.</para>
-                <screen language="bourne">$ bin/hbase upgrade -check -dir /myHBase/testTable</screen>
-                <para>The above command would detect HFileV1s in the /myHBase/testTable directory. </para>
-                <para> Once the check step reports all the HFileV1 files have been rewritten, it is
-                    safe to proceed with the upgrade. </para>
-            </section>
-            <section>
-                <title>execute</title>
-                <para>After the check step shows the cluster is free of HFileV1, it is safe to
-                    proceed with the upgrade. Next is the <emphasis>execute</emphasis> step. You
-                    must <emphasis>SHUTDOWN YOUR 0.94.x CLUSTER</emphasis> before you can run the
-                        <emphasis>execute</emphasis> step. The execute step will not run if it
-                    detects running HBase masters or regionservers. <note>
-                        <para>HDFS and ZooKeeper should be up and running during the upgrade
-                            process. If ZooKeeper is managed by HBase, then you can start ZooKeeper
-                            so it is available to the upgrade by running <command>$
-                                ./hbase/bin/hbase-daemon.sh start zookeeper</command>
-                        </para>
-                    </note>
-                </para>
-                <para> The <emphasis>execute</emphasis> upgrade step is made of three substeps. </para>
-                <itemizedlist>
-                    <listitem>
-                        <para>Namespaces: HBase 0.96.0 has support for namespaces. The upgrade needs
-                            to reorder directories in the filesystem for namespaces to work.</para>
-                    </listitem>
-                    <listitem>
-                        <para>ZNodes: All znodes are purged so that new ones can be written in their
-                            place using a new protobuf'ed format and a few are migrated in place:
-                            e.g. replication and table state znodes</para>
-                    </listitem>
-                    <listitem>
-                        <para>WAL Log Splitting: If the 0.94.x cluster shutdown was not clean, we'll
-                            split WAL logs as part of migration before we startup on 0.96.0. This
-                            WAL splitting runs slower than the native distributed WAL splitting
-                            because it is all inside the single upgrade process (so try to get a
-                            clean shutdown of the 0.94.x cluster if you can). </para>
-                    </listitem>
-                </itemizedlist>
-                <para> To run the <emphasis>execute</emphasis> step, first make sure that you have
-                    copied the hbase-0.96.0 binaries everywhere, under servers and under clients. Make
-                    sure the 0.94.x cluster is down. Then do as follows:</para>
-                <screen language="bourne">$ bin/hbase upgrade -execute</screen>
-                <para>Here is some sample output.</para>
-                <programlisting>
-Starting Namespace upgrade
-Created version file at hdfs://localhost:41020/myHBase with version=7
-Migrating table testTable to hdfs://localhost:41020/myHBase/.data/default/testTable
-…..
-Created version file at hdfs://localhost:41020/myHBase with version=8
-Successfully completed NameSpace upgrade.
-Starting Znode upgrade
-….
-Successfully completed Znode upgrade
-
-Starting Log splitting
-…
-Successfully completed Log splitting
-         </programlisting>
-                <para> If the output from the execute step looks good, stop the zookeeper instance
-                    you started to do the upgrade:
-                    <programlisting language="bourne">$ ./hbase/bin/hbase-daemon.sh stop zookeeper</programlisting>
-                    Now start up hbase-0.96.0. </para>
-            </section>
-            <section
-                xml:id="s096.migration.troubleshooting">
-                <title>Troubleshooting</title>
-                <section
-                    xml:id="s096.migration.troubleshooting.old.client">
-                    <title>Old Client connecting to 0.96 cluster</title>
-                    <para>It will fail with an exception like the below. Upgrade.</para>
-                    <screen>17:22:15  Exception in thread "main" java.lang.IllegalArgumentException: Not a host:port pair: PBUF
-17:22:15  *
-17:22:15   api-compat-8.ent.cloudera.com ��  ���(
-17:22:15    at org.apache.hadoop.hbase.util.Addressing.parseHostname(Addressing.java:60)
-17:22:15    at org.apache.hadoop.hbase.ServerName.&amp;init>(ServerName.java:101)
-17:22:15    at org.apache.hadoop.hbase.ServerName.parseVersionedServerName(ServerName.java:283)
-17:22:15    at org.apache.hadoop.hbase.MasterAddressTracker.bytesToServerName(MasterAddressTracker.java:77)
-17:22:15    at org.apache.hadoop.hbase.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:61)
-17:22:15    at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:703)
-17:22:15    at org.apache.hadoop.hbase.client.HBaseAdmin.&amp;init>(HBaseAdmin.java:126)
-17:22:15    at Client_4_3_0.setup(Client_4_3_0.java:716)
-17:22:15    at Client_4_3_0.main(Client_4_3_0.java:63)</screen>
-                </section>
-            </section>
-            <section>
-                <title>Upgrading <code>META</code> to use Protocol Buffers (Protobuf)</title>
-                <para>When you upgrade from versions prior to 0.96, <code>META</code> needs to be
-                    converted to use protocol buffers. This is controlled by the configuration
-                    option <option>hbase.MetaMigrationConvertingToPB</option>, which is set to
-                        <literal>true</literal> by default. Therefore, by default, no action is
-                    required on your part.</para>
-                <para>The migration is a one-time event. However, every time your cluster starts,
-                        <code>META</code> is scanned to ensure that it does not need to be
-                    converted. If you have a very large number of regions, this scan can take a long
-                    time. Starting in 0.98.5, you can set
-                        <option>hbase.MetaMigrationConvertingToPB</option> to
-                        <literal>false</literal> in <filename>hbase-site.xml</filename>, to disable
-                    this start-up scan. This should be considered an expert-level setting.</para>
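-                <para>A sketch of disabling the start-up scan in
-                    <filename>hbase-site.xml</filename>:</para>
-                <programlisting language="xml"><![CDATA[
-<property>
-  <name>hbase.MetaMigrationConvertingToPB</name>
-  <value>false</value>
-</property>]]></programlisting>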
-            </section>
-        </section>
-
-
-    </section>
-
-    <section
-        xml:id="upgrade0.94">
-        <title>Upgrading from 0.92.x to 0.94.x</title>
-        <para>We used to think that 0.92 and 0.94 were interface compatible and that you could do a
-            rolling upgrade between these versions, but then we discovered that <link
-                xlink:href="https://issues.apache.org/jira/browse/HBASE-5357">HBASE-5357 Use builder
-                pattern in HColumnDescriptor</link> changed method signatures so that rather than returning
-            void they instead return HColumnDescriptor. This will throw an error such as the following:</para>
-        <screen>java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V</screen>
-        <para>So 0.92 and 0.94 are NOT compatible. You cannot do a rolling upgrade between them.</para> </section>
-    <section
-        xml:id="upgrade0.92">
-        <title>Upgrading from 0.90.x to 0.92.x</title>
-        <subtitle>Upgrade Guide</subtitle>
-        <para>You will find that 0.92.0 runs a little differently from 0.90.x releases. Here are a few
-            things to watch out for when upgrading from 0.90.x to 0.92.0. </para>
-        <note>
-            <title>tl;dr</title>
-            <para> If you lack patience, here are the important things to know before upgrading. <orderedlist>
-                    <listitem>
-                        <para>Once you upgrade, you can’t go back.</para>
-                    </listitem>
-                    <listitem>
-                        <para> MSLAB is on by default. Watch that heap usage if you have a lot of
-                            regions.</para>
-                    </listitem>
-                    <listitem>
-                        <para> Distributed Log Splitting is on by default. It should make region server
-                            failover faster. </para>
-                    </listitem>
-                    <listitem>
-                        <para> There’s a separate tarball for security. </para>
-                    </listitem>
-                    <listitem>
-                        <para> If -XX:MaxDirectMemorySize is set in your hbase-env.sh, it’s going to
-                            enable the experimental off-heap cache (You may not want this). </para>
-                    </listitem>
-                </orderedlist>
-            </para>
-        </note>
-
-        <section>
-            <title>You can’t go back! </title>
-            <para>To move to 0.92.0, all you need to do is shut down your cluster, replace your HBase
-                0.90.x binaries with HBase 0.92.0 binaries (be sure you clear out all 0.90.x instances) and
-                restart (You cannot do a rolling restart from 0.90.x to 0.92.x -- you must restart).
-                On startup, the <varname>.META.</varname> table content is rewritten, removing the
-                table schema from the <varname>info:regioninfo</varname> column. Also, any flushes
-                done after the first startup will write out data in the new 0.92.0 file format, <link
-                    xlink:href="http://hbase.apache.org/book.html#hfilev2">HFile V2</link>. This
-                means you cannot go back to 0.90.x once you’ve started HBase 0.92.0 over your HBase
-                data directory. </para>
-        </section>
-
-        <section>
-            <title>MSLAB is ON by default </title>
-            <para>In 0.92.0, the <link
-                    xlink:href="http://hbase.apache.org/book.html#hbase.hregion.memstore.mslab.enabled">hbase.hregion.memstore.mslab.enabled</link>
-                flag is set to true (See <xref
-                    linkend="mslab" />). In 0.90.x it was <constant>false</constant>. When it is
-                enabled, memstores pre-allocate memory in 2MB MSLAB chunks even if the
-                memstore has zero or just a few small elements. This is usually fine, but if you had
-                lots of regions per RegionServer in a 0.90.x cluster (and MSLAB was off), you may
-                find yourself OOME'ing on upgrade because the <code>thousands of regions * number of
-                    column families * 2MB MSLAB (at a minimum)</code> puts your heap over the top.
-                Set <varname>hbase.hregion.memstore.mslab.enabled</varname> to
-                    <constant>false</constant>, or reduce the MSLAB size from 2MB by setting
-                    <varname>hbase.hregion.memstore.mslab.chunksize</varname> to a smaller value
-                (see the example configuration below).
-            </para>
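-            <para>As a rough sketch only (the property names are those described above; the chunk
-                size shown is an arbitrary example, not a recommendation), either of the following
-                <filename>hbase-site.xml</filename> entries would reduce MSLAB memory pressure:</para>
-            <programlisting language="xml"><![CDATA[
-<!-- Option 1: turn MSLAB off entirely -->
-<property>
-  <name>hbase.hregion.memstore.mslab.enabled</name>
-  <value>false</value>
-</property>
-
-<!-- Option 2: shrink the per-chunk allocation below the 2MB default (example value: 1MB) -->
-<property>
-  <name>hbase.hregion.memstore.mslab.chunksize</name>
-  <value>1048576</value>
-</property>]]></programlisting>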
-        </section>
-
-        <section xml:id="dls">
-            <title>Distributed Log Splitting is on by default </title>
-            <para>Previously, WAL logs on crash were split by the Master alone. In 0.92.0, log
-                splitting is done by the cluster (See “HBASE-1364 [performance] Distributed
-                splitting of regionserver commit logs” or see the blog post
-                <link xlink:href="http://blog.cloudera.com/blog/2012/07/hbase-log-splitting/">Apache HBase Log Splitting</link>).
-                This should cut down significantly on the
-                amount of time it takes splitting logs and getting regions back online again.
-            </para>
-        </section>
-
-        <section>
-            <title>Memory accounting is different now </title>
-            <para>In 0.92.0, <xref
-                    linkend="hfilev2" /> indices and bloom filters take up residence in the same LRU
-                used for caching blocks that come from the filesystem. In 0.90.x, the HFile v1 indices
-                lived outside of the LRU, so they took up space even if the index was on a ‘cold’
-                file, one that wasn’t being actively used. With the indices now in the LRU, you may
-                find you have less space for block caching. Adjust your block cache accordingly. See
-                <xref
-                    linkend="block.cache" /> for more detail. The block cache's default size has been
-                changed in 0.92.0 from 0.2 (20 percent of heap) to 0.25 (25 percent). </para>
-        </section>
-
-
-        <section>
-            <title>On the Hadoop version to use </title>
-            <para>Run 0.92.0 on Hadoop 1.0.x (or CDH3u3 when it ships). The performance benefits are
-                worth making the move. Otherwise, our Hadoop prescription is as it has been; you
-                need a Hadoop that supports a working sync. See <xref
-                    linkend="hadoop" />. </para>
-
-            <para>If running on Hadoop 1.0.x (or CDH3u3), enable local read. See <link
-                    xlink:href="http://files.meetup.com/1350427/hug_ebay_jdcryans.pdf">Practical
-                    Caching</link> presentation for ruminations on the performance benefits of ‘going
-                local’ (and for how to enable local reads). </para>
-        </section>
-        <section>
-            <title>HBase 0.92.0 ships with ZooKeeper 3.4.2 </title>
-            <para>If you can, upgrade your ZooKeeper ensemble. If you can’t, 3.4.2 clients should work
-                against 3.3.X ensembles (HBase makes use of the 3.4.2 API). </para>
-        </section>
-        <section>
-            <title>Online alter is off by default </title>
-            <para>In 0.92.0, we’ve added an experimental online schema alter facility (See <xref
-                    linkend="hbase.online.schema.update.enable" />). It’s off by default. Enable it
-                at your own risk. Online alter and splitting tables do not play well together, so be
-                sure your cluster is quiescent while using this feature (for now); a sample
-                configuration is sketched below. </para>
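-            <para>A minimal sketch of the <filename>hbase-site.xml</filename> entry for enabling the
-                experimental facility (again, enable it at your own risk):</para>
-            <programlisting language="xml"><![CDATA[
-<property>
-  <name>hbase.online.schema.update.enable</name>
-  <value>true</value>
-</property>]]></programlisting>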
-        </section>
-        <section>
-            <title>WebUI </title>
-            <para>The webui has had a few additions made in 0.92.0. It now shows a list of the
-                regions currently transitioning, recent compactions/flushes, and a process list of
-                running processes (usually empty if all is well and requests are being handled
-                promptly). Other additions include requests by region, a debugging servlet dump,
-                etc. </para>
-        </section>
-        <section>
-            <title>Security tarball </title>
-            <para>We now ship with two tarballs: secure and insecure HBase. Documentation on how to
-                setup a secure HBase is on the way. </para>
-        </section>
-
-        <section>
-            <title>Changes in HBase replication </title>
-            <para>0.92.0 adds two new features: multi-slave and multi-master replication. The way to
-                enable this is the same as adding a new peer, so in order to have multi-master you
-                would just run add_peer for each cluster that acts as a master to the other slave
-                clusters (see the sketch below). Collisions are handled at the timestamp level, which
-                may or may not be what you want; this needs to be evaluated on a per-use-case basis.
-                Replication is still experimental in 0.92 and is disabled by default, so run it at
-                your own risk. </para>
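-            <para>As an illustrative sketch only (the peer id and ZooKeeper quorum below are
-                hypothetical), a slave cluster is added as a peer from the HBase shell on the source
-                cluster:</para>
-            <screen>hbase> add_peer '1', "zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase"</screen>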
-        </section>
-
-
-        <section>
-            <title>RegionServer now aborts if OOME </title>
-            <para>On an OOME, we now have the JVM kill -9 the RegionServer process so it goes down
-                fast. Previously, a RegionServer might stick around after incurring an OOME, limping
-                along in some wounded state. To disable this facility (though we recommend you leave
-                it in place), you’d need to edit the bin/hbase file. Look for the addition of the
-                -XX:OnOutOfMemoryError="kill -9 %p" arguments (See [HBASE-4769] - ‘Abort
-                RegionServer Immediately on OOME’). </para>
-        </section>
-
-
-        <section>
-            <title>HFile V2 and the “Bigger, Fewer” Tendency </title>
-            <para>0.92.0 stores data in a new format, <xref
-                    linkend="hfilev2" />. As HBase runs, it will move all your data from HFile v1 to
-                HFile v2 format. This auto-migration will run in the background as flushes and
-                compactions run. HFile V2 allows HBase to run with larger regions/files. In fact, we
-                encourage all HBasers going forward to tend toward Facebook axiom #1: run with
-                larger, fewer regions. If you have lots of regions now -- more than hundreds per host --
-                you should look into setting your region size up after you move to 0.92.0 (in
-                0.92.0, the default size is now 1G, up from 256M; see the example below), and then
-                running the online merge tool (See “HBASE-1621 merge tool should work on online
-                cluster, but disabled table”).
-            </para>
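-            <para>Region size is governed by <varname>hbase.hregion.max.filesize</varname>. As an
-                illustrative sketch only (the 4G figure is an arbitrary example, not a
-                recommendation), an <filename>hbase-site.xml</filename> override might look like:</para>
-            <programlisting language="xml"><![CDATA[
-<property>
-  <name>hbase.hregion.max.filesize</name>
-  <value>4294967296</value>
-</property>]]></programlisting>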
-        </section>
-    </section>
-    <section
-        xml:id="upgrade0.90">
-        <title>Upgrading to HBase 0.90.x from 0.20.x or 0.89.x</title>
-        <para>This version of 0.90.x HBase can be started on data written by HBase 0.20.x or HBase
-            0.89.x. There is no need for a migration step. HBase 0.89.x and 0.90.x do write out the
-            names of region directories differently -- they name them with an md5 hash of the region
-            name rather than a Jenkins hash -- which means that once started, there is no going
-            back to HBase 0.20.x. </para>
-        <para> Be sure to remove the <filename>hbase-default.xml</filename> from your
-                <filename>conf</filename> directory on upgrade. A 0.20.x version of this file will
-            have sub-optimal configurations for 0.90.x HBase. The
-                <filename>hbase-default.xml</filename> file is now bundled into the HBase jar and
-            read from there. If you would like to review the content of this file, see it in the src
-            tree at <filename>src/main/resources/hbase-default.xml</filename> or see <xref
-                linkend="hbase_default_configurations" />. </para>
-        <para> Finally, if upgrading from 0.20.x, check your <varname>.META.</varname> schema in the
-            shell. In the past we recommended that users run with a 16kb
-                <varname>MEMSTORE_FLUSHSIZE</varname>. Run <code>hbase> scan '-ROOT-'</code> in the
-            shell. This will output the current <varname>.META.</varname> schema. Check
-                <varname>MEMSTORE_FLUSHSIZE</varname> size. Is it 16kb (16384)? If so, you will need
-            to change this (The 'normal'/default value is 64MB (67108864)). Run the script
-                <filename>bin/set_meta_memstore_size.rb</filename>. This will make the necessary
-            edit to your <varname>.META.</varname> schema. Failure to run this change will make for
-            a slow cluster. See <link
-                        xlink:href="https://issues.apache.org/jira/browse/HBASE-3499">HBASE-3499
-                        Users upgrading to 0.90.0 need to have their .META. table updated with the
-                        right MEMSTORE_SIZE</link>
-                </para>
-    </section>
-</chapter>