Posted to commits@hbase.apache.org by bu...@apache.org on 2015/07/14 04:49:35 UTC

[10/15] hbase git commit: HBASE-14066 clean out old docbook docs from branch-1.

http://git-wip-us.apache.org/repos/asf/hbase/blob/fdd2692f/src/main/docbkx/developer.xml
----------------------------------------------------------------------
diff --git a/src/main/docbkx/developer.xml b/src/main/docbkx/developer.xml
deleted file mode 100644
index a6b5dc2..0000000
--- a/src/main/docbkx/developer.xml
+++ /dev/null
@@ -1,2343 +0,0 @@
-<?xml version="1.0"?>
-<chapter
-    xml:id="developer"
-    version="5.0"
-    xmlns="http://docbook.org/ns/docbook"
-    xmlns:xlink="http://www.w3.org/1999/xlink"
-    xmlns:xi="http://www.w3.org/2001/XInclude"
-    xmlns:svg="http://www.w3.org/2000/svg"
-    xmlns:m="http://www.w3.org/1998/Math/MathML"
-    xmlns:html="http://www.w3.org/1999/xhtml"
-    xmlns:db="http://docbook.org/ns/docbook">
-    <!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-    <title>Building and Developing Apache HBase</title>
-    <para>This chapter contains information and guidelines for building and releasing HBase code and
-        documentation. Being familiar with these guidelines will help the HBase committers to use
-        your contributions more easily.</para>
-    <section xml:id="getting.involved">
-        <title>Getting Involved</title>
-        <para>Apache HBase gets better only when people contribute! If you are looking to contribute
-            to Apache HBase, look for <link
-                xlink:href="https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)"
-                >issues in JIRA tagged with the label 'beginner'</link>. These are issues HBase
-            contributors have deemed worthy but not of immediate priority, and a good way to ramp
-            up on HBase internals. See <link xlink:href="http://search-hadoop.com/m/DHED43re96">What label
-                is used for issues that are good on ramps for new contributors?</link> from the dev
-            mailing list for background.</para>
-        <para>Before you get started submitting code to HBase, please refer to <xref
-                linkend="developing"/>.</para>
-        <para>As Apache HBase is an Apache Software Foundation project, see <xref linkend="asf"/>
-            for more information about how the ASF functions. </para>
-        <section xml:id="mailing.list">
-            <title>Mailing Lists</title>
-            <para>Sign up for the dev-list and the user-list.  See the
-                <link xlink:href="http://hbase.apache.org/mail-lists.html">mailing lists</link> page.
-                Posing questions - and helping to answer other people's questions - is encouraged!
-                There are varying levels of experience on both lists, so patience and politeness are encouraged (and please
-                stay on topic).
-            </para>
-        </section>
-        <section xml:id="irc">
-            <title>Internet Relay Chat (IRC)</title>
-            <para>For real-time questions and discussions, use the <literal>#hbase</literal> IRC
-                channel on the <link xlink:href="https://freenode.net/">FreeNode</link> IRC network.
-                FreeNode offers a web-based client, but most people prefer a native client, and
-                several clients are available for each operating system.</para>
-        </section>
-        <section xml:id="jira">
-            <title>Jira</title>
-            <para>Check for existing issues in <link
-                xlink:href="https://issues.apache.org/jira/browse/HBASE">Jira</link>. Whether it is
-                a new feature request, an enhancement, or a bug, file a ticket. </para>
-            <para>To check for existing issues which you can tackle as a beginner, search for <link
-                    xlink:href="https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)"
-                    >issues in JIRA tagged with the label 'beginner'</link>.</para>
-            <itemizedlist xml:id="jira.priorities">
-                <title>JIRA Priorities</title>
-                <listitem>
-                    <para>Blocker: Should only be used if the issue WILL reliably cause data loss
-                        or cluster instability.</para>
-                </listitem>
-                <listitem>
-                    <para>Critical: The issue described can cause data loss or cluster instability
-                        in some cases.</para>
-                </listitem>
-                <listitem>
-                    <para>Major: Important but not tragic issues, like updates to the client API
-                        that will add a lot of much-needed functionality or significant bugs that
-                        need to be fixed but that don't cause data loss.</para>
-                </listitem>
-                <listitem>
-                    <para>Minor: Useful enhancements and annoying but not damaging bugs.</para>
-                </listitem>
-                <listitem>
-                    <para>Trivial: Useful enhancements but generally cosmetic.</para>
-                </listitem>
-            </itemizedlist>
-            <example xml:id="submitting.patches.jira.code">
-                <title>Code Blocks in Jira Comments</title>
-                <para>A commonly used macro in Jira is {code}. Everything inside the tags is
-                    preformatted, as in this example.</para>
-                <programlisting>
-{code}
-code snippet
-{code}
-            </programlisting>
-            </example>
-        </section>  <!--  jira -->
-        
-    </section>  <!--  getting involved -->
-    
-    <section xml:id="repos">
-        <title>Apache HBase Repositories</title>
-        <para>There are two different repositories for Apache HBase: Subversion (SVN) and Git. Git
-            is our repository of record for everything except the Apache HBase website. The project
-            used to be on SVN and has since migrated; see <link xlink:href="https://issues.apache.org/jira/browse/INFRA-7768"
-                >Migrate Apache HBase SVN Repos to Git</link>. Updating hbase.apache.org still
-            requires use of SVN (See <xref linkend="hbase.org"/>). See <link
-                xlink:href="http://hbase.apache.org/source-repository.html">Source Code
-                Management</link> page for contributor and committer links, or search for HBase on the
-                <link xlink:href="http://git.apache.org/">Apache Git</link> page.</para>
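-        <para>As an illustration only, a read-only clone of the Git repository of record can be
-            made with a command such as the following (the mirror URL shown is one of several;
-            check the Source Code Management page above for the current list):</para>
-        <programlisting language="bourne">git clone https://git-wip-us.apache.org/repos/asf/hbase.git</programlisting>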
-    </section>
-
-    <section xml:id="ides">
-        <title>IDEs</title>
-        <section xml:id="eclipse">
-            <title>Eclipse</title>
-            <section xml:id="eclipse.code.formatting">
-                <title>Code Formatting</title>
-                <para>Under the <filename>dev-support/</filename> folder, you will find
-                        <filename>hbase_eclipse_formatter.xml</filename>. We encourage you to have
-                    this formatter in place in Eclipse when editing HBase code.</para>
-                <procedure>
-                    <title>Load the HBase Formatter Into Eclipse</title>
-                    <step>
-                        <para>Open the <menuchoice>
-                                <guimenu>Eclipse</guimenu>
-                                <guimenuitem>Preferences</guimenuitem>
-                            </menuchoice> menu item.</para>
-                    </step>
-                    <step>
-                        <para>In Preferences, click the <menuchoice>
-                                <guimenu>Java</guimenu>
-                                <guisubmenu>Code Style</guisubmenu>
-                                <guimenuitem>Formatter</guimenuitem>
-                            </menuchoice> menu item.</para>
-                    </step>
-                    <step>
-                        <para>Click <guibutton>Import</guibutton> and browse to the location of the
-                                <filename>hbase_eclipse_formatter.xml</filename> file, which is in
-                            the <filename>dev-support/</filename> directory. Click
-                                <guibutton>Apply</guibutton>.</para>
-                    </step>
-                    <step>
-                        <para>Still in Preferences, click <menuchoice>
-                                <guimenu>Java Editor</guimenu>
-                                <guimenuitem>Save Actions</guimenuitem>
-                            </menuchoice>. Be sure the following options are selected:</para>
-                        <itemizedlist>
-                            <listitem><para>Perform the selected actions on save</para></listitem>
-                            <listitem><para>Format source code</para></listitem>
-                            <listitem><para>Format edited lines</para></listitem>
-                        </itemizedlist>
-                        <para>Click <guibutton>Apply</guibutton>. Close all dialog boxes and return
-                            to the main window.</para>
-                    </step>
-                </procedure>
-
-                <para>In addition to the automatic formatting, make sure you follow the style
-                    guidelines explained in <xref linkend="common.patch.feedback"/></para>
-                <para>Also, no <code>@author</code> tags - that's a rule. Quality Javadoc comments
-                    are appreciated. And include the Apache license.</para>
-            </section>
-            <section xml:id="eclipse.git.plugin">
-                <title>Eclipse Git Plugin</title>
-                <para>If you cloned the project via git, download and install the Git plugin (EGit).
-                    Attach to your local git repo (via the <guilabel>Git Repositories</guilabel>
-                    window) and you'll be able to see file revision history, generate patches,
-                    etc.</para>
-            </section>
-            <section xml:id="eclipse.maven.setup">
-                <title>HBase Project Setup in Eclipse using <code>m2eclipse</code></title>
-                <para>The easiest way is to use the <command>m2eclipse</command> plugin for Eclipse.
-                    Eclipse Indigo or newer includes <command>m2eclipse</command>, or you can
-                    download it from <link xlink:href="http://www.eclipse.org/m2e/"
-                        >http://www.eclipse.org/m2e/</link>. It provides Maven integration for
-                    Eclipse, and even lets you use the direct Maven commands from within Eclipse to
-                    compile and test your project.</para>
-                <para>To import the project, click <menuchoice>
-                        <guimenu>File</guimenu>
-                        <guisubmenu>Import</guisubmenu>
-                        <guisubmenu>Maven</guisubmenu>
-                        <guimenuitem>Existing Maven Projects</guimenuitem>
-                    </menuchoice> and select the HBase root directory. <code>m2eclipse</code>
-                    locates all the hbase modules for you.</para>
-                <para>If you install <command>m2eclipse</command> and import HBase into your
-                    workspace, do the following to fix your Eclipse Build Path. </para>
-                <orderedlist>
-                    <listitem>
-                        <para>Remove <filename>target</filename> folder</para>
-                    </listitem>
-                    <listitem>
-                        <para>Add <filename>target/generated-jamon</filename> and
-                                <filename>target/generated-sources/java</filename> folders.</para>
-                    </listitem>
-                    <listitem>
-                        <para>Remove from your Build Path the exclusions on the
-                                <filename>src/main/resources</filename> and
-                                <filename>src/test/resources</filename> directories to avoid error
-                            messages in the console, such as the following:</para>
-                        <screen>Failed to execute goal 
-org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project hbase:
-'An Ant BuildException has occured: Replace: source file .../target/classes/hbase-default.xml 
-doesn't exist</screen>
-                        <para>This will also reduce the Eclipse build cycles and make your life
-                            easier when developing. </para>
-                    </listitem>
-                </orderedlist>
-            </section>
-            <section xml:id="eclipse.commandline">
-                <title>HBase Project Setup in Eclipse Using the Command Line</title>
-                <para>Instead of using <code>m2eclipse</code>, you can generate the Eclipse files
-                    from the command line. </para>
-                <orderedlist>
-                    <listitem>
-                        <para>First, run the following command, which builds HBase. You only need to
-                            do this once.</para>
-                        <programlisting language="bourne">mvn clean install -DskipTests</programlisting>
-                    </listitem>
-                    <listitem>
-                        <para>Close Eclipse, and execute the following command from the terminal, in
-                            your local HBase project directory, to generate new
-                                <filename>.project</filename> and <filename>.classpath</filename>
-                            files.</para>
-                        <programlisting language="bourne">mvn eclipse:eclipse</programlisting>
-                    </listitem>
-                    <listitem>
-                        <para>Reopen Eclipse and import the <filename>.project</filename> file in
-                            the HBase directory to a workspace.</para>
-                    </listitem>
-                </orderedlist>
-            </section>
-            <section xml:id="eclipse.maven.class">
-                <title>Maven Classpath Variable</title>
-                <para>The <varname>$M2_REPO</varname> classpath variable needs to be set up for the
-                    project. This needs to be set to your local Maven repository, which is usually
-                        <filename>~/.m2/repository</filename>.</para>
-                <para>If this classpath variable is not configured, you will see compile errors in
-                    Eclipse like this: </para>
-                <screen>
-Description	Resource	Path	Location	Type
-The project cannot be built until build path errors are resolved	hbase		Unknown	Java Problem
-Unbound classpath variable: 'M2_REPO/asm/asm/3.1/asm-3.1.jar' in project 'hbase'	hbase		Build path	Build Path Problem
-Unbound classpath variable: 'M2_REPO/com/google/guava/guava/r09/guava-r09.jar' in project 'hbase'	hbase		Build path	Build Path Problem
-Unbound classpath variable: 'M2_REPO/com/google/protobuf/protobuf-java/2.3.0/protobuf-java-2.3.0.jar' in project 'hbase'	hbase		Build path	Build Path Problem Unbound classpath variable:
-                </screen>
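-                <para>As a sketch of one alternative to setting the variable through the Eclipse
-                    preferences dialog, the <code>maven-eclipse-plugin</code> can write
-                    <varname>M2_REPO</varname> into a workspace for you (the workspace path below is
-                    a placeholder for your own):</para>
-                <programlisting language="bourne">mvn -Declipse.workspace=<replaceable>/path/to/your/workspace</replaceable> eclipse:configure-workspace</programlisting>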
-            </section>
-            <section xml:id="eclipse.issues">
-                <title>Eclipse Known Issues</title>
-                <para>Eclipse will currently complain about <filename>Bytes.java</filename>. It is
-                    not possible to turn these errors off.</para>
-                <screen>
-Description	Resource	Path	Location	Type
-Access restriction: The method arrayBaseOffset(Class) from the type Unsafe is not accessible due to restriction on required library /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Classes/classes.jar	Bytes.java	/hbase/src/main/java/org/apache/hadoop/hbase/util	line 1061	Java Problem
-Access restriction: The method arrayIndexScale(Class) from the type Unsafe is not accessible due to restriction on required library /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Classes/classes.jar	Bytes.java	/hbase/src/main/java/org/apache/hadoop/hbase/util	line 1064	Java Problem
-Access restriction: The method getLong(Object, long) from the type Unsafe is not accessible due to restriction on required library /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Classes/classes.jar	Bytes.java	/hbase/src/main/java/org/apache/hadoop/hbase/util	line 1111	Java Problem
-             </screen>
-            </section>
-            <section xml:id="eclipse.more">
-                <title>Eclipse - More Information</title>
-                <para>For additional information on setting up Eclipse for HBase development on
-                    Windows, see <link
-                        xlink:href="http://michaelmorello.blogspot.com/2011/09/hbase-subversion-eclipse-windows.html"
-                        >Michael Morello's blog</link> on the topic. </para>
-            </section>  
-        </section>
-        <section>
-            <title>IntelliJ IDEA</title>
-            <para>You can set up IntelliJ IDEA for similar functionality to Eclipse. Follow these steps.</para>
-            <orderedlist>
-                <title>Project Setup in IntelliJ IDEA</title>
-                <listitem>
-                    <para>Select <menuchoice>
-                            <guimenu>Import Project</guimenu>
-                            <guisubmenu>Import Project From External Model</guisubmenu>
-                            <guimenuitem>Maven</guimenuitem>
-                        </menuchoice></para>
-                </listitem>
-                <listitem>
-                    <para>You do not need to select a profile. Be sure <guilabel>Maven project
-                            required</guilabel> is selected, and click
-                        <guibutton>Next</guibutton>.</para>
-                </listitem>
-                <listitem>
-                    <para>Select the location for the JDK.</para>
-                </listitem>
-            </orderedlist>
-            <formalpara>
-                <title>Using the HBase Formatter in IntelliJ IDEA</title>
-                <para>Using the Eclipse Code Formatter plugin for IntelliJ IDEA, you can import the
-                    HBase code formatter described in <xref linkend="eclipse.code.formatting" />.</para>
-            </formalpara>
-        </section>
-        <section>
-            <title>Other IDEs</title>
-            <para>It would be useful to mirror the <xref linkend="eclipse"/> set-up instructions
-                for other IDEs. If you would like to assist, please have a look at <link
-                    xlink:href="https://issues.apache.org/jira/browse/HBASE-11704"
-                    >HBASE-11704</link>.</para>
-        </section>
-    </section>
-
-    <section xml:id="build">
-        <title>Building Apache HBase</title>
-        <section xml:id="build.basic">
-            <title>Basic Compile</title>
-            <para>HBase is compiled using Maven. You must use Maven 3.x. To check your Maven
-                version, run the command <command>mvn -version</command>.</para>
-            <note>
-                <title>JDK Version Requirements</title>
-                <para> Starting with HBase 1.0 you must use Java 7 or later to build from source
-                    code. See <xref linkend="java"/> for more complete information about supported
-                    JDK versions. </para>
-            </note>
-            <section xml:id="maven.build.commands">
-                <title>Maven Build Commands</title>
-                <para>All commands are executed from the local HBase project directory. </para>
-                <section>
-                    <title>Package</title>
-                    <para>The simplest command to compile HBase from its java source code is to use
-                        the <code>package</code> target, which builds JARs with the compiled
-                        files.</para>
-                    <programlisting language="bourne">mvn package -DskipTests</programlisting>
-                    <para>Or, to clean up before compiling:</para>
-                    <programlisting language="bourne">mvn clean package -DskipTests</programlisting>
-                    <para>With Eclipse set up as explained above in <xref linkend="eclipse"/>, you
-                        can also use the <guimenu>Build</guimenu> command in Eclipse. To create the
-                        full installable HBase package takes a little bit more work, so read on.
-                    </para>
-                </section>
-                <section xml:id="maven.build.commands.compile">
-                    <title>Compile</title>
-                    <para>The <code>compile</code> target does not create the JARs with the compiled
-                        files.</para>
-                    <programlisting language="bourne">mvn compile</programlisting>
-                    <programlisting language="bourne">mvn clean compile</programlisting>
-                </section>
-                <section>
-                    <title>Install</title>
-                    <para>To install the JARs in your <filename>~/.m2/</filename> directory, use the
-                            <code>install</code> target.</para>
-                    <programlisting language="bourne">mvn install</programlisting>
-                    <programlisting language="bourne">mvn clean install</programlisting>
-                    <programlisting language="bourne">mvn clean install -DskipTests</programlisting>
-                </section>
-            </section>
-            <section xml:id="maven.build.commands.unitall">
-                <title>Running all or individual Unit Tests</title>
-                <para>See the <xref linkend="hbase.unittests.cmds"/> section in <xref
-                        linkend="hbase.unittests"/></para>
-            </section>
-
-            <section xml:id="maven.build.hadoop">
-                <title>Building against various Hadoop versions</title>
-                <para>As of 0.96, Apache HBase supports building against Apache Hadoop versions:
-                    1.0.3, 2.0.0-alpha and 3.0.0-SNAPSHOT. By default, in 0.96 and earlier, we will
-                    build with Hadoop-1.0.x. As of 0.98, Hadoop 1.x is deprecated and Hadoop 2.x is
-                    the default. To change the version to build against, add a hadoop.profile
-                    property when you invoke <command>mvn</command>:</para>
-                <programlisting language="bourne">mvn -Dhadoop.profile=1.0 ...</programlisting>
-                <para> The above will build against whatever explicit hadoop 1.x version we have in
-                    our <filename>pom.xml</filename> as our '1.0' version. Tests may not all pass so
-                    you may need to pass <code>-DskipTests</code> unless you are inclined to fix the
-                    failing tests.</para>
-                <note xml:id="maven.build.passing.default.profile">
-                    <title>'dependencyManagement.dependencies.dependency.artifactId' for
-                        org.apache.hbase:${compat.module}:test-jar with value '${compat.module}'
-                        does not match a valid id pattern</title>
-                    <para>You will see ERRORs like the above title if you pass the
-                            <emphasis>default</emphasis> profile; e.g. if you pass
-                            <property>hadoop.profile=1.1</property> when building 0.96 or
-                            <property>hadoop.profile=2.0</property> when building HBase 0.98; just
-                        drop the hadoop.profile stipulation in this case to get your build to run
-                        again. This seems to be a Maven peculiarity that is probably fixable, but
-                        we've not spent the time trying to figure it out.</para>
-                </note>
-
-                <para> Similarly, for 3.0, you would just replace the profile value. Note that
-                    Hadoop-3.0.0-SNAPSHOT does not currently have a deployed Maven artifact - you
-                    will need to build and install your own in your local maven repository if you
-                    want to run against this profile. </para>
-                <para> In earlier versions of Apache HBase, you can build against older versions of
-                    Apache Hadoop, notably Hadoop 0.22.x and 0.23.x. If you are running, for
-                    example, HBase 0.94 and want to build against Hadoop 0.23.x, you would run
-                    with:</para>
-                <programlisting language="bourne">mvn -Dhadoop.profile=22 ...</programlisting>
-            </section>
-            <section xml:id="build.protobuf">
-                <title>Build Protobuf</title>
-                <para>You may need to change the protobuf definitions that reside in the
-                        <filename>hbase-protocol</filename> module or other modules.</para>
-                <para> The protobuf files are located in
-                        <filename>hbase-protocol/src/main/protobuf</filename>. For the change to be
-                    effective, you will need to regenerate the classes. You can use maven profile
-                        <code>compile-protobuf</code> to do this.</para>
-                <programlisting language="bourne">mvn compile -Pcompile-protobuf</programlisting>
-                <para>You may also want to define <varname>protoc.path</varname> for the protoc
-                    binary, using the following command:</para>
-                <programlisting language="bourne">
-mvn compile -Pcompile-protobuf -Dprotoc.path=/opt/local/bin/protoc
-             </programlisting>
-                <para>Read the <filename>hbase-protocol/README.txt</filename> for more details.
-                </para>
-            </section>
-
-            <section xml:id="build.thrift">
-                <title>Build Thrift</title>
-                <para>You may need to change the thrift definitions that reside in the
-                  <filename>hbase-thrift</filename> module or other modules.</para>
-                <para>The thrift files are located in
-                  <filename>hbase-thrift/src/main/resources</filename>.
-                  For the change to be effective, you will need to regenerate the classes.
-                  You can use maven profile  <code>compile-thrift</code> to do this.</para>
-                <programlisting language="bourne">mvn compile -Pcompile-thrift</programlisting>
-                <para>You may also want to define <varname>thrift.path</varname> for the thrift
-                  binary, using the following command:</para>
-                <programlisting language="bourne">
-                  mvn compile -Pcompile-thrift -Dthrift.path=/opt/local/bin/thrift
-                </programlisting>
-            </section>
-
-            <section>
-                <title>Build a Tarball</title>
-                <para>You can build a tarball without going through the release process described in
-                        <xref linkend="releasing"/>, by running the following command:</para>
-                <screen>mvn -DskipTests clean install &amp;&amp; mvn -DskipTests package assembly:single</screen>
-                <para>The distribution tarball is built in
-                            <filename>hbase-assembly/target/hbase-<replaceable>&lt;version&gt;</replaceable>-bin.tar.gz</filename>.</para>
-            </section>
-            <section xml:id="build.gotchas">
-                <title>Build Gotchas</title>
-                <para>If you see <code>Unable to find resource 'VM_global_library.vm'</code>, ignore
-                    it. It's not an error. It is <link
-                        xlink:href="http://jira.codehaus.org/browse/MSITE-286">officially
-                        ugly</link> though. </para>
-            </section>
-            <section xml:id="build.snappy">
-                <title>Building in snappy compression support</title>
-                <para>Pass <code>-Psnappy</code> to trigger the <code>hadoop-snappy</code> maven profile
-                    for building Google Snappy native libraries into HBase. See also <xref
-                        linkend="snappy.compression.installation"/></para>
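-                <para>For example (illustrative only), the profile can simply be added to a normal
-                    build invocation:</para>
-                <programlisting language="bourne">mvn clean install -DskipTests -Psnappy</programlisting>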
-            </section>
-        </section>
-    </section>
-    <section xml:id="releasing">
-        <title>Releasing Apache HBase</title>
-        <note>
-            <title>Building against HBase 1.x</title>
-            <para>HBase 1.x requires Java 7 to build. See <xref linkend="java"/> for Java
-                requirements per HBase release.</para>
-        </note>
-        <section>
-            <title>Building against HBase 0.96-0.98</title>
-            <para>HBase 0.96.x will run on Hadoop 1.x or Hadoop 2.x. HBase 0.98 still runs on both,
-                but HBase 0.98 deprecates use of Hadoop 1. HBase 1.x will <emphasis>not</emphasis>
-                run on Hadoop 1. In the following procedures, we make a distinction between HBase
-                1.x builds and the awkward process involved in building HBase 0.96/0.98 for either
-                Hadoop 1 or Hadoop 2 targets. </para>
-            <para>You must choose which Hadoop to build against. It is not possible to build a
-                single HBase binary that runs against both Hadoop 1 and Hadoop 2. Hadoop is included
-                in the build, because it is needed to run HBase in standalone mode. Therefore, the
-                set of modules included in the tarball changes, depending on the build target. To
-                determine which HBase you have, look at the HBase version. The Hadoop version is
-                embedded within it.</para>
-            <para>Maven, our build system, does not natively allow a single product to be built
-                against different dependencies. Also, Maven cannot change the set of included
-                modules and write out the correct <filename>pom.xml</filename> files with
-                appropriate dependencies, even using two build targets, one for Hadoop 1 and another
-                for Hadoop 2. A prerequisite step is required, which takes as input the current
-                    <filename>pom.xml</filename>s and generates Hadoop 1 or Hadoop 2 versions using
-                a script in the <filename>dev-tools/</filename> directory, called
-                        <filename>generate-hadoop<replaceable>X</replaceable>-poms.sh</filename>
-                where <replaceable>X</replaceable> is either <literal>1</literal> or
-                    <literal>2</literal>. You then reference these generated poms when you build.
-                For now, just be aware of the difference between HBase 1.x builds and those of HBase
-                0.96-0.98. This difference is important to the build instructions.</para>
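-            <para>As a purely hypothetical illustration (the argument form is an assumption; check
-                the script itself for its actual usage), generating the Hadoop 2 poms before a
-                build might look something like the following:</para>
-            <programlisting language="bourne">bash generate-hadoop2-poms.sh 0.98.0 0.98.0-hadoop2</programlisting>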
-
-
-            <example xml:id="mvn.settings.file">
-                <title>Example <filename>~/.m2/settings.xml</filename> File</title>
-                <para>Publishing to maven requires that you sign the artifacts you want to upload.
-                    For the build to sign them for you, you need a properly configured
-                        <filename>settings.xml</filename> in your local repository under
-                        <filename>.m2</filename>, such as the following.</para>
-                <programlisting language="xml"><![CDATA[<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
-  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
-                      http://maven.apache.org/xsd/settings-1.0.0.xsd">
-  <servers>
-    <!-- To publish a snapshot of some part of Maven -->
-    <server>
-      <id>apache.snapshots.https</id>
-      <username>YOUR_APACHE_ID
-      </username>
-      <password>YOUR_APACHE_PASSWORD
-      </password>
-    </server>
-    <!-- To publish a website using Maven -->
-    <!-- To stage a release of some part of Maven -->
-    <server>
-      <id>apache.releases.https</id>
-      <username>YOUR_APACHE_ID
-      </username>
-      <password>YOUR_APACHE_PASSWORD
-      </password>
-    </server>
-  </servers>
-  <profiles>
-    <profile>
-      <id>apache-release</id>
-      <properties>
-    <gpg.keyname>YOUR_KEYNAME</gpg.keyname>
-    <!--Keyname is something like this ... 00A5F21E... do gpg --list-keys to find it-->
-    <gpg.passphrase>YOUR_KEY_PASSWORD
-    </gpg.passphrase>
-      </properties>
-    </profile>
-  </profiles>
-</settings>]]>
-                </programlisting>
-            </example>
-
-        </section>
-        <section xml:id="maven.release">
-            <title>Making a Release Candidate</title>
-            <note>
-                <para>These instructions are for building HBase 1.0.x. For building earlier
-                    versions, the process is different. See this section under the respective
-                    release documentation folders. </para></note>
-            <formalpara>
-                <title>Point Releases</title>
-                <para>If you are making a point release (for example, to quickly address a critical
-                    incompatibility or security problem) off of a release branch instead of a
-                    development branch, the tagging instructions are slightly different. I'll prefix
-                    those special steps with <emphasis>Point Release Only</emphasis>. </para>
-            </formalpara>
-
-            <formalpara>
-                <title>Before You Begin</title>
-                <para>Before you make a release candidate, do a practice run by deploying a
-                    snapshot. Before you start, check to be sure recent builds have been passing for
-                    the branch from where you are going to take your release. You should also have
-                    tried recent branch tips out on a cluster under load, perhaps by running the
-                        <code>hbase-it</code> integration test suite for a few hours to 'burn in'
-                    the near-candidate bits. </para>
-            </formalpara>
-            <note>
-                <title>Point Release Only</title>
-                <para>At this point you should tag the previous release branch (e.g. 0.96.1) with the
-                    new point release tag (e.g. 0.96.1.1). Any commits with changes for the
-                    point release should be applied to the new tag. </para>
-            </note>
-
-
-            <para>The Hadoop <link xlink:href="http://wiki.apache.org/hadoop/HowToRelease">How To
-                    Release</link> wiki page is used as a model for most of the instructions below,
-                and may have more detail on particular sections, so it is worth review.</para>
-            
-            <note>
-                <title>Specifying the Heap Space for Maven on OSX</title>
-                <para>On OSX, you may need to specify the heap space for Maven commands by setting
-                    the <varname>MAVEN_OPTS</varname> variable to a value such as
-                    <literal>-Xmx2g</literal>. You can prefix the variable to the Maven command, as
-                    in the following example:</para>
-                <screen>MAVEN_OPTS="-Xmx2g" mvn package</screen>
-                <para>You could also set this in an environment variable or alias in your
-                    shell.</para>
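-                <para>For example, to set it for the current shell session:</para>
-                <screen>export MAVEN_OPTS="-Xmx2g"</screen>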
-            </note>
-            <procedure>
-                <title>Release Procedure</title>
-                <para>The script <filename>dev-support/make_rc.sh</filename> automates many of these
-                    steps. It does not do the modification of the <filename>CHANGES.txt</filename>
-                    for the release, the close of the staging repository in Apache Maven (human
-                    intervention is needed here), the checking of the produced artifacts to ensure
-                    they are 'good' -- e.g. extracting the produced tarballs, verifying that they
-                    look right, then starting HBase and checking that everything is running
-                    correctly, or the signing and pushing of the tarballs to <link
-                        xlink:href="http://people.apache.org">people.apache.org</link>. The script
-                    handles everything else, and comes in handy.</para>
-                <step>
-                    <title>Update the <filename>CHANGES.txt</filename> file and the POM files.</title>
-                    <para>Update <filename>CHANGES.txt</filename> with the changes since the last
-                        release. Make sure the URL to the JIRA points to the proper location which
-                        lists fixes for this release. Adjust the version in all the POM files
-                        appropriately. If you are making a release candidate, you must remove the
-                            <literal>-SNAPSHOT</literal> label from all versions. If you are running
-                        this recipe to publish a snapshot, you must keep the
-                            <literal>-SNAPSHOT</literal> suffix on the hbase version. The <link
-                            xlink:href="http://mojo.codehaus.org/versions-maven-plugin/">Versions
-                            Maven Plugin</link> can be of use here. To set a version in all the many
-                        poms of the hbase multi-module project, use a command like the
-                        following:</para>
-                    <programlisting language="bourne">
-$ mvn clean org.codehaus.mojo:versions-maven-plugin:1.3.1:set -DnewVersion=0.96.0
-                    </programlisting>
-                    <para>Check in the <filename>CHANGES.txt</filename> and any version
-                        changes.</para>
-                </step>
-                <step>
-                    <title>Update the documentation.</title>
-                    <para> Update the documentation under <filename>src/main/docbkx</filename>. This
-                        usually involves copying the latest from trunk and making version-particular
-                        adjustments to suit this release candidate version. </para>
-                </step>
-                <step>
-                    <title>Build the source tarball.</title>
-                    <para>Now, build the source tarball. This tarball is Hadoop-version-independent.
-                        It is just the pure source code and documentation without a particular
-                        hadoop taint, etc. Add the <varname>-Prelease</varname> profile when
-                        building. It checks files for licenses and will fail the build if unlicensed
-                        files are present.</para>
-                    <programlisting language="bourne">
-$ mvn clean install -DskipTests assembly:single -Dassembly.file=hbase-assembly/src/main/assembly/src.xml -Prelease
-                        </programlisting>
-                    <para>Extract the tarball and make sure it looks good. A good test for the src
-                        tarball being 'complete' is to see if you can build new tarballs from this
-                        source bundle. If the source tarball is good, save it off to a
-                            <emphasis>version directory</emphasis>, a directory somewhere where you
-                        are collecting all of the tarballs you will publish as part of the release
-                        candidate. For example, if you were building an hbase-0.96.0 release
-                        candidate, you might call the directory
-                        <filename>hbase-0.96.0RC0</filename>. Later you will publish this directory
-                        as our release candidate up on <link xlink:href="people.apache.org/~YOU"
-                                >people.apache.org/<replaceable>~YOU</replaceable>/</link>. </para>
-                </step>
-                <step>
-                    <title>Build the binary tarball.</title>
-                    <para>Next, build the binary tarball. Add the <varname>-Prelease</varname>
-                        profile when building. It checks files for licenses and will fail the build
-                        if unlicensed files are present. Do it in two steps.</para>
-                    <substeps>
-                        <step>
-                            <para>First install into the local repository</para>
-                            <programlisting language="bourne">
-$ mvn clean install -DskipTests -Prelease</programlisting>
-                        </step>
-                        <step>
-                            <para>Next, generate documentation and assemble the tarball.</para>
-                            <programlisting language="bourne">
-$ mvn install -DskipTests site assembly:single -Prelease</programlisting>
-                        </step>
-                    </substeps>
-                    <para> Otherwise, the build complains that hbase modules are not in the maven
-                        repository when you try to do it all in one step, especially on a fresh
-                        repository. It seems that you need the install goal in both steps.</para>
-                    <para>Extract the generated tarball and check it out. Look at the documentation,
-                        see if it runs, etc. If good, copy the tarball to the above mentioned
-                            <emphasis>version directory</emphasis>. </para>
-                </step>
-                <step>
-                    <title>Create a new tag.</title>
-                    <note>
-                        <title>Point Release Only</title>
-                        <para>The following step, which creates a new tag, can be skipped, since
-                            you've already created the point release tag.</para>
-                    </note>
-                    <para>Tag the release at this point, since it looks good. If you find an issue
-                        later, you can delete the tag and start over. The release needs to be tagged
-                        for the next step.</para>
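-                    <para>For illustration only, a tag might be created and pushed with commands
-                        such as the following (substitute your own tag name):</para>
-                    <programlisting language="bourne">
-$ git tag -a 0.96.0RC0 -m "HBase 0.96.0 release candidate 0"
-$ git push origin 0.96.0RC0
-                    </programlisting>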
-                </step>
-                <step>
-                    <title>Deploy to the Maven Repository.</title>
-                    <para>Next, deploy HBase to the Apache Maven repository, using the
-                            <varname>apache-release</varname> profile instead of the
-                            <varname>release</varname> profile when running the <command>mvn
-                            deploy</command> command. This profile invokes the Apache pom referenced
-                        by our pom files, and also signs your artifacts published to Maven, as long
-                        as the <filename>settings.xml</filename> is configured correctly, as
-                        described in <xref linkend="mvn.settings.file"/>.</para>
-                    <programlisting language="bourne">
-$ mvn deploy -DskipTests -Papache-release</programlisting>
-                    <para>This command copies all artifacts up to a temporary staging Apache mvn
-                        repository in an 'open' state. More work needs to be done on these maven
-                        artifacts to make them generally available. </para>
-                </step>
-                <step>
-                    <title>Make the Release Candidate available.</title>
-                    <para>The artifacts are in the maven repository in the staging area in the
-                        'open' state. While in this 'open' state you can check out what you've
-                        published to make sure all is good. To do this, log in at <link
-                            xlink:href="http://repository.apache.org">repository.apache.org</link>
-                        using your Apache ID. Find your artifacts in the staging repository. Browse
-                        the content. Make sure all artifacts made it up and that the poms look
-                        generally good. If it checks out, 'close' the repo. This will make the
-                        artifacts publicly available. You will receive an email with the URL to
-                        give out for the temporary staging repository for others to use trying out
-                        this new release candidate. Include it in the email that announces the
-                        release candidate. Folks will need to add this repo URL to their local poms
-                        or to their local <filename>settings.xml</filename> file to pull the
-                        published release candidate artifacts. If the published artifacts are
-                        incomplete or have problems, just delete the 'open' staged artifacts.</para>
-                    <note>
-                        <title>hbase-downstreamer</title>
-                        <para> See the <link
-                                xlink:href="https://github.com/saintstack/hbase-downstreamer"
-                                >hbase-downstreamer</link> test for a simple example of a project
-                            that is downstream of HBase and depends on it. Check it out and run its
-                            simple test to make sure maven artifacts are properly deployed to the
-                            maven repository. Be sure to edit the pom to point to the proper staging
-                            repository. Make sure you are pulling from the staging repository when
-                            the tests run and not from your local repository, by either passing the
-                            <code>-U</code> flag or deleting your local repository content and
-                            checking that maven is pulling from the remote staging repository.
-                        </para>
-                    </note> 
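-                    <para>For example (illustrative only), pass <code>-U</code> to force maven to
-                        re-check the remote repository when running the hbase-downstreamer
-                        tests:</para>
-                    <programlisting language="bourne">mvn clean test -U</programlisting>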
-                    <para>See <link
-                            xlink:href="http://www.apache.org/dev/publishing-maven-artifacts.html"
-                            >Publishing Maven Artifacts</link> for some pointers on this maven
-                        staging process.</para>
-                    <note>
-                        <para>We no longer publish using the maven release plugin. Instead we do
-                                <command>mvn deploy</command>. It seems to give us a backdoor to
-                            maven release publishing. If there is no <emphasis>-SNAPSHOT</emphasis>
-                            on the version string, then we are 'deployed' to the apache maven
-                            repository staging directory from which we can publish URLs for
-                            candidates and later, if they pass, publish as a release (if a
-                                <emphasis>-SNAPSHOT</emphasis> is on the version string, deploy will
-                            put the artifacts up into the apache snapshot repos). </para>
-                    </note>
-                    <para>If the HBase version ends in <varname>-SNAPSHOT</varname>, the artifacts
-                        go elsewhere. They are put into the Apache snapshots repository directly and
-                        are immediately available. If you are making a SNAPSHOT release, this is
-                        what you want to happen.</para>
-                </step>
-                <step>
-                    <title>If you used the <filename>make_rc.sh</filename> script instead of doing
-                        the above manually, do your sanity checks now.</title>
-                    <para> At this stage, you have two tarballs in your 'version directory' and a
-                        set of artifacts in a staging area of the maven repository, in the 'closed'
-                        state. These are publicly accessible in a temporary staging repository whose
-                        URL you should have gotten in an email. The above mentioned script,
-                            <filename>make_rc.sh</filename> does all of the above for you minus the
-                        check of the artifacts built, the closing of the staging repository up in
-                        maven, and the tagging of the release. If you run the script, do your checks
-                        at this stage verifying the src and bin tarballs and checking what is up in
-                        staging using the hbase-downstreamer project. Tag before you start the build.
-                        You can always delete it if the build goes haywire. </para>
-                </step>
-                <step>
-                    <title>Sign and upload your version directory to <link
-                            xlink:href="http://people.apache.org">people.apache.org</link>.</title>
-                    <para> If all checks out, next put the <emphasis>version directory</emphasis> up
-                        on <link xlink:href="http://people.apache.org">people.apache.org</link>. You
-                        will need to sign and fingerprint the tarballs before you push them up. In
-                        the <emphasis>version directory</emphasis>, run the following commands:
-                    </para>
-                    <programlisting language="bourne">
-$ for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done
-$ for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i  ; done
-$ cd ..
-# Presuming our 'version directory' is named 0.96.0RC0, now copy it up to people.apache.org.
-$ rsync -av 0.96.0RC0 people.apache.org:public_html
-                    </programlisting>
-                    <para>Make sure the <link xlink:href="http://people.apache.org"
-                            >people.apache.org</link> directory is showing and that the mvn repo
-                        URLs are good. Announce the release candidate on the mailing list and call a
-                        vote. </para>
-                </step>
-            </procedure>
-        </section>
-        <section xml:id="maven.snapshot">
-            <title>Publishing a SNAPSHOT to maven</title>
-            <para>Make sure your <filename>settings.xml</filename> is set up properly, as in <xref
-                    linkend="mvn.settings.file"/>. Make sure the hbase version includes
-                    <varname>-SNAPSHOT</varname> as a suffix. Following is an example of publishing
-                SNAPSHOTS of a release that had an hbase version of 0.96.0 in its poms.</para>
-                <programlisting language="bourne">
- $ mvn clean install -DskipTests  javadoc:aggregate site assembly:single -Prelease
- $ mvn -DskipTests  deploy -Papache-release</programlisting>
-            <para>The <filename>make_rc.sh</filename> script mentioned above (see <xref
-                    linkend="maven.release"/>) can help you publish <varname>SNAPSHOTS</varname>.
-                Make sure your <varname>hbase.version</varname> has a <varname>-SNAPSHOT</varname>
-                suffix before running the script. It will put a snapshot up into the apache snapshot
-                repository for you. </para>
-        </section>
-
-    </section>
-
-    <section xml:id="hbase.rc.voting">
-        <title>Voting on Release Candidates</title>
-        <para> Everyone is encouraged to try and vote on HBase release candidates. Only the votes of
-            PMC members are binding. PMC members, please read this WIP doc on policy voting for a
-            release candidate, <link
-                xlink:href="https://github.com/rectang/asfrelease/blob/master/release.md">Release
-                Policy</link>. <quote>Before casting +1 binding votes, individuals are required to
-                download the signed source code package onto their own hardware, compile it as
-                provided, and test the resulting executable on their own platform, along with also
-                validating cryptographic signatures and verifying that the package meets the
-                requirements of the ASF policy on releases.</quote> Regarding the latter, run
-                <command>mvn apache-rat:check</command> to verify all files are suitably licensed.
-            See <link xlink:href="http://search-hadoop.com/m/DHED4dhFaU">HBase, mail # dev - On
-                recent discussion clarifying ASF release policy</link> for how we arrived at this
-            process. </para>
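-        <para>As a rough sketch of the checks described above (file names are illustrative; use the
-            actual release candidate artifacts and signatures), verification might look like the
-            following:</para>
-        <programlisting language="bourne">
-# Verify the detached signature against the source tarball (names are placeholders).
-$ gpg --verify hbase-<replaceable>&lt;version&gt;</replaceable>-src.tar.gz.asc hbase-<replaceable>&lt;version&gt;</replaceable>-src.tar.gz
-# Unpack, check licensing, and compile as provided.
-$ tar xzf hbase-<replaceable>&lt;version&gt;</replaceable>-src.tar.gz
-$ cd hbase-<replaceable>&lt;version&gt;</replaceable>
-$ mvn apache-rat:check
-$ mvn clean package -DskipTests
-        </programlisting>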
-    </section>
-    <section xml:id="documentation">
-          <title>Generating the HBase Reference Guide</title>
-        <para>The manual is marked up using <link xlink:href="http://www.docbook.org/"
-                >docbook</link>. We then use the <link
-                xlink:href="http://code.google.com/p/docbkx-tools/">docbkx maven plugin</link> to
-            transform the markup to html. This plugin is run when you specify the
-                <command>site</command> goal, as when you run <command>mvn site</command>, or you
-            can call the plugin explicitly to generate just the manual by running <command>mvn
-                docbkx:generate-html</command>. When you run <command>mvn site</command>, the
-            documentation is generated twice, once to generate the multipage manual and then again
-            for the single page manual, which is easier to search. See <xref
-                linkend="appendix_contributing_to_documentation"/> for more information on building
-            the documentation. </para>
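-        <para>For example, the two invocations mentioned above, run from the local HBase project
-            directory:</para>
-        <programlisting language="bourne">
-# Build the full site, which includes the multi-page and single-page manuals.
-$ mvn site
-# Or generate only the HTML manual.
-$ mvn docbkx:generate-html
-        </programlisting>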
-      </section>
-    <section xml:id="hbase.org">
-        <title>Updating <link xlink:href="http://hbase.apache.org">hbase.apache.org</link></title>
-        <section xml:id="hbase.org.site.contributing">
-            <title>Contributing to hbase.apache.org</title>
-            <para>See <xref linkend="appendix_contributing_to_documentation"/> for more information
-                on contributing to the documentation or website.</para>
-        </section>
-        <section xml:id="hbase.org.site.publishing">
-            <title>Publishing <link xlink:href="http://hbase.apache.org"
-                >hbase.apache.org</link></title>
-            <para>As of <link xlink:href="https://issues.apache.org/jira/browse/INFRA-5680"
-                    >INFRA-5680 Migrate apache hbase website</link>, to publish the website, build
-                it using Maven, and then deploy it over a checkout of
-                    <filename>https://svn.apache.org/repos/asf/hbase/hbase.apache.org/trunk</filename>
-                and check in your changes. The script
-                    <filename>dev-scripts/publish_hbase_website.sh</filename> is provided to
-                automate this process and to be sure that stale files are removed from SVN. Review
-                the script even if you decide to publish the website manually. Use the script as
-                follows:</para>
-            <screen>$ <userinput>publish_hbase_website.sh -h</userinput>
-<![CDATA[Usage: publish_hbase_website.sh [-i | -a] [-g <dir>] [-s <dir>]]]>
- -h          Show this message
- -i          Prompts the user for input
- -a          Does not prompt the user. Potentially dangerous.
- -g          The local location of the HBase git repository
- -s          The local location of the HBase svn checkout
- Either --interactive or --silent is required.
- Edit the script to set default Git and SVN directories.
-            </screen>
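-            <para>For example, an interactive run might look like the following (the Git and SVN
-                paths are placeholders for your own local checkouts):</para>
-            <screen>$ <userinput>publish_hbase_website.sh -i -g ~/git/hbase -s ~/svn/hbase.apache.org</userinput></screen>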
-            <note><para>The SVN commit takes a long time.</para></note>
-        </section>
-    </section>
-    <section xml:id="hbase.tests">
-        <title>Tests</title>
-
-        <para> Developers, at a minimum, should familiarize themselves with the unit test detail;
-            unit tests in HBase have a character not usually seen in other projects.</para>
-        <para>This information is about unit tests for HBase itself. For developing unit tests for
-            your HBase applications, see <xref linkend="unit.tests"/>.</para>
-        <section xml:id="hbase.moduletests">
-            <title>Apache HBase Modules</title>
-            <para>As of 0.96, Apache HBase is split into multiple modules. This creates
-                "interesting" rules for how and where tests are written. If you are writing code for
-                    <classname>hbase-server</classname>, see <xref linkend="hbase.unittests"/> for
-                how to write your tests. These tests can spin up a minicluster and will need to be
-                categorized. For any other module, for example <classname>hbase-common</classname>,
-                the tests must be strict unit tests and just test the class under test - no use of
-                the HBaseTestingUtility or minicluster is allowed (or even possible given the
-                dependency tree).</para>
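-            <para>The following is a minimal sketch (imports elided, class and test names
-                illustrative) of such a strict unit test for a class that lives in
-                <classname>hbase-common</classname>; it exercises only the class under test and
-                needs no mini cluster:</para>
-            <programlisting language="java">...
-// A strict unit test: it touches only the class under test.
-// No HBaseTestingUtility and no (mini)cluster are used.
-@Category(SmallTests.class)
-public class TestBytesRoundTrip {
-  @Test
-  public void testStringRoundTrip() {
-    byte[] b = Bytes.toBytes("hbase");
-    assertEquals("hbase", Bytes.toString(b));
-  }
-}</programlisting>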
-  <section xml:id="hbase.moduletest.shell">
-    <title>Testing the HBase Shell</title>
-    <para>
-      The HBase shell and its tests are predominantly written in JRuby. In order to make these
-      tests run as a part of the standard build, there is a single JUnit test,
-      <classname>TestShell</classname>, that takes care of loading the JRuby-implemented tests and
-      running them. You can run all of these tests from the top level with:
-    </para>
-    <programlisting language="bourne">
-      mvn clean test -Dtest=TestShell
-    </programlisting>
-    <para>
-      Alternatively, you may limit the shell tests that run using the system property
-      <classname>shell.test</classname>. This value may specify a particular test case by name. For
-      example, the tests that cover the shell commands for altering tables are contained in the test
-      case <classname>AdminAlterTableTest</classname> and you can run them with:
-    </para>
-    <programlisting language="bourne">
-      mvn clean test -Dtest=TestShell -Dshell.test=AdminAlterTableTest
-    </programlisting>
-    <para>
-      You may also use a <link xlink:href=
-      "http://docs.ruby-doc.com/docs/ProgrammingRuby/html/language.html#UJ">Ruby Regular Expression
-      literal</link> (in the <classname>/pattern/</classname> style) to select a set of test cases.
-      You can run all of the HBase admin related tests, including both the normal administration and
-      the security administration, with the command:
-    </para>
-    <programlisting language="bourne">
-      mvn clean test -Dtest=TestShell -Dshell.test=/.*Admin.*Test/
-    </programlisting>
-    <para>
-      In the event of a test failure, you can see details by examining the XML version of the
-      surefire report results
-    </para>
-    <programlisting language="bourne">
-      vim hbase-shell/target/surefire-reports/TEST-org.apache.hadoop.hbase.client.TestShell.xml
-    </programlisting>
-  </section>
-  <section xml:id="hbase.moduletest.run">
-  <title>Running Tests in other Modules</title>
-  <para>If the module you are developing in has no other dependencies on other HBase modules, then
-  you can cd into that module and just run:</para>
-  <programlisting language="bourne">mvn test</programlisting>
-  <para>which will just run the tests IN THAT MODULE. If there are other dependencies on other modules,
-  then you will have to run the command from the ROOT HBASE DIRECTORY. This will run the tests in the other
-  modules as well, unless you specify to skip the tests in a given module. For instance, to skip the tests in the hbase-server module,
-  you would run:</para>
-  <programlisting language="bourne">mvn clean test -PskipServerTests</programlisting>
-  <para>from the top level directory to run all the tests in modules other than hbase-server. Note that you
-  can specify to skip tests in multiple modules as well as just for a single module. For example, to skip
-  the tests in <classname>hbase-server</classname> and <classname>hbase-common</classname>, you would run:</para>
-  <programlisting language="bourne">mvn clean test -PskipServerTests -PskipCommonTests</programlisting>
-  <para>Also, keep in mind that if you are running tests in the <classname>hbase-server</classname> module you will need to
-  apply the maven profiles discussed in <xref linkend="hbase.unittests.cmds"/> to get the tests to run properly.</para>
-  </section>
-</section>
-
-<section xml:id="hbase.unittests">
-<title>Unit Tests</title>
-<para>Apache HBase unit tests are subdivided into four categories: small, medium, large, and
-integration with corresponding JUnit <link xlink:href="http://www.junit.org/node/581">categories</link>:
-<classname>SmallTests</classname>, <classname>MediumTests</classname>,
-<classname>LargeTests</classname>, <classname>IntegrationTests</classname>.
-JUnit categories are denoted using java annotations and look like this in your unit test code.</para>
-<programlisting language="java">...
-@Category(SmallTests.class)
-public class TestHRegionInfo {
-  @Test
-  public void testCreateHRegionInfoName() throws Exception {
-    // ...
-  }
-}</programlisting>
-            <para>The above example shows how to mark a unit test as belonging to the
-                    <literal>small</literal> category. All unit tests in HBase have a
-                categorization. </para>
-            <para> The first three categories, <literal>small</literal>, <literal>medium</literal>,
-                and <literal>large</literal>, are for tests run when you type <code>$ mvn
-                    test</code>. In other words, these three categorizations are for HBase unit
-                tests. The <literal>integration</literal> category is not for unit tests, but for
-                integration tests. These are run when you invoke <code>$ mvn verify</code>.
-                Integration tests are described in <xref linkend="integration.tests"/>.</para>
-            <para>HBase uses a patched maven surefire plugin and maven profiles to implement
-                its unit test characterizations. </para>
-            <para>Keep reading to figure out which annotation of the set small, medium, and large to
-                put on your new HBase unit test. </para>
-
-            <variablelist>
-                <title>Categorizing Tests</title>
-                <varlistentry xml:id="hbase.unittests.small">
-                    <term>Small Tests<indexterm><primary>SmallTests</primary></indexterm></term>
-                    <listitem>
-                        <para>
-                            <emphasis>Small</emphasis> tests are executed in a shared JVM. We put in
-                            this category all the tests that can be executed quickly in a shared
-                            JVM. The maximum execution time for a small test is 15 seconds, and
-                            small tests should not use a (mini)cluster.</para>
-                    </listitem>
-                </varlistentry>
-
-                <varlistentry xml:id="hbase.unittests.medium">
-                    <term>Medium Tests<indexterm><primary>MediumTests</primary></indexterm></term>
-                    <listitem>
-                        <para><emphasis>Medium</emphasis> tests represent tests that must be
-                            executed before proposing a patch. They are designed to run in less than
-                            30 minutes altogether, and are quite stable in their results. They are
-                            designed to last less than 50 seconds individually. They can use a
-                            cluster, and each of them is executed in a separate JVM. </para>
-                    </listitem>
-                </varlistentry>
-
-                <varlistentry xml:id="hbase.unittests.large">
-                    <term>Large Tests<indexterm><primary>LargeTests</primary></indexterm></term>
-                    <listitem>
-                        <para><emphasis>Large</emphasis> tests are everything else. They are
-                            typically large-scale tests, regression tests for specific bugs, timeout
-                            tests, performance tests. They are executed before a commit on the
-                            pre-integration machines. They can be run on the developer machine as
-                            well. </para>
-                    </listitem>
-                </varlistentry>
-                <varlistentry xml:id="hbase.unittests.integration">
-                    <term>Integration
-                            Tests<indexterm><primary>IntegrationTests</primary></indexterm></term>
-                    <listitem>
-                        <para><emphasis>Integration</emphasis> tests are system level tests. See
-                                <xref linkend="integration.tests"/> for more info. </para>
-                    </listitem>
-                </varlistentry>
-            </variablelist>
-        </section>
-
-        <section xml:id="hbase.unittests.cmds">
-            <title>Running tests</title>
-
-            <section xml:id="hbase.unittests.cmds.test">
-                <title>Default: small and medium category tests </title>
-                <para>Running <code language="bourne">mvn test</code> will
-                    execute all small tests in a single JVM (no fork) and then medium tests in a
-                    separate JVM for each test instance. Medium tests are NOT executed if there is
-                    an error in a small test. Large tests are NOT executed. There is one report for
-                    small tests, and one report for medium tests if they are executed. </para>
-            </section>
-
-            <section xml:id="hbase.unittests.cmds.test.runAllTests">
-                <title>Running all tests</title>
-                <para>Running
-                    <code language="bourne">mvn test -P runAllTests</code> will
-                    execute small tests in a single JVM then medium and large tests in a separate
-                    JVM for each test. Medium and large tests are NOT executed if there is an error
-                    in a small test. Large tests are NOT executed if there is an error in a small or
-                    medium test. There is one report for small tests, and one report for medium and
-                    large tests if they are executed. </para>
-            </section>
-
-            <section xml:id="hbase.unittests.cmds.test.localtests.mytest">
-                <title>Running a single test or all tests in a package</title>
-                <para>To run an individual test, e.g. <classname>MyTest</classname>, run <code
-                        language="bourne">mvn test -Dtest=MyTest</code>. You can also pass multiple,
-                    individual tests as a comma-delimited list: <code language="bourne">mvn test
-                        -Dtest=MyTest1,MyTest2,MyTest3</code>. You can also pass a package, which
-                    will run all tests under the package: <code language="bourne">mvn test
-                        '-Dtest=org.apache.hadoop.hbase.client.*'</code>.
-                </para>
-
-                <para> When <code>-Dtest</code> is specified, the <code>localTests</code> profile
-                    is used. It uses the official release of maven surefire, rather than our custom
-                    surefire plugin, and the old connector (the HBase build normally uses a patched
-                    version of the maven surefire plugin). Each JUnit test class is executed in a
-                    separate JVM (one fork per test class), and there is no parallelization when
-                    tests run in this mode. You will see a new message at the end of the report:
-                        <literal>"[INFO] Tests are skipped"</literal>. It is harmless. However, you
-                    need to make sure that the sum of <code>Tests run:</code> in the <code>Results
-                        :</code> section of the test reports matches the number of tests you
-                    specified, because no error is reported when a non-existent test case is
-                    specified.
-                </para>
-            </section>
-
-            <section xml:id="hbase.unittests.cmds.test.profiles">
-                <title>Other test invocation permutations</title>
-                <para>Running <command>mvn test -P runSmallTests</command> will execute "small"
-                    tests only, using a single JVM. </para>
-                <para>Running <command>mvn test -P runMediumTests</command> will execute "medium"
-                    tests only, launching a new JVM for each test-class. </para>
-                <para>Running <command>mvn test -P runLargeTests</command> will execute "large"
-                    tests only, launching a new JVM for each test-class. </para>
-                <para>For convenience, you can run <command>mvn test -P runDevTests</command> to
-                    execute both small and medium tests, using a single JVM. </para>
-            </section>
-
-            <section xml:id="hbase.unittests.test.faster">
-                <title>Running tests faster</title>
-                <para> By default, <code>$ mvn test -P runAllTests</code> runs 5 tests in parallel.
-                    This can be increased on a developer's machine. Assuming that you can run 2
-                    tests in parallel per core and that you need about 2GB of memory per test (at
-                    the extreme), an 8-core, 24GB box could run 16 tests in parallel, but the
-                    available memory limits it to 12 (24/2). To run all tests with 12 tests in
-                    parallel, do this: <command>mvn test -P runAllTests
-                        -Dsurefire.secondPartForkCount=12</command>. If using a version earlier than
-                    2.0, do: <command>mvn test -P runAllTests -Dsurefire.secondPartThreadCount=12
-                    </command>. To further increase the speed, you can also use a ramdisk. You will
-                    need 2GB of memory to run all tests, and you will also need to delete the files
-                    between two test runs. The typical way to configure a ramdisk on Linux is:</para>
-                <screen language="bourne">$ sudo mkdir /ram2G
-sudo mount -t tmpfs -o size=2048M tmpfs /ram2G</screen>
-                <para>You can then use it to run all HBase tests on 2.0 with the command: </para>
-                <screen language="bourne">mvn test
-                        -P runAllTests -Dsurefire.secondPartForkCount=12
-                        -Dtest.build.data.basedirectory=/ram2G</screen>
-                <para>On earlier versions, use: </para>
-                <screen language="bourne">mvn test
-                        -P runAllTests -Dsurefire.secondPartThreadCount=12
-                        -Dtest.build.data.basedirectory=/ram2G</screen>
-            </section>
-
-            <section xml:id="hbase.unittests.cmds.test.hbasetests">
-                <title><command>hbasetests.sh</command></title>
-                <para>It's also possible to use the script <command>hbasetests.sh</command>. This
-                    script runs the medium and large tests in parallel with two maven instances, and
-                    provides a single report. This script does not use the hbase version of surefire
-                    so no parallelization is being done other than the two maven instances the
-                    script sets up. It must be executed from the directory which contains the
-                        <filename>pom.xml</filename>.</para>
-                <para>For example running <command>./dev-support/hbasetests.sh</command> will
-                    execute small and medium tests. Running <command>./dev-support/hbasetests.sh
-                        runAllTests</command> will execute all tests. Running
-                        <command>./dev-support/hbasetests.sh replayFailed</command> will rerun the
-                    failed tests a second time, in a separate JVM and without parallelization.
-                </para>
-            </section>
-            <section xml:id="hbase.unittests.resource.checker">
-                <title>Test Resource Checker<indexterm><primary>Test Resource
-                        Checker</primary></indexterm></title>
-                <para> A custom Maven SureFire plugin listener checks a number of resources before
-                    and after each HBase unit test runs and logs its findings at the end of the test
-                    output files which can be found in <filename>target/surefire-reports</filename>
-                    per Maven module (Tests write test reports named for the test class into this
-                    directory. Check the <filename>*-out.txt</filename> files). The resources
-                    counted are the number of threads, the number of file descriptors, etc. If the
-                    number has increased, it adds a <emphasis>LEAK?</emphasis> comment in the logs.
-                    As you can have an HBase instance running in the background, some threads can be
-                    deleted/created without any specific action in the test. However, if the test
-                    does not work as expected, or if the test should not impact these resources,
-                    it's worth checking these log lines
-                        <computeroutput>...hbase.ResourceChecker(157): before...</computeroutput>
-                    and <computeroutput>...hbase.ResourceChecker(157): after...</computeroutput>.
-                    For example: </para>
-                <screen>2012-09-26 09:22:15,315 INFO [pool-1-thread-1]
-hbase.ResourceChecker(157): after:
-regionserver.TestColumnSeeking#testReseeking Thread=65 (was 65),
-OpenFileDescriptor=107 (was 107), MaxFileDescriptor=10240 (was 10240),
-ConnectionCount=1 (was 1) </screen>
-            </section>
-        </section>
-
-        <section xml:id="hbase.tests.writing">
-            <title>Writing Tests</title>
-            <section xml:id="hbase.tests.rules">
-                <title>General rules</title>
-                <itemizedlist>
-                    <listitem>
-                        <para>As much as possible, tests should be written as category small
-                            tests.</para>
-                    </listitem>
-                    <listitem>
-                        <para>All tests must be written to support parallel execution on the same
-                            machine, hence they should not use shared resources such as fixed ports
-                            or fixed file names.</para>
-                    </listitem>
-                    <listitem>
-                        <para>Tests should not over-log. Logging more than 100 lines per second
-                            makes the logs hard to read and consumes I/O that is then unavailable
-                            to the other tests.</para>
-                    </listitem>
-                    <listitem>
-                        <para>Tests can be written with <classname>HBaseTestingUtility</classname>.
-                            This class offers helper functions to create a temp directory and do the
-                            cleanup, or to start a cluster. See the sketch after this list.</para>
-                    </listitem>
-                </itemizedlist>
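-                <para>A minimal sketch of the <classname>HBaseTestingUtility</classname> helpers
-                    mentioned above (imports elided; method names should be checked against the
-                    HBase version you are building, and the test body is illustrative):</para>
-                <programlisting language="java">...
-private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
-
-@Test
-public void testNeedingAScratchDirectory() throws Exception {
-  // Ask the utility for a unique directory under the build's test data area,
-  // rather than hard-coding a path, so the test can run in parallel with others.
-  Path scratchDir = TEST_UTIL.getDataTestDir("my-scratch-area");
-  // ... exercise code that needs a temporary directory ...
-  TEST_UTIL.cleanupTestDir();
-}</programlisting>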
-            </section>
-            <section xml:id="hbase.tests.categories">
-                <title>Categories and execution time</title>
-                <itemizedlist>
-                    <listitem>
-                        <para>All tests must be categorized; if not, they could be skipped.</para>
-                    </listitem>
-                    <listitem>
-                        <para>All tests should be written to be as fast as possible.</para>
-                    </listitem>
-                    <listitem>
-                        <para>Small category tests should last less than 15 seconds, and must not
-                            have any side effect.</para>
-                    </listitem>
-                    <listitem>
-                        <para>Medium category tests should last less than 50 seconds.</para>
-                    </listitem>
-                    <listitem>
-                        <para>Large category tests should last less than 3 minutes. This should
-                            ensure a good parallelization for people using it, and ease the analysis
-                            when the test fails.</para>
-                    </listitem>
-                </itemizedlist>
-            </section>
-            <section xml:id="hbase.tests.sleeps">
-                <title>Sleeps in tests</title>
-                <para>Whenever possible, tests should not use <methodname>Thread.sleep</methodname>,
-                    but rather wait for the real event they need. This is faster and clearer for
-                    the reader. Tests should not do a <methodname>Thread.sleep</methodname> without
-                    testing an ending condition. This makes it clear what the test is waiting for.
-                    Moreover, the test will then work regardless of machine performance. Sleeps
-                    should be minimal, to keep the test as fast as possible. Waiting for a variable
-                    should be done in a 40ms sleep loop. Waiting for a socket operation should be
-                    done in a 200 ms sleep loop. </para>
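-                <para>A minimal sketch of this pattern (imports elided; the <code>flagSet</code>
-                    condition is hypothetical and stands for whatever event the test is really
-                    waiting for):</para>
-                <programlisting language="java">...
-// Wait for a condition in a small-step sleep loop with an overall deadline,
-// rather than doing one blind Thread.sleep.
-AtomicBoolean flagSet = new AtomicBoolean(false);
-// ... trigger the asynchronous work under test, which eventually sets flagSet ...
-long deadline = System.currentTimeMillis() + 10000; // cap so a broken test fails fast
-while (!flagSet.get()) {
-  if (System.currentTimeMillis() > deadline) {
-    fail("condition was not reached before the deadline");
-  }
-  Thread.sleep(40); // 40 ms steps for a variable; use ~200 ms for socket operations
-}</programlisting>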
-            </section>
-
-            <section xml:id="hbase.tests.cluster">
-                <title>Tests using a cluster </title>
-
-                <para>Tests using an HRegion do not have to start a cluster: a region can use the
-                    local file system. Starting and stopping a cluster costs around 10 seconds, so
-                    a cluster should not be started per test method but per test class. A started
-                    cluster must be shut down using
-                    <methodname>HBaseTestingUtility#shutdownMiniCluster</methodname>,
-                    which cleans up the directories. As much as possible, tests should use the
-                    default settings for the cluster; when they don't, they should document it.
-                    This will make it possible to share the cluster later. </para>
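-                <para>A minimal sketch of the per-class cluster pattern (imports elided; the test
-                    class name is illustrative, and the category should be medium or large
-                    depending on how long the test runs):</para>
-                <programlisting language="java">...
-@Category(MediumTests.class)
-public class TestAgainstMiniCluster {
-  private static final HBaseTestingUtility TEST_UTIL = new HBaseTestingUtility();
-
-  @BeforeClass
-  public static void setUpBeforeClass() throws Exception {
-    // Started once for the whole class, with the default cluster settings.
-    TEST_UTIL.startMiniCluster();
-  }
-
-  @AfterClass
-  public static void tearDownAfterClass() throws Exception {
-    // Cleans up the test directories as well.
-    TEST_UTIL.shutdownMiniCluster();
-  }
-
-  @Test
-  public void testSomethingAgainstTheCluster() throws Exception {
-    // ... exercise public client APIs against the mini cluster ...
-  }
-}</programlisting>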
-            </section>
-        </section>
-
-        <section xml:id="integration.tests">
-            <title>Integration Tests</title>
-            <para>HBase integration/system tests are tests that are beyond HBase unit tests. They
-                are generally long-lasting, sizeable (the test can be asked to load 1M rows or 1B
-                rows), targetable (they can take configuration that will point them at the
-                ready-made cluster they are to run against; integration tests do not include cluster
-                start/stop code), and verifying: integration tests rely on public APIs only and do
-                not attempt to examine server internals to assert success or failure. Integration
-                tests are what you would run when you need more elaborate proofing of a release
-                candidate beyond what unit tests can do. They are not generally run on the Apache
-                Continuous Integration build server; however, some sites opt to run integration
-                tests as a part of their continuous testing on an actual cluster. </para>
-            <para> Integration tests currently live under the <filename>src/test</filename>
-                directory in the hbase-it submodule and will match the regex:
-                    <filename>**/IntegrationTest*.java</filename>. All integration tests are also
-                annotated with <code>@Category(IntegrationTests.class)</code>. </para>
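-            <para>A minimal skeleton that satisfies these conventions (the class name is
-                illustrative; real integration tests typically also extend a common base class so
-                that they can be driven from the command line):</para>
-            <programlisting language="java">...
-// Lives under hbase-it/src/test and matches **/IntegrationTest*.java.
-@Category(IntegrationTests.class)
-public class IntegrationTestMyFeature {
-  @Test
-  public void testAtScale() throws Exception {
-    // ... drive load and verify results through public client APIs only ...
-  }
-}</programlisting>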
-
-            <para> Integration tests can be run in two modes: using a mini cluster, or against an
-                actual distributed cluster. Maven failsafe is used to run the tests using the mini
-                cluster. The <code>IntegrationTestsDriver</code> class is used for executing the
-                tests against a distributed cluster. Integration tests SHOULD NOT assume that they
-                are running against a mini cluster, and SHOULD NOT use private APIs to access
-                cluster state. To interact with the distributed or mini cluster uniformly, use the
-                    <code>IntegrationTestingUtility</code> and <code>HBaseCluster</code> classes,
-                and the public client APIs. </para>
-
-            <para> On a distributed cluster, integration tests that use ChaosMonkey or otherwise
-                manipulate services through the cluster manager (e.g. restart regionservers) use SSH
-                to do it. To run these, the test process should be able to run commands on the
-                remote end, so SSH should be configured accordingly (for example, if HBase runs
-                under the hbase user in your cluster, you can set up passwordless SSH for that user
-                and run the test under it as well). To facilitate that, the
-                <code>hbase.it.clustermanager.ssh.user</code>,
-                    <code>hbase.it.clustermanager.ssh.opts</code> and
-                    <code>hbase.it.clustermanager.ssh.cmd</code> configuration settings can be used.
-                "User" is the remote user that the cluster manager should use to perform ssh
-                commands. "Opts" contains additional options that are passed to SSH (for example,
-                "-i /tmp/my-key"). Finally, if you have some custom environment setup, "cmd" is the
-                override format for the entire tunnel (ssh) command. The default string is
-                    {<code>/usr/bin/ssh %1$s %2$s%3$s%4$s "%5$s"</code>} and is a good starting
-                point. This is a standard Java format string with 5 arguments that is used to
-                execute the remote command. Argument 1 (%1$s) is the SSH options set via the opts
-                setting or via an environment variable, 2 is the SSH user name, 3 is "@" if a
-                username is set or "" otherwise, 4 is the target host name, and 5 is the logical
-                command to execute (which may include single quotes, so don't use them). For
-                example, if you run the tests under a non-hbase user and want to ssh as that user
-                and change to hbase on the remote machine, you can use {<code>/usr/bin/ssh %1$s
-                    %2$s%3$s%4$s "su hbase - -c \"%5$s\""</code>}. That way, to kill a RS (for
-                example) integration tests may run {<code>/usr/bin/ssh some-hostname "su hbase - -c
-                    \"ps aux | ... | kill ...\""</code>}. The command is logged in the test logs, so
-                you can verify it is correct for your environment. </para>
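-            <para>As an illustration only, the following shows how the five arguments are
-                substituted into the default tunnel command template described above (all of the
-                values are hypothetical):</para>
-            <programlisting language="java">String template = "/usr/bin/ssh %1$s %2$s%3$s%4$s \"%5$s\"";
-String cmd = String.format(template,
-    "-i /tmp/my-key",    // %1$s: extra SSH options ("opts")
-    "hbase",             // %2$s: the remote user ("user")
-    "@",                 // %3$s: "@" because a user name was set, "" otherwise
-    "rs1.example.com",   // %4$s: the target host
-    "ps aux");           // %5$s: the logical command to execute
-// cmd is: /usr/bin/ssh -i /tmp/my-key hbase@rs1.example.com "ps aux"
-System.out.println(cmd);</programlisting>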
-              <para>To disable the running of Integration Tests, pass the following profile on the
-                command line <code>-PskipIntegrationTests</code>. For example,
-                <programlisting>$ mvn clean install test -Dtest=TestZooKeeper  -PskipIntegrationTests</programlisting></para>
-
-            <section xml:id="maven.build.commands.integration.tests.mini">
-                <title>Running integration tests against mini cluster</title>
-                <para>HBase 0.92 added a <varname>verify</varname> maven target. Invoking it, for
-                    example by doing <code>mvn verify</code>, will run all the phases up to and
-                    including the verify phase via the maven <link
-                        xlink:href="http://maven.apache.org/plugins/maven-failsafe-plugin/">failsafe
-                        plugin</link>, running all the above mentioned HBase unit tests as well as
-                    tests that are in the HBase integration test group. After you have completed
-                        <command>mvn install -DskipTests</command>, you can run just the integration
-                    tests by invoking:</para>
-                <programlisting language="bourne">
-cd hbase-it
-mvn verify</programlisting>
-                <para>If you just want to run the integration tests from the top level, you need to
-                    run two commands. First: <command>mvn failsafe:integration-test</command>. This
-                    actually runs ALL the integration tests. </para>
-                <note>
-                    <para>This command will always output <code>BUILD SUCCESS</code> even if there
-                        are test failures. </para>
-                </note>
-                <para>At this point, you could grep the output by hand looking for failed tests.
-                    However, maven will do this for us; just use <command>mvn
-                        failsafe:verify</command>. This command looks at all the test
-                    results (so don't remove the 'target' directory) for test failures and reports
-                    the results.</para>
-
-                <section xml:id="maven.build.commands.integration.tests2">
-                    <title>Running a subset of Integration tests</title>
-                    <para>This is very similar to how you specify running a subset of unit tests
-                        (see above), but use the property <code>it.test</code> instead of
-                            <code>test</code>. To just run
-                            <classname>IntegrationTestClassXYZ.java</classname>, use <command>mvn
-                            failsafe:integration-test -Dit.test=IntegrationTestClassXYZ</command>.
-                        The next thing you might want to do is run groups of integration tests, say
-                        all integration tests that are named IntegrationTestClassX*.java:
-                            <command>mvn failsafe:integration-test -Dit.test=*ClassX*</command>. This
-                        runs everything that is an integration test that matches *ClassX*. This
-                        means anything matching: "**/IntegrationTest*ClassX*". You can also run
-                        multiple groups of integration tests using comma-delimited lists (similar to
-                        unit tests). Using a list of matches still supports full regex matching for
-                        each of the groups. This would look something like: <command>mvn
-                            failsafe:integration-test -Dit.test=*ClassX*, *ClassY</command>
-                    </para>
-                </section>
-            </section>
-            <section xml:id="maven.build.commands.integration.tests.distributed">
-                <title>Running integration tests against distributed cluster</title>
-                <para> If you have an already-setup HBase cluster, you can launch the integration
-                    tests by invoking the class <code>IntegrationTestsDriver</code>. You may have to
-                    run test-compile first. The configuration will be picked up by the bin/hbase
-                    script. <programlisting language="bourne">mvn test-compile</programlisting> Then
-                    launch the tests with:</para>
-                <programlisting language="bourne">bin/hbase [--config config_dir] org.apache.hadoop.hbase.IntegrationTestsDriver</programlisting>
-                <para>Pass <code>-h</code> to get usage on this sweet tool. Running the
-                    IntegrationTestsDriver without any argument will launch tests found under
-                        <code>hbase-it/src/test</code> that have the
-                        <code>@Category(IntegrationTests.class)</code> annotation and a name
-                    starting with <code>IntegrationTest</code>. The usage output (pass -h) shows
-                    how to filter test classes: you can pass a regex which is checked against
-                    the full class name, so part of a class name can be used. IntegrationTestsDriver
-                    uses JUnit to run the tests. Currently there is no support for running
-                    integration tests against a distributed cluster using maven (see <link
-                        xlink:href="https://issues.apache.org/jira/browse/HBASE-6201"
-                        >HBASE-6201</link>). </para>
-
-                <para> The tests interact with the distributed cluster by using the methods in the
-                        <code>DistributedHBaseCluster</code> (implementing
-                    <code>HBaseCluster</code>) class, which in turn uses a pluggable
-                        <code>ClusterManager</code>. Concrete implementations provide actual
-                    functionality for carrying out deployment-specific and environment-dependent
-                    tasks (SSH, etc). The default <code>ClusterManager</code> is
-                        <code>HBaseClusterManager</code>, which uses SSH to remotely execute
-                    start/stop/kill/signal commands, and assumes some posix commands (ps, etc). Also
-     

<TRUNCATED>