Posted to commits@hbase.apache.org by mi...@apache.org on 2016/02/23 18:08:52 UTC

[44/51] [partial] hbase-site git commit: Published site at 58283fa1b1b10beec62cefa40babff6a1424b06c.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/d02dd5db/book.html
----------------------------------------------------------------------
diff --git a/book.html b/book.html
index 4a56991..e612b86 100644
--- a/book.html
+++ b/book.html
@@ -70,7 +70,7 @@
 <li><a href="#versions">27. Versions</a></li>
 <li><a href="#dm.sort">28. Sort Order</a></li>
 <li><a href="#dm.column.metadata">29. Column Metadata</a></li>
-<li><a href="#_joins">30. Joins</a></li>
+<li><a href="#joins">30. Joins</a></li>
 <li><a href="#_acid">31. ACID</a></li>
 </ul>
 </li>
@@ -126,7 +126,7 @@
 <li><a href="#arch.catalog">63. Catalog Tables</a></li>
 <li><a href="#architecture.client">64. Client</a></li>
 <li><a href="#client.filter">65. Client Request Filters</a></li>
-<li><a href="#_master">66. Master</a></li>
+<li><a href="#architecture.master">66. Master</a></li>
 <li><a href="#regionserver.arch">67. RegionServer</a></li>
 <li><a href="#regions.arch">68. Regions</a></li>
 <li><a href="#arch.bulk.load">69. Bulk Loading</a></li>
@@ -227,7 +227,7 @@
 <li><a href="#tools">128. HBase Tools and Utilities</a></li>
 <li><a href="#ops.regionmgt">129. Region Management</a></li>
 <li><a href="#node.management">130. Node Management</a></li>
-<li><a href="#_hbase_metrics">131. HBase Metrics</a></li>
+<li><a href="#hbase_metrics">131. HBase Metrics</a></li>
 <li><a href="#ops.monitoring">132. HBase Monitoring</a></li>
 <li><a href="#_cluster_replication">133. Cluster Replication</a></li>
 <li><a href="#_running_multiple_workloads_on_a_single_cluster">134. Running Multiple Workloads On a Single Cluster</a></li>
@@ -254,7 +254,7 @@
 <li><a href="#unit.tests">Unit Testing HBase Applications</a>
 <ul class="sectlevel1">
 <li><a href="#_junit">149. JUnit</a></li>
-<li><a href="#_mockito">150. Mockito</a></li>
+<li><a href="#mockito">150. Mockito</a></li>
 <li><a href="#_mrunit">151. MRUnit</a></li>
 <li><a href="#_integration_testing_with_an_hbase_mini_cluster">152. Integration Testing with an HBase Mini-Cluster</a></li>
 </ul>
@@ -281,7 +281,7 @@
 <li><a href="#compression">Appendix E: Compression and Data Block Encoding In HBase</a></li>
 <li><a href="#data.block.encoding.enable">158. Enable Data Block Encoding</a></li>
 <li><a href="#sql">Appendix F: SQL over HBase</a></li>
-<li><a href="#_ycsb">Appendix G: YCSB</a></li>
+<li><a href="#ycsb">Appendix G: YCSB</a></li>
 <li><a href="#_hfile_format_2">Appendix H: HFile format</a></li>
 <li><a href="#other.info">Appendix I: Other Information About HBase</a></li>
 <li><a href="#hbase.history">Appendix J: HBase History</a></li>
@@ -361,7 +361,7 @@ Yours, the HBase Community.</p>
 <div class="paragraph">
 <p>To protect existing HBase installations from new vulnerabilities, please <strong>do not</strong> use JIRA to report security-related bugs. Instead, send your report to the mailing list <a href="mailto:private@apache.org">private@apache.org</a>, which allows anyone to send messages, but restricts who can read them. Someone on that list will contact you to follow up on your report.</p>
 </div>
-<div class="paragraph">
+<div id="hbase_supported_tested_definitions" class="paragraph">
 <div class="title">Support and Testing Expectations</div>
 <p>The phrases /supported/, /not supported/, /tested/, and /not tested/ occur several
 places throughout this guide. In the interest of clarity, here is a brief explanation
@@ -1299,7 +1299,7 @@ Still, you can test what happens when the primary Master or a RegionServer disap
 This chapter expands upon the <a href="#getting_started">Getting Started</a> chapter to further explain configuration of Apache HBase.
 Please read this chapter carefully, especially the <a href="#basic.prerequisites">Basic Prerequisites</a>
 to ensure that your HBase testing and deployment goes smoothly, and prevent data loss.
-Familiarize yourself with <a href="#hbase_supported_tested_definitions">[hbase_supported_tested_definitions]</a> as well.
+Familiarize yourself with <a href="#hbase_supported_tested_definitions">Support and Testing Expectations</a> as well.
 </div>
 </div>
 <div class="sect1">
@@ -1461,7 +1461,7 @@ In HBase 0.98.5 and newer, you must set <code>JAVA_HOME</code> on each node of y
 </tr>
 </table>
 </div>
-<div class="dlist">
+<div id="os" class="dlist">
 <div class="title">Operating System Utilities</div>
 <dl>
 <dt class="hdlist1">ssh</dt>
@@ -1481,6 +1481,10 @@ See <a href="#loopback.ip">Loopback IP</a> for more details.</p>
 <dd>
 <p>The clocks on cluster nodes should be synchronized. A small amount of variation is acceptable, but larger amounts of skew can cause erratic and unexpected behavior. Time synchronization is one of the first things to check if you see unexplained problems in your cluster. It is recommended that you run a Network Time Protocol (NTP) service, or another time-synchronization mechanism, on your cluster, and that all nodes look to the same service for time synchronization. See the <a href="http://www.tldp.org/LDP/sag/html/basic-ntp-config.html">Basic NTP Configuration</a> at <em class="citetitle">The Linux Documentation Project (TLDP)</em> to set up NTP.</p>
 </dd>
+</dl>
+</div>
+<div id="ulimit" class="dlist">
+<dl>
 <dt class="hdlist1">Limits on Number of Files and Processes (ulimit)</dt>
 <dd>
 <p>Apache HBase is a database. It requires the ability to open a large number of files at once. Many Linux distributions limit the number of files a single user is allowed to open to <code>1024</code> (or <code>256</code> on older versions of OS X). You can check this limit on your servers by running the command <code>ulimit -n</code> when logged in as the user which runs HBase. See <a href="#trouble.rs.runtime.filehandles">the Troubleshooting section</a> for some of the problems you may experience if the limit is too low. You may also notice errors such as the following:</p>
@@ -1942,7 +1946,7 @@ Zookeeper binds to a well known port so clients may talk to HBase.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_distributed"><a class="anchor" href="#_distributed"></a>5.2. Distributed</h3>
+<h3 id="distributed"><a class="anchor" href="#distributed"></a>5.2. Distributed</h3>
 <div class="paragraph">
 <p>Distributed mode can be subdivided into distributed but all daemons run on a single node&#8201;&#8212;&#8201;a.k.a. <em>pseudo-distributed</em>&#8201;&#8212;&#8201;and <em>fully-distributed</em> where the daemons are spread across all nodes in the cluster.
 The <em>pseudo-distributed</em> vs. <em>fully-distributed</em> nomenclature comes from Hadoop.</p>
@@ -5335,7 +5339,7 @@ See the configuration <a href="#fail.fast.expired.active.master">fail.fast.expir
 </div>
 </div>
 <div class="sect2">
-<h3 id="_recommended_configurations"><a class="anchor" href="#_recommended_configurations"></a>9.2. Recommended Configurations</h3>
+<h3 id="recommended_configurations"><a class="anchor" href="#recommended_configurations"></a>9.2. Recommended Configurations</h3>
 <div class="sect3">
 <h4 id="recommended_configurations.zk"><a class="anchor" href="#recommended_configurations.zk"></a>9.2.1. ZooKeeper Configuration</h4>
 <div class="sect4">
@@ -5777,7 +5781,7 @@ It may be possible to skip across versions&#8201;&#8212;&#8201;for example go fr
 </table>
 </div>
 <div class="paragraph">
-<p>Review <a href="#configuration">Apache HBase Configuration</a>, in particular <a href="#hadoop"><a href="http://hadoop.apache.org">Hadoop</a></a>. Familiarize yourself with <a href="#hbase_supported_tested_definitions">[hbase_supported_tested_definitions]</a>.</p>
+<p>Review <a href="#configuration">Apache HBase Configuration</a>, in particular <a href="#hadoop"><a href="http://hadoop.apache.org">Hadoop</a></a>. Familiarize yourself with <a href="#hbase_supported_tested_definitions">Support and Testing Expectations</a>.</p>
 </div>
 </div>
 </div>
@@ -7755,7 +7759,7 @@ For more information about how HBase stores data internally, see <a href="#keyva
 </div>
 </div>
 <div class="sect1">
-<h2 id="_joins"><a class="anchor" href="#_joins"></a>30. Joins</h2>
+<h2 id="joins"><a class="anchor" href="#joins"></a>30. Joins</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Whether HBase supports joins is a common question on the dist-list, and there is a simple answer:  it doesn&#8217;t, at not least in the way that RDBMS' support them (e.g., with equi-joins or outer-joins in SQL).  As has been illustrated in this chapter, the read data model operations in HBase are Get and Scan.</p>
@@ -7861,7 +7865,7 @@ of this chapter to get more details after you have gone through this list.</p>
 <p>Aim to have regions sized between 10 and 50 GB.</p>
 </li>
 <li>
-<p>Aim to have cells no larger than 10 MB, or 50 MB if you use <a href="#mob">[mob]</a>. Otherwise,
+<p>Aim to have cells no larger than 10 MB, or 50 MB if you use <a href="#hbase_mob">mob</a>. Otherwise,
 consider storing your cell data in HDFS and store a pointer to the data in HBase.</p>
 </li>
 <li>
@@ -8082,7 +8086,7 @@ Whatever patterns are selected for ColumnFamilies, attributes, and rowkeys they
 <p>Try to keep the ColumnFamily names as small as possible, preferably one character (e.g. "d" for data/default).</p>
 </div>
 <div class="paragraph">
-<p>See <a href="#keyvalue">[keyvalue]</a> for more information on HBase stores data internally to see why this is important.</p>
+<p>See <a href="#keyvalue">KeyValue</a> for more information on HBase stores data internally to see why this is important.</p>
 </div>
 </div>
 <div class="sect3">
@@ -8329,7 +8333,7 @@ Take that into consideration when making your design, as well as block size for
 <h2 id="schema.joins"><a class="anchor" href="#schema.joins"></a>38. Joins</h2>
 <div class="sectionbody">
 <div class="paragraph">
-<p>If you have multiple tables, don&#8217;t forget to factor in the potential for <a href="#joins">[joins]</a> into the schema design.</p>
+<p>If you have multiple tables, don&#8217;t forget to factor in the potential for <a href="#joins">Joins</a> into the schema design.</p>
 </div>
 </div>
 </div>
@@ -8590,7 +8594,7 @@ These would be generated with MapReduce jobs into another table.</p>
 <h3 id="secondary.indexes.coproc"><a class="anchor" href="#secondary.indexes.coproc"></a>41.5. Coprocessor Secondary Index</h3>
 <div class="paragraph">
 <p>Coprocessors act like RDBMS triggers. These were added in 0.92.
-For more information, see <a href="#coprocessors">coprocessors</a></p>
+For more information, see <a href="#cp">coprocessors</a></p>
 </div>
 </div>
 </div>
@@ -10809,7 +10813,7 @@ When copying keys, configuration files, or other files containing sensitive stri
 </tr>
 </table>
 </div>
-<div class="olist arabic">
+<div id="security.data.basic.server.side" class="olist arabic">
 <div class="title">Procedure: Basic Server-Side Configuration</div>
 <ol class="arabic">
 <li>
@@ -11095,7 +11099,7 @@ Groups are created and manipulated externally to HBase, via the Hadoop group map
 <div class="olist arabic">
 <ol class="arabic">
 <li>
-<p>As a prerequisite, perform the steps in <a href="#security.data.basic.server.side">[security.data.basic.server.side]</a>.</p>
+<p>As a prerequisite, perform the steps in <a href="#security.data.basic.server.side">Procedure: Basic Server-Side Configuration</a>.</p>
 </li>
 <li>
 <p>Install and configure the AccessController coprocessor, by setting the following properties in <em>hbase-site.xml</em>.
@@ -11528,7 +11532,7 @@ hbase&gt; user_permission JAVA_REGEX</pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_visibility_labels"><a class="anchor" href="#_visibility_labels"></a>60.3. Visibility Labels</h3>
+<h3 id="hbase.visibility.labels"><a class="anchor" href="#hbase.visibility.labels"></a>60.3. Visibility Labels</h3>
 <div class="paragraph">
 <p>Visibility labels control can be used to only permit users or principals associated with a given label to read or access cells with that label.
 For instance, you might label a cell <code>top-secret</code>, and only grant access to that label to the <code>managers</code> group.
@@ -11645,7 +11649,7 @@ Visibility labels are not currently applied for superusers.
 <div class="olist arabic">
 <ol class="arabic">
 <li>
-<p>As a prerequisite, perform the steps in <a href="#security.data.basic.server.side">[security.data.basic.server.side]</a>.</p>
+<p>As a prerequisite, perform the steps in <a href="#security.data.basic.server.side">Procedure: Basic Server-Side Configuration</a>.</p>
 </li>
 <li>
 <p>Install and configure the VisibilityController coprocessor by setting the following properties in <em>hbase-site.xml</em>.
@@ -13085,7 +13089,7 @@ See <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Firs
 </div>
 </div>
 <div class="sect1">
-<h2 id="_master"><a class="anchor" href="#_master"></a>66. Master</h2>
+<h2 id="architecture.master"><a class="anchor" href="#architecture.master"></a>66. Master</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p><code>HMaster</code> is the implementation of the Master Server.
@@ -13664,7 +13668,7 @@ If writing to the WAL fails, the entire operation to modify the data fails.</p>
 <div class="paragraph">
 <p>HBase uses an implementation of the <a href="http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/wal/WAL.html">WAL</a> interface.
 Usually, there is only one instance of a WAL per RegionServer.
-The RegionServer records Puts and Deletes to it, before recording them to the <a href="#store.memstore">MemStore</a> for the affected <a href="#store">[store]</a>.</p>
+The RegionServer records Puts and Deletes to it, before recording them to the <a href="#store.memstore">MemStore</a> for the affected <a href="#store">Store</a>.</p>
 </div>
 <div class="admonitionblock note">
 <table>
@@ -14621,7 +14625,7 @@ The <code>force</code> parameter overrides this behaviour and is for expert use
 </div>
 </div>
 <div class="sect2">
-<h3 id="_store"><a class="anchor" href="#_store"></a>68.7. Store</h3>
+<h3 id="store"><a class="anchor" href="#store"></a>68.7. Store</h3>
 <div class="paragraph">
 <p>A Store hosts a MemStore and 0 or more StoreFiles (HFiles). A Store corresponds to a column family for a table for a given region.</p>
 </div>
@@ -14714,7 +14718,7 @@ Also see <a href="#hfilev2">HBase file format with inline blocks (version 2)</a>
 </div>
 </div>
 <div class="sect4">
-<h5 id="_hfile_tool"><a class="anchor" href="#_hfile_tool"></a>HFile Tool</h5>
+<h5 id="hfile_tool"><a class="anchor" href="#hfile_tool"></a>HFile Tool</h5>
 <div class="paragraph">
 <p>To view a textualized version of HFile content, you can use the <code>org.apache.hadoop.hbase.io.hfile.HFile</code> tool.
 Type the following to see usage:</p>
@@ -14759,7 +14763,7 @@ For more information on compression, see <a href="#compression">Compression and
 </div>
 </div>
 <div class="sect3">
-<h4 id="_keyvalue"><a class="anchor" href="#_keyvalue"></a>68.7.6. KeyValue</h4>
+<h4 id="keyvalue"><a class="anchor" href="#keyvalue"></a>68.7.6. KeyValue</h4>
 <div class="paragraph">
 <p>The KeyValue class is the heart of data storage in HBase.
 KeyValue wraps a byte array and takes offsets and lengths into the passed array which specify where to start interpreting the content as KeyValue.</p>
@@ -14926,15 +14930,15 @@ Minor and major compactions differ in the following ways.</p>
 <div class="paragraph">
 <p><em>Minor compactions</em> usually select a small number of small, adjacent StoreFiles and rewrite them as a single StoreFile.
 Minor compactions do not drop (filter out) deletes or expired versions, because of potential side effects.
-See <a href="#compaction.and.deletes">[compaction.and.deletes]</a> and <a href="#compaction.and.versions">[compaction.and.versions]</a> for information on how deletes and versions are handled in relation to compactions.
+See <a href="#compaction.and.deletes">Compaction and Deletions</a> and <a href="#compaction.and.versions">Compaction and Versions</a> for information on how deletes and versions are handled in relation to compactions.
 The end result of a minor compaction is fewer, larger StoreFiles for a given Store.</p>
 </div>
 <div class="paragraph">
 <p>The end result of a <em>major compaction</em> is a single StoreFile per Store.
 Major compactions also process delete markers and max versions.
-See <a href="#compaction.and.deletes">[compaction.and.deletes]</a> and <a href="#compaction.and.versions">[compaction.and.versions]</a> for information on how deletes and versions are handled in relation to compactions.</p>
+See <a href="#compaction.and.deletes">Compaction and Deletions</a> and <a href="#compaction.and.versions">Compaction and Versions</a> for information on how deletes and versions are handled in relation to compactions.</p>
 </div>
-<div class="paragraph">
+<div id="compaction.and.deletes" class="paragraph">
 <div class="title">Compaction and Deletions</div>
 <p>When an explicit deletion occurs in HBase, the data is not actually deleted.
 Instead, a <em>tombstone</em> marker is written.
@@ -14943,7 +14947,7 @@ During a major compaction, the data is actually deleted, and the tombstone marke
 If the deletion happens because of an expired TTL, no tombstone is created.
 Instead, the expired data is filtered out and is not written back to the compacted StoreFile.</p>
 </div>
-<div class="paragraph">
+<div id="compaction.and.versions" class="paragraph">
 <div class="title">Compaction and Versions</div>
 <p>When you create a Column Family, you can specify the maximum number of versions to keep, by specifying <code>HColumnDescriptor.setMaxVersions(int versions)</code>.
 The default value is <code>3</code>.
@@ -15252,7 +15256,7 @@ producing four StoreFiles.</p>
 you are balancing write costs with read costs. Raising the value (to something like
 1.4) will have more write costs, because you will compact larger StoreFiles.
 However, during reads, HBase will need to seek through fewer StoreFiles to
-accomplish the read. Consider this approach if you cannot take advantage of <a href="#bloom">[bloom]</a>.</p>
+accomplish the read. Consider this approach if you cannot take advantage of <a href="#blooms">Bloom Filters</a>.</p>
 </li>
 <li>
 <p>Alternatively, you can lower this value to something like 1.0 to reduce the
@@ -15560,7 +15564,7 @@ Candidate because previous file was selected and 1 is less than the min-size, bu
 </td>
 <td class="content">
 <div class="title">Impact of Key Configuration Options</div>
-This information is now included in the configuration parameter table in <a href="#compaction.configuration.parameters">[compaction.configuration.parameters]</a>.
+This information is now included in the configuration parameter table in <a href="#compaction.parameters">Parameters Used by Compaction Algorithm</a>.
 </td>
 </tr>
 </table>
@@ -15756,7 +15760,7 @@ When at least <code>hbase.store.stripe.compaction.minFilesL0</code> such files (
 </div>
 <div id="ops.stripe.config.compact" class="paragraph">
 <div class="title">Normal Compaction Configuration and Stripe Compaction</div>
-<p>All the settings that apply to normal compactions (see <a href="#compaction.configuration.parameters">[compaction.configuration.parameters]</a>) apply to stripe compactions.
+<p>All the settings that apply to normal compactions (see <a href="#compaction.parameters">Parameters Used by Compaction Algorithm</a>) apply to stripe compactions.
 The exceptions are the minimum and maximum number of files, which are set to higher values by default because the files in stripes are smaller.
 To control these for stripe compactions, use <code>hbase.store.stripe.compaction.minFiles</code> and <code>hbase.store.stripe.compaction.maxFiles</code>, rather than <code>hbase.hstore.compaction.min</code> and <code>hbase.hstore.compaction.max</code>.</p>
 </div>
@@ -15842,7 +15846,7 @@ If the target table does not already exist in HBase, this tool will create the t
 <div class="sect2">
 <h3 id="arch.bulk.load.also"><a class="anchor" href="#arch.bulk.load.also"></a>69.4. See Also</h3>
 <div class="paragraph">
-<p>For more information about the referenced utilities, see <a href="#importtsv">[importtsv]</a> and  <a href="#completebulkload">[completebulkload]</a>.</p>
+<p>For more information about the referenced utilities, see <a href="#importtsv">ImportTsv</a> and  <a href="#completebulkload">CompleteBulkLoad</a>.</p>
 </div>
 <div class="paragraph">
 <p>See <a href="http://blog.cloudera.com/blog/2013/09/how-to-use-hbase-bulk-loading-and-why/">How-to: Use HBase Bulk Loading, and Why</a> for a recent blog on current state of bulk loading.</p>
@@ -19093,7 +19097,7 @@ dependencies.</p>
 </table>
 </div>
 <div class="sect3">
-<h4 id="_using_hbase_shell"><a class="anchor" href="#_using_hbase_shell"></a>87.3.1. Using HBase Shell</h4>
+<h4 id="load_coprocessor_in_shell"><a class="anchor" href="#load_coprocessor_in_shell"></a>87.3.1. Using HBase Shell</h4>
 <div class="olist arabic">
 <ol class="arabic">
 <li>
@@ -19236,7 +19240,7 @@ verifies whether the given class is actually contained in the jar file.
 <div class="sect2">
 <h3 id="_dynamic_unloading"><a class="anchor" href="#_dynamic_unloading"></a>87.4. Dynamic Unloading</h3>
 <div class="sect3">
-<h4 id="_using_hbase_shell_2"><a class="anchor" href="#_using_hbase_shell_2"></a>87.4.1. Using HBase Shell</h4>
+<h4 id="_using_hbase_shell"><a class="anchor" href="#_using_hbase_shell"></a>87.4.1. Using HBase Shell</h4>
 <div class="olist arabic">
 <ol class="arabic">
 <li>
@@ -19689,7 +19693,7 @@ logging.</p>
 <dt class="hdlist1">Coprocessor Configuration</dt>
 <dd>
 <p>If you do not want to load coprocessors from the HBase Shell, you can add their configuration
-properties to <code>hbase-site.xml</code>. In <a href="#load_coprocessor_in_shell">[load_coprocessor_in_shell]</a>, two arguments are
+properties to <code>hbase-site.xml</code>. In <a href="#load_coprocessor_in_shell">Using HBase Shell</a>, two arguments are
 set: <code>arg1=1,arg2=2</code>. These could have been added to <code>hbase-site.xml</code> as follows:</p>
 </dd>
 </dl>
@@ -19750,7 +19754,7 @@ ResultScanner scanner = table.getScanner(scan);
 <div class="paragraph">
 <p>HBase 0.98.5 introduced the ability to monitor some statistics relating to the amount of time
 spent executing a given Coprocessor.
-You can see these statistics via the HBase Metrics framework (see <a href="#hbase_metrics">[hbase_metrics]</a> or the Web UI
+You can see these statistics via the HBase Metrics framework (see <a href="#hbase_metrics">HBase Metrics</a> or the Web UI
 for a given Region Server, via the <em>Coprocessor Metrics</em> tab.
 These statistics are valuable for debugging and benchmarking the performance impact of a given
 Coprocessor on your cluster.
@@ -19758,7 +19762,7 @@ Tracked statistics include min, max, average, and 90th, 95th, and 99th percentil
 All times are shown in milliseconds.
 The statistics are calculated over Coprocessor execution samples recorded during the reporting
 interval, which is 10 seconds by default.
-The metrics sampling rate as described in <a href="#hbase_metrics">[hbase_metrics]</a>.</p>
+The metrics sampling rate as described in <a href="#hbase_metrics">HBase Metrics</a>.</p>
 </div>
 <div class="imageblock">
 <div class="content">
@@ -19951,7 +19955,7 @@ See <a href="#block.cache">Block Cache</a></p>
 <h2 id="perf.configurations"><a class="anchor" href="#perf.configurations"></a>94. HBase Configurations</h2>
 <div class="sectionbody">
 <div class="paragraph">
-<p>See <a href="#recommended_configurations">[recommended_configurations]</a>.</p>
+<p>See <a href="#recommended_configurations">Recommended Configurations</a>.</p>
 </div>
 <div class="sect2">
 <h3 id="perf.compactions.and.splits"><a class="anchor" href="#perf.compactions.and.splits"></a>94.1. Managing Compactions</h3>
@@ -20031,7 +20035,7 @@ This memory setting is often adjusted for the RegionServer process depending on
 <div class="sect2">
 <h3 id="perf.hstore.blockingstorefiles"><a class="anchor" href="#perf.hstore.blockingstorefiles"></a>94.7. <code>hbase.hstore.blockingStoreFiles</code></h3>
 <div class="paragraph">
-<p>See <a href="#hbase.hstore.blockingstorefiles">[hbase.hstore.blockingstorefiles]</a>.
+<p>See <a href="#hbase.hstore.blockingStoreFiles">[hbase.hstore.blockingStoreFiles]</a>.
 If there is blocking in the RegionServer logs, increasing this can help.</p>
 </div>
 </div>
@@ -20318,7 +20322,7 @@ Larger cell values require larger blocksizes.
 There is an inverse relationship between blocksize and the resulting StoreFile indexes (i.e., if the blocksize is doubled then the resulting indexes should be roughly halved).</p>
 </div>
 <div class="paragraph">
-<p>See <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html">HColumnDescriptor</a> and <a href="#store">[store]</a>for more information.</p>
+<p>See <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html">HColumnDescriptor</a> and <a href="#store">Store</a>for more information.</p>
 </div>
 </div>
 <div class="sect2">
@@ -20346,7 +20350,7 @@ When it&#8217;s in-memory (e.g., in the MemStore) or on the wire (e.g., transfer
 So while using ColumnFamily compression is a best practice, but it&#8217;s not going to completely eliminate the impact of over-sized Keys, over-sized ColumnFamily names, or over-sized Column names.</p>
 </div>
 <div class="paragraph">
-<p>See <a href="#keysize">Try to minimize row and column sizes</a> on for schema design tips, and <a href="#keyvalue">[keyvalue]</a> for more information on HBase stores data internally.</p>
+<p>See <a href="#keysize">Try to minimize row and column sizes</a> on for schema design tips, and <a href="#keyvalue">KeyValue</a> for more information on HBase stores data internally.</p>
 </div>
 </div>
 </div>
@@ -20738,7 +20742,7 @@ If this is set to 0 (the default), hedged reads are disabled.</p>
 </div>
 <div class="paragraph">
 <p>Use the following metrics to tune the settings for hedged reads on your cluster.
-See <a href="#hbase_metrics">[hbase_metrics]</a>  for more information.</p>
+See <a href="#hbase_metrics">HBase Metrics</a>  for more information.</p>
 </div>
 <div class="ulist">
 <div class="title">Metrics for Hedged Reads</div>
@@ -20900,7 +20904,7 @@ For example, short latency-sensitive disk reads will have to wait in line behind
 MR jobs that write to HBase will also generate flushes and compactions, which will in turn invalidate blocks in the <a href="#block.cache">Block Cache</a>.</p>
 </div>
 <div class="paragraph">
-<p>If you need to process the data from your live HBase cluster in MR, you can ship the deltas with <a href="#copy.table">[copy.table]</a> or use replication to get the new data in real time on the OLAP cluster.
+<p>If you need to process the data from your live HBase cluster in MR, you can ship the deltas with <a href="#copy.table">CopyTable</a> or use replication to get the new data in real time on the OLAP cluster.
 In the worst case, if you really need to collocate both, set MR to use less Map and Reduce slots than you&#8217;d normally configure, possibly just one.</p>
 </div>
 <div class="paragraph">
@@ -21196,7 +21200,7 @@ A quality question that includes all context and exhibits evidence the author ha
 <p>The RegionServer web UI lists online regions and their start/end keys, as well as point-in-time RegionServer metrics (requests, regions, storeFileIndexSize, compactionQueueSize, etc.).</p>
 </div>
 <div class="paragraph">
-<p>See <a href="#hbase_metrics">[hbase_metrics]</a> for more information in metric definitions.</p>
+<p>See <a href="#hbase_metrics">HBase Metrics</a> for more information in metric definitions.</p>
 </div>
 </div>
 <div class="sect3">
@@ -21536,7 +21540,7 @@ You can also tail all the logs at the same time, edit files, etc.</p>
 <h2 id="trouble.client"><a class="anchor" href="#trouble.client"></a>109. Client</h2>
 <div class="sectionbody">
 <div class="paragraph">
-<p>For more information on the HBase client, see <a href="#client">client</a>.</p>
+<p>For more information on the HBase client, see <a href="#architecture.client">client</a>.</p>
 </div>
 <div class="sect2">
 <h3 id="_missed_scan_results_due_to_mismatch_of_code_hbase_client_scanner_max_result_size_code_between_client_and_server"><a class="anchor" href="#_missed_scan_results_due_to_mismatch_of_code_hbase_client_scanner_max_result_size_code_between_client_and_server"></a>109.1. Missed Scan Results Due To Mismatch Of <code>hbase.client.scanner.max.result.size</code> Between Client and Server</h3>
@@ -22207,7 +22211,7 @@ to use. Was=myhost-1234, Now=ip-10-55-88-99.ec2.internal</pre>
 <h2 id="trouble.master"><a class="anchor" href="#trouble.master"></a>114. Master</h2>
 <div class="sectionbody">
 <div class="paragraph">
-<p>For more information on the Master, see <a href="#master">master</a>.</p>
+<p>For more information on the Master, see <a href="#architecture.master">master</a>.</p>
 </div>
 <div class="sect2">
 <h3 id="trouble.master.startup"><a class="anchor" href="#trouble.master.startup"></a>114.1. Startup Errors</h3>
@@ -23309,7 +23313,7 @@ If inconsistencies, run <code>hbck</code> a few times because the inconsistency
 <div class="sect2">
 <h3 id="hfile_tool2"><a class="anchor" href="#hfile_tool2"></a>128.5. HFile Tool</h3>
 <div class="paragraph">
-<p>See <a href="#hfile_tool">[hfile_tool]</a>.</p>
+<p>See <a href="#hfile_tool">HFile Tool</a>.</p>
 </div>
 </div>
 <div class="sect2">
@@ -23382,7 +23386,7 @@ In those versions, you can print the contents of a WAL using the same configurat
 </div>
 </div>
 <div class="sect2">
-<h3 id="_copytable"><a class="anchor" href="#_copytable"></a>128.8. CopyTable</h3>
+<h3 id="copy.table"><a class="anchor" href="#copy.table"></a>128.8. CopyTable</h3>
 <div class="paragraph">
 <p>CopyTable is a utility that can copy part or of all of a table, either to the same cluster or another cluster.
 The target table must first exist.
@@ -23464,7 +23468,7 @@ For performance consider the following general options:
 </div>
 </div>
 <div class="sect2">
-<h3 id="_export"><a class="anchor" href="#_export"></a>128.9. Export</h3>
+<h3 id="export"><a class="anchor" href="#export"></a>128.9. Export</h3>
 <div class="paragraph">
 <p>Export is a utility that will dump the contents of table to HDFS in a sequence file.
 Invoke via:</p>
@@ -23495,7 +23499,7 @@ specifying column families and applying filters during the export.
 </div>
 </div>
 <div class="sect2">
-<h3 id="_import"><a class="anchor" href="#_import"></a>128.10. Import</h3>
+<h3 id="import"><a class="anchor" href="#import"></a>128.10. Import</h3>
 <div class="paragraph">
 <p>Import is a utility that will load data that has been exported back into HBase.
 Invoke via:</p>
@@ -23527,7 +23531,7 @@ To see usage instructions, run the command with no options.
 </div>
 </div>
 <div class="sect2">
-<h3 id="_importtsv"><a class="anchor" href="#_importtsv"></a>128.11. ImportTsv</h3>
+<h3 id="importtsv"><a class="anchor" href="#importtsv"></a>128.11. ImportTsv</h3>
 <div class="paragraph">
 <p>ImportTsv is a utility that will load data in TSV format into HBase.
 It has two distinct usages: loading data from TSV format in HDFS into HBase via Puts, and preparing StoreFiles to be loaded via the <code>completebulkload</code>.</p>
@@ -23632,7 +23636,7 @@ The second and third columns in the file will be imported as "d:c1" and "d:c2",
 </div>
 </div>
 <div class="sect2">
-<h3 id="_completebulkload"><a class="anchor" href="#_completebulkload"></a>128.12. CompleteBulkLoad</h3>
+<h3 id="completebulkload"><a class="anchor" href="#completebulkload"></a>128.12. CompleteBulkLoad</h3>
 <div class="paragraph">
 <p>The <code>completebulkload</code> utility will move generated StoreFiles into an HBase table.
 This utility is often used in conjunction with output from <a href="#importtsv">importtsv</a>.</p>
@@ -23976,7 +23980,7 @@ It will verify the region deployed in the new location before it will moves the
 At this point, the <em>graceful_stop.sh</em> tells the RegionServer <code>stop</code>.
 The master will at this point notice the RegionServer gone but all regions will have already been redeployed and because the RegionServer went down cleanly, there will be no WAL logs to split.</p>
 </div>
-<div class="admonitionblock note">
+<div id="lb" class="admonitionblock note">
 <table>
 <tr>
 <td class="icon">
@@ -24222,7 +24226,7 @@ In this case, or if you are in a OLAP environment and require having locality, t
 </div>
 </div>
 <div class="sect1">
-<h2 id="_hbase_metrics"><a class="anchor" href="#_hbase_metrics"></a>131. HBase Metrics</h2>
+<h2 id="hbase_metrics"><a class="anchor" href="#hbase_metrics"></a>131. HBase Metrics</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>HBase emits metrics which adhere to the <a href="http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/metrics/package-summary.html">Hadoop metrics</a> API.
@@ -24964,7 +24968,7 @@ The following configuration settings are recommended for maintaining an even dis
 </li>
 </ul>
 </div>
-<div class="paragraph">
+<div id="cluster.replication.preserving.tags" class="paragraph">
 <div class="title">Preserving Tags During Replication</div>
 <p>By default, the codec used for replication between clusters strips tags, such as cell-level ACLs, from cells.
 To prevent the tags from being stripped, you can use a different codec which does not strip them.
@@ -25270,7 +25274,7 @@ The new layout will be:</p>
 <p>HBase provides the following mechanisms for managing the performance of a cluster
 handling multiple workloads:
 . <a href="#quota">Quotas</a>
-. <a href="#request-queues">[request-queues]</a>
+. <a href="#request_queues">Request Queues</a>
 . <a href="#multiple-typed-queues">Multiple-Typed Queues</a></p>
 </div>
 <div class="sect2">
@@ -25285,7 +25289,7 @@ the following limits:</p>
 <p><a href="#request-quotas">The number or size of requests(read, write, or read+write) in a given timeframe</a></p>
 </li>
 <li>
-<p><a href="#namespace-quotas">The number of tables allowed in a namespace</a></p>
+<p><a href="#namespace_quotas">The number of tables allowed in a namespace</a></p>
 </li>
 </ol>
 </div>
@@ -25547,7 +25551,7 @@ See the HBase page on <a href="http://hbase.apache.org/book.html#replication">re
 <div class="sect2">
 <h3 id="ops.backup.live.copytable"><a class="anchor" href="#ops.backup.live.copytable"></a>135.3. Live Cluster Backup - CopyTable</h3>
 <div class="paragraph">
-<p>The <a href="#copytable">copytable</a> utility could either be used to copy data from one table to another on the same cluster, or to copy data to another table on another cluster.</p>
+<p>The <a href="#copy.table">copytable</a> utility could either be used to copy data from one table to another on the same cluster, or to copy data to another table on another cluster.</p>
 </div>
 <div class="paragraph">
 <p>Since the cluster is up, there is a risk that edits could be missed in the copy process.</p>
@@ -25948,7 +25952,7 @@ See <a href="#compaction">compaction</a> for some details.</p>
 <div class="paragraph">
 <p>When provisioning for large data sizes, however, it&#8217;s good to keep in mind that compactions can affect write throughput.
 Thus, for write-intensive workloads, you may opt for less frequent compactions and more store files per regions.
-Minimum number of files for compactions (<code>hbase.hstore.compaction.min</code>) can be set to higher value; <a href="#hbase.hstore.blockingstorefiles">hbase.hstore.blockingStoreFiles</a> should also be increased, as more files might accumulate in such case.
+Minimum number of files for compactions (<code>hbase.hstore.compaction.min</code>) can be set to higher value; <a href="#hbase.hstore.blockingStoreFiles">hbase.hstore.blockingStoreFiles</a> should also be increased, as more files might accumulate in such case.
 You may also consider manually managing compactions: <a href="#managed.compactions">managed.compactions</a></p>
 </div>
 </div>
@@ -26112,7 +26116,7 @@ See <a href="http://hbase.apache.org/source-repository.html">Source Code
 <h2 id="_ides"><a class="anchor" href="#_ides"></a>141. IDEs</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="_eclipse"><a class="anchor" href="#_eclipse"></a>141.1. Eclipse</h3>
+<h3 id="eclipse"><a class="anchor" href="#eclipse"></a>141.1. Eclipse</h3>
 <div class="sect3">
 <h4 id="eclipse.code.formatting"><a class="anchor" href="#eclipse.code.formatting"></a>141.1.1. Code Formatting</h4>
 <div class="paragraph">
@@ -28313,7 +28317,7 @@ However, at times it is easier to refer to different version of a patch if you a
 </li>
 </ul>
 </div>
-<div class="dlist">
+<div id="patching.methods" class="dlist">
 <div class="title">Methods to Create Patches</div>
 <dl>
 <dt class="hdlist1">Eclipse</dt>
@@ -28366,7 +28370,7 @@ See <a href="#hbase.tests">hbase.tests</a> for more on how the annotations work.
 </div>
 </div>
 <div class="sect3">
-<h4 id="_reviewboard"><a class="anchor" href="#_reviewboard"></a>148.8.4. ReviewBoard</h4>
+<h4 id="reviewboard"><a class="anchor" href="#reviewboard"></a>148.8.4. ReviewBoard</h4>
 <div class="paragraph">
 <p>Patches larger than one screen, or patches that will be tricky to review, should go through <a href="http://reviews.apache.org">ReviewBoard</a>.</p>
 </div>
@@ -28783,7 +28787,7 @@ For an introduction to JUnit, see <a href="https://github.com/junit-team/junit/w
 </div>
 </div>
 <div class="sect1">
-<h2 id="_mockito"><a class="anchor" href="#_mockito"></a>150. Mockito</h2>
+<h2 id="mockito"><a class="anchor" href="#mockito"></a>150. Mockito</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Mockito is a mocking framework.
@@ -29161,7 +29165,7 @@ In the example below we have ZooKeeper persist to <em>/user/local/zookeeper</em>
 <div class="paragraph">
 <p>The newer version, the better.
 For example, some folks have been bitten by <a href="https://issues.apache.org/jira/browse/ZOOKEEPER-1277">ZOOKEEPER-1277</a>.
-If running zookeeper 3.5+, you can ask hbase to make use of the new multi operation by enabling <a href="#hbase.zookeeper.usemulti">hbase.zookeeper.useMulti</a>" in your <em>hbase-site.xml</em>.</p>
+If running zookeeper 3.5+, you can ask hbase to make use of the new multi operation by enabling <a href="#hbase.zookeeper.useMulti">hbase.zookeeper.useMulti</a>" in your <em>hbase-site.xml</em>.</p>
 </div>
 </td>
 </tr>
@@ -29739,7 +29743,7 @@ the issue there. When you have developed a potential fix, submit it for review.
 If it addresses the issue and is seen as an improvement, one of the HBase committers
 will commit it to one or more branches, as appropriate.</p>
 </div>
-<div class="paragraph">
+<div id="submit_doc_patch_procedure" class="paragraph">
 <div class="title">Procedure: Suggested Work flow for Submitting Patches</div>
 <p>This procedure goes into more detail than Git pros will need, but is included
 in this appendix so that people unfamiliar with Git can feel confident contributing
@@ -30514,7 +30518,7 @@ Reference Guide are <code>java</code>, <code>xml</code>, <code>sql</code>, and <
 </dd>
 <dt class="hdlist1">What APIs does HBase support?</dt>
 <dd>
-<p>See <a href="#datamodel">Data Model</a>, <a href="#architecture.client">Client</a>, and <a href="#nonjava.jvm">[nonjava.jvm]</a>.</p>
+<p>See <a href="#datamodel">Data Model</a>, <a href="#architecture.client">Client</a>, and <a href="#external_apis">Apache HBase External APIs</a>.</p>
 </dd>
 </dl>
 </div>
@@ -31499,7 +31503,7 @@ For more details about Prefix Tree encoding, see <a href="https://issues.apache.
 </dl>
 </div>
 <div class="sect2">
-<h3 id="_which_compressor_or_data_block_encoder_to_use"><a class="anchor" href="#_which_compressor_or_data_block_encoder_to_use"></a>E.1. Which Compressor or Data Block Encoder To Use</h3>
+<h3 id="data.block.encoding.types"><a class="anchor" href="#data.block.encoding.types"></a>E.1. Which Compressor or Data Block Encoder To Use</h3>
 <div class="paragraph">
 <p>The compression or codec type to use depends on the characteristics of your data. Choosing the wrong type could cause your data to take more space rather than less, and can have performance implications.</p>
 </div>
@@ -31702,7 +31706,7 @@ See <a href="#hbase.regionserver.codecs">hbase.regionserver.codecs</a>.</p>
 <div class="title">Configure LZ4 Support</div>
 <p>LZ4 support is bundled with Hadoop.
 Make sure the hadoop shared library (libhadoop.so) is accessible when you start HBase.
-After configuring your platform (see <a href="#hbase.native.platform">hbase.native.platform</a>), you can make a symbolic link from HBase to the native Hadoop libraries.
+After configuring your platform (see <a href="#hadoop.native.lib">hadoop.native.lib</a>), you can make a symbolic link from HBase to the native Hadoop libraries.
 This assumes the two software installs are colocated.
 For example, if my 'platform' is Linux-amd64-64:</p>
 </div>
@@ -31969,7 +31973,7 @@ DESCRIPTION                                          ENABLED
 </div>
 </div>
 <div class="sect1">
-<h2 id="_ycsb"><a class="anchor" href="#_ycsb"></a>Appendix G: YCSB</h2>
+<h2 id="ycsb"><a class="anchor" href="#ycsb"></a>Appendix G: YCSB</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p><a href="https://github.com/brianfrankcooper/YCSB/">YCSB: The
@@ -33185,7 +33189,7 @@ The server will return cellblocks compressed using this same compressor as long
 <div id="footer">
 <div id="footer-text">
 Version 2.0.0-SNAPSHOT<br>
-Last updated 2016-02-22 15:06:54 UTC
+Last updated 2016-02-23 14:53:43 UTC
 </div>
 </div>
 </body>

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/d02dd5db/bulk-loads.html
----------------------------------------------------------------------
diff --git a/bulk-loads.html b/bulk-loads.html
index b0b4013..04b9614 100644
--- a/bulk-loads.html
+++ b/bulk-loads.html
@@ -7,7 +7,7 @@
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20160222" />
+    <meta name="Date-Revision-yyyymmdd" content="20160223" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Apache HBase &#x2013;  
       Bulk Loads in Apache HBase (TM)
@@ -305,7 +305,7 @@ under the License. -->
                         <a href="http://www.apache.org/">The Apache Software Foundation</a>.
             All rights reserved.      
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2016-02-22</li>
+                  <li id="publishDate" class="pull-right">Last Published: 2016-02-23</li>
             </p>
                 </div>