Posted to commits@hbase.apache.org by gi...@apache.org on 2018/10/26 14:54:14 UTC

[35/38] hbase-site git commit: Published site at 0ab7c3a18906fcf33af38da29c211ac7fcb46492.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fa850293/book.html
----------------------------------------------------------------------
diff --git a/book.html b/book.html
index 791c82d..60f4e49 100644
--- a/book.html
+++ b/book.html
@@ -342,21 +342,20 @@
 <ul class="sectlevel1">
 <li><a href="#appendix_contributing_to_documentation">Appendix A: Contributing to Documentation</a></li>
 <li><a href="#faq">Appendix B: FAQ</a></li>
-<li><a href="#hbck.in.depth">Appendix C: hbck In Depth</a></li>
-<li><a href="#appendix_acl_matrix">Appendix D: Access Control Matrix</a></li>
-<li><a href="#compression">Appendix E: Compression and Data Block Encoding In HBase</a></li>
-<li><a href="#sql">Appendix F: SQL over HBase</a></li>
-<li><a href="#ycsb">Appendix G: YCSB</a></li>
-<li><a href="#_hfile_format_2">Appendix H: HFile format</a></li>
-<li><a href="#other.info">Appendix I: Other Information About HBase</a></li>
-<li><a href="#hbase.history">Appendix J: HBase History</a></li>
-<li><a href="#asf">Appendix K: HBase and the Apache Software Foundation</a></li>
-<li><a href="#orca">Appendix L: Apache HBase Orca</a></li>
-<li><a href="#tracing">Appendix M: Enabling Dapper-like Tracing in HBase</a></li>
+<li><a href="#appendix_acl_matrix">Appendix C: Access Control Matrix</a></li>
+<li><a href="#compression">Appendix D: Compression and Data Block Encoding In HBase</a></li>
+<li><a href="#sql">Appendix E: SQL over HBase</a></li>
+<li><a href="#ycsb">Appendix F: YCSB</a></li>
+<li><a href="#_hfile_format_2">Appendix G: HFile format</a></li>
+<li><a href="#other.info">Appendix H: Other Information About HBase</a></li>
+<li><a href="#hbase.history">Appendix I: HBase History</a></li>
+<li><a href="#asf">Appendix J: HBase and the Apache Software Foundation</a></li>
+<li><a href="#orca">Appendix K: Apache HBase Orca</a></li>
+<li><a href="#tracing">Appendix L: Enabling Dapper-like Tracing in HBase</a></li>
 <li><a href="#tracing.client.modifications">200. Client Modifications</a></li>
 <li><a href="#tracing.client.shell">201. Tracing from HBase Shell</a></li>
-<li><a href="#hbase.rpc">Appendix N: 0.95 RPC Specification</a></li>
-<li><a href="#_known_incompatibilities_among_hbase_versions">Appendix O: Known Incompatibilities Among HBase Versions</a></li>
+<li><a href="#hbase.rpc">Appendix M: 0.95 RPC Specification</a></li>
+<li><a href="#_known_incompatibilities_among_hbase_versions">Appendix N: Known Incompatibilities Among HBase Versions</a></li>
 <li><a href="#_hbase_2_0_incompatible_changes">202. HBase 2.0 Incompatible Changes</a></li>
 </ul>
 </li>
@@ -6775,7 +6774,10 @@ Quitting...</code></pre>
 <p>You <strong>must not</strong> use an HBase 1.x version of HBCK against an HBase 2.0+ cluster. HBCK is strongly tied to the HBase server version. Using the HBCK tool from an earlier release against an HBase 2.0+ cluster will destructively alter said cluster in unrecoverable ways.</p>
 </div>
 <div class="paragraph">
-<p>As of HBase 2.0, HBCK is a read-only tool that can report the status of some non-public system internals. You should not rely on the format nor content of these internals to remain consistent across HBase releases.</p>
+<p>As of HBase 2.0, HBCK (a.k.a. <em>HBCK1</em> or <em>hbck1</em>) is a read-only tool that can report the status of some non-public system internals. You should not rely on the format or content of these internals to remain consistent across HBase releases.</p>
+</div>
+<div class="paragraph">
+<p>To read about HBCK&#8217;s replacement, see <a href="#HBCK2">HBase <code>HBCK2</code></a> in <a href="#ops_mgt">Apache HBase Operational Management</a>.</p>
 </div>
 <div id="upgrade2.0.removed.configs" class="paragraph">
 <div class="title">Configuration settings no longer in HBase 2.0+</div>
@@ -26954,7 +26956,8 @@ Options:
 Commands:
 Some commands take arguments. Pass no args or -h for usage.
   shell           Run the HBase shell
-  hbck            Run the hbase 'fsck' tool
+  hbck            Run the HBase 'fsck' tool. Defaults to read-only hbck1.
+                  Pass '-j /path/to/HBCK2.jar' to run hbase-2.x HBCK2.
   snapshot        Tool for managing snapshots
   wal             Write-ahead-log analyzer
   hfile           Store file analyzer
@@ -27399,25 +27402,52 @@ Note that this command is in a different package than the others.</p>
 <div class="sect2">
 <h3 id="hbck"><a class="anchor" href="#hbck"></a>149.5. HBase <code>hbck</code></h3>
 <div class="paragraph">
-<p>To run <code>hbck</code> against your HBase cluster run <code>$./bin/hbase hbck</code>. At the end of the command&#8217;s output it prints <code>OK</code> or <code>INCONSISTENCY</code>.
-If your cluster reports inconsistencies, pass <code>-details</code> to see more detail emitted.
-If inconsistencies, run <code>hbck</code> a few times because the inconsistency may be transient (e.g. cluster is starting up or a region is splitting).
- Passing <code>-fix</code> may correct the inconsistency (This is an experimental feature).</p>
+<p>The <code>hbck</code> tool that shipped with hbase-1.x has been made read-only in hbase-2.x. It is not able to repair
+hbase-2.x clusters as hbase internals have changed. Nor should its assessments in read-only mode be
+trusted, as it does not understand hbase-2.x operation.</p>
+</div>
+<div class="paragraph">
+<p>A new tool, <a href="#HBCK2">HBase <code>HBCK2</code></a>, described in the next section, replaces <code>hbck</code>.</p>
+</div>
+</div>
+<div class="sect2">
+<h3 id="HBCK2"><a class="anchor" href="#HBCK2"></a>149.6. HBase <code>HBCK2</code></h3>
+<div class="paragraph">
+<p><code>HBCK2</code> is the successor to <a href="#hbck">HBase <code>hbck</code></a>, the hbase-1.x fix tool (a.k.a. <code>hbck1</code>). Use it in place of <code>hbck1</code>
+when making repairs against hbase-2.x installs.</p>
+</div>
+<div class="paragraph">
+<p><code>HBCK2</code> does not ship as part of hbase. It can be found as a subproject of the companion
+<a href="https://github.com/apache/hbase-operator-tools">hbase-operator-tools</a> repository at
+<a href="https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2">Apache HBase HBCK2 Tool</a>.
+<code>HBCK2</code> was moved out of hbase so it could evolve at a cadence apart from that of hbase core.</p>
+</div>
+<div class="paragraph">
+<p>See the <a href="https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2">HBCK2</a> Home Page
+for how <code>HBCK2</code> differs from <code>hbck1</code>, and for how to build and use it.</p>
 </div>
 <div class="paragraph">
-<p>For more information, see <a href="#hbck.in.depth">hbck In Depth</a>.</p>
+<p>Once built, you can run <code>HBCK2</code> as follows:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bourne">$ hbase hbck -j /path/to/HBCK2.jar</code></pre>
+</div>
+</div>
+<div class="paragraph">
+<p>This will generate <code>HBCK2</code> usage describing commands and options.</p>
 </div>
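+<div class="paragraph">
+<p>Commands are appended to that same invocation. As an illustrative sketch only (the <code>assigns</code> command and region name below are examples; consult the generated usage output for what your <code>HBCK2</code> build actually supports):</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bourne">$ hbase hbck -j /path/to/HBCK2.jar assigns 1588230740</code></pre>
+</div>
+</div>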
 </div>
 <div class="sect2">
-<h3 id="hfile_tool2"><a class="anchor" href="#hfile_tool2"></a>149.6. HFile Tool</h3>
+<h3 id="hfile_tool2"><a class="anchor" href="#hfile_tool2"></a>149.7. HFile Tool</h3>
 <div class="paragraph">
 <p>See <a href="#hfile_tool">HFile Tool</a>.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_wal_tools"><a class="anchor" href="#_wal_tools"></a>149.7. WAL Tools</h3>
+<h3 id="_wal_tools"><a class="anchor" href="#_wal_tools"></a>149.8. WAL Tools</h3>
 <div class="sect3">
-<h4 id="hlog_tool"><a class="anchor" href="#hlog_tool"></a>149.7.1. FSHLog tool</h4>
+<h4 id="hlog_tool"><a class="anchor" href="#hlog_tool"></a>149.8.1. FSHLog tool</h4>
 <div class="paragraph">
 <p>The main method on <code>FSHLog</code> offers manual split and dump facilities.
 Pass it WALs or the product of a split, the content of the <em>recovered.edits</em>.
@@ -27478,13 +27508,13 @@ In those versions, you can print the contents of a WAL using the same configurat
 </div>
 </div>
 <div class="sect2">
-<h3 id="compression.tool"><a class="anchor" href="#compression.tool"></a>149.8. Compression Tool</h3>
+<h3 id="compression.tool"><a class="anchor" href="#compression.tool"></a>149.9. Compression Tool</h3>
 <div class="paragraph">
 <p>See <a href="#compression.test">compression.test</a>.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="copy.table"><a class="anchor" href="#copy.table"></a>149.9. CopyTable</h3>
+<h3 id="copy.table"><a class="anchor" href="#copy.table"></a>149.10. CopyTable</h3>
 <div class="paragraph">
 <p>CopyTable is a utility that can copy part or all of a table, either to the same cluster or another cluster.
 The target table must first exist.
@@ -27566,7 +27596,7 @@ For performance consider the following general options:
 </div>
 </div>
 <div class="sect2">
-<h3 id="export"><a class="anchor" href="#export"></a>149.10. Export</h3>
+<h3 id="export"><a class="anchor" href="#export"></a>149.11. Export</h3>
 <div class="paragraph">
 <p>Export is a utility that will dump the contents of a table to HDFS in a sequence file.
 The Export can be run via a Coprocessor Endpoint or MapReduce. Invoke via:</p>
@@ -27682,7 +27712,7 @@ specifying column families and applying filters during the export.
 </div>
 </div>
 <div class="sect2">
-<h3 id="import"><a class="anchor" href="#import"></a>149.11. Import</h3>
+<h3 id="import"><a class="anchor" href="#import"></a>149.12. Import</h3>
 <div class="paragraph">
 <p>Import is a utility that will load data that has been exported back into HBase.
 Invoke via:</p>
@@ -27714,7 +27744,7 @@ To see usage instructions, run the command with no options.
 </div>
 </div>
 <div class="sect2">
-<h3 id="importtsv"><a class="anchor" href="#importtsv"></a>149.12. ImportTsv</h3>
+<h3 id="importtsv"><a class="anchor" href="#importtsv"></a>149.13. ImportTsv</h3>
 <div class="paragraph">
 <p>ImportTsv is a utility that will load data in TSV format into HBase.
 It has two distinct usages: loading data from TSV format in HDFS into HBase via Puts, and preparing StoreFiles to be loaded via the <code>completebulkload</code>.</p>
@@ -27739,7 +27769,7 @@ It has two distinct usages: loading data from TSV format in HDFS into HBase via
 <p>These generated StoreFiles can be loaded into HBase via <a href="#completebulkload">completebulkload</a>.</p>
 </div>
 <div class="sect3">
-<h4 id="importtsv.options"><a class="anchor" href="#importtsv.options"></a>149.12.1. ImportTsv Options</h4>
+<h4 id="importtsv.options"><a class="anchor" href="#importtsv.options"></a>149.13.1. ImportTsv Options</h4>
 <div class="paragraph">
 <p>Running <code>ImportTsv</code> with no arguments prints brief usage information:</p>
 </div>
@@ -27771,7 +27801,7 @@ Other options that may be specified with -D include:
 </div>
 </div>
 <div class="sect3">
-<h4 id="importtsv.example"><a class="anchor" href="#importtsv.example"></a>149.12.2. ImportTsv Example</h4>
+<h4 id="importtsv.example"><a class="anchor" href="#importtsv.example"></a>149.13.2. ImportTsv Example</h4>
 <div class="paragraph">
 <p>For example, assume that we are loading data into a table called 'datatsv' with a ColumnFamily called 'd' with two columns "c1" and "c2".</p>
 </div>
@@ -27806,20 +27836,20 @@ The second and third columns in the file will be imported as "d:c1" and "d:c2",
 </div>
 </div>
 <div class="sect3">
-<h4 id="importtsv.warning"><a class="anchor" href="#importtsv.warning"></a>149.12.3. ImportTsv Warning</h4>
+<h4 id="importtsv.warning"><a class="anchor" href="#importtsv.warning"></a>149.13.3. ImportTsv Warning</h4>
 <div class="paragraph">
 <p>If you are preparing a lot of data for bulk loading, make sure the target HBase table is pre-split appropriately.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="importtsv.also"><a class="anchor" href="#importtsv.also"></a>149.12.4. See Also</h4>
+<h4 id="importtsv.also"><a class="anchor" href="#importtsv.also"></a>149.13.4. See Also</h4>
 <div class="paragraph">
 <p>For more information about bulk-loading HFiles into HBase, see <a href="#arch.bulk.load">arch.bulk.load</a></p>
 </div>
 </div>
 </div>
 <div class="sect2">
-<h3 id="completebulkload"><a class="anchor" href="#completebulkload"></a>149.13. CompleteBulkLoad</h3>
+<h3 id="completebulkload"><a class="anchor" href="#completebulkload"></a>149.14. CompleteBulkLoad</h3>
 <div class="paragraph">
 <p>The <code>completebulkload</code> utility will move generated StoreFiles into an HBase table.
 This utility is often used in conjunction with output from <a href="#importtsv">importtsv</a>.</p>
@@ -27840,7 +27870,7 @@ This utility is often used in conjunction with output from <a href="#importtsv">
 </div>
 </div>
 <div class="sect3">
-<h4 id="completebulkload.warning"><a class="anchor" href="#completebulkload.warning"></a>149.13.1. CompleteBulkLoad Warning</h4>
+<h4 id="completebulkload.warning"><a class="anchor" href="#completebulkload.warning"></a>149.14.1. CompleteBulkLoad Warning</h4>
 <div class="paragraph">
 <p>Data generated via MapReduce is often created with file permissions that are not compatible with the running HBase process.
 Assuming you&#8217;re running HDFS with permissions enabled, those permissions will need to be updated before you run CompleteBulkLoad.</p>
@@ -27851,7 +27881,7 @@ Assuming you&#8217;re running HDFS with permissions enabled, those permissions w
 </div>
 </div>
 <div class="sect2">
-<h3 id="walplayer"><a class="anchor" href="#walplayer"></a>149.14. WALPlayer</h3>
+<h3 id="walplayer"><a class="anchor" href="#walplayer"></a>149.15. WALPlayer</h3>
 <div class="paragraph">
 <p>WALPlayer is a utility to replay WAL files into HBase.</p>
 </div>
@@ -27883,7 +27913,7 @@ The output can optionally be mapped to another set of tables.</p>
 To NOT run WALPlayer as a mapreduce job on your cluster, force it to run all in the local process by adding the flags <code>-Dmapreduce.jobtracker.address=local</code> on the command line.</p>
 </div>
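 <div class="paragraph">
 <p>For example, a local-process invocation might look as follows (the WAL directory and table names are illustrative):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase org.apache.hadoop.hbase.mapreduce.WALPlayer -Dmapreduce.jobtracker.address=local /backuplogdir oldTable1 newTable1</code></pre>
 </div>
 </div>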
 <div class="sect3">
-<h4 id="walplayer.options"><a class="anchor" href="#walplayer.options"></a>149.14.1. WALPlayer Options</h4>
+<h4 id="walplayer.options"><a class="anchor" href="#walplayer.options"></a>149.15.1. WALPlayer Options</h4>
 <div class="paragraph">
 <p>Running <code>WALPlayer</code> with no arguments prints brief usage information:</p>
 </div>
@@ -27920,7 +27950,7 @@ For performance also consider the following options:
 </div>
 </div>
 <div class="sect2">
-<h3 id="rowcounter"><a class="anchor" href="#rowcounter"></a>149.15. RowCounter</h3>
+<h3 id="rowcounter"><a class="anchor" href="#rowcounter"></a>149.16. RowCounter</h3>
 <div class="paragraph">
 <p><a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/RowCounter.html">RowCounter</a> is a mapreduce job to count all the rows of a table.
 This is a good utility to use as a sanity check to ensure that HBase can read all the blocks of a table if there are any concerns of metadata inconsistency.
@@ -27941,7 +27971,7 @@ The scanned data can be limited based on keys using the <code>--range=[startKey]
 </div>
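 <div class="paragraph">
 <p>For example, a key-bounded count over an illustrative table named <code>mytable</code> (the table name and row keys are examples only) might be:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter mytable --range=row_0100,row_0200</code></pre>
 </div>
 </div>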
 </div>
 <div class="sect2">
-<h3 id="cellcounter"><a class="anchor" href="#cellcounter"></a>149.16. CellCounter</h3>
+<h3 id="cellcounter"><a class="anchor" href="#cellcounter"></a>149.17. CellCounter</h3>
 <div class="paragraph">
 <p>HBase ships another diagnostic mapreduce job called <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/CellCounter.html">CellCounter</a>.
 Like RowCounter, it scans a whole table, but it gathers more fine-grained statistics.
@@ -27987,14 +28017,14 @@ Specify a time range to scan the table by using the <code>--starttime=&lt;startt
 </div>
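 <div class="paragraph">
 <p>As a sketch, a time-bounded run against an illustrative table (the table name, output directory, separator, and timestamps are examples only) might be:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase org.apache.hadoop.hbase.mapreduce.CellCounter mytable /tmp/cellcounter_out ";" --starttime=1510000000000 --endtime=1510090000000</code></pre>
 </div>
 </div>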
 </div>
 <div class="sect2">
-<h3 id="_mlockall"><a class="anchor" href="#_mlockall"></a>149.17. mlockall</h3>
+<h3 id="_mlockall"><a class="anchor" href="#_mlockall"></a>149.18. mlockall</h3>
 <div class="paragraph">
 <p>You can optionally pin your servers in physical memory, making them less likely to be swapped out in oversubscribed environments, by having the servers call <a href="http://linux.die.net/man/2/mlockall">mlockall</a> on startup.
 See <a href="https://issues.apache.org/jira/browse/HBASE-4391">HBASE-4391 Add ability to start RS as root and call mlockall</a> for how to build the optional library and have it run on startup.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="compaction.tool"><a class="anchor" href="#compaction.tool"></a>149.18. Offline Compaction Tool</h3>
+<h3 id="compaction.tool"><a class="anchor" href="#compaction.tool"></a>149.19. Offline Compaction Tool</h3>
 <div class="paragraph">
 <p>See the usage for the
 <a href="https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/CompactionTool.html">CompactionTool</a>.
@@ -28007,7 +28037,7 @@ Run it like:</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="__code_hbase_clean_code"><a class="anchor" href="#__code_hbase_clean_code"></a>149.19. <code>hbase clean</code></h3>
+<h3 id="__code_hbase_clean_code"><a class="anchor" href="#__code_hbase_clean_code"></a>149.20. <code>hbase clean</code></h3>
 <div class="paragraph">
 <p>The <code>hbase clean</code> command cleans HBase data from ZooKeeper, HDFS, or both.
 It is appropriate to use for testing.
@@ -28026,7 +28056,7 @@ Options:
 </div>
 </div>
 <div class="sect2">
-<h3 id="__code_hbase_pe_code"><a class="anchor" href="#__code_hbase_pe_code"></a>149.20. <code>hbase pe</code></h3>
+<h3 id="__code_hbase_pe_code"><a class="anchor" href="#__code_hbase_pe_code"></a>149.21. <code>hbase pe</code></h3>
 <div class="paragraph">
 <p>The <code>hbase pe</code> command runs the PerformanceEvaluation tool, which is used for testing.</p>
 </div>
@@ -28039,7 +28069,7 @@ For usage instructions, run the command with no options.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="__code_hbase_ltt_code"><a class="anchor" href="#__code_hbase_ltt_code"></a>149.21. <code>hbase ltt</code></h3>
+<h3 id="__code_hbase_ltt_code"><a class="anchor" href="#__code_hbase_ltt_code"></a>149.22. <code>hbase ltt</code></h3>
 <div class="paragraph">
 <p>The <code>hbase ltt</code> command runs the LoadTestTool utility, which is used for testing.</p>
 </div>
@@ -28052,7 +28082,7 @@ For general usage instructions, pass the <code>-h</code> option.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="ops.pre-upgrade"><a class="anchor" href="#ops.pre-upgrade"></a>149.22. Pre-Upgrade validator</h3>
+<h3 id="ops.pre-upgrade"><a class="anchor" href="#ops.pre-upgrade"></a>149.23. Pre-Upgrade validator</h3>
 <div class="paragraph">
 <p>The Pre-Upgrade validator tool can be used to check the cluster for known incompatibilities before upgrading from HBase 1 to HBase 2.</p>
 </div>
@@ -28062,7 +28092,7 @@ For general usage instructions, pass the <code>-h</code> option.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_coprocessor_validation"><a class="anchor" href="#_coprocessor_validation"></a>149.22.1. Coprocessor validation</h4>
+<h4 id="_coprocessor_validation"><a class="anchor" href="#_coprocessor_validation"></a>149.23.1. Coprocessor validation</h4>
 <div class="paragraph">
 <p>HBase has supported co-processors for a long time, but the co-processor API can change between major releases. The co-processor validator tries to determine
 whether old co-processors are still compatible with the current HBase version.</p>
@@ -28113,7 +28143,7 @@ for warnings.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_datablockencoding_validation"><a class="anchor" href="#_datablockencoding_validation"></a>149.22.2. DataBlockEncoding validation</h4>
+<h4 id="_datablockencoding_validation"><a class="anchor" href="#_datablockencoding_validation"></a>149.23.2. DataBlockEncoding validation</h4>
 <div class="paragraph">
 <p>HBase 2.0 removed <code>PREFIX_TREE</code> Data Block Encoding from column families. For further information
 please check <a href="#upgrade2.0.prefix-tree.removed"><em>prefix-tree</em> encoding removed</a>.
@@ -28145,7 +28175,7 @@ To verify that none of the column families are using incompatible Data Block Enc
 </div>
 </div>
 <div class="sect3">
-<h4 id="_hfile_content_validation"><a class="anchor" href="#_hfile_content_validation"></a>149.22.3. HFile Content validation</h4>
+<h4 id="_hfile_content_validation"><a class="anchor" href="#_hfile_content_validation"></a>149.23.3. HFile Content validation</h4>
 <div class="paragraph">
 <p>Even though the Data Block Encoding was changed from <code>PREFIX_TREE</code>, it is still possible to have HFiles that contain data encoded that way.
 To verify that HFiles are readable with HBase 2, please use the <em>HFile content validator</em>.</p>
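 <div class="paragraph">
 <p>Assuming the validator is launched via the <code>hbase pre-upgrade</code> command like the other validators in this section (a sketch, not authoritative usage), a run might look like:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase pre-upgrade validate-hfile</code></pre>
 </div>
 </div>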
@@ -28246,7 +28276,7 @@ drop_namespace 'pre_upgrade_cleanup'</pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_data_block_encoding_tool"><a class="anchor" href="#_data_block_encoding_tool"></a>149.23. Data Block Encoding Tool</h3>
+<h3 id="_data_block_encoding_tool"><a class="anchor" href="#_data_block_encoding_tool"></a>149.24. Data Block Encoding Tool</h3>
 <div class="paragraph">
 <p>Tests various compression algorithms with different data block encoders for key compression on an existing HFile.
 Useful for testing, debugging and benchmarking.</p>
@@ -28583,18 +28613,6 @@ It disables the load balancer before moving the regions.</p>
 <p>Extract the new release, verify its configuration, and synchronize it to all nodes of your cluster using <code>rsync</code>, <code>scp</code>, or another secure synchronization mechanism.</p>
 </li>
 <li>
-<p>Use the hbck utility to ensure that the cluster is consistent.</p>
-<div class="listingblock">
-<div class="content">
-<pre>$ ./bin/hbck</pre>
-</div>
-</div>
-<div class="paragraph">
-<p>Perform repairs if required.
-See <a href="#hbck">hbck</a> for details.</p>
-</div>
-</li>
-<li>
 <p>Restart the master first.
 You may need to modify these commands if your new HBase directory is different from the old one, such as for an upgrade.</p>
 <div class="listingblock">
@@ -28630,9 +28648,6 @@ To wait for 5 minutes between each RegionServer restart, modify the above script
 <li>
 <p>Restart the Master again, to clear out the dead servers list and re-enable the load balancer.</p>
 </li>
-<li>
-<p>Run the <code>hbck</code> utility again, to be sure the cluster is consistent.</p>
-</li>
 </ol>
 </div>
 </div>
@@ -30638,9 +30653,7 @@ HDFS replication factor only affects your disk usage and is invisible to most HB
 <div class="paragraph">
 <p>You can view the current number of regions for a given table using the HMaster UI.
 In the <span class="label">Tables</span> section, the number of online regions for each table is listed in the <span class="label">Online Regions</span> column.
-This total only includes the in-memory state and does not include disabled or offline regions.
-If you do not want to use the HMaster UI, you can determine the number of regions by counting the number of subdirectories of the /hbase/&lt;table&gt;/ subdirectories in HDFS, or by running the <code>bin/hbase hbck</code> command.
-Each of these methods may return a slightly different number, depending on the status of each region.</p>
+This total only includes the in-memory state and does not include disabled or offline regions.</p>
 </div>
 </div>
 <div class="sect3">
@@ -36885,266 +36898,7 @@ Reference Guide are <code>java</code>, <code>xml</code>, <code>sql</code>, and <
 </div>
 </div>
 <div class="sect1">
-<h2 id="hbck.in.depth"><a class="anchor" href="#hbck.in.depth"></a>Appendix C: hbck In Depth</h2>
-<div class="sectionbody">
-<div class="paragraph">
-<p>HBaseFsck (hbck) is a tool for checking for region consistency and table integrity problems and repairing a corrupted HBase.
-It works in two basic modes&#8201;&#8212;&#8201;a read-only inconsistency identifying mode and a multi-phase read-write repair mode.</p>
-</div>
-<div class="sect2">
-<h3 id="_running_hbck_to_identify_inconsistencies"><a class="anchor" href="#_running_hbck_to_identify_inconsistencies"></a>C.1. Running hbck to identify inconsistencies</h3>
-<div class="paragraph">
-<p>To check to see if your HBase cluster has corruptions, run hbck against your HBase cluster:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase hbck</code></pre>
-</div>
-</div>
-<div class="paragraph">
-<p>At the end of the command&#8217;s output it prints OK or tells you the number of INCONSISTENCIES present.
-You may also want to run hbck a few times because some inconsistencies can be transient (e.g.
-cluster is starting up or a region is splitting). Operationally you may want to run hbck regularly and set up an alert (e.g.
-via nagios) if it repeatedly reports inconsistencies. A run of hbck will report a list of inconsistencies along with a brief description of the regions and tables affected.
-Using the <code>-details</code> option will report more details, including a representative listing of all the splits present in all the tables.</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase hbck -details</code></pre>
-</div>
-</div>
-<div class="paragraph">
-<p>If you just want to know if some tables are corrupted, you can limit hbck to identify inconsistencies in only specific tables.
-For example, the following command would only attempt to check tables TableFoo and TableBar.
-The benefit is that hbck will run in less time.</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase hbck TableFoo TableBar</code></pre>
-</div>
-</div>
-</div>
-<div class="sect2">
-<h3 id="_inconsistencies"><a class="anchor" href="#_inconsistencies"></a>C.2. Inconsistencies</h3>
-<div class="paragraph">
-<p>If after several runs, inconsistencies continue to be reported, you may have encountered a corruption.
-These should be rare, but in the event they occur newer versions of HBase include the hbck tool enabled with automatic repair options.</p>
-</div>
-<div class="paragraph">
-<p>There are two invariants that when violated create inconsistencies in HBase:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p>HBase&#8217;s region consistency invariant is satisfied if every region is assigned and deployed on exactly one region server, and all places where this state is kept are in accordance.</p>
-</li>
-<li>
-<p>HBase&#8217;s table integrity invariant is satisfied if for each table, every possible row key resolves to exactly one region.</p>
-</li>
-</ul>
-</div>
-<div class="paragraph">
-<p>Repairs generally work in three phases&#8201;&#8212;&#8201;a read-only information gathering phase that identifies inconsistencies, a table integrity repair phase that restores the table integrity invariant, and then finally a region consistency repair phase that restores the region consistency invariant.
-Starting from version 0.90.0, hbck could detect region consistency problems and report on a subset of possible table integrity problems.
-It also included the ability to automatically fix the most common inconsistency, region assignment and deployment consistency problems.
-This repair could be done by using the <code>-fix</code> command line option.
-These fixes close regions if they are open on the wrong server or on multiple region servers, and also assign regions to region servers if they are not open.</p>
-</div>
-<div class="paragraph">
-<p>Starting from HBase versions 0.90.7, 0.92.2 and 0.94.0, several new command line options are introduced to aid repairing a corrupted HBase.
-This hbck sometimes goes by the nickname &#8220;uberhbck&#8221;. Each particular version of uberhbck is compatible with HBase installs of the same major version (0.90.7 uberhbck can repair a 0.90.4 cluster). However, versions &#8804;0.90.6 and versions &#8804;0.92.1 may require restarting the master or failing over to a backup master.</p>
-</div>
-</div>
-<div class="sect2">
-<h3 id="_localized_repairs"><a class="anchor" href="#_localized_repairs"></a>C.3. Localized repairs</h3>
-<div class="paragraph">
-<p>When repairing a corrupted HBase, it is best to repair the lowest risk inconsistencies first.
-These are generally region consistency repairs&#8201;&#8212;&#8201;localized single-region repairs that only modify in-memory data, ephemeral zookeeper data, or patch holes in the META table.
-Region consistency requires that the HBase instance has the state of the region&#8217;s data in HDFS (.regioninfo files), the region&#8217;s row in the hbase:meta table, and the region&#8217;s deployment/assignments on region servers and the master all in accordance.
-Options for repairing region consistency include:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p><code>-fixAssignments</code> (equivalent to the 0.90 <code>-fix</code> option) repairs unassigned, incorrectly assigned or multiply assigned regions.</p>
-</li>
-<li>
-<p><code>-fixMeta</code> which removes meta rows when corresponding regions are not present in HDFS and adds new meta rows if the regions are present in HDFS while not in META. To fix deployment and assignment problems you can run this command:</p>
-</li>
-</ul>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase hbck -fixAssignments</code></pre>
-</div>
-</div>
-<div class="paragraph">
-<p>To fix deployment and assignment problems as well as repairing incorrect meta rows you can run this command:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase hbck -fixAssignments -fixMeta</code></pre>
-</div>
-</div>
-<div class="paragraph">
-<p>There are a few classes of table integrity problems that are low risk repairs.
-The first two are degenerate (startkey == endkey) regions and backwards regions (startkey &gt; endkey). These are automatically handled by sidelining the data to a temporary directory (/hbck/xxxx). The third low-risk class is hdfs region holes.
-This can be repaired by using the:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p><code>-fixHdfsHoles</code> option for fabricating new empty regions on the file system.
-If holes are detected you can use -fixHdfsHoles and should include -fixMeta and -fixAssignments to make the new region consistent.</p>
-</li>
-</ul>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase hbck -fixAssignments -fixMeta -fixHdfsHoles</code></pre>
-</div>
-</div>
-<div class="paragraph">
-<p>Since this is a common operation, we&#8217;ve added the <code>-repairHoles</code> flag, which is equivalent to the previous command:</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase hbck -repairHoles</code></pre>
-</div>
-</div>
-<div class="paragraph">
-<p>If inconsistencies still remain after these steps, you most likely have table integrity problems related to orphaned or overlapping regions.</p>
-</div>
-</div>
-<div class="sect2">
-<h3 id="_region_overlap_repairs"><a class="anchor" href="#_region_overlap_repairs"></a>C.4. Region Overlap Repairs</h3>
-<div class="paragraph">
-<p>Table integrity problems can require repairs that deal with overlaps.
-This is a riskier operation because it requires modifications to the file system, requires some decision making, and may require some manual steps.
-For these repairs it is best to analyze the output of a <code>hbck -details</code> run so that you isolate repair attempts to only the problems the checks identify.
-Because this is riskier, there are safeguards that should be used to limit the scope of the repairs.
-WARNING: These repairs are relatively new and have only been tested on online but idle HBase instances (no reads/writes). Use at your own risk in an active production environment! The options for repairing table integrity violations include:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p><code>-fixHdfsOrphans</code> option for &#8220;adopting&#8221; a region directory that is missing a region metadata file (the .regioninfo file).</p>
-</li>
-<li>
-<p><code>-fixHdfsOverlaps</code> option for fixing overlapping regions.</p>
-</li>
-</ul>
-</div>
-<div class="paragraph">
-<p>When repairing overlapping regions, a region&#8217;s data can be modified on the file system in two ways: 1) by merging regions into a larger region or 2) by sidelining regions, moving their data to a &#8220;sideline&#8221; directory from which it can be restored later.
-Merging a large number of regions is technically correct but could result in an extremely large region that requires a series of costly compaction and splitting operations.
-In these cases, it is probably better to sideline the regions that overlap with the most other regions (likely the largest ranges) so that merges can happen on a more reasonable scale.
-Since these sidelined regions are already laid out in HBase&#8217;s native directory and HFile format, they can be restored by using HBase&#8217;s bulk load mechanism.
-The default safeguard thresholds are conservative.
-These options let you override the default thresholds and enable the large-region sidelining feature.</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p><code>-maxMerge &lt;n&gt;</code> maximum number of overlapping regions to merge</p>
-</li>
-<li>
-<p><code>-sidelineBigOverlaps</code> if more than <code>maxMerge</code> regions are overlapping, attempt to sideline the regions that overlap with the most other regions.</p>
-</li>
-<li>
-<p><code>-maxOverlapsToSideline &lt;n&gt;</code> if sidelining large overlapping regions, sideline at most n regions.</p>
-</li>
-</ul>
-</div>
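-<div class="paragraph">
-<p>For example, a hypothetical invocation that raises the merge threshold and enables sidelining of large overlaps might look like the following (the threshold values shown are illustrative only, not recommendations):</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase hbck -fixHdfsOverlaps -maxMerge 10 -sidelineBigOverlaps -maxOverlapsToSideline 5</code></pre>
-</div>
-</div>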
-<div class="paragraph">
-<p>Since you often just want to get the tables repaired, you can use the following option to turn on all repair options:</p>
-</div>
-<div class="ulist">
-<ul>
-<li>
-<p><code>-repair</code> includes all the region consistency options and only the hole-repairing table integrity options.</p>
-</li>
-</ul>
-</div>
-<div class="paragraph">
-<p>Finally, there are safeguards to limit repairs to only specific tables.
-For example, the following command would only attempt to check and repair the tables TableFoo and TableBar.</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>$ ./bin/hbase hbck -repair TableFoo TableBar</pre>
-</div>
-</div>
-<div class="sect3">
-<h4 id="_special_cases_meta_is_not_properly_assigned"><a class="anchor" href="#_special_cases_meta_is_not_properly_assigned"></a>C.4.1. Special cases: Meta is not properly assigned</h4>
-<div class="paragraph">
-<p>There are a few special cases that hbck can handle as well.
-Sometimes the meta table&#8217;s only region is inconsistently assigned or deployed.
-In this case, there is a special <code>-fixMetaOnly</code> option that can try to fix meta assignments.</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>$ ./bin/hbase hbck -fixMetaOnly -fixAssignments</pre>
-</div>
-</div>
-</div>
-<div class="sect3">
-<h4 id="_special_cases_hbase_version_file_is_missing"><a class="anchor" href="#_special_cases_hbase_version_file_is_missing"></a>C.4.2. Special cases: HBase version file is missing</h4>
-<div class="paragraph">
-<p>HBase&#8217;s data on the file system requires a version file in order to start.
-If this file is missing, you can use the <code>-fixVersionFile</code> option to fabricate a new HBase version file.
-This assumes that the version of hbck you are running is the appropriate version for the HBase cluster.</p>
-</div>
-</div>
-<div class="sect3">
-<h4 id="_special_case_root_and_meta_are_corrupt"><a class="anchor" href="#_special_case_root_and_meta_are_corrupt"></a>C.4.3. Special case: Root and META are corrupt.</h4>
-<div class="paragraph">
-<p>The most drastic corruption scenario is the case where ROOT or META is corrupted and HBase will not start.
-In this case you can use the OfflineMetaRepair tool to create new ROOT and META regions and tables.
-This tool assumes that HBase is offline.
-It then marches through the existing HBase home directory, loading as much information as possible from the region metadata files (.regioninfo files) on the file system.
-If the region metadata has proper table integrity, it sidelines the original root and meta table directories and builds new ones with pointers to the region directories and their data.</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre>$ ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair</pre>
-</div>
-</div>
-<div class="admonitionblock note">
-<table>
-<tr>
-<td class="icon">
-<i class="fa icon-note" title="Note"></i>
-</td>
-<td class="content">
-This tool is not as clever as uberhbck but can be used to bootstrap repairs that uberhbck can complete.
-If the tool succeeds, you should be able to start HBase and run online repairs if necessary.
-</td>
-</tr>
-</table>
-</div>
-</div>
-<div class="sect3">
-<h4 id="_special_cases_offline_split_parent"><a class="anchor" href="#_special_cases_offline_split_parent"></a>C.4.4. Special cases: Offline split parent</h4>
-<div class="paragraph">
-<p>Once a region is split, the offline parent is cleaned up automatically.
-Sometimes, daughter regions are split again before their parents are cleaned up.
-HBase can clean up parents in the right order.
-However, sometimes there are lingering offline split parents: they are in META and in HDFS, are not deployed, and HBase cannot clean them up.
-In this case, you can use the <code>-fixSplitParents</code> option to reset them in META to be online and not split.
-hbck can then merge them with other regions if the option for fixing overlapping regions is used.</p>
-</div>
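-<div class="paragraph">
-<p>A minimal sketch of such an invocation, combining the split-parent fix with the overlap-fixing options discussed above (flag combination shown for illustration; verify against your hbck version&#8217;s help output):</p>
-</div>
-<div class="listingblock">
-<div class="content">
-<pre class="CodeRay highlight"><code data-lang="bourne">$ ./bin/hbase hbck -fixSplitParents -fixHdfsOverlaps -fixAssignments -fixMeta</code></pre>
-</div>
-</div>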
-<div class="paragraph">
-<p>This option should not normally be used, and it is not in <code>-fixAll</code>.</p>
-</div>
-</div>
-</div>
-</div>
-</div>
-<div class="sect1">
-<h2 id="appendix_acl_matrix"><a class="anchor" href="#appendix_acl_matrix"></a>Appendix D: Access Control Matrix</h2>
+<h2 id="appendix_acl_matrix"><a class="anchor" href="#appendix_acl_matrix"></a>Appendix C: Access Control Matrix</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>The following matrix shows the permission set required to perform operations in HBase.
@@ -37155,7 +36909,7 @@ Before using the table, read through the information about how to interpret it.<
 <p>The following conventions are used in the ACL Matrix table:</p>
 </div>
 <div class="sect2">
-<h3 id="_scopes"><a class="anchor" href="#_scopes"></a>D.1. Scopes</h3>
+<h3 id="_scopes"><a class="anchor" href="#_scopes"></a>C.1. Scopes</h3>
 <div class="paragraph">
 <p>Permissions are evaluated starting at the widest scope and working to the narrowest scope.</p>
 </div>
@@ -37190,7 +36944,7 @@ Before using the table, read through the information about how to interpret it.<
 </div>
 </div>
 <div class="sect2">
-<h3 id="_permissions"><a class="anchor" href="#_permissions"></a>D.2. Permissions</h3>
+<h3 id="_permissions"><a class="anchor" href="#_permissions"></a>C.2. Permissions</h3>
 <div class="paragraph">
 <p>Possible permissions include the following:</p>
 </div>
@@ -37741,7 +37495,7 @@ In case the table goes out of date, the unit tests which check for accuracy of p
 </div>
 </div>
 <div class="sect1">
-<h2 id="compression"><a class="anchor" href="#compression"></a>Appendix E: Compression and Data Block Encoding In HBase</h2>
+<h2 id="compression"><a class="anchor" href="#compression"></a>Appendix D: Compression and Data Block Encoding In HBase</h2>
 <div class="sectionbody">
 <div class="admonitionblock note">
 <table>
@@ -37892,7 +37646,7 @@ It was removed in hbase-2.0.0. It was a good idea but little uptake. If interest
 </dl>
 </div>
 <div class="sect2">
-<h3 id="data.block.encoding.types"><a class="anchor" href="#data.block.encoding.types"></a>E.1. Which Compressor or Data Block Encoder To Use</h3>
+<h3 id="data.block.encoding.types"><a class="anchor" href="#data.block.encoding.types"></a>D.1. Which Compressor or Data Block Encoder To Use</h3>
 <div class="paragraph">
 <p>The compression or codec type to use depends on the characteristics of your data. Choosing the wrong type could cause your data to take more space rather than less, and can have performance implications.</p>
 </div>
@@ -37927,7 +37681,7 @@ Snappy has similar qualities as LZO but has been shown to perform better.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="hadoop.native.lib"><a class="anchor" href="#hadoop.native.lib"></a>E.2. Making use of Hadoop Native Libraries in HBase</h3>
+<h3 id="hadoop.native.lib"><a class="anchor" href="#hadoop.native.lib"></a>D.2. Making use of Hadoop Native Libraries in HBase</h3>
 <div class="paragraph">
 <p>The Hadoop shared library has a number of facilities, including compression libraries and fast CRC&#8217;ing&#8201;&#8212;&#8201;hardware CRC&#8217;ing if your chipset supports it.
 To make this facility available to HBase, do the following. HBase/Hadoop will fall back to alternatives if it cannot find the native library
@@ -38055,9 +37809,9 @@ bzip2:  <span class="predefined-constant">true</span> /lib64/libbz2.so<span clas
 </div>
 </div>
 <div class="sect2">
-<h3 id="_compressor_configuration_installation_and_use"><a class="anchor" href="#_compressor_configuration_installation_and_use"></a>E.3. Compressor Configuration, Installation, and Use</h3>
+<h3 id="_compressor_configuration_installation_and_use"><a class="anchor" href="#_compressor_configuration_installation_and_use"></a>D.3. Compressor Configuration, Installation, and Use</h3>
 <div class="sect3">
-<h4 id="compressor.install"><a class="anchor" href="#compressor.install"></a>E.3.1. Configure HBase For Compressors</h4>
+<h4 id="compressor.install"><a class="anchor" href="#compressor.install"></a>D.3.1. Configure HBase For Compressors</h4>
 <div class="paragraph">
 <p>Before HBase can use a given compressor, its libraries need to be available.
 Due to licensing issues, only GZ compression is available to HBase (via native Java libraries) in a default installation.
@@ -38165,7 +37919,7 @@ This would prevent a new server from being added to the cluster without having c
 </div>
 </div>
 <div class="sect3">
-<h4 id="changing.compression"><a class="anchor" href="#changing.compression"></a>E.3.2. Enable Compression On a ColumnFamily</h4>
+<h4 id="changing.compression"><a class="anchor" href="#changing.compression"></a>D.3.2. Enable Compression On a ColumnFamily</h4>
 <div class="paragraph">
 <p>To enable compression for a ColumnFamily, use an <code>alter</code> command.
 You do not need to re-create the table or copy data.
@@ -38201,7 +37955,7 @@ DESCRIPTION                                          ENABLED
 </div>
 </div>
 <div class="sect3">
-<h4 id="_testing_compression_performance"><a class="anchor" href="#_testing_compression_performance"></a>E.3.3. Testing Compression Performance</h4>
+<h4 id="_testing_compression_performance"><a class="anchor" href="#_testing_compression_performance"></a>D.3.3. Testing Compression Performance</h4>
 <div class="paragraph">
 <p>HBase includes a tool called LoadTestTool which provides mechanisms to test your compression performance.
 You must specify either <code>-write</code> or <code>-update-read</code> as your first parameter, and if you do not specify another parameter, usage advice is printed for each option.</p>
@@ -38272,7 +38026,7 @@ Options:
 </div>
 </div>
 <div class="sect2">
-<h3 id="data.block.encoding.enable"><a class="anchor" href="#data.block.encoding.enable"></a>E.4. Enable Data Block Encoding</h3>
+<h3 id="data.block.encoding.enable"><a class="anchor" href="#data.block.encoding.enable"></a>D.4. Enable Data Block Encoding</h3>
 <div class="paragraph">
 <p>Codecs are built into HBase so no extra configuration is needed.
 Codecs are enabled on a table by setting the <code>DATA_BLOCK_ENCODING</code> property.
@@ -38311,19 +38065,19 @@ DESCRIPTION                                          ENABLED
 </div>
 </div>
 <div class="sect1">
-<h2 id="sql"><a class="anchor" href="#sql"></a>Appendix F: SQL over HBase</h2>
+<h2 id="sql"><a class="anchor" href="#sql"></a>Appendix E: SQL over HBase</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>The following projects offer some support for SQL over HBase.</p>
 </div>
 <div class="sect2">
-<h3 id="phoenix"><a class="anchor" href="#phoenix"></a>F.1. Apache Phoenix</h3>
+<h3 id="phoenix"><a class="anchor" href="#phoenix"></a>E.1. Apache Phoenix</h3>
 <div class="paragraph">
 <p><a href="https://phoenix.apache.org">Apache Phoenix</a></p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_trafodion"><a class="anchor" href="#_trafodion"></a>F.2. Trafodion</h3>
+<h3 id="_trafodion"><a class="anchor" href="#_trafodion"></a>E.2. Trafodion</h3>
 <div class="paragraph">
 <p><a href="https://trafodion.incubator.apache.org/">Trafodion: Transactional SQL-on-HBase</a></p>
 </div>
@@ -38331,7 +38085,7 @@ DESCRIPTION                                          ENABLED
 </div>
 </div>
 <div class="sect1">
-<h2 id="ycsb"><a class="anchor" href="#ycsb"></a>Appendix G: YCSB</h2>
+<h2 id="ycsb"><a class="anchor" href="#ycsb"></a>Appendix F: YCSB</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p><a href="https://github.com/brianfrankcooper/YCSB/">YCSB: The
@@ -38352,18 +38106,18 @@ See <a href="https://github.com/tdunning/YCSB">Ted Dunning&#8217;s YCSB</a>.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_hfile_format_2"><a class="anchor" href="#_hfile_format_2"></a>Appendix H: HFile format</h2>
+<h2 id="_hfile_format_2"><a class="anchor" href="#_hfile_format_2"></a>Appendix G: HFile format</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>This appendix describes the evolution of the HFile format.</p>
 </div>
 <div class="sect2">
-<h3 id="hfilev1"><a class="anchor" href="#hfilev1"></a>H.1. HBase File Format (version 1)</h3>
+<h3 id="hfilev1"><a class="anchor" href="#hfilev1"></a>G.1. HBase File Format (version 1)</h3>
 <div class="paragraph">
 <p>As we will be discussing changes to the HFile format, it is useful to give a short overview of the original (HFile version 1) format.</p>
 </div>
 <div class="sect3">
-<h4 id="hfilev1.overview"><a class="anchor" href="#hfilev1.overview"></a>H.1.1. Overview of Version 1</h4>
+<h4 id="hfilev1.overview"><a class="anchor" href="#hfilev1.overview"></a>G.1.1. Overview of Version 1</h4>
 <div class="paragraph">
 <p>An HFile in version 1 format is structured as follows:</p>
 </div>
@@ -38375,7 +38129,7 @@ See <a href="https://github.com/tdunning/YCSB">Ted Dunning&#8217;s YCSB</a>.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_block_index_format_in_version_1"><a class="anchor" href="#_block_index_format_in_version_1"></a>H.1.2. Block index format in version 1</h4>
+<h4 id="_block_index_format_in_version_1"><a class="anchor" href="#_block_index_format_in_version_1"></a>G.1.2. Block index format in version 1</h4>
 <div class="paragraph">
 <p>The block index in version 1 is very straightforward.
 For each entry, it contains:</p>
@@ -38412,12 +38166,12 @@ We fix this limitation in version 2, where we store on-disk block size instead o
 </div>
 </div>
 <div class="sect2">
-<h3 id="hfilev2"><a class="anchor" href="#hfilev2"></a>H.2. HBase file format with inline blocks (version 2)</h3>
+<h3 id="hfilev2"><a class="anchor" href="#hfilev2"></a>G.2. HBase file format with inline blocks (version 2)</h3>
 <div class="paragraph">
 <p>Note:  this feature was introduced in HBase 0.92</p>
 </div>
 <div class="sect3">
-<h4 id="_motivation"><a class="anchor" href="#_motivation"></a>H.2.1. Motivation</h4>
+<h4 id="_motivation"><a class="anchor" href="#_motivation"></a>G.2.1. Motivation</h4>
 <div class="paragraph">
 <p>We found it necessary to revise the HFile format after encountering high memory usage and slow startup times caused by large Bloom filters and block indexes in the region server.
 Bloom filters can get as large as 100 MB per HFile, which adds up to 2 GB when aggregated over 20 regions.
@@ -38443,7 +38197,7 @@ In version 2, we seek once to read the trailer and seek again to read everything
 </div>
 </div>
 <div class="sect3">
-<h4 id="hfilev2.overview"><a class="anchor" href="#hfilev2.overview"></a>H.2.2. Overview of Version 2</h4>
+<h4 id="hfilev2.overview"><a class="anchor" href="#hfilev2.overview"></a>G.2.2. Overview of Version 2</h4>
 <div class="paragraph">
 <p>The version of HBase introducing the above features reads both version 1 and 2 HFiles, but only writes version 2 HFiles.
 A version 2 HFile is structured as follows:</p>
@@ -38456,7 +38210,7 @@ A version 2 HFile is structured as follows:</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_unified_version_2_block_format"><a class="anchor" href="#_unified_version_2_block_format"></a>H.2.3. Unified version 2 block format</h4>
+<h4 id="_unified_version_2_block_format"><a class="anchor" href="#_unified_version_2_block_format"></a>G.2.3. Unified version 2 block format</h4>
 <div class="paragraph">
 <p>In version 2, every block in the data section contains the following fields:</p>
 </div>
@@ -38545,7 +38299,7 @@ This section contains "meta" blocks and intermediate-level index blocks.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_block_index_in_version_2"><a class="anchor" href="#_block_index_in_version_2"></a>H.2.4. Block index in version 2</h4>
+<h4 id="_block_index_in_version_2"><a class="anchor" href="#_block_index_in_version_2"></a>G.2.4. Block index in version 2</h4>
 <div class="paragraph">
 <p>There are three types of block indexes in HFile version 2, stored in two different formats (root and non-root):</p>
 </div>
@@ -38577,7 +38331,7 @@ This section contains "meta" blocks and intermediate-level index blocks.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_root_block_index_format_in_version_2"><a class="anchor" href="#_root_block_index_format_in_version_2"></a>H.2.5. Root block index format in version 2</h4>
+<h4 id="_root_block_index_format_in_version_2"><a class="anchor" href="#_root_block_index_format_in_version_2"></a>G.2.5. Root block index format in version 2</h4>
 <div class="paragraph">
 <p>This format applies to:</p>
 </div>
@@ -38648,7 +38402,7 @@ When reading the HFile and the mid-key is requested, we retrieve the middle leaf
 </div>
 </div>
 <div class="sect3">
-<h4 id="_non_root_block_index_format_in_version_2"><a class="anchor" href="#_non_root_block_index_format_in_version_2"></a>H.2.6. Non-root block index format in version 2</h4>
+<h4 id="_non_root_block_index_format_in_version_2"><a class="anchor" href="#_non_root_block_index_format_in_version_2"></a>G.2.6. Non-root block index format in version 2</h4>
 <div class="paragraph">
 <p>This format applies to intermediate-level and leaf index blocks of a version 2 multi-level data block index.
 Every non-root index block is structured as follows.</p>
@@ -38683,7 +38437,7 @@ The length can be calculated from entryOffsets.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_bloom_filters_in_version_2"><a class="anchor" href="#_bloom_filters_in_version_2"></a>H.2.7. Bloom filters in version 2</h4>
+<h4 id="_bloom_filters_in_version_2"><a class="anchor" href="#_bloom_filters_in_version_2"></a>G.2.7. Bloom filters in version 2</h4>
 <div class="paragraph">
 <p>In contrast with version 1, in a version 2 HFile Bloom filter metadata is stored in the load-on-open section of the HFile for quick startup.</p>
 </div>
@@ -38723,7 +38477,7 @@ The length can be calculated from entryOffsets.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_file_info_format_in_versions_1_and_2"><a class="anchor" href="#_file_info_format_in_versions_1_and_2"></a>H.2.8. File Info format in versions 1 and 2</h4>
+<h4 id="_file_info_format_in_versions_1_and_2"><a class="anchor" href="#_file_info_format_in_versions_1_and_2"></a>G.2.8. File Info format in versions 1 and 2</h4>
 <div class="paragraph">
 <p>The file info block is a serialized map from byte arrays to byte arrays, with the following keys, among others.
 StoreFile-level logic adds more keys to this.</p>
@@ -38760,7 +38514,7 @@ This is because we need to know the comparator at the time of parsing the load-o
 </div>
 </div>
 <div class="sect3">
-<h4 id="_fixed_file_trailer_format_differences_between_versions_1_and_2"><a class="anchor" href="#_fixed_file_trailer_format_differences_between_versions_1_and_2"></a>H.2.9. Fixed file trailer format differences between versions 1 and 2</h4>
+<h4 id="_fixed_file_trailer_format_differences_between_versions_1_and_2"><a class="anchor" href="#_fixed_file_trailer_format_differences_between_versions_1_and_2"></a>G.2.9. Fixed file trailer format differences between versions 1 and 2</h4>
 <div class="paragraph">
 <p>The following table shows common and different fields between fixed file trailers in versions 1 and 2.
 Note that the size of the trailer is different depending on the version, so it is &#8220;fixed&#8221; only within one version.
@@ -38829,7 +38583,7 @@ However, the version is always stored as the last four-byte integer in the file.
 </table>
 </div>
 <div class="sect3">
-<h4 id="_getshortmidpointkey_an_optimization_for_data_index_block"><a class="anchor" href="#_getshortmidpointkey_an_optimization_for_data_index_block"></a>H.2.10. getShortMidpointKey(an optimization for data index block)</h4>
+<h4 id="_getshortmidpointkey_an_optimization_for_data_index_block"><a class="anchor" href="#_getshortmidpointkey_an_optimization_for_data_index_block"></a>G.2.10. getShortMidpointKey(an optimization for data index block)</h4>
 <div class="paragraph">
 <p>Note: this optimization was introduced in HBase 0.95+</p>
 </div>
@@ -38863,18 +38617,18 @@ For example, if the stop key of previous block is "the quick brown fox", the sta
 </div>
 </div>
 <div class="sect2">
-<h3 id="hfilev3"><a class="anchor" href="#hfilev3"></a>H.3. HBase File Format with Security Enhancements (version 3)</h3>
+<h3 id="hfilev3"><a class="anchor" href="#hfilev3"></a>G.3. HBase File Format with Security Enhancements (version 3)</h3>
 <div class="paragraph">
 <p>Note: this feature was introduced in HBase 0.98</p>
 </div>
 <div class="sect3">
-<h4 id="hfilev3.motivation"><a class="anchor" href="#hfilev3.motivation"></a>H.3.1. Motivation</h4>
+<h4 id="hfilev3.motivation"><a class="anchor" href="#hfilev3.motivation"></a>G.3.1. Motivation</h4>
 <div class="paragraph">
 <p>Version 3 of HFile makes changes needed to ease management of encryption at rest and cell-level metadata (which in turn is needed for cell-level ACLs and cell-level visibility labels). For more information see <a href="#hbase.encryption.server">hbase.encryption.server</a>, <a href="#hbase.tags">hbase.tags</a>, <a href="#hbase.accesscontrol.configuration">hbase.accesscontrol.configuration</a>, and <a href="#hbase.visibility.labels">hbase.visibility.labels</a>.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="hfilev3.overview"><a class="anchor" href="#hfilev3.overview"></a>H.3.2. Overview</h4>
+<h4 id="hfilev3.overview"><a class="anchor" href="#hfilev3.overview"></a>G.3.2. Overview</h4>
 <div class="paragraph">
 <p>The version of HBase introducing the above features reads HFiles in versions 1, 2, and 3 but only writes version 3 HFiles.
 Version 3 HFiles are structured the same as version 2 HFiles.
@@ -38882,7 +38636,7 @@ For more information see <a href="#hfilev2.overview">hfilev2.overview</a>.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="hvilev3.infoblock"><a class="anchor" href="#hvilev3.infoblock"></a>H.3.3. File Info Block in Version 3</h4>
+<h4 id="hvilev3.infoblock"><a class="anchor" href="#hvilev3.infoblock"></a>G.3.3. File Info Block in Version 3</h4>
 <div class="paragraph">
 <p>Version 3 added two additional pieces of information to the reserved keys in the file info block.</p>
 </div>
@@ -38917,7 +38671,7 @@ Therefore, consumers must read the file&#8217;s info block prior to reading any
 </div>
 </div>
 <div class="sect3">
-<h4 id="hfilev3.datablock"><a class="anchor" href="#hfilev3.datablock"></a>H.3.4. Data Blocks in Version 3</h4>
+<h4 id="hfilev3.datablock"><a class="anchor" href="#hfilev3.datablock"></a>G.3.4. Data Blocks in Version 3</h4>
 <div class="paragraph">
 <p>Within an HFile, HBase cells are stored in data blocks as a sequence of KeyValues (see <a href="#hfilev1.overview">hfilev1.overview</a>, or <a href="http://www.larsgeorge.com/2009/10/hbase-architecture-101-storage.html">Lars George&#8217;s
         excellent introduction to HBase Storage</a>). In version 3, these KeyValues will optionally include a set of 0 or more tags:</p>
@@ -38964,7 +38718,7 @@ It also implies that prior to writing a data block you must know if the file&#82
 </div>
 </div>
 <div class="sect3">
-<h4 id="hfilev3.fixedtrailer"><a class="anchor" href="#hfilev3.fixedtrailer"></a>H.3.5. Fixed File Trailer in Version 3</h4>
+<h4 id="hfilev3.fixedtrailer"><a class="anchor" href="#hfilev3.fixedtrailer"></a>G.3.5. Fixed File Trailer in Version 3</h4>
 <div class="paragraph">
 <p>The fixed file trailers written with HFile version 3 are always serialized with protocol buffers.
 Additionally, it adds an optional field to the version 2 protocol buffer named encryption_key.
@@ -38976,10 +38730,10 @@ For more information see <a href="#hbase.encryption.server">hbase.encryption.ser
 </div>
 </div>
 <div class="sect1">
-<h2 id="other.info"><a class="anchor" href="#other.info"></a>Appendix I: Other Information About HBase</h2>
+<h2 id="other.info"><a class="anchor" href="#other.info"></a>Appendix H: Other Information About HBase</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="other.info.videos"><a class="anchor" href="#other.info.videos"></a>I.1. HBase Videos</h3>
+<h3 id="other.info.videos"><a class="anchor" href="#other.info.videos"></a>H.1. HBase Videos</h3>
 <div class="ulist">
 <div class="title">Introduction to HBase</div>
 <ul>
@@ -38996,7 +38750,7 @@ For more information see <a href="#hbase.encryption.server">hbase.encryption.ser
 </div>
 </div>
 <div class="sect2">
-<h3 id="other.info.pres"><a class="anchor" href="#other.info.pres"></a>I.2. HBase Presentations (Slides)</h3>
+<h3 id="other.info.pres"><a class="anchor" href="#other.info.pres"></a>H.2. HBase Presentations (Slides)</h3>
 <div class="paragraph">
 <p><a href="https://www.slideshare.net/cloudera/hadoop-world-2011-advanced-hbase-schema-design-lars-george-cloudera">Advanced HBase Schema Design</a> by Lars George (Hadoop World 2011).</p>
 </div>
@@ -39008,7 +38762,7 @@ For more information see <a href="#hbase.encryption.server">hbase.encryption.ser
 </div>
 </div>
 <div class="sect2">
-<h3 id="other.info.papers"><a class="anchor" href="#other.info.papers"></a>I.3. HBase Papers</h3>
+<h3 id="other.info.papers"><a class="anchor" href="#other.info.papers"></a>H.3. HBase Papers</h3>
 <div class="paragraph">
 <p><a href="http://research.google.com/archive/bigtable.html">BigTable</a> by Google (2006).</p>
 </div>
@@ -39020,7 +38774,7 @@ For more information see <a href="#hbase.encryption.server">hbase.encryption.ser
 </div>
 </div>
 <div class="sect2">
-<h3 id="other.info.sites"><a class="anchor" href="#other.info.sites"></a>I.4. HBase Sites</h3>
+<h3 id="other.info.sites"><a class="anchor" href="#other.info.sites"></a>H.4. HBase Sites</h3>
 <div class="paragraph">
 <p><a href="https://blog.cloudera.com/blog/category/hbase/">Cloudera&#8217;s HBase Blog</a> has a lot of links to useful HBase information.</p>
 </div>
@@ -39032,13 +38786,13 @@ For more information see <a href="#hbase.encryption.server">hbase.encryption.ser
 </div>
 </div>
 <div class="sect2">
-<h3 id="other.info.books"><a class="anchor" href="#other.info.books"></a>I.5. HBase Books</h3>
+<h3 id="other.info.books"><a class="anchor" href="#other.info.books"></a>H.5. HBase Books</h3>
 <div class="paragraph">
 <p><a href="http://shop.oreilly.com/product/0636920014348.do">HBase:  The Definitive Guide</a> by Lars George.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="other.info.books.hadoop"><a class="anchor" href="#other.info.books.hadoop"></a>I.6. Hadoop Books</h3>
+<h3 id="other.info.books.hadoop"><a class="anchor" href="#other.info.books.hadoop"></a>H.6. Hadoop Books</h3>
 <div class="paragraph">
 <p><a href="http://shop.oreilly.com/product/9780596521981.do">Hadoop:  The Definitive Guide</a> by Tom White.</p>
 </div>
@@ -39046,7 +38800,7 @@ For more information see <a href="#hbase.encryption.server">hbase.encryption.ser
 </div>
 </div>
 <div class="sect1">
-<h2 id="hbase.history"><a class="anchor" href="#hbase.history"></a>Appendix J: HBase History</h2>
+<h2 id="hbase.history"><a class="anchor" href="#hbase.history"></a>Appendix I: HBase History</h2>
 <div class="sectionbody">
 <div class="ulist">
 <ul>
@@ -39067,19 +38821,19 @@ For more information see <a href="#hbase.encryption.server">hbase.encryption.ser
 </div>
 </div>
 <div class="sect1">
-<h2 id="asf"><a class="anchor" href="#asf"></a>Appendix K: HBase and the Apache Software Foundation</h2>
+<h2 id="asf"><a class="anchor" href="#asf"></a>Appendix J: HBase and the Apache Software Foundation</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>HBase is a project in the Apache Software Foundation and as such there are responsibilities to the ASF to ensure a healthy project.</p>
 </div>
 <div class="sect2">
-<h3 id="asf.devprocess"><a class="anchor" href="#asf.devprocess"></a>K.1. ASF Development Process</h3>
+<h3 id="asf.devprocess"><a class="anchor" href="#asf.devprocess"></a>J.1. ASF Development Process</h3>
 <div class="paragraph">
 <p>See the <a href="https://www.apache.org/dev/#committers">Apache Development Process page</a> for all sorts of information on how the ASF is structured (e.g., PMC, committers, contributors), tips on contributing and getting involved, and how open source works at the ASF.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="asf.reporting"><a class="anchor" href="#asf.reporting"></a>K.2. ASF Board Reporting</h3>
+<h3 id="asf.reporting"><a class="anchor" href="#asf.reporting"></a>J.2. ASF Board Reporting</h3>
 <div class="paragraph">
 <p>Once a quarter, each project in the ASF portfolio submits a report to the ASF board.
 This is done by the HBase project lead and the committers.
@@ -39089,7 +38843,7 @@ See <a href="https://www.apache.org/foundation/board/reporting">ASF board report
 </div>
 </div>
 <div class="sect1">
-<h2 id="orca"><a class="anchor" href="#orca"></a>Appendix L: Apache HBase Orca</h2>
+<h2 id="orca"><a class="anchor" href="#orca"></a>Appendix K: Apache HBase Orca</h2>
 <div class="sectionbody">
 <div class="imageblock">
 <div class="content">
@@ -39111,7 +38865,7 @@ See <a href="https://creativecommons.org/licenses/by/3.0/us/" class="bare">https
 </div>
 </div>
 <div class="sect1">
-<h2 id="tracing"><a class="anchor" href="#tracing"></a>Appendix M: Enabling Dapper-like Tracing in HBase</h2>
+<h2 id="tracing"><a class="anchor" href="#tracing"></a>Appendix L: Enabling Dapper-like Tracing in HBase</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>HBase includes facilities for tracing requests using the open source tracing library, <a href="https://htrace.incubator.apache.org/">Apache HTrace</a>.
@@ -39121,7 +38875,7 @@ Setting up tracing is quite simple, however it currently requires some very mino
 <p>Support for this feature using HTrace 3 in HBase was added in <a href="https://issues.apache.org/jira/browse/HBASE-6449">HBASE-6449</a>. Starting with HBase 2.0, there was a non-compatible update to HTrace 4 via <a href="https://issues.apache.org/jira/browse/HBASE-18601">HBASE-18601</a>. The examples provided in this section will be using HTrace 4 package names, syntax, and conventions. For older examples, please consult previous versions of this guide.</p>
 </div>
 <div class="sect2">
-<h3 id="tracing.spanreceivers"><a class="anchor" href="#tracing.spanreceivers"></a>M.1. SpanReceivers</h3>
+<h3 id="tracing.spanreceivers"><a class="anchor" href="#tracing.spanreceivers"></a>L.1. SpanReceivers</h3>
 <div class="paragraph">
 <p>The tracing system works by collecting information in structures called 'Spans'. It is up to you to choose how you want to receive this information by implementing the <code>SpanReceiver</code> interface, which defines one method:</p>
 </div>
@@ -39277,7 +39031,7 @@ hbase(main):<span class="octal">004</span>:<span class="integer">0</span>&gt; tr
 </div>
 </div>
 <div class="sect1">
-<h2 id="hbase.rpc"><a class="anchor" href="#hbase.rpc"></a>Appendix N: 0.95 RPC Specification</h2>
+<h2 id="hbase.rpc"><a class="anchor" href="#hbase.rpc"></a>Appendix M: 0.95 RPC Specification</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>In 0.95, all client/server communication is done with <a href="https://developers.google.com/protocol-buffers/">protobuf&#8217;ed</a> Messages rather than with <a href="https://hadoop.apache.org/docs/current/api/org/apache/hadoop/io/Writable.html">Hadoop
@@ -39292,7 +39046,7 @@ For more background on how we arrived at this spec., see <a href="https://docs.g
             RPC: WIP</a></p>
 </div>
 <div class="sect2">
-<h3 id="_goals"><a class="anchor" href="#_goals"></a>N.1. Goals</h3>
+<h3 id="_goals"><a class="anchor" href="#_goals"></a>M.1. Goals</h3>
 <div class="olist arabic">
 <ol class="arabic">
 <li>
@@ -39305,7 +39059,7 @@ For more background on how we arrived at this spec., see <a href="https://docs.g
 </div>
 </div>
 <div class="sect2">
-<h3 id="_todo"><a class="anchor" href="#_todo"></a>N.2. TODO</h3>
+<h3 id="_todo"><a class="anchor" href="#_todo"></a>M.2. TODO</h3>
 <div class="olist arabic">
 <ol class="arabic">
 <li>
@@ -39324,7 +39078,7 @@ Also, a little state machine on client/server interactions would help with under
 </div>
 </div>
 <div class="sect2">
-<h3 id="_rpc"><a class="anchor" href="#_rpc"></a>N.3. RPC</h3>
+<h3 id="_rpc"><a class="anchor" href="#_rpc"></a>M.3. RPC</h3>
 <div class="paragraph">
 <p>The client will send setup information on connection establishment.
 Thereafter, the client invokes methods against the remote server sending a protobuf Message and receiving a protobuf Message in response.
@@ -39338,7 +39092,7 @@ Optionally, Cells(KeyValues) can be passed outside of protobufs in follow-behind
 <a href="https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=blob;f=hbase-protocol/src/main/protobuf/RPC.proto;hb=HEAD">RPC.proto</a>            file in master.</p>
 </div>
 <div class="sect3">
-<h4 id="_connection_setup"><a class="anchor" href="#_connection_setup"></a>N.3.1. Connection Setup</h4>
+<h4 id="_connection_setup"><a class="anchor" href="#_connection_setup"></a>M.3.1. Connection Setup</h4>
 <div class="paragraph">
 <p>Client initiates connection.</p>
 </div>
@@ -39382,7 +39136,7 @@ the protobuf&#8217;d Message that comes after the connection preamble&#8201;&#82
 </div>
 </div>
 <div class="sect3">
-<h4 id="_request"><a class="anchor" href="#_request"></a>N.3.2. Request</h4>
+<h4 id="_request"><a class="anchor" href="#_request"></a>M.3.2. Request</h4>
 <div class="paragraph">
 <p>After a Connection has been set up, the client makes requests.
 Server responds.</p>
@@ -39418,7 +39172,7 @@ Data is protobuf&#8217;d inline in this pb Message or optionally comes in the fo
 </div>
 </div>
 <div class="sect3">
-<h4 id="_response"><a class="anchor" href="#_response"></a>N.3.3. Response</h4>
+<h4 id="_response"><a class="anchor" href="#_response"></a>M.3.3. Response</h4>
 <div class="paragraph">
 <p>As with the Request, it is a protobuf ResponseHeader followed by a protobuf Message response, where the Message response type suits the method invoked.
 The bulk of the data may come in a following CellBlock.</p>
@@ -39447,7 +39201,7 @@ If the method being invoked is getRegionInfo, if you study the Service descripto
 </div>
 </div>
 <div class="sect3">
-<h4 id="_exceptions"><a class="anchor" href="#_exceptions"></a>N.3.4. Exceptions</h4>
+<h4 id="_exceptions"><a class="anchor" href="#_exceptions"></a>M.3.4. Exceptions</h4>
 <div class="paragraph">
 <p>There are two distinct types.
 There is the request failure, which is encapsulated inside the response header for the response.
@@ -39461,7 +39215,7 @@ It has a flag to indicate do-no-retry as well as other miscellaneous payload to
 </div>
 </div>
 <div class="sect3">
-<h4 id="_cellblocks"><a class="anchor" href="#_cellblocks"></a>N.3.5. CellBlocks</h4>
+<h4 id="_cellblocks"><a class="anchor" href="#_cellblocks"></a>M.3.5. CellBlocks</h4>
 <div class="paragraph">
 <p>These are not versioned.
 The server either supports the codec or it does not.
@@ -39471,7 +39225,7 @@ Codecs will live on the server for all time so old clients can connect.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_notes"><a class="anchor" href="#_notes"></a>N.4. Notes</h3>
+<h3 id="_notes"><a class="anchor" href="#_notes"></a>M.4. Notes</h3>
 <div class="paragraph">
 <div class="title">Constraints</div>
 <p>In some part, current wire-format&#8201;&#8212;&#8201;i.e.
@@ -39501,7 +39255,7 @@ As is, we read header+param in one go as server is currently implemented so this
 If later, fat request has clear advantage, can roll out a v2 later.</p>
 </div>
 <div class="sect3">
-<h4 id="rpc.configs"><a class="anchor" href="#rpc.configs"></a>N.4.1. RPC Configurations</h4>
+<h4 id="rpc.configs"><a class="anchor" href="#rpc.configs"></a>M.4.1. RPC Configurations</h4>
 <div class="paragraph">
 <div class="title">CellBlock Codecs</div>
 <p>To enable a codec other than the default <code>KeyValueCodec</code>, set <code>hbase.client.rpc.codec</code> to the name of the Codec class to use.
@@ -39532,7 +39286,7 @@ The server will return cellblocks compressed using this same compressor as long
 </div>
 </div>
 <div class="sect1">
-<h2 id="_known_incompatibilities_among_hbase_versions"><a class="anchor" href="#_known_incompatibilities_among_hbase_versions"></a>Appendix O: Known Incompatibilities Among HBase Versions</h2>
+<h2 id="_known_incompatibilities_among_hbase_versions"><a class="anchor" href="#_known_incompatibilities_among_hbase_versions"></a>Appendix N: Known Incompatibilities Among HBase Versions</h2>
 <div class="sectionbody">
 
 </div>
@@ -41324,7 +41078,7 @@ org/apache/hadoop/hbase/security/access/AccessControlClient.revoke:(Lorg/apache/
 <div id="footer">
 <div id="footer-text">
 Version 3.0.0-SNAPSHOT<br>
-Last updated 2018-10-25 14:33:44 UTC
+Last updated 2018-10-26 14:33:29 UTC
 </div>
 </div>
 </body>

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/fa850293/bulk-loads.html
----------------------------------------------------------------------
diff --git a/bulk-loads.html b/bulk-loads.html
index 30c284d..d342bf0 100644
--- a/bulk-loads.html
+++ b/bulk-loads.html
@@ -7,7 +7,7 @@
   <head>
     <meta charset="UTF-8" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-    <meta name="Date-Revision-yyyymmdd" content="20181025" />
+    <meta name="Date-Revision-yyyymmdd" content="20181026" />
     <meta http-equiv="Content-Language" content="en" />
     <title>Apache HBase &#x2013;  
       Bulk Loads in Apache HBase (TM)
@@ -316,7 +316,7 @@ under the License. -->
                         <a href="https://www.apache.org/">The Apache Software Foundation</a>.
             All rights reserved.      
                     
-                  <li id="publishDate" class="pull-right">Last Published: 2018-10-25</li>
+                  <li id="publishDate" class="pull-right">Last Published: 2018-10-26</li>
             </p>
                 </div>