Posted to commits@hbase.apache.org by st...@apache.org on 2016/12/07 18:24:16 UTC

[10/52] [partial] hbase-site git commit: Published site at 61220e4d7c8d7e5fb8ed3bbe2469bc86632c48de.

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/d9f3c819/book.html
----------------------------------------------------------------------
diff --git a/book.html b/book.html
index bfc166d..897eac8 100644
--- a/book.html
+++ b/book.html
@@ -173,111 +173,112 @@
 <li><a href="#cp_example">90. Examples</a></li>
 <li><a href="#_guidelines_for_deploying_a_coprocessor">91. Guidelines For Deploying A Coprocessor</a></li>
 <li><a href="#_monitor_time_spent_in_coprocessors">92. Monitor Time Spent in Coprocessors</a></li>
+<li><a href="#_restricting_coprocessor_usage">93. Restricting Coprocessor Usage</a></li>
 </ul>
 </li>
 <li><a href="#performance">Apache HBase Performance Tuning</a>
 <ul class="sectlevel1">
-<li><a href="#perf.os">93. Operating System</a></li>
-<li><a href="#perf.network">94. Network</a></li>
-<li><a href="#jvm">95. Java</a></li>
-<li><a href="#perf.configurations">96. HBase Configurations</a></li>
-<li><a href="#perf.zookeeper">97. ZooKeeper</a></li>
-<li><a href="#perf.schema">98. Schema Design</a></li>
-<li><a href="#perf.general">99. HBase General Patterns</a></li>
-<li><a href="#perf.writing">100. Writing to HBase</a></li>
-<li><a href="#perf.reading">101. Reading from HBase</a></li>
-<li><a href="#perf.deleting">102. Deleting from HBase</a></li>
-<li><a href="#perf.hdfs">103. HDFS</a></li>
-<li><a href="#perf.ec2">104. Amazon EC2</a></li>
-<li><a href="#perf.hbase.mr.cluster">105. Collocating HBase and MapReduce</a></li>
-<li><a href="#perf.casestudy">106. Case Studies</a></li>
+<li><a href="#perf.os">94. Operating System</a></li>
+<li><a href="#perf.network">95. Network</a></li>
+<li><a href="#jvm">96. Java</a></li>
+<li><a href="#perf.configurations">97. HBase Configurations</a></li>
+<li><a href="#perf.zookeeper">98. ZooKeeper</a></li>
+<li><a href="#perf.schema">99. Schema Design</a></li>
+<li><a href="#perf.general">100. HBase General Patterns</a></li>
+<li><a href="#perf.writing">101. Writing to HBase</a></li>
+<li><a href="#perf.reading">102. Reading from HBase</a></li>
+<li><a href="#perf.deleting">103. Deleting from HBase</a></li>
+<li><a href="#perf.hdfs">104. HDFS</a></li>
+<li><a href="#perf.ec2">105. Amazon EC2</a></li>
+<li><a href="#perf.hbase.mr.cluster">106. Collocating HBase and MapReduce</a></li>
+<li><a href="#perf.casestudy">107. Case Studies</a></li>
 </ul>
 </li>
 <li><a href="#trouble">Troubleshooting and Debugging Apache HBase</a>
 <ul class="sectlevel1">
-<li><a href="#trouble.general">107. General Guidelines</a></li>
-<li><a href="#trouble.log">108. Logs</a></li>
-<li><a href="#trouble.resources">109. Resources</a></li>
-<li><a href="#trouble.tools">110. Tools</a></li>
-<li><a href="#trouble.client">111. Client</a></li>
-<li><a href="#trouble.mapreduce">112. MapReduce</a></li>
-<li><a href="#trouble.namenode">113. NameNode</a></li>
-<li><a href="#trouble.network">114. Network</a></li>
-<li><a href="#trouble.rs">115. RegionServer</a></li>
-<li><a href="#trouble.master">116. Master</a></li>
-<li><a href="#trouble.zookeeper">117. ZooKeeper</a></li>
-<li><a href="#trouble.ec2">118. Amazon EC2</a></li>
-<li><a href="#trouble.versions">119. HBase and Hadoop version issues</a></li>
-<li><a href="#_ipc_configuration_conflicts_with_hadoop">120. IPC Configuration Conflicts with Hadoop</a></li>
-<li><a href="#_hbase_and_hdfs">121. HBase and HDFS</a></li>
-<li><a href="#trouble.tests">122. Running unit or integration tests</a></li>
-<li><a href="#trouble.casestudy">123. Case Studies</a></li>
-<li><a href="#trouble.crypto">124. Cryptographic Features</a></li>
-<li><a href="#_operating_system_specific_issues">125. Operating System Specific Issues</a></li>
-<li><a href="#_jdk_issues">126. JDK Issues</a></li>
+<li><a href="#trouble.general">108. General Guidelines</a></li>
+<li><a href="#trouble.log">109. Logs</a></li>
+<li><a href="#trouble.resources">110. Resources</a></li>
+<li><a href="#trouble.tools">111. Tools</a></li>
+<li><a href="#trouble.client">112. Client</a></li>
+<li><a href="#trouble.mapreduce">113. MapReduce</a></li>
+<li><a href="#trouble.namenode">114. NameNode</a></li>
+<li><a href="#trouble.network">115. Network</a></li>
+<li><a href="#trouble.rs">116. RegionServer</a></li>
+<li><a href="#trouble.master">117. Master</a></li>
+<li><a href="#trouble.zookeeper">118. ZooKeeper</a></li>
+<li><a href="#trouble.ec2">119. Amazon EC2</a></li>
+<li><a href="#trouble.versions">120. HBase and Hadoop version issues</a></li>
+<li><a href="#_ipc_configuration_conflicts_with_hadoop">121. IPC Configuration Conflicts with Hadoop</a></li>
+<li><a href="#_hbase_and_hdfs">122. HBase and HDFS</a></li>
+<li><a href="#trouble.tests">123. Running unit or integration tests</a></li>
+<li><a href="#trouble.casestudy">124. Case Studies</a></li>
+<li><a href="#trouble.crypto">125. Cryptographic Features</a></li>
+<li><a href="#_operating_system_specific_issues">126. Operating System Specific Issues</a></li>
+<li><a href="#_jdk_issues">127. JDK Issues</a></li>
 </ul>
 </li>
 <li><a href="#casestudies">Apache HBase Case Studies</a>
 <ul class="sectlevel1">
-<li><a href="#casestudies.overview">127. Overview</a></li>
-<li><a href="#casestudies.schema">128. Schema Design</a></li>
-<li><a href="#casestudies.perftroub">129. Performance/Troubleshooting</a></li>
+<li><a href="#casestudies.overview">128. Overview</a></li>
+<li><a href="#casestudies.schema">129. Schema Design</a></li>
+<li><a href="#casestudies.perftroub">130. Performance/Troubleshooting</a></li>
 </ul>
 </li>
 <li><a href="#ops_mgt">Apache HBase Operational Management</a>
 <ul class="sectlevel1">
-<li><a href="#tools">130. HBase Tools and Utilities</a></li>
-<li><a href="#ops.regionmgt">131. Region Management</a></li>
-<li><a href="#node.management">132. Node Management</a></li>
-<li><a href="#hbase_metrics">133. HBase Metrics</a></li>
-<li><a href="#ops.monitoring">134. HBase Monitoring</a></li>
-<li><a href="#_cluster_replication">135. Cluster Replication</a></li>
-<li><a href="#_running_multiple_workloads_on_a_single_cluster">136. Running Multiple Workloads On a Single Cluster</a></li>
-<li><a href="#ops.backup">137. HBase Backup</a></li>
-<li><a href="#ops.snapshots">138. HBase Snapshots</a></li>
-<li><a href="#snapshots_azure">139. Storing Snapshots in Microsoft Azure Blob Storage</a></li>
-<li><a href="#ops.capacity">140. Capacity Planning and Region Sizing</a></li>
-<li><a href="#table.rename">141. Table Rename</a></li>
+<li><a href="#tools">131. HBase Tools and Utilities</a></li>
+<li><a href="#ops.regionmgt">132. Region Management</a></li>
+<li><a href="#node.management">133. Node Management</a></li>
+<li><a href="#hbase_metrics">134. HBase Metrics</a></li>
+<li><a href="#ops.monitoring">135. HBase Monitoring</a></li>
+<li><a href="#_cluster_replication">136. Cluster Replication</a></li>
+<li><a href="#_running_multiple_workloads_on_a_single_cluster">137. Running Multiple Workloads On a Single Cluster</a></li>
+<li><a href="#ops.backup">138. HBase Backup</a></li>
+<li><a href="#ops.snapshots">139. HBase Snapshots</a></li>
+<li><a href="#snapshots_azure">140. Storing Snapshots in Microsoft Azure Blob Storage</a></li>
+<li><a href="#ops.capacity">141. Capacity Planning and Region Sizing</a></li>
+<li><a href="#table.rename">142. Table Rename</a></li>
 </ul>
 </li>
 <li><a href="#developer">Building and Developing Apache HBase</a>
 <ul class="sectlevel1">
-<li><a href="#getting.involved">142. Getting Involved</a></li>
-<li><a href="#repos">143. Apache HBase Repositories</a></li>
-<li><a href="#_ides">144. IDEs</a></li>
-<li><a href="#build">145. Building Apache HBase</a></li>
-<li><a href="#releasing">146. Releasing Apache HBase</a></li>
-<li><a href="#hbase.rc.voting">147. Voting on Release Candidates</a></li>
-<li><a href="#documentation">148. Generating the HBase Reference Guide</a></li>
-<li><a href="#hbase.org">149. Updating <a href="http://hbase.apache.org">hbase.apache.org</a></a></li>
-<li><a href="#hbase.tests">150. Tests</a></li>
-<li><a href="#developing">151. Developer Guidelines</a></li>
+<li><a href="#getting.involved">143. Getting Involved</a></li>
+<li><a href="#repos">144. Apache HBase Repositories</a></li>
+<li><a href="#_ides">145. IDEs</a></li>
+<li><a href="#build">146. Building Apache HBase</a></li>
+<li><a href="#releasing">147. Releasing Apache HBase</a></li>
+<li><a href="#hbase.rc.voting">148. Voting on Release Candidates</a></li>
+<li><a href="#documentation">149. Generating the HBase Reference Guide</a></li>
+<li><a href="#hbase.org">150. Updating <a href="http://hbase.apache.org">hbase.apache.org</a></a></li>
+<li><a href="#hbase.tests">151. Tests</a></li>
+<li><a href="#developing">152. Developer Guidelines</a></li>
 </ul>
 </li>
 <li><a href="#unit.tests">Unit Testing HBase Applications</a>
 <ul class="sectlevel1">
-<li><a href="#_junit">152. JUnit</a></li>
-<li><a href="#mockito">153. Mockito</a></li>
-<li><a href="#_mrunit">154. MRUnit</a></li>
-<li><a href="#_integration_testing_with_an_hbase_mini_cluster">155. Integration Testing with an HBase Mini-Cluster</a></li>
+<li><a href="#_junit">153. JUnit</a></li>
+<li><a href="#mockito">154. Mockito</a></li>
+<li><a href="#_mrunit">155. MRUnit</a></li>
+<li><a href="#_integration_testing_with_an_hbase_mini_cluster">156. Integration Testing with an HBase Mini-Cluster</a></li>
 </ul>
 </li>
 <li><a href="#protobuf">Protobuf in HBase</a>
 <ul class="sectlevel1">
-<li><a href="#_protobuf">156. Protobuf</a></li>
+<li><a href="#_protobuf">157. Protobuf</a></li>
 </ul>
 </li>
 <li><a href="#zookeeper">ZooKeeper</a>
 <ul class="sectlevel1">
-<li><a href="#_using_existing_zookeeper_ensemble">157. Using existing ZooKeeper ensemble</a></li>
-<li><a href="#zk.sasl.auth">158. SASL Authentication with ZooKeeper</a></li>
+<li><a href="#_using_existing_zookeeper_ensemble">158. Using existing ZooKeeper ensemble</a></li>
+<li><a href="#zk.sasl.auth">159. SASL Authentication with ZooKeeper</a></li>
 </ul>
 </li>
 <li><a href="#community">Community</a>
 <ul class="sectlevel1">
-<li><a href="#_decisions">159. Decisions</a></li>
-<li><a href="#community.roles">160. Community Roles</a></li>
-<li><a href="#hbase.commit.msg.format">161. Commit Message format</a></li>
+<li><a href="#_decisions">160. Decisions</a></li>
+<li><a href="#community.roles">161. Community Roles</a></li>
+<li><a href="#hbase.commit.msg.format">162. Commit Message format</a></li>
 </ul>
 </li>
 <li><a href="#_appendix">Appendix</a>
@@ -287,7 +288,7 @@
 <li><a href="#hbck.in.depth">Appendix C: hbck In Depth</a></li>
 <li><a href="#appendix_acl_matrix">Appendix D: Access Control Matrix</a></li>
 <li><a href="#compression">Appendix E: Compression and Data Block Encoding In HBase</a></li>
-<li><a href="#data.block.encoding.enable">162. Enable Data Block Encoding</a></li>
+<li><a href="#data.block.encoding.enable">163. Enable Data Block Encoding</a></li>
 <li><a href="#sql">Appendix F: SQL over HBase</a></li>
 <li><a href="#ycsb">Appendix G: YCSB</a></li>
 <li><a href="#_hfile_format_2">Appendix H: HFile format</a></li>
@@ -296,8 +297,8 @@
 <li><a href="#asf">Appendix K: HBase and the Apache Software Foundation</a></li>
 <li><a href="#orca">Appendix L: Apache HBase Orca</a></li>
 <li><a href="#tracing">Appendix M: Enabling Dapper-like Tracing in HBase</a></li>
-<li><a href="#tracing.client.modifications">163. Client Modifications</a></li>
-<li><a href="#tracing.client.shell">164. Tracing from HBase Shell</a></li>
+<li><a href="#tracing.client.modifications">164. Client Modifications</a></li>
+<li><a href="#tracing.client.shell">165. Tracing from HBase Shell</a></li>
 <li><a href="#hbase.rpc">Appendix N: 0.95 RPC Specification</a></li>
 </ul>
 </li>
@@ -2903,6 +2904,21 @@ Configuration that it is thought rare anyone would change can exist only in code
 </dd>
 </dl>
 </div>
+<div id="hbase.client.pause.cqtbe" class="dlist">
+<dl>
+<dt class="hdlist1"><code>hbase.client.pause.cqtbe</code></dt>
+<dd>
+<div class="paragraph">
+<div class="title">Description</div>
+<p>Whether or not to use a special client pause for CallQueueTooBigException (cqtbe). Set this property to a higher value than hbase.client.pause if you observe frequent CQTBEs from the same RegionServer and the call queue there stays full.</p>
+</div>
+<div class="paragraph">
+<div class="title">Default</div>
+<p>none</p>
+</div>
+</dd>
+</dl>
+</div>
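+<div class="paragraph">
+<p>For illustration, a minimal client-side sketch; the pause values shown are placeholders, not recommendations:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="java">Configuration conf = HBaseConfiguration.create();
+conf.set("hbase.client.pause", "100");        // normal retry pause, in milliseconds
+conf.set("hbase.client.pause.cqtbe", "1000"); // longer pause after a CallQueueTooBigException
+Connection connection = ConnectionFactory.createConnection(conf);</code></pre>
+</div>
+</div>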
 <div id="hbase.client.retries.number" class="dlist">
 <dl>
 <dt class="hdlist1"><code>hbase.client.retries.number</code></dt>
@@ -3049,6 +3065,21 @@ Configuration that it is thought rare anyone would change can exist only in code
 </dd>
 </dl>
 </div>
+<div id="hbase.master.balancer.maxRitPercent" class="dlist">
+<dl>
+<dt class="hdlist1"><code>hbase.master.balancer.maxRitPercent</code></dt>
+<dd>
+<div class="paragraph">
+<div class="title">Description</div>
+<p>The max percent of regions in transition when balancing. The default value is 1.0, which means there is no balancer throttling. If this config is set to 0.01, it means that at most 1% of regions may be in transition when balancing, so the cluster&#8217;s availability is at least 99% while balancing.</p>
+</div>
+<div class="paragraph">
+<div class="title">Default</div>
+<p><code>1.0</code></p>
+</div>
+</dd>
+</dl>
+</div>
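+<div class="paragraph">
+<p>For example, to keep at most 1% of regions in transition while balancing, this could be set in the master&#8217;s <em>hbase-site.xml</em>; a sketch via the <code>Configuration</code> API, with an illustrative value:</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="java">Configuration conf = HBaseConfiguration.create();
+conf.set("hbase.master.balancer.maxRitPercent", "0.01"); // cap regions in transition at 1%</code></pre>
+</div>
+</div>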
 <div id="hbase.balancer.period" class="dlist">
 <dl>
 <dt class="hdlist1"><code>hbase.balancer.period</code></dt>
@@ -4909,51 +4940,6 @@ Configuration that it is thought rare anyone would change can exist only in code
 </dd>
 </dl>
 </div>
-<div id="hbase.mob.sweep.tool.compaction.ratio" class="dlist">
-<dl>
-<dt class="hdlist1"><code>hbase.mob.sweep.tool.compaction.ratio</code></dt>
-<dd>
-<div class="paragraph">
-<div class="title">Description</div>
-<p>If there&#8217;re too many cells deleted in a mob file, it&#8217;s regarded as an invalid file and needs to be merged. If existingCellsSize/mobFileSize is less than ratio, it&#8217;s regarded as an invalid file. The default value is 0.5f.</p>
-</div>
-<div class="paragraph">
-<div class="title">Default</div>
-<p><code>0.5f</code></p>
-</div>
-</dd>
-</dl>
-</div>
-<div id="hbase.mob.sweep.tool.compaction.mergeable.size" class="dlist">
-<dl>
-<dt class="hdlist1"><code>hbase.mob.sweep.tool.compaction.mergeable.size</code></dt>
-<dd>
-<div class="paragraph">
-<div class="title">Description</div>
-<p>If the size of a mob file is less than this value, it&#8217;s regarded as a small file and needs to be merged. The default value is 128MB.</p>
-</div>
-<div class="paragraph">
-<div class="title">Default</div>
-<p><code>134217728</code></p>
-</div>
-</dd>
-</dl>
-</div>
-<div id="hbase.mob.sweep.tool.compaction.memstore.flush.size" class="dlist">
-<dl>
-<dt class="hdlist1"><code>hbase.mob.sweep.tool.compaction.memstore.flush.size</code></dt>
-<dd>
-<div class="paragraph">
-<div class="title">Description</div>
-<p>The flush size for the memstore used by sweep job. Each sweep reducer owns such a memstore. The default value is 128MB.</p>
-</div>
-<div class="paragraph">
-<div class="title">Default</div>
-<p><code>134217728</code></p>
-</div>
-</dd>
-</dl>
-</div>
 <div id="hbase.master.mob.ttl.cleaner.period" class="dlist">
 <dl>
 <dt class="hdlist1"><code>hbase.master.mob.ttl.cleaner.period</code></dt>
@@ -4975,11 +4961,11 @@ Configuration that it is thought rare anyone would change can exist only in code
 <dd>
 <div class="paragraph">
 <div class="title">Description</div>
-<p>If the size of a mob file is less than this value, it&#8217;s regarded as a small file and needs to be merged in mob compaction. The default value is 192MB.</p>
+<p>If the size of a mob file is less than this value, it&#8217;s regarded as a small file and needs to be merged in mob compaction. The default value is 1280MB.</p>
 </div>
 <div class="paragraph">
 <div class="title">Default</div>
-<p><code>201326592</code></p>
+<p><code>1342177280</code></p>
 </div>
 </dd>
 </dl>
@@ -20675,32 +20661,101 @@ The metrics sampling rate as described in <a href="#hbase_metrics">HBase Metrics
 </div>
 </div>
 </div>
+<div class="sect1">
+<h2 id="_restricting_coprocessor_usage"><a class="anchor" href="#_restricting_coprocessor_usage"></a>93. Restricting Coprocessor Usage</h2>
+<div class="sectionbody">
+<div class="paragraph">
+<p>Arbitrary user coprocessors can be a big concern in multitenant environments. HBase provides a continuum of options for ensuring only expected coprocessors are running:</p>
+</div>
+<div class="ulist">
+<ul>
+<li>
+<p><code>hbase.coprocessor.enabled</code>: Enables or disables all coprocessors. This will limit the functionality of HBase, as disabling all coprocessors will disable some security providers. An example coprocessor so affected is <code>org.apache.hadoop.hbase.security.access.AccessController</code>.</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>hbase.coprocessor.user.enabled</code>: Enables or disables loading coprocessors on tables (i.e. user coprocessors).</p>
+</li>
+<li>
+<p>One can statically load coprocessors via the following tunables in <code>hbase-site.xml</code> (a combined configuration sketch follows this list):</p>
+<div class="ulist">
+<ul>
+<li>
+<p><code>hbase.coprocessor.regionserver.classes</code>: A comma-separated list of coprocessors that are loaded by region servers</p>
+</li>
+<li>
+<p><code>hbase.coprocessor.region.classes</code>: A comma-separated list of RegionObserver and Endpoint coprocessors</p>
+</li>
+<li>
+<p><code>hbase.coprocessor.user.region.classes</code>: A comma-separated list of coprocessors that are loaded by all regions</p>
+</li>
+<li>
+<p><code>hbase.coprocessor.master.classes</code>: A comma-separated list of coprocessors that are loaded by the master (MasterObserver coprocessors)</p>
+</li>
+<li>
+<p><code>hbase.coprocessor.wal.classes</code>: A comma-separated list of WALObserver coprocessors to load</p>
+</li>
+</ul>
+</div>
+</li>
+<li>
+<p><code>hbase.coprocessor.abortonerror</code>: Whether to abort the daemon which has loaded the coprocessor if the coprocessor throws an error other than <code>IOError</code>. If this is set to false and an access controller coprocessor has a fatal error, that coprocessor will be circumvented, so in secure installations this is advised to be <code>true</code>; however, one may override this on a per-table basis for user coprocessors, to ensure they do not abort their running region server and are instead unloaded on error.</p>
+</li>
+<li>
+<p><code>hbase.coprocessor.region.whitelist.paths</code>: A comma-separated list available for those loading <code>org.apache.hadoop.hbase.security.access.CoprocessorWhitelistMasterObserver</code>, whereby one can use the following options to white-list paths from which coprocessors may be loaded.</p>
+<div class="ulist">
+<ul>
+<li>
+<p>Coprocessors on the classpath are implicitly white-listed</p>
+</li>
+<li>
+<p><code>*</code> to wildcard all coprocessor paths</p>
+</li>
+<li>
+<p>An entire filesystem (e.g. <code>hdfs://my-cluster/</code>)</p>
+</li>
+<li>
+<p>A wildcard path to be evaluated by <a href="https://commons.apache.org/proper/commons-io/javadocs/api-release/org/apache/commons/io/FilenameUtils.html">FilenameUtils.wildcardMatch</a></p>
+</li>
+<li>
+<p>Note: Path can specify scheme or not (e.g. <code><a href="file:///usr/hbase/lib/coprocessors" class="bare">file:///usr/hbase/lib/coprocessors</a></code> or for all filesystems <code>/usr/hbase/lib/coprocessors</code>)</p>
+</li>
+</ul>
+</div>
+</li>
+</ul>
+</div>
+</li>
+</ul>
+</div>
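+<div class="paragraph">
+<p>As a sketch of how these tunables fit together (the class lists and whitelist path are illustrative; in practice these entries live in the servers&#8217; <em>hbase-site.xml</em>):</p>
+</div>
+<div class="listingblock">
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="java">Configuration conf = HBaseConfiguration.create();
+// Statically load the security coprocessors...
+conf.set("hbase.coprocessor.master.classes",
+    "org.apache.hadoop.hbase.security.access.AccessController,"
+    + "org.apache.hadoop.hbase.security.access.CoprocessorWhitelistMasterObserver");
+conf.set("hbase.coprocessor.region.classes",
+    "org.apache.hadoop.hbase.security.access.AccessController");
+// ...restrict where user coprocessors may be loaded from...
+conf.set("hbase.coprocessor.region.whitelist.paths", "/usr/hbase/lib/coprocessors");
+// ...and abort the daemon on unexpected coprocessor errors.
+conf.setBoolean("hbase.coprocessor.abortonerror", true);</code></pre>
+</div>
+</div>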
+</div>
+</div>
 <h1 id="performance" class="sect0"><a class="anchor" href="#performance"></a>Apache HBase Performance Tuning</h1>
 <div class="sect1">
-<h2 id="perf.os"><a class="anchor" href="#perf.os"></a>93. Operating System</h2>
+<h2 id="perf.os"><a class="anchor" href="#perf.os"></a>94. Operating System</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="perf.os.ram"><a class="anchor" href="#perf.os.ram"></a>93.1. Memory</h3>
+<h3 id="perf.os.ram"><a class="anchor" href="#perf.os.ram"></a>94.1. Memory</h3>
 <div class="paragraph">
 <p>RAM, RAM, RAM.
 Don&#8217;t starve HBase.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.os.64"><a class="anchor" href="#perf.os.64"></a>93.2. 64-bit</h3>
+<h3 id="perf.os.64"><a class="anchor" href="#perf.os.64"></a>94.2. 64-bit</h3>
 <div class="paragraph">
 <p>Use a 64-bit platform (and 64-bit JVM).</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.os.swap"><a class="anchor" href="#perf.os.swap"></a>93.3. Swapping</h3>
+<h3 id="perf.os.swap"><a class="anchor" href="#perf.os.swap"></a>94.3. Swapping</h3>
 <div class="paragraph">
 <p>Watch out for swapping.
 Set <code>swappiness</code> to 0.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.os.cpu"><a class="anchor" href="#perf.os.cpu"></a>93.4. CPU</h3>
+<h3 id="perf.os.cpu"><a class="anchor" href="#perf.os.cpu"></a>94.4. CPU</h3>
 <div class="paragraph">
 <p>Make sure you have set up your Hadoop to use native, hardware checksumming.
 See link:[hadoop.native.lib].</p>
@@ -20709,7 +20764,7 @@ See link:[hadoop.native.lib].</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.network"><a class="anchor" href="#perf.network"></a>94. Network</h2>
+<h2 id="perf.network"><a class="anchor" href="#perf.network"></a>95. Network</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Perhaps the most important factor in avoiding network issues degrading Hadoop and HBase performance is the switching hardware that is used; decisions made early in the scope of the project can cause major problems when you double or triple the size of your cluster (or more).</p>
@@ -20731,14 +20786,14 @@ See link:[hadoop.native.lib].</p>
 </ul>
 </div>
 <div class="sect2">
-<h3 id="perf.network.1switch"><a class="anchor" href="#perf.network.1switch"></a>94.1. Single Switch</h3>
+<h3 id="perf.network.1switch"><a class="anchor" href="#perf.network.1switch"></a>95.1. Single Switch</h3>
 <div class="paragraph">
 <p>The single most important factor in this configuration is whether the switching capacity of the hardware can handle the traffic generated by all systems connected to the switch.
 Some lower-priced commodity hardware can have less switching capacity than a fully utilized switch would require.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.network.2switch"><a class="anchor" href="#perf.network.2switch"></a>94.2. Multiple Switches</h3>
+<h3 id="perf.network.2switch"><a class="anchor" href="#perf.network.2switch"></a>95.2. Multiple Switches</h3>
 <div class="paragraph">
 <p>Multiple switches are a potential pitfall in the architecture.
 The most common configuration of lower priced hardware is a simple 1Gbps uplink from one switch to another.
@@ -20764,7 +20819,7 @@ single 48 port as opposed to 2x 24 port</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.network.multirack"><a class="anchor" href="#perf.network.multirack"></a>94.3. Multiple Racks</h3>
+<h3 id="perf.network.multirack"><a class="anchor" href="#perf.network.multirack"></a>95.3. Multiple Racks</h3>
 <div class="paragraph">
 <p>Multiple rack configurations carry the same potential issues as multiple switches, and can suffer performance degradation from two main areas:</p>
 </div>
@@ -20789,13 +20844,13 @@ An example of this is, creating an 8Gbps port channel from rack A to rack B, usi
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.network.ints"><a class="anchor" href="#perf.network.ints"></a>94.4. Network Interfaces</h3>
+<h3 id="perf.network.ints"><a class="anchor" href="#perf.network.ints"></a>95.4. Network Interfaces</h3>
 <div class="paragraph">
 <p>Are all the network interfaces functioning correctly? Are you sure? See the Troubleshooting Case Study in <a href="#casestudies.slownode">Case Study #1 (Performance Issue On A Single Node)</a>.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.network.call_me_maybe"><a class="anchor" href="#perf.network.call_me_maybe"></a>94.5. Network Consistency and Partition Tolerance</h3>
+<h3 id="perf.network.call_me_maybe"><a class="anchor" href="#perf.network.call_me_maybe"></a>95.5. Network Consistency and Partition Tolerance</h3>
 <div class="paragraph">
 <p>The <a href="http://en.wikipedia.org/wiki/CAP_theorem">CAP Theorem</a> states that a distributed system can maintain two out of the following three characteristics:
 - *C*onsistency&#8201;&#8212;&#8201;all nodes see the same data.
@@ -20812,12 +20867,12 @@ An example of this is, creating an 8Gbps port channel from rack A to rack B, usi
 </div>
 </div>
 <div class="sect1">
-<h2 id="jvm"><a class="anchor" href="#jvm"></a>95. Java</h2>
+<h2 id="jvm"><a class="anchor" href="#jvm"></a>96. Java</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="gc"><a class="anchor" href="#gc"></a>95.1. The Garbage Collector and Apache HBase</h3>
+<h3 id="gc"><a class="anchor" href="#gc"></a>96.1. The Garbage Collector and Apache HBase</h3>
 <div class="sect3">
-<h4 id="gcpause"><a class="anchor" href="#gcpause"></a>95.1.1. Long GC pauses</h4>
+<h4 id="gcpause"><a class="anchor" href="#gcpause"></a>96.1.1. Long GC pauses</h4>
 <div class="paragraph">
 <p>In his presentation, <a href="http://www.slideshare.net/cloudera/hbase-hug-presentation">Avoiding Full GCs with MemStore-Local Allocation Buffers</a>, Todd Lipcon describes two cases of stop-the-world garbage collections common in HBase, especially during loading: CMS failure modes and old generation heap fragmentation.</p>
 </div>
@@ -20855,38 +20910,38 @@ See <a href="#block.cache">Block Cache</a></p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.configurations"><a class="anchor" href="#perf.configurations"></a>96. HBase Configurations</h2>
+<h2 id="perf.configurations"><a class="anchor" href="#perf.configurations"></a>97. HBase Configurations</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>See <a href="#recommended_configurations">Recommended Configurations</a>.</p>
 </div>
 <div class="sect2">
-<h3 id="perf.99th.percentile"><a class="anchor" href="#perf.99th.percentile"></a>96.1. Improving the 99th Percentile</h3>
+<h3 id="perf.99th.percentile"><a class="anchor" href="#perf.99th.percentile"></a>97.1. Improving the 99th Percentile</h3>
 <div class="paragraph">
 <p>Try link:[hedged_reads].</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.compactions.and.splits"><a class="anchor" href="#perf.compactions.and.splits"></a>96.2. Managing Compactions</h3>
+<h3 id="perf.compactions.and.splits"><a class="anchor" href="#perf.compactions.and.splits"></a>97.2. Managing Compactions</h3>
 <div class="paragraph">
 <p>For larger systems, managing link:[compactions and splits] may be something you want to consider.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.handlers"><a class="anchor" href="#perf.handlers"></a>96.3. <code>hbase.regionserver.handler.count</code></h3>
+<h3 id="perf.handlers"><a class="anchor" href="#perf.handlers"></a>97.3. <code>hbase.regionserver.handler.count</code></h3>
 <div class="paragraph">
 <p>See <a href="#hbase.regionserver.handler.count">[hbase.regionserver.handler.count]</a>.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.hfile.block.cache.size"><a class="anchor" href="#perf.hfile.block.cache.size"></a>96.4. <code>hfile.block.cache.size</code></h3>
+<h3 id="perf.hfile.block.cache.size"><a class="anchor" href="#perf.hfile.block.cache.size"></a>97.4. <code>hfile.block.cache.size</code></h3>
 <div class="paragraph">
 <p>See <a href="#hfile.block.cache.size">[hfile.block.cache.size]</a>.
 A memory setting for the RegionServer process.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="blockcache.prefetch"><a class="anchor" href="#blockcache.prefetch"></a>96.5. Prefetch Option for Blockcache</h3>
+<h3 id="blockcache.prefetch"><a class="anchor" href="#blockcache.prefetch"></a>97.5. Prefetch Option for Blockcache</h3>
 <div class="paragraph">
 <p><a href="https://issues.apache.org/jira/browse/HBASE-9857">HBASE-9857</a> adds a new option to prefetch HFile contents when opening the BlockCache, if a Column family or RegionServer property is set.
 This option is available for HBase 0.98.3 and later.
@@ -20933,35 +20988,35 @@ or on <code>org.apache.hadoop.hbase.io.hfile.HFileReaderV2</code> in earlier ver
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.rs.memstore.size"><a class="anchor" href="#perf.rs.memstore.size"></a>96.6. <code>hbase.regionserver.global.memstore.size</code></h3>
+<h3 id="perf.rs.memstore.size"><a class="anchor" href="#perf.rs.memstore.size"></a>97.6. <code>hbase.regionserver.global.memstore.size</code></h3>
 <div class="paragraph">
 <p>See <a href="#hbase.regionserver.global.memstore.size">[hbase.regionserver.global.memstore.size]</a>.
 This memory setting is often adjusted for the RegionServer process depending on needs.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.rs.memstore.size.lower.limit"><a class="anchor" href="#perf.rs.memstore.size.lower.limit"></a>96.7. <code>hbase.regionserver.global.memstore.size.lower.limit</code></h3>
+<h3 id="perf.rs.memstore.size.lower.limit"><a class="anchor" href="#perf.rs.memstore.size.lower.limit"></a>97.7. <code>hbase.regionserver.global.memstore.size.lower.limit</code></h3>
 <div class="paragraph">
 <p>See <a href="#hbase.regionserver.global.memstore.size.lower.limit">[hbase.regionserver.global.memstore.size.lower.limit]</a>.
 This memory setting is often adjusted for the RegionServer process depending on needs.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.hstore.blockingstorefiles"><a class="anchor" href="#perf.hstore.blockingstorefiles"></a>96.8. <code>hbase.hstore.blockingStoreFiles</code></h3>
+<h3 id="perf.hstore.blockingstorefiles"><a class="anchor" href="#perf.hstore.blockingstorefiles"></a>97.8. <code>hbase.hstore.blockingStoreFiles</code></h3>
 <div class="paragraph">
 <p>See <a href="#hbase.hstore.blockingStoreFiles">[hbase.hstore.blockingStoreFiles]</a>.
 If there is blocking in the RegionServer logs, increasing this can help.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.hregion.memstore.block.multiplier"><a class="anchor" href="#perf.hregion.memstore.block.multiplier"></a>96.9. <code>hbase.hregion.memstore.block.multiplier</code></h3>
+<h3 id="perf.hregion.memstore.block.multiplier"><a class="anchor" href="#perf.hregion.memstore.block.multiplier"></a>97.9. <code>hbase.hregion.memstore.block.multiplier</code></h3>
 <div class="paragraph">
 <p>See <a href="#hbase.hregion.memstore.block.multiplier">[hbase.hregion.memstore.block.multiplier]</a>.
 If there is enough RAM, increasing this can help.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="hbase.regionserver.checksum.verify.performance"><a class="anchor" href="#hbase.regionserver.checksum.verify.performance"></a>96.10. <code>hbase.regionserver.checksum.verify</code></h3>
+<h3 id="hbase.regionserver.checksum.verify.performance"><a class="anchor" href="#hbase.regionserver.checksum.verify.performance"></a>97.10. <code>hbase.regionserver.checksum.verify</code></h3>
 <div class="paragraph">
 <p>Have HBase write the checksum into the datablock and save having to do the checksum seek whenever you read.</p>
 </div>
@@ -20970,7 +21025,7 @@ If there is enough RAM, increasing this can help.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_tuning_code_callqueue_code_options"><a class="anchor" href="#_tuning_code_callqueue_code_options"></a>96.11. Tuning <code>callQueue</code> Options</h3>
+<h3 id="_tuning_code_callqueue_code_options"><a class="anchor" href="#_tuning_code_callqueue_code_options"></a>97.11. Tuning <code>callQueue</code> Options</h3>
 <div class="paragraph">
 <p><a href="https://issues.apache.org/jira/browse/HBASE-11355">HBASE-11355</a> introduces several callQueue tuning mechanisms which can increase performance.
 See the JIRA for some benchmarking information.</p>
@@ -21064,7 +21119,7 @@ These parameters are intended for testing purposes and should be used carefully.
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.zookeeper"><a class="anchor" href="#perf.zookeeper"></a>97. ZooKeeper</h2>
+<h2 id="perf.zookeeper"><a class="anchor" href="#perf.zookeeper"></a>98. ZooKeeper</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>See <a href="#zookeeper">ZooKeeper</a> for information on configuring ZooKeeper, and see the part about having a dedicated disk.</p>
@@ -21072,23 +21127,23 @@ These parameters are intended for testing purposes and should be used carefully.
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.schema"><a class="anchor" href="#perf.schema"></a>98. Schema Design</h2>
+<h2 id="perf.schema"><a class="anchor" href="#perf.schema"></a>99. Schema Design</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="perf.number.of.cfs"><a class="anchor" href="#perf.number.of.cfs"></a>98.1. Number of Column Families</h3>
+<h3 id="perf.number.of.cfs"><a class="anchor" href="#perf.number.of.cfs"></a>99.1. Number of Column Families</h3>
 <div class="paragraph">
 <p>See <a href="#number.of.cfs">On the number of column families</a>.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.schema.keys"><a class="anchor" href="#perf.schema.keys"></a>98.2. Key and Attribute Lengths</h3>
+<h3 id="perf.schema.keys"><a class="anchor" href="#perf.schema.keys"></a>99.2. Key and Attribute Lengths</h3>
 <div class="paragraph">
 <p>See <a href="#keysize">Try to minimize row and column sizes</a>.
 See also <a href="#perf.compression.however">However&#8230;&#8203;</a> for compression caveats.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="schema.regionsize"><a class="anchor" href="#schema.regionsize"></a>98.3. Table RegionSize</h3>
+<h3 id="schema.regionsize"><a class="anchor" href="#schema.regionsize"></a>99.3. Table RegionSize</h3>
 <div class="paragraph">
 <p>The regionsize can be set on a per-table basis via <code>setMaxFileSize</code> on <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HTableDescriptor.html">HTableDescriptor</a> when certain tables require different regionsizes than the configured default regionsize.</p>
 </div>
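 <div class="paragraph">
 <p>A minimal sketch, with a placeholder table name and an illustrative 10 GB region size:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("myTable"));
 desc.setMaxFileSize(10L * 1024 * 1024 * 1024); // split regions of this table at roughly 10 GB</code></pre>
 </div>
 </div>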
@@ -21097,7 +21152,7 @@ See also <a href="#perf.compression.however">However&#8230;&#8203;</a> for compr
 </div>
 </div>
 <div class="sect2">
-<h3 id="schema.bloom"><a class="anchor" href="#schema.bloom"></a>98.4. Bloom Filters</h3>
+<h3 id="schema.bloom"><a class="anchor" href="#schema.bloom"></a>99.4. Bloom Filters</h3>
 <div class="paragraph">
 <p>A Bloom filter, named for its creator, Burton Howard Bloom, is a data structure which is designed to predict whether a given element is a member of a set of data.
 A positive result from a Bloom filter is not always accurate, but a negative result is guaranteed to be accurate.
@@ -21124,7 +21179,7 @@ Since HBase 0.96, row-based Bloom filters are enabled by default.
 <p>For more information on Bloom filters in relation to HBase, see <a href="#blooms">Bloom Filters</a> for more information, or the following Quora discussion: <a href="http://www.quora.com/How-are-bloom-filters-used-in-HBase">How are bloom filters used in HBase?</a>.</p>
 </div>
 <div class="sect3">
-<h4 id="bloom.filters.when"><a class="anchor" href="#bloom.filters.when"></a>98.4.1. When To Use Bloom Filters</h4>
+<h4 id="bloom.filters.when"><a class="anchor" href="#bloom.filters.when"></a>99.4.1. When To Use Bloom Filters</h4>
 <div class="paragraph">
 <p>Since HBase 0.96, row-based Bloom filters are enabled by default.
 You may choose to disable them or to change some tables to use row+column Bloom filters, depending on the characteristics of your data and how it is loaded into HBase.</p>
@@ -21149,7 +21204,7 @@ Bloom filters work best when the size of each data entry is at least a few kilob
 </div>
 </div>
 <div class="sect3">
-<h4 id="_enabling_bloom_filters"><a class="anchor" href="#_enabling_bloom_filters"></a>98.4.2. Enabling Bloom Filters</h4>
+<h4 id="_enabling_bloom_filters"><a class="anchor" href="#_enabling_bloom_filters"></a>99.4.2. Enabling Bloom Filters</h4>
 <div class="paragraph">
 <p>Bloom filters are enabled on a Column Family.
 You can do this by using the setBloomFilterType method of HColumnDescriptor or using the HBase API.
@@ -21167,7 +21222,7 @@ See also the API documentation for <a href="http://hbase.apache.org/apidocs/org/
 </div>
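 <div class="paragraph">
 <p>For example, a sketch enabling a row+column Bloom filter on a column family (the family name is a placeholder):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">HColumnDescriptor cf = new HColumnDescriptor("cf1");
 cf.setBloomFilterType(BloomType.ROWCOL); // BloomType.ROW is the default</code></pre>
 </div>
 </div>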
 </div>
 <div class="sect3">
-<h4 id="_configuring_server_wide_behavior_of_bloom_filters"><a class="anchor" href="#_configuring_server_wide_behavior_of_bloom_filters"></a>98.4.3. Configuring Server-Wide Behavior of Bloom Filters</h4>
+<h4 id="_configuring_server_wide_behavior_of_bloom_filters"><a class="anchor" href="#_configuring_server_wide_behavior_of_bloom_filters"></a>99.4.3. Configuring Server-Wide Behavior of Bloom Filters</h4>
 <div class="paragraph">
 <p>You can configure the following settings in the <em>hbase-site.xml</em>.</p>
 </div>
@@ -21229,7 +21284,7 @@ See also the API documentation for <a href="http://hbase.apache.org/apidocs/org/
 </div>
 </div>
 <div class="sect2">
-<h3 id="schema.cf.blocksize"><a class="anchor" href="#schema.cf.blocksize"></a>98.5. ColumnFamily BlockSize</h3>
+<h3 id="schema.cf.blocksize"><a class="anchor" href="#schema.cf.blocksize"></a>99.5. ColumnFamily BlockSize</h3>
 <div class="paragraph">
 <p>The blocksize can be configured for each ColumnFamily in a table, and defaults to 64k.
 Larger cell values require larger blocksizes.
@@ -21240,7 +21295,7 @@ There is an inverse relationship between blocksize and the resulting StoreFile i
 </div>
 </div>
 <div class="sect2">
-<h3 id="cf.in.memory"><a class="anchor" href="#cf.in.memory"></a>98.6. In-Memory ColumnFamilies</h3>
+<h3 id="cf.in.memory"><a class="anchor" href="#cf.in.memory"></a>99.6. In-Memory ColumnFamilies</h3>
 <div class="paragraph">
 <p>ColumnFamilies can optionally be defined as in-memory.
 Data is still persisted to disk, just like any other ColumnFamily.
@@ -21251,13 +21306,13 @@ In-memory blocks have the highest priority in the <a href="#block.cache">Block C
 </div>
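 <div class="paragraph">
 <p>A sketch marking a column family as in-memory (the family name is a placeholder):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">HColumnDescriptor cf = new HColumnDescriptor("cf1");
 cf.setInMemory(true); // blocks for this family get the highest block cache priority</code></pre>
 </div>
 </div>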
 </div>
 <div class="sect2">
-<h3 id="perf.compression"><a class="anchor" href="#perf.compression"></a>98.7. Compression</h3>
+<h3 id="perf.compression"><a class="anchor" href="#perf.compression"></a>99.7. Compression</h3>
 <div class="paragraph">
 <p>Production systems should use compression with their ColumnFamily definitions.
 See <a href="#compression">Compression and Data Block Encoding In HBase</a> for more information.</p>
 </div>
 <div class="sect3">
-<h4 id="perf.compression.however"><a class="anchor" href="#perf.compression.however"></a>98.7.1. However&#8230;&#8203;</h4>
+<h4 id="perf.compression.however"><a class="anchor" href="#perf.compression.however"></a>99.7.1. However&#8230;&#8203;</h4>
 <div class="paragraph">
 <p>Compression deflates data <em>on disk</em>.
 When it&#8217;s in-memory (e.g., in the MemStore) or on the wire (e.g., transferring between RegionServer and Client) it&#8217;s inflated.
@@ -21271,10 +21326,10 @@ So while using ColumnFamily compression is a best practice, but it&#8217;s not g
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.general"><a class="anchor" href="#perf.general"></a>99. HBase General Patterns</h2>
+<h2 id="perf.general"><a class="anchor" href="#perf.general"></a>100. HBase General Patterns</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="perf.general.constants"><a class="anchor" href="#perf.general.constants"></a>99.1. Constants</h3>
+<h3 id="perf.general.constants"><a class="anchor" href="#perf.general.constants"></a>100.1. Constants</h3>
 <div class="paragraph">
 <p>When people get started with HBase they have a tendency to write code that looks like this:</p>
 </div>
@@ -21303,10 +21358,10 @@ Get get = <span class="keyword">new</span> Get(rowkey);
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.writing"><a class="anchor" href="#perf.writing"></a>100. Writing to HBase</h2>
+<h2 id="perf.writing"><a class="anchor" href="#perf.writing"></a>101. Writing to HBase</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="perf.batch.loading"><a class="anchor" href="#perf.batch.loading"></a>100.1. Batch Loading</h3>
+<h3 id="perf.batch.loading"><a class="anchor" href="#perf.batch.loading"></a>101.1. Batch Loading</h3>
 <div class="paragraph">
 <p>Use the bulk load tool if you can.
 See <a href="#arch.bulk.load">Bulk Loading</a>.
@@ -21314,7 +21369,7 @@ Otherwise, pay attention to the below.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="precreate.regions"><a class="anchor" href="#precreate.regions"></a>100.2. Table Creation: Pre-Creating Regions</h3>
+<h3 id="precreate.regions"><a class="anchor" href="#precreate.regions"></a>101.2. Table Creation: Pre-Creating Regions</h3>
 <div class="paragraph">
 <p>Tables in HBase are initially created with one region by default.
 For bulk imports, this means that all clients will write to the same region until it is large enough to split and become distributed across the cluster.
@@ -21364,7 +21419,7 @@ See <a href="#tricks.pre-split">Pre-splitting tables with the HBase Shell</a> fo
 </div>
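 <div class="paragraph">
 <p>A sketch of pre-splitting at creation time, assuming an <code>Admin</code> instance and a table descriptor are in hand (the split points are placeholders):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">byte[][] splitPoints = new byte[][] {
     Bytes.toBytes("g"), Bytes.toBytes("n"), Bytes.toBytes("u")
 };
 admin.createTable(tableDescriptor, splitPoints); // table starts with four regions</code></pre>
 </div>
 </div>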
 </div>
 <div class="sect2">
-<h3 id="def.log.flush"><a class="anchor" href="#def.log.flush"></a>100.3. Table Creation: Deferred Log Flush</h3>
+<h3 id="def.log.flush"><a class="anchor" href="#def.log.flush"></a>101.3. Table Creation: Deferred Log Flush</h3>
 <div class="paragraph">
 <p>The default behavior for Puts using the Write Ahead Log (WAL) is that <code>WAL</code> edits will be written immediately.
 If deferred log flush is used, WAL edits are kept in memory until the flush period.
@@ -21377,7 +21432,7 @@ The default value of <code>hbase.regionserver.optionallogflushinterval</code> is
 </div>
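 <div class="paragraph">
 <p>Deferred log flush is a table-level setting; a sketch using the durability API (the table name is a placeholder):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("myTable"));
 desc.setDurability(Durability.ASYNC_WAL); // WAL edits are flushed on an interval, not per mutation</code></pre>
 </div>
 </div>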
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.client.autoflush"><a class="anchor" href="#perf.hbase.client.autoflush"></a>100.4. HBase Client: AutoFlush</h3>
+<h3 id="perf.hbase.client.autoflush"><a class="anchor" href="#perf.hbase.client.autoflush"></a>101.4. HBase Client: AutoFlush</h3>
 <div class="paragraph">
 <p>When performing a lot of Puts, make sure that setAutoFlush is set to false on your <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html">Table</a> instance.
 Otherwise, the Puts will be sent one at a time to the RegionServer.
@@ -21388,7 +21443,7 @@ Calling <code>close</code> on the <code>Table</code> instance will invoke <code>
 </div>
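 <div class="paragraph">
 <p>In the newer client API the equivalent client-side buffering is obtained through <code>BufferedMutator</code> rather than <code>setAutoFlush</code>; a sketch, assuming an open <code>Connection</code> and a <code>Put</code> are in hand:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">try (BufferedMutator mutator = connection.getBufferedMutator(TableName.valueOf("myTable"))) {
   mutator.mutate(put); // buffered client-side and sent in batches
 } // close() flushes anything still buffered</code></pre>
 </div>
 </div>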
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.client.putwal"><a class="anchor" href="#perf.hbase.client.putwal"></a>100.5. HBase Client: Turn off WAL on Puts</h3>
+<h3 id="perf.hbase.client.putwal"><a class="anchor" href="#perf.hbase.client.putwal"></a>101.5. HBase Client: Turn off WAL on Puts</h3>
 <div class="paragraph">
 <p>A frequent request is to disable the WAL to increase performance of Puts.
 This is only appropriate for bulk loads, as it puts your data at risk by removing the protection of the WAL in the event of a region server crash.
@@ -21413,14 +21468,14 @@ To disable the WAL, see <a href="#wal.disable">Disabling the WAL</a>.</p>
 </div>
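 <div class="paragraph">
 <p>A sketch of disabling the WAL for an individual <code>Put</code> (the row key is a placeholder); again, this is only appropriate when losing the edit on a crash is acceptable:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Put put = new Put(Bytes.toBytes("rowkey"));
 put.setDurability(Durability.SKIP_WAL); // edits are lost if the RegionServer crashes before a flush</code></pre>
 </div>
 </div>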
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.client.regiongroup"><a class="anchor" href="#perf.hbase.client.regiongroup"></a>100.6. HBase Client: Group Puts by RegionServer</h3>
+<h3 id="perf.hbase.client.regiongroup"><a class="anchor" href="#perf.hbase.client.regiongroup"></a>101.6. HBase Client: Group Puts by RegionServer</h3>
 <div class="paragraph">
 <p>In addition to using the writeBuffer, grouping <code>Put</code>s by RegionServer can reduce the number of client RPC calls per writeBuffer flush.
 There is a utility <code>HTableUtil</code> currently on MASTER that does this, but you can either copy that or implement your own version for those still on 0.90.x or earlier.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.write.mr.reducer"><a class="anchor" href="#perf.hbase.write.mr.reducer"></a>100.7. MapReduce: Skip The Reducer</h3>
+<h3 id="perf.hbase.write.mr.reducer"><a class="anchor" href="#perf.hbase.write.mr.reducer"></a>101.7. MapReduce: Skip The Reducer</h3>
 <div class="paragraph">
 <p>When writing a lot of data to an HBase table from a MR job (e.g., with <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/TableOutputFormat.html">TableOutputFormat</a>), and specifically where Puts are being emitted from the Mapper, skip the Reducer step.
 When a Reducer step is used, all of the output (Puts) from the Mapper will get spooled to disk, then sorted/shuffled to other Reducers that will most likely be off-node.
@@ -21431,7 +21486,7 @@ It&#8217;s far more efficient to just write directly to HBase.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.one.region"><a class="anchor" href="#perf.one.region"></a>100.8. Anti-Pattern: One Hot Region</h3>
+<h3 id="perf.one.region"><a class="anchor" href="#perf.one.region"></a>101.8. Anti-Pattern: One Hot Region</h3>
 <div class="paragraph">
 <p>If all your data is being written to one region at a time, then re-read the section on processing timeseries data.</p>
 </div>
@@ -21447,21 +21502,21 @@ As the HBase client communicates directly with the RegionServers, this can be ob
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.reading"><a class="anchor" href="#perf.reading"></a>101. Reading from HBase</h2>
+<h2 id="perf.reading"><a class="anchor" href="#perf.reading"></a>102. Reading from HBase</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>The mailing list can help if you are having performance issues.
 For example, here is a good general thread on what to look at addressing read-time issues: <a href="http://search-hadoop.com/m/qOo2yyHtCC1">HBase Random Read latency &gt; 100ms</a></p>
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.client.caching"><a class="anchor" href="#perf.hbase.client.caching"></a>101.1. Scan Caching</h3>
+<h3 id="perf.hbase.client.caching"><a class="anchor" href="#perf.hbase.client.caching"></a>102.1. Scan Caching</h3>
 <div class="paragraph">
 <p>If HBase is used as an input source for a MapReduce job, for example, make sure that the input <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html">Scan</a> instance to the MapReduce job has <code>setCaching</code> set to something greater than the default (which is 1). Using the default value means that the map-task will make a call back to the region-server for every record processed.
 Setting this value to 500, for example, will transfer 500 rows at a time to the client to be processed.
 There is a cost/benefit to having the cache value be large because it costs more in memory for both client and RegionServer, so bigger isn&#8217;t always better.</p>
 </div>
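 <div class="paragraph">
 <p>For example, a minimal sketch (the value 500 is illustrative):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Scan scan = new Scan();
 scan.setCaching(500); // transfer 500 rows per RPC instead of the default 1</code></pre>
 </div>
 </div>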
 <div class="sect3">
-<h4 id="perf.hbase.client.caching.mr"><a class="anchor" href="#perf.hbase.client.caching.mr"></a>101.1.1. Scan Caching in MapReduce Jobs</h4>
+<h4 id="perf.hbase.client.caching.mr"><a class="anchor" href="#perf.hbase.client.caching.mr"></a>102.1.1. Scan Caching in MapReduce Jobs</h4>
 <div class="paragraph">
 <p>Scan settings in MapReduce jobs deserve special attention.
 Timeouts can result (e.g., UnknownScannerException) in Map tasks if it takes longer to process a batch of records before the client goes back to the RegionServer for the next set of data.
@@ -21475,7 +21530,7 @@ If you process rows more slowly (e.g., lots of transformations per row, writes),
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.client.selection"><a class="anchor" href="#perf.hbase.client.selection"></a>101.2. Scan Attribute Selection</h3>
+<h3 id="perf.hbase.client.selection"><a class="anchor" href="#perf.hbase.client.selection"></a>102.2. Scan Attribute Selection</h3>
 <div class="paragraph">
 <p>Whenever a Scan is used to process large numbers of rows (and especially when used as a MapReduce source), be aware of which attributes are selected.
 If <code>scan.addFamily</code> is called then <em>all</em> of the attributes in the specified ColumnFamily will be returned to the client.
@@ -21483,7 +21538,7 @@ If only a small number of the available attributes are to be processed, then onl
 </div>
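 <div class="paragraph">
 <p>For example, a sketch selecting a single attribute rather than a whole family (the names are placeholders):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Scan scan = new Scan();
 scan.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("attr1")); // rather than scan.addFamily(...)</code></pre>
 </div>
 </div>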
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.client.seek"><a class="anchor" href="#perf.hbase.client.seek"></a>101.3. Avoid scan seeks</h3>
+<h3 id="perf.hbase.client.seek"><a class="anchor" href="#perf.hbase.client.seek"></a>102.3. Avoid scan seeks</h3>
 <div class="paragraph">
 <p>When columns are selected explicitly with <code>scan.addColumn</code>, HBase will schedule seek operations to seek between the selected columns.
 When rows have few columns and each column has only a few versions this can be inefficient.
@@ -21503,13 +21558,13 @@ table.getScanner(scan);</code></pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.mr.input"><a class="anchor" href="#perf.hbase.mr.input"></a>101.4. MapReduce - Input Splits</h3>
+<h3 id="perf.hbase.mr.input"><a class="anchor" href="#perf.hbase.mr.input"></a>102.4. MapReduce - Input Splits</h3>
 <div class="paragraph">
 <p>For MapReduce jobs that use HBase tables as a source, if there is a pattern where the "slow" map tasks seem to have the same Input Split (i.e., the RegionServer serving the data), see the Troubleshooting Case Study in <a href="#casestudies.slownode">Case Study #1 (Performance Issue On A Single Node)</a>.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.client.scannerclose"><a class="anchor" href="#perf.hbase.client.scannerclose"></a>101.5. Close ResultScanners</h3>
+<h3 id="perf.hbase.client.scannerclose"><a class="anchor" href="#perf.hbase.client.scannerclose"></a>102.5. Close ResultScanners</h3>
 <div class="paragraph">
 <p>This isn&#8217;t so much about improving performance but rather <em>avoiding</em> performance problems.
 If you forget to close <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/ResultScanner.html">ResultScanners</a> you can cause problems on the RegionServers.
@@ -21531,7 +21586,7 @@ table.close();</code></pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.client.blockcache"><a class="anchor" href="#perf.hbase.client.blockcache"></a>101.6. Block Cache</h3>
+<h3 id="perf.hbase.client.blockcache"><a class="anchor" href="#perf.hbase.client.blockcache"></a>102.6. Block Cache</h3>
 <div class="paragraph">
 <p><a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html">Scan</a> instances can be set to use the block cache in the RegionServer via the <code>setCacheBlocks</code> method.
 For input Scans to MapReduce jobs, this should be <code>false</code>.
@@ -21543,7 +21598,7 @@ See <a href="#offheap.blockcache">Off-heap Block Cache</a></p>
 </div>
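 <div class="paragraph">
 <p>For example, a sketch for a MapReduce input scan:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Scan scan = new Scan();
 scan.setCacheBlocks(false); // avoid churning the RegionServer block cache</code></pre>
 </div>
 </div>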
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.client.rowkeyonly"><a class="anchor" href="#perf.hbase.client.rowkeyonly"></a>101.7. Optimal Loading of Row Keys</h3>
+<h3 id="perf.hbase.client.rowkeyonly"><a class="anchor" href="#perf.hbase.client.rowkeyonly"></a>102.7. Optimal Loading of Row Keys</h3>
 <div class="paragraph">
 <p>When performing a table <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html">scan</a> where only the row keys are needed (no families, qualifiers, values or timestamps), add a FilterList with a <code>MUST_PASS_ALL</code> operator to the scanner using <code>setFilter</code>.
 The filter list should include both a <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FirstKeyOnlyFilter.html">FirstKeyOnlyFilter</a> and a <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/KeyOnlyFilter.html">KeyOnlyFilter</a>.
@@ -21551,7 +21606,7 @@ Using this filter combination will result in a worst case scenario of a RegionSe
 </div>
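 <div class="paragraph">
 <p>A sketch of that filter combination:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Scan scan = new Scan();
 FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL);
 filters.addFilter(new FirstKeyOnlyFilter());
 filters.addFilter(new KeyOnlyFilter());
 scan.setFilter(filters); // returns only the first cell of each row, with its value stripped</code></pre>
 </div>
 </div>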
 </div>
 <div class="sect2">
-<h3 id="perf.hbase.read.dist"><a class="anchor" href="#perf.hbase.read.dist"></a>101.8. Concurrency: Monitor Data Spread</h3>
+<h3 id="perf.hbase.read.dist"><a class="anchor" href="#perf.hbase.read.dist"></a>102.8. Concurrency: Monitor Data Spread</h3>
 <div class="paragraph">
 <p>When performing a high number of concurrent reads, monitor the data spread of the target tables.
 If the target table(s) have too few regions then the reads could likely be served from too few nodes.</p>
@@ -21561,7 +21616,7 @@ If the target table(s) have too few regions then the reads could likely be serve
 </div>
 </div>
 <div class="sect2">
-<h3 id="blooms"><a class="anchor" href="#blooms"></a>101.9. Bloom Filters</h3>
+<h3 id="blooms"><a class="anchor" href="#blooms"></a>102.9. Bloom Filters</h3>
 <div class="paragraph">
 <p>Enabling Bloom Filters can save you having to go to disk and can help improve read latencies.</p>
 </div>
@@ -21578,7 +21633,7 @@ Version 2 is a rewrite from scratch though again it starts with the one-lab work
 <p>See also <a href="#schema.bloom">Bloom Filters</a>.</p>
 </div>
 <div class="sect3">
-<h4 id="bloom_footprint"><a class="anchor" href="#bloom_footprint"></a>101.9.1. Bloom StoreFile footprint</h4>
+<h4 id="bloom_footprint"><a class="anchor" href="#bloom_footprint"></a>102.9.1. Bloom StoreFile footprint</h4>
 <div class="paragraph">
 <p>Bloom filters add an entry to the <code>StoreFile</code> general <code>FileInfo</code> data structure and then two extra entries to the <code>StoreFile</code> metadata section.</p>
 </div>
@@ -21602,7 +21657,7 @@ Stored in the LRU cache, if it is enabled (It&#8217;s enabled by default).</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="config.bloom"><a class="anchor" href="#config.bloom"></a>101.9.2. Bloom Filter Configuration</h4>
+<h4 id="config.bloom"><a class="anchor" href="#config.bloom"></a>102.9.2. Bloom Filter Configuration</h4>
 <div class="sect4">
 <h5 id="__code_io_storefile_bloom_enabled_code_global_kill_switch"><a class="anchor" href="#__code_io_storefile_bloom_enabled_code_global_kill_switch"></a><code>io.storefile.bloom.enabled</code> global kill switch</h5>
 <div class="paragraph">
@@ -21630,7 +21685,7 @@ See the <em>Development Process</em> section of the document <a href="https://is
 </div>
 </div>
 <div class="sect2">
-<h3 id="hedged.reads"><a class="anchor" href="#hedged.reads"></a>101.10. Hedged Reads</h3>
+<h3 id="hedged.reads"><a class="anchor" href="#hedged.reads"></a>102.10. Hedged Reads</h3>
 <div class="paragraph">
 <p>Hedged reads are a feature of HDFS, introduced in Hadoop 2.4.0 with <a href="https://issues.apache.org/jira/browse/HDFS-5776">HDFS-5776</a>.
 Normally, a single thread is spawned for each read request.
@@ -21720,10 +21775,10 @@ This could indicate that a given RegionServer is having trouble servicing reques
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.deleting"><a class="anchor" href="#perf.deleting"></a>102. Deleting from HBase</h2>
+<h2 id="perf.deleting"><a class="anchor" href="#perf.deleting"></a>103. Deleting from HBase</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="perf.deleting.queue"><a class="anchor" href="#perf.deleting.queue"></a>102.1. Using HBase Tables as Queues</h3>
+<h3 id="perf.deleting.queue"><a class="anchor" href="#perf.deleting.queue"></a>103.1. Using HBase Tables as Queues</h3>
 <div class="paragraph">
 <p>HBase tables are sometimes used as queues.
 In this case, special care must be taken to regularly perform major compactions on tables used in this manner.
@@ -21735,7 +21790,7 @@ Tombstones only get cleaned up with major compactions.</p>
 </div>
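 <div class="paragraph">
 <p>A sketch of requesting a major compaction through the API, assuming an <code>Admin</code> instance (the table name is a placeholder); the request itself is asynchronous:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">admin.majorCompact(TableName.valueOf("myQueueTable"));</code></pre>
 </div>
 </div>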
 </div>
 <div class="sect2">
-<h3 id="perf.deleting.rpc"><a class="anchor" href="#perf.deleting.rpc"></a>102.2. Delete RPC Behavior</h3>
+<h3 id="perf.deleting.rpc"><a class="anchor" href="#perf.deleting.rpc"></a>103.2. Delete RPC Behavior</h3>
 <div class="paragraph">
 <p>Be aware that <code>Table.delete(Delete)</code> doesn&#8217;t use the writeBuffer.
 It will execute a RegionServer RPC with each invocation.
@@ -21749,13 +21804,13 @@ For a large number of deletes, consider <code>Table.delete(List)</code>.</p>
 </div>
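 <div class="paragraph">
 <p>For example, a sketch batching deletes (<code>rowsToDelete</code> is a placeholder collection of row keys):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">List&lt;Delete&gt; deletes = new ArrayList&lt;&gt;();
 for (byte[] row : rowsToDelete) {
   deletes.add(new Delete(row));
 }
 table.delete(deletes); // one batched call instead of an RPC per Delete</code></pre>
 </div>
 </div>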
 </div>
 <div class="sect1">
-<h2 id="perf.hdfs"><a class="anchor" href="#perf.hdfs"></a>103. HDFS</h2>
+<h2 id="perf.hdfs"><a class="anchor" href="#perf.hdfs"></a>104. HDFS</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Because HBase runs on <a href="#arch.hdfs">HDFS</a> it is important to understand how it works and how it affects HBase.</p>
 </div>
 <div class="sect2">
-<h3 id="perf.hdfs.curr"><a class="anchor" href="#perf.hdfs.curr"></a>103.1. Current Issues With Low-Latency Reads</h3>
+<h3 id="perf.hdfs.curr"><a class="anchor" href="#perf.hdfs.curr"></a>104.1. Current Issues With Low-Latency Reads</h3>
 <div class="paragraph">
 <p>The original use-case for HDFS was batch processing.
 As such, low-latency reads were historically not a priority.
@@ -21764,7 +21819,7 @@ See the <a href="https://issues.apache.org/jira/browse/HDFS-1599">Umbrella Jira
 </div>
 </div>
 <div class="sect2">
-<h3 id="perf.hdfs.configs.localread"><a class="anchor" href="#perf.hdfs.configs.localread"></a>103.2. Leveraging local data</h3>
+<h3 id="perf.hdfs.configs.localread"><a class="anchor" href="#perf.hdfs.configs.localread"></a>104.2. Leveraging local data</h3>
 <div class="paragraph">
 <p>Since Hadoop 1.0.0 (also 0.22.1, 0.23.1, CDH3u3 and HDP 1.0) via <a href="https://issues.apache.org/jira/browse/HDFS-2246">HDFS-2246</a>, it is possible for the DFSClient to take a "short circuit" and read directly from the disk instead of going through the DataNode when the data is local.
 What this means for HBase is that the RegionServers can read directly off their machine&#8217;s disks instead of having to open a socket to talk to the DataNode, the former being generally much faster.
@@ -21831,7 +21886,7 @@ In HBase, if this value has not been set, we set it down from the default of 1M
 </div>
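 <div class="paragraph">
 <p>To recap in sketch form, short-circuit reads are typically switched on with properties along these lines in <em>hbase-site.xml</em> (matching settings are needed in <em>hdfs-site.xml</em> on the DataNodes); the socket path below is an illustrative assumption:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="xml">&lt;property&gt;
   &lt;name&gt;dfs.client.read.shortcircuit&lt;/name&gt;
   &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
 &lt;!-- Domain socket shared by the DataNode and the DFSClient --&gt;
 &lt;property&gt;
   &lt;name&gt;dfs.domain.socket.path&lt;/name&gt;
   &lt;value&gt;/var/lib/hadoop-hdfs/dn_socket&lt;/value&gt;
 &lt;/property&gt;</code></pre>
 </div>
 </div>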
 </div>
 <div class="sect2">
-<h3 id="perf.hdfs.comp"><a class="anchor" href="#perf.hdfs.comp"></a>103.3. Performance Comparisons of HBase vs. HDFS</h3>
+<h3 id="perf.hdfs.comp"><a class="anchor" href="#perf.hdfs.comp"></a>104.3. Performance Comparisons of HBase vs. HDFS</h3>
 <div class="paragraph">
 <p>A fairly common question on the dist-list is why HBase isn&#8217;t as performant as HDFS files in a batch context (e.g., as a MapReduce source or sink). The short answer is that HBase is doing a lot more than HDFS (e.g., reading the KeyValues, returning the most current row or specified timestamps, etc.), and as such HBase is 4-5 times slower than HDFS in this processing context.
 There is room for improvement and this gap will, over time, be reduced, but HDFS will always be faster in this use-case.</p>
@@ -21840,7 +21895,7 @@ There is room for improvement and this gap will, over time, be reduced, but HDFS
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.ec2"><a class="anchor" href="#perf.ec2"></a>104. Amazon EC2</h2>
+<h2 id="perf.ec2"><a class="anchor" href="#perf.ec2"></a>105. Amazon EC2</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Performance questions are common in Amazon EC2 environments because EC2 is a shared environment.
@@ -21853,7 +21908,7 @@ In terms of running tests on EC2, run them several times for the same reason (i.
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.hbase.mr.cluster"><a class="anchor" href="#perf.hbase.mr.cluster"></a>105. Collocating HBase and MapReduce</h2>
+<h2 id="perf.hbase.mr.cluster"><a class="anchor" href="#perf.hbase.mr.cluster"></a>106. Collocating HBase and MapReduce</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>It is often recommended to have different clusters for HBase and MapReduce.
@@ -21872,7 +21927,7 @@ In the worst case, if you really need to collocate both, set MR to use less Map
 </div>
 </div>
 <div class="sect1">
-<h2 id="perf.casestudy"><a class="anchor" href="#perf.casestudy"></a>106. Case Studies</h2>
+<h2 id="perf.casestudy"><a class="anchor" href="#perf.casestudy"></a>107. Case Studies</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>For Performance and Troubleshooting Case Studies, see <a href="#casestudies">Apache HBase Case Studies</a>.</p>
@@ -21881,7 +21936,7 @@ In the worst case, if you really need to collocate both, set MR to use less Map
 </div>
 <h1 id="trouble" class="sect0"><a class="anchor" href="#trouble"></a>Troubleshooting and Debugging Apache HBase</h1>
 <div class="sect1">
-<h2 id="trouble.general"><a class="anchor" href="#trouble.general"></a>107. General Guidelines</h2>
+<h2 id="trouble.general"><a class="anchor" href="#trouble.general"></a>108. General Guidelines</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Always start with the master log (TODO: Which lines?). Normally it&#8217;s just printing the same lines over and over again.
@@ -21902,7 +21957,7 @@ For more information on GC pauses, see the <a href="https://blog.cloudera.com/bl
 </div>
 </div>
 <div class="sect1">
-<h2 id="trouble.log"><a class="anchor" href="#trouble.log"></a>108. Logs</h2>
+<h2 id="trouble.log"><a class="anchor" href="#trouble.log"></a>109. Logs</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>The key process logs are as follows&#8230;&#8203; (replace &lt;user&gt; with the user that started the service, and &lt;hostname&gt; with the machine name)</p>
@@ -21929,13 +21984,13 @@ For more information on GC pauses, see the <a href="https://blog.cloudera.com/bl
 <p>ZooKeeper: <em>TODO</em></p>
 </div>
 <div class="sect2">
-<h3 id="trouble.log.locations"><a class="anchor" href="#trouble.log.locations"></a>108.1. Log Locations</h3>
+<h3 id="trouble.log.locations"><a class="anchor" href="#trouble.log.locations"></a>109.1. Log Locations</h3>
 <div class="paragraph">
 <p>For stand-alone deployments the logs are obviously going to be on a single machine; however, this is a development configuration only.
 Production deployments need to run on a cluster.</p>
 </div>
 <div class="sect3">
-<h4 id="trouble.log.locations.namenode"><a class="anchor" href="#trouble.log.locations.namenode"></a>108.1.1. NameNode</h4>
+<h4 id="trouble.log.locations.namenode"><a class="anchor" href="#trouble.log.locations.namenode"></a>109.1.1. NameNode</h4>
 <div class="paragraph">
 <p>The NameNode log is on the NameNode server.
 The HBase Master is typically run on the NameNode server, as well as ZooKeeper.</p>
@@ -21945,7 +22000,7 @@ The HBase Master is typically run on the NameNode server, and well as ZooKeeper.
 </div>
 </div>
 <div class="sect3">
-<h4 id="trouble.log.locations.datanode"><a class="anchor" href="#trouble.log.locations.datanode"></a>108.1.2. DataNode</h4>
+<h4 id="trouble.log.locations.datanode"><a class="anchor" href="#trouble.log.locations.datanode"></a>109.1.2. DataNode</h4>
 <div class="paragraph">
 <p>Each DataNode server will have a DataNode log for HDFS, as well as a RegionServer log for HBase.</p>
 </div>
@@ -21955,9 +22010,9 @@ The HBase Master is typically run on the NameNode server, and well as ZooKeeper.
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.log.levels"><a class="anchor" href="#trouble.log.levels"></a>108.2. Log Levels</h3>
+<h3 id="trouble.log.levels"><a class="anchor" href="#trouble.log.levels"></a>109.2. Log Levels</h3>
 <div class="sect3">
-<h4 id="rpc.logging"><a class="anchor" href="#rpc.logging"></a>108.2.1. Enabling RPC-level logging</h4>
+<h4 id="rpc.logging"><a class="anchor" href="#rpc.logging"></a>109.2.1. Enabling RPC-level logging</h4>
 <div class="paragraph">
 <p>Enabling the RPC-level logging on a RegionServer can often give insight into timings at the server.
 Once enabled, the amount of log spewed is voluminous.
@@ -21972,7 +22027,7 @@ Analyze.</p>
 </div>
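 <div class="paragraph">
 <p>One way to switch this on, assuming the stock <em>conf/log4j.properties</em> layout, is a line such as the following (the RegionServer web UI&#8217;s Log Level page can make the same change without a restart):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre># DEBUG-level logging for the RPC machinery; expect very verbose output
 log4j.logger.org.apache.hadoop.ipc=DEBUG</pre>
 </div>
 </div>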
 </div>
 <div class="sect2">
-<h3 id="trouble.log.gc"><a class="anchor" href="#trouble.log.gc"></a>108.3. JVM Garbage Collection Logs</h3>
+<h3 id="trouble.log.gc"><a class="anchor" href="#trouble.log.gc"></a>109.3. JVM Garbage Collection Logs</h3>
 <div class="paragraph">
 <p>HBase is memory intensive, and using the default GC you can see long pauses in all threads, including the <em>Juliet Pause</em>, aka "GC of Death". To help debug or confirm that this is happening, GC logging can be turned on in the Java virtual machine.</p>
 </div>
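 <div class="paragraph">
 <p>A minimal sketch for <em>hbase-env.sh</em> using Java 7/8 flags; the log path below is an illustrative assumption:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre># Append GC diagnostics to the server JVM options
 export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc-hbase.log"</pre>
 </div>
 </div>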
@@ -22099,17 +22154,17 @@ If your ParNew is very large after running HBase for a while, in one example a P
 </div>
 </div>
 <div class="sect1">
-<h2 id="trouble.resources"><a class="anchor" href="#trouble.resources"></a>109. Resources</h2>
+<h2 id="trouble.resources"><a class="anchor" href="#trouble.resources"></a>110. Resources</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="trouble.resources.searchhadoop"><a class="anchor" href="#trouble.resources.searchhadoop"></a>109.1. search-hadoop.com</h3>
+<h3 id="trouble.resources.searchhadoop"><a class="anchor" href="#trouble.resources.searchhadoop"></a>110.1. search-hadoop.com</h3>
 <div class="paragraph">
 <p><a href="http://search-hadoop.com">search-hadoop.com</a> indexes all the mailing lists and is great for historical searches.
 Search here first when you have an issue, as it&#8217;s more than likely someone has already had your problem.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.resources.lists"><a class="anchor" href="#trouble.resources.lists"></a>109.2. Mailing Lists</h3>
+<h3 id="trouble.resources.lists"><a class="anchor" href="#trouble.resources.lists"></a>110.2. Mailing Lists</h3>
 <div class="paragraph">
 <p>Ask a question on the <a href="http://hbase.apache.org/mail-lists.html">Apache HBase mailing lists</a>.
 The 'dev' mailing list is aimed at the community of developers actually building Apache HBase and for features currently under development, and 'user' is generally used for questions on released versions of Apache HBase.
@@ -22121,13 +22176,13 @@ A quality question that includes all context and exhibits evidence the author ha
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.resources.slack"><a class="anchor" href="#trouble.resources.slack"></a>109.3. Slack</h3>
+<h3 id="trouble.resources.slack"><a class="anchor" href="#trouble.resources.slack"></a>110.3. Slack</h3>
 <div class="paragraph">
 <p>See the <a href="http://apache-hbase.slack.com" class="bare">http://apache-hbase.slack.com</a> channel on Slack.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.resources.irc"><a class="anchor" href="#trouble.resources.irc"></a>109.4. IRC</h3>
+<h3 id="trouble.resources.irc"><a class="anchor" href="#trouble.resources.irc"></a>110.4. IRC</h3>
 <div class="paragraph">
 <p>(You will probably get a more prompt response on the Slack channel)</p>
 </div>
@@ -22136,7 +22191,7 @@ A quality question that includes all context and exhibits evidence the author ha
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.resources.jira"><a class="anchor" href="#trouble.resources.jira"></a>109.5. JIRA</h3>
+<h3 id="trouble.resources.jira"><a class="anchor" href="#trouble.resources.jira"></a>110.5. JIRA</h3>
 <div class="paragraph">
 <p><a href="https://issues.apache.org/jira/browse/HBASE">JIRA</a> is also really helpful when looking for Hadoop/HBase-specific issues.</p>
 </div>
@@ -22144,12 +22199,12 @@ A quality question that includes all context and exhibits evidence the author ha
 </div>
 </div>
 <div class="sect1">
-<h2 id="trouble.tools"><a class="anchor" href="#trouble.tools"></a>110. Tools</h2>
+<h2 id="trouble.tools"><a class="anchor" href="#trouble.tools"></a>111. Tools</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="trouble.tools.builtin"><a class="anchor" href="#trouble.tools.builtin"></a>110.1. Builtin Tools</h3>
+<h3 id="trouble.tools.builtin"><a class="anchor" href="#trouble.tools.builtin"></a>111.1. Builtin Tools</h3>
 <div class="sect3">
-<h4 id="trouble.tools.builtin.webmaster"><a class="anchor" href="#trouble.tools.builtin.webmaster"></a>110.1.1. Master Web Interface</h4>
+<h4 id="trouble.tools.builtin.webmaster"><a class="anchor" href="#trouble.tools.builtin.webmaster"></a>111.1.1. Master Web Interface</h4>
 <div class="paragraph">
 <p>The Master starts a web-interface on port 16010 by default.
 (Up to and including 0.98 this was port 60010)</p>
@@ -22159,7 +22214,7 @@ A quality question that includes all context and exhibits evidence the author ha
 </div>
 </div>
 <div class="sect3">
-<h4 id="trouble.tools.builtin.webregion"><a class="anchor" href="#trouble.tools.builtin.webregion"></a>110.1.2. RegionServer Web Interface</h4>
+<h4 id="trouble.tools.builtin.webregion"><a class="anchor" href="#trouble.tools.builtin.webregion"></a>111.1.2. RegionServer Web Interface</h4>
 <div class="paragraph">
 <p>RegionServers start a web-interface on port 16030 by default.
 (Up to and including 0.98 this was port 60030)</p>
@@ -22172,7 +22227,7 @@ A quality question that includes all context and exhibits evidence the author ha
 </div>
 </div>
 <div class="sect3">
-<h4 id="trouble.tools.builtin.zkcli"><a class="anchor" href="#trouble.tools.builtin.zkcli"></a>110.1.3. zkcli</h4>
+<h4 id="trouble.tools.builtin.zkcli"><a class="anchor" href="#trouble.tools.builtin.zkcli"></a>111.1.3. zkcli</h4>
 <div class="paragraph">
 <p><code>zkcli</code> is a very useful tool for investigating ZooKeeper-related issues.
 To invoke:</p>
@@ -22212,9 +22267,9 @@ To invoke:</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.tools.external"><a class="anchor" href="#trouble.tools.external"></a>110.2. External Tools</h3>
+<h3 id="trouble.tools.external"><a class="anchor" href="#trouble.tools.external"></a>111.2. External Tools</h3>
 <div class="sect3">
-<h4 id="trouble.tools.tail"><a class="anchor" href="#trouble.tools.tail"></a>110.2.1. tail</h4>
+<h4 id="trouble.tools.tail"><a class="anchor" href="#trouble.tools.tail"></a>111.2.1. tail</h4>
 <div class="paragraph">
 <p><code>tail</code> is the command line tool that lets you look at the end of a file.
 Add the <code>-f</code> option and it will refresh when new data is available.
@@ -22222,7 +22277,7 @@ It&#8217;s useful when you are wondering what&#8217;s happening, for example, wh
 </div>
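 <div class="paragraph">
 <p>For example (a sketch; substitute your deployment&#8217;s log directory, user, and hostname):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre>tail -f $HBASE_HOME/logs/hbase-&lt;user&gt;-regionserver-&lt;hostname&gt;.log</pre>
 </div>
 </div>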
 </div>
 <div class="sect3">
-<h4 id="trouble.tools.top"><a class="anchor" href="#trouble.tools.top"></a>110.2.2. top</h4>
+<h4 id="trouble.tools.top"><a class="anchor" href="#trouble.tools.top"></a>111.2.2. top</h4>
 <div class="paragraph">
 <p><code>top</code> is probably one of the most important tools when first trying to see what&#8217;s running on a machine and how the resources are consumed.
 Here&#8217;s an example from a production system:</p>
@@ -22258,7 +22313,7 @@ Typing <code>1</code> will give you the detail of how each CPU is used instead o
 </div>
 </div>
 <div class="sect3">
-<h4 id="trouble.tools.jps"><a class="anchor" href="#trouble.tools.jps"></a>110.2.3. jps</h4>
+<h4 id="trouble.tools.jps"><a class="anchor" href="#trouble.tools.jps"></a>111.2.3. jps</h4>
 <div class="paragraph">
 <p><code>jps</code> is shipped with every JDK and gives the java process ids for the current user (if root, then it gives the ids for all users). Example:</p>
 </div>
@@ -22320,7 +22375,7 @@ hadoop   17789  155 35.2 9067824 8604364 ?     S&amp;lt;l  Mar04 9855:48 /usr/ja
 </div>
 </div>
 <div class="sect3">
-<h4 id="trouble.tools.jstack"><a class="anchor" href="#trouble.tools.jstack"></a>110.2.4. jstack</h4>
+<h4 id="trouble.tools.jstack"><a class="anchor" href="#trouble.tools.jstack"></a>111.2.4. jstack</h4>
 <div class="paragraph">
 <p><code>jstack</code> is one of the most important tools when trying to figure out what a java process is doing apart from looking at the logs.
 It has to be used in conjunction with jps in order to give it a process id.
@@ -22478,7 +22533,7 @@ java.lang.Thread.State: WAITING (on object monitor)
 </div>
 </div>
 <div class="sect3">
-<h4 id="trouble.tools.opentsdb"><a class="anchor" href="#trouble.tools.opentsdb"></a>110.2.5. OpenTSDB</h4>
+<h4 id="trouble.tools.opentsdb"><a class="anchor" href="#trouble.tools.opentsdb"></a>111.2.5. OpenTSDB</h4>
 <div class="paragraph">
 <p><a href="http://opentsdb.net">OpenTSDB</a> is an excellent alternative to Ganglia as it uses Apache HBase to store all the time series and doesn&#8217;t have to downsample.
 Monitoring your own HBase cluster that hosts OpenTSDB is a good exercise.</p>
@@ -22493,7 +22548,7 @@ You can then go down at the machine level and get even more detailed metrics.</p
 </div>
 </div>
 <div class="sect3">
-<h4 id="trouble.tools.clustersshtop"><a class="anchor" href="#trouble.tools.clustersshtop"></a>110.2.6. clusterssh+top</h4>
+<h4 id="trouble.tools.clustersshtop"><a class="anchor" href="#trouble.tools.clustersshtop"></a>111.2.6. clusterssh+top</h4>
 <div class="paragraph">
 <p>clusterssh+top is like a poor man&#8217;s monitoring system, and it can be quite useful when you have only a few machines, as it&#8217;s very easy to set up.
 Starting clusterssh will give you one terminal per machine and another terminal in which whatever you type will be retyped in every window.
@@ -22505,13 +22560,13 @@ You can also tail all the logs at the same time, edit files, etc.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="trouble.client"><a class="anchor" href="#trouble.client"></a>111. Client</h2>
+<h2 id="trouble.client"><a class="anchor" href="#trouble.client"></a>112. Client</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>For more information on the HBase client, see <a href="#architecture.client">client</a>.</p>
 </div>
 <div class="sect2">
-<h3 id="_missed_scan_results_due_to_mismatch_of_code_hbase_client_scanner_max_result_size_code_between_client_and_server"><a class="anchor" href="#_missed_scan_results_due_to_mismatch_of_code_hbase_client_scanner_max_result_size_code_between_client_and_server"></a>111.1. Missed Scan Results Due To Mismatch Of <code>hbase.client.scanner.max.result.size</code> Between Client and Server</h3>
+<h3 id="_missed_scan_results_due_to_mismatch_of_code_hbase_client_scanner_max_result_size_code_between_client_and_server"><a class="anchor" href="#_missed_scan_results_due_to_mismatch_of_code_hbase_client_scanner_max_result_size_code_between_client_and_server"></a>112.1. Missed Scan Results Due To Mismatch Of <code>hbase.client.scanner.max.result.size</code> Between Client and Server</h3>
 <div class="paragraph">
 <p>If either the client or server version is lower than 0.98.11/1.0.0 and the server
 has a smaller value for <code>hbase.client.scanner.max.result.size</code> than the client, scan
@@ -22522,7 +22577,7 @@ using 0.98.11 servers with any other client version.</p>
 </div>
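 <div class="paragraph">
 <p>A sketch of pinning the value explicitly in <em>hbase-site.xml</em> on both client and server so the two sides agree; the 2 MB value below is illustrative:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="xml">&lt;property&gt;
   &lt;name&gt;hbase.client.scanner.max.result.size&lt;/name&gt;
   &lt;value&gt;2097152&lt;/value&gt;
 &lt;/property&gt;</code></pre>
 </div>
 </div>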
 </div>
 <div class="sect2">
-<h3 id="trouble.client.scantimeout"><a class="anchor" href="#trouble.client.scantimeout"></a>111.2. ScannerTimeoutException or UnknownScannerException</h3>
+<h3 id="trouble.client.scantimeout"><a class="anchor" href="#trouble.client.scantimeout"></a>112.2. ScannerTimeoutException or UnknownScannerException</h3>
 <div class="paragraph">
 <p>This is thrown if the time between RPC calls from the client to RegionServer exceeds the scan timeout.
 For example, if <code>Scan.setCaching</code> is set to 500, then there will be an RPC call to fetch the next batch of rows every 500 <code>.next()</code> calls on the ResultScanner because data is being transferred in blocks of 500 rows to the client.
@@ -22533,7 +22588,7 @@ Reducing the setCaching value may be an option, but setting this value too low m
 </div>
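 <div class="paragraph">
 <p>A minimal sketch of tuning both knobs from the client side; the table name, caching value, and timeout below are illustrative assumptions:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.Table;
 
 public class SlowScanExample {
   public static void main(String[] args) throws Exception {
     Configuration conf = HBaseConfiguration.create();
     // Time allowed between ResultScanner.next() RPCs before the lease expires.
     conf.setInt("hbase.client.scanner.timeout.period", 120000);
     try (Connection conn = ConnectionFactory.createConnection(conf);
          Table table = conn.getTable(TableName.valueOf("my_table"))) {
       Scan scan = new Scan();
       scan.setCaching(100); // fewer rows per RPC means less client work between RPCs
       try (ResultScanner scanner = table.getScanner(scan)) {
         for (Result r : scanner) {
           // process each row promptly so the scanner lease does not expire
         }
       }
     }
   }
 }</code></pre>
 </div>
 </div>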
 </div>
 <div class="sect2">
-<h3 id="_performance_differences_in_thrift_and_java_apis"><a class="anchor" href="#_performance_differences_in_thrift_and_java_apis"></a>111.3. Performance Differences in Thrift and Java APIs</h3>
+<h3 id="_performance_differences_in_thrift_and_java_apis"><a class="anchor" href="#_performance_differences_in_thrift_and_java_apis"></a>112.3. Performance Differences in Thrift and Java APIs</h3>
 <div class="paragraph">
 <p>Poor performance, or even <code>ScannerTimeoutExceptions</code>, can occur if <code>Scan.setCaching</code> is too high, as discussed in <a href="#trouble.client.scantimeout">ScannerTimeoutException or UnknownScannerException</a>.
 If the Thrift client uses the wrong caching settings for a given workload, performance can suffer compared to the Java API.
@@ -22545,7 +22600,7 @@ In one case, it was found that reducing the cache for Thrift scans from 1000 to
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.client.lease.exception"><a class="anchor" href="#trouble.client.lease.exception"></a>111.4. <code>LeaseException</code> when calling <code>Scanner.next</code></h3>
+<h3 id="trouble.client.lease.exception"><a class="anchor" href="#trouble.client.lease.exception"></a>112.4. <code>LeaseException</code> when calling <code>Scanner.next</code></h3>
 <div class="paragraph">
 <p>In some situations clients that fetch data from a RegionServer get a LeaseException instead of the usual <a href="#trouble.client.scantimeout">ScannerTimeoutException or UnknownScannerException</a>.
 Usually the source of the exception is <code>org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)</code> (line number may vary). It tends to happen in the context of a slow/freezing <code>RegionServer#next</code> call.
@@ -22554,7 +22609,7 @@ Harsh J investigated the issue as part of the mailing list thread <a href="http:
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.client.scarylogs"><a class="anchor" href="#trouble.client.scarylogs"></a>111.5. Shell or client application throws lots of scary exceptions during normal operation</h3>
+<h3 id="trouble.client.scarylogs"><a class="anchor" href="#trouble.client.scarylogs"></a>112.5. Shell or client application throws lots of scary exceptions during normal operation</h3>
 <div class="paragraph">
 <p>Since 0.20.0, the default log level for <code>org.apache.hadoop.hbase.*</code> is DEBUG.</p>
 </div>
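 <div class="paragraph">
 <p>If the DEBUG chatter is unwanted, a line along these lines in <em>conf/log4j.properties</em> quiets it (a sketch, assuming the stock layout):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre>log4j.logger.org.apache.hadoop.hbase=INFO</pre>
 </div>
 </div>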
@@ -22563,7 +22618,7 @@ Harsh J investigated the issue as part of the mailing list thread <a href="http:
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.client.longpauseswithcompression"><a class="anchor" href="#trouble.client.longpauseswithcompression"></a>111.6. Long Client Pauses With Compression</h3>
+<h3 id="trouble.client.longpauseswithcompression"><a class="anchor" href="#trouble.client.longpauseswithcompression"></a>112.6. Long Client Pauses With Compression</h3>
 <div class="paragraph">
 <p>This is a fairly frequent question on the Apache HBase dist-list.
 The scenario is that a client is typically inserting a lot of data into a relatively un-optimized HBase cluster.
@@ -22589,7 +22644,7 @@ Without compression the files are much bigger and don&#8217;t need as much compa
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.client.security.rpc.krb"><a class="anchor" href="#trouble.client.security.rpc.krb"></a>111.7. Secure Client Connect ([Caused by GSSException: No valid credentials provided&#8230;&#8203;])</h3>
+<h3 id="trouble.client.security.rpc.krb"><a class="anchor" href="#trouble.client.security.rpc.krb"></a>112.7. Secure Client Connect ([Caused by GSSException: No valid credentials provided&#8230;&#8203;])</h3>
 <div class="paragraph">
 <p>You may encounter the following error:</p>
 </div>
@@ -22612,7 +22667,7 @@ See JIRA <a href="https://issues.apache.org/jira/browse/HBASE-10379">HBASE-10379
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.client.zookeeper"><a class="anchor" href="#trouble.client.zookeeper"></a>111.8. ZooKeeper Client Connection Errors</h3>
+<h3 id="trouble.client.zookeeper"><a class="anchor" href="#trouble.client.zookeeper"></a>112.8. ZooKeeper Client Connection Errors</h3>
 <div class="paragraph">
 <p>Errors like this&#8230;&#8203;</p>
 </div>
@@ -22644,7 +22699,7 @@ See JIRA <a href="https://issues.apache.org/jira/browse/HBASE-10379">HBASE-10379
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.client.oome.directmemory.leak"><a class="anchor" href="#trouble.client.oome.directmemory.leak"></a>111.9. Client running out of memory though heap size seems to be stable (but the off-heap/direct heap keeps growing)</h3>
+<h3 id="trouble.client.oome.directmemory.leak"><a class="anchor" href="#trouble.client.oome.directmemory.leak"></a>112.9. Client running out of memory though heap size seems to be stable (but the off-heap/direct heap keeps growing)</h3>
 <div class="paragraph">
 <p>You are likely running into the issue that is described and worked through in the mail thread <a href="http://search-hadoop.com/m/ubhrX8KvcH/Suspected+memory+leak&amp;subj=Re+Suspected+memory+leak">HBase, mail # user - Suspected memory leak</a> and continued over in <a href="http://search-hadoop.com/m/p2Agc1Zy7Va/MaxDirectMemorySize+Was%253A+Suspected+memory+leak&amp;subj=Re+FeedbackRe+Suspected+memory+leak">HBase, mail # dev - FeedbackRe: Suspected memory leak</a>.
 A workaround is passing your client-side JVM a reasonable value for <code>-XX:MaxDirectMemorySize</code>.
@@ -22653,14 +22708,14 @@ You want to make this setting client-side only especially if you are running the
 </div>
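 <div class="paragraph">
 <p>For example (a sketch only; the 1g cap and the jar and class names are illustrative assumptions):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre>java -XX:MaxDirectMemorySize=1g -cp my-client.jar com.example.MyHBaseClient</pre>
 </div>
 </div>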
 </div>
 <div class="sect2">
-<h3 id="trouble.client.slowdown.admin"><a class="anchor" href="#trouble.client.slowdown.admin"></a>111.10. Client Slowdown When Calling Admin Methods (flush, compact, etc.)</h3>
+<h3 id="trouble.client.slowdown.admin"><a class="anchor" href="#trouble.client.slowdown.admin"></a>112.10. Client Slowdown When Calling Admin Methods (flush, compact, etc.)</h3>
 <div class="paragraph">
 <p>This is a client issue fixed by <a href="https://issues.apache.org/jira/browse/HBASE-5073">HBASE-5073</a> in 0.90.6.
 There was a ZooKeeper leak in the client and the client was getting pummeled by ZooKeeper events with each additional invocation of the admin API.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.client.security.rpc"><a class="anchor" href="#trouble.client.security.rpc"></a>111.11. Secure Client Cannot Connect ([Caused by GSSException: No valid credentials provided(Mechanism level: Failed to find any Kerberos tgt)])</h3>
+<h3 id="trouble.client.security.rpc"><a class="anchor" href="#trouble.client.security.rpc"></a>112.11. Secure Client Cannot Connect ([Caused by GSSException: No valid credentials provided(Mechanism level: Failed to find any Kerberos tgt)])</h3>
 <div class="paragraph">
 <p>There can be several causes that produce this symptom.</p>
 </div>
@@ -22691,10 +22746,10 @@ Uncompress and extract the downloaded file, and install the policy jars into <em
 </div>
 </div>
 <div class="sect1">
-<h2 id="trouble.mapreduce"><a class="anchor" href="#trouble.mapreduce"></a>112. MapReduce</h2>
+<h2 id="trouble.mapreduce"><a class="anchor" href="#trouble.mapreduce"></a>113. MapReduce</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="trouble.mapreduce.local"><a class="anchor" href="#trouble.mapreduce.local"></a>112.1. You Think You&#8217;re On The Cluster, But You&#8217;re Actually Local</h3>
+<h3 id="trouble.mapreduce.local"><a class="anchor" href="#trouble.mapreduce.local"></a>113.1. You Think You&#8217;re On The Cluster, But You&#8217;re Actually Local</h3>
 <div class="paragraph">
 <p>The following stacktrace happened using <code>ImportTsv</code>, but things like this can happen on any job with a misconfiguration.</p>
 </div>
@@ -22744,7 +22799,7 @@ For example (substitute VERSION with your HBase version):</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.hbasezerocopybytestring"><a class="anchor" href="#trouble.hbasezerocopybytestring"></a>112.2. Launching a job, you get java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString or class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString</h3>
+<h3 id="trouble.hbasezerocopybytestring"><a class="anchor" href="#trouble.hbasezerocopybytestring"></a>113.2. Launching a job, you get java.lang.IllegalAccessError: com/google/protobuf/HBaseZeroCopyByteString or class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString</h3>
 <div class="paragraph">
 <p>See <a href="https://issues.apache.org/jira/browse/HBASE-10304">HBASE-10304 Running an hbase job jar: IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString</a> and <a href="https://issues.apache.org/jira/browse/HBASE-11118">HBASE-11118 non environment variable solution for "IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString"</a>.
 The issue can also show up when trying to run spark jobs.
@@ -22754,13 +22809,13 @@ See <a href="https://issues.apache.org/jira/browse/HBASE-10877">HBASE-10877 HBas
 </div>
 </div>
 <div class="sect1">
-<h2 id="trouble.namenode"><a class="anchor" href="#trouble.namenode"></a>113. NameNode</h2>
+<h2 id="trouble.namenode"><a class="anchor" href="#trouble.namenode"></a>114. NameNode</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>For more information on the NameNode, see <a href="#arch.hdfs">HDFS</a>.</p>
 </div>
 <div class="sect2">
-<h3 id="trouble.namenode.disk"><a class="anchor" href="#trouble.namenode.disk"></a>113.1. HDFS Utilization of Tables and Regions</h3>
+<h3 id="trouble.namenode.disk"><a class="anchor" href="#trouble.namenode.disk"></a>114.1. HDFS Utilization of Tables and Regions</h3>
 <div class="paragraph">
 <p>To determine how much space HBase is using on HDFS, use the <code>hadoop</code> shell commands from the NameNode.
 For example&#8230;&#8203;</p>
@@ -22794,7 +22849,7 @@ For example&#8230;&#8203;</p>
 </div>
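 <div class="paragraph">
 <p>As a quick sketch, assuming the default <em>/hbase</em> root directory:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre>hadoop fs -du -s /hbase</pre>
 </div>
 </div>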
 </div>
 <div class="sect2">
-<h3 id="trouble.namenode.hbase.objects"><a class="anchor" href="#trouble.namenode.hbase.objects"></a>113.2. Browsing HDFS for HBase Objects</h3>
+<h3 id="trouble.namenode.hbase.objects"><a class="anchor" href="#trouble.namenode.hbase.objects"></a>114.2. Browsing HDFS for HBase Objects</h3>
 <div class="paragraph">
 <p>Sometimes it will be necessary to explore the HBase objects that exist on HDFS.
 These objects could include the WALs (Write Ahead Logs), tables, regions, StoreFiles, etc.
@@ -22828,7 +22883,7 @@ The NameNode web application will provide links to the all the DataNodes in the
 <p>See the <a href="http://hadoop.apache.org/common/docs/current/hdfs_user_guide.html">HDFS User Guide</a> for other non-shell diagnostic utilities like <code>fsck</code>.</p>
 </div>
 <div class="sect3">
-<h4 id="trouble.namenode.0size.hlogs"><a class="anchor" href="#trouble.namenode.0size.hlogs"></a>113.2.1. Zero size WALs with data in them</h4>
+<h4 id="trouble.namenode.0size.hlogs"><a class="anchor" href="#trouble.namenode.0size.hlogs"></a>114.2.1. Zero size WALs with data in them</h4>
 <div class="paragraph">
 <p>Problem: when getting a listing of all the files in a RegionServer&#8217;s <em>.logs</em> directory, one file has a size of 0 but it contains data.</p>
 </div>
@@ -22838,7 +22893,7 @@ A file that&#8217;s currently being written to will appear to have a size of 0 b
 </div>
 </div>
 <div class="sect3">
-<h4 id="trouble.namenode.uncompaction"><a class="anchor" href="#trouble.namenode.uncompaction"></a>113.2.2. Use Cases</h4>
+<h4 id="trouble.namenode.uncompaction"><a class="anchor" href="#trouble.namenode.uncompaction"></a>114.2.2. Use Cases</h4>
 <div class="paragraph">
 <p>A common use-case for querying HDFS for HBase objects is researching the degree of uncompaction of a table.
 If there are a large number of StoreFiles for each ColumnFamily, it could indicate the need for a major compaction.
@@ -22847,7 +22902,7 @@ Additionally, after a major compaction if the resulting StoreFile is "small" it
 </div>
 </div>
 <div class="sect2">
-<h3 id="_unexpected_filesystem_growth"><a class="anchor" href="#_unexpected_filesystem_growth"></a>113.3. Unexpected Filesystem Growth</h3>
+<h3 id="_unexpected_filesystem_growth"><a class="anchor" href="#_unexpected_filesystem_growth"></a>114.3. Unexpected Filesystem Growth</h3>
 <div class="paragraph">
 <p>If you see an unexpected spike in filesystem usage by HBase, two possible culprits
 are snapshots and WALs.</p>
@@ -22890,10 +22945,10 @@ remember that WALs are saved when replication is disabled, as long as there are
 </div>
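 <div class="paragraph">
 <p>As a sketch, existing snapshots can be audited through the Java <code>Admin</code> API (each listed snapshot pins the store files it references on HDFS):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">import org.apache.hadoop.hbase.HBaseConfiguration;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.ConnectionFactory;
 
 public class SnapshotAudit {
   public static void main(String[] args) throws Exception {
     try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
          Admin admin = conn.getAdmin()) {
       // Print snapshot descriptions for review; stale ones can be deleted in the shell.
       System.out.println(admin.listSnapshots());
     }
   }
 }</code></pre>
 </div>
 </div>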
 </div>
 <div class="sect1">
-<h2 id="trouble.network"><a class="anchor" href="#trouble.network"></a>114. Network</h2>
+<h2 id="trouble.network"><a class="anchor" href="#trouble.network"></a>115. Network</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="trouble.network.spikes"><a class="anchor" href="#trouble.network.spikes"></a>114.1. Network Spikes</h3>
+<h3 id="trouble.network.spikes"><a class="anchor" href="#trouble.network.spikes"></a>115.1. Network Spikes</h3>
 <div class="paragraph">
 <p>If you are seeing periodic network spikes, you might want to check the <code>compactionQueues</code> to see if major compactions are happening.</p>
 </div>
@@ -22902,14 +22957,14 @@ remember that WALs are saved when replication is disabled, as long as there are
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.network.loopback"><a class="anchor" href="#trouble.network.loopback"></a>114.2. Loopback IP</h3>
+<h3 id="trouble.network.loopback"><a class="anchor" href="#trouble.network.loopback"></a>115.2. Loopback IP</h3>
 <div class="paragraph">
 <p>HBase expects the loopback IP address to be 127.0.0.1.
 See the Getting Started section on <a href="#loopback.ip">Loopback IP - HBase 0.94.x and earlier</a>.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="trouble.network.ints"><a class="anchor" href="#trouble.network.ints"></a>114.3. Network Interfaces</h3>
+<h3 id="trouble.network.ints"><a class="anchor" href="#trouble.network.ints"></a>115.3. Network Interfaces</h3>
 <div class="paragraph">
 <p>Are all the network interfaces functioning correctly? Are you sure? See the Troubleshooting Case Study in <a href="#trouble.cas

<TRUNCATED>