Posted to commits@hbase.apache.org by gi...@apache.org on 2017/11/04 15:17:53 UTC

[19/21] hbase-site git commit: Published site at .

http://git-wip-us.apache.org/repos/asf/hbase-site/blob/edf5597c/book.html
----------------------------------------------------------------------
diff --git a/book.html b/book.html
index 3f6cff2..41332bd 100644
--- a/book.html
+++ b/book.html
@@ -44,241 +44,242 @@
 <li><a href="#upgrading">Upgrading</a>
 <ul class="sectlevel1">
 <li><a href="#hbase.versioning">11. HBase version number and compatibility</a></li>
-<li><a href="#_upgrade_paths">12. Upgrade Paths</a></li>
+<li><a href="#_rollback">12. Rollback</a></li>
+<li><a href="#_upgrade_paths">13. Upgrade Paths</a></li>
 </ul>
 </li>
 <li><a href="#shell">The Apache HBase Shell</a>
 <ul class="sectlevel1">
-<li><a href="#scripting">13. Scripting with Ruby</a></li>
-<li><a href="#_running_the_shell_in_non_interactive_mode">14. Running the Shell in Non-Interactive Mode</a></li>
-<li><a href="#hbase.shell.noninteractive">15. HBase Shell in OS Scripts</a></li>
-<li><a href="#_read_hbase_shell_commands_from_a_command_file">16. Read HBase Shell Commands from a Command File</a></li>
-<li><a href="#_passing_vm_options_to_the_shell">17. Passing VM Options to the Shell</a></li>
-<li><a href="#_shell_tricks">18. Shell Tricks</a></li>
+<li><a href="#scripting">14. Scripting with Ruby</a></li>
+<li><a href="#_running_the_shell_in_non_interactive_mode">15. Running the Shell in Non-Interactive Mode</a></li>
+<li><a href="#hbase.shell.noninteractive">16. HBase Shell in OS Scripts</a></li>
+<li><a href="#_read_hbase_shell_commands_from_a_command_file">17. Read HBase Shell Commands from a Command File</a></li>
+<li><a href="#_passing_vm_options_to_the_shell">18. Passing VM Options to the Shell</a></li>
+<li><a href="#_shell_tricks">19. Shell Tricks</a></li>
 </ul>
 </li>
 <li><a href="#datamodel">Data Model</a>
 <ul class="sectlevel1">
-<li><a href="#conceptual.view">19. Conceptual View</a></li>
-<li><a href="#physical.view">20. Physical View</a></li>
-<li><a href="#_namespace">21. Namespace</a></li>
-<li><a href="#_table">22. Table</a></li>
-<li><a href="#_row">23. Row</a></li>
-<li><a href="#columnfamily">24. Column Family</a></li>
-<li><a href="#_cells">25. Cells</a></li>
-<li><a href="#_data_model_operations">26. Data Model Operations</a></li>
-<li><a href="#versions">27. Versions</a></li>
-<li><a href="#dm.sort">28. Sort Order</a></li>
-<li><a href="#dm.column.metadata">29. Column Metadata</a></li>
-<li><a href="#joins">30. Joins</a></li>
-<li><a href="#_acid">31. ACID</a></li>
+<li><a href="#conceptual.view">20. Conceptual View</a></li>
+<li><a href="#physical.view">21. Physical View</a></li>
+<li><a href="#_namespace">22. Namespace</a></li>
+<li><a href="#_table">23. Table</a></li>
+<li><a href="#_row">24. Row</a></li>
+<li><a href="#columnfamily">25. Column Family</a></li>
+<li><a href="#_cells">26. Cells</a></li>
+<li><a href="#_data_model_operations">27. Data Model Operations</a></li>
+<li><a href="#versions">28. Versions</a></li>
+<li><a href="#dm.sort">29. Sort Order</a></li>
+<li><a href="#dm.column.metadata">30. Column Metadata</a></li>
+<li><a href="#joins">31. Joins</a></li>
+<li><a href="#_acid">32. ACID</a></li>
 </ul>
 </li>
 <li><a href="#schema">HBase and Schema Design</a>
 <ul class="sectlevel1">
-<li><a href="#schema.creation">32. Schema Creation</a></li>
-<li><a href="#table_schema_rules_of_thumb">33. Table Schema Rules Of Thumb</a></li>
+<li><a href="#schema.creation">33. Schema Creation</a></li>
+<li><a href="#table_schema_rules_of_thumb">34. Table Schema Rules Of Thumb</a></li>
 </ul>
 </li>
 <li><a href="#regionserver_sizing_rules_of_thumb">RegionServer Sizing Rules of Thumb</a>
 <ul class="sectlevel1">
-<li><a href="#number.of.cfs">34. On the number of column families</a></li>
-<li><a href="#rowkey.design">35. Rowkey Design</a></li>
-<li><a href="#schema.versions">36. Number of Versions</a></li>
-<li><a href="#supported.datatypes">37. Supported Datatypes</a></li>
-<li><a href="#schema.joins">38. Joins</a></li>
-<li><a href="#ttl">39. Time To Live (TTL)</a></li>
-<li><a href="#cf.keep.deleted">40. Keeping Deleted Cells</a></li>
-<li><a href="#secondary.indexes">41. Secondary Indexes and Alternate Query Paths</a></li>
-<li><a href="#_constraints">42. Constraints</a></li>
-<li><a href="#schema.casestudies">43. Schema Design Case Studies</a></li>
-<li><a href="#schema.ops">44. Operational and Performance Configuration Options</a></li>
-<li><a href="#_special_cases">45. Special Cases</a></li>
+<li><a href="#number.of.cfs">35. On the number of column families</a></li>
+<li><a href="#rowkey.design">36. Rowkey Design</a></li>
+<li><a href="#schema.versions">37. Number of Versions</a></li>
+<li><a href="#supported.datatypes">38. Supported Datatypes</a></li>
+<li><a href="#schema.joins">39. Joins</a></li>
+<li><a href="#ttl">40. Time To Live (TTL)</a></li>
+<li><a href="#cf.keep.deleted">41. Keeping Deleted Cells</a></li>
+<li><a href="#secondary.indexes">42. Secondary Indexes and Alternate Query Paths</a></li>
+<li><a href="#_constraints">43. Constraints</a></li>
+<li><a href="#schema.casestudies">44. Schema Design Case Studies</a></li>
+<li><a href="#schema.ops">45. Operational and Performance Configuration Options</a></li>
+<li><a href="#_special_cases">46. Special Cases</a></li>
 </ul>
 </li>
 <li><a href="#mapreduce">HBase and MapReduce</a>
 <ul class="sectlevel1">
-<li><a href="#hbase.mapreduce.classpath">46. HBase, MapReduce, and the CLASSPATH</a></li>
-<li><a href="#_mapreduce_scan_caching">47. MapReduce Scan Caching</a></li>
-<li><a href="#_bundled_hbase_mapreduce_jobs">48. Bundled HBase MapReduce Jobs</a></li>
-<li><a href="#_hbase_as_a_mapreduce_job_data_source_and_data_sink">49. HBase as a MapReduce Job Data Source and Data Sink</a></li>
-<li><a href="#_writing_hfiles_directly_during_bulk_import">50. Writing HFiles Directly During Bulk Import</a></li>
-<li><a href="#_rowcounter_example">51. RowCounter Example</a></li>
-<li><a href="#splitter">52. Map-Task Splitting</a></li>
-<li><a href="#mapreduce.example">53. HBase MapReduce Examples</a></li>
-<li><a href="#mapreduce.htable.access">54. Accessing Other HBase Tables in a MapReduce Job</a></li>
-<li><a href="#mapreduce.specex">55. Speculative Execution</a></li>
-<li><a href="#cascading">56. Cascading</a></li>
+<li><a href="#hbase.mapreduce.classpath">47. HBase, MapReduce, and the CLASSPATH</a></li>
+<li><a href="#_mapreduce_scan_caching">48. MapReduce Scan Caching</a></li>
+<li><a href="#_bundled_hbase_mapreduce_jobs">49. Bundled HBase MapReduce Jobs</a></li>
+<li><a href="#_hbase_as_a_mapreduce_job_data_source_and_data_sink">50. HBase as a MapReduce Job Data Source and Data Sink</a></li>
+<li><a href="#_writing_hfiles_directly_during_bulk_import">51. Writing HFiles Directly During Bulk Import</a></li>
+<li><a href="#_rowcounter_example">52. RowCounter Example</a></li>
+<li><a href="#splitter">53. Map-Task Splitting</a></li>
+<li><a href="#mapreduce.example">54. HBase MapReduce Examples</a></li>
+<li><a href="#mapreduce.htable.access">55. Accessing Other HBase Tables in a MapReduce Job</a></li>
+<li><a href="#mapreduce.specex">56. Speculative Execution</a></li>
+<li><a href="#cascading">57. Cascading</a></li>
 </ul>
 </li>
 <li><a href="#security">Securing Apache HBase</a>
 <ul class="sectlevel1">
-<li><a href="#_using_secure_http_https_for_the_web_ui">57. Using Secure HTTP (HTTPS) for the Web UI</a></li>
-<li><a href="#hbase.secure.spnego.ui">58. Using SPNEGO for Kerberos authentication with Web UIs</a></li>
-<li><a href="#hbase.secure.configuration">59. Secure Client Access to Apache HBase</a></li>
-<li><a href="#hbase.secure.simpleconfiguration">60. Simple User Access to Apache HBase</a></li>
-<li><a href="#_securing_access_to_hdfs_and_zookeeper">61. Securing Access to HDFS and ZooKeeper</a></li>
-<li><a href="#_securing_access_to_your_data">62. Securing Access To Your Data</a></li>
-<li><a href="#security.example.config">63. Security Configuration Example</a></li>
+<li><a href="#_using_secure_http_https_for_the_web_ui">58. Using Secure HTTP (HTTPS) for the Web UI</a></li>
+<li><a href="#hbase.secure.spnego.ui">59. Using SPNEGO for Kerberos authentication with Web UIs</a></li>
+<li><a href="#hbase.secure.configuration">60. Secure Client Access to Apache HBase</a></li>
+<li><a href="#hbase.secure.simpleconfiguration">61. Simple User Access to Apache HBase</a></li>
+<li><a href="#_securing_access_to_hdfs_and_zookeeper">62. Securing Access to HDFS and ZooKeeper</a></li>
+<li><a href="#_securing_access_to_your_data">63. Securing Access To Your Data</a></li>
+<li><a href="#security.example.config">64. Security Configuration Example</a></li>
 </ul>
 </li>
 <li><a href="#_architecture">Architecture</a>
 <ul class="sectlevel1">
-<li><a href="#arch.overview">64. Overview</a></li>
-<li><a href="#arch.catalog">65. Catalog Tables</a></li>
-<li><a href="#architecture.client">66. Client</a></li>
-<li><a href="#client.filter">67. Client Request Filters</a></li>
-<li><a href="#architecture.master">68. Master</a></li>
-<li><a href="#regionserver.arch">69. RegionServer</a></li>
-<li><a href="#regions.arch">70. Regions</a></li>
-<li><a href="#arch.bulk.load">71. Bulk Loading</a></li>
-<li><a href="#arch.hdfs">72. HDFS</a></li>
-<li><a href="#arch.timelineconsistent.reads">73. Timeline-consistent High Available Reads</a></li>
-<li><a href="#hbase_mob">74. Storing Medium-sized Objects (MOB)</a></li>
+<li><a href="#arch.overview">65. Overview</a></li>
+<li><a href="#arch.catalog">66. Catalog Tables</a></li>
+<li><a href="#architecture.client">67. Client</a></li>
+<li><a href="#client.filter">68. Client Request Filters</a></li>
+<li><a href="#architecture.master">69. Master</a></li>
+<li><a href="#regionserver.arch">70. RegionServer</a></li>
+<li><a href="#regions.arch">71. Regions</a></li>
+<li><a href="#arch.bulk.load">72. Bulk Loading</a></li>
+<li><a href="#arch.hdfs">73. HDFS</a></li>
+<li><a href="#arch.timelineconsistent.reads">74. Timeline-consistent High Available Reads</a></li>
+<li><a href="#hbase_mob">75. Storing Medium-sized Objects (MOB)</a></li>
 </ul>
 </li>
 <li><a href="#hbase_apis">Apache HBase APIs</a>
 <ul class="sectlevel1">
-<li><a href="#_examples">75. Examples</a></li>
+<li><a href="#_examples">76. Examples</a></li>
 </ul>
 </li>
 <li><a href="#external_apis">Apache HBase External APIs</a>
 <ul class="sectlevel1">
-<li><a href="#_rest">76. REST</a></li>
-<li><a href="#_thrift">77. Thrift</a></li>
-<li><a href="#c">78. C/C++ Apache HBase Client</a></li>
-<li><a href="#jdo">79. Using Java Data Objects (JDO) with HBase</a></li>
-<li><a href="#scala">80. Scala</a></li>
-<li><a href="#jython">81. Jython</a></li>
+<li><a href="#_rest">77. REST</a></li>
+<li><a href="#_thrift">78. Thrift</a></li>
+<li><a href="#c">79. C/C++ Apache HBase Client</a></li>
+<li><a href="#jdo">80. Using Java Data Objects (JDO) with HBase</a></li>
+<li><a href="#scala">81. Scala</a></li>
+<li><a href="#jython">82. Jython</a></li>
 </ul>
 </li>
 <li><a href="#thrift">Thrift API and Filter Language</a>
 <ul class="sectlevel1">
-<li><a href="#thrift.filter_language">82. Filter Language</a></li>
+<li><a href="#thrift.filter_language">83. Filter Language</a></li>
 </ul>
 </li>
 <li><a href="#spark">HBase and Spark</a>
 <ul class="sectlevel1">
-<li><a href="#_basic_spark">83. Basic Spark</a></li>
-<li><a href="#_spark_streaming">84. Spark Streaming</a></li>
-<li><a href="#_bulk_load">85. Bulk Load</a></li>
-<li><a href="#_sparksql_dataframes">86. SparkSQL/DataFrames</a></li>
+<li><a href="#_basic_spark">84. Basic Spark</a></li>
+<li><a href="#_spark_streaming">85. Spark Streaming</a></li>
+<li><a href="#_bulk_load">86. Bulk Load</a></li>
+<li><a href="#_sparksql_dataframes">87. SparkSQL/DataFrames</a></li>
 </ul>
 </li>
 <li><a href="#cp">Apache HBase Coprocessors</a>
 <ul class="sectlevel1">
-<li><a href="#_coprocessor_overview">87. Coprocessor Overview</a></li>
-<li><a href="#_types_of_coprocessors">88. Types of Coprocessors</a></li>
-<li><a href="#cp_loading">89. Loading Coprocessors</a></li>
-<li><a href="#cp_example">90. Examples</a></li>
-<li><a href="#_guidelines_for_deploying_a_coprocessor">91. Guidelines For Deploying A Coprocessor</a></li>
-<li><a href="#_restricting_coprocessor_usage">92. Restricting Coprocessor Usage</a></li>
+<li><a href="#_coprocessor_overview">88. Coprocessor Overview</a></li>
+<li><a href="#_types_of_coprocessors">89. Types of Coprocessors</a></li>
+<li><a href="#cp_loading">90. Loading Coprocessors</a></li>
+<li><a href="#cp_example">91. Examples</a></li>
+<li><a href="#_guidelines_for_deploying_a_coprocessor">92. Guidelines For Deploying A Coprocessor</a></li>
+<li><a href="#_restricting_coprocessor_usage">93. Restricting Coprocessor Usage</a></li>
 </ul>
 </li>
 <li><a href="#performance">Apache HBase Performance Tuning</a>
 <ul class="sectlevel1">
-<li><a href="#perf.os">93. Operating System</a></li>
-<li><a href="#perf.network">94. Network</a></li>
-<li><a href="#jvm">95. Java</a></li>
-<li><a href="#perf.configurations">96. HBase Configurations</a></li>
-<li><a href="#perf.zookeeper">97. ZooKeeper</a></li>
-<li><a href="#perf.schema">98. Schema Design</a></li>
-<li><a href="#perf.general">99. HBase General Patterns</a></li>
-<li><a href="#perf.writing">100. Writing to HBase</a></li>
-<li><a href="#perf.reading">101. Reading from HBase</a></li>
-<li><a href="#perf.deleting">102. Deleting from HBase</a></li>
-<li><a href="#perf.hdfs">103. HDFS</a></li>
-<li><a href="#perf.ec2">104. Amazon EC2</a></li>
-<li><a href="#perf.hbase.mr.cluster">105. Collocating HBase and MapReduce</a></li>
-<li><a href="#perf.casestudy">106. Case Studies</a></li>
+<li><a href="#perf.os">94. Operating System</a></li>
+<li><a href="#perf.network">95. Network</a></li>
+<li><a href="#jvm">96. Java</a></li>
+<li><a href="#perf.configurations">97. HBase Configurations</a></li>
+<li><a href="#perf.zookeeper">98. ZooKeeper</a></li>
+<li><a href="#perf.schema">99. Schema Design</a></li>
+<li><a href="#perf.general">100. HBase General Patterns</a></li>
+<li><a href="#perf.writing">101. Writing to HBase</a></li>
+<li><a href="#perf.reading">102. Reading from HBase</a></li>
+<li><a href="#perf.deleting">103. Deleting from HBase</a></li>
+<li><a href="#perf.hdfs">104. HDFS</a></li>
+<li><a href="#perf.ec2">105. Amazon EC2</a></li>
+<li><a href="#perf.hbase.mr.cluster">106. Collocating HBase and MapReduce</a></li>
+<li><a href="#perf.casestudy">107. Case Studies</a></li>
 </ul>
 </li>
 <li><a href="#trouble">Troubleshooting and Debugging Apache HBase</a>
 <ul class="sectlevel1">
-<li><a href="#trouble.general">107. General Guidelines</a></li>
-<li><a href="#trouble.log">108. Logs</a></li>
-<li><a href="#trouble.resources">109. Resources</a></li>
-<li><a href="#trouble.tools">110. Tools</a></li>
-<li><a href="#trouble.client">111. Client</a></li>
-<li><a href="#trouble.mapreduce">112. MapReduce</a></li>
-<li><a href="#trouble.namenode">113. NameNode</a></li>
-<li><a href="#trouble.network">114. Network</a></li>
-<li><a href="#trouble.rs">115. RegionServer</a></li>
-<li><a href="#trouble.master">116. Master</a></li>
-<li><a href="#trouble.zookeeper">117. ZooKeeper</a></li>
-<li><a href="#trouble.ec2">118. Amazon EC2</a></li>
-<li><a href="#trouble.versions">119. HBase and Hadoop version issues</a></li>
-<li><a href="#_ipc_configuration_conflicts_with_hadoop">120. IPC Configuration Conflicts with Hadoop</a></li>
-<li><a href="#_hbase_and_hdfs">121. HBase and HDFS</a></li>
-<li><a href="#trouble.tests">122. Running unit or integration tests</a></li>
-<li><a href="#trouble.casestudy">123. Case Studies</a></li>
-<li><a href="#trouble.crypto">124. Cryptographic Features</a></li>
-<li><a href="#_operating_system_specific_issues">125. Operating System Specific Issues</a></li>
-<li><a href="#_jdk_issues">126. JDK Issues</a></li>
+<li><a href="#trouble.general">108. General Guidelines</a></li>
+<li><a href="#trouble.log">109. Logs</a></li>
+<li><a href="#trouble.resources">110. Resources</a></li>
+<li><a href="#trouble.tools">111. Tools</a></li>
+<li><a href="#trouble.client">112. Client</a></li>
+<li><a href="#trouble.mapreduce">113. MapReduce</a></li>
+<li><a href="#trouble.namenode">114. NameNode</a></li>
+<li><a href="#trouble.network">115. Network</a></li>
+<li><a href="#trouble.rs">116. RegionServer</a></li>
+<li><a href="#trouble.master">117. Master</a></li>
+<li><a href="#trouble.zookeeper">118. ZooKeeper</a></li>
+<li><a href="#trouble.ec2">119. Amazon EC2</a></li>
+<li><a href="#trouble.versions">120. HBase and Hadoop version issues</a></li>
+<li><a href="#_ipc_configuration_conflicts_with_hadoop">121. IPC Configuration Conflicts with Hadoop</a></li>
+<li><a href="#_hbase_and_hdfs">122. HBase and HDFS</a></li>
+<li><a href="#trouble.tests">123. Running unit or integration tests</a></li>
+<li><a href="#trouble.casestudy">124. Case Studies</a></li>
+<li><a href="#trouble.crypto">125. Cryptographic Features</a></li>
+<li><a href="#_operating_system_specific_issues">126. Operating System Specific Issues</a></li>
+<li><a href="#_jdk_issues">127. JDK Issues</a></li>
 </ul>
 </li>
 <li><a href="#casestudies">Apache HBase Case Studies</a>
 <ul class="sectlevel1">
-<li><a href="#casestudies.overview">127. Overview</a></li>
-<li><a href="#casestudies.schema">128. Schema Design</a></li>
-<li><a href="#casestudies.perftroub">129. Performance/Troubleshooting</a></li>
+<li><a href="#casestudies.overview">128. Overview</a></li>
+<li><a href="#casestudies.schema">129. Schema Design</a></li>
+<li><a href="#casestudies.perftroub">130. Performance/Troubleshooting</a></li>
 </ul>
 </li>
 <li><a href="#ops_mgt">Apache HBase Operational Management</a>
 <ul class="sectlevel1">
-<li><a href="#tools">130. HBase Tools and Utilities</a></li>
-<li><a href="#ops.regionmgt">131. Region Management</a></li>
-<li><a href="#node.management">132. Node Management</a></li>
-<li><a href="#hbase_metrics">133. HBase Metrics</a></li>
-<li><a href="#ops.monitoring">134. HBase Monitoring</a></li>
-<li><a href="#_cluster_replication">135. Cluster Replication</a></li>
-<li><a href="#_running_multiple_workloads_on_a_single_cluster">136. Running Multiple Workloads On a Single Cluster</a></li>
-<li><a href="#ops.backup">137. HBase Backup</a></li>
-<li><a href="#ops.snapshots">138. HBase Snapshots</a></li>
-<li><a href="#snapshots_azure">139. Storing Snapshots in Microsoft Azure Blob Storage</a></li>
-<li><a href="#ops.capacity">140. Capacity Planning and Region Sizing</a></li>
-<li><a href="#table.rename">141. Table Rename</a></li>
-<li><a href="#rsgroup">142. RegionServer Grouping</a></li>
+<li><a href="#tools">131. HBase Tools and Utilities</a></li>
+<li><a href="#ops.regionmgt">132. Region Management</a></li>
+<li><a href="#node.management">133. Node Management</a></li>
+<li><a href="#hbase_metrics">134. HBase Metrics</a></li>
+<li><a href="#ops.monitoring">135. HBase Monitoring</a></li>
+<li><a href="#_cluster_replication">136. Cluster Replication</a></li>
+<li><a href="#_running_multiple_workloads_on_a_single_cluster">137. Running Multiple Workloads On a Single Cluster</a></li>
+<li><a href="#ops.backup">138. HBase Backup</a></li>
+<li><a href="#ops.snapshots">139. HBase Snapshots</a></li>
+<li><a href="#snapshots_azure">140. Storing Snapshots in Microsoft Azure Blob Storage</a></li>
+<li><a href="#ops.capacity">141. Capacity Planning and Region Sizing</a></li>
+<li><a href="#table.rename">142. Table Rename</a></li>
+<li><a href="#rsgroup">143. RegionServer Grouping</a></li>
 </ul>
 </li>
 <li><a href="#developer">Building and Developing Apache HBase</a>
 <ul class="sectlevel1">
-<li><a href="#getting.involved">143. Getting Involved</a></li>
-<li><a href="#repos">144. Apache HBase Repositories</a></li>
-<li><a href="#_ides">145. IDEs</a></li>
-<li><a href="#build">146. Building Apache HBase</a></li>
-<li><a href="#releasing">147. Releasing Apache HBase</a></li>
-<li><a href="#hbase.rc.voting">148. Voting on Release Candidates</a></li>
-<li><a href="#documentation">149. Generating the HBase Reference Guide</a></li>
-<li><a href="#hbase.org">150. Updating <a href="http://hbase.apache.org">hbase.apache.org</a></a></li>
-<li><a href="#hbase.tests">151. Tests</a></li>
-<li><a href="#developing">152. Developer Guidelines</a></li>
+<li><a href="#getting.involved">144. Getting Involved</a></li>
+<li><a href="#repos">145. Apache HBase Repositories</a></li>
+<li><a href="#_ides">146. IDEs</a></li>
+<li><a href="#build">147. Building Apache HBase</a></li>
+<li><a href="#releasing">148. Releasing Apache HBase</a></li>
+<li><a href="#hbase.rc.voting">149. Voting on Release Candidates</a></li>
+<li><a href="#documentation">150. Generating the HBase Reference Guide</a></li>
+<li><a href="#hbase.org">151. Updating <a href="https://hbase.apache.org">hbase.apache.org</a></a></li>
+<li><a href="#hbase.tests">152. Tests</a></li>
+<li><a href="#developing">153. Developer Guidelines</a></li>
 </ul>
 </li>
 <li><a href="#unit.tests">Unit Testing HBase Applications</a>
 <ul class="sectlevel1">
-<li><a href="#_junit">153. JUnit</a></li>
-<li><a href="#mockito">154. Mockito</a></li>
-<li><a href="#_mrunit">155. MRUnit</a></li>
-<li><a href="#_integration_testing_with_an_hbase_mini_cluster">156. Integration Testing with an HBase Mini-Cluster</a></li>
+<li><a href="#_junit">154. JUnit</a></li>
+<li><a href="#mockito">155. Mockito</a></li>
+<li><a href="#_mrunit">156. MRUnit</a></li>
+<li><a href="#_integration_testing_with_an_hbase_mini_cluster">157. Integration Testing with an HBase Mini-Cluster</a></li>
 </ul>
 </li>
 <li><a href="#protobuf">Protobuf in HBase</a>
 <ul class="sectlevel1">
-<li><a href="#_protobuf">157. Protobuf</a></li>
+<li><a href="#_protobuf">158. Protobuf</a></li>
 </ul>
 </li>
 <li><a href="#zookeeper">ZooKeeper</a>
 <ul class="sectlevel1">
-<li><a href="#_using_existing_zookeeper_ensemble">158. Using existing ZooKeeper ensemble</a></li>
-<li><a href="#zk.sasl.auth">159. SASL Authentication with ZooKeeper</a></li>
+<li><a href="#_using_existing_zookeeper_ensemble">159. Using existing ZooKeeper ensemble</a></li>
+<li><a href="#zk.sasl.auth">160. SASL Authentication with ZooKeeper</a></li>
 </ul>
 </li>
 <li><a href="#community">Community</a>
 <ul class="sectlevel1">
-<li><a href="#_decisions">160. Decisions</a></li>
-<li><a href="#community.roles">161. Community Roles</a></li>
-<li><a href="#hbase.commit.msg.format">162. Commit Message format</a></li>
+<li><a href="#_decisions">161. Decisions</a></li>
+<li><a href="#community.roles">162. Community Roles</a></li>
+<li><a href="#hbase.commit.msg.format">163. Commit Message format</a></li>
 </ul>
 </li>
 <li><a href="#_appendix">Appendix</a>
@@ -288,7 +289,7 @@
 <li><a href="#hbck.in.depth">Appendix C: hbck In Depth</a></li>
 <li><a href="#appendix_acl_matrix">Appendix D: Access Control Matrix</a></li>
 <li><a href="#compression">Appendix E: Compression and Data Block Encoding In HBase</a></li>
-<li><a href="#data.block.encoding.enable">163. Enable Data Block Encoding</a></li>
+<li><a href="#data.block.encoding.enable">164. Enable Data Block Encoding</a></li>
 <li><a href="#sql">Appendix F: SQL over HBase</a></li>
 <li><a href="#ycsb">Appendix G: YCSB</a></li>
 <li><a href="#_hfile_format_2">Appendix H: HFile format</a></li>
@@ -297,8 +298,8 @@
 <li><a href="#asf">Appendix K: HBase and the Apache Software Foundation</a></li>
 <li><a href="#orca">Appendix L: Apache HBase Orca</a></li>
 <li><a href="#tracing">Appendix M: Enabling Dapper-like Tracing in HBase</a></li>
-<li><a href="#tracing.client.modifications">164. Client Modifications</a></li>
-<li><a href="#tracing.client.shell">165. Tracing from HBase Shell</a></li>
+<li><a href="#tracing.client.modifications">165. Client Modifications</a></li>
+<li><a href="#tracing.client.shell">166. Tracing from HBase Shell</a></li>
 <li><a href="#hbase.rpc">Appendix N: 0.95 RPC Specification</a></li>
 </ul>
 </li>
@@ -309,7 +310,7 @@
 <div id="preamble">
 <div class="sectionbody">
 <div>
-  <a href="http://hbase.apache.org"><img src="images/hbase_logo_with_orca.png" alt="Apache HBase Logo" /></a>
+  <a href="https://hbase.apache.org"><img src="images/hbase_logo_with_orca.png" alt="Apache HBase Logo" /></a>
 </div>
 </div>
 </div>
@@ -317,12 +318,12 @@
 <h2 id="_preface"><a class="anchor" href="#_preface"></a>Preface</h2>
 <div class="sectionbody">
 <div class="paragraph">
-<p>This is the official reference guide for the <a href="http://hbase.apache.org/">HBase</a> version it ships with.</p>
+<p>This is the official reference guide for the <a href="https://hbase.apache.org/">HBase</a> version it ships with.</p>
 </div>
 <div class="paragraph">
 <p>Herein you will find either the definitive documentation on an HBase topic as of its
 standing when the referenced HBase version shipped, or it will point to the location
-in <a href="http://hbase.apache.org/apidocs/index.html">Javadoc</a> or
+in <a href="https://hbase.apache.org/apidocs/index.html">Javadoc</a> or
 <a href="https://issues.apache.org/jira/browse/HBASE">JIRA</a> where the pertinent information can be found.</p>
 </div>
 <div class="paragraph">
@@ -489,7 +490,7 @@ See <a href="#java">Java</a> for information about supported JDK versions.</p>
 <div class="title">Procedure: Download, Configure, and Start HBase in Standalone Mode</div>
 <ol class="arabic">
 <li>
-<p>Choose a download site from this list of <a href="http://www.apache.org/dyn/closer.cgi/hbase/">Apache Download Mirrors</a>.
+<p>Choose a download site from this list of <a href="https://www.apache.org/dyn/closer.cgi/hbase/">Apache Download Mirrors</a>.
 Click on the suggested top link.
 This will take you to a mirror of <em>HBase Releases</em>.
 Click on the folder named <em>stable</em> and then download the binary file that ends in <em>.tar.gz</em> to your local filesystem.
@@ -798,7 +799,7 @@ You can skip the HDFS configuration to continue storing your data in the local f
 <p>This procedure assumes that you have configured Hadoop and HDFS on your local system and/or a remote
 system, and that they are running and available. It also assumes you are using Hadoop 2.
 The guide on
-<a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html">Setting up a Single Node Cluster</a>
+<a href="https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html">Setting up a Single Node Cluster</a>
 in the Hadoop documentation is a good starting point.</p>
 </div>
 </td>
@@ -1322,7 +1323,7 @@ A plain-text file which lists hosts on which the Master should start a backup Ma
 <dt class="hdlist1"><em>hadoop-metrics2-hbase.properties</em></dt>
 <dd>
 <p>Used to connect HBase to Hadoop&#8217;s Metrics2 framework.
-See the <a href="http://wiki.apache.org/hadoop/HADOOP-6728-MetricsV2">Hadoop Wiki entry</a> for more information on Metrics2.
+See the <a href="https://wiki.apache.org/hadoop/HADOOP-6728-MetricsV2">Hadoop Wiki entry</a> for more information on Metrics2.
 Contains only commented-out examples by default.</p>
 </dd>
 <dt class="hdlist1"><em>hbase-env.cmd</em> and <em>hbase-env.sh</em></dt>
@@ -1464,7 +1465,7 @@ You must set <code>JAVA_HOME</code> on each node of your cluster. <em>hbase-env.
 <dl>
 <dt class="hdlist1">ssh</dt>
 <dd>
-<p>HBase uses the Secure Shell (ssh) command and utilities extensively to communicate between cluster nodes. Each server in the cluster must be running <code>ssh</code> so that the Hadoop and HBase daemons can be managed. You must be able to connect to all nodes via SSH, including the local node, from the Master as well as any backup Master, using a shared key rather than a password. You can see the basic methodology for such a set-up in Linux or Unix systems at "<a href="#passwordless.ssh.quickstart">Procedure: Configure Passwordless SSH Access</a>". If your cluster nodes use OS X, see the section, <a href="http://wiki.apache.org/hadoop/Running_Hadoop_On_OS_X_10.5_64-bit_%28Single-Node_Cluster%29">SSH: Setting up Remote Desktop and Enabling Self-Login</a> on the Hadoop wiki.</p>
+<p>HBase uses the Secure Shell (ssh) command and utilities extensively to communicate between cluster nodes. Each server in the cluster must be running <code>ssh</code> so that the Hadoop and HBase daemons can be managed. You must be able to connect to all nodes via SSH, including the local node, from the Master as well as any backup Master, using a shared key rather than a password. You can see the basic methodology for such a set-up in Linux or Unix systems at "<a href="#passwordless.ssh.quickstart">Procedure: Configure Passwordless SSH Access</a>". If your cluster nodes use OS X, see the section, <a href="https://wiki.apache.org/hadoop/Running_Hadoop_On_OS_X_10.5_64-bit_%28Single-Node_Cluster%29">SSH: Setting up Remote Desktop and Enabling Self-Login</a> on the Hadoop wiki.</p>
 </dd>
 <dt class="hdlist1">DNS</dt>
 <dd>
@@ -1545,13 +1546,13 @@ Running production systems on Windows machines is not recommended.</p>
 </dl>
 </div>
 <div class="sect2">
-<h3 id="hadoop"><a class="anchor" href="#hadoop"></a>4.1. <a href="http://hadoop.apache.org">Hadoop</a></h3>
+<h3 id="hadoop"><a class="anchor" href="#hadoop"></a>4.1. <a href="https://hadoop.apache.org">Hadoop</a></h3>
 <div class="paragraph">
 <p>The following table summarizes the versions of Hadoop supported with each version of HBase.
 Based on the version of HBase, you should select the most appropriate version of Hadoop.
 You can use Apache Hadoop, or a vendor&#8217;s distribution of Hadoop.
 No distinction is made here.
-See <a href="http://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support">the Hadoop wiki</a> for information about vendors of Hadoop.</p>
+See <a href="https://wiki.apache.org/hadoop/Distributions%20and%20Commercial%20Support">the Hadoop wiki</a> for information about vendors of Hadoop.</p>
 </div>
 <div class="admonitionblock tip">
 <table>
@@ -1894,7 +1895,7 @@ The <em>pseudo-distributed</em> vs. <em>fully-distributed</em> nomenclature come
 </div>
 <div class="paragraph">
 <p>Pseudo-distributed mode can run against the local filesystem or it can run against an instance of the <em>Hadoop Distributed File System</em> (HDFS). Fully-distributed mode can ONLY run on HDFS.
-See the Hadoop <a href="http://hadoop.apache.org/docs/current/">documentation</a> for how to set up HDFS.
+See the Hadoop <a href="https://hadoop.apache.org/docs/current/">documentation</a> for how to set up HDFS.
 A good walk-through for setting up HDFS on Hadoop 2 can be found at <a href="http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide" class="bare">http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide</a>.</p>
 </div>
 <div class="sect3">
@@ -5186,7 +5187,7 @@ Usually this ensemble location is kept out in the <em>hbase-site.xml</em> and is
 <div class="sect3">
 <h4 id="java.client.config"><a class="anchor" href="#java.client.config"></a>7.5.1. Java client configuration</h4>
 <div class="paragraph">
-<p>The configuration used by a Java client is kept in an <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HBaseConfiguration">HBaseConfiguration</a> instance.</p>
+<p>The configuration used by a Java client is kept in an <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HBaseConfiguration">HBaseConfiguration</a> instance.</p>
 </div>
 <div class="paragraph">
 <p>The factory method on HBaseConfiguration, <code>HBaseConfiguration.create();</code>, on invocation, will read in the content of the first <em>hbase-site.xml</em> found on the client&#8217;s <code>CLASSPATH</code>, if one is present (Invocation will also factor in any <em>hbase-default.xml</em> found; an <em>hbase-default.xml</em> ships inside the <em>hbase.X.X.X.jar</em>). It is also possible to specify configuration directly without having to read from a <em>hbase-site.xml</em>.
@@ -5199,7 +5200,7 @@ config.set(<span class="string"><span class="delimiter">&quot;</span><span class
 </div>
 </div>
 <div class="paragraph">
-<p>If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in a comma-separated list (just as in the <em>hbase-site.xml</em> file). This populated <code>Configuration</code> instance can then be passed to an <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html">Table</a>, and so on.</p>
+<p>If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in a comma-separated list (just as in the <em>hbase-site.xml</em> file). This populated <code>Configuration</code> instance can then be passed to an <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html">Table</a>, and so on.</p>
 </div>
 </div>
 </div>
@@ -5483,7 +5484,7 @@ See the entry for <code>hbase.hregion.majorcompaction</code> in the <a href="#co
 <div class="paragraph">
 <p>Major compactions are absolutely necessary for StoreFile clean-up.
 Do not disable them altogether.
-You can run major compactions manually via the HBase shell or via the <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact-org.apache.hadoop.hbase.TableName-">Admin API</a>.</p>
+You can run major compactions manually via the HBase shell or via the <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html#majorCompact-org.apache.hadoop.hbase.TableName-">Admin API</a>.</p>
 </div>
 </td>
 </tr>
@@ -5766,7 +5767,7 @@ It may be possible to skip across versions&#8201;&#8212;&#8201;for example go fr
 </table>
 </div>
 <div class="paragraph">
-<p>Review <a href="#configuration">Apache HBase Configuration</a>, in particular <a href="#hadoop"><a href="http://hadoop.apache.org">Hadoop</a></a>. Familiarize yourself with <a href="#hbase_supported_tested_definitions">Support and Testing Expectations</a>.</p>
+<p>Review <a href="#configuration">Apache HBase Configuration</a>, in particular <a href="#hadoop"><a href="https://hadoop.apache.org">Hadoop</a></a>. Familiarize yourself with <a href="#hbase_supported_tested_definitions">Support and Testing Expectations</a>.</p>
 </div>
 </div>
 </div>
@@ -5843,7 +5844,7 @@ It may be possible to skip across versions&#8201;&#8212;&#8201;for example go fr
 <p>Support file formats backward and forward compatible</p>
 </li>
 <li>
-<p>Example: File, ZK encoding, directory layout is upgraded automatically as part of an HBase upgrade. User can rollback to the older version and everything will continue to work.</p>
+<p>Example: File, ZK encoding, directory layout is upgraded automatically as part of an HBase upgrade. User can downgrade to the older version and everything will continue to work.</p>
 </li>
 </ul>
 </div>
@@ -6030,12 +6031,12 @@ for warning about incompatible changes). All effort will be made to provide a de
 <div class="sect3">
 <h4 id="hbase.client.api.surface"><a class="anchor" href="#hbase.client.api.surface"></a>11.1.1. HBase API Surface</h4>
 <div class="paragraph">
-<p>HBase has a lot of API points, but for the compatibility matrix above, we differentiate between Client API, Limited Private API, and Private API. HBase uses <a href="http://yetus.apache.org/documentation/0.5.0/interface-classification/">Apache Yetus Audience Annotations</a> to guide downstream expectations for stability.</p>
+<p>HBase has a lot of API points, but for the compatibility matrix above, we differentiate between Client API, Limited Private API, and Private API. HBase uses <a href="https://yetus.apache.org/documentation/0.5.0/interface-classification/">Apache Yetus Audience Annotations</a> to guide downstream expectations for stability.</p>
 </div>
 <div class="ulist">
 <ul>
 <li>
-<p>InterfaceAudience (<a href="http://yetus.apache.org/documentation/0.5.0/audience-annotations-apidocs/org/apache/yetus/audience/InterfaceAudience.html">javadocs</a>): captures the intended audience, possible values include:</p>
+<p>InterfaceAudience (<a href="https://yetus.apache.org/documentation/0.5.0/audience-annotations-apidocs/org/apache/yetus/audience/InterfaceAudience.html">javadocs</a>): captures the intended audience, possible values include:</p>
 <div class="ulist">
 <ul>
 <li>
@@ -6052,7 +6053,7 @@ Classes which are defined as <code>IA.Private</code> may be used as parameters o
 </div>
 </li>
 <li>
-<p>InterfaceStability (<a href="http://yetus.apache.org/documentation/0.5.0/audience-annotations-apidocs/org/apache/yetus/audience/InterfaceStability.html">javadocs</a>): describes what types of interface changes are permitted. Possible values include:</p>
+<p>InterfaceStability (<a href="https://yetus.apache.org/documentation/0.5.0/audience-annotations-apidocs/org/apache/yetus/audience/InterfaceStability.html">javadocs</a>): describes what types of interface changes are permitted. Possible values include:</p>
 <div class="ulist">
 <ul>
 <li>
@@ -6121,7 +6122,7 @@ Classes which are defined as <code>IA.Private</code> may be used as parameters o
 </td>
 <td class="content">
 <div class="title">HBase Pre-1.0 versions are all EOM</div>
-For new installations, do not deploy 0.94.y, 0.96.y, or 0.98.y.  Deploy our stable version. See <a href="https://issues.apache.org/jira/browse/HBASE-11642">EOL 0.96</a>, <a href="https://issues.apache.org/jira/browse/HBASE-16215">clean up of EOM releases</a>, and <a href="http://www.apache.org/dist/hbase/">the header of our downloads</a>.
+For new installations, do not deploy 0.94.y, 0.96.y, or 0.98.y.  Deploy our stable version. See <a href="https://issues.apache.org/jira/browse/HBASE-11642">EOL 0.96</a>, <a href="https://issues.apache.org/jira/browse/HBASE-16215">clean up of EOM releases</a>, and <a href="https://www.apache.org/dist/hbase/">the header of our downloads</a>.
 </td>
 </tr>
 </table>
@@ -6168,15 +6169,232 @@ For new installations, do not deploy 0.94.y, 0.96.y, or 0.98.y.  Deploy our stab
 </div>
 </div>
 <div class="sect1">
-<h2 id="_upgrade_paths"><a class="anchor" href="#_upgrade_paths"></a>12. Upgrade Paths</h2>
+<h2 id="_rollback"><a class="anchor" href="#_rollback"></a>12. Rollback</h2>
+<div class="sectionbody">
+<div class="paragraph">
+<p>Sometimes things don&#8217;t go as planned when attempting an upgrade. This section explains how to perform a <em>rollback</em> to an earlier HBase release. Note that this should only be needed between Major and some Minor releases. You should always be able to <em>downgrade</em> between HBase Patch releases within the same Minor version. These instructions may require you to take steps before you start the upgrade process, so be sure to read through this section beforehand.</p>
+</div>
+<div class="sect2">
+<h3 id="_caveats"><a class="anchor" href="#_caveats"></a>12.1. Caveats</h3>
+<div class="paragraph">
+<div class="title">Rollback vs Downgrade</div>
+<p>This section describes how to perform a <em>rollback</em> on an upgrade between HBase minor and major versions. In this document, rollback refers to the process of taking an upgraded cluster and restoring it to the old version <em>while losing all changes that have occurred since upgrade</em>. By contrast, a cluster <em>downgrade</em> would restore an upgraded cluster to the old version while maintaining any data written since the upgrade. We currently only offer instructions to rollback HBase clusters. Further, rollback only works when these instructions are followed prior to performing the upgrade.</p>
+</div>
+<div class="paragraph">
+<p>When these instructions talk about rollback vs downgrade of prerequisite cluster services (e.g. HDFS), you should treat leaving the service version the same as a degenerate case of downgrade.</p>
+</div>
+<div class="paragraph">
+<div class="title">Replication</div>
+<p>Unless you are doing an all-service rollback, the HBase cluster will lose any configured peers for HBase replication. If your cluster is configured for HBase replication, then prior to following these instructions you should document all replication peers. After performing the rollback you should then add each documented peer back to the cluster. For more information on enabling HBase replication, listing peers, and adding a peer see <a href="#hbase.replication.management">Managing and Configuring Cluster Replication</a>. Note also that data written to the cluster since the upgrade may or may not have already been replicated to any peers. Determining which, if any, peers have seen replication data as well as rolling back the data in those peers is out of the scope of this guide.</p>
+</div>
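+<div class="paragraph">
+<p>As a rough sketch (the peer id and cluster key below are hypothetical, and the exact shell syntax varies across HBase versions), a session like the following records the configured peers before the upgrade and re-adds one after the rollback:</p>
+</div>
+<div class="listingblock">
+<div class="title">Documenting and re-adding replication peers</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">hbase&gt; list_peers
+hbase&gt; # after rollback, re-add each peer that list_peers reported
+hbase&gt; add_peer '1', "zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase"</code></pre>
+</div>
+</div>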
+<div class="paragraph">
+<div class="title">Data Locality</div>
+<p>Unless you are doing an all-service rollback, going through a rollback procedure will likely destroy all locality for Region Servers. You should expect degraded performance until after the cluster has had time to go through compactions to restore data locality. Optionally, you can force a compaction to speed this process up at the cost of generating cluster load.</p>
+</div>
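+<div class="paragraph">
+<p>If you opt to trade some cluster load for a quicker return of locality, a major compaction can be requested from the HBase shell; the table name below is a placeholder:</p>
+</div>
+<div class="listingblock">
+<div class="title">Forcing a major compaction to restore locality</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">hbase&gt; major_compact 'my_table'</code></pre>
+</div>
+</div>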
+<div class="paragraph">
+<div class="title">Configurable Locations</div>
+<p>The instructions below assume default locations for the HBase data directory and the HBase znode. Both of these locations are configurable and you should verify the value used in your cluster before proceeding. In the event that you have a different value, just replace the default with the one found in your configuration:
+* HBase data directory is configured via the key 'hbase.rootdir' and has a default value of '/hbase'.
+* HBase znode is configured via the key 'zookeeper.znode.parent' and has a default value of '/hbase'.</p>
+</div>
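+<div class="paragraph">
+<p>One way to confirm the values in use (a sketch that assumes the HBase configuration is present on the node where you run it) is the <code>HBaseConfTool</code> utility, which prints the resolved value of a configuration key:</p>
+</div>
+<div class="listingblock">
+<div class="title">Checking the configured locations</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">[hpnewton@gateway_node.example.com ~]$ hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.rootdir
+[hpnewton@gateway_node.example.com ~]$ hbase org.apache.hadoop.hbase.util.HBaseConfTool zookeeper.znode.parent</code></pre>
+</div>
+</div>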
+</div>
+<div class="sect2">
+<h3 id="_all_service_rollback"><a class="anchor" href="#_all_service_rollback"></a>12.2. All service rollback</h3>
+<div class="paragraph">
+<p>If you will be performing a rollback of both the HDFS and ZooKeeper services, then HBase&#8217;s data will be rolled back in the process.</p>
+</div>
+<div class="ulist">
+<div class="title">Requirements</div>
+<ul>
+<li>
+<p>Ability to rollback HDFS and ZooKeeper</p>
+</li>
+</ul>
+</div>
+<div class="paragraph">
+<div class="title">Before upgrade</div>
+<p>No additional steps are needed pre-upgrade. As an extra precautionary measure, you may wish to use distcp to back up the HBase data off of the cluster to be upgraded. To do so, follow the steps in the 'Before upgrade' section of 'Rollback after HDFS downgrade' but copy to another HDFS instance instead of within the same instance.</p>
+</div>
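+<div class="paragraph">
+<p>For illustration (the remote NameNode address below is a placeholder), copying to another HDFS instance instead of within the same instance looks like:</p>
+</div>
+<div class="listingblock">
+<div class="title">Backing up the HBase data directory to another cluster</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">[hpnewton@gateway_node.example.com ~]$ kinit -k -t hdfs.keytab hdfs@EXAMPLE.COM
+[hpnewton@gateway_node.example.com ~]$ hadoop distcp /hbase hdfs://backup-namenode.example.com:8020/hbase-pre-upgrade-backup</code></pre>
+</div>
+</div>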
+<div class="olist arabic">
+<div class="title">Performing a rollback</div>
+<ol class="arabic">
+<li>
+<p>Stop HBase</p>
+</li>
+<li>
+<p>Perform a rollback for HDFS and ZooKeeper (HBase should remain stopped)</p>
+</li>
+<li>
+<p>Change the installed version of HBase to the previous version</p>
+</li>
+<li>
+<p>Start HBase</p>
+</li>
+<li>
+<p>Verify HBase contents—use the HBase shell to list tables and scan some known values (a sample session follows this list).</p>
+</li>
+</ol>
+</div>
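+<div class="paragraph">
+<p>A minimal verification session might look like the following; the table name is illustrative and should be replaced with tables you know hold data:</p>
+</div>
+<div class="listingblock">
+<div class="title">Verifying HBase contents from the shell</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">hbase&gt; list
+hbase&gt; scan 'my_table', {LIMIT =&gt; 10}
+hbase&gt; count 'my_table'</code></pre>
+</div>
+</div>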
+</div>
+<div class="sect2">
+<h3 id="_rollback_after_hdfs_rollback_and_zookeeper_downgrade"><a class="anchor" href="#_rollback_after_hdfs_rollback_and_zookeeper_downgrade"></a>12.3. Rollback after HDFS rollback and ZooKeeper downgrade</h3>
+<div class="paragraph">
+<p>If you will be rolling back HDFS but going through a ZooKeeper downgrade, then HBase will be in an inconsistent state. You must ensure the cluster is not started until you complete this process.</p>
+</div>
+<div class="ulist">
+<div class="title">Requirements</div>
+<ul>
+<li>
+<p>Ability to rollback HDFS</p>
+</li>
+<li>
+<p>Ability to downgrade ZooKeeper</p>
+</li>
+</ul>
+</div>
+<div class="paragraph">
+<div class="title">Before upgrade</div>
+<p>No additional steps are needed pre-upgrade. As an extra precautionary measure, you may wish to use distcp to back up the HBase data off of the cluster to be upgraded. To do so, follow the steps in the 'Before upgrade' section of 'Rollback after HDFS downgrade' but copy to another HDFS instance instead of within the same instance.</p>
+</div>
+<div class="olist arabic">
+<div class="title">Performing a rollback</div>
+<ol class="arabic">
+<li>
+<p>Stop HBase</p>
+</li>
+<li>
+<p>Perform a rollback for HDFS and a downgrade for ZooKeeper (HBase should remain stopped)</p>
+</li>
+<li>
+<p>Change the installed version of HBase to the previous version</p>
+</li>
+<li>
+<p>Clean out ZooKeeper information related to HBase. WARNING: This step will permanently destroy all replication peers. Please see the section on HBase Replication under Caveats for more information.</p>
+<div class="listingblock">
+<div class="title">Clean HBase information out of ZooKeeper</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">[hpnewton@gateway_node.example.com ~]$ zookeeper-client -server zookeeper1.example.com:2181,zookeeper2.example.com:2181,zookeeper3.example.com:2181
+Welcome to ZooKeeper!
+JLine support is disabled
+rmr /hbase
+quit
+Quitting...</code></pre>
+</div>
+</div>
+</li>
+<li>
+<p>Start HBase</p>
+</li>
+<li>
+<p>Verify HBase contents—use the HBase shell to list tables and scan some known values.</p>
+</li>
+</ol>
+</div>
+</div>
+<div class="sect2">
+<h3 id="_rollback_after_hdfs_downgrade"><a class="anchor" href="#_rollback_after_hdfs_downgrade"></a>12.4. Rollback after HDFS downgrade</h3>
+<div class="paragraph">
+<p>If you will be performing an HDFS downgrade, then you&#8217;ll need to follow these instructions regardless of whether ZooKeeper goes through rollback, downgrade, or reinstallation.</p>
+</div>
+<div class="ulist">
+<div class="title">Requirements</div>
+<ul>
+<li>
+<p>Ability to downgrade HDFS</p>
+</li>
+<li>
+<p>Pre-upgrade cluster must be able to run MapReduce jobs</p>
+</li>
+<li>
+<p>HDFS super user access</p>
+</li>
+<li>
+<p>Sufficient space in HDFS for at least two copies of the HBase data directory</p>
+</li>
+</ul>
+</div>
+<div class="paragraph">
+<div class="title">Before upgrade</div>
+<p>Before beginning the upgrade process, you must take a complete backup of HBase&#8217;s backing data. The following instructions cover backing up the data within the current HDFS instance. Alternatively, you can use the distcp command to copy the data to another HDFS cluster.</p>
+</div>
+<div class="olist arabic">
+<ol class="arabic">
+<li>
+<p>Stop the HBase cluster</p>
+</li>
+<li>
+<p>Copy the HBase data directory to a backup location using the <a href="https://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html">distcp command</a> as the HDFS super user (shown below on a security enabled cluster)</p>
+<div class="listingblock">
+<div class="title">Using distcp to backup the HBase data directory</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">[hpnewton@gateway_node.example.com ~]$ kinit -k -t hdfs.keytab hdfs@EXAMPLE.COM
+[hpnewton@gateway_node.example.com ~]$ hadoop distcp /hbase /hbase-pre-upgrade-backup</code></pre>
+</div>
+</div>
+</li>
+<li>
+<p>Distcp will launch a MapReduce job to handle copying the files in a distributed fashion. Check the output of the distcp command to ensure this job completed successfully (a quick space-usage check follows this list).</p>
+</li>
+</ol>
+</div>
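+<div class="paragraph">
+<p>As a quick sanity check on the copy (a sketch, not a substitute for inspecting the distcp job counters), compare the space used by the original and the backup:</p>
+</div>
+<div class="listingblock">
+<div class="title">Comparing source and backup sizes</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">[hpnewton@gateway_node.example.com ~]$ hdfs dfs -du -s -h /hbase /hbase-pre-upgrade-backup</code></pre>
+</div>
+</div>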
+<div class="olist arabic">
+<div class="title">Performing a rollback</div>
+<ol class="arabic">
+<li>
+<p>Stop HBase</p>
+</li>
+<li>
+<p>Perform a downgrade for HDFS and a downgrade/rollback for ZooKeeper (HBase should remain stopped)</p>
+</li>
+<li>
+<p>Change the installed version of HBase to the previous version</p>
+</li>
+<li>
+<p>Restore the HBase data directory from prior to the upgrade as the HDFS super user (shown below on a security enabled cluster). If you backed up your data on another HDFS cluster instead of locally, you will need to use the distcp command to copy it back to the current HDFS cluster.</p>
+<div class="listingblock">
+<div class="title">Restore the HBase data directory</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">[hpnewton@gateway_node.example.com ~]$ kinit -k -t hdfs.keytab hdfs@EXAMPLE.COM
+[hpnewton@gateway_node.example.com ~]$ hdfs dfs -mv /hbase /hbase-upgrade-rollback
+[hpnewton@gateway_node.example.com ~]$ hdfs dfs -mv /hbase-pre-upgrade-backup /hbase</code></pre>
+</div>
+</div>
+</li>
+<li>
+<p>Clean out ZooKeeper information related to HBase. WARNING: This step will permanently destroy all replication peers. Please see the section on HBase Replication under Caveats for more information.</p>
+<div class="listingblock">
+<div class="title">Clean HBase information out of ZooKeeper</div>
+<div class="content">
+<pre class="CodeRay highlight"><code data-lang="bash">[hpnewton@gateway_node.example.com ~]$ zookeeper-client -server zookeeper1.example.com:2181,zookeeper2.example.com:2181,zookeeper3.example.com:2181
+Welcome to ZooKeeper!
+JLine support is disabled
+rmr /hbase
+quit
+Quitting...</code></pre>
+</div>
+</div>
+</li>
+<li>
+<p>Start HBase</p>
+</li>
+<li>
+<p>Verify HBase contents—use the HBase shell to list tables and scan some known values.</p>
+</li>
+</ol>
+</div>
+</div>
+</div>
+</div>
+<div class="sect1">
+<h2 id="_upgrade_paths"><a class="anchor" href="#_upgrade_paths"></a>13. Upgrade Paths</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="upgrade1.0"><a class="anchor" href="#upgrade1.0"></a>12.1. Upgrading from 0.98.x to 1.0.x</h3>
+<h3 id="upgrade1.0"><a class="anchor" href="#upgrade1.0"></a>13.1. Upgrading from 0.98.x to 1.0.x</h3>
 <div class="paragraph">
 <p>In this section we first note the significant changes that come in with 1.0.0 HBase and then we go over the upgrade process. Be sure to read the significant changes section with care so you avoid surprises.</p>
 </div>
 <div class="sect3">
-<h4 id="_changes_of_note"><a class="anchor" href="#_changes_of_note"></a>12.1.1. Changes of Note!</h4>
+<h4 id="_changes_of_note"><a class="anchor" href="#_changes_of_note"></a>13.1.1. Changes of Note!</h4>
 <div class="paragraph">
 <p>Here we list important changes that are in 1.0.0 since 0.98.x, changes you should be aware of because they will go into effect once you upgrade.</p>
 </div>
@@ -6218,7 +6436,7 @@ using 0.98.11 servers with any other client version.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="upgrade1.0.rolling.upgrade"><a class="anchor" href="#upgrade1.0.rolling.upgrade"></a>12.1.2. Rolling upgrade from 0.98.x to HBase 1.0.0</h4>
+<h4 id="upgrade1.0.rolling.upgrade"><a class="anchor" href="#upgrade1.0.rolling.upgrade"></a>13.1.2. Rolling upgrade from 0.98.x to HBase 1.0.0</h4>
 <div class="admonitionblock note">
 <table>
 <tr>
@@ -6237,7 +6455,7 @@ You cannot do a <a href="#hbase.rolling.upgrade">rolling upgrade</a> from 0.96.x
 </div>
 </div>
 <div class="sect3">
-<h4 id="upgrade1.0.scanner.caching"><a class="anchor" href="#upgrade1.0.scanner.caching"></a>12.1.3. Scanner Caching has Changed</h4>
+<h4 id="upgrade1.0.scanner.caching"><a class="anchor" href="#upgrade1.0.scanner.caching"></a>13.1.3. Scanner Caching has Changed</h4>
 <div class="paragraph">
 <div class="title">From 0.98.x to 1.x</div>
 <p>In hbase-1.x, the default Scan caching 'number of rows' changed.
@@ -6250,14 +6468,14 @@ for further discussion.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="upgrade1.0.from.0.94"><a class="anchor" href="#upgrade1.0.from.0.94"></a>12.1.4. Upgrading to 1.0 from 0.94</h4>
+<h4 id="upgrade1.0.from.0.94"><a class="anchor" href="#upgrade1.0.from.0.94"></a>13.1.4. Upgrading to 1.0 from 0.94</h4>
 <div class="paragraph">
 <p>You cannot perform a rolling upgrade from 0.94.x to 1.x.x. You must stop your cluster, install the 1.x.x software, run the migration described at <a href="#executing.the.0.96.upgrade">Executing the 0.96 Upgrade</a> (substituting 1.x.x wherever we make mention of 0.96.x in the section below), and then restart. Be sure to upgrade your ZooKeeper if it is a version earlier than the required 3.4.x.</p>
 </div>
 </div>
 </div>
 <div class="sect2">
-<h3 id="upgrade0.98"><a class="anchor" href="#upgrade0.98"></a>12.2. Upgrading from 0.96.x to 0.98.x</h3>
+<h3 id="upgrade0.98"><a class="anchor" href="#upgrade0.98"></a>13.2. Upgrading from 0.96.x to 0.98.x</h3>
 <div class="paragraph">
 <p>A rolling upgrade from 0.96.x to 0.98.x works. The two versions are not binary compatible.</p>
 </div>
@@ -6269,15 +6487,15 @@ for further discussion.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_upgrading_from_0_94_x_to_0_98_x"><a class="anchor" href="#_upgrading_from_0_94_x_to_0_98_x"></a>12.3. Upgrading from 0.94.x to 0.98.x</h3>
+<h3 id="_upgrading_from_0_94_x_to_0_98_x"><a class="anchor" href="#_upgrading_from_0_94_x_to_0_98_x"></a>13.3. Upgrading from 0.94.x to 0.98.x</h3>
 <div class="paragraph">
 <p>A rolling upgrade from 0.94.x directly to 0.98.x does not work. The upgrade path follows the same procedures as <a href="#upgrade0.96">Upgrading from 0.94.x to 0.96.x</a>. Additional steps are required to use some of the new features of 0.98.x. See <a href="#upgrade0.98">Upgrading from 0.96.x to 0.98.x</a> for an abbreviated list of these features.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="upgrade0.96"><a class="anchor" href="#upgrade0.96"></a>12.4. Upgrading from 0.94.x to 0.96.x</h3>
+<h3 id="upgrade0.96"><a class="anchor" href="#upgrade0.96"></a>13.4. Upgrading from 0.94.x to 0.96.x</h3>
 <div class="sect3">
-<h4 id="_the_singularity"><a class="anchor" href="#_the_singularity"></a>12.4.1. The "Singularity"</h4>
+<h4 id="_the_singularity"><a class="anchor" href="#_the_singularity"></a>13.4.1. The "Singularity"</h4>
 <div class="paragraph">
 <p>You will have to stop your old 0.94.x cluster completely to upgrade. If you are replicating between clusters, both clusters will have to go down to upgrade. Make sure it is a clean shutdown. The fewer WAL files around, the faster the upgrade will run (the upgrade will split any log files it finds in the filesystem as part of the upgrade process). All clients must be upgraded to 0.96 too.</p>
 </div>
@@ -6286,7 +6504,7 @@ for further discussion.</p>
 </div>
 </div>
 <div class="sect3">
-<h4 id="executing.the.0.96.upgrade"><a class="anchor" href="#executing.the.0.96.upgrade"></a>12.4.2. Executing the 0.96 Upgrade</h4>
+<h4 id="executing.the.0.96.upgrade"><a class="anchor" href="#executing.the.0.96.upgrade"></a>13.4.2. Executing the 0.96 Upgrade</h4>
 <div class="admonitionblock note">
 <table>
 <tr>
@@ -6451,7 +6669,7 @@ Successfully completed Log splitting</pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="s096.migration.troubleshooting"><a class="anchor" href="#s096.migration.troubleshooting"></a>12.5. Troubleshooting</h3>
+<h3 id="s096.migration.troubleshooting"><a class="anchor" href="#s096.migration.troubleshooting"></a>13.5. Troubleshooting</h3>
 <div id="s096.migration.troubleshooting.old.client" class="paragraph">
 <div class="title">Old Client connecting to 0.96 cluster</div>
 <p>It will fail with an exception like the below. Upgrade.</p>
@@ -6473,7 +6691,7 @@ Successfully completed Log splitting</pre>
 </div>
 </div>
 <div class="sect3">
-<h4 id="_upgrading_code_meta_code_to_use_protocol_buffers_protobuf"><a class="anchor" href="#_upgrading_code_meta_code_to_use_protocol_buffers_protobuf"></a>12.5.1. Upgrading <code>META</code> to use Protocol Buffers (Protobuf)</h4>
+<h4 id="_upgrading_code_meta_code_to_use_protocol_buffers_protobuf"><a class="anchor" href="#_upgrading_code_meta_code_to_use_protocol_buffers_protobuf"></a>13.5.1. Upgrading <code>META</code> to use Protocol Buffers (Protobuf)</h4>
 <div class="paragraph">
 <p>When you upgrade from versions prior to 0.96, <code>META</code> needs to be converted to use protocol buffers. This is controlled by the configuration option <code>hbase.MetaMigrationConvertingToPB</code>, which is set to <code>true</code> by default. Therefore, by default, no action is required on your part.</p>
 </div>
@@ -6483,15 +6701,15 @@ Successfully completed Log splitting</pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="upgrade0.94"><a class="anchor" href="#upgrade0.94"></a>12.6. Upgrading from 0.92.x to 0.94.x</h3>
+<h3 id="upgrade0.94"><a class="anchor" href="#upgrade0.94"></a>13.6. Upgrading from 0.92.x to 0.94.x</h3>
 <div class="paragraph">
 <p>We used to think that 0.92 and 0.94 were interface compatible and that you could do a rolling upgrade between these versions, but then we found that <a href="https://issues.apache.org/jira/browse/HBASE-5357">HBASE-5357 Use builder pattern in HColumnDescriptor</a> changed method signatures so that rather than return <code>void</code> they instead return <code>HColumnDescriptor</code>. This will throw <code>java.lang.NoSuchMethodError: org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V</code>, so 0.92 and 0.94 are NOT compatible. You cannot do a rolling upgrade between them.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="upgrade0.92"><a class="anchor" href="#upgrade0.92"></a>12.7. Upgrading from 0.90.x to 0.92.x</h3>
+<h3 id="upgrade0.92"><a class="anchor" href="#upgrade0.92"></a>13.7. Upgrading from 0.90.x to 0.92.x</h3>
 <div class="sect3">
-<h4 id="_upgrade_guide"><a class="anchor" href="#_upgrade_guide"></a>12.7.1. Upgrade Guide</h4>
+<h4 id="_upgrade_guide"><a class="anchor" href="#_upgrade_guide"></a>13.7.1. Upgrade Guide</h4>
 <div class="paragraph">
 <p>You will find that 0.92.0 runs a little differently from 0.90.x releases. Here are a few things to watch out for when upgrading from 0.90.x to 0.92.0.</p>
 </div>
@@ -6545,7 +6763,7 @@ Successfully completed Log splitting</pre>
 </div>
 <div class="paragraph">
 <div class="title">On the Hadoop version to use</div>
-<p>Run 0.92.0 on Hadoop 1.0.x (or CDH3u3). The performance benefits are worth making the move. Otherwise, our Hadoop prescription is as it has been; you need an Hadoop that supports a working sync. See <a href="#hadoop"><a href="http://hadoop.apache.org">Hadoop</a></a>.</p>
+<p>Run 0.92.0 on Hadoop 1.0.x (or CDH3u3). The performance benefits are worth making the move. Otherwise, our Hadoop prescription is as it has been; you need a Hadoop that supports a working sync. See <a href="#hadoop">Hadoop</a> (<a href="https://hadoop.apache.org">hadoop.apache.org</a>).</p>
 </div>
 <div class="paragraph">
 <p>If running on Hadoop 1.0.x (or CDH3u3), enable local read. See the <a href="http://files.meetup.com/1350427/hug_ebay_jdcryans.pdf">Practical Caching</a> presentation for ruminations on the performance benefits of ‘going local’ (and for how to enable local reads).</p>
@@ -6581,7 +6799,7 @@ Successfully completed Log splitting</pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="upgrade0.90"><a class="anchor" href="#upgrade0.90"></a>12.8. Upgrading to HBase 0.90.x from 0.20.x or 0.89.x</h3>
+<h3 id="upgrade0.90"><a class="anchor" href="#upgrade0.90"></a>13.8. Upgrading to HBase 0.90.x from 0.20.x or 0.89.x</h3>
 <div class="paragraph">
 <p>This version of 0.90.x HBase can be started on data written by HBase 0.20.x or HBase 0.89.x. There is no need for a migration step. HBase 0.89.x and 0.90.x do write out the names of region directories differently&#8201;&#8212;&#8201;they name them with an md5 hash of the region name rather than a Jenkins hash&#8201;&#8212;&#8201;which means that once started, there is no going back to HBase 0.20.x.</p>
 </div>
@@ -6631,7 +6849,7 @@ Browse at least the paragraphs at the end of the help output for the gist of how
 </div>
 </div>
 <div class="sect1">
-<h2 id="scripting"><a class="anchor" href="#scripting"></a>13. Scripting with Ruby</h2>
+<h2 id="scripting"><a class="anchor" href="#scripting"></a>14. Scripting with Ruby</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>For examples of scripting Apache HBase, look in the HBase <em>bin</em> directory.
@@ -6646,7 +6864,7 @@ To run one of these files, do as follows:</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_running_the_shell_in_non_interactive_mode"><a class="anchor" href="#_running_the_shell_in_non_interactive_mode"></a>14. Running the Shell in Non-Interactive Mode</h2>
+<h2 id="_running_the_shell_in_non_interactive_mode"><a class="anchor" href="#_running_the_shell_in_non_interactive_mode"></a>15. Running the Shell in Non-Interactive Mode</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>A new non-interactive mode has been added to the HBase Shell (<a href="https://issues.apache.org/jira/browse/HBASE-11658">HBASE-11658</a>).
@@ -6659,7 +6877,7 @@ If you use the normal interactive mode, the HBase Shell will only ever return it
 </div>
 </div>
 <div class="sect1">
-<h2 id="hbase.shell.noninteractive"><a class="anchor" href="#hbase.shell.noninteractive"></a>15. HBase Shell in OS Scripts</h2>
+<h2 id="hbase.shell.noninteractive"><a class="anchor" href="#hbase.shell.noninteractive"></a>16. HBase Shell in OS Scripts</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>You can use the HBase shell from within operating system script interpreters like the Bash shell, which is the default command interpreter for most Linux and UNIX distributions.
@@ -6743,7 +6961,7 @@ return $status</code></pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="_checking_for_success_or_failure_in_scripts"><a class="anchor" href="#_checking_for_success_or_failure_in_scripts"></a>15.1. Checking for Success or Failure In Scripts</h3>
+<h3 id="_checking_for_success_or_failure_in_scripts"><a class="anchor" href="#_checking_for_success_or_failure_in_scripts"></a>16.1. Checking for Success or Failure In Scripts</h3>
 <div class="paragraph">
 <p>Getting an exit code of <code>0</code> means that the command you scripted definitely succeeded.
 However, getting a non-zero exit code does not necessarily mean the command failed.
@@ -6756,7 +6974,7 @@ For instance, if your script creates a table, but returns a non-zero exit value,
 </div>
 </div>
 <div class="sect1">
-<h2 id="_read_hbase_shell_commands_from_a_command_file"><a class="anchor" href="#_read_hbase_shell_commands_from_a_command_file"></a>16. Read HBase Shell Commands from a Command File</h2>
+<h2 id="_read_hbase_shell_commands_from_a_command_file"><a class="anchor" href="#_read_hbase_shell_commands_from_a_command_file"></a>17. Read HBase Shell Commands from a Command File</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>You can enter HBase Shell commands into a text file, one command per line, and pass that file to the HBase Shell.</p>
@@ -6828,7 +7046,7 @@ COLUMN                CELL
 </div>
 </div>
 <div class="sect1">
-<h2 id="_passing_vm_options_to_the_shell"><a class="anchor" href="#_passing_vm_options_to_the_shell"></a>17. Passing VM Options to the Shell</h2>
+<h2 id="_passing_vm_options_to_the_shell"><a class="anchor" href="#_passing_vm_options_to_the_shell"></a>18. Passing VM Options to the Shell</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>You can pass VM options to the HBase Shell using the <code>HBASE_SHELL_OPTS</code> environment variable.
@@ -6845,10 +7063,10 @@ The command should be run all on a single line, but is broken by the <code>\</co
 </div>
 </div>
 <div class="sect1">
-<h2 id="_shell_tricks"><a class="anchor" href="#_shell_tricks"></a>18. Shell Tricks</h2>
+<h2 id="_shell_tricks"><a class="anchor" href="#_shell_tricks"></a>19. Shell Tricks</h2>
 <div class="sectionbody">
 <div class="sect2">
-<h3 id="_table_variables"><a class="anchor" href="#_table_variables"></a>18.1. Table variables</h3>
+<h3 id="_table_variables"><a class="anchor" href="#_table_variables"></a>19.1. Table variables</h3>
 <div class="paragraph">
 <p>HBase 0.95 adds shell commands that provide jruby-style object-oriented references for tables.
 Previously, all of the shell commands that act upon a table had a procedural style that always took the name of the table as an argument.
@@ -6959,7 +7177,7 @@ hbase(main):018:0&gt;</pre>
 </div>
 </div>
 <div class="sect2">
-<h3 id="__em_irbrc_em"><a class="anchor" href="#__em_irbrc_em"></a>18.2. <em>irbrc</em></h3>
+<h3 id="__em_irbrc_em"><a class="anchor" href="#__em_irbrc_em"></a>19.2. <em>irbrc</em></h3>
 <div class="paragraph">
 <p>Create an <em>.irbrc</em> file for yourself in your home directory.
 Add customizations.
@@ -6978,7 +7196,7 @@ IRB.conf[:HISTORY_FILE] = &quot;#{ENV['HOME']}/.irb-save-history&quot;</code></p
 </div>
 </div>
 <div class="sect2">
-<h3 id="_log_data_to_timestamp"><a class="anchor" href="#_log_data_to_timestamp"></a>18.3. LOG data to timestamp</h3>
+<h3 id="_log_data_to_timestamp"><a class="anchor" href="#_log_data_to_timestamp"></a>19.3. LOG data to timestamp</h3>
 <div class="paragraph">
 <p>To convert the date '08/08/16 20:56:29' from an HBase log into a timestamp, do:</p>
 </div>
@@ -7003,7 +7221,7 @@ hbase(main):022:0&gt; Date.new(1218920189000).toString() =&gt; "Sat Aug 16 20:56
 </div>
 </div>
 <div class="sect2">
-<h3 id="_query_shell_configuration"><a class="anchor" href="#_query_shell_configuration"></a>18.4. Query Shell Configuration</h3>
+<h3 id="_query_shell_configuration"><a class="anchor" href="#_query_shell_configuration"></a>19.4. Query Shell Configuration</h3>
 <div class="listingblock">
 <div class="content">
 <pre>hbase(main):001:0&gt; @shell.hbase.configuration.get("hbase.rpc.timeout")
@@ -7022,7 +7240,7 @@ hbase(main):006:0&gt; @shell.hbase.configuration.get("hbase.rpc.timeout")
 </div>
 </div>
 <div class="sect2">
-<h3 id="tricks.pre-split"><a class="anchor" href="#tricks.pre-split"></a>18.5. Pre-splitting tables with the HBase Shell</h3>
+<h3 id="tricks.pre-split"><a class="anchor" href="#tricks.pre-split"></a>19.5. Pre-splitting tables with the HBase Shell</h3>
 <div class="paragraph">
 <p>You can use a variety of options to pre-split tables when creating them via the HBase Shell <code>create</code> command.</p>
 </div>
@@ -7093,9 +7311,9 @@ If you need to truncate a pre-split table, you must drop and recreate the table
 </div>
 </div>
 <div class="sect2">
-<h3 id="_debug"><a class="anchor" href="#_debug"></a>18.6. Debug</h3>
+<h3 id="_debug"><a class="anchor" href="#_debug"></a>19.6. Debug</h3>
 <div class="sect3">
-<h4 id="_shell_debug_switch"><a class="anchor" href="#_shell_debug_switch"></a>18.6.1. Shell debug switch</h4>
+<h4 id="_shell_debug_switch"><a class="anchor" href="#_shell_debug_switch"></a>19.6.1. Shell debug switch</h4>
 <div class="paragraph">
 <p>You can set a debug switch in the shell to see more output&#8201;&#8212;&#8201;e.g.
 more of the stack trace on exception&#8201;&#8212;&#8201;when you run a command:</p>
@@ -7107,7 +7325,7 @@ more of the stack trace on exception&#8201;&#8212;&#8201;when you run a command:
 </div>
 </div>
 <div class="sect3">
-<h4 id="_debug_log_level"><a class="anchor" href="#_debug_log_level"></a>18.6.2. DEBUG log level</h4>
+<h4 id="_debug_log_level"><a class="anchor" href="#_debug_log_level"></a>19.6.2. DEBUG log level</h4>
 <div class="paragraph">
 <p>To enable DEBUG level logging in the shell, launch it with the <code>-d</code> option.</p>
 </div>
@@ -7119,9 +7337,9 @@ more of the stack trace on exception&#8201;&#8212;&#8201;when you run a command:
 </div>
 </div>
 <div class="sect2">
-<h3 id="_commands"><a class="anchor" href="#_commands"></a>18.7. Commands</h3>
+<h3 id="_commands"><a class="anchor" href="#_commands"></a>19.7. Commands</h3>
 <div class="sect3">
-<h4 id="_count"><a class="anchor" href="#_count"></a>18.7.1. count</h4>
+<h4 id="_count"><a class="anchor" href="#_count"></a>19.7.1. count</h4>
 <div class="paragraph">
 <p>The <code>count</code> command returns the number of rows in a table.
 It&#8217;s quite fast when configured with the right CACHE</p>
@@ -7194,7 +7412,7 @@ By default, the timestamp represents the time on the RegionServer when the data
 </div>
 </div>
 <div class="sect1">
-<h2 id="conceptual.view"><a class="anchor" href="#conceptual.view"></a>19. Conceptual View</h2>
+<h2 id="conceptual.view"><a class="anchor" href="#conceptual.view"></a>20. Conceptual View</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>You can read a very understandable explanation of the HBase data model in the blog post <a href="http://jimbojw.com/#understanding%20hbase">Understanding HBase and BigTable</a> by Jim R. Wilson.
@@ -7328,7 +7546,7 @@ This is only a mock-up for illustrative purposes and may not be strictly accurat
 </div>
 </div>
 <div class="sect1">
-<h2 id="physical.view"><a class="anchor" href="#physical.view"></a>20. Physical View</h2>
+<h2 id="physical.view"><a class="anchor" href="#physical.view"></a>21. Physical View</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Although at a conceptual level tables may be viewed as a sparse set of rows, they are physically stored by column family.
@@ -7407,7 +7625,7 @@ Thus a request for the values of all columns in the row <code>com.cnn.www</code>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_namespace"><a class="anchor" href="#_namespace"></a>21. Namespace</h2>
+<h2 id="_namespace"><a class="anchor" href="#_namespace"></a>22. Namespace</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>A namespace is a logical grouping of tables analogous to a database in relational database systems.
@@ -7427,7 +7645,7 @@ This abstraction lays the groundwork for upcoming multi-tenancy related features
 </ul>
 </div>
 <div class="sect2">
-<h3 id="namespace_creation"><a class="anchor" href="#namespace_creation"></a>21.1. Namespace management</h3>
+<h3 id="namespace_creation"><a class="anchor" href="#namespace_creation"></a>22.1. Namespace management</h3>
 <div class="paragraph">
 <p>A namespace can be created, removed or altered.
 Namespace membership is determined during table creation by specifying a fully-qualified table name of the form:</p>
@@ -7468,7 +7686,7 @@ alter_namespace 'my_ns', {METHOD =&gt; 'set', 'PROPERTY_NAME' =&gt; 'PROPERTY_VA
 </div>
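 <div class="paragraph">
 <p>The same management operations are available programmatically. The following is a minimal sketch using the Java <code>Admin</code> API (it assumes an open <code>Connection</code>; the namespace name mirrors the shell examples above):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Admin admin = connection.getAdmin();
 try {
   // create, alter, then drop the namespace
   admin.createNamespace(NamespaceDescriptor.create("my_ns").build());
   admin.modifyNamespace(NamespaceDescriptor.create("my_ns")
       .addConfiguration("PROPERTY_NAME", "PROPERTY_VALUE").build());
   admin.deleteNamespace("my_ns");
 } finally {
   admin.close();
 }</code></pre>
 </div>
 </div>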
 </div>
 <div class="sect2">
-<h3 id="namespace_special"><a class="anchor" href="#namespace_special"></a>21.2. Predefined namespaces</h3>
+<h3 id="namespace_special"><a class="anchor" href="#namespace_special"></a>22.2. Predefined namespaces</h3>
 <div class="paragraph">
 <p>There are two predefined special namespaces:</p>
 </div>
@@ -7500,7 +7718,7 @@ create 'bar', 'fam'</code></pre>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_table"><a class="anchor" href="#_table"></a>22. Table</h2>
+<h2 id="_table"><a class="anchor" href="#_table"></a>23. Table</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Tables are declared up front at schema definition time.</p>
@@ -7508,7 +7726,7 @@ create 'bar', 'fam'</code></pre>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_row"><a class="anchor" href="#_row"></a>23. Row</h2>
+<h2 id="_row"><a class="anchor" href="#_row"></a>24. Row</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Row keys are uninterpreted bytes.
@@ -7518,7 +7736,7 @@ The empty byte array is used to denote both the start and end of a tables' names
 </div>
 </div>
 <div class="sect1">
-<h2 id="columnfamily"><a class="anchor" href="#columnfamily"></a>24. Column Family</h2>
+<h2 id="columnfamily"><a class="anchor" href="#columnfamily"></a>25. Column Family</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Columns in Apache HBase are grouped into <em>column families</em>.
@@ -7536,7 +7754,7 @@ Because tunings and storage specifications are done at the column family level,
 </div>
 </div>
 <div class="sect1">
-<h2 id="_cells"><a class="anchor" href="#_cells"></a>25. Cells</h2>
+<h2 id="_cells"><a class="anchor" href="#_cells"></a>26. Cells</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>A <em>{row, column, version}</em> tuple exactly specifies a <code>cell</code> in HBase.
@@ -7545,29 +7763,29 @@ Cell content is uninterpreted bytes</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="_data_model_operations"><a class="anchor" href="#_data_model_operations"></a>26. Data Model Operations</h2>
+<h2 id="_data_model_operations"><a class="anchor" href="#_data_model_operations"></a>27. Data Model Operations</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>The four primary data model operations are Get, Put, Scan, and Delete.
-Operations are applied via <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html">Table</a> instances.</p>
+Operations are applied via <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html">Table</a> instances.</p>
 </div>
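 <div class="paragraph">
 <p>As a minimal sketch (assuming a reachable cluster, default configuration on the classpath, and a hypothetical table name), a <code>Table</code> is obtained from a <code>Connection</code> and closed when done:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Configuration conf = HBaseConfiguration.create();
 Connection connection = ConnectionFactory.createConnection(conf);
 Table table = connection.getTable(TableName.valueOf("myTable"));
 try {
   // Get, Put, Scan and Delete operations go here
 } finally {
   table.close();
   connection.close();
 }</code></pre>
 </div>
 </div>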
 <div class="sect2">
-<h3 id="_get"><a class="anchor" href="#_get"></a>26.1. Get</h3>
+<h3 id="_get"><a class="anchor" href="#_get"></a>27.1. Get</h3>
 <div class="paragraph">
-<p><a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html">Get</a> returns attributes for a specified row.
-Gets are executed via <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#get-org.apache.hadoop.hbase.client.Get-">Table.get</a></p>
+<p><a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html">Get</a> returns attributes for a specified row.
+Gets are executed via <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#get-org.apache.hadoop.hbase.client.Get-">Table.get</a></p>
 </div>
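 <div class="paragraph">
 <p>A minimal Get sketch follows; it assumes an open <code>Table</code>, and the row key, column family and qualifier are hypothetical:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Get get = new Get(Bytes.toBytes("row1"));
 Result r = table.get(get);
 // value of the newest cell at cf:q, or null if that cell does not exist
 byte[] value = r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));</code></pre>
 </div>
 </div>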
 </div>
 <div class="sect2">
-<h3 id="_put"><a class="anchor" href="#_put"></a>26.2. Put</h3>
+<h3 id="_put"><a class="anchor" href="#_put"></a>27.2. Put</h3>
 <div class="paragraph">
-<p><a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html">Put</a> either adds new rows to a table (if the key is new) or can update existing rows (if the key already exists). Puts are executed via <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#put-org.apache.hadoop.hbase.client.Put-">Table.put</a> (non-writeBuffer) or <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch-java.util.List-java.lang.Object:A-">Table.batch</a> (non-writeBuffer)</p>
+<p><a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Put.html">Put</a> either adds new rows to a table (if the key is new) or can update existing rows (if the key already exists). Puts are executed via <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#put-org.apache.hadoop.hbase.client.Put-">Table.put</a> (non-writeBuffer) or <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#batch-java.util.List-java.lang.Object:A-">Table.batch</a> (non-writeBuffer)</p>
 </div>
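 <div class="paragraph">
 <p>A minimal Put sketch (assuming an open <code>Table</code> and the 1.0+ client API; row key, column family, qualifier and value are hypothetical):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Put put = new Put(Bytes.toBytes("row1"));
 // write a value into column cf:q of row1; the server assigns the timestamp
 put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
 table.put(put);</code></pre>
 </div>
 </div>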
 </div>
 <div class="sect2">
-<h3 id="scan"><a class="anchor" href="#scan"></a>26.3. Scans</h3>
+<h3 id="scan"><a class="anchor" href="#scan"></a>27.3. Scans</h3>
 <div class="paragraph">
-<p><a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html">Scan</a> allow iteration over multiple rows for specified attributes.</p>
+<p><a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html">Scan</a> allow iteration over multiple rows for specified attributes.</p>
 </div>
 <div class="paragraph">
 <p>The following is an example of a Scan on a Table instance.
@@ -7595,14 +7813,14 @@ ResultScanner rs = table.getScanner(scan);
 </div>
 </div>
 <div class="paragraph">
-<p>Note that generally the easiest way to specify a specific stop point for a scan is by using the <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/InclusiveStopFilter.html">InclusiveStopFilter</a> class.</p>
+<p>Note that generally the easiest way to specify a specific stop point for a scan is by using the <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/InclusiveStopFilter.html">InclusiveStopFilter</a> class.</p>
 </div>
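 <div class="paragraph">
 <p>A sketch of such a bounded scan follows (row keys are hypothetical). Unlike a plain stop row, which is exclusive, the row named in the filter is itself included in the results:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Scan scan = new Scan();
 scan.setStartRow(Bytes.toBytes("row100"));
 // return rows up to and including "row199"
 scan.setFilter(new InclusiveStopFilter(Bytes.toBytes("row199")));
 ResultScanner rs = table.getScanner(scan);
 try {
   for (Result r = rs.next(); r != null; r = rs.next()) {
     // process result r
   }
 } finally {
   rs.close();  // always close the ResultScanner!
 }</code></pre>
 </div>
 </div>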
 </div>
 <div class="sect2">
-<h3 id="_delete"><a class="anchor" href="#_delete"></a>26.4. Delete</h3>
+<h3 id="_delete"><a class="anchor" href="#_delete"></a>27.4. Delete</h3>
 <div class="paragraph">
-<p><a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Delete.html">Delete</a> removes a row from a table.
-Deletes are executed via <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete-org.apache.hadoop.hbase.client.Delete-">Table.delete</a>.</p>
+<p><a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Delete.html">Delete</a> removes a row from a table.
+Deletes are executed via <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html#delete-org.apache.hadoop.hbase.client.Delete-">Table.delete</a>.</p>
 </div>
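 <div class="paragraph">
 <p>A minimal Delete sketch (assuming an open <code>Table</code>; the row key is hypothetical):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Delete delete = new Delete(Bytes.toBytes("row1"));
 // writes tombstone markers for the row rather than removing data in place
 table.delete(delete);</code></pre>
 </div>
 </div>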
 <div class="paragraph">
 <p>HBase does not modify data in place, and so deletes are handled by creating new markers called <em>tombstones</em>.
@@ -7615,7 +7833,7 @@ These tombstones, along with the dead values, are cleaned up on major compaction
 </div>
 </div>
 <div class="sect1">
-<h2 id="versions"><a class="anchor" href="#versions"></a>27. Versions</h2>
+<h2 id="versions"><a class="anchor" href="#versions"></a>28. Versions</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>A <em>{row, column, version}</em> tuple exactly specifies a <code>cell</code> in HBase.
@@ -7650,7 +7868,7 @@ As of this writing, the limitation <em>Overwriting values at existing timestamps
 This section is basically a synopsis of this article by Bruno Dumon.</p>
 </div>
 <div class="sect2">
-<h3 id="specify.number.of.versions"><a class="anchor" href="#specify.number.of.versions"></a>27.1. Specifying the Number of Versions to Store</h3>
+<h3 id="specify.number.of.versions"><a class="anchor" href="#specify.number.of.versions"></a>28.1. Specifying the Number of Versions to Store</h3>
 <div class="paragraph">
 <p>The maximum number of versions to store for a given column is part of the column schema and is specified at table creation or later via an <code>alter</code> command; the default is given by <code>HColumnDescriptor.DEFAULT_VERSIONS</code>.
 Prior to HBase 0.96, the default number of versions kept was <code>3</code>, but in 0.96 and newer has been changed to <code>1</code>.</p>
@@ -7660,7 +7878,7 @@ Prior to HBase 0.96, the default number of versions kept was <code>3</code>, but
 <div class="content">
 <div class="paragraph">
 <p>This example uses HBase Shell to keep a maximum of 5 versions of all columns in column family <code>f1</code>.
-You could also use <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html">HColumnDescriptor</a>.</p>
+You could also use <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html">HColumnDescriptor</a>.</p>
 </div>
 <div class="listingblock">
 <div class="content">
@@ -7676,7 +7894,7 @@ You could also use <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hb
 <p>You can also specify the minimum number of versions to store per column family.
 By default, this is set to 0, which means the feature is disabled.
 The following example sets the minimum number of versions on all columns in column family <code>f1</code> to <code>2</code>, via HBase Shell.
-You could also use <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html">HColumnDescriptor</a>.</p>
+You could also use <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html">HColumnDescriptor</a>.</p>
 </div>
 <div class="listingblock">
 <div class="content">
@@ -7691,15 +7909,15 @@ See <a href="#hbase.column.max.version">hbase.column.max.version</a>.</p>
 </div>
 </div>
 <div class="sect2">
-<h3 id="versions.ops"><a class="anchor" href="#versions.ops"></a>27.2. Versions and HBase Operations</h3>
+<h3 id="versions.ops"><a class="anchor" href="#versions.ops"></a>28.2. Versions and HBase Operations</h3>
 <div class="paragraph">
 <p>In this section we look at the behavior of the version dimension for each of the core HBase operations.</p>
 </div>
 <div class="sect3">
-<h4 id="_get_scan"><a class="anchor" href="#_get_scan"></a>27.2.1. Get/Scan</h4>
+<h4 id="_get_scan"><a class="anchor" href="#_get_scan"></a>28.2.1. Get/Scan</h4>
 <div class="paragraph">
 <p>Gets are implemented on top of Scans.
-The below discussion of <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html">Get</a> applies equally to <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html">Scans</a>.</p>
+The below discussion of <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html">Get</a> applies equally to <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Scan.html">Scans</a>.</p>
 </div>
 <div class="paragraph">
 <p>By default, i.e. if you specify no explicit version, when doing a <code>get</code>, the cell whose version has the largest value is returned (which may or may not be the latest one written, see later). The default behavior can be modified in the following ways:</p>
@@ -7707,10 +7925,10 @@ The below discussion of <a href="http://hbase.apache.org/apidocs/org/apache/hado
 <div class="ulist">
 <ul>
 <li>
-<p>to return more than one version, see <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#setMaxVersions--">Get.setMaxVersions()</a></p>
+<p>to return more than one version, see <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#setMaxVersions--">Get.setMaxVersions()</a></p>
 </li>
 <li>
-<p>to return versions other than the latest, see <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#setTimeRange-long-long-">Get.setTimeRange()</a></p>
+<p>to return versions other than the latest, see <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Get.html#setTimeRange-long-long-">Get.setTimeRange()</a></p>
 <div class="paragraph">
 <p>To retrieve the latest version that is less than or equal to a given value, thus giving the 'latest' state of the record at a certain point in time, just use a range from 0 to the desired version and set the max versions to 1 (see the sketch after this list).</p>
 </div>
@@ -7719,7 +7937,7 @@ The below discussion of <a href="http://hbase.apache.org/apidocs/org/apache/hado
 </div>
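 <div class="paragraph">
 <p>A sketch of that point-in-time read (it assumes an open <code>Table</code> and a timestamp of interest <code>ts</code>; the names are hypothetical):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Get get = new Get(Bytes.toBytes("row1"));
 get.setTimeRange(0, ts + 1);  // consider only versions at or before ts
 get.setMaxVersions(1);        // and return just the newest of those
 Result r = table.get(get);</code></pre>
 </div>
 </div>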
 </div>
 <div class="sect3">
-<h4 id="_default_get_example"><a class="anchor" href="#_default_get_example"></a>27.2.2. Default Get Example</h4>
+<h4 id="_default_get_example"><a class="anchor" href="#_default_get_example"></a>28.2.2. Default Get Example</h4>
 <div class="paragraph">
 <p>The following Get will only retrieve the current version of the row.</p>
 </div>
@@ -7735,7 +7953,7 @@ Get get = <span class="keyword">new</span> Get(Bytes.toBytes(<span class="string
 </div>
 </div>
 <div class="sect3">
-<h4 id="_versioned_get_example"><a class="anchor" href="#_versioned_get_example"></a>27.2.3. Versioned Get Example</h4>
+<h4 id="_versioned_get_example"><a class="anchor" href="#_versioned_get_example"></a>28.2.3. Versioned Get Example</h4>
 <div class="paragraph">
 <p>The following Get will return the last 3 versions of the row.</p>
 </div>
@@ -7753,7 +7971,7 @@ get.setMaxVersions(<span class="integer">3</span>);  <span class="comment">// wi
 </div>
 </div>
 <div class="sect3">
-<h4 id="_put_2"><a class="anchor" href="#_put_2"></a>27.2.4. Put</h4>
+<h4 id="_put_2"><a class="anchor" href="#_put_2"></a>28.2.4. Put</h4>
 <div class="paragraph">
 <p>Doing a put always creates a new version of a <code>cell</code>, at a certain timestamp.
 By default the system uses the server&#8217;s <code>currentTimeMillis</code>, but you can specify the version (= the long integer) yourself, on a per-column level.
@@ -7802,7 +8020,7 @@ Prefer using a separate timestamp attribute of the row, or have the timestamp as
 </div>
 </div>
 <div class="sect3">
-<h4 id="version.delete"><a class="anchor" href="#version.delete"></a>27.2.5. Delete</h4>
+<h4 id="version.delete"><a class="anchor" href="#version.delete"></a>28.2.5. Delete</h4>
 <div class="paragraph">
 <p>There are three different types of internal delete markers.
 See Lars Hofhansl&#8217;s blog for discussion of his attempt adding another, <a href="http://hadoop-hbase.blogspot.com/2012/01/scanning-in-hbase.html">Scanning in HBase: Prefix Delete Marker</a>.</p>
@@ -7861,9 +8079,9 @@ The change has been backported to HBase 0.94 and newer branches.
 </div>
 </div>
 <div class="sect2">
-<h3 id="_current_limitations"><a class="anchor" href="#_current_limitations"></a>27.3. Current Limitations</h3>
+<h3 id="_current_limitations"><a class="anchor" href="#_current_limitations"></a>28.3. Current Limitations</h3>
 <div class="sect3">
-<h4 id="_deletes_mask_puts"><a class="anchor" href="#_deletes_mask_puts"></a>27.3.1. Deletes mask Puts</h4>
+<h4 id="_deletes_mask_puts"><a class="anchor" href="#_deletes_mask_puts"></a>28.3.1. Deletes mask Puts</h4>
 <div class="paragraph">
 <p>Deletes mask puts, even puts that happened after the delete was entered.
 See <a href="https://issues.apache.org/jira/browse/HBASE-2256">HBASE-2256</a>.
@@ -7878,7 +8096,7 @@ But they can occur even if you do not care about time: just do delete and put im
 </div>
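 <div class="paragraph">
 <p>A sketch of how this can happen (assuming an open <code>Table</code> and the 1.0+ client API; all names are hypothetical):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">long ts = System.currentTimeMillis();
 Delete d = new Delete(Bytes.toBytes("row1"));
 d.addColumns(Bytes.toBytes("cf"), Bytes.toBytes("q"), ts);  // delete marker at ts
 table.delete(d);
 
 Put p = new Put(Bytes.toBytes("row1"));
 p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), ts, Bytes.toBytes("value"));
 table.put(p);  // issued after the delete, yet masked by the marker
                // until a major compaction removes the tombstone</code></pre>
 </div>
 </div>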
 </div>
 <div class="sect3">
-<h4 id="major.compactions.change.query.results"><a class="anchor" href="#major.compactions.change.query.results"></a>27.3.2. Major compactions change query results</h4>
+<h4 id="major.compactions.change.query.results"><a class="anchor" href="#major.compactions.change.query.results"></a>28.3.2. Major compactions change query results</h4>
 <div class="paragraph">
 <p><em>&#8230;&#8203;create three cell versions at t1, t2 and t3, with a maximum-versions
     setting of 2. So when getting all versions, only the values at t2 and t3 will be
@@ -7891,7 +8109,7 @@ But they can occur even if you do not care about time: just do delete and put im
 </div>
 </div>
 <div class="sect1">
-<h2 id="dm.sort"><a class="anchor" href="#dm.sort"></a>28. Sort Order</h2>
+<h2 id="dm.sort"><a class="anchor" href="#dm.sort"></a>29. Sort Order</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>All data model operations in HBase return data in sorted order.
 First by row, then by ColumnFamily, followed by column qualifier, and finally timestamp (sorted in reverse, so newest records are returned first).</p>
 </div>
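 <div class="paragraph">
 <p>As a hypothetical illustration, cells with the keys below would be returned in this order (note the reverse timestamp ordering within a column):</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre>row1/cf:a/ts=9
 row1/cf:a/ts=3   &lt;-- same row and column; the older timestamp sorts later
 row1/cf:b/ts=5
 row2/cf:a/ts=7</pre>
 </div>
 </div>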
 </div>
 <div class="sect1">
-<h2 id="dm.column.metadata"><a class="anchor" href="#dm.column.metadata"></a>29. Column Metadata</h2>
+<h2 id="dm.column.metadata"><a class="anchor" href="#dm.column.metadata"></a>30. Column Metadata</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>There is no store of column metadata outside of the internal KeyValue instances for a ColumnFamily.
@@ -7913,7 +8131,7 @@ For more information about how HBase stores data internally, see <a href="#keyva
 </div>
 </div>
 <div class="sect1">
-<h2 id="joins"><a class="anchor" href="#joins"></a>30. Joins</h2>
+<h2 id="joins"><a class="anchor" href="#joins"></a>31. Joins</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>Whether HBase supports joins is a common question on the dist-list, and there is a simple answer: it doesn&#8217;t, at least not in the way that RDBMSs support them (e.g., with equi-joins or outer-joins in SQL). As has been illustrated in this chapter, the read data model operations in HBase are Get and Scan.</p>
@@ -7926,7 +8144,7 @@ hash-joins). So which is the best approach? It depends on what you are trying to
 </div>
 </div>
 <div class="sect1">
-<h2 id="_acid"><a class="anchor" href="#_acid"></a>31. ACID</h2>
+<h2 id="_acid"><a class="anchor" href="#_acid"></a>32. ACID</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>See <a href="/acid-semantics.html">ACID Semantics</a>.
@@ -7959,10 +8177,10 @@ modeling on HBase.</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="schema.creation"><a class="anchor" href="#schema.creation"></a>32. Schema Creation</h2>
+<h2 id="schema.creation"><a class="anchor" href="#schema.creation"></a>33. Schema Creation</h2>
 <div class="sectionbody">
 <div class="paragraph">
-<p>HBase schemas can be created or updated using the <a href="#shell">The Apache HBase Shell</a> or by using <a href="http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html">Admin</a> in the Java API.</p>
+<p>HBase schemas can be created or updated using the <a href="#shell">The Apache HBase Shell</a> or by using <a href="https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Admin.html">Admin</a> in the Java API.</p>
 </div>
 <div class="paragraph">
 <p>Tables must be disabled when making ColumnFamily modifications, for example:</p>
@@ -7999,7 +8217,7 @@ online schema changes are supported in the 0.92.x codebase, but the 0.90.x codeb
 </table>
 </div>
 <div class="sect2">
-<h3 id="schema.updates"><a class="anchor" href="#schema.updates"></a>32.1. Schema Updates</h3>
+<h3 id="schema.updates"><a class="anchor" href="#schema.updates"></a>33.1. Schema Updates</h3>
 <div class="paragraph">
 <p>When changes are made to either Tables or ColumnFamilies (e.g. region size, block size), these changes take effect the next time there is a major compaction and the StoreFiles get re-written.</p>
 </div>
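 <div class="paragraph">
 <p>Rather than waiting, you can request a major compaction explicitly, for example via the Java <code>Admin</code> API. The following is a sketch; it assumes an open <code>Connection</code> and a hypothetical table name:</p>
 </div>
 <div class="listingblock">
 <div class="content">
 <pre class="CodeRay highlight"><code data-lang="java">Admin admin = connection.getAdmin();
 try {
   // asynchronous request; the StoreFile rewrite happens in the background
   admin.majorCompact(TableName.valueOf("myTable"));
 } finally {
   admin.close();
 }</code></pre>
 </div>
 </div>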
@@ -8010,7 +8228,7 @@ online schema changes are supported in the 0.92.x codebase, but the 0.90.x codeb
 </div>
 </div>
 <div class="sect1">
-<h2 id="table_schema_rules_of_thumb"><a class="anchor" href="#table_schema_rules_of_thumb"></a>33. Table Schema Rules Of Thumb</h2>
+<h2 id="table_schema_rules_of_thumb"><a class="anchor" href="#table_schema_rules_of_thumb"></a>34. Table Schema Rules Of Thumb</h2>
 <div class="sectionbody">
 <div class="paragraph">
 <p>There are many different data sets, with different access patterns and service-level
@@ -8083,7 +8301,7 @@ defaults).</p>
 </div>
 </div>
 <div class="sect1">
-<h2 id="number.of.cfs"><a class="anchor" href="#number.of.cfs"></a>34.

<TRUNCATED>