Posted to commits@hbase.apache.org by bu...@apache.org on 2019/02/14 22:50:51 UTC

[hbase] branch branch-1.2 updated: HBASE-21901 update ref guide for 1.2.11 release

This is an automated email from the ASF dual-hosted git repository.

busbey pushed a commit to branch branch-1.2
in repository https://gitbox.apache.org/repos/asf/hbase.git


The following commit(s) were added to refs/heads/branch-1.2 by this push:
     new f8e1a8d  HBASE-21901 update ref guide for 1.2.11 release
f8e1a8d is described below

commit f8e1a8dd2a43050eb35db9ee734cdf661bea2fb8
Author: Sean Busbey <bu...@apache.org>
AuthorDate: Thu Feb 14 10:46:33 2019 -0600

    HBASE-21901 update ref guide for 1.2.11 release
    
    * remove chapter on integration with Apache Spark since it's not in 1.2 and doesn't work with 1.2.
    * HBASE-21091 HBASE-21295 Update Hadoop and Java "supported" versions tables
    * HBASE-21685 Change repository urls to Gitbox
    * HBASE-21727 Simplify documentation around client timeout
    * HBASE-21737 Fix typos in "Appendix A: HFile format" section in the doc
    * HBASE-21741 Add a note in "HFile Tool" section regarding 'seqid=0'
    * HBASE-21790 Detail docs on ref guide for CompactionTool
    * HBASE-20389 Move website building flags into a profile.
    
    Co-Authored-By: Josh Elser <el...@apache.org>
    Co-Authored-By: Peter Somogyi <ps...@apache.org>
    Co-Authored-By: Sakthi <sa...@gmail.com>
    Co-Authored-By: Wellington Chevreuil <we...@gmail.com>
---
 .../java/org/apache/hadoop/hbase/HConstants.java   |   2 +-
 pom.xml                                            |  38 ++
 .../asciidoc/_chapters/appendix_hfile_format.adoc  |  32 +-
 src/main/asciidoc/_chapters/architecture.adoc      |   3 +
 src/main/asciidoc/_chapters/configuration.adoc     | 127 +++--
 src/main/asciidoc/_chapters/developer.adoc         |   2 +-
 src/main/asciidoc/_chapters/ops_mgt.adoc           |  78 ++-
 src/main/asciidoc/_chapters/spark.adoc             | 554 ---------------------
 src/main/asciidoc/_chapters/troubleshooting.adoc   |   4 +-
 src/main/asciidoc/book.adoc                        |   1 -
 10 files changed, 213 insertions(+), 628 deletions(-)

diff --git a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
index c5ae3f0..0956abf 100644
--- a/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
+++ b/hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
@@ -278,7 +278,7 @@ public final class HConstants {
   /** Parameter name for HBase client operation timeout. */
   public static final String HBASE_CLIENT_OPERATION_TIMEOUT = "hbase.client.operation.timeout";
 
-  /** Parameter name for HBase client operation timeout. */
+  /** Parameter name for HBase client meta operation timeout. */
   public static final String HBASE_CLIENT_META_OPERATION_TIMEOUT =
     "hbase.client.meta.operation.timeout";
 
diff --git a/pom.xml b/pom.xml
index b61cdfc..33a0be9 100644
--- a/pom.xml
+++ b/pom.xml
@@ -2634,6 +2634,44 @@
         </plugins>
       </build>
     </profile>
+    <profile>
+      <!-- Used by the website generation script on jenkins to
+           do a local install of the jars we need to run a normal
+           site build w/o forking.
+        -->
+      <id>site-install-step</id>
+      <properties>
+        <skipTests>true</skipTests>
+        <maven.javadoc.skip>true</maven.javadoc.skip>
+        <enforcer.skip>true</enforcer.skip>
+        <checkstyle.skip>true</checkstyle.skip>
+        <findbugs.skip>true</findbugs.skip>
+        <warbucks.skip>true</warbucks.skip>
+      </properties>
+    </profile>
+    <profile>
+      <!-- Used by the website generation script on jenkins to
+           mitigate the impact of unneeded build forks while building
+           our javadocs.
+        -->
+      <id>site-build-step</id>
+      <properties>
+        <skipTests>true</skipTests>
+        <enforcer.skip>true</enforcer.skip>
+        <maven.main.skip>true</maven.main.skip>
+        <maven.test.skip>true</maven.test.skip>
+        <warbucks.skip>true</warbucks.skip>
+        <protoc.skip>true</protoc.skip>
+        <remoteresources.skip>true</remoteresources.skip>
+        <!-- Because the scala-maven-plugin has no skip configuration option
+             this configuration setting here won't actually do anything.
+
+             However, if you pass it on the command line it'll activate
+             a profile in hbase-spark/pom.xml that will skip things.
+        -->
+        <scala.skip>true</scala.skip>
+      </properties>
+    </profile>
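+    <!-- Illustrative usage only (the exact jenkins invocation is not part of this
+         change, so the commands below are assumptions): the website script would
+         first do a cheap local install, e.g.
+             mvn -Psite-install-step install
+         and then build the site/javadocs against it, e.g.
+             mvn -Psite-build-step site
+         -P is standard Maven profile activation; the goals shown are only a sketch. -->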
   </profiles>
   <!-- See http://jira.codehaus.org/browse/MSITE-443 why the settings need to be here and not in pluginManagement. -->
   <reporting>
diff --git a/src/main/asciidoc/_chapters/appendix_hfile_format.adoc b/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
index 18eafe6..bfd0685 100644
--- a/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
+++ b/src/main/asciidoc/_chapters/appendix_hfile_format.adoc
@@ -106,11 +106,11 @@ In the version 2 every block in the data section contains the following fields:
 .. BLOOM_CHUNK – Bloom filter chunks
 .. META – meta blocks (not used for Bloom filters in version 2 anymore)
 .. INTERMEDIATE_INDEX – intermediate-level index blocks in a multi-level blockindex
-.. ROOT_INDEX – root>level index blocks in a multi>level block index
-.. FILE_INFO – the ``file info'' block, a small key>value map of metadata
-.. BLOOM_META – a Bloom filter metadata block in the load>on>open section
-.. TRAILER – a fixed>size file trailer.
-  As opposed to the above, this is not an HFile v2 block but a fixed>size (for each HFile version) data structure
+.. ROOT_INDEX – root-level index blocks in a multi-level block index
+.. FILE_INFO – the ''file info'' block, a small key-value map of metadata
+.. BLOOM_META – a Bloom filter metadata block in the load-on-open section
+.. TRAILER – a fixed-size file trailer.
+  As opposed to the above, this is not an HFile v2 block but a fixed-size (for each HFile version) data structure
 .. INDEX_V1 – this block type is only used for legacy HFile v1 block
 . Compressed size of the block's data, not including the header (int).
 +
@@ -127,7 +127,7 @@ The above format of blocks is used in the following HFile sections:
 
 Scanned block section::
   The section is named so because it contains all data blocks that need to be read when an HFile is scanned sequentially.
-  Also contains leaf block index and Bloom chunk blocks.
+  Also contains Leaf index blocks and Bloom chunk blocks.
 Non-scanned block section::
   This section still contains unified-format v2 blocks but it does not have to be read when doing a sequential scan.
   This section contains "meta" blocks and intermediate-level index blocks.
@@ -140,10 +140,10 @@ There are three types of block indexes in HFile version 2, stored in two differe
 
 . Data index -- version 2 multi-level block index, consisting of:
 .. Version 2 root index, stored in the data block index section of the file
-.. Optionally, version 2 intermediate levels, stored in the non%root format in   the data index section of the file. Intermediate levels can only be present if leaf level blocks are present
-.. Optionally, version 2 leaf levels, stored in the non%root format inline with   data blocks
+.. Optionally, version 2 intermediate levels, stored in the non-root format in   the data index section of the file. Intermediate levels can only be present if leaf level blocks are present
+.. Optionally, version 2 leaf levels, stored in the non-root format inline with   data blocks
 . Meta index -- version 2 root index format only, stored in the meta index section of the file
-. Bloom index -- version 2 root index format only, stored in the ``load-on-open'' section as part of Bloom filter metadata.
+. Bloom index -- version 2 root index format only, stored in the ''load-on-open'' section as part of Bloom filter metadata.
 
 ==== Root block index format in version 2
 
@@ -156,7 +156,7 @@ A version 2 root index block is a sequence of entries of the following format, s
 
 . Offset (long)
 +
-This offset may point to a data block or to a deeper>level index block.
+This offset may point to a data block or to a deeper-level index block.
 
 . On-disk size (int)
 . Key (a serialized byte array stored using Bytes.writeByteArray)
@@ -172,7 +172,7 @@ For the data index and the meta index the number of entries is stored in the tra
 For a multi-level block index we also store the following fields in the root index block in the load-on-open section of the HFile, in addition to the data structure described above:
 
 . Middle leaf index block offset
-. Middle leaf block on-disk size (meaning the leaf index block containing the reference to the ``middle'' data block of the file)
+. Middle leaf block on-disk size (meaning the leaf index block containing the reference to the ''middle'' data block of the file)
 . The index of the mid-key (defined below) in the middle leaf-level block.
 
 
@@ -200,9 +200,9 @@ Every non-root index block is structured as follows.
 . Entries.
   Each entry contains:
 +
-. Offset of the block referenced by this entry in the file (long)
-. On>disk size of the referenced block (int)
-. Key.
+.. Offset of the block referenced by this entry in the file (long)
+.. On-disk size of the referenced block (int)
+.. Key.
   The length can be calculated from entryOffsets.
 
 
@@ -214,7 +214,7 @@ In contrast with version 1, in a version 2 HFile Bloom filter metadata is stored
 +
 . Bloom filter version = 3 (int). There used to be a DynamicByteBloomFilter class that had the Bloom   filter version number 2
 . The total byte size of all compound Bloom filter chunks (long)
-. Number of hash functions (int
+. Number of hash functions (int)
 . Type of hash functions (int)
 . The total key count inserted into the Bloom filter (long)
 . The maximum total number of keys in the Bloom filter (long)
@@ -246,7 +246,7 @@ This is because we need to know the comparator at the time of parsing the load-o
 ==== Fixed file trailer format differences between versions 1 and 2
 
 The following table shows common and different fields between fixed file trailers in versions 1 and 2.
-Note that the size of the trailer is different depending on the version, so it is ``fixed'' only within one version.
+Note that the size of the trailer is different depending on the version, so it is ''fixed'' only within one version.
 However, the version is always stored as the last four-byte integer in the file.
 
 .Differences between HFile Versions 1 and 2
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 3b91869..0df62c8 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -1572,6 +1572,9 @@ For example, to view the content of the file _hdfs://10.81.47.41:8020/hbase/TEST
 If you leave off the option -v to see just a summary on the HFile.
 See usage for other things to do with the `HFile` tool.
 
+NOTE: In the output of this tool, you might see 'seqid=0' for certain keys in places such as 'Mid-key'/'firstKey'/'lastKey'. These are
+ 'KeyOnlyKeyValue' type instances, meaning their seqid is irrelevant and we just need the keys of these Key-Value instances.
+
 [[store.file.dir]]
 ===== StoreFile Directory Structure on HDFS
 
diff --git a/src/main/asciidoc/_chapters/configuration.adoc b/src/main/asciidoc/_chapters/configuration.adoc
index 4702bcb..3d6f373 100644
--- a/src/main/asciidoc/_chapters/configuration.adoc
+++ b/src/main/asciidoc/_chapters/configuration.adoc
@@ -93,43 +93,51 @@ This section lists required services and some required system configuration.
 
 [[java]]
 .Java
-[cols="1,1,1,4", options="header"]
+
+The following table summarizes the recommendation of the HBase community with respect to deploying on various Java versions.
+A icon:check-circle[role="green"] symbol is meant to indicate a base level of testing and willingness to help diagnose and address issues you might run into.
+Similarly, an entry of icon:exclamation-circle[role="yellow"] or icon:times-circle[role="red"] generally means that should you run into an issue the community is likely to ask you to change the Java environment before proceeding to help.
+In some cases, specific guidance on limitations (e.g. whether compiling / unit tests work, specific operational issues, etc) will also be noted.
+
+.Long Term Support JDKs are recommended
+[TIP]
+====
+HBase recommends downstream users rely on JDK releases that are marked as Long Term Supported (LTS) either from the OpenJDK project or vendors. As of March 2018 that means Java 8 is the only applicable version and that the next likely version to see testing will be Java 11 near Q3 2018.
+====
+
+.Java support by release line
+[cols="6*^.^", options="header"]
 |===
 |HBase Version
-|JDK 6
 |JDK 7
 |JDK 8
+|JDK 9 (Non-LTS)
+|JDK 10 (Non-LTS)
+|JDK 11
+
+|2.0+
+|icon:times-circle[role="red"]
+|icon:check-circle[role="green"]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-20264[HBASE-20264]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-20264[HBASE-20264]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-21110[HBASE-21110]
+
+|1.2+
+|icon:check-circle[role="green"]
+|icon:check-circle[role="green"]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-20264[HBASE-20264]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-20264[HBASE-20264]
+v|icon:exclamation-circle[role="yellow"]
+link:https://issues.apache.org/jira/browse/HBASE-21110[HBASE-21110]
 
-|1.2
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
-|yes
-|yes
-
-|1.1
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
-|yes
-|Running with JDK 8 will work but is not well tested.
-
-|1.0
-|link:http://search-hadoop.com/m/DHED4Zlz0R1[Not Supported]
-|yes
-|Running with JDK 8 will work but is not well tested.
-
-|0.98
-|yes
-|yes
-|Running with JDK 8 works but is not well tested. Building with JDK 8 would require removal of the
-deprecated `remove()` method of the `PoolMap` class and is under consideration. See
-link:https://issues.apache.org/jira/browse/HBASE-7608[HBASE-7608] for more information about JDK 8
-support.
-
-|0.94
-|yes
-|yes
-|N/A
 |===
 
-NOTE: In HBase 0.98.5 and newer, you must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ provides a handy mechanism to do this.
+NOTE: In HBase 1.2 and newer, you must set `JAVA_HOME` on each node of your cluster. _hbase-env.sh_ provides a handy mechanism to do this.
 
 [[os]]
 .Operating System Utilities
@@ -210,26 +218,28 @@ Use the following legend to interpret this table:
 
 .Hadoop version support matrix
 
-* "S" = supported
-* "X" = not supported
-* "NT" = Not tested
+* icon:check-circle[role="green"] = Tested to be fully-functional
+* icon:times-circle[role="red"] = Known to not be fully-functional
+* icon:exclamation-circle[role="yellow"] = Not tested, may/may-not function
 
-[cols="1,1,1,1,1,1", options="header"]
+[cols="1,4*^.^", options="header"]
 |===
-| | HBase-0.94.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x | HBase-1.2.x
-|Hadoop-1.0.x  | X | X | X | X | X
-|Hadoop-1.1.x | S | NT | X | X | X
-|Hadoop-0.23.x | S | X | X | X | X
-|Hadoop-2.0.x-alpha | NT | X | X | X | X
-|Hadoop-2.1.0-beta | NT | X | X | X | X
-|Hadoop-2.2.0 | NT | S | NT | NT | X 
-|Hadoop-2.3.x | NT | S | NT | NT | X 
-|Hadoop-2.4.x | NT | S | S | S | S
-|Hadoop-2.5.x | NT | S | S | S | S
-|Hadoop-2.6.0 | X | X | X | X | X
-|Hadoop-2.6.1+ | NT | NT | NT | NT | S
-|Hadoop-2.7.0 | X | X | X | X | X
-|Hadoop-2.7.1+ | NT | NT | NT | NT | S
+| | HBase-1.2.x, HBase-1.3.x | HBase-1.4.x | HBase-2.0.x | HBase-2.1.x
+|Hadoop-2.4.x | icon:check-circle[role="green"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-2.5.x | icon:check-circle[role="green"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-2.6.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-2.6.1+ | icon:check-circle[role="green"] | icon:times-circle[role="red"] | icon:check-circle[role="green"] | icon:times-circle[role="red"]
+|Hadoop-2.7.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-2.7.1+ | icon:check-circle[role="green"] | icon:check-circle[role="green"] | icon:check-circle[role="green"] | icon:check-circle[role="green"]
+|Hadoop-2.8.[0-1] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-2.8.2 | icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"]
+|Hadoop-2.8.3+ | icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"] | icon:check-circle[role="green"] | icon:check-circle[role="green"]
+|Hadoop-2.9.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-2.9.1+ | icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"] | icon:exclamation-circle[role="yellow"]
+|Hadoop-3.0.[0-2] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-3.0.3+ | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:check-circle[role="green"] | icon:check-circle[role="green"]
+|Hadoop-3.1.0 | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:times-circle[role="red"]
+|Hadoop-3.1.1+ | icon:times-circle[role="red"] | icon:times-circle[role="red"] | icon:check-circle[role="green"] | icon:check-circle[role="green"]
 |===
 
 .Hadoop 2.6.x
@@ -656,7 +666,26 @@ Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", "localhost");  // Here we are running zookeeper locally
 ----
 
-If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in a comma-separated list (just as in the _hbase-site.xml_ file). This populated `Configuration` instance can then be passed to an link:http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table], and so on.
+If multiple ZooKeeper instances make up your ZooKeeper ensemble, they may be specified in a comma-separated list (just as in the _hbase-site.xml_ file). This populated `Configuration` instance can then be passed to a link:https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html[Table], and so on.
+
+[[config_timeouts]]
+=== Timeout settings
+
+HBase provides a wide variety of timeout settings to limit the execution time of various remote operations.
+
+* hbase.rpc.timeout
+* hbase.client.operation.timeout
+* hbase.client.meta.operation.timeout
+* hbase.client.scanner.timeout.period
+
+The `hbase.rpc.timeout` property limits how long a single RPC call can run before timing out.
+
+A higher-level timeout is `hbase.client.operation.timeout`, which is valid for each client call.
+When an RPC call fails, for instance because of a timeout due to `hbase.rpc.timeout`, it will be retried until `hbase.client.operation.timeout` is reached.
+The client operation timeout for system tables can be fine-tuned by setting the `hbase.client.meta.operation.timeout` configuration value.
+When this is not set, its value falls back to `hbase.client.operation.timeout`.
+
+The timeout for scan operations is controlled differently. Use the `hbase.client.scanner.timeout.period` property to set this timeout.
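+
+The following sketch (illustrative values only, not tuning recommendations) shows how these properties can be set on a client `Configuration`, just like the ZooKeeper quorum example earlier in this chapter:
+
+[source,java]
+----
+Configuration config = HBaseConfiguration.create();
+// upper bound for a single RPC call
+config.setInt("hbase.rpc.timeout", 20000);
+// upper bound for a whole client operation, across retries
+config.setInt("hbase.client.operation.timeout", 120000);
+// separate timeout that applies to scan operations
+config.setInt("hbase.client.scanner.timeout.period", 60000);
+----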
 
 [[example_config]]
 == Example Configurations
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index b51cf65..8bceb0e 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -1619,7 +1619,7 @@ If you submit a patch for one thing, don't do auto-reformatting or unrelated ref
 Likewise, don't add unrelated cleanup or refactorings outside the scope of your Jira.
 
 [[common.patch.feedback.tests]]
-===== Ambigious Unit Tests
+===== Ambiguous Unit Tests
 
 Make sure that you're clear about what you are testing in your unit tests and why.
 
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc b/src/main/asciidoc/_chapters/ops_mgt.adoc
index 85a3f7d..842ca2d 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -783,15 +783,85 @@ See link:https://issues.apache.org/jira/browse/HBASE-4391[HBASE-4391 Add ability
 [[compaction.tool]]
 === Offline Compaction Tool
 
-See the usage for the
-link:http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/CompactionTool.html[CompactionTool].
-Run it like:
+*CompactionTool* provides a way of running compactions (either minor or major) as an independent
+process from the RegionServer. It reuses the same internal implementation classes executed by the
+RegionServer compaction feature. However, since this runs in a completely separate, independent
+Java process, it releases RegionServers from the overhead involved in rewriting a set of HFiles,
+which can be critical for latency-sensitive use cases.
 
-[source, bash]
+Usage:
 ----
 $ ./bin/hbase org.apache.hadoop.hbase.regionserver.CompactionTool
+
+Usage: java org.apache.hadoop.hbase.regionserver.CompactionTool \
+  [-compactOnce] [-major] [-mapred] [-D<property=value>]* files...
+
+Options:
+ mapred         Use MapReduce to run compaction.
+ compactOnce    Execute just one compaction step. (default: while needed)
+ major          Trigger major compaction.
+
+Note: -D properties will be applied to the conf used.
+For example:
+ To stop delete of compacted file, pass -Dhbase.compactiontool.delete=false
+ To set tmp dir, pass -Dhbase.tmp.dir=ALTERNATE_DIR
+
+Examples:
+ To compact the full 'TestTable' using MapReduce:
+ $ hbase org.apache.hadoop.hbase.regionserver.CompactionTool -mapred hdfs://hbase/data/default/TestTable
+
+ To compact column family 'x' of the table 'TestTable' region 'abc':
+ $ hbase org.apache.hadoop.hbase.regionserver.CompactionTool hdfs://hbase/data/default/TestTable/abc/x
 ----
 
+As shown by the usage options above, *CompactionTool* can run as a standalone client or as a
+MapReduce job. When running as a MapReduce job, each family dir is handled as an input split and
+is processed by a separate map task.
+
+The *compactOnce* parameter controls how many compaction cycles are performed before the
+*CompactionTool* program decides to finish its work. If omitted, it assumes it should keep
+running compactions on each specified family, as determined by the configured compaction
+policy. For more info on compaction policy, see <<compaction,compaction>>.
+
+If a major compaction is desired, the *major* flag can be specified. If omitted, *CompactionTool*
+assumes a minor compaction is wanted by default.
+
+It also allows for configuration overrides with the `-D` flag. In the usage section above, for
+example, the `-Dhbase.compactiontool.delete=false` option instructs the compaction engine not to
+delete the original files from the temp folder.
+
+Files targeted for compaction must be specified as parent HDFS dirs. Multiple dirs may be passed,
+as long as each of these dirs is either a *family*, a *region*, or a *table* dir. If a table or
+region dir is passed, the program will recursively iterate through related sub-folders,
+effectively running compaction for each family found below the table/region level.
+
+Since these dirs are nested under the *hbase* HDFS directory tree, *CompactionTool* requires HBase
+superuser permissions in order to have access to the required HFiles.
+
+.Running in MapReduce mode
+[NOTE]
+====
+MapReduce mode offers the ability to process each family dir in parallel, as a separate map task.
+Generally, it would make sense to run in this mode when specifying one or more table dirs as targets
+for compactions. The caveat, though, is that if the number of families to be compacted becomes too
+large, the related MapReduce job may have an indirect impact on *RegionServers* performance.
+Since *NodeManagers* are normally co-located with RegionServers, such large jobs could
+compete for IO/Bandwidth resources with the *RegionServers*.
+====
+
+.MajorCompaction completely disabled on RegionServers due to performance impacts
+[NOTE]
+====
+*Major compactions* can be a costly operation (see <<compaction,compaction>>), and can indeed
+impact performance on RegionServers, leading operators to completely disable them for critical
+low-latency applications. *CompactionTool* could be used as an alternative in such scenarios,
+although additional custom application logic would need to be implemented, such as deciding the
+scheduling and selection of tables/regions/families targeted for a given compaction run.
+====
+
+For additional details about CompactionTool, see also
+link:https://hbase.apache.org/1.2/devapidocs/org/apache/hadoop/hbase/regionserver/CompactionTool.html[CompactionTool].
+
 === `hbase clean`
 
 The `hbase clean` command cleans HBase data from ZooKeeper, HDFS, or both.
diff --git a/src/main/asciidoc/_chapters/spark.adoc b/src/main/asciidoc/_chapters/spark.adoc
deleted file mode 100644
index 88918aa..0000000
--- a/src/main/asciidoc/_chapters/spark.adoc
+++ /dev/null
@@ -1,554 +0,0 @@
-////
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements. See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership. The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License. You may obtain a copy of the License at
- *
- . . http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-////
-
-[[spark]]
-= HBase and Spark
-:doctype: book
-:numbered:
-:toc: left
-:icons: font
-:experimental:
-
-link:http://spark.apache.org/[Apache Spark] is a software framework that is used
-to process data in memory in a distributed manner, and is replacing MapReduce in
-many use cases.
-
-Spark itself is out of scope of this document, please refer to the Spark site for
-more information on the Spark project and subprojects. This document will focus
-on 4 main interaction points between Spark and HBase. Those interaction points are:
-
-Basic Spark::
-  The ability to have an HBase Connection at any point in your Spark DAG.
-Spark Streaming::
-  The ability to have an HBase Connection at any point in your Spark Streaming
-  application.
-Spark Bulk Load::
-  The ability to write directly to HBase HFiles for bulk insertion into HBase
-SparkSQL/DataFrames::
-  The ability to write SparkSQL that draws on tables that are represented in HBase.
-
-The following sections will walk through examples of all these interaction points.
-
-== Basic Spark
-
-This section discusses Spark HBase integration at the lowest and simplest levels.
-All the other interaction points are built upon the concepts that will be described
-here.
-
-At the root of all Spark and HBase integration is the HBaseContext. The HBaseContext
-takes in HBase configurations and pushes them to the Spark executors. This allows
-us to have an HBase Connection per Spark Executor in a static location.
-
-For reference, Spark Executors can be on the same nodes as the Region Servers or
-on different nodes there is no dependence of co-location. Think of every Spark
-Executor as a multi-threaded client application. This allows any Spark Tasks
-running on the executors to access the shared Connection object.
-
-.HBaseContext Usage Example
-====
-
-This example shows how HBaseContext can be used to do a `foreachPartition` on a RDD
-in Scala:
-
-[source, scala]
-----
-val sc = new SparkContext("local", "test")
-val config = new HBaseConfiguration()
-
-...
-
-val hbaseContext = new HBaseContext(sc, config)
-
-rdd.hbaseForeachPartition(hbaseContext, (it, conn) => {
- val bufferedMutator = conn.getBufferedMutator(TableName.valueOf("t1"))
- it.foreach((putRecord) => {
-. val put = new Put(putRecord._1)
-. putRecord._2.foreach((putValue) => put.addColumn(putValue._1, putValue._2, putValue._3))
-. bufferedMutator.mutate(put)
- })
- bufferedMutator.flush()
- bufferedMutator.close()
-})
-----
-
-Here is the same example implemented in Java:
-
-[source, java]
-----
-JavaSparkContext jsc = new JavaSparkContext(sparkConf);
-
-try {
-  List<byte[]> list = new ArrayList<>();
-  list.add(Bytes.toBytes("1"));
-  ...
-  list.add(Bytes.toBytes("5"));
-
-  JavaRDD<byte[]> rdd = jsc.parallelize(list);
-  Configuration conf = HBaseConfiguration.create();
-
-  JavaHBaseContext hbaseContext = new JavaHBaseContext(jsc, conf);
-
-  hbaseContext.foreachPartition(rdd,
-      new VoidFunction<Tuple2<Iterator<byte[]>, Connection>>() {
-   public void call(Tuple2<Iterator<byte[]>, Connection> t)
-        throws Exception {
-    Table table = t._2().getTable(TableName.valueOf(tableName));
-    BufferedMutator mutator = t._2().getBufferedMutator(TableName.valueOf(tableName));
-    while (t._1().hasNext()) {
-      byte[] b = t._1().next();
-      Result r = table.get(new Get(b));
-      if (r.getExists()) {
-       mutator.mutate(new Put(b));
-      }
-    }
-
-    mutator.flush();
-    mutator.close();
-    table.close();
-   }
-  });
-} finally {
-  jsc.stop();
-}
-----
-====
-
-All functionality between Spark and HBase will be supported both in Scala and in
-Java, with the exception of SparkSQL which will support any language that is
-supported by Spark. For the remaining of this documentation we will focus on
-Scala examples for now.
-
-The examples above illustrate how to do a foreachPartition with a connection. A
-number of other Spark base functions  are supported out of the box:
-
-// tag::spark_base_functions[]
-`bulkPut`:: For massively parallel sending of puts to HBase
-`bulkDelete`:: For massively parallel sending of deletes to HBase
-`bulkGet`:: For massively parallel sending of gets to HBase to create a new RDD
-`mapPartition`:: To do a Spark Map function with a Connection object to allow full
-access to HBase
-`hBaseRDD`:: To simplify a distributed scan to create a RDD
-// end::spark_base_functions[]
-
-For examples of all these functionalities, see the HBase-Spark Module.
-
-== Spark Streaming
-http://spark.apache.org/streaming/[Spark Streaming] is a micro batching stream
-processing framework built on top of Spark. HBase and Spark Streaming make great
-companions in that HBase can help serve the following benefits alongside Spark
-Streaming.
-
-* A place to grab reference data or profile data on the fly
-* A place to store counts or aggregates in a way that supports Spark Streaming
-promise of _only once processing_.
-
-The HBase-Spark module’s integration points with Spark Streaming are similar to
-its normal Spark integration points, in that the following commands are possible
-straight off a Spark Streaming DStream.
-
-include::spark.adoc[tags=spark_base_functions]
-
-.`bulkPut` Example with DStreams
-====
-
-Below is an example of bulkPut with DStreams. It is very close in feel to the RDD
-bulk put.
-
-[source, scala]
-----
-val sc = new SparkContext("local", "test")
-val config = new HBaseConfiguration()
-
-val hbaseContext = new HBaseContext(sc, config)
-val ssc = new StreamingContext(sc, Milliseconds(200))
-
-val rdd1 = ...
-val rdd2 = ...
-
-val queue = mutable.Queue[RDD[(Array[Byte], Array[(Array[Byte],
-    Array[Byte], Array[Byte])])]]()
-
-queue += rdd1
-queue += rdd2
-
-val dStream = ssc.queueStream(queue)
-
-dStream.hbaseBulkPut(
-  hbaseContext,
-  TableName.valueOf(tableName),
-  (putRecord) => {
-   val put = new Put(putRecord._1)
-   putRecord._2.foreach((putValue) => put.addColumn(putValue._1, putValue._2, putValue._3))
-   put
-  })
-----
-
-There are three inputs to the `hbaseBulkPut` function.
-. The hbaseContext that carries the configuration boardcast information link us
-to the HBase Connections in the executors
-. The table name of the table we are putting data into
-. A function that will convert a record in the DStream into an HBase Put object.
-====
-
-== Bulk Load
-
-There are two options for bulk loading data into HBase with Spark.  There is the
-basic bulk load functionality that will work for cases where your rows have
-millions of columns and cases where your columns are not consolidated and
-partitions before the on the map side of the Spark bulk load process.
-
-There is also a thin record bulk load option with Spark, this second option is
-designed for tables that have less then 10k columns per row.  The advantage
-of this second option is higher throughput and less over all load on the Spark
-shuffle operation.
-
-Both implementations work more or less like the MapReduce bulk load process in
-that a partitioner partitions the rowkeys based on region splits and
-the row keys are sent to the reducers in order, so that HFiles can be written
-out directly from the reduce phase.
-
-In Spark terms, the bulk load will be implemented around a the Spark
-`repartitionAndSortWithinPartitions` followed by a Spark `foreachPartition`.
-
-First lets look at an example of using the basic bulk load functionality
-
-.Bulk Loading Example
-====
-
-The following example shows bulk loading with Spark.
-
-[source, scala]
-----
-val sc = new SparkContext("local", "test")
-val config = new HBaseConfiguration()
-
-val hbaseContext = new HBaseContext(sc, config)
-
-val stagingFolder = ...
-val rdd = sc.parallelize(Array(
-      (Bytes.toBytes("1"),
-        (Bytes.toBytes(columnFamily1), Bytes.toBytes("a"), Bytes.toBytes("foo1"))),
-      (Bytes.toBytes("3"),
-        (Bytes.toBytes(columnFamily1), Bytes.toBytes("b"), Bytes.toBytes("foo2.b"))), ...
-
-rdd.hbaseBulkLoad(TableName.valueOf(tableName),
-  t => {
-   val rowKey = t._1
-   val family:Array[Byte] = t._2(0)._1
-   val qualifier = t._2(0)._2
-   val value = t._2(0)._3
-
-   val keyFamilyQualifier= new KeyFamilyQualifier(rowKey, family, qualifier)
-
-   Seq((keyFamilyQualifier, value)).iterator
-  },
-  stagingFolder.getPath)
-
-val load = new LoadIncrementalHFiles(config)
-load.doBulkLoad(new Path(stagingFolder.getPath),
-  conn.getAdmin, table, conn.getRegionLocator(TableName.valueOf(tableName)))
-----
-====
-
-The `hbaseBulkLoad` function takes three required parameters:
-
-. The table name of the table we intend to bulk load too
-
-. A function that will convert a record in the RDD to a tuple key value par. With
-the tuple key being a KeyFamilyQualifer object and the value being the cell value.
-The KeyFamilyQualifer object will hold the RowKey, Column Family, and Column Qualifier.
-The shuffle will partition on the RowKey but will sort by all three values.
-
-. The temporary path for the HFile to be written out too
-
-Following the Spark bulk load command,  use the HBase's LoadIncrementalHFiles object
-to load the newly created HFiles into HBase.
-
-.Additional Parameters for Bulk Loading with Spark
-
-You can set the following attributes with additional parameter options on hbaseBulkLoad.
-
-* Max file size of the HFiles
-* A flag to exclude HFiles from compactions
-* Column Family settings for compression, bloomType, blockSize, and dataBlockEncoding
-
-.Using Additional Parameters
-====
-
-[source, scala]
-----
-val sc = new SparkContext("local", "test")
-val config = new HBaseConfiguration()
-
-val hbaseContext = new HBaseContext(sc, config)
-
-val stagingFolder = ...
-val rdd = sc.parallelize(Array(
-      (Bytes.toBytes("1"),
-        (Bytes.toBytes(columnFamily1), Bytes.toBytes("a"), Bytes.toBytes("foo1"))),
-      (Bytes.toBytes("3"),
-        (Bytes.toBytes(columnFamily1), Bytes.toBytes("b"), Bytes.toBytes("foo2.b"))), ...
-
-val familyHBaseWriterOptions = new java.util.HashMap[Array[Byte], FamilyHFileWriteOptions]
-val f1Options = new FamilyHFileWriteOptions("GZ", "ROW", 128, "PREFIX")
-
-familyHBaseWriterOptions.put(Bytes.toBytes("columnFamily1"), f1Options)
-
-rdd.hbaseBulkLoad(TableName.valueOf(tableName),
-  t => {
-   val rowKey = t._1
-   val family:Array[Byte] = t._2(0)._1
-   val qualifier = t._2(0)._2
-   val value = t._2(0)._3
-
-   val keyFamilyQualifier= new KeyFamilyQualifier(rowKey, family, qualifier)
-
-   Seq((keyFamilyQualifier, value)).iterator
-  },
-  stagingFolder.getPath,
-  familyHBaseWriterOptions,
-  compactionExclude = false,
-  HConstants.DEFAULT_MAX_FILE_SIZE)
-
-val load = new LoadIncrementalHFiles(config)
-load.doBulkLoad(new Path(stagingFolder.getPath),
-  conn.getAdmin, table, conn.getRegionLocator(TableName.valueOf(tableName)))
-----
-====
-
-Now lets look at how you would call the thin record bulk load implementation
-
-.Using thin record bulk load
-====
-
-[source, scala]
-----
-val sc = new SparkContext("local", "test")
-val config = new HBaseConfiguration()
-
-val hbaseContext = new HBaseContext(sc, config)
-
-val stagingFolder = ...
-val rdd = sc.parallelize(Array(
-      ("1",
-        (Bytes.toBytes(columnFamily1), Bytes.toBytes("a"), Bytes.toBytes("foo1"))),
-      ("3",
-        (Bytes.toBytes(columnFamily1), Bytes.toBytes("b"), Bytes.toBytes("foo2.b"))), ...
-
-rdd.hbaseBulkLoadThinRows(hbaseContext,
-      TableName.valueOf(tableName),
-      t => {
-        val rowKey = t._1
-
-        val familyQualifiersValues = new FamiliesQualifiersValues
-        t._2.foreach(f => {
-          val family:Array[Byte] = f._1
-          val qualifier = f._2
-          val value:Array[Byte] = f._3
-
-          familyQualifiersValues +=(family, qualifier, value)
-        })
-        (new ByteArrayWrapper(Bytes.toBytes(rowKey)), familyQualifiersValues)
-      },
-      stagingFolder.getPath,
-      new java.util.HashMap[Array[Byte], FamilyHFileWriteOptions],
-      compactionExclude = false,
-      20)
-
-val load = new LoadIncrementalHFiles(config)
-load.doBulkLoad(new Path(stagingFolder.getPath),
-  conn.getAdmin, table, conn.getRegionLocator(TableName.valueOf(tableName)))
-----
-====
-
-Note that the big difference in using bulk load for thin rows is the function
-returns a tuple with the first value being the row key and the second value
-being an object of FamiliesQualifiersValues, which will contain all the
-values for this row for all column families.
-
-
-== SparkSQL/DataFrames
-
-http://spark.apache.org/sql/[SparkSQL] is a subproject of Spark that supports
-SQL that will compute down to a Spark DAG. In addition,SparkSQL is a heavy user
-of DataFrames. DataFrames are like RDDs with schema information.
-
-The HBase-Spark module includes support for Spark SQL and DataFrames, which allows
-you to write SparkSQL directly on HBase tables. In addition the HBase-Spark
-will push down query filtering logic to HBase.
-
-In HBaseSparkConf, four parameters related to timestamp can be set. They are TIMESTAMP,
-MIN_TIMESTAMP, MAX_TIMESTAMP and MAX_VERSIONS respectively. Users can query records
-with different timestamps or time ranges with MIN_TIMESTAMP and MAX_TIMESTAMP.
-In the meantime, use concrete value instead of tsSpecified and oldMs in the examples below.
-
-.Query with different timestamps
-====
-
-The example below shows how to load df DataFrame with different timestamps.
-tsSpecified is specified by the user.
-HBaseTableCatalog defines the HBase and Relation relation schema.
-writeCatalog defines catalog for the schema mapping.
-----
-val df = sqlContext.read
-      .options(Map(HBaseTableCatalog.tableCatalog -> writeCatalog, HBaseSparkConf.TIMESTAMP -> tsSpecified.toString))
-      .format("org.apache.hadoop.hbase.spark")
-      .load()
-----
-
-The example below shows how to load df DataFrame with different time ranges.
-oldMs is specified by the user.
-----
-val df = sqlContext.read
-      .options(Map(HBaseTableCatalog.tableCatalog -> writeCatalog, HBaseSparkConf.MIN_TIMESTAMP -> "0",
-        HBaseSparkConf.MAX_TIMESTAMP -> oldMs.toString))
-      .format("org.apache.hadoop.hbase.spark")
-      .load()
-----
-
-After loading df DataFrame, users can query data.
-----
-    df.registerTempTable("table")
-    sqlContext.sql("select count(col1) from table").show
-----
-====
-
-=== Predicate Push Down
-
-There are two examples of predicate push down in the HBase-Spark implementation.
-The first example shows the push down of filtering logic on the RowKey. HBase-Spark
-will reduce the filters on RowKeys down to a set of Get and/or Scan commands.
-
-NOTE: The Scans are distributed scans, rather than a single client scan operation.
-
-If the query looks something like the following, the logic will push down and get
-the rows through 3 Gets and 0 Scans. We can do gets because all the operations
-are `equal` operations.
-
-[source,sql]
-----
-SELECT
-  KEY_FIELD,
-  B_FIELD,
-  A_FIELD
-FROM hbaseTmp
-WHERE (KEY_FIELD = 'get1' or KEY_FIELD = 'get2' or KEY_FIELD = 'get3')
-----
-
-Now let's look at an example where we will end up doing two scans on HBase.
-
-[source, sql]
-----
-SELECT
-  KEY_FIELD,
-  B_FIELD,
-  A_FIELD
-FROM hbaseTmp
-WHERE KEY_FIELD < 'get2' or KEY_FIELD > 'get3'
-----
-
-In this example we will get 0 Gets and 2 Scans. One scan will load everything
-from the first row in the table until “get2” and the second scan will get
-everything from “get3” until the last row in the table.
-
-The next query is a good example of having a good deal of range checks. However
-the ranges overlap. To the code will be smart enough to get the following data
-in a single scan that encompasses all the data asked by the query.
-
-[source, sql]
-----
-SELECT
-  KEY_FIELD,
-  B_FIELD,
-  A_FIELD
-FROM hbaseTmp
-WHERE
-  (KEY_FIELD >= 'get1' and KEY_FIELD <= 'get3') or
-  (KEY_FIELD > 'get3' and KEY_FIELD <= 'get5')
-----
-
-The second example of push down functionality offered by the HBase-Spark module
-is the ability to push down filter logic for column and cell fields. Just like
-the RowKey logic, all query logic will be consolidated into the minimum number
-of range checks and equal checks by sending a Filter object along with the Scan
-with information about consolidated push down predicates
-
-.SparkSQL Code Example
-====
-This example shows how we can interact with HBase with SQL.
-
-[source, scala]
-----
-val sc = new SparkContext("local", "test")
-val config = new HBaseConfiguration()
-
-new HBaseContext(sc, TEST_UTIL.getConfiguration)
-val sqlContext = new SQLContext(sc)
-
-df = sqlContext.load("org.apache.hadoop.hbase.spark",
-  Map("hbase.columns.mapping" ->
-   "KEY_FIELD STRING :key, A_FIELD STRING c:a, B_FIELD STRING c:b",
-   "hbase.table" -> "t1"))
-
-df.registerTempTable("hbaseTmp")
-
-val results = sqlContext.sql("SELECT KEY_FIELD, B_FIELD FROM hbaseTmp " +
-  "WHERE " +
-  "(KEY_FIELD = 'get1' and B_FIELD < '3') or " +
-  "(KEY_FIELD >= 'get3' and B_FIELD = '8')").take(5)
-----
-
-There are three major parts of this example that deserve explaining.
-
-The sqlContext.load function::
-  In the sqlContext.load function we see two
-  parameters. The first of these parameters is pointing Spark to the HBase
-  DefaultSource class that will act as the interface between SparkSQL and HBase.
-
-A map of key value pairs::
-  In this example we have two keys in our map, `hbase.columns.mapping` and
-  `hbase.table`. The `hbase.table` directs SparkSQL to use the given HBase table.
-  The `hbase.columns.mapping` key give us the logic to translate HBase columns to
-  SparkSQL columns.
-+
-The `hbase.columns.mapping` is a string that follows the following format
-+
-[source, scala]
-----
-(SparkSQL.ColumnName) (SparkSQL.ColumnType) (HBase.ColumnFamily):(HBase.Qualifier)
-----
-+
-In the example below we see the definition of three fields. Because KEY_FIELD has
-no ColumnFamily, it is the RowKey.
-+
-----
-KEY_FIELD STRING :key, A_FIELD STRING c:a, B_FIELD STRING c:b
-----
-
-The registerTempTable function::
-  This is a SparkSQL function that allows us now to be free of Scala when accessing
-  our HBase table directly with SQL with the table name of "hbaseTmp".
-
-The last major point to note in the example is the `sqlContext.sql` function, which
-allows the user to ask their questions in SQL which will be pushed down to the
-DefaultSource code in the HBase-Spark module. The result of this command will be
-a DataFrame with the Schema of KEY_FIELD and B_FIELD.
-====
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index 556fc3f..5c10ca9 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -590,8 +590,8 @@ See also Jesse Andersen's link:http://blog.cloudera.com/blog/2014/04/how-to-use-
 
 In some situations clients that fetch data from a RegionServer get a LeaseException instead of the usual <<trouble.client.scantimeout>>.
 Usually the source of the exception is `org.apache.hadoop.hbase.regionserver.Leases.removeLease(Leases.java:230)` (line number may vary). It tends to happen in the context of a slow/freezing `RegionServer#next` call.
-It can be prevented by having `hbase.rpc.timeout` > `hbase.regionserver.lease.period`.
-Harsh J investigated the issue as part of the mailing list thread link:http://mail-archives.apache.org/mod_mbox/hbase-user/201209.mbox/%3CCAOcnVr3R-LqtKhFsk8Bhrm-YW2i9O6J6Fhjz2h7q6_sxvwd2yw%40mail.gmail.com%3E[HBase, mail # user - Lease does not exist exceptions]
+It can be prevented by having `hbase.rpc.timeout` > `hbase.client.scanner.timeout.period`.
+Harsh J investigated the issue as part of the mailing list thread link:https://mail-archives.apache.org/mod_mbox/hbase-user/201209.mbox/%3CCAOcnVr3R-LqtKhFsk8Bhrm-YW2i9O6J6Fhjz2h7q6_sxvwd2yw%40mail.gmail.com%3E[HBase, mail # user - Lease does not exist exceptions]
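+
+For illustration only (the values are placeholders; in practice these properties are kept consistent in _hbase-site.xml_ across clients and RegionServers), the relationship might look like:
+
+[source,java]
+----
+Configuration conf = HBaseConfiguration.create();
+conf.setInt("hbase.client.scanner.timeout.period", 60000); // scanner timeout/lease period
+conf.setInt("hbase.rpc.timeout", 90000);                   // kept strictly larger
+----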
 
 [[trouble.client.scarylogs]]
 === Shell or client application throws lots of scary exceptions during normal operation
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index 2209b4f..d030c38 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -65,7 +65,6 @@ include::_chapters/hbase_mob.adoc[]
 include::_chapters/hbase_apis.adoc[]
 include::_chapters/external_apis.adoc[]
 include::_chapters/thrift_filter_language.adoc[]
-include::_chapters/spark.adoc[]
 include::_chapters/cp.adoc[]
 include::_chapters/performance.adoc[]
 include::_chapters/troubleshooting.adoc[]