Posted to commits@hbase.apache.org by st...@apache.org on 2018/10/26 20:17:38 UTC

hbase git commit: HBASE-21054 Copy down docs, amend to suite branch-2.0, and then commit

Repository: hbase
Updated Branches:
  refs/heads/branch-2.1 066082dff -> e867b1a33


HBASE-21054 Copy down docs, amend to suite branch-2.0, and then commit


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/e867b1a3
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/e867b1a3
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/e867b1a3

Branch: refs/heads/branch-2.1
Commit: e867b1a3383621a6b5d3b68d3118e9f6501c35f4
Parents: 066082d
Author: Michael Stack <st...@apache.org>
Authored: Fri Oct 26 13:17:17 2018 -0700
Committer: Michael Stack <st...@apache.org>
Committed: Fri Oct 26 13:17:17 2018 -0700

----------------------------------------------------------------------
 src/main/asciidoc/_chapters/architecture.adoc   |  94 +++++++-
 src/main/asciidoc/_chapters/developer.adoc      | 162 ++++++++++----
 src/main/asciidoc/_chapters/mapreduce.adoc      |   2 +-
 src/main/asciidoc/_chapters/ops_mgt.adoc        | 214 ++++++++++++++-----
 src/main/asciidoc/_chapters/preface.adoc        |   2 +-
 src/main/asciidoc/_chapters/schema_design.adoc  |   2 +-
 src/main/asciidoc/_chapters/security.adoc       |  27 ++-
 .../asciidoc/_chapters/sync_replication.adoc    | 125 +++++++++++
 .../asciidoc/_chapters/troubleshooting.adoc     |   4 +-
 src/main/asciidoc/_chapters/upgrading.adoc      |  23 +-
 src/main/asciidoc/book.adoc                     |   2 +-
 11 files changed, 546 insertions(+), 111 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/architecture.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/architecture.adoc b/src/main/asciidoc/_chapters/architecture.adoc
index 5f215e5..e1905bc 100644
--- a/src/main/asciidoc/_chapters/architecture.adoc
+++ b/src/main/asciidoc/_chapters/architecture.adoc
@@ -594,6 +594,80 @@ See <<regions.arch.assignment>> for more information on region assignment.
 Periodically checks and cleans up the `hbase:meta` table.
 See <<arch.catalog.meta>> for more information on the meta table.
 
+[[master.wal]]
+=== MasterProcWAL
+
+HMaster records administrative operations and their running states, such as the handling of a crashed server,
+table creation, and other DDLs, into a WAL file of its own. The WALs are stored under the MasterProcWALs
+directory. The Master WALs are not like RegionServer WALs. Keeping up the Master WAL allows
+us to run a state machine that is resilient across Master failures. For example, if an HMaster in the
+middle of creating a table encounters an issue and fails, the next active HMaster can pick up where
+the previous one left off and carry the operation to completion. Since hbase-2.0.0, a
+new AssignmentManager (A.K.A. AMv2) was introduced, and the HMaster handles region assignment
+operations, server crash processing, balancing, etc., all via AMv2, persisting all state and
+transitions into MasterProcWALs rather than up into ZooKeeper, as we did in hbase-1.x.
+
+See <<amv2>> (and <<pv2>> for its basis) if you would like to learn more about the new
+AssignmentManager.
+
+[[master.wal.conf]]
+==== Configurations for MasterProcWAL
+Here is the list of configurations that affect MasterProcWAL operation.
+You should not have to change the defaults.
+
+[[hbase.procedure.store.wal.periodic.roll.msec]]
+*`hbase.procedure.store.wal.periodic.roll.msec`*::
++
+.Description
+Frequency of generating a new WAL
++
+.Default
+`1h (3600000 in msec)`
+
+[[hbase.procedure.store.wal.roll.threshold]]
+*`hbase.procedure.store.wal.roll.threshold`*::
++
+.Description
+Size threshold before the WAL rolls. Whenever the WAL reaches this size, or the above period (1 hour by default) passes since the last log roll, the HMaster will generate a new WAL.
++
+.Default
+`32MB (33554432 in bytes)`
+
+[[hbase.procedure.store.wal.warn.threshold]]
+*`hbase.procedure.store.wal.warn.threshold`*::
++
+.Description
+If the number of WALs goes beyond this threshold, the following message should appear in the HMaster log at WARN level when rolling.
+
+ procedure WALs count=xx above the warning threshold 64. check running procedures to see if something is stuck.
+
++
+.Default
+`64`
+
+[[hbase.procedure.store.wal.max.retries.before.roll]]
+*`hbase.procedure.store.wal.max.retries.before.roll`*::
++
+.Description
+Max number of retries when syncing slots (records) to the underlying storage, such as HDFS. On every attempt, the following message should appear in the HMaster log.
+
+ unable to sync slots, retry=xx
+
++
+.Default
+`3`
+
+[[hbase.procedure.store.wal.sync.failure.roll.max]]
+*`hbase.procedure.store.wal.sync.failure.roll.max`*::
++
+.Description
+After the above three retries, the log is rolled and the retry count is reset to 0; a new set of retries then starts. This configuration controls the max number of log-rolling attempts upon sync failure. That is, the HMaster is allowed to fail to sync nine times in total (3 retries x 3 rolls). Once this is exceeded, the following log should appear in the HMaster log.
+
+ Sync slots after log roll failed, abort.
++
+.Default
+`3`
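+
+As with any other HBase configuration, overrides for these settings go in _hbase-site.xml_.
+A minimal sketch (the values shown here are just the defaults listed above):
+[source,xml]
+----
+<property>
+  <name>hbase.procedure.store.wal.roll.threshold</name>
+  <value>33554432</value> <!-- 32MB, in bytes -->
+</property>
+<property>
+  <name>hbase.procedure.store.wal.warn.threshold</name>
+  <value>64</value>
+</property>
+----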
+
 [[regionserver.arch]]
 == RegionServer
 
@@ -947,20 +1021,20 @@ For an end-to-end off-heaped read-path, first of all there should be an off-heap
 _hbase-site.xml_. Also specify the total capacity of the BC using `hbase.bucketcache.size` config. Please remember to adjust value of 'HBASE_OFFHEAPSIZE' in
 _hbase-env.sh_. This is how we specify the max possible off-heap memory allocation for the
 RegionServer java process. This should be bigger than the off-heap BC size. Please keep in mind that there is no default for `hbase.bucketcache.ioengine`
-which means the BC is turned OFF by default (See <<direct.memory>>).
+which means the BC is turned OFF by default (See <<direct.memory>>). 
 
 Next thing to tune is the ByteBuffer pool on the RPC server side.
 The buffers from this pool will be used to accumulate the cell bytes and create a result cell block to send back to the client side.
 `hbase.ipc.server.reservoir.enabled` can be used to turn this pool ON or OFF. By default this pool is ON and available. HBase will create off heap ByteBuffers
 and pool them. Please make sure not to turn this OFF if you want end-to-end off-heaping in read path.
 If this pool is turned off, the server will create temp buffers on heap to accumulate the cell bytes and make a result cell block. This can impact the GC on a highly read loaded server.
-The user can tune this pool with respect to how many buffers are in the pool and what should be the size of each ByteBuffer.
-Use the config `hbase.ipc.server.reservoir.initial.buffer.size` to tune each of the buffer sizes. Default is 64 KB.
+The user can tune this pool with respect to how many buffers are in the pool and what should be the size of each ByteBuffer. 
+Use the config `hbase.ipc.server.reservoir.initial.buffer.size` to tune each of the buffer sizes. Default is 64 KB. 
 
 When the read pattern is a random row read load and each of the rows are smaller in size compared to this 64 KB, try reducing this.
-When the result size is larger than one ByteBuffer size, the server will try to grab more than one buffer and make a result cell block out of these. When the pool is running out of buffers, the server will end up creating temporary on-heap buffers.
+When the result size is larger than one ByteBuffer size, the server will try to grab more than one buffer and make a result cell block out of these. When the pool is running out of buffers, the server will end up creating temporary on-heap buffers. 
 
-The maximum number of ByteBuffers in the pool can be tuned using the config 'hbase.ipc.server.reservoir.initial.max'. Its value defaults to 64 * region server handlers configured (See the config 'hbase.regionserver.handler.count'). The math is such that by default we consider 2 MB as the result cell block size per read result and each handler will be handling a read. For 2 MB size, we need 32 buffers each of size 64 KB (See default buffer size in pool). So per handler 32 ByteBuffers(BB). We allocate twice this size as the max BBs count such that one handler can be creating the response and handing it to the RPC Responder thread and then handling a new request creating a new response cell block (using pooled buffers). Even if the responder could not send back the first TCP reply immediately, our count should allow that we should still have enough buffers in our pool without having to make temporary buffers on the heap. Again for smaller sized random row reads, tune this max count. There are lazily created buffers and the count is the max count to be pooled.
+The maximum number of ByteBuffers in the pool can be tuned using the config 'hbase.ipc.server.reservoir.initial.max'. Its value defaults to 64 * region server handlers configured (See the config 'hbase.regionserver.handler.count'). The math is such that by default we consider 2 MB as the result cell block size per read result and each handler will be handling a read. For 2 MB size, we need 32 buffers each of size 64 KB (See default buffer size in pool). So per handler 32 ByteBuffers(BB). We allocate twice this size as the max BBs count such that one handler can be creating the response and handing it to the RPC Responder thread and then handling a new request creating a new response cell block (using pooled buffers). Even if the responder could not send back the first TCP reply immediately, our count should allow that we should still have enough buffers in our pool without having to make temporary buffers on the heap. Again for smaller sized random row reads, tune this max count. There are lazily created buffers and the count is the max count to be pooled.
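+
+For example, a random-read workload with small rows might shrink the per-buffer size;
+a sketch in _hbase-site.xml_ (the values here are illustrative, not recommendations):
+[source,xml]
+----
+<property>
+  <name>hbase.ipc.server.reservoir.initial.buffer.size</name>
+  <value>8192</value> <!-- 8 KB buffers instead of the default 64 KB -->
+</property>
+<property>
+  <name>hbase.ipc.server.reservoir.initial.max</name>
+  <value>4096</value> <!-- max pooled buffers; default is 64 x handler count -->
+</property>
+----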
 
 If you still see GC issues even after making end-to-end read path off-heap, look for issues in the appropriate buffer pool. Check the below RegionServer log with INFO level:
 [source]
@@ -968,7 +1042,7 @@ If you still see GC issues even after making end-to-end read path off-heap, look
 Pool already reached its max capacity : XXX and no free buffers now. Consider increasing the value for 'hbase.ipc.server.reservoir.initial.max' ?
 ----
 
-The setting for _HBASE_OFFHEAPSIZE_ in _hbase-env.sh_ should consider this off heap buffer pool at the RPC side also. We need to config this max off heap size for the RegionServer as a bit higher than the sum of this max pool size and the off heap cache size. The TCP layer will also need to create direct bytebuffers for TCP communication. Also the DFS client will need some off-heap to do its workings especially if short-circuit reads are configured. Allocating an extra of 1 - 2 GB for the max direct memory size has worked in tests.
+The setting for _HBASE_OFFHEAPSIZE_ in _hbase-env.sh_ should consider this off heap buffer pool at the RPC side also. We need to config this max off heap size for the RegionServer as a bit higher than the sum of this max pool size and the off heap cache size. The TCP layer will also need to create direct bytebuffers for TCP communication. Also the DFS client will need some off-heap to do its workings especially if short-circuit reads are configured. Allocating an extra of 1 - 2 GB for the max direct memory size has worked in tests. 
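+
+As an illustration, a minimal _hbase-env.sh_ sketch assuming a 4G bucket cache and roughly
+1G of pooled RPC buffers (the numbers are assumptions, not recommendations):
+[source,bash]
+----
+# 4G BucketCache + ~1G reservoir buffers + 2G headroom for TCP/DFS direct buffers
+export HBASE_OFFHEAPSIZE=7G
+----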
 
 If you are using co processors and refer the Cells in the read results, DO NOT store reference to these Cells out of the scope of the CP hook methods. Some times the CPs need store info about the cell (Like its row key) for considering in the next CP hook call etc. For such cases, pls clone the required fields of the entire Cell as per the use cases. [ See CellUtil#cloneXXX(Cell) APIs ]
 
@@ -1846,6 +1920,14 @@ See <<managed.compactions>>.
 Compactions do not perform region merges.
 See <<ops.regionmgt.merge>> for more information on region merging.
 
+.Compaction Switch
+We can switch compactions on and off at the region servers. Switching off compactions will also
+interrupt any currently ongoing compactions. It can be done dynamically using the "compaction_switch"
+command from the hbase shell. If done from the command line, this setting will be lost on restart of the
+server. To persist the changes across region servers, modify the configuration
+hbase.regionserver.compaction.enabled in hbase-site.xml and restart HBase. A sketch of both
+approaches follows.
+
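+A sketch from the shell (the argument forms shown are an assumption; run
+`help 'compaction_switch'` for the authoritative syntax):
+[source,ruby]
+----
+hbase> compaction_switch false    # switch compactions off on all region servers
+hbase> compaction_switch true     # switch them back on
+----
+
+And to persist the switch across restarts, in _hbase-site.xml_:
+[source,xml]
+----
+<property>
+  <name>hbase.regionserver.compaction.enabled</name>
+  <value>false</value>
+</property>
+----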
+
 [[compaction.file.selection]]
 ===== Compaction Policy - HBase 0.96.x and newer
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index 935d6e6..51ed461 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -2040,30 +2040,97 @@ For more information on how to use ReviewBoard, see link:http://www.reviewboard.
 
 ==== Guide for HBase Committers
 
+===== Becoming a committer
+
+Committers are responsible for reviewing and integrating code changes, testing
+and voting on release candidates, weighing in on design discussions, as well as
+other types of project contributions. The PMC votes to make a contributor a
+committer based on an assessment of their contributions to the project. It is
+expected that committers demonstrate a sustained history of high-quality
+contributions to the project and community involvement.
+
+Contributions can be made in many ways. There is no single path to becoming a
+committer, nor any expected timeline. Submitting features, improvements, and bug
+fixes is the most common avenue, but other methods are both recognized and
+encouraged (and may be even more important to the health of HBase as a project and a
+community). A non-exhaustive list of potential contributions (in no particular
+order):
+
+* <<appendix_contributing_to_documentation,Update the documentation>> for new
+  changes, best practices, recipes, and other improvements.
+* Keep the website up to date.
+* Perform testing and report the results. For instance, scale testing and
+  testing non-standard configurations are always appreciated.
+* Maintain the shared Jenkins testing environment and other testing
+  infrastructure.
+* <<hbase.rc.voting,Vote on release candidates>> after performing validation, even if non-binding.
+  A non-binding vote is a vote by a non-committer.
+* Provide input for discussion threads on the link:/mail-lists.html[mailing lists] (which usually have
+  `[DISCUSS]` in the subject line).
+* Answer questions on the user or developer mailing lists and on
+  Slack.
+* Make sure the HBase community is a welcoming one and that we adhere to our
+  link:/coc.html[Code of conduct]. Alert the PMC if you
+  have concerns.
+* Review other people's work (both code and non-code) and provide public
+  feedback.
+* Report bugs that are found, or file new feature requests.
+* Triage issues and keep JIRA organized. This includes closing stale issues,
+  labeling new issues, updating metadata, and other tasks as needed.
+* Mentor new contributors of all sorts.
+* Give talks and write blogs about HBase. Add these to the link:/[News] section
+  of the website.
+* Provide UX feedback about HBase, the web UI, the CLI, APIs, and the website.
+* Write demo applications and scripts.
+* Help attract and retain a diverse community.
+* Interact with other projects in ways that benefit HBase and those other
+  projects.
+
+Not every individual is able to do all (or even any) of the items on this list.
+If you think of other ways to contribute, go for it (and add them to the list).
+A pleasant demeanor and willingness to contribute are all you need to make a
+positive impact on the HBase project. Invitations to become a committer are the
+result of steady interaction with the community over the long term, which builds
+trust and recognition.
+
 ===== New committers
 
-New committers are encouraged to first read Apache's generic committer documentation:
+New committers are encouraged to first read Apache's generic committer
+documentation:
 
 * link:https://www.apache.org/dev/new-committers-guide.html[Apache New Committer Guide]
 * link:https://www.apache.org/dev/committers.html[Apache Committer FAQ]
 
 ===== Review
 
-HBase committers should, as often as possible, attempt to review patches submitted by others.
-Ideally every submitted patch will get reviewed by a committer _within a few days_.
-If a committer reviews a patch they have not authored, and believe it to be of sufficient quality, then they can commit the patch, otherwise the patch should be cancelled with a clear explanation for why it was rejected.
-
-The list of submitted patches is in the link:https://issues.apache.org/jira/secure/IssueNavigator.jspa?mode=hide&requestId=12312392[HBase Review Queue], which is ordered by time of last modification.
-Committers should scan the list from top to bottom, looking for patches that they feel qualified to review and possibly commit.
-
-For non-trivial changes, it is required to get another committer to review your own patches before commit.
-Use the btn:[Submit Patch]                        button in JIRA, just like other contributors, and then wait for a `+1` response from another committer before committing.
+HBase committers should, as often as possible, attempt to review patches
+submitted by others. Ideally every submitted patch will get reviewed by a
+committer _within a few days_. If a committer reviews a patch they have not
+authored, and believe it to be of sufficient quality, then they can commit the
+patch. Otherwise the patch should be cancelled with a clear explanation for why
+it was rejected.
+
+The list of submitted patches is in the
+link:https://issues.apache.org/jira/secure/IssueNavigator.jspa?mode=hide&requestId=12312392[HBase Review Queue],
+which is ordered by time of last modification. Committers should scan the list
+from top to bottom, looking for patches that they feel qualified to review and
+possibly commit. If you see a patch you think someone else is better qualified
+to review, you can mention them by username in the JIRA.
+
+For non-trivial changes, it is required that another committer review your
+patches before commit. **Self-commits of non-trivial patches are not allowed.**
+Use the btn:[Submit Patch] button in JIRA, just like other contributors, and
+then wait for a `+1` response from another committer before committing.
 
 ===== Reject
 
-Patches which do not adhere to the guidelines in link:https://hbase.apache.org/book.html#developer[HowToContribute] and to the link:https://wiki.apache.org/hadoop/CodeReviewChecklist[code review checklist] should be rejected.
-Committers should always be polite to contributors and try to instruct and encourage them to contribute better patches.
-If a committer wishes to improve an unacceptable patch, then it should first be rejected, and a new patch should be attached by the committer for review.
+Patches which do not adhere to the guidelines in
+link:https://hbase.apache.org/book.html#developer[HowToContribute] and to the
+link:https://wiki.apache.org/hadoop/CodeReviewChecklist[code review checklist]
+should be rejected. Committers should always be polite to contributors and try
+to instruct and encourage them to contribute better patches. If a committer
+wishes to improve an unacceptable patch, then it should first be rejected, and a
+new patch should be attached by the committer for further review.
 
 [[committing.patches]]
 ===== Commit
@@ -2074,29 +2141,34 @@ Committers commit patches to the Apache HBase GIT repository.
 [NOTE]
 ====
 Make sure your local configuration is correct, especially your identity and email.
-Examine the output of the +$ git config
-                                --list+ command and be sure it is correct.
-See this GitHub article, link:https://help.github.com/articles/set-up-git[Set Up Git] if you need pointers.
+Examine the output of the +$ git config --list+ command and be sure it is correct.
+See link:https://help.github.com/articles/set-up-git[Set Up Git] if you need
+pointers.
 ====
 
-When you commit a patch, please:
-
-. Include the Jira issue id in the commit message along with a short description of the change. Try
-  to add something more than just the Jira title so that someone looking at git log doesn't
-  have to go to Jira to discern what the change is about.
-  Be sure to get the issue ID right, as this causes Jira to link to the change in Git (use the
-  issue's "All" tab to see these).
-. Commit the patch to a new branch based off master or other intended branch.
-  It's a good idea to call this branch by the JIRA ID.
-  Then check out the relevant target branch where you want to commit, make sure your local branch has all remote changes, by doing a +git pull --rebase+ or another similar command, cherry-pick the change into each relevant branch (such as master), and do +git push <remote-server>
-  <remote-branch>+.
+When you commit a patch:
+
+. Include the Jira issue ID in the commit message along with a short description
+  of the change. Try to add something more than just the Jira title so that
+  someone looking at `git log` output doesn't have to go to Jira to discern what
+  the change is about. Be sure to get the issue ID right, because this causes
+  Jira to link to the change in Git (use the issue's "All" tab to see these
+  automatic links).
+. Commit the patch to a new branch based off `master` or the other intended
+  branch. It's a good idea to include the JIRA ID in the name of this branch.
+  Check out the relevant target branch where you want to commit, and make sure
+  your local branch has all remote changes, by doing a +git pull --rebase+ or
+  another similar command. Next, cherry-pick the change into each relevant
+  branch (such as master), and push the changes to the remote branch using
+  a command such as +git push <remote-server> <remote-branch>+.
 +
 WARNING: If you do not have all remote changes, the push will fail.
 If the push fails for any reason, fix the problem or ask for help.
 Do not do a +git push --force+.
 +
 Before you can commit a patch, you need to determine how the patch was created.
-The instructions and preferences around the way to create patches have changed, and there will be a transition period.
+The instructions and preferences around the way to create patches have changed,
+and there will be a transition period.
 +
 .Determine How a Patch Was Created
 * If the first few lines of the patch look like the headers of an email, with a From, Date, and
@@ -2123,16 +2195,18 @@ diff --git src/main/asciidoc/_chapters/developer.adoc src/main/asciidoc/_chapter
 +
 .Example of committing a Patch
 ====
-One thing you will notice with these examples is that there are a lot of +git pull+ commands.
-The only command that actually writes anything to the remote repository is +git push+, and you need to make absolutely sure you have the correct versions of everything and don't have any conflicts before pushing.
-The extra +git
-                                        pull+ commands are usually redundant, but better safe than sorry.
+One thing you will notice with these examples is that there are a lot of
++git pull+ commands. The only command that actually writes anything to the
+remote repository is +git push+, and you need to make absolutely sure you have
+the correct versions of everything and don't have any conflicts before pushing.
+The extra +git pull+ commands are usually redundant, but better safe than sorry.
 
-The first example shows how to apply a patch that was generated with +git format-patch+ and apply it to the `master` and `branch-1` branches.
+The first example shows how to apply a patch that was generated with +git
+format-patch+ and apply it to the `master` and `branch-1` branches.
 
-The directive to use +git format-patch+                                    rather than +git diff+, and not to use `--no-prefix`, is a new one.
-See the second example for how to apply a patch created with +git
-                                        diff+, and educate the person who created the patch.
+The directive to use +git format-patch+ rather than +git diff+, and not to use
+`--no-prefix`, is a new one. See the second example for how to apply a patch
+created with +git diff+, and educate the person who created the patch.
 
 ----
 $ git checkout -b HBASE-XXXX
@@ -2154,13 +2228,13 @@ $ git push origin branch-1
 $ git branch -D HBASE-XXXX
 ----
 
-This example shows how to commit a patch that was created using +git diff+ without `--no-prefix`.
-If the patch was created with `--no-prefix`, add `-p0` to the +git apply+ command.
+This example shows how to commit a patch that was created using +git diff+
+without `--no-prefix`. If the patch was created with `--no-prefix`, add `-p0` to
+the +git apply+ command.
 
 ----
 $ git apply ~/Downloads/HBASE-XXXX-v2.patch
-$ git commit -m "HBASE-XXXX Really Good Code Fix (Joe Schmo)" --author=<contributor> -a  # This
-and next command is needed for patches created with 'git diff'
+$ git commit -m "HBASE-XXXX Really Good Code Fix (Joe Schmo)" --author=<contributor> -a  # This and the next command are needed for patches created with 'git diff'
 $ git commit --amend --signoff
 $ git checkout master
 $ git pull --rebase
@@ -2181,7 +2255,9 @@ $ git branch -D HBASE-XXXX
 ====
 
 . Resolve the issue as fixed, thanking the contributor.
-  Always set the "Fix Version" at this point, but please only set a single fix version for each branch where the change was committed, the earliest release in that branch in which the change will appear.
+  Always set the "Fix Version" at this point, but only set a single fix version
+  for each branch where the change was committed, the earliest release in that
+  branch in which the change will appear.
 
 ====== Commit Message Format
 
@@ -2196,7 +2272,9 @@ The preferred commit message format is:
 HBASE-12345 Fix All The Things (jane@example.com)
 ----
 
-If the contributor used +git format-patch+ to generate the patch, their commit message is in their patch and you can use that, but be sure the JIRA ID is at the front of the commit message, even if the contributor left it out.
+If the contributor used +git format-patch+ to generate the patch, their commit
+message is in their patch and you can use that, but be sure the JIRA ID is at
+the front of the commit message, even if the contributor left it out.
 
 [[committer.amending.author]]
 ====== Add Amending-Author when a conflict cherrypick backporting

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/mapreduce.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/mapreduce.adoc b/src/main/asciidoc/_chapters/mapreduce.adoc
index 2f72a2d..61cff86 100644
--- a/src/main/asciidoc/_chapters/mapreduce.adoc
+++ b/src/main/asciidoc/_chapters/mapreduce.adoc
@@ -120,7 +120,7 @@ You might find the more selective `hbase mapredcp` tool output of interest; it l
 to run a basic mapreduce job against an hbase install. It does not include configuration. You'll probably need to add
 these if you want your MapReduce job to find the target cluster. You'll probably have to also add pointers to extra jars
 once you start to do anything of substance. Just specify the extras by passing the system property `-Dtmpjars` when
-you run `hbase mapredcp`.
+you run `hbase mapredcp`. 
 
 For jobs that do not package their dependencies or call `TableMapReduceUtil#addDependencyJars`, the following command structure is necessary:
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/ops_mgt.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/ops_mgt.adoc b/src/main/asciidoc/_chapters/ops_mgt.adoc
index 5645af5..ae5507f 100644
--- a/src/main/asciidoc/_chapters/ops_mgt.adoc
+++ b/src/main/asciidoc/_chapters/ops_mgt.adoc
@@ -51,7 +51,8 @@ Options:
 Commands:
 Some commands take arguments. Pass no args or -h for usage.
   shell           Run the HBase shell
-  hbck            Run the hbase 'fsck' tool
+  hbck            Run the HBase 'fsck' tool. Defaults read-only hbck1.
+                  Pass '-j /path/to/HBCK2.jar' to run hbase-2.x HBCK2.
   snapshot        Tool for managing snapshots
   wal             Write-ahead-log analyzer
   hfile           Store file analyzer
@@ -386,12 +387,33 @@ Each command except `RowCounter` and `CellCounter` accept a single `--help` argu
 [[hbck]]
 === HBase `hbck`
 
-To run `hbck` against your HBase cluster run `$./bin/hbase hbck`. At the end of the command's output it prints `OK` or `INCONSISTENCY`.
-If your cluster reports inconsistencies, pass `-details` to see more detail emitted.
-If inconsistencies, run `hbck` a few times because the inconsistency may be transient (e.g. cluster is starting up or a region is splitting).
- Passing `-fix` may correct the inconsistency (This is an experimental feature).
+The `hbck` tool that shipped with hbase-1.x has been made read-only in hbase-2.x. It is not able to repair
+hbase-2.x clusters, as hbase internals have changed. Nor should its assessments in read-only mode be
+trusted, as it does not understand hbase-2.x operation.
 
-For more information, see <<hbck.in.depth>>.
+A new tool, <<HBCK2>>, described in the next section, replaces `hbck`.
+
+[[HBCK2]]
+=== HBase `HBCK2`
+
+`HBCK2` is the successor to <<hbck>>, the hbase-1.x fix tool (A.K.A `hbck1`). Use it in place of `hbck1`
+when making repairs against hbase-2.x installs.
+
+`HBCK2` does not ship as part of hbase. It can be found as a subproject of the companion
+link:https://github.com/apache/hbase-operator-tools[hbase-operator-tools] repository at
+link:https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2[Apache HBase HBCK2 Tool].
+`HBCK2` was moved out of hbase so it could evolve at a cadence apart from that of hbase core.
+
+See the link:https://github.com/apache/hbase-operator-tools/tree/master/hbase-hbck2[HBCK2 Home Page]
+for how `HBCK2` differs from `hbck1`, and for how to build and use it.
+
+Once built, you can run `HBCK2` as follows:
+
+----
+$ hbase hbck -j /path/to/HBCK2.jar
+----
+
+This will print `HBCK2` usage, describing its commands and options.
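+
+Commands and their arguments follow the jar path. For instance, a session using HBCK2's
+`assigns` command might look like the following (the encoded region name is illustrative):
+
+----
+$ hbase hbck -j /path/to/HBCK2.jar assigns 1588230740
+----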
 
 [[hfile_tool2]]
 === HFile Tool
@@ -916,7 +938,8 @@ $ bin/hbase pre-upgrade validate-cp -table .*
 It validates every table level co-processors where the table name matches to `.*` regular expression.
 
 ==== DataBlockEncoding validation
-HBase 2.0 removed `PREFIX_TREE` Data Block Encoding from column families.
+HBase 2.0 removed `PREFIX_TREE` Data Block Encoding from column families. For further information
+please check <<upgrade2.0.prefix-tree.removed,_prefix-tree_ encoding removed>>.
 To verify that none of the column families are using incompatible Data Block Encodings in the cluster run the following command.
 
 [source, bash]
@@ -924,8 +947,103 @@ To verify that none of the column families are using incompatible Data Block Enc
 $ bin/hbase pre-upgrade validate-dbe
 ----
 
-This check validates all column families and print out any incompatibilities.
-To change `PREFIX_TREE` encoding to supported one check <<upgrade2.0.prefix-tree.removed,_prefix-tree_ encoding removed>>.
+This check validates all column families and prints out any incompatibilities. For example:
+
+----
+2018-07-13 09:58:32,028 WARN  [main] tool.DataBlockEncodingValidator: Incompatible DataBlockEncoding for table: t, cf: f, encoding: PREFIX_TREE
+----
+
+This means that the Data Block Encoding of table `t`, column family `f` is incompatible. To fix it, use the `alter` command in the HBase shell:
+
+----
+alter 't', { NAME => 'f', DATA_BLOCK_ENCODING => 'FAST_DIFF' }
+----
+
+Please also validate HFiles, as described in the next section.
+
+==== HFile Content validation
+Even though the Data Block Encoding has been changed from `PREFIX_TREE`, it is still possible to have HFiles that contain data encoded that way.
+To verify that HFiles are readable with HBase 2, please use the _HFile content validator_.
+
+[source, bash]
+----
+$ bin/hbase pre-upgrade validate-hfile
+----
+
+The tool will log the corrupt HFiles and details about the root cause.
+If the problem is about PREFIX_TREE encoding, it is necessary to change encodings before upgrading to HBase 2.
+
+The following log message shows an example of incorrect HFiles.
+
+----
+2018-06-05 16:20:46,976 WARN  [hfilevalidator-pool1-t3] hbck.HFileCorruptionChecker: Found corrupt HFile hdfs://example.com:8020/hbase/data/default/t/72ea7f7d625ee30f959897d1a3e2c350/prefix/7e6b3d73263c4851bf2b8590a9b3791e
+org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file hdfs://example.com:8020/hbase/data/default/t/72ea7f7d625ee30f959897d1a3e2c350/prefix/7e6b3d73263c4851bf2b8590a9b3791e
+    ...
+Caused by: java.io.IOException: Invalid data block encoding type in file info: PREFIX_TREE
+    ...
+Caused by: java.lang.IllegalArgumentException: No enum constant org.apache.hadoop.hbase.io.encoding.DataBlockEncoding.PREFIX_TREE
+    ...
+2018-06-05 16:20:47,322 INFO  [main] tool.HFileContentValidator: Corrupted file: hdfs://example.com:8020/hbase/data/default/t/72ea7f7d625ee30f959897d1a3e2c350/prefix/7e6b3d73263c4851bf2b8590a9b3791e
+2018-06-05 16:20:47,383 INFO  [main] tool.HFileContentValidator: Corrupted file: hdfs://example.com:8020/hbase/archive/data/default/t/56be41796340b757eb7fff1eb5e2a905/f/29c641ae91c34fc3bee881f45436b6d1
+----
+
+===== Fixing PREFIX_TREE errors
+
+It's possible to get `PREFIX_TREE` errors even after changing the Data Block Encoding to a supported one. This can happen
+because some HFiles are still encoded with `PREFIX_TREE`, or because snapshots still reference such files.
+
+For fixing HFiles, please run a major compaction on the table (it was `default:t` according to the log message):
+
+----
+major_compact 't'
+----
+
+HFiles can be referenced from snapshots, too. This is the case when the HFile is located under `archive/data`.
+The first step is to determine which snapshot references that HFile (the name of the file was `29c641ae91c34fc3bee881f45436b6d1`
+according to the logs):
+
+[source, bash]
+----
+for snapshot in $(hbase snapshotinfo -list-snapshots 2> /dev/null | tail -n +2 | cut -f 1 -d \|);  # skip the header line
+do
+  echo "checking snapshot named '${snapshot}'";
+  hbase snapshotinfo -snapshot "${snapshot}" -files 2> /dev/null | grep 29c641ae91c34fc3bee881f45436b6d1;
+done
+----
+
+The output of this shell script is:
+
+----
+checking snapshot named 't_snap'
+   1.0 K t/56be41796340b757eb7fff1eb5e2a905/f/29c641ae91c34fc3bee881f45436b6d1 (archive)
+----
+
+This means the `t_snap` snapshot references the incompatible HFile. If the snapshot is still needed,
+then it has to be recreated with the HBase shell:
+
+----
+# creating a new namespace for the cleanup process
+create_namespace 'pre_upgrade_cleanup'
+
+# cloning the snapshot into a temporary table
+clone_snapshot 't_snap', 'pre_upgrade_cleanup:t'
+alter 'pre_upgrade_cleanup:t', { NAME => 'f', DATA_BLOCK_ENCODING => 'FAST_DIFF' }
+major_compact 'pre_upgrade_cleanup:t'
+
+# removing the invalid snapshot
+delete_snapshot 't_snap'
+
+# creating a new snapshot
+snapshot 'pre_upgrade_cleanup:t', 't_snap'
+
+# removing temporary table
+disable 'pre_upgrade_cleanup:t'
+drop 'pre_upgrade_cleanup:t'
+drop_namespace 'pre_upgrade_cleanup'
+----
+
+For further information, please refer to
+link:https://issues.apache.org/jira/browse/HBASE-20649?focusedCommentId=16535476#comment-16535476[HBASE-20649].
 
 === Data Block Encoding Tool
 
@@ -1176,15 +1294,6 @@ Monitor the output of the _/tmp/log.txt_ file to follow the progress of the scri
 Use the following guidelines if you want to create your own rolling restart script.
 
 . Extract the new release, verify its configuration, and synchronize it to all nodes of your cluster using `rsync`, `scp`, or another secure synchronization mechanism.
-. Use the hbck utility to ensure that the cluster is consistent.
-+
-----
-
-$ ./bin/hbck
-----
-+
-Perform repairs if required.
-See <<hbck,hbck>> for details.
 
 . Restart the master first.
   You may need to modify these commands if your new HBase directory is different from the old one, such as for an upgrade.
@@ -1216,7 +1325,6 @@ $ for i in `cat conf/regionservers|sort`; do ./bin/graceful_stop.sh --restart --
 ----
 
 . Restart the Master again, to clear out the dead servers list and re-enable the load balancer.
-. Run the `hbck` utility again, to be sure the cluster is consistent.
 
 [[adding.new.node]]
 === Adding a New Node
@@ -1532,6 +1640,9 @@ Some use cases for cluster replication include:
 NOTE: Replication is enabled at the granularity of the column family.
 Before enabling replication for a column family, create the table and all column families to be replicated, on the destination cluster.
 
+NOTE: Replication is asynchronous: we ship WALs to the other cluster in the background, which means that when you recover through replication, you could lose some data. To address this problem, we have introduced a new feature called synchronous replication. As the mechanism is a bit different, we use a separate section to describe it. Please see
+<<Synchronous Replication,Synchronous Replication>>.
+
 === Replication Overview
 
 Cluster replication uses a source-push methodology.
@@ -2337,9 +2448,12 @@ Since the cluster is up, there is a risk that edits could be missed in the expor
 [[ops.snapshots]]
 == HBase Snapshots
 
-HBase Snapshots allow you to take a snapshot of a table without too much impact on Region Servers.
-Snapshot, Clone and restore operations don't involve data copying.
-Also, Exporting the snapshot to another cluster doesn't have impact on the Region Servers.
+HBase Snapshots allow you to take a copy of a table (both contents and metadata) with a very small performance impact. A Snapshot is an immutable
+collection of table metadata and a list of HFiles that comprised the table at the time the Snapshot was taken. A "clone"
+of a snapshot creates a new table from that snapshot, and a "restore" of a snapshot returns the contents of a table to
+what it was when the snapshot was created. The "clone" and "restore" operations do not require any data to be copied,
+as the underlying HFiles (the files which contain the data for an HBase table) are not modified by either action.
+Similarly, exporting a snapshot to another cluster has little impact on RegionServers of the local cluster.
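+
+For example, the basic snapshot operations from the HBase shell (table `t` and snapshot
+`t_snap` are illustrative names; `restore_snapshot` requires the table to be disabled):
+
+----
+hbase> snapshot 't', 't_snap'             # take a snapshot of table t
+hbase> clone_snapshot 't_snap', 't_new'   # create a new table from the snapshot
+hbase> disable 't'
+hbase> restore_snapshot 't_snap'          # roll t back to the snapshot state
+hbase> enable 't'
+----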
 
 Prior to version 0.94.6, the only way to backup or to clone a table is to use CopyTable/ExportTable, or to copy all the hfiles in HDFS after disabling the table.
 The disadvantages of these methods are that you can degrade region server performance (Copy/Export Table) or you need to disable the table, that means no reads or writes; and this is usually unacceptable.
@@ -2602,8 +2716,6 @@ HDFS replication factor only affects your disk usage and is invisible to most HB
 You can view the current number of regions for a given table using the HMaster UI.
 In the [label]#Tables# section, the number of online regions for each table is listed in the [label]#Online Regions# column.
 This total only includes the in-memory state and does not include disabled or offline regions.
-If you do not want to use the HMaster UI, you can determine the number of regions by counting the number of subdirectories of the /hbase/<table>/ subdirectories in HDFS, or by running the `bin/hbase hbck` command.
-Each of these methods may return a slightly different number, depending on the status of each region.
 
 [[ops.capacity.regions.count]]
 ==== Number of regions per RS - upper bound
@@ -2890,8 +3002,8 @@ If it appears stuck, restart the Master process.
 
 === Remove RegionServer Grouping
 Removing RegionServer Grouping feature from a cluster on which it was enabled involves
-more steps in addition to removing the relevant properties from `hbase-site.xml`. This is
-to clean the RegionServer grouping related meta data so that if the feature is re-enabled
+more steps in addition to removing the relevant properties from `hbase-site.xml`. This is 
+to clean the RegionServer grouping related meta data so that if the feature is re-enabled 
 in the future, the old meta data will not affect the functioning of the cluster.
 
 - Move all tables in non-default rsgroups to `default` regionserver group
@@ -2900,7 +3012,7 @@ in the future, the old meta data will not affect the functioning of the cluster.
 #Reassigning table t1 from non default group - hbase shell
 hbase(main):005:0> move_tables_rsgroup 'default',['t1']
 ----
-- Move all regionservers in non-default rsgroups to `default` regionserver group
+- Move all regionservers in non-default rsgroups to `default` regionserver group    
 [source, bash]
 ----
 #Reassigning all the servers in the non-default rsgroup to default - hbase shell
@@ -2975,21 +3087,21 @@ To check normalizer status and enable/disable normalizer
 [source,bash]
 ----
 hbase(main):001:0> normalizer_enabled
-true
+true 
 0 row(s) in 0.4870 seconds
-
+ 
 hbase(main):002:0> normalizer_switch false
-true
+true 
 0 row(s) in 0.0640 seconds
-
+ 
 hbase(main):003:0> normalizer_enabled
-false
+false 
 0 row(s) in 0.0120 seconds
-
+ 
 hbase(main):004:0> normalizer_switch true
 false
 0 row(s) in 0.0200 seconds
-
+ 
 hbase(main):005:0> normalizer_enabled
 true
 0 row(s) in 0.0090 seconds
@@ -3008,19 +3120,19 @@ merge action being taken as a result of the normalization plan computed by Simpl
 
 Consider a user table with some pre-split regions having 3 equally large regions
 (about 100K rows) and 1 relatively small region (about 25K rows). Following is the
-snippet from an hbase meta table scan showing each of the pre-split regions for
+snippet from an hbase meta table scan showing each of the pre-split regions for 
 the user table.
 
 ----
-table_p8ddpd6q5z,,1469494305548.68b9892220865cb6048 column=info:regioninfo, timestamp=1469494306375, value={ENCODED => 68b9892220865cb604809c950d1adf48, NAME => 'table_p8ddpd6q5z,,1469494305548.68b989222 09c950d1adf48.   0865cb604809c950d1adf48.', STARTKEY => '', ENDKEY => '1'}
-....
-table_p8ddpd6q5z,1,1469494317178.867b77333bdc75a028 column=info:regioninfo, timestamp=1469494317848, value={ENCODED => 867b77333bdc75a028bb4c5e4b235f48, NAME => 'table_p8ddpd6q5z,1,1469494317178.867b7733 bb4c5e4b235f48.  3bdc75a028bb4c5e4b235f48.', STARTKEY => '1', ENDKEY => '3'}
-....
-table_p8ddpd6q5z,3,1469494328323.98f019a753425e7977 column=info:regioninfo, timestamp=1469494328486, value={ENCODED => 98f019a753425e7977ab8636e32deeeb, NAME => 'table_p8ddpd6q5z,3,1469494328323.98f019a7 ab8636e32deeeb.  53425e7977ab8636e32deeeb.', STARTKEY => '3', ENDKEY => '7'}
-....
-table_p8ddpd6q5z,7,1469494339662.94c64e748979ecbb16 column=info:regioninfo, timestamp=1469494339859, value={ENCODED => 94c64e748979ecbb166f6cc6550e25c6, NAME => 'table_p8ddpd6q5z,7,1469494339662.94c64e74 6f6cc6550e25c6.   8979ecbb166f6cc6550e25c6.', STARTKEY => '7', ENDKEY => '8'}
-....
-table_p8ddpd6q5z,8,1469494339662.6d2b3f5fd1595ab8e7 column=info:regioninfo, timestamp=1469494339859, value={ENCODED => 6d2b3f5fd1595ab8e7c031876057b1ee, NAME => 'table_p8ddpd6q5z,8,1469494339662.6d2b3f5f c031876057b1ee.   d1595ab8e7c031876057b1ee.', STARTKEY => '8', ENDKEY => ''}
+table_p8ddpd6q5z,,1469494305548.68b9892220865cb6048 column=info:regioninfo, timestamp=1469494306375, value={ENCODED => 68b9892220865cb604809c950d1adf48, NAME => 'table_p8ddpd6q5z,,1469494305548.68b989222 09c950d1adf48.   0865cb604809c950d1adf48.', STARTKEY => '', ENDKEY => '1'} 
+.... 
+table_p8ddpd6q5z,1,1469494317178.867b77333bdc75a028 column=info:regioninfo, timestamp=1469494317848, value={ENCODED => 867b77333bdc75a028bb4c5e4b235f48, NAME => 'table_p8ddpd6q5z,1,1469494317178.867b7733 bb4c5e4b235f48.  3bdc75a028bb4c5e4b235f48.', STARTKEY => '1', ENDKEY => '3'} 
+.... 
+table_p8ddpd6q5z,3,1469494328323.98f019a753425e7977 column=info:regioninfo, timestamp=1469494328486, value={ENCODED => 98f019a753425e7977ab8636e32deeeb, NAME => 'table_p8ddpd6q5z,3,1469494328323.98f019a7 ab8636e32deeeb.  53425e7977ab8636e32deeeb.', STARTKEY => '3', ENDKEY => '7'} 
+.... 
+table_p8ddpd6q5z,7,1469494339662.94c64e748979ecbb16 column=info:regioninfo, timestamp=1469494339859, value={ENCODED => 94c64e748979ecbb166f6cc6550e25c6, NAME => 'table_p8ddpd6q5z,7,1469494339662.94c64e74 6f6cc6550e25c6.   8979ecbb166f6cc6550e25c6.', STARTKEY => '7', ENDKEY => '8'} 
+.... 
+table_p8ddpd6q5z,8,1469494339662.6d2b3f5fd1595ab8e7 column=info:regioninfo, timestamp=1469494339859, value={ENCODED => 6d2b3f5fd1595ab8e7c031876057b1ee, NAME => 'table_p8ddpd6q5z,8,1469494339662.6d2b3f5f c031876057b1ee.   d1595ab8e7c031876057b1ee.', STARTKEY => '8', ENDKEY => ''}  
 ----
Invoking the normalizer using ‘normalize’ in the HBase shell, the below log snippet
 from HMaster log shows the normalization plan computed as per the logic defined for
@@ -3046,15 +3158,15 @@ and end key as ‘1’, with another region having start key as ‘1’ and end
 Now, that these regions have been merged we see a single new region with start key
 as ‘’ and end key as ‘3’
 ----
-table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:mergeA, timestamp=1469516907431,
-value=PBUF\x08\xA5\xD9\x9E\xAF\xE2*\x12\x1B\x0A\x07default\x12\x10table_p8ddpd6q5z\x1A\x00"\x011(\x000\x00 ea74d246741ba.   8\x00
+table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:mergeA, timestamp=1469516907431, 
+value=PBUF\x08\xA5\xD9\x9E\xAF\xE2*\x12\x1B\x0A\x07default\x12\x10table_p8ddpd6q5z\x1A\x00"\x011(\x000\x00 ea74d246741ba.   8\x00 
 table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:mergeB, timestamp=1469516907431,
-value=PBUF\x08\xB5\xBA\x9F\xAF\xE2*\x12\x1B\x0A\x07default\x12\x10table_p8ddpd6q5z\x1A\x011"\x013(\x000\x0 ea74d246741ba.   08\x00
+value=PBUF\x08\xB5\xBA\x9F\xAF\xE2*\x12\x1B\x0A\x07default\x12\x10table_p8ddpd6q5z\x1A\x011"\x013(\x000\x0 ea74d246741ba.   08\x00 
 table_p8ddpd6q5z,,1469516907210.e06c9b83c4a252b130e column=info:regioninfo, timestamp=1469516907431, value={ENCODED => e06c9b83c4a252b130eea74d246741ba, NAME => 'table_p8ddpd6q5z,,1469516907210.e06c9b83c ea74d246741ba.   4a252b130eea74d246741ba.', STARTKEY => '', ENDKEY => '3'}
-....
-table_p8ddpd6q5z,3,1469514778736.bf024670a847c0adff column=info:regioninfo, timestamp=1469514779417, value={ENCODED => bf024670a847c0adffb74b2e13408b32, NAME => 'table_p8ddpd6q5z,3,1469514778736.bf024670 b74b2e13408b32.  a847c0adffb74b2e13408b32.' STARTKEY => '3', ENDKEY => '7'}
-....
-table_p8ddpd6q5z,7,1469514790152.7c5a67bc755e649db2 column=info:regioninfo, timestamp=1469514790312, value={ENCODED => 7c5a67bc755e649db22f49af6270f1e1, NAME => 'table_p8ddpd6q5z,7,1469514790152.7c5a67bc 2f49af6270f1e1.  755e649db22f49af6270f1e1.', STARTKEY => '7', ENDKEY => '8'}
+.... 
+table_p8ddpd6q5z,3,1469514778736.bf024670a847c0adff column=info:regioninfo, timestamp=1469514779417, value={ENCODED => bf024670a847c0adffb74b2e13408b32, NAME => 'table_p8ddpd6q5z,3,1469514778736.bf024670 b74b2e13408b32.  a847c0adffb74b2e13408b32.' STARTKEY => '3', ENDKEY => '7'} 
+.... 
+table_p8ddpd6q5z,7,1469514790152.7c5a67bc755e649db2 column=info:regioninfo, timestamp=1469514790312, value={ENCODED => 7c5a67bc755e649db22f49af6270f1e1, NAME => 'table_p8ddpd6q5z,7,1469514790152.7c5a67bc 2f49af6270f1e1.  755e649db22f49af6270f1e1.', STARTKEY => '7', ENDKEY => '8'} 
 ....
 table_p8ddpd6q5z,8,1469514790152.58e7503cda69f98f47 column=info:regioninfo, timestamp=1469514790312, value={ENCODED => 58e7503cda69f98f4755178e74288c3a, NAME => 'table_p8ddpd6q5z,8,1469514790152.58e7503c 55178e74288c3a.  da69f98f4755178e74288c3a.', STARTKEY => '8', ENDKEY => ''}
 ----

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/preface.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/preface.adoc b/src/main/asciidoc/_chapters/preface.adoc
index 280f2d8..deebdd3 100644
--- a/src/main/asciidoc/_chapters/preface.adoc
+++ b/src/main/asciidoc/_chapters/preface.adoc
@@ -68,7 +68,7 @@ Yours, the HBase Community.
 
 Please use link:https://issues.apache.org/jira/browse/hbase[JIRA] to report non-security-related bugs.
 
-To protect existing HBase installations from new vulnerabilities, please *do not* use JIRA to report security-related bugs. Instead, send your report to the mailing list private@apache.org, which allows anyone to send messages, but restricts who can read them. Someone on that list will contact you to follow up on your report.
+To protect existing HBase installations from new vulnerabilities, please *do not* use JIRA to report security-related bugs. Instead, send your report to the mailing list private@hbase.apache.org, which allows anyone to send messages, but restricts who can read them. Someone on that list will contact you to follow up on your report.
 
 [[hbase_supported_tested_definitions]]
 .Support and Testing Expectations

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/schema_design.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/schema_design.adoc b/src/main/asciidoc/_chapters/schema_design.adoc
index b7a6936..fdbd184 100644
--- a/src/main/asciidoc/_chapters/schema_design.adoc
+++ b/src/main/asciidoc/_chapters/schema_design.adoc
@@ -1158,7 +1158,7 @@ the regionserver/dfsclient side.
 
 * In `hbase-site.xml`, set the following parameters:
 - `dfs.client.read.shortcircuit = true`
-- `dfs.client.read.shortcircuit.skip.checksum = true` so we don't double checksum (HBase does its own checksumming to save on i/os. See <<hbase.regionserver.checksum.verify.performance>> for more on this.
+- `dfs.client.read.shortcircuit.skip.checksum = true` so we don't double checksum (HBase does its own checksumming to save on i/os). See <<hbase.regionserver.checksum.verify.performance>> for more on this.
 - `dfs.domain.socket.path` to match what was set for the datanodes.
 - `dfs.client.read.shortcircuit.buffer.size = 131072` Important to avoid OOME -- hbase has a default it uses if unset, see `hbase.dfs.client.read.shortcircuit.buffer.size`; its default is 131072.
 * Ensure data locality. In `hbase-site.xml`, set `hbase.hstore.min.locality.to.skip.major.compact = 0.7` (Meaning that 0.7 \<= n \<= 1)
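+
+Taken together, a minimal _hbase-site.xml_ sketch of the client-side settings above (the
+socket path is an assumed example and must match what was configured on the datanodes):
+[source,xml]
+----
+<property>
+  <name>dfs.client.read.shortcircuit</name>
+  <value>true</value>
+</property>
+<property>
+  <name>dfs.client.read.shortcircuit.skip.checksum</name>
+  <value>true</value>
+</property>
+<property>
+  <name>dfs.domain.socket.path</name>
+  <value>/var/lib/hadoop-hdfs/dn_socket</value>
+</property>
+<property>
+  <name>dfs.client.read.shortcircuit.buffer.size</name>
+  <value>131072</value>
+</property>
+----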

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/security.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/security.adoc b/src/main/asciidoc/_chapters/security.adoc
index dae6c53..56f6566 100644
--- a/src/main/asciidoc/_chapters/security.adoc
+++ b/src/main/asciidoc/_chapters/security.adoc
@@ -30,7 +30,7 @@
 [IMPORTANT]
 .Reporting Security Bugs
 ====
-NOTE: To protect existing HBase installations from exploitation, please *do not* use JIRA to report security-related bugs. Instead, send your report to the mailing list private@apache.org, which allows anyone to send messages, but restricts who can read them. Someone on that list will contact you to follow up on your report.
+NOTE: To protect existing HBase installations from exploitation, please *do not* use JIRA to report security-related bugs. Instead, send your report to the mailing list private@hbase.apache.org, which allows anyone to send messages, but restricts who can read them. Someone on that list will contact you to follow up on your report.
 
 HBase adheres to the Apache Software Foundation's policy on reported vulnerabilities, available at http://apache.org/security/.
 
@@ -179,7 +179,25 @@ Add the following to the `hbase-site.xml` file on every client:
 </property>
 ----
 
-The client environment must be logged in to Kerberos from KDC or keytab via the `kinit` command before communication with the HBase cluster will be possible.
+Before version 2.2.0, the client environment had to be logged in to Kerberos from KDC or keytab via the `kinit` command before communication with the HBase cluster was possible.
+
+Since 2.2.0, the client can specify the following configurations in `hbase-site.xml`:
+[source,xml]
+----
+<property>
+  <name>hbase.client.keytab.file</name>
+  <value>/local/path/to/client/keytab</value>
+</property>
+
+<property>
+  <name>hbase.client.keytab.principal</name>
+  <value>foo@EXAMPLE.COM</value>
+</property>
+----
+The application can then do the login and credential renewal jobs automatically, without user intervention.
+
+This is an optional feature. A client upgrading to 2.2.0 can keep the login and credential renewal logic it already had
+in older versions, as long as `hbase.client.keytab.file` and `hbase.client.keytab.principal` are kept unset.
 
 Be advised that if the `hbase.security.authentication` in the client- and server-side site files do not match, the client will not be able to communicate with the cluster.
 
@@ -1721,7 +1739,7 @@ All options have been discussed separately in the sections above.
 <!-- HBase Superuser -->
 <property>
   <name>hbase.superuser</name>
-  <value>hbase, admin</value>
+  <value>hbase,admin</value>
 </property>
 <!-- Coprocessors for ACLs and Visibility Tags -->
 <property>
@@ -1741,8 +1759,7 @@ All options have been discussed separately in the sections above.
 </property>
 <property>
   <name>hbase.coprocessor.regionserver.classes</name>
-  <value>org.apache.hadoop/hbase.security.access.AccessController,
-  org.apache.hadoop.hbase.security.access.VisibilityController</value>
+  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
 </property>
 <!-- Executable ACL for Coprocessor Endpoints -->
 <property>

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/sync_replication.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/sync_replication.adoc b/src/main/asciidoc/_chapters/sync_replication.adoc
new file mode 100644
index 0000000..d28b9a9
--- /dev/null
+++ b/src/main/asciidoc/_chapters/sync_replication.adoc
@@ -0,0 +1,125 @@
+////
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+////
+
+[[syncreplication]]
+= Synchronous Replication
+:doctype: book
+:numbered:
+:toc: left
+:icons: font
+:experimental:
+:source-language: java
+
+== Background
+
+The current <<Cluster Replication, replication>> in HBase is asynchronous. So if the master cluster crashes, the slave cluster may not have the
+newest data. If users want strong consistency then they cannot switch to the slave cluster.
+
+== Design
+
+Please see the design doc on link:https://issues.apache.org/jira/browse/HBASE-19064[HBASE-19064].
+
+== Operation and maintenance
+
+Case.1 Set up two synchronous replication clusters::
+
+* Add a synchronous peer in both source cluster and peer cluster.
+
+For source cluster:
+[source,ruby]
+----
+hbase> add_peer  '1', CLUSTER_KEY => 'lg-hadoop-tst-st01.bj:10010,lg-hadoop-tst-st02.bj:10010,lg-hadoop-tst-st03.bj:10010:/hbase/test-hbase-slave', REMOTE_WAL_DIR=>'hdfs://lg-hadoop-tst-st01.bj:20100/hbase/test-hbase-slave/remoteWALs', TABLE_CFS => {"ycsb-test"=>[]}
+----
+
+For peer cluster:
+[source,ruby]
+----
+hbase> add_peer  '1', CLUSTER_KEY => 'lg-hadoop-tst-st01.bj:10010,lg-hadoop-tst-st02.bj:10010,lg-hadoop-tst-st03.bj:10010:/hbase/test-hbase', REMOTE_WAL_DIR=>'hdfs://lg-hadoop-tst-st01.bj:20100/hbase/test-hbase/remoteWALs', TABLE_CFS => {"ycsb-test"=>[]}
+----
+
+NOTE: For synchronous replication, the current implementation requires that we have the same peer id for both the source
+and the peer cluster. Another thing that needs attention: the peer does not support cluster-level, namespace-level, or
+cf-level replication; only table-level replication is supported for now.
+
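+If you want to confirm that the peer registered on each cluster before changing its state, `list_peers` shows the
+configured peers. This is only a suggested sanity check; the output format varies by release:
+[source,ruby]
+----
+hbase> list_peers
+----
+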
+* Transit the peer cluster to STANDBY state
+
+[source,ruby]
+----
+hbase> transit_peer_sync_replication_state '1', 'STANDBY'
+----
+
+* Transit the source cluster to ACTIVE state
+
+[source,ruby]
+----
+hbase> transit_peer_sync_replication_state '1', 'ACTIVE'
+----
+
+Now, synchronous replication has been set up successfully. The HBase client should only issue requests to the source
+cluster; the peer cluster, which is now in STANDBY state, will reject read/write requests.
+
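+To keep an eye on replication from the shell, the general-purpose `status 'replication'` command reports per-peer
+replication activity. It is not specific to synchronous replication and is shown here only as a monitoring aid:
+[source,ruby]
+----
+hbase> status 'replication'
+----
+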
+Case.2 How to operate when the standby cluster has crashed::
+
+If the standby cluster has crashed, the active cluster will fail to write its remote WAL there. So we need to transit
+the source cluster to DOWNGRADE_ACTIVE state, which means the source cluster won't write any remote WAL any more, while
+normal (asynchronous) replication still works: it queues the newly written WALs, but replication is blocked until the
+peer cluster comes back.
+
+[source,ruby]
+----
+hbase> transit_peer_sync_replication_state '1', 'DOWNGRADE_ACTIVE'
+----
+
+Once the peer cluster comes back, we can just transit the source cluster to ACTIVE, to ensure that the replication will
+be synchronous.
+
+[source,ruby]
+----
+hbase> transit_peer_sync_replication_state '1', 'ACTIVE'
+----
+
+Case.3 How to operate when the active cluster has crashed::
+
+If the active cluster has crashed (it may not be reachable now), transit the standby cluster to DOWNGRADE_ACTIVE
+state, and after that, redirect all client requests to the DOWNGRADE_ACTIVE cluster.
+
+[source,ruby]
+----
+hbase> transit_peer_sync_replication_state '1', 'DOWNGRADE_ACTIVE'
+----
+
+If the crashed cluster comes back again, we just need to transit it to STANDBY directly. Transiting it to
+DOWNGRADE_ACTIVE instead could leave the original ACTIVE cluster with redundant data compared to the current ACTIVE
+cluster. Because the design writes source cluster WALs and remote cluster WALs concurrently, it is possible that the
+source cluster WALs hold more data than the remote cluster's, which would result in data inconsistency. Transiting
+from ACTIVE to STANDBY has no such problem, because we skip replaying the original WALs.
+
+[source,ruby]
+----
+hbase> transit_peer_sync_replication_state '1', 'STANDBY'
+----
+
+After that, we can promote the DOWNGRADE_ACTIVE cluster to ACTIVE, to ensure that the replication will be synchronous.
+
+[source,ruby]
+----
+hbase> transit_peer_sync_replication_state '1', 'ACTIVE'
+----

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/troubleshooting.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/troubleshooting.adoc b/src/main/asciidoc/_chapters/troubleshooting.adoc
index ba7bb02..f5288be 100644
--- a/src/main/asciidoc/_chapters/troubleshooting.adoc
+++ b/src/main/asciidoc/_chapters/troubleshooting.adoc
@@ -868,9 +868,9 @@ Snapshots::
   When you create a snapshot, HBase retains everything it needs to recreate the table's
   state at that time of the snapshot. This includes deleted cells or expired versions.
   For this reason, your snapshot usage pattern should be well-planned, and you should
-  prune snapshots that you no longer need. Snapshots are stored in `/hbase/.snapshots`,
+  prune snapshots that you no longer need. Snapshots are stored in `/hbase/.hbase-snapshot`,
   and archives needed to restore snapshots are stored in
-  `/hbase/.archive/<_tablename_>/<_region_>/<_column_family_>/`.
+  `/hbase/archive/<_tablename_>/<_region_>/<_column_family_>/`.
 
   *Do not* manage snapshots or archives manually via HDFS. HBase provides APIs and
   HBase Shell commands for managing them. For more information, see <<ops.snapshots>>.

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/_chapters/upgrading.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/upgrading.adoc b/src/main/asciidoc/_chapters/upgrading.adoc
index 6dc788a..da0dac0 100644
--- a/src/main/asciidoc/_chapters/upgrading.adoc
+++ b/src/main/asciidoc/_chapters/upgrading.adoc
@@ -331,7 +331,10 @@ As noted in the section <<basic.prerequisites>>, HBase 2.0+ requires a minimum o
 .HBCK must match HBase server version
 You *must not* use an HBase 1.x version of HBCK against an HBase 2.0+ cluster. HBCK is strongly tied to the HBase server version. Using the HBCK tool from an earlier release against an HBase 2.0+ cluster will destructively alter said cluster in unrecoverable ways.
 
-As of HBase 2.0, HBCK is a read-only tool that can report the status of some non-public system internals. You should not rely on the format nor content of these internals to remain consistent across HBase releases.
+As of HBase 2.0, HBCK (a.k.a. _HBCK1_ or _hbck1_) is a read-only tool that can report the status of some non-public system internals. You should not rely on the format or content of these internals to remain consistent across HBase releases.
+
+To read about HBCK's replacement, see <<HBCK2>> in <<ops_mgt>>.
+
 
 ////
 Link to a ref guide section on HBCK in 2.0 that explains use and calls out the inability of clients and server sides to detect version of each other.
@@ -611,6 +614,19 @@ Performance is also an area that is now under active review so look forward to
 improvement in coming releases (See
 link:https://issues.apache.org/jira/browse/HBASE-20188[HBASE-20188 TESTING Performance]).
 
+[[upgrade2.0.it.kerberos]]
+.Integration Tests and Kerberos
+Integration Tests (`IntegrationTests*`) formerly relied on the Kerberos credential cache
+for authentication against secured clusters. This led to tests failing with
+authentication errors when the tickets in the credential cache expired.
+As of hbase-2.0.0 (and hbase-1.3.0+), the integration test clients make use
+of the configuration properties `hbase.client.keytab.file` and
+`hbase.client.kerberos.principal`, both of which are required. The clients perform a
+login from the configured keytab file and automatically refresh the credentials
+in the background for the process lifetime (see
+link:https://issues.apache.org/jira/browse/HBASE-16231[HBASE-16231]).
+
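+For illustration, a minimal client-side configuration sketch with the two properties named above; the keytab path
+and principal are placeholders for your own values:
+[source,xml]
+----
+<property>
+  <name>hbase.client.keytab.file</name>
+  <value>/etc/security/keytabs/hbase-it.keytab</value>
+</property>
+<property>
+  <name>hbase.client.kerberos.principal</name>
+  <value>hbase-it@EXAMPLE.COM</value>
+</property>
+----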
+
 ////
 This would be a good place to link to an appendix on migrating applications
 ////
@@ -731,6 +747,11 @@ Notes:
 
 Doing a raw scan will now return results that have expired according to TTL settings.
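+
+For reference, a raw scan is issued from the shell with the `RAW` scan option (the table name and version count are
+placeholders):
+[source,ruby]
+----
+hbase> scan 'my-table', {RAW => true, VERSIONS => 10}
+----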
 
+[[upgrade1.3]]
+=== Upgrading from pre-1.3 to 1.3+
+If running Integration Tests under Kerberos, see <<upgrade2.0.it.kerberos>>.
+
+
 [[upgrade1.0]]
 === Upgrading to 1.x
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/e867b1a3/src/main/asciidoc/book.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/book.adoc b/src/main/asciidoc/book.adoc
index 764d7b4..5680c79 100644
--- a/src/main/asciidoc/book.adoc
+++ b/src/main/asciidoc/book.adoc
@@ -63,6 +63,7 @@ include::_chapters/security.adoc[]
 include::_chapters/architecture.adoc[]
 include::_chapters/hbase_mob.adoc[]
 include::_chapters/inmemory_compaction.adoc[]
+include::_chapters/sync_replication.adoc[]
 include::_chapters/hbase_apis.adoc[]
 include::_chapters/external_apis.adoc[]
 include::_chapters/thrift_filter_language.adoc[]
@@ -83,7 +84,6 @@ include::_chapters/community.adoc[]
 
 include::_chapters/appendix_contributing_to_documentation.adoc[]
 include::_chapters/faq.adoc[]
-include::_chapters/hbck_in_depth.adoc[]
 include::_chapters/appendix_acl_matrix.adoc[]
 include::_chapters/compression.adoc[]
 include::_chapters/sql.adoc[]