Posted to commits@hbase.apache.org by ap...@apache.org on 2017/12/05 02:54:03 UTC

[1/9] hbase git commit: HBASE-19420 Backport HBASE-19152 Update refguide 'how to build an RC' and the make_rc.sh script

Repository: hbase
Updated Branches:
  refs/heads/branch-1 1fe75f98d -> ba5bd0ae5
  refs/heads/branch-1.4 5f58e618c -> 3839a01dd


http://git-wip-us.apache.org/repos/asf/hbase/blob/14318d73/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index c3ba0a2..8b74690 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -33,40 +33,124 @@ Being familiar with these guidelines will help the HBase committers to use your
 [[getting.involved]]
 == Getting Involved
 
-Apache HBase gets better only when people contribute! If you are looking to contribute to Apache HBase, look for link:https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)[issues in JIRA tagged with the label 'beginner'].
+Apache HBase gets better only when people contribute! If you are looking to contribute to Apache HBase, look for link:https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened)[issues in JIRA tagged with the label 'beginner'].
 These are issues HBase contributors have deemed worthy but not of immediate priority and a good way to ramp on HBase internals.
 See link:http://search-hadoop.com/m/DHED43re96[What label
                 is used for issues that are good on ramps for new contributors?] from the dev mailing list for background.
 
 Before you get started submitting code to HBase, please refer to <<developing,developing>>.
 
-As Apache HBase is an Apache Software Foundation project, see <<asf,asf>>            for more information about how the ASF functions. 
+As Apache HBase is an Apache Software Foundation project, see <<asf,asf>>            for more information about how the ASF functions.
 
 [[mailing.list]]
 === Mailing Lists
 
 Sign up for the dev-list and the user-list.
-See the link:http://hbase.apache.org/mail-lists.html[mailing lists] page.
-Posing questions - and helping to answer other people's questions - is encouraged! There are varying levels of experience on both lists so patience and politeness are encouraged (and please stay on topic.) 
+See the link:https://hbase.apache.org/mail-lists.html[mailing lists] page.
+Posing questions - and helping to answer other people's questions - is encouraged! There are varying levels of experience on both lists so patience and politeness are encouraged (and please stay on topic.)
+
+[[slack]]
+=== Slack
+The Apache HBase project has its own link:http://apache-hbase.slack.com[Slack Channel] for real-time questions
+and discussion. Mail dev@hbase.apache.org to request an invite.
 
 [[irc]]
 === Internet Relay Chat (IRC)
 
+(NOTE: Our IRC channel seems to have been deprecated in favor of the above Slack channel)
+
 For real-time questions and discussions, use the `#hbase` IRC channel on the link:https://freenode.net/[FreeNode] IRC network.
 FreeNode offers a web-based client, but most people prefer a native client, and several clients are available for each operating system.
 
 === Jira
 
-Check for existing issues in link:https://issues.apache.org/jira/browse/HBASE[Jira].
-If it's either a new feature request, enhancement, or a bug, file a ticket. 
+Check for existing issues in link:https://issues.apache.org/jira/projects/HBASE/issues[Jira].
+If it's either a new feature request, enhancement, or a bug, file a ticket.
+
+We track multiple types of work in JIRA:
+
+- Bug: Something is broken in HBase itself.
+- Test: A test is needed, or a test is broken.
+- New feature: You have an idea for new functionality. It's often best to bring
+  these up on the mailing lists first, and then write up a design specification
+  that you add to the feature request JIRA.
+- Improvement: A feature exists, but could be tweaked or augmented. It's often
+  best to bring these up on the mailing lists first and have a discussion, then
+  summarize or link to the discussion if others seem interested in the
+  improvement.
+- Wish: This is like a new feature, but for something you may not have the
+  background to flesh out yourself.
+
+Bugs and tests have the highest priority and should be actionable.
+
+==== Guidelines for reporting effective issues
+
+* *Search for duplicates*: Your issue may have already been reported. Have a
+  look, realizing that someone else might have worded the summary differently.
++
+Also search the mailing lists, which may have information about your problem
+and how to work around it. Don't file an issue for something that has already
+been discussed and resolved on a mailing list, unless you strongly disagree
+with the resolution *and* are willing to help take the issue forward.
+
+* *Discuss in public*: Use the mailing lists to discuss what you've discovered
+  and see if there is something you've missed. Avoid using back channels, so
+  that you benefit from the experience and expertise of the project as a whole.
+
+* *Don't file on behalf of others*: You might not have all the context, and you
+  don't have as much motivation to see it through as the person who is actually
+  experiencing the bug. It's more helpful in the long term to encourage others
+  to file their own issues. Point them to this material and offer to help out
+  the first time or two.
+
+* *Write a good summary*: A good summary includes information about the problem,
+  the impact on the user or developer, and the area of the code.
+** Good: `Address new license dependencies from hadoop3-alpha4`
+** Room for improvement: `Canary is broken`
++
+If you write a bad summary, someone else will rewrite it for you. This is time
+they could have spent working on the issue instead.
+
+* *Give context in the description*: It can be good to think of this in multiple
+  parts:
+** What happens or doesn't happen?
+** How does it impact you?
+** How can someone else reproduce it?
+** What would "fixed" look like?
++
+You don't need to know the answers for all of these, but give as much
+information as you can. If you can provide technical information, such as a
+Git commit SHA that you think might have caused the issue or a build failure
+on builds.apache.org where you think the issue first showed up, share that
+info.
+
+* *Fill in all relevant fields*: These fields help us filter, categorize, and
+  find things.
+
+* *One bug, one issue, one patch*: To help with back-porting, don't split issues
+  or fixes among multiple bugs.
 
-To check for existing issues which you can tackle as a beginner, search for link:https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)[issues in JIRA tagged with the label 'beginner'].
+* *Add value if you can*: Filing issues is great, even if you don't know how to
+  fix them. But providing as much information as possible, being willing to
+  triage and answer questions, and being willing to test potential fixes is even
+  better! We want to fix your issue as quickly as you want it to be fixed.
 
-* .JIRA PrioritiesBlocker: Should only be used if the issue WILL cause data loss or cluster instability reliably.
-* Critical: The issue described can cause data loss or cluster instability in some cases.
-* Major: Important but not tragic issues, like updates to the client API that will add a lot of much-needed functionality or significant bugs that need to be fixed but that don't cause data loss.
-* Minor: Useful enhancements and annoying but not damaging bugs.
-* Trivial: Useful enhancements but generally cosmetic.
+* *Don't be upset if we don't fix it*: Time and resources are finite. In some
+  cases, we may not be able to (or might choose not to) fix an issue, especially
+  if it is an edge case or there is a workaround. Even if it doesn't get fixed,
+  the JIRA is a public record of it, and will help others out if they run into
+  a similar issue in the future.
+
+==== Working on an issue
+
+To check for existing issues which you can tackle as a beginner, search for link:https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened)[issues in JIRA tagged with the label 'beginner'].
+
+.JIRA Priorities
+* *Blocker*: Should only be used if the issue WILL cause data loss or cluster instability reliably.
+* *Critical*: The issue described can cause data loss or cluster instability in some cases.
+* *Major*: Important but not tragic issues, like updates to the client API that will add a lot of much-needed functionality or significant bugs that need to be fixed but that don't cause data loss.
+* *Minor*: Useful enhancements and annoying but not damaging bugs.
+* *Trivial*: Useful enhancements but generally cosmetic.
 
 .Code Blocks in Jira Comments
 ====
@@ -89,11 +173,12 @@ GIT is our repository of record for all but the Apache HBase website.
 We used to be on SVN.
 We migrated.
 See link:https://issues.apache.org/jira/browse/INFRA-7768[Migrate Apache HBase SVN Repos to Git].
-Updating hbase.apache.org still requires use of SVN (See <<hbase.org,hbase.org>>). See link:http://hbase.apache.org/source-repository.html[Source Code
-                Management] page for contributor and committer links or seach for HBase on the link:http://git.apache.org/[Apache Git] page.
+See link:https://hbase.apache.org/source-repository.html[Source Code
+                Management] page for contributor and committer links or search for HBase on the link:https://git.apache.org/[Apache Git] page.
 
 == IDEs
 
+[[eclipse]]
 === Eclipse
 
 [[eclipse.code.formatting]]
@@ -102,27 +187,12 @@ Updating hbase.apache.org still requires use of SVN (See <<hbase.org,hbase.org>>
 Under the _dev-support/_ folder, you will find _hbase_eclipse_formatter.xml_.
 We encourage you to have this formatter in place in eclipse when editing HBase code.
 
-.Procedure: Load the HBase Formatter Into Eclipse
-. Open the  menu item.
-. In Preferences, click the  menu item.
-. Click btn:[Import] and browse to the location of the _hbase_eclipse_formatter.xml_ file, which is in the _dev-support/_ directory.
-  Click btn:[Apply].
-. Still in Preferences, click .
-  Be sure the following options are selected:
-+
-* Perform the selected actions on save
-* Format source code
-* Format edited lines
-+
-Click btn:[Apply].
-Close all dialog boxes and return to the main window.
-
+Go to `Preferences->Java->Code Style->Formatter->Import` to load the xml file.
+Go to `Preferences->Java->Editor->Save Actions`, and make sure 'Format source code' and 'Format
+edited lines' are selected.
 
-In addition to the automatic formatting, make sure you follow the style guidelines explained in <<common.patch.feedback,common.patch.feedback>>
-
-Also, no `@author` tags - that's a rule.
-Quality Javadoc comments are appreciated.
-And include the Apache license.
+In addition to the automatic formatting, make sure you follow the style guidelines explained in
+<<common.patch.feedback,common.patch.feedback>>.
 
 [[eclipse.git.plugin]]
 ==== Eclipse Git Plugin
@@ -133,30 +203,30 @@ If you cloned the project via git, download and install the Git plugin (EGit). A
 ==== HBase Project Setup in Eclipse using `m2eclipse`
 
 The easiest way is to use the +m2eclipse+ plugin for Eclipse.
-Eclipse Indigo or newer includes +m2eclipse+, or you can download it from link:http://www.eclipse.org/m2e//. It provides Maven integration for Eclipse, and even lets you use the direct Maven commands from within Eclipse to compile and test your project.
+Eclipse Indigo or newer includes +m2eclipse+, or you can download it from http://www.eclipse.org/m2e/. It provides Maven integration for Eclipse, and even lets you use the direct Maven commands from within Eclipse to compile and test your project.
 
 To import the project, click  and select the HBase root directory. `m2eclipse`                    locates all the hbase modules for you.
 
-If you install +m2eclipse+ and import HBase in your workspace, do the following to fix your eclipse Build Path. 
+If you install +m2eclipse+ and import HBase in your workspace, do the following to fix your eclipse Build Path.
 
 . Remove _target_ folder
 . Add _target/generated-jamon_ and _target/generated-sources/java_ folders.
 . Remove from your Build Path the exclusions on the _src/main/resources_ and _src/test/resources_ to avoid error message in the console, such as the following:
 +
 ----
-Failed to execute goal 
+Failed to execute goal
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project hbase:
-'An Ant BuildException has occured: Replace: source file .../target/classes/hbase-default.xml 
+'An Ant BuildException has occurred: Replace: source file .../target/classes/hbase-default.xml
 doesn't exist
 ----
 +
-This will also reduce the eclipse build cycles and make your life easier when developing. 
+This will also reduce the eclipse build cycles and make your life easier when developing.
 
 
 [[eclipse.commandline]]
 ==== HBase Project Setup in Eclipse Using the Command Line
 
-Instead of using `m2eclipse`, you can generate the Eclipse files from the command line. 
+Instead of using `m2eclipse`, you can generate the Eclipse files from the command line.
 
 . First, run the following command, which builds HBase.
   You only need to do this once.
@@ -181,7 +251,7 @@ mvn eclipse:eclipse
 The `$M2_REPO` classpath variable needs to be set up for the project.
 This needs to be set to your local Maven repository, which is usually _~/.m2/repository_
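If you use the maven-eclipse-plugin, one hedged way to define `M2_REPO` for a whole workspace (the workspace path here is illustrative) is the plugin's `configure-workspace` goal:

```shell
# configure-workspace writes the M2_REPO classpath variable into the
# given Eclipse workspace's settings (workspace path is an example).
mvn -Declipse.workspace="$HOME/workspace" eclipse:configure-workspace
```

Alternatively, define the variable by hand under Eclipse's Java Build Path preferences.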
 
-If this classpath variable is not configured, you will see compile errors in Eclipse like this: 
+If this classpath variable is not configured, you will see compile errors in Eclipse like this:
 
 ----
 
@@ -209,14 +279,14 @@ Access restriction: The method getLong(Object, long) from the type Unsafe is not
 [[eclipse.more]]
 ==== Eclipse - More Information
 
-For additional information on setting up Eclipse for HBase development on Windows, see link:http://michaelmorello.blogspot.com/2011/09/hbase-subversion-eclipse-windows.html[Michael Morello's blog] on the topic. 
+For additional information on setting up Eclipse for HBase development on Windows, see link:http://michaelmorello.blogspot.com/2011/09/hbase-subversion-eclipse-windows.html[Michael Morello's blog] on the topic.
 
 === IntelliJ IDEA
 
-You can set up IntelliJ IDEA for similar functinoality as Eclipse.
+You can set up IntelliJ IDEA for similar functionality as Eclipse.
 Follow these steps.
 
-. Select 
+. Select
 . You do not need to select a profile.
   Be sure [label]#Maven project
   required# is selected, and click btn:[Next].
@@ -227,7 +297,7 @@ Using the Eclipse Code Formatter plugin for IntelliJ IDEA, you can import the HB
 
 === Other IDEs
 
-It would be userful to mirror the <<eclipse,eclipse>> set-up instructions for other IDEs.
+It would be useful to mirror the <<eclipse,eclipse>> set-up instructions for other IDEs.
 If you would like to assist, please have a look at link:https://issues.apache.org/jira/browse/HBASE-11704[HBASE-11704].
 
 [[build]]
@@ -237,20 +307,20 @@ If you would like to assist, please have a look at link:https://issues.apache.or
 === Basic Compile
 
 HBase is compiled using Maven.
-You must use Maven 3.x.
+You must use at least Maven 3.0.4.
 To check your Maven version, run the command +mvn -version+.
 
 .JDK Version Requirements
 [NOTE]
 ====
 Starting with HBase 1.0 you must use Java 7 or later to build from source code.
-See <<java,java>> for more complete information about supported JDK versions. 
+See <<java,java>> for more complete information about supported JDK versions.
 ====
 
 [[maven.build.commands]]
 ==== Maven Build Commands
 
-All commands are executed from the local HBase project directory. 
+All commands are executed from the local HBase project directory.
 
 ===== Package
 
@@ -269,7 +339,7 @@ mvn clean package -DskipTests
 ----
 
 With Eclipse set up as explained above in <<eclipse,eclipse>>, you can also use the menu:Build[] command in Eclipse.
-To create the full installable HBase package takes a little bit more work, so read on. 
+To create the full installable HBase package takes a little bit more work, so read on.
 
 [[maven.build.commands.compile]]
 ===== Compile
@@ -313,38 +383,27 @@ See the <<hbase.unittests.cmds,hbase.unittests.cmds>> section in <<hbase.unittes
 [[maven.build.hadoop]]
 ==== Building against various hadoop versions.
 
-As of 0.96, Apache HBase supports building against Apache Hadoop versions: 1.0.3, 2.0.0-alpha and 3.0.0-SNAPSHOT.
-By default, in 0.96 and earlier, we will build with Hadoop-1.0.x.
-As of 0.98, Hadoop 1.x is deprecated and Hadoop 2.x is the default.
-To change the version to build against, add a hadoop.profile property when you invoke +mvn+:
+HBase supports building against Apache Hadoop versions: 2.y and 3.y (early release artifacts). By default we build against Hadoop 2.x.
+
+To build against a specific release from the Hadoop 2.y line, set e.g. `-Dhadoop-two.version=2.7.4`.
 
 [source,bourne]
 ----
-mvn -Dhadoop.profile=1.0 ...
+mvn -Dhadoop-two.version=2.7.4 ...
 ----
 
-The above will build against whatever explicit hadoop 1.x version we have in our _pom.xml_ as our '1.0' version.
-Tests may not all pass so you may need to pass `-DskipTests` unless you are inclined to fix the failing tests.
-
-.'dependencyManagement.dependencies.dependency.artifactId' fororg.apache.hbase:${compat.module}:test-jar with value '${compat.module}'does not match a valid id pattern
-[NOTE]
-====
-You will see ERRORs like the above title if you pass the _default_ profile; e.g.
-if you pass +hadoop.profile=1.1+ when building 0.96 or +hadoop.profile=2.0+ when building hadoop 0.98; just drop the hadoop.profile stipulation in this case to get your build to run again.
-This seems to be a maven pecularity that is probably fixable but we've not spent the time trying to figure it.
-====
-
-Similarly, for 3.0, you would just replace the profile value.
-Note that Hadoop-3.0.0-SNAPSHOT does not currently have a deployed maven artificat - you will need to build and install your own in your local maven repository if you want to run against this profile. 
-
-In earilier versions of Apache HBase, you can build against older versions of Apache Hadoop, notably, Hadoop 0.22.x and 0.23.x.
-If you are running, for example HBase-0.94 and wanted to build against Hadoop 0.23.x, you would run with:
+To change the major release line of Hadoop we build against, add a hadoop.profile property when you invoke +mvn+:
 
 [source,bourne]
 ----
-mvn -Dhadoop.profile=22 ...
+mvn -Dhadoop.profile=3.0 ...
 ----
 
+The above will build against whatever explicit hadoop 3.y version we have in our _pom.xml_ as our '3.0' version.
+Tests may not all pass so you may need to pass `-DskipTests` unless you are inclined to fix the failing tests.
+
+To pick a particular Hadoop 3.y release, you'd set e.g. `-Dhadoop-three.version=3.0.0-alpha1`.
+
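Combining the two, a build against a pinned Hadoop 3 release might look like the following sketch (the version shown is illustrative, not a recommendation):

```shell
# Select the Hadoop 3 profile and pin the exact Hadoop version;
# -DskipTests because not all tests may pass against alternate Hadoops.
mvn -Dhadoop.profile=3.0 -Dhadoop-three.version=3.0.0-alpha1 -DskipTests clean install
```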
 [[build.protobuf]]
 ==== Build Protobuf
 
@@ -367,7 +426,7 @@ You may also want to define `protoc.path` for the protoc binary, using the follo
 mvn compile -Pcompile-protobuf -Dprotoc.path=/opt/local/bin/protoc
 ----
 
-Read the _hbase-protocol/README.txt_ for more details. 
+Read the _hbase-protocol/README.txt_ for more details.
 
 [[build.thrift]]
 ==== Build Thrift
@@ -415,9 +474,8 @@ mvn -DskipTests package assembly:single deploy
 ==== Build Gotchas
 
 If you see `Unable to find resource 'VM_global_library.vm'`, ignore it.
-Its not an error.
-It is link:http://jira.codehaus.org/browse/MSITE-286[officially
-                        ugly] though. 
+It's not an error.
+It is link:https://issues.apache.org/jira/browse/MSITE-286[officially ugly] though.
 
 [[releasing]]
 == Releasing Apache HBase
@@ -429,27 +487,7 @@ HBase 1.x requires Java 7 to build.
 See <<java,java>> for Java requirements per HBase release.
 ====
 
-=== Building against HBase 0.96-0.98
-
-HBase 0.96.x will run on Hadoop 1.x or Hadoop 2.x.
-HBase 0.98 still runs on both, but HBase 0.98 deprecates use of Hadoop 1.
-HBase 1.x will _not_                run on Hadoop 1.
-In the following procedures, we make a distinction between HBase 1.x builds and the awkward process involved building HBase 0.96/0.98 for either Hadoop 1 or Hadoop 2 targets. 
-
-You must choose which Hadoop to build against.
-It is not possible to build a single HBase binary that runs against both Hadoop 1 and Hadoop 2.
-Hadoop is included in the build, because it is needed to run HBase in standalone mode.
-Therefore, the set of modules included in the tarball changes, depending on the build target.
-To determine which HBase you have, look at the HBase version.
-The Hadoop version is embedded within it.
-
-Maven, our build system, natively does not allow a single product to be built against different dependencies.
-Also, Maven cannot change the set of included modules and write out the correct _pom.xml_ files with appropriate dependencies, even using two build targets, one for Hadoop 1 and another for Hadoop 2.
-A prerequisite step is required, which takes as input the current _pom.xml_s and generates Hadoop 1 or Hadoop 2 versions using a script in the _dev-tools/_ directory, called _generate-hadoopX-poms.sh_                where [replaceable]_X_ is either `1` or `2`.
-You then reference these generated poms when you build.
-For now, just be aware of the difference between HBase 1.x builds and those of HBase 0.96-0.98.
-This difference is important to the build instructions.
-
+[[maven.settings.xml]]
 .Example _~/.m2/settings.xml_ File
 ====
 Publishing to maven requires you sign the artifacts you want to upload.
@@ -497,48 +535,53 @@ For the build to sign them for you, you a properly configured _settings.xml_ in
 
 [[maven.release]]
 === Making a Release Candidate
-
-NOTE: These instructions are for building HBase 1.0.x.
-For building earlier versions, the process is different.
-See this section under the respective release documentation folders. 
-
-.Point Releases
-If you are making a point release (for example to quickly address a critical incompatability or security problem) off of a release branch instead of a development branch, the tagging instructions are slightly different.
-I'll prefix those special steps with _Point Release Only_. 
+Only committers may make releases of hbase artifacts.
 
 .Before You Begin
-Before you make a release candidate, do a practice run by deploying a snapshot.
-Before you start, check to be sure recent builds have been passing for the branch from where you are going to take your release.
-You should also have tried recent branch tips out on a cluster under load, perhaps by running the `hbase-it` integration test suite for a few hours to 'burn in' the near-candidate bits. 
-
-.Point Release Only
+Make sure your environment is properly set up. Maven and Git are the main tools
+used in the steps below. You'll need a properly configured _settings.xml_ file in your
+local _~/.m2_ maven repository with logins for apache repos (See <<maven.settings.xml>>).
+You will also need to have a published signing key. Browse the Hadoop
+link:http://wiki.apache.org/hadoop/HowToRelease[How To Release] wiki page on
+how to release. It is a model for most of the instructions below. It often has more
+detail on particular steps, for example, on adding your code signing key to the
+project KEYS file up in Apache or on how to update JIRA in preparation for release.
+
+Before you make a release candidate, do a practice run by deploying a SNAPSHOT.
+Check to be sure recent builds have been passing for the branch from where you
+are going to take your release. You should also have tried recent branch tips
+out on a cluster under load, perhaps by running the `hbase-it` integration test
+suite for a few hours to 'burn in' the near-candidate bits.
+
+
+.Specifying the Heap Space for Maven
 [NOTE]
 ====
-At this point you should tag the previous release branch (ex: 0.96.1) with the new point release tag (e.g.
-0.96.1.1 tag). Any commits with changes for the point release should be appled to the new tag. 
-====
-
-The Hadoop link:http://wiki.apache.org/hadoop/HowToRelease[How To
-                    Release] wiki page is used as a model for most of the instructions below, and may have more detail on particular sections, so it is worth review.
-
-.Specifying the Heap Space for Maven on OSX
-[NOTE]
-====
-On OSX, you may need to specify the heap space for Maven commands, by setting the `MAVEN_OPTS` variable to `-Xmx3g`.
+You may run into OutOfMemoryErrors building, particularly building the site and
+documentation. Up the heap for Maven by setting the `MAVEN_OPTS` variable.
 You can prefix the variable to the Maven command, as in the following example:
 
 ----
-MAVEN_OPTS="-Xmx2g" mvn package
+MAVEN_OPTS="-Xmx4g -XX:MaxPermSize=256m" mvn package
 ----
 
 You could also set this in an environment variable or alias in your shell.
 ====
 
 
-NOTE: The script _dev-support/make_rc.sh_ automates many of these steps.
-It does not do the modification of the _CHANGES.txt_                    for the release, the close of the staging repository in Apache Maven (human intervention is needed here), the checking of the produced artifacts to ensure they are 'good' -- e.g.
-extracting the produced tarballs, verifying that they look right, then starting HBase and checking that everything is running correctly, then the signing and pushing of the tarballs to link:http://people.apache.org[people.apache.org].
-The script handles everything else, and comes in handy.
+[NOTE]
+====
+The script _dev-support/make_rc.sh_ automates many of the steps below.
+It will check out a tag, clean the checkout, build src and bin tarballs,
+and deploy the built jars to repository.apache.org.
+It does NOT do the modification of the _CHANGES.txt_ for the release,
+the checking of the produced artifacts to ensure they are 'good' --
+e.g. extracting the produced tarballs, verifying that they
+look right, then starting HBase and checking that everything is running
+correctly -- or the signing and pushing of the tarballs to
+link:https://people.apache.org[people.apache.org].
+Take a look. Modify/improve as you see fit.
+====
 
 .Procedure: Release Procedure
 . Update the _CHANGES.txt_ file and the POM files.
@@ -546,118 +589,188 @@ The script handles everything else, and comes in handy.
 Update _CHANGES.txt_ with the changes since the last release.
 Make sure the URL to the JIRA points to the proper location which lists fixes for this release.
 Adjust the version in all the POM files appropriately.
-If you are making a release candidate, you must remove the `-SNAPSHOT` label from all versions.
+If you are making a release candidate, you must remove the `-SNAPSHOT` label from all versions
+in all pom.xml files.
 If you are running this recipe to publish a snapshot, you must keep the `-SNAPSHOT` suffix on the hbase version.
-The link:http://mojo.codehaus.org/versions-maven-plugin/[Versions
-                            Maven Plugin] can be of use here.
+The link:http://www.mojohaus.org/versions-maven-plugin/[Versions Maven Plugin] can be of use here.
 To set a version in all the many poms of the hbase multi-module project, use a command like the following:
 +
 [source,bourne]
 ----
-
-$ mvn clean org.codehaus.mojo:versions-maven-plugin:1.3.1:set -DnewVersion=0.96.0
+$ mvn clean org.codehaus.mojo:versions-maven-plugin:2.5:set -DnewVersion=1.5.0
 ----
 +
-Checkin the _CHANGES.txt_ and any version changes.
+Make sure all versions in poms are changed! Check in the _CHANGES.txt_ and any maven version changes.
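+
Before checking in, a quick, hedged way to confirm that no pom still carries the `-SNAPSHOT` suffix (run from the project root):

```shell
# Grep every pom.xml for a lingering -SNAPSHOT version; no matches means
# the versions update caught them all.
if grep -rn --include=pom.xml -- '-SNAPSHOT' .; then
  echo "some poms still reference -SNAPSHOT"
else
  echo "no -SNAPSHOT versions remain"
fi
```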
 
 . Update the documentation.
 +
-Update the documentation under _src/main/docbkx_.
-This usually involves copying the latest from trunk and making version-particular adjustments to suit this release candidate version. 
+Update the documentation under _src/main/asciidoc_.
+This usually involves copying the latest from master branch and making version-particular
+adjustments to suit this release candidate version.
 
-. Build the source tarball.
+. Clean the checkout dir
 +
-Now, build the source tarball.
-This tarball is Hadoop-version-independent.
-It is just the pure source code and documentation without a particular hadoop taint, etc.
-Add the `-Prelease` profile when building.
-It checks files for licenses and will fail the build if unlicensed files are present.
+[source,bourne]
+----
+
+$ mvn clean
+$ git clean -f -x -d
+----
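+
As a hedged follow-up check, a pristine checkout should report nothing from `git status`, not even ignored files:

```shell
# After `mvn clean` and `git clean -f -x -d`, the working tree should be
# pristine: no untracked files and no ignored files left behind.
git status --porcelain --ignored
# empty output means the checkout is clean
```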
+
+
+. Run Apache-Rat
+Check that licenses are all good.
 +
 [source,bourne]
 ----
 
-$ mvn clean install -DskipTests assembly:single -Dassembly.file=hbase-assembly/src/main/assembly/src.xml -Prelease
+$ mvn apache-rat:check
 ----
 +
-Extract the tarball and make sure it looks good.
-A good test for the src tarball being 'complete' is to see if you can build new tarballs from this source bundle.
-If the source tarball is good, save it off to a _version directory_, a directory somewhere where you are collecting all of the tarballs you will publish as part of the release candidate.
-For example if you were building a hbase-0.96.0 release candidate, you might call the directory _hbase-0.96.0RC0_.
-Later you will publish this directory as our release candidate up on http://people.apache.org/~YOU. 
+If the above fails, check the rat log.
 
-. Build the binary tarball.
 +
-Next, build the binary tarball.
-Add the `-Prelease`                        profile when building.
-It checks files for licenses and will fail the build if unlicensed files are present.
-Do it in two steps.
+[source,bourne]
+----
+$ grep 'Rat check' patchprocess/mvn_apache_rat.log
+----
 +
-* First install into the local repository
+
+. Create a release tag.
+Presuming you have run basic tests and the rat check passes and all is
+looking good, now is the time to tag the release candidate (you can
+always remove the tag if you need to redo). To tag, do
+what follows, substituting in the version appropriate to your build.
+All tags should be signed tags; i.e. pass the _-s_ option (See
+link:https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work[Signing Your Work]
+for how to set up your git environment for signing).
+
 +
 [source,bourne]
 ----
 
-$ mvn clean install -DskipTests -Prelease
+$ git tag -s 1.5.0-RC0 -m "Tagging the 1.5.0 first Release Candidate (Candidates start at zero)"
 ----
 
-* Next, generate documentation and assemble the tarball.
+Or, if you are making a release, tags should have a _rel/_ prefix to ensure
+they are preserved in the Apache repo as in:
+
+[source,bourne]
+----
++$ git tag -s rel/1.5.0 -m "Tagging the 1.5.0 Release"
+----
+
+Push the (specific) tag (only) so others have access.
 +
 [source,bourne]
 ----
 
+$ git push origin 1.5.0-RC0
+----
++
+For how to delete tags, see
+link:http://www.manikrathee.com/how-to-delete-a-tag-in-git.html[How to Delete a Tag]. It covers
+deleting tags that have not yet been pushed to the remote Apache
+repo as well as deleting tags already pushed to Apache.
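+
The linked article boils down to two commands; a minimal sketch (the tag name is illustrative):

```shell
# Delete the local tag, then delete it from the remote by pushing an
# empty ref to the tag's name.
git tag -d 1.5.0-RC0
git push origin :refs/tags/1.5.0-RC0
```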
+
+
+. Build the source tarball.
++
+Now, build the source tarball. Let's presume we are building the source
+tarball for the tag _1.5.0-RC0_ into _/tmp/hbase-1.5.0-RC0/_
+(This step requires that the mvn and git clean steps described above have just been done).
++
+[source,bourne]
+----
+$ git archive --format=tar.gz --output="/tmp/hbase-1.5.0-RC0/hbase-1.5.0-src.tar.gz" --prefix="hbase-1.5.0/" $git_tag
+----
+
+Above we generate the hbase-1.5.0-src.tar.gz tarball into the
+_/tmp/hbase-1.5.0-RC0_ build output directory (we don't want the _RC0_ in the name or prefix;
+these bits are currently a release candidate, but if the VOTE passes they will become the release, so we do not taint
+the artifact names with _RCX_).
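The effect of `--prefix` can be demonstrated in isolation. This sketch uses a throwaway repository (all names illustrative) to show that the archive's top-level directory carries the release name without the _RC0_ suffix:

```shell
# Sketch: git archive --prefix controls the top-level directory in the tarball.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
printf 'Apache HBase\n' > "$repo/README.txt"
git -C "$repo" add README.txt
git -C "$repo" -c user.email=rm@example.com -c user.name=rm commit -q -m "seed"
git -C "$repo" tag 1.5.0-RC0
# Archive the RC tag, but name the prefix after the release, not the RC:
git -C "$repo" archive --format=tar.gz \
    --output="$repo/hbase-1.5.0-src.tar.gz" --prefix="hbase-1.5.0/" 1.5.0-RC0
first_entry=$(tar -tzf "$repo/hbase-1.5.0-src.tar.gz" | head -1)
echo "$first_entry"
```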
+
+. Build the binary tarball.
+Next, build the binary tarball. Add the `-Prelease` profile when building.
+It runs the license apache-rat check among other rules that help ensure
+all is wholesome. Do it in two steps.
+
+First install into the local repository
+
+[source,bourne]
+----
+
+$ mvn clean install -DskipTests -Prelease
+----
+
+Next, generate documentation and assemble the tarball. Be warned,
+this next step can take a good while, a couple of hours generating site
+documentation.
+
+[source,bourne]
+----
+
 $ mvn install -DskipTests site assembly:single -Prelease
 ----
 
 +
-Otherwise, the build complains that hbase modules are not in the maven repository when you try to do it at once, especially on fresh repository.
+Otherwise, the build complains that hbase modules are not in the maven repository
+when you try to do it all in one step, especially on a fresh repository.
 It seems that you need the install goal in both steps.
 +
-Extract the generated tarball and check it out.
+Extract the generated tarball -- you'll find it under
+_hbase-assembly/target_ -- and check it out.
 Look at the documentation, see if it runs, etc.
-If good, copy the tarball to the above mentioned _version directory_. 
+If good, copy the tarball beside the source tarball in the
+build output directory.
 
-. Create a new tag.
-+
-.Point Release Only
-[NOTE]
-====
-The following step that creates a new tag can be skipped since you've already created the point release tag
-====
-+
-Tag the release at this point since it looks good.
-If you find an issue later, you can delete the tag and start over.
-Release needs to be tagged for the next step.
 
 . Deploy to the Maven Repository.
 +
-Next, deploy HBase to the Apache Maven repository, using the `apache-release` profile instead of the `release` profile when running the `mvn deploy` command.
-This profile invokes the Apache pom referenced by our pom files, and also signs your artifacts published to Maven, as long as the _settings.xml_ is configured correctly, as described in <<mvn.settings.file,mvn.settings.file>>.
+Next, deploy HBase to the Apache Maven repository. Add the
+`apache-release` profile when running the `mvn deploy` command.
+This profile comes from the Apache parent pom referenced by our pom files.
+It signs your artifacts published to Maven, as long as the
+_settings.xml_ is configured correctly, as described in <<maven.settings.xml>>.
+This step depends on the local repository having been populated
+by the just-previous bin tarball build.
+
 +
 [source,bourne]
 ----
 
-$ mvn deploy -DskipTests -Papache-release
+$ mvn deploy -DskipTests -Papache-release -Prelease
 ----
 +
 This command copies all artifacts up to a temporary staging Apache mvn repository in an 'open' state.
-More work needs to be done on these maven artifacts to make them generally available. 
+More work needs to be done on these maven artifacts to make them generally available.
 +
-We do not release HBase tarball to the Apache Maven repository. To avoid deploying the tarball, do not include the `assembly:single` goal in your `mvn deploy` command. Check the deployed artifacts as described in the next section.
+We do not release HBase tarball to the Apache Maven repository. To avoid deploying the tarball, do not
+include the `assembly:single` goal in your `mvn deploy` command. Check the deployed artifacts as described in the next section.
+
+.make_rc.sh
+[NOTE]
+====
+If you run the _dev-support/make_rc.sh_ script, this is as far as it takes you.
+To finish the release, take up the remaining steps manually from here on out.
+====
 
 . Make the Release Candidate available.
 +
 The artifacts are in the maven repository in the staging area in the 'open' state.
 While in this 'open' state you can check out what you've published to make sure all is good.
-To do this, login at link:http://repository.apache.org[repository.apache.org]                        using your Apache ID.
-Find your artifacts in the staging repository.
-Browse the content.
-Make sure all artifacts made it up and that the poms look generally good.
-If it checks out, 'close' the repo.
-This will make the artifacts publically available.
-You will receive an email with the URL to give out for the temporary staging repository for others to use trying out this new release candidate.
-Include it in the email that announces the release candidate.
-Folks will need to add this repo URL to their local poms or to their local _settings.xml_ file to pull the published release candidate artifacts.
-If the published artifacts are incomplete or have problems, just delete the 'open' staged artifacts.
+To do this, log in to Apache's Nexus at link:https://repository.apache.org[repository.apache.org] using your Apache ID.
+Find your artifacts in the staging repository. Click on 'Staging Repositories' and look for a new one ending in "hbase" with a status of 'Open', select it.
+Use the tree view to expand the list of repository contents and inspect if the artifacts you expect are present. Check the POMs.
+As long as the staging repo is open you can re-upload if something is missing or built incorrectly.
++
+If something is seriously wrong and you would like to back out the upload, you can use the 'Drop' button to drop and delete the staging repository.
+Sometimes the upload fails in the middle. This is another reason you might have to 'Drop' the upload from the staging repository.
++
+If it checks out, close the repo using the 'Close' button. The repository must be closed before a public URL to it becomes available. It may take a few minutes for the repository to close. Once complete you'll see a public URL to the repository in the Nexus UI. You may also receive an email with the URL. Provide the URL to the temporary staging repository in the email that announces the release candidate.
+(Folks will need to add this repo URL to their local poms or to their local _settings.xml_ file to pull the published release candidate artifacts.)
++
+When the release vote concludes successfully, return here and click the 'Release' button to release the artifacts to central. The release process will automatically drop and delete the staging repository.
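Reviewers pulling the candidate artifacts can add the staging repository to a profile in their local _settings.xml_. A minimal sketch follows; the profile name is illustrative and the URL is a placeholder for the staging repository URL from the RC announcement email:

```xml
<!-- Hypothetical profile for ~/.m2/settings.xml; substitute the real
     staging repository URL from the RC announcement. -->
<profiles>
  <profile>
    <id>hbase-rc</id>
    <repositories>
      <repository>
        <id>hbase-staging</id>
        <url>https://repository.apache.org/content/repositories/orgapachehbase-NNNN/</url>
      </repository>
    </repositories>
  </profile>
</profiles>
```

Activate it when building downstream tests, for example with `mvn -Phbase-rc ...`.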
 +
 .hbase-downstreamer
 [NOTE]
@@ -665,60 +778,57 @@ If the published artifacts are incomplete or have problems, just delete the 'ope
 See the link:https://github.com/saintstack/hbase-downstreamer[hbase-downstreamer] test for a simple example of a project that is downstream of HBase and depends on it.
 Check it out and run its simple test to make sure maven artifacts are properly deployed to the maven repository.
 Be sure to edit the pom to point to the proper staging repository.
-Make sure you are pulling from the repository when tests run and that you are not getting from your local repository, by either passing the `-U` flag or deleting your local repo content and check maven is pulling from remote out of the staging repository. 
+Make sure that when the tests run you are pulling from the staging repository and not from your local repository, either by passing the `-U` flag or by deleting your local repo content, and check that maven is pulling from the remote staging repository.
 ====
-+
-See link:http://www.apache.org/dev/publishing-maven-artifacts.html[Publishing Maven Artifacts] for some pointers on this maven staging process.
-+
-NOTE: We no longer publish using the maven release plugin.
-Instead we do +mvn deploy+.
-It seems to give us a backdoor to maven release publishing.
-If there is no _-SNAPSHOT_                            on the version string, then we are 'deployed' to the apache maven repository staging directory from which we can publish URLs for candidates and later, if they pass, publish as release (if a _-SNAPSHOT_ on the version string, deploy will put the artifacts up into apache snapshot repos). 
-+
+
+See link:https://www.apache.org/dev/publishing-maven-artifacts.html[Publishing Maven Artifacts] for some pointers on this maven staging process.
+
 If the HBase version ends in `-SNAPSHOT`, the artifacts go elsewhere.
 They are put into the Apache snapshots repository directly and are immediately available.
 If you are making a SNAPSHOT release, this is what you want to happen.
 
-. If you used the _make_rc.sh_ script instead of doing
-  the above manually, do your sanity checks now.
-+
-At this stage, you have two tarballs in your 'version directory' and a set of artifacts in a staging area of the maven repository, in the 'closed' state.
-These are publicly accessible in a temporary staging repository whose URL you should have gotten in an email.
-The above mentioned script, _make_rc.sh_ does all of the above for you minus the check of the artifacts built, the closing of the staging repository up in maven, and the tagging of the release.
-If you run the script, do your checks at this stage verifying the src and bin tarballs and checking what is up in staging using hbase-downstreamer project.
-Tag before you start the build.
-You can always delete it if the build goes haywire. 
-
-. Sign, upload, and 'stage' your version directory to link:http://people.apache.org[people.apache.org] (TODO:
-  There is a new location to stage releases using svnpubsub.  See
-  (link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please delete old releases from mirroring system]).
-+
-If all checks out, next put the _version directory_ up on link:http://people.apache.org[people.apache.org].
-You will need to sign and fingerprint them before you push them up.
-In the _version directory_ run the following commands: 
-+
+At this stage, you have two tarballs in your 'build output directory' and a set of artifacts in a staging area of the maven repository, in the 'closed' state.
+Next sign, fingerprint, and then 'stage' your release candidate build output directory via svnpubsub by committing
+your directory to link:https://dist.apache.org/repos/dist/dev/hbase/[The 'dev' distribution directory] (See comments on link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please delete old releases from mirroring system] but in essence it is an svn checkout of https://dist.apache.org/repos/dist/dev/hbase -- releases are at https://dist.apache.org/repos/dist/release/hbase). In the _build output directory_ run the following commands:
+
 [source,bourne]
 ----
 
-$ for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done
 $ for i in *.tar.gz; do echo $i; gpg --print-md MD5 $i > $i.md5 ; done
 $ for i in *.tar.gz; do echo $i; gpg --print-md SHA512 $i > $i.sha ; done
 $ for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i  ; done
 $ cd ..
-# Presuming our 'version directory' is named 0.96.0RC0, now copy it up to people.apache.org.
-$ rsync -av 0.96.0RC0 people.apache.org:public_html
+# Presuming our 'build output directory' is named 1.5.0RC0, copy it to the svn checkout of the dist dev dir
+# in this case named hbase.dist.dev.svn
+$ cd /Users/stack/checkouts/hbase.dist.dev.svn
+$ svn info
+Path: .
+Working Copy Root Path: /Users/stack/checkouts/hbase.dist.dev.svn
+URL: https://dist.apache.org/repos/dist/dev/hbase
+Repository Root: https://dist.apache.org/repos/dist
+Repository UUID: 0d268c88-bc11-4956-87df-91683dc98e59
+Revision: 15087
+Node Kind: directory
+Schedule: normal
+Last Changed Author: ndimiduk
+Last Changed Rev: 15045
+Last Changed Date: 2016-08-28 11:13:36 -0700 (Sun, 28 Aug 2016)
+$ mv 1.5.0RC0 /Users/stack/checkouts/hbase.dist.dev.svn
+$ svn add 1.5.0RC0
+$ svn commit ...
 ----
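Before committing, it can be worth cross-checking the generated digests with an independent tool. This sketch uses coreutils `sha512sum` on a stand-in file (names are illustrative; the real artifacts are the tarballs in your build output directory):

```shell
# Sketch: generate and verify a SHA-512 digest for a stand-in artifact.
set -e
dir=$(mktemp -d)
printf 'pretend tarball contents\n' > "$dir/hbase-1.5.0-src.tar.gz"
( cd "$dir" && sha512sum hbase-1.5.0-src.tar.gz > hbase-1.5.0-src.tar.gz.sha512 )
# Verification fails loudly if the artifact and digest do not match:
check=$(cd "$dir" && sha512sum -c hbase-1.5.0-src.tar.gz.sha512)
echo "$check"
```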
 +
-Make sure the link:http://people.apache.org[people.apache.org] directory is showing and that the mvn repo URLs are good.
-Announce the release candidate on the mailing list and call a vote. 
+Ensure it actually gets published by checking link:https://dist.apache.org/repos/dist/dev/hbase/[https://dist.apache.org/repos/dist/dev/hbase/].
+
+Announce the release candidate on the mailing list and call a vote.
 
 
 [[maven.snapshot]]
 === Publishing a SNAPSHOT to maven
 
-Make sure your _settings.xml_ is set up properly, as in <<mvn.settings.file,mvn.settings.file>>.
+Make sure your _settings.xml_ is set up properly (see <<maven.settings.xml>>).
 Make sure the hbase version includes `-SNAPSHOT` as a suffix.
-Following is an example of publishing SNAPSHOTS of a release that had an hbase version of 0.96.0 in its poms.
+Following is an example of publishing SNAPSHOTS of a release that had an hbase version of 1.5.0 in its poms.
 
 [source,bourne]
 ----
@@ -729,7 +839,7 @@ Following is an example of publishing SNAPSHOTS of a release that had an hbase v
 
 The _make_rc.sh_ script mentioned above (see <<maven.release,maven.release>>) can help you publish `SNAPSHOTS`.
 Make sure your `hbase.version` has a `-SNAPSHOT`                suffix before running the script.
-It will put a snapshot up into the apache snapshot repository for you. 
+It will put a snapshot up into the apache snapshot repository for you.
 
 [[hbase.rc.voting]]
 == Voting on Release Candidates
@@ -744,7 +854,7 @@ PMC members, please read this WIP doc on policy voting for a release candidate,
                 requirements of the ASF policy on releases._ Regards the latter, run +mvn apache-rat:check+ to verify all files are suitably licensed.
 See link:http://search-hadoop.com/m/DHED4dhFaU[HBase, mail # dev - On
                 recent discussion clarifying ASF release policy].
-for how we arrived at this process. 
+for how we arrived at this process.
 
 [[documentation]]
 == Generating the HBase Reference Guide
@@ -752,10 +862,10 @@ for how we arrived at this process.
 The manual is marked up using Asciidoc.
 We then use the link:http://asciidoctor.org/docs/asciidoctor-maven-plugin/[Asciidoctor maven plugin] to transform the markup to html.
 This plugin is run when you specify the +site+ goal as in when you run +mvn site+.
-See <<appendix_contributing_to_documentation,appendix contributing to documentation>> for more information on building the documentation. 
+See <<appendix_contributing_to_documentation,appendix contributing to documentation>> for more information on building the documentation.
 
 [[hbase.org]]
-== Updating link:http://hbase.apache.org[hbase.apache.org]
+== Updating link:https://hbase.apache.org[hbase.apache.org]
 
 [[hbase.org.site.contributing]]
 === Contributing to hbase.apache.org
@@ -763,26 +873,9 @@ See <<appendix_contributing_to_documentation,appendix contributing to documentat
 See <<appendix_contributing_to_documentation,appendix contributing to documentation>> for more information on contributing to the documentation or website.
 
 [[hbase.org.site.publishing]]
-=== Publishing link:http://hbase.apache.org[hbase.apache.org]
+=== Publishing link:https://hbase.apache.org[hbase.apache.org]
 
-As of link:https://issues.apache.org/jira/browse/INFRA-5680[INFRA-5680 Migrate apache hbase website], to publish the website, build it using Maven, and then deploy it over a checkout of _https://svn.apache.org/repos/asf/hbase/hbase.apache.org/trunk_                and check in your changes.
-The script _dev-scripts/publish_hbase_website.sh_ is provided to automate this process and to be sure that stale files are removed from SVN.
-Review the script even if you decide to publish the website manually.
-Use the script as follows:
-
-----
-$ publish_hbase_website.sh -h
-Usage: publish_hbase_website.sh [-i | -a] [-g <dir>] [-s <dir>]
- -h          Show this message
- -i          Prompts the user for input
- -a          Does not prompt the user. Potentially dangerous.
- -g          The local location of the HBase git repository
- -s          The local location of the HBase svn checkout
- Either --interactive or --silent is required.
- Edit the script to set default Git and SVN directories.
-----
-
-NOTE: The SVN commit takes a long time.
+See <<website_publish>> for instructions on publishing the website and documentation.
 
 [[hbase.tests]]
 == Tests
@@ -806,7 +899,7 @@ For any other module, for example `hbase-common`, the tests must be strict unit
 
 The HBase shell and its tests are predominantly written in jruby.
 In order to make these tests run as a part of the standard build, there is a single JUnit test, `TestShell`, that takes care of loading the jruby implemented tests and running them.
-You can run all of these tests from the top level with: 
+You can run all of these tests from the top level with:
 
 [source,bourne]
 ----
@@ -816,7 +909,7 @@ You can run all of these tests from the top level with:
 
 Alternatively, you may limit the shell tests that run using the system variable `shell.test`.
 This value should specify the ruby literal equivalent of a particular test case by name.
-For example, the tests that cover the shell commands for altering tables are contained in the test case `AdminAlterTableTest`        and you can run them with: 
+For example, the tests that cover the shell commands for altering tables are contained in the test case `AdminAlterTableTest`        and you can run them with:
 
 [source,bourne]
 ----
@@ -826,7 +919,7 @@ For example, the tests that cover the shell commands for altering tables are con
 
 You may also use a link:http://docs.ruby-doc.com/docs/ProgrammingRuby/html/language.html#UJ[Ruby Regular Expression
       literal] (in the `/pattern/` style) to select a set of test cases.
-You can run all of the HBase admin related tests, including both the normal administration and the security administration, with the command: 
+You can run all of the HBase admin related tests, including both the normal administration and the security administration, with the command:
 
 [source,bourne]
 ----
@@ -834,7 +927,7 @@ You can run all of the HBase admin related tests, including both the normal admi
       mvn clean test -Dtest=TestShell -Dshell.test=/.*Admin.*Test/
 ----
 
-In the event of a test failure, you can see details by examining the XML version of the surefire report results 
+In the event of a test failure, you can see details by examining the XML version of the surefire report results
 
 [source,bourne]
 ----
@@ -876,7 +969,8 @@ Also, keep in mind that if you are running tests in the `hbase-server` module yo
 [[hbase.unittests]]
 === Unit Tests
 
-Apache HBase unit tests are subdivided into four categories: small, medium, large, and integration with corresponding JUnit link:http://www.junit.org/node/581[categories]: `SmallTests`, `MediumTests`, `LargeTests`, `IntegrationTests`.
+Apache HBase test cases are subdivided into four categories: small, medium, large, and
+integration with corresponding JUnit link:https://github.com/junit-team/junit4/wiki/Categories[categories]: `SmallTests`, `MediumTests`, `LargeTests`, `IntegrationTests`.
 JUnit categories are denoted using java annotations and look like this in your unit test code.
 
 [source,java]
@@ -891,51 +985,53 @@ public class TestHRegionInfo {
 }
 ----
 
-The above example shows how to mark a unit test as belonging to the `small` category.
-All unit tests in HBase have a categorization. 
+The above example shows how to mark a test case as belonging to the `small` category.
+All test cases in HBase should have a categorization.
 
-The first three categories, `small`, `medium`, and `large`, are for tests run when you type `$ mvn test`.
+The first three categories, `small`, `medium`, and `large`, are for test cases which run when you
+type `$ mvn test`.
 In other words, these three categorizations are for HBase unit tests.
 The `integration` category is not for unit tests, but for integration tests.
 These are run when you invoke `$ mvn verify`.
 Integration tests are described in <<integration.tests,integration.tests>>.
 
-HBase uses a patched maven surefire plugin and maven profiles to implement its unit test characterizations. 
+HBase uses a patched maven surefire plugin and maven profiles to implement its unit test characterizations.
 
-Keep reading to figure which annotation of the set small, medium, and large to put on your new HBase unit test. 
+Keep reading to figure out which annotation of the set small, medium, and large to put on your new
+HBase test case.
 
 .Categorizing Tests
 Small Tests (((SmallTests)))::
-  _Small_ tests are executed in a shared JVM.
-  We put in this category all the tests that can be executed quickly in a shared JVM.
-  The maximum execution time for a small test is 15 seconds, and small tests should not use a (mini)cluster.
+  _Small_ test cases are executed in a shared JVM and individual test cases should run in 15 seconds
+   or less; i.e. a link:https://en.wikipedia.org/wiki/JUnit[junit test fixture], a java object made
+   up of test methods, should finish in under 15 seconds. These test cases cannot use a mini cluster.
+   These are run as part of patch pre-commit.
 
 Medium Tests (((MediumTests)))::
-  _Medium_ tests represent tests that must be executed before proposing a patch.
-  They are designed to run in less than 30 minutes altogether, and are quite stable in their results.
-  They are designed to last less than 50 seconds individually.
-  They can use a cluster, and each of them is executed in a separate JVM. 
+  _Medium_ test cases are executed in separate JVMs and individual test cases should run in 50 seconds
+   or less. Together, they should take less than 30 minutes, and are quite stable in their results.
+   These test cases can use a mini cluster. These are run as part of patch pre-commit.
 
 Large Tests (((LargeTests)))::
-  _Large_ tests are everything else.
+  _Large_ test cases are everything else.
   They are typically large-scale tests, regression tests for specific bugs, timeout tests, performance tests.
   They are executed before a commit on the pre-integration machines.
-  They can be run on the developer machine as well. 
+  They can be run on the developer machine as well.
 
 Integration Tests (((IntegrationTests)))::
   _Integration_ tests are system level tests.
-  See <<integration.tests,integration.tests>> for more info. 
+  See <<integration.tests,integration.tests>> for more info.
 
 [[hbase.unittests.cmds]]
 === Running tests
 
 [[hbase.unittests.cmds.test]]
-==== Default: small and medium category tests 
+==== Default: small and medium category tests
 
 Running `mvn test` will execute all small tests in a single JVM (no fork) and then medium tests in a separate JVM for each test instance.
 Medium tests are NOT executed if there is an error in a small test.
 Large tests are NOT executed.
-There is one report for small tests, and one report for medium tests if they are executed. 
+There is one report for small tests, and one report for medium tests if they are executed.
 
 [[hbase.unittests.cmds.test.runalltests]]
 ==== Running all tests
@@ -943,38 +1039,38 @@ There is one report for small tests, and one report for medium tests if they are
 Running `mvn test -P runAllTests` will execute small tests in a single JVM then medium and large tests in a separate JVM for each test.
 Medium and large tests are NOT executed if there is an error in a small test.
 Large tests are NOT executed if there is an error in a small or medium test.
-There is one report for small tests, and one report for medium and large tests if they are executed. 
+There is one report for small tests, and one report for medium and large tests if they are executed.
 
 [[hbase.unittests.cmds.test.localtests.mytest]]
 ==== Running a single test or all tests in a package
 
-To run an individual test, e.g. `MyTest`, rum `mvn test -Dtest=MyTest` You can also pass multiple, individual tests as a comma-delimited list: 
+To run an individual test, e.g. `MyTest`, run `mvn test -Dtest=MyTest`. You can also pass multiple, individual tests as a comma-delimited list:
 [source,bash]
 ----
 mvn test  -Dtest=MyTest1,MyTest2,MyTest3
 ----
-You can also pass a package, which will run all tests under the package: 
+You can also pass a package, which will run all tests under the package:
 [source,bash]
 ----
 mvn test '-Dtest=org.apache.hadoop.hbase.client.*'
-----                
+----
 
 When `-Dtest` is specified, the `localTests` profile will be used.
 It will use the official release of maven surefire, rather than our custom surefire plugin, and the old connector (The HBase build uses a patched version of the maven surefire plugin). Each junit test is executed in a separate JVM (A fork per test class). There is no parallelization when tests are running in this mode.
 You will see a new message at the end of the report: `"[INFO] Tests are skipped"`.
 It's harmless.
-However, you need to make sure the sum of `Tests run:` in the `Results:` section of test reports matching the number of tests you specified because no error will be reported when a non-existent test case is specified. 
+However, you need to make sure the sum of `Tests run:` in the `Results:` section of the test reports matches the number of tests you specified, because no error will be reported when a non-existent test case is specified.
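Tallying the `Tests run:` lines can be scripted. The following sketch sums the counts across a pair of fabricated report files standing in for the real _target/surefire-reports_ contents:

```shell
# Sketch: sum 'Tests run:' across surefire-style report files
# (the two reports here are fabricated stand-ins).
set -e
reports=$(mktemp -d)
printf 'Results:\nTests run: 3, Failures: 0, Errors: 0, Skipped: 0\n' > "$reports/TestA.txt"
printf 'Results:\nTests run: 2, Failures: 0, Errors: 0, Skipped: 0\n' > "$reports/TestB.txt"
total=$(grep -h 'Tests run:' "$reports"/*.txt \
  | sed 's/.*Tests run: \([0-9]*\),.*/\1/' \
  | awk '{ sum += $1 } END { print sum }')
echo "total tests run: $total"
```

Compare the printed total against the number of tests you asked `-Dtest` to run.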
 
 [[hbase.unittests.cmds.test.profiles]]
 ==== Other test invocation permutations
 
-Running `mvn test -P runSmallTests` will execute "small" tests only, using a single JVM. 
+Running `mvn test -P runSmallTests` will execute "small" tests only, using a single JVM.
 
-Running `mvn test -P runMediumTests` will execute "medium" tests only, launching a new JVM for each test-class. 
+Running `mvn test -P runMediumTests` will execute "medium" tests only, launching a new JVM for each test-class.
 
-Running `mvn test -P runLargeTests` will execute "large" tests only, launching a new JVM for each test-class. 
+Running `mvn test -P runLargeTests` will execute "large" tests only, launching a new JVM for each test-class.
 
-For convenience, you can run `mvn test -P runDevTests` to execute both small and medium tests, using a single JVM. 
+For convenience, you can run `mvn test -P runDevTests` to execute both small and medium tests, using a single JVM.
 
 [[hbase.unittests.test.faster]]
 ==== Running tests faster
@@ -996,7 +1092,7 @@ $ sudo mkdir /ram2G
 sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
 ----
 
-You can then use it to run all HBase tests on 2.0 with the command: 
+You can then use it to run all HBase tests on 2.0 with the command:
 
 ----
 mvn test
@@ -1004,7 +1100,7 @@ mvn test
                         -Dtest.build.data.basedirectory=/ram2G
 ----
 
-On earlier versions, use: 
+On earlier versions, use:
 
 ----
 mvn test
@@ -1023,7 +1119,7 @@ It must be executed from the directory which contains the _pom.xml_.
 For example running +./dev-support/hbasetests.sh+ will execute small and medium tests.
 Running +./dev-support/hbasetests.sh
                         runAllTests+ will execute all tests.
-Running +./dev-support/hbasetests.sh replayFailed+ will rerun the failed tests a second time, in a separate jvm and without parallelisation. 
+Running +./dev-support/hbasetests.sh replayFailed+ will rerun the failed tests a second time, in a separate jvm and without parallelisation.
 
 [[hbase.unittests.resource.checker]]
 ==== Test Resource Checker(((Test ResourceChecker)))
@@ -1033,7 +1129,7 @@ Check the _*-out.txt_ files). The resources counted are the number of threads, t
 If the number has increased, it adds a _LEAK?_ comment in the logs.
 As you can have an HBase instance running in the background, some threads can be deleted/created without any specific action in the test.
 However, if the test does not work as expected, or if the test should not impact these resources, it's worth checking these log lines [computeroutput]+...hbase.ResourceChecker(157): before...+                    and [computeroutput]+...hbase.ResourceChecker(157): after...+.
-For example: 
+For example:
 
 ----
 2012-09-26 09:22:15,315 INFO [pool-1-thread-1]
@@ -1061,9 +1157,7 @@ ConnectionCount=1 (was 1)
 
 * All tests must be categorized, if not they could be skipped.
 * All tests should be written to be as fast as possible.
-* Small category tests should last less than 15 seconds, and must not have any side effect.
-* Medium category tests should last less than 50 seconds.
-* Large category tests should last less than 3 minutes.
+* See <<hbase.unittests,hbase.unittests>> for test case categories and corresponding timeouts.
   This should ensure a good parallelization for people using it, and ease the analysis when the test fails.
 
 [[hbase.tests.sleeps]]
@@ -1076,10 +1170,10 @@ This allows understanding what the test is waiting for.
 Moreover, the test will work whatever the machine performance is.
 Sleep should be minimal to be as fast as possible.
 Waiting for a variable should be done in a 40ms sleep loop.
-Waiting for a socket operation should be done in a 200 ms sleep loop. 
+Waiting for a socket operation should be done in a 200 ms sleep loop.
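The 40 ms sleep-loop guideline looks like this in code. Below is a minimal sketch; the class and method names are illustrative, not HBase API:

```java
import java.util.function.BooleanSupplier;

public class WaitUtil {
  /** Poll the condition in a 40 ms sleep loop until it holds or timeoutMs elapses. */
  public static boolean waitFor(BooleanSupplier condition, long timeoutMs) {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!condition.getAsBoolean()) {
      if (System.currentTimeMillis() >= deadline) {
        return false; // timed out waiting for the condition
      }
      try {
        Thread.sleep(40); // 40 ms sleep loop, per the guideline above
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    long start = System.currentTimeMillis();
    // The condition becomes true after ~100 ms; the loop observes it well within 1 s.
    System.out.println(waitFor(() -> System.currentTimeMillis() - start >= 100, 1000));
  }
}
```

The same shape with a 200 ms sleep applies when waiting on a socket operation.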
 
 [[hbase.tests.cluster]]
-==== Tests using a cluster 
+==== Tests using a cluster
 
 Tests using an HRegion do not have to start a cluster: a region can use the local file system.
 Starting/stopping a cluster costs around 10 seconds.
 Clusters should not be started per test method but per test class.
 A started cluster must be shut down using [method]+HBaseTestingUtility#shutdownMiniCluster+, which cleans the directories.
 As much as possible, tests should use the default settings for the cluster.
 When they don't, they should document it.
-This will allow to share the cluster later. 
+This will make it possible to share the cluster later.
+
+[[hbase.tests.example.code]]
+==== Tests Skeleton Code
+
+Here is test skeleton code with categorization and a category-based timeout rule to copy, paste, and use as a basis for your test contribution.
+[source,java]
+----
+/**
+ * Describe what this testcase tests. Talk about resources initialized in @BeforeClass (before
+ * any test is run) and before each test is run, etc.
+ */
+// Specify the category as explained in <<hbase.unittests,hbase.unittests>>.
+@Category(SmallTests.class)
+public class TestExample {
+  // Replace the TestExample.class in the below with the name of your test fixture class.
+  private static final Log LOG = LogFactory.getLog(TestExample.class);
+
+  // Handy test rule that allows you to subsequently get the name of the current method. See
+  // down in 'testExampleFoo()' where we use it to log the current test's name.
+  @Rule public TestName testName = new TestName();
+
+  // The below rule does two things. It decides the timeout based on the category
+  // (small/medium/large) of the testcase. This @Rule requires that the full testcase runs
+  // within this timeout irrespective of individual test methods' times. The second
+  // feature is we'll dump in the log when the test is done a count of threads still
+  // running.
+  @Rule public TestRule timeout = CategoryBasedTimeout.builder().
+    withTimeout(this.getClass()).withLookingForStuckThread(true).build();
+
+  @Before
+  public void setUp() throws Exception {
+  }
+
+  @After
+  public void tearDown() throws Exception {
+  }
+
+  @Test
+  public void testExampleFoo() {
+    LOG.info("Running test " + testName.getMethodName());
+  }
+}
+----
 
 [[integration.tests]]
 === Integration Tests
@@ -1095,16 +1232,16 @@ This will allow to share the cluster later.
 HBase integration/system tests are tests that are beyond HBase unit tests.
 They are generally long-lasting and sizeable (the test can be asked to load 1M rows or 1B rows), targetable (they can take configuration that will point them at the ready-made cluster they are to run against; integration tests do not include cluster start/stop code), and, in verifying success, rely on public APIs only; they do not attempt to examine server internals to assert success/fail.
 Integration tests are what you would run when you need more elaborate proofing of a release candidate beyond what unit tests can do.
-They are not generally run on the Apache Continuous Integration build server, however, some sites opt to run integration tests as a part of their continuous testing on an actual cluster. 
+They are not generally run on the Apache Continuous Integration build server, however, some sites opt to run integration tests as a part of their continuous testing on an actual cluster.
 
 Integration tests currently live under the _src/test_                directory in the hbase-it submodule and will match the regex: _**/IntegrationTest*.java_.
-All integration tests are also annotated with `@Category(IntegrationTests.class)`. 
+All integration tests are also annotated with `@Category(IntegrationTests.class)`.
 
 Integration tests can be run in two modes: using a mini cluster, or against an actual distributed cluster.
 Maven failsafe is used to run the tests using the mini cluster.
 IntegrationTestsDriver class is used for executing the tests against a distributed cluster.
 Integration tests SHOULD NOT assume that they are running against a mini cluster, and SHOULD NOT use private API's to access cluster state.
-To interact with the distributed or mini cluster uniformly, `IntegrationTestingUtility`, and `HBaseCluster` classes, and public client API's can be used. 
+To interact with the distributed or mini cluster uniformly, `IntegrationTestingUtility`, and `HBaseCluster` classes, and public client API's can be used.
 
 On a distributed cluster, integration tests that use ChaosMonkey or otherwise manipulate services thru cluster manager (e.g.
 restart regionservers) use SSH to do it.
@@ -1118,15 +1255,15 @@ The argument 1 (%1$s) is SSH options set the via opts setting or via environment
 ----
 /usr/bin/ssh %1$s %2$s%3$s%4$s "su hbase - -c \"%5$s\""
 ----
-That way, to kill RS (for example) integration tests may run: 
+That way, to kill RS (for example) integration tests may run:
 [source,bash]
 ----
 {/usr/bin/ssh some-hostname "su hbase - -c \"ps aux | ... | kill ...\""}
 ----
-The command is logged in the test logs, so you can verify it is correct for your environment. 
+The command is logged in the test logs, so you can verify it is correct for your environment.
 
 To disable the running of Integration Tests, pass the following profile on the command line `-PskipIntegrationTests`.
-For example, 
+For example,
 [source]
 ----
 $ mvn clean install test -Dtest=TestZooKeeper  -PskipIntegrationTests
@@ -1136,7 +1273,7 @@ $ mvn clean install test -Dtest=TestZooKeeper  -PskipIntegrationTests
 ==== Running integration tests against mini cluster
 
 HBase 0.92 added a `verify` maven target.
-Invoking it, for example by doing `mvn verify`, will run all the phases up to and including the verify phase via the maven link:http://maven.apache.org/plugins/maven-failsafe-plugin/[failsafe
+Invoking it, for example by doing `mvn verify`, will run all the phases up to and including the verify phase via the maven link:https://maven.apache.org/plugins/maven-failsafe-plugin/[failsafe
                         plugin], running all the above mentioned HBase unit tests as well as tests that are in the HBase integration test group.
 After you have completed +mvn install -DskipTests+ You can run just the integration tests by invoking:
 
@@ -1148,9 +1285,9 @@ mvn verify
 ----
 
 If you just want to run the integration tests in top-level, you need to run two commands.
-First: +mvn failsafe:integration-test+ This actually runs ALL the integration tests. 
+First: +mvn failsafe:integration-test+ This actually runs ALL the integration tests.
 
-NOTE: This command will always output `BUILD SUCCESS` even if there are test failures. 
+NOTE: This command will always output `BUILD SUCCESS` even if there are test failures.
 
 At this point, you could grep the output by hand looking for failed tests.
 However, maven will do this for us; just use: +mvn
@@ -1161,19 +1298,19 @@ However, maven will do this for us; just use: +mvn
 
 This is very similar to how you specify running a subset of unit tests (see above), but use the property `it.test` instead of `test`.
 To just run `IntegrationTestClassXYZ.java`, use: +mvn
-                            failsafe:integration-test -Dit.test=IntegrationTestClassXYZ+                        The next thing you might want to do is run groups of integration tests, say all integration tests that are named IntegrationTestClassX*.java: +mvn failsafe:integration-test -Dit.test=*ClassX*+ This runs everything that is an integration test that matches *ClassX*. This means anything matching: "**/IntegrationTest*ClassX*". You can also run multiple groups of integration tests using comma-delimited lists (similar to unit tests). Using a list of matches still supports full regex matching for each of the groups.This would look something like: +mvn
-                            failsafe:integration-test -Dit.test=*ClassX*, *ClassY+                    
+                            failsafe:integration-test -Dit.test=IntegrationTestClassXYZ+.
+The next thing you might want to do is run groups of integration tests, say all integration tests that are named IntegrationTestClassX*.java: +mvn failsafe:integration-test -Dit.test=*ClassX*+.
+This runs everything that is an integration test that matches *ClassX*, which means anything matching "**/IntegrationTest*ClassX*".
+You can also run multiple groups of integration tests using comma-delimited lists (similar to unit tests); a list of matches still supports full regex matching for each of the groups.
+This would look something like: +mvn failsafe:integration-test -Dit.test=*ClassX*, *ClassY+
 
 [[maven.build.commands.integration.tests.distributed]]
 ==== Running integration tests against distributed cluster
 
 If you have an already-setup HBase cluster, you can launch the integration tests by invoking the class `IntegrationTestsDriver`.
 You may have to run test-compile first.
-The configuration will be picked by the bin/hbase script. 
+The configuration will be picked by the bin/hbase script.
 [source,bourne]
 ----
 mvn test-compile
----- 
+----
 Then launch the tests with:
 
 [source,bourne]
@@ -1186,26 +1323,30 @@ Running the IntegrationTestsDriver without any argument will launch tests found
 See the usage, by passing -h, to see how to filter test classes.
 You can pass a regex which is checked against the full class name; so, part of class name can be used.
 IntegrationTestsDriver uses Junit to run the tests.
-Currently there is no support for running integration tests against a distributed cluster using maven (see link:https://issues.apache.org/jira/browse/HBASE-6201[HBASE-6201]). 
+Currently there is no support for running integration tests against a distributed cluster using maven (see link:https://issues.apache.org/jira/browse/HBASE-6201[HBASE-6201]).
 
 The tests interact with the distributed cluster by using the methods in the `DistributedHBaseCluster` (implementing `HBaseCluster`) class, which in turn uses a pluggable `ClusterManager`.
 Concrete implementations provide actual functionality for carrying out deployment-specific and environment-dependent tasks (SSH, etc). The default `ClusterManager` is `HBaseClusterManager`, which uses SSH to remotely execute start/stop/kill/signal commands, and assumes some posix commands (ps, etc). Also assumes the user running the test has enough "power" to start/stop servers on the remote machines.
 By default, it picks up `HBASE_SSH_OPTS`, `HBASE_HOME`, `HBASE_CONF_DIR` from the env, and uses `bin/hbase-daemon.sh` to carry out the actions.
-Currently tarball deployments, deployments which uses _hbase-daemons.sh_, and link:http://incubator.apache.org/ambari/[Apache Ambari]                    deployments are supported.
+Currently tarball deployments, deployments which use _hbase-daemons.sh_, and link:https://incubator.apache.org/ambari/[Apache Ambari]                    deployments are supported.
 _/etc/init.d/_ scripts are not supported for now, but it can be easily added.
-For other deployment options, a ClusterManager can be implemented and plugged in. 
+For other deployment options, a ClusterManager can be implemented and plugged in.
 
 [[maven.build.commands.integration.tests.destructive]]
-==== Destructive integration / system tests
+==== Destructive integration / system tests (ChaosMonkey)
+
+HBase 0.96 introduced a tool named `ChaosMonkey`, modeled after
+link:https://netflix.github.io/chaosmonkey/[the same-named tool by Netflix].
+ChaosMonkey simulates real-world
+faults in a running cluster by killing or disconnecting random servers, or injecting
+other failures into the environment. You can use ChaosMonkey as a stand-alone tool
+to run a policy while other tests are running. In some environments, ChaosMonkey is
+always running, in order to constantly check that high availability and fault tolerance
+are working as expected.
 
-In 0.96, a tool named `ChaosMonkey` has been introduced.
-It is modeled after the link:http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html[same-named tool by Netflix].
-Some of the tests use ChaosMonkey to simulate faults in the running cluster in the way of killing random servers, disconnecting servers, etc.
-ChaosMonkey can also be used as a stand-alone tool to run a (misbehaving) policy while you are running other tests. 
+ChaosMonkey defines *Actions* and *Policies*.
 
-ChaosMonkey defines Action's and Policy's.
-Actions are sequences of events.
-We have at least the following actions:
+Actions:: Actions are predefined sequences of events, such as the following:
 
 * Restart active master (sleep 5 sec)
 * Restart random regionserver (sleep 5 sec)
@@ -1215,23 +1356,17 @@ We have at least the following actions:
 * Batch restart of 50% of regionservers (sleep 5 sec)
 * Rolling restart of 100% of regionservers (sleep 5 sec)
 
-Policies on the other hand are responsible for executing the actions based on a strategy.
-The default policy is to execute a random action every minute based on predefined action weights.
-ChaosMonkey executes predefined named policies until it is stopped.
-More than one policy can be active at any time. 
+Policies:: A policy is a strategy for executing one or more actions. The default policy
+executes a random action every minute based on predefined action weights.
+A given policy will be executed until ChaosMonkey is interrupted.
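The weighted random choice such a policy makes each period can be sketched in plain Java. This is only an illustration of the idea; the class and method names here are hypothetical, not the actual ChaosMonkey implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Picks an action name with probability proportional to its configured weight,
// the way a periodic random-action policy chooses what to run each interval.
public class WeightedActionPicker {
  private final Map<String, Integer> weights = new LinkedHashMap<>();
  private final Random random;
  private int totalWeight = 0;

  public WeightedActionPicker(Random random) {
    this.random = random;
  }

  public void addAction(String name, int weight) {
    weights.put(name, weight);
    totalWeight += weight;
  }

  public String pick() {
    // Roll a number in [0, totalWeight) and walk the actions until it is spent.
    int roll = random.nextInt(totalWeight);
    for (Map.Entry<String, Integer> entry : weights.entrySet()) {
      roll -= entry.getValue();
      if (roll < 0) {
        return entry.getKey();
      }
    }
    throw new IllegalStateException("weights changed during pick");
  }
}
```

An action with twice the weight of another is chosen roughly twice as often; a policy then simply sleeps for its period and calls `pick()` again.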
 
-To run ChaosMonkey as a standalone tool deploy your HBase cluster as usual.
-ChaosMonkey uses the configuration from the bin/hbase script, thus no extra configuration needs to be done.
-You can invoke the ChaosMonkey by running:
-
-[source,bourne]
-----
-bin/hbase org.apache.hadoop.hbase.util.ChaosMonkey
-----
-
-This will output smt like: 
+Most ChaosMonkey actions are configured to have reasonable defaults, so you can run
+ChaosMonkey against an existing cluster without any additional configuration. The
+following example runs ChaosMonkey with the default configuration:
 
+[source,bash]
 ----
+$ bin/hbase org.apache.hadoop.hbase.util.ChaosMonkey
 
 12/11/19 23:21:57 INFO util.ChaosMonkey: Using ChaosMonkey Policy: class org.apache.hadoop.hbase.util.ChaosMonkey$PeriodicRandomActionPolicy, period:60000
 12/11/19 23:21:57 INFO util.ChaosMonkey: Sleeping for 26953 to add jitter
@@ -1270,31 +1405,38 @@ This will output smt like:
 12/11/19 23:24:27 INFO util.ChaosMonkey: Started region server:rs3.example.com,60020,1353367027826. Reported num of rs:6
 ----
 
-As you can see from the log, ChaosMonkey started the default PeriodicRandomActionPolicy, which is configured with all the available actions, and ran RestartActiveMaster and RestartRandomRs actions.
-ChaosMonkey tool, if run from command line, will keep on running until the process is killed. 
+The output indicates that ChaosMonkey started the default `PeriodicRandomActionPolicy`
+policy, which is configured with all the available actions. It chose to run `RestartActiveMaster` and `RestartRandomRs` actions.
+
+==== Available Policies
+HBase ships with several ChaosMonkey policies, available in the
+`hbase/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/policies/` directory.
 
 [[chaos.monkey.properties]]
-==== Passing individual Chaos Monkey per-test Settings/Properties
+==== Configuring Individual ChaosMonkey Actions
 
-Since HBase version 1.0.0 (link:https://issues.apache.org/jira/browse/HBASE-11348[HBASE-11348]), the chaos monkeys is used to run integration tests can be configured per test run.
-Users can create a java properties file and and pass this to the chaos monkey with timing configurations.
-The properties file needs to be in the HBase classpath.
-The various properties that can be configured and their default values can be found listed in the `org.apache.hadoop.hbase.chaos.factories.MonkeyConstants`                    class.
-If any chaos monkey configuration is missing from the property file, then the default values are assumed.
-For example:
+Since HBase version 1.0.0 (link:https://issues.apache.org/jira/browse/HBASE-11348[HBASE-11348]),
+ChaosMonkey integration tests can be configured per test run.
+Create a Java properties file in the HBase classpath and pass it to ChaosMonkey using
+the `-monkeyProps` configuration flag. Configurable properties, along with their default
+values if applicable, are listed in the `org.apache.hadoop.hbase.chaos.factories.MonkeyConstants`
+class. For properties that have defaults, you can override them by including them
+in your properties file.
+
+The following example uses a properties file called <<monkey.properties,monkey.properties>>.
 
 [source,bourne]
 ----
-
-$bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic -monkeyProps monkey.properties
+$ bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic -monkeyProps monkey.properties
 ----
 
 The above command will start the integration tests and chaos monkey, passing the properties file _monkey.properties_.
 Here is an example chaos monkey file:
 
+[[monkey.properties]]
+.Example ChaosMonkey Properties File
 [source]
 ----
-
 sdm.action1.period=120000
 sdm.action2.period=40000
 move.regions.sleep.time=80000
@@ -1303,14 +1445,43 @@ move.regions.sleep.time=80000
 batch.restart.rs.ratio=0.4f
 ----
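The fall-back-to-default behavior described above amounts to the standard `java.util.Properties` pattern. A minimal sketch (hypothetical helper, not the actual `MonkeyConstants` code):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class MonkeyPropsExample {
  // Read a millisecond period from the properties, falling back to the
  // built-in default when the key is absent from the file.
  public static long getPeriod(Properties props, String key, long defaultValue) {
    String value = props.getProperty(key);
    return value == null ? defaultValue : Long.parseLong(value.trim());
  }

  public static void main(String[] args) throws IOException {
    Properties props = new Properties();
    props.load(new StringReader("sdm.action1.period=120000\n"));
    // Present in the file: the configured value wins.
    System.out.println(getPeriod(props, "sdm.action1.period", 60000L));
    // Missing from the file: the default is assumed.
    System.out.println(getPeriod(props, "sdm.action2.period", 40000L));
  }
}
```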
 
+HBase 1.0.2 and newer adds the ability to restart HBase's underlying ZooKeeper quorum or
+HDFS nodes. To use these actions, you need to configure some new properties, which
+have no reasonable defaults because they are deployment-specific, in your ChaosMonkey
+properties file, which may be `hbase-site.xml` or a separate file.
+
+[source,xml]
+----
+<property>
+  <name>hbase.it.clustermanager.hadoop.home</name>
+  <value>$HADOOP_HOME</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.zookeeper.home</name>
+  <value>$ZOOKEEPER_HOME</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.hbase.user</name>
+  <value>hbase</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.hadoop.hdfs.user</name>
+  <value>hdfs</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.zookeeper.user</name>
+  <value>zookeeper</value>
+</property>
+----
+
 [[developing]]
 == Developer Guidelines
 
-=== Codelines
+=== Branches
 
-Most development is done on the master branch, which is named `master` in the Git repository.
-Previously, HBase used Subversion, in which the master branch was called `TRUNK`.
-Branches exist for minor releases, and important features and bug fixes are often back-ported.
+We use Git for source code management, and the latest development happens on the `master` branch. There are
+branches for past major/minor/maintenance releases, and important features and bug fixes are often
+back-ported to them.
 
 === Release Managers
 
@@ -1326,25 +1497,29 @@ NOTE: End-of-life releases are not included in this list.
 |===
 | Release
 | Release Manager
-| 0.98
-| Andrew Purtell
 
-| 1.0
-| Enis Soztutar
+| 1.1
+| Nick Dimiduk
+
+| 1.2
+| Sean Busbey
+
+| 1.3
+| Mikhail Antonov
+
 |===
 
 [[code.standards]]
 === Code Standards
 
-See <<eclipse.code.formatting,eclipse.code.formatting>> and <<common.patch.feedback,common.patch.feedback>>. 
 
 ==== Interface Classifications
 
 Interfaces are classified both by audience and by stability level.
 These labels appear at the head of a class.
-The conventions followed by HBase are inherited by its parent project, Hadoop. 
+The conventions followed by HBase are inherited from its parent project, Hadoop.
 
-The following interface classifications are commonly used: 
+The following interface classifications are commonly used:
 
 .InterfaceAudience
 `@InterfaceAudience.Public`::
@@ -1358,8 +1533,6 @@ The following interface classifications are commonly used:
 
 `@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)`::
   APIs for HBase coprocessor writers.
-  As of HBase 0.92/0.94/0.96/0.98 this api is still unstable.
-  No guarantees on compatibility with future versions.
 
 No `@InterfaceAudience` Classification::
   Packages without an `@InterfaceAudience` label are considered private.
@@ -1368,7 +1541,7 @@ No `@InterfaceAudience` Classification::
 .Excluding Non-Public Interfaces from API Documentation
 [NOTE]
 ====
-Only interfaces classified `@InterfaceAudience.Public` should be included in API documentation (Javadoc). Committers must add new package excludes `ExcludePackageNames` section of the _pom.xml_ for new packages which do not contain public classes. 
+Only interfaces classified `@InterfaceAudience.Public` should be included in API documentation (Javadoc). Committers must add new package excludes to the `ExcludePackageNames` section of the _pom.xml_ for new packages which do not contain public classes.
 ====
 
 .@InterfaceStability
@@ -1386,7 +1559,7 @@ Only interfaces classified `@InterfaceAudience.Public` should be included in API
 No `@InterfaceStability` Label::
   Public classes with no `@InterfaceStability` label are discouraged, and should be considered implicitly unstable.
 
-If you are unclear about how to mark packages, ask on the development list. 
+If you are unclear about how to mark packages, ask on the development list.
 
 [[common.patch.feedback]]
 ==== Code Formatting Conventions
@@ -1396,6 +1569,8 @@ These guidelines have been developed based upon common feedback on patches from
 
 See the link:http://www.oracle.com/technetwork/java/index-135089.html[Code
                     Conventions for the Java Programming Language] for more information on coding conventions in Java.
+See <<eclipse.code.formatting,eclipse.code.formatting>> to setup Eclipse to check for some of
+these guidelines automatically.
 
 [[common.patch.feedback.space.invaders]]
 ===== Space Invaders
@@ -1470,7 +1645,6 @@ Bar bar = foo.veryLongMethodWithManyArguments(
 [[common.patch.feedback.trailingspaces]]
 ===== Trailing Spaces
 
-Trailing spaces are a common problem.
 Be sure there is a line break after the end of your code, and avoid lines with nothing but whitespace.
 This makes diffs more meaningful.
 You can configure your IDE to help with this.
@@ -1484,21 +1658,22 @@ Bar bar = foo.getBar();     <--- imagine there is an extra space(s) after the se
 [[common.patch.feedback.javadoc]]
 ===== API Documentation (Javadoc)
 
-This is also a very common feedback item.
 Don't forget Javadoc!
 
 Javadoc warnings are checked during precommit.
 If the precommit tool gives you a '-1', please fix the javadoc issue.
-Your patch won't be committed if it adds such warnings. 
+Your patch won't be committed if it adds such warnings.
+
+Also, no `@author` tags - that's a rule.
 
 [[common.patch.feedback.findbugs]]
 ===== Findbugs
 
 `Findbugs` is used to detect common bug patterns.
-It is checked during the precommit build by Apache's Jenkins.
+It is checked during the precommit build.
 If errors are found, please fix them.
-You can run findbugs locally with +mvn
-                            findbugs:findbugs+, which will generate the `findbugs` files locally.
+You can run findbugs locally with `mvn
+                            findbugs:findbugs`, which will generate the `findbugs` files locally.
 Sometimes, you may have to write code smarter than `findbugs`.
 You can annotate your code to tell `findbugs` you know what you're doing, by annotating your class with the following annotation:
 
@@ -1509,38 +1684,42 @@ value="HE_EQUALS_USE_HASHCODE",
 justification="I know what I'm doing")
 ----
 
-It is important to use the Apache-licensed version of the annotations. 
+It is important to use the Apache-licensed version of the annotations. That generally means using
+annotations in the `edu.umd.cs.findbugs.annotations` package so that we can rely on the cleanroom
+reimplementation rather than annotations in the `javax.annotations` package.
 
 [[common.patch.feedback.javadoc.defaults]]
 ===== Javadoc - Useless Defaults
 
-Don't just leave the @param arguments the way your IDE generated them.:
+Don't just leave javadoc tags the way your IDE generates them, and don't fill them with redundant information.
 
 [source,java]
 ----
 
   /**
-   *
-   * @param bar             <---- don't do this!!!!
-   * @return                <---- or this!!!!
+   * @param table                              <---- don't leave them empty!
+   * @param region An HRegion object.          <---- don't fill redundant information!
+   * @return Foo Object foo just created.      <---- Not useful information
+   * @throws SomeException                     <---- Not useful. Function declarations already tell that!
+   * @throws BarException when something went wrong  <---- really?
    */
-  public Foo getFoo(Bar bar);
+  public Foo createFoo(Bar bar);
 ----
 
-Either add something descriptive to the @`param` and @`return` lines, or just remove them.
+Either add something descriptive to the tags, or just remove them.
 The preference is to add something descriptive and useful.
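For contrast, a version of the tags that is worth keeping (a hypothetical class, with plain `String` standing in for the real types):

```java
public class FooFactory {
  /**
   * Creates a Foo representation backed by the given bar.
   *
   * @param bar the backing resource; must not have been closed
   * @return a new Foo wired to {@code bar}
   * @throws IllegalStateException if {@code bar} has already been closed
   */
  public static String createFoo(String bar) {
    if (bar == null) {
      throw new IllegalStateException("bar has already been closed");
    }
    return "Foo(" + bar + ")";
  }
}
```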
 
 [[common.patch.feedback.onething]]
 ===== One Thing At A Time, Folks
 
-If you submit a patch for one thing, don't do auto-reformatting or unrelated reformatting of code on a completely different area of code. 
+If you submit a patch for one thing, don't do auto-reformatting or unrelated reformatting of code on a completely different area of code.
 
-Likewise, don't add unrelated cleanup or refactorings outside the scope of your Jira. 
+Likewise, don't add unrelated cleanup or refactorings outside the scope of your Jira.
 
 [[common.patch.feedback.tests]]
 ===== Ambiguous Unit Tests
 
-Make sure that you're clear about what you are testing in your unit tests and why. 
+Make sure that you're clear about what you are testing in your unit tests and why.
 
 [[common.patch.feedback.writable]]
 ===== Implementing Writable
@@ -1548,24 +1727,38 @@ Make sure that you're clear about what you are testing in your unit tests and wh
 .Applies pre-0.96 only
 [NOTE]
 ====
-In 0.96, HBase moved to protocol buffers (protobufs). The below section on Writables applies to 0.94.x and previous, not to 0.96 and beyond. 
+In 0.96, HBase moved to protocol buffers (protobufs). The below section on Writables applies to 0.94.x and previous, not to 0.96 and beyond.
 ====
 
 Every class returned by RegionServers must implement the `Writable` interface.
-If you are creating a new class that needs to implement this interface, do not forget the default constructor. 
+If you are creating a new class that needs to implement this interface, do not forget the default constructor.
+
+==== Garbage-Collection Conserving Guidelines
+
+The following guidelines were borrowed from http://engineering.linkedin.com/performance/linkedin-feed-faster-less-jvm-garbage.
+Keep them in mind to keep preventable garbage collection to a minimum. Have a look
+at the blog post for some great examples of how to refactor your code according to
+these guidelines.
+
+- Be careful with Iterators
+- Estimate the size of a collection when initializing
+- Defer expression evaluation
+- Compile the regex patterns in advance
+- Cache it if you can
+- String Interns are useful but dangerous
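Two of those guidelines, pre-sizing collections and compiling regexes in advance, look like this in practice (a generic sketch with hypothetical names, not HBase code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class GcFriendly {
  // Compiled once, not on every call, so no throwaway Pattern per invocation.
  private static final Pattern COMMA = Pattern.compile("\\s*,\\s*");

  // Pre-size the list from the caller's estimate so ArrayList does not
  // repeatedly allocate and copy its backing array while growing.
  public static List<String> splitTags(String csv, int expectedTags) {
    List<String> tags = new ArrayList<>(expectedTags);
    for (String tag : COMMA.split(csv)) {
      tags.add(tag);
    }
    return tags;
  }

  public static void main(String[] args) {
    System.out.println(splitTags("a, b,c", 3)); // prints [a, b, c]
  }
}
```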
 
 [[design.invariants]]
 === Invariants
 
 We don't have many but what we have we list below.
-All are subject to challenge of course but until then, please hold to the rules of the road. 
+All are subject to challenge of course but until then, please hold to the rules of the road.
 
 [[design.invariants.zk.data]]
 ==== No permanent state in ZooKeeper
 
 ZooKeeper state should be transient (treat it like memory). If ZooKeeper state is deleted, HBase should be able to recover and essentially be in the same state.
 
-* .ExceptionsThere are currently a few exceptions that we need to fix around whether a table is enabled or disabled.
+* Exceptions: There are currently a few exceptions that we need to fix around whether a table is enabled or disabled.
 * Replication data is currently stored only in ZooKeeper.
   Deleting ZooKeeper data related to replication may cause replication to be disabled.
   Do not delete the replication tree, _/hbase/replication/_.
@@ -1579,14 +1772,14 @@ Follow progress on this issue at link:https://issues.apache.org/jira/browse/HBAS
 
 If you are developing Apache HBase, frequently it is useful to test your changes against a more-real cluster than what you find in unit tests.
 In this case, HBase can be run directly from the source in local-mode.
-All you need to do is run: 
+All you need to do is run:
 
 [source,bourne]
 ----
 ${HBASE_HOME}/bin/start-hbase.sh
 ----
 
-This will spin up a full local-cluster, just as if you had packaged up HBase and installed it on your machine. 
+This will spin up a full local-cluster, just as if you had packaged up HBase and installed it on your machine.
 
 Keep in mind that you will need to have installed HBase into your local maven repository for the in-situ cluster to work properly.
 That is, you will need to run:
@@ -1607,27 +1800,25 @@ HBase exposes metrics using the Hadoop Metrics 2 system, so adding a new metric
 Unfortunately the API of metri

<TRUNCATED>

[7/9] hbase git commit: Update POMs and CHANGES.txt for 1.4.0RC0

Posted by ap...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-annotations/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-annotations/pom.xml b/hbase-annotations/pom.xml
index 7f78a9e..95e25f8 100644
--- a/hbase-annotations/pom.xml
+++ b/hbase-annotations/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-archetypes/hbase-archetype-builder/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-archetypes/hbase-archetype-builder/pom.xml b/hbase-archetypes/hbase-archetype-builder/pom.xml
index 1bbdb45..6aff789 100644
--- a/hbase-archetypes/hbase-archetype-builder/pom.xml
+++ b/hbase-archetypes/hbase-archetype-builder/pom.xml
@@ -25,7 +25,7 @@
   <parent>
     <artifactId>hbase-archetypes</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-archetypes/hbase-client-project/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-archetypes/hbase-client-project/pom.xml b/hbase-archetypes/hbase-client-project/pom.xml
index 1e4e507..4b82ad4 100644
--- a/hbase-archetypes/hbase-client-project/pom.xml
+++ b/hbase-archetypes/hbase-client-project/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <artifactId>hbase-archetypes</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-client-project</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-archetypes/hbase-shaded-client-project/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-archetypes/hbase-shaded-client-project/pom.xml b/hbase-archetypes/hbase-shaded-client-project/pom.xml
index 6d237d3..1107ae3 100644
--- a/hbase-archetypes/hbase-shaded-client-project/pom.xml
+++ b/hbase-archetypes/hbase-shaded-client-project/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <artifactId>hbase-archetypes</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-shaded-client-project</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-archetypes/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-archetypes/pom.xml b/hbase-archetypes/pom.xml
index 6681fb2..609de25 100644
--- a/hbase-archetypes/pom.xml
+++ b/hbase-archetypes/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-assembly/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-assembly/pom.xml b/hbase-assembly/pom.xml
index a8ac79d..4436f3d 100644
--- a/hbase-assembly/pom.xml
+++ b/hbase-assembly/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-assembly</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-checkstyle/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-checkstyle/pom.xml b/hbase-checkstyle/pom.xml
index 4b41f75..b67c4c9 100644
--- a/hbase-checkstyle/pom.xml
+++ b/hbase-checkstyle/pom.xml
@@ -24,14 +24,14 @@
 <modelVersion>4.0.0</modelVersion>
 <groupId>org.apache.hbase</groupId>
 <artifactId>hbase-checkstyle</artifactId>
-<version>1.4.0-SNAPSHOT</version>
+<version>1.4.0</version>
 <name>Apache HBase - Checkstyle</name>
 <description>Module to hold Checkstyle properties for HBase.</description>
 
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-client/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-client/pom.xml b/hbase-client/pom.xml
index c5d0430..00036e7 100644
--- a/hbase-client/pom.xml
+++ b/hbase-client/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-common/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-common/pom.xml b/hbase-common/pom.xml
index 7cf16b9..87ad2a3 100644
--- a/hbase-common/pom.xml
+++ b/hbase-common/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-error-prone/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-error-prone/pom.xml b/hbase-error-prone/pom.xml
index da57ca6..565b891 100644
--- a/hbase-error-prone/pom.xml
+++ b/hbase-error-prone/pom.xml
@@ -23,11 +23,11 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-error-prone</artifactId>
-  <version>1.4.0-SNAPSHOT</version>
+  <version>1.4.0</version>
   <name>Apache HBase - Error Prone Rules</name>
   <description>Module to hold error prone custom rules for HBase.</description>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-examples/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-examples/pom.xml b/hbase-examples/pom.xml
index 2bd86e5..4826a59 100644
--- a/hbase-examples/pom.xml
+++ b/hbase-examples/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-examples</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-external-blockcache/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-external-blockcache/pom.xml b/hbase-external-blockcache/pom.xml
index 39d1b1e..6c0c75f 100644
--- a/hbase-external-blockcache/pom.xml
+++ b/hbase-external-blockcache/pom.xml
@@ -25,7 +25,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-external-blockcache</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-hadoop-compat/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-hadoop-compat/pom.xml b/hbase-hadoop-compat/pom.xml
index 335f7e9..5ffc8e6 100644
--- a/hbase-hadoop-compat/pom.xml
+++ b/hbase-hadoop-compat/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.4.0-SNAPSHOT</version>
+        <version>1.4.0</version>
         <relativePath>..</relativePath>
     </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-hadoop2-compat/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-hadoop2-compat/pom.xml b/hbase-hadoop2-compat/pom.xml
index 09904be..45b8b52 100644
--- a/hbase-hadoop2-compat/pom.xml
+++ b/hbase-hadoop2-compat/pom.xml
@@ -21,7 +21,7 @@ limitations under the License.
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-it/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-it/pom.xml b/hbase-it/pom.xml
index 2114409..b20462e 100644
--- a/hbase-it/pom.xml
+++ b/hbase-it/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-metrics-api/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-metrics-api/pom.xml b/hbase-metrics-api/pom.xml
index 7c471eb..a2f8a5e 100644
--- a/hbase-metrics-api/pom.xml
+++ b/hbase-metrics-api/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-metrics/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-metrics/pom.xml b/hbase-metrics/pom.xml
index 77b5358..af589fd 100644
--- a/hbase-metrics/pom.xml
+++ b/hbase-metrics/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-prefix-tree/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-prefix-tree/pom.xml b/hbase-prefix-tree/pom.xml
index c5a0df7..d29ac10 100644
--- a/hbase-prefix-tree/pom.xml
+++ b/hbase-prefix-tree/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-procedure/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-procedure/pom.xml b/hbase-procedure/pom.xml
index 09230ca..6db7d12 100644
--- a/hbase-procedure/pom.xml
+++ b/hbase-procedure/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-protocol/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-protocol/pom.xml b/hbase-protocol/pom.xml
index f8295a3..9f928c9 100644
--- a/hbase-protocol/pom.xml
+++ b/hbase-protocol/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.4.0-SNAPSHOT</version>
+        <version>1.4.0</version>
         <relativePath>..</relativePath>
     </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-resource-bundle/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-resource-bundle/pom.xml b/hbase-resource-bundle/pom.xml
index 4780b3a..17a8aba 100644
--- a/hbase-resource-bundle/pom.xml
+++ b/hbase-resource-bundle/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-rest/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-rest/pom.xml b/hbase-rest/pom.xml
index c532be5..bae46d0 100644
--- a/hbase-rest/pom.xml
+++ b/hbase-rest/pom.xml
@@ -25,7 +25,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-rest</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-rsgroup/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-rsgroup/pom.xml b/hbase-rsgroup/pom.xml
index 901e589..e6789f8 100644
--- a/hbase-rsgroup/pom.xml
+++ b/hbase-rsgroup/pom.xml
@@ -24,7 +24,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
 

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-server/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-server/pom.xml b/hbase-server/pom.xml
index 9b9cc2e..e1aea2c 100644
--- a/hbase-server/pom.xml
+++ b/hbase-server/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-server</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-shaded/hbase-shaded-check-invariants/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shaded/hbase-shaded-check-invariants/pom.xml b/hbase-shaded/hbase-shaded-check-invariants/pom.xml
index 3d25da1..a2c41dc 100644
--- a/hbase-shaded/hbase-shaded-check-invariants/pom.xml
+++ b/hbase-shaded/hbase-shaded-check-invariants/pom.xml
@@ -16,7 +16,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>../..</relativePath>
   </parent>
   <artifactId>hbase-shaded-check-invariants</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-shaded/hbase-shaded-client/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shaded/hbase-shaded-client/pom.xml b/hbase-shaded/hbase-shaded-client/pom.xml
index 0735a51..e0e965d 100644
--- a/hbase-shaded/hbase-shaded-client/pom.xml
+++ b/hbase-shaded/hbase-shaded-client/pom.xml
@@ -24,7 +24,7 @@
     <parent>
         <artifactId>hbase-shaded</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.4.0-SNAPSHOT</version>
+        <version>1.4.0</version>
         <relativePath>..</relativePath>
     </parent>
     <artifactId>hbase-shaded-client</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-shaded/hbase-shaded-server/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shaded/hbase-shaded-server/pom.xml b/hbase-shaded/hbase-shaded-server/pom.xml
index 0c42caf..bc1d172 100644
--- a/hbase-shaded/hbase-shaded-server/pom.xml
+++ b/hbase-shaded/hbase-shaded-server/pom.xml
@@ -24,7 +24,7 @@
     <parent>
         <artifactId>hbase-shaded</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.4.0-SNAPSHOT</version>
+        <version>1.4.0</version>
         <relativePath>..</relativePath>
     </parent>
     <artifactId>hbase-shaded-server</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-shaded/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shaded/pom.xml b/hbase-shaded/pom.xml
index e667fcd..c81c875 100644
--- a/hbase-shaded/pom.xml
+++ b/hbase-shaded/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.4.0-SNAPSHOT</version>
+        <version>1.4.0</version>
         <relativePath>..</relativePath>
     </parent>
     <artifactId>hbase-shaded</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-shell/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-shell/pom.xml b/hbase-shell/pom.xml
index 65bbcf7..a7f7e81 100644
--- a/hbase-shell/pom.xml
+++ b/hbase-shell/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-shell</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-testing-util/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-testing-util/pom.xml b/hbase-testing-util/pom.xml
index 8231a91..0afc5a5 100644
--- a/hbase-testing-util/pom.xml
+++ b/hbase-testing-util/pom.xml
@@ -23,7 +23,7 @@
     <parent>
         <artifactId>hbase</artifactId>
         <groupId>org.apache.hbase</groupId>
-        <version>1.4.0-SNAPSHOT</version>
+        <version>1.4.0</version>
         <relativePath>..</relativePath>
     </parent>
     <artifactId>hbase-testing-util</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/hbase-thrift/pom.xml
----------------------------------------------------------------------
diff --git a/hbase-thrift/pom.xml b/hbase-thrift/pom.xml
index 3283b2e..924a561 100644
--- a/hbase-thrift/pom.xml
+++ b/hbase-thrift/pom.xml
@@ -23,7 +23,7 @@
   <parent>
     <artifactId>hbase</artifactId>
     <groupId>org.apache.hbase</groupId>
-    <version>1.4.0-SNAPSHOT</version>
+    <version>1.4.0</version>
     <relativePath>..</relativePath>
   </parent>
   <artifactId>hbase-thrift</artifactId>

http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 438d77c..fa0dc42 100644
--- a/pom.xml
+++ b/pom.xml
@@ -39,7 +39,7 @@
   <groupId>org.apache.hbase</groupId>
   <artifactId>hbase</artifactId>
   <packaging>pom</packaging>
-  <version>1.4.0-SNAPSHOT</version>
+  <version>1.4.0</version>
   <name>Apache HBase</name>
   <description>
     Apache HBase™ is the Hadoop database. Use it when you need


[2/9] hbase git commit: HBASE-19420 Backport HBASE-19152 Update refguide 'how to build an RC' and the make_rc.sh script

Posted by ap...@apache.org.
HBASE-19420 Backport HBASE-19152 Update refguide 'how to build an RC' and the make_rc.sh script

Removes src.xml, which was used to build the src tgz via hbase-assembly.

Use git archive instead, going forward. Updates the developer release candidate
documentation and the make_rc.sh script.

Slight modifications to developer.adoc for branch-1
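The `git archive` approach this commit adopts can be exercised stand-alone. The sketch below builds a throwaway repository; the repo, tag, and version names are illustrative only, not taken from the HBase tree:

```shell
#!/bin/sh
# Self-contained sketch of the `git archive` src-tarball approach the
# commit message describes. Repo, tag, and version names are illustrative.
set -e
work=$(mktemp -d)
cd "$work"
git init -q demo
cd demo
git -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m "seed commit"
git tag rel/1.4.0
version=1.4.0
name="hbase-${version}"
# One command yields a pristine src tarball with a directory prefix,
# as the hbase-assembly src.xml descriptor used to arrange:
git archive --format=tar.gz --output="../${name}-src.tar.gz" \
    --prefix="${name}/" rel/1.4.0
```

Unlike the assembly-based build, the archive contains exactly what is committed at the tag, so no exclude lists for `target/` or IDE files are needed.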


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/14318d73
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/14318d73
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/14318d73

Branch: refs/heads/branch-1
Commit: 14318d734ef5974785c5c493f3b259b82b04beb4
Parents: 1fe75f9
Author: Andrew Purtell <ap...@apache.org>
Authored: Mon Dec 4 12:17:15 2017 -0800
Committer: Andrew Purtell <ap...@apache.org>
Committed: Mon Dec 4 16:35:13 2017 -0800

----------------------------------------------------------------------
 dev-support/make_rc.sh                     |   97 +-
 hbase-assembly/src/main/assembly/src.xml   |  136 ---
 src/main/asciidoc/_chapters/developer.adoc | 1168 +++++++++++++----------
 3 files changed, 751 insertions(+), 650 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/14318d73/dev-support/make_rc.sh
----------------------------------------------------------------------
diff --git a/dev-support/make_rc.sh b/dev-support/make_rc.sh
index b88a984..19f906f 100755
--- a/dev-support/make_rc.sh
+++ b/dev-support/make_rc.sh
@@ -28,8 +28,17 @@
 
 set -e
 
-devsupport=`dirname "$0"`
-devsupport=`cd "$devsupport">/dev/null; pwd`
+# Script checks out a tag, cleans the checkout and then builds src and bin
+# tarballs. It then deploys to the apache maven repository.
+# Presumes run from git dir.
+
+# Need a git tag to build.
+if [ "$1" = "" ]
+then
+  echo "Usage: $0 TAG_TO_PACKAGE"
+  exit 1
+fi
+git_tag=$1
 
 # Set mvn and mvnopts
 mvn=mvn
@@ -41,45 +50,67 @@ if [ "$MAVEN_OPTS" != "" ]; then
   mvnopts="${MAVEN_OPTS}"
 fi
 
-# Make a dir to save tgzs in.
+# Ensure we are inside a git repo before making progress
+# The below will fail if outside git.
+git -C . rev-parse
+
+# Checkout git_tag
+git checkout "${git_tag}"
+
+# Get mvn project version
+#shellcheck disable=SC2016
+version=$(${mvn} -q -N -Dexec.executable="echo" -Dexec.args='${project.version}' exec:exec)
+hbase_name="hbase-${version}"
+
+# Make a dir to save tgzs into.
 d=`date -u +"%Y%m%dT%H%M%SZ"`
-archivedir="$(pwd)/../`basename $0`.$d"
-echo "Archive dir ${archivedir}"
-mkdir -p "${archivedir}"
+output_dir="${TMPDIR:-/tmp}/$hbase_name.$d"
+mkdir -p "${output_dir}"
+
 
-function tgz_mover {
-  mv ./hbase-assembly/target/hbase-*.tar.gz "${archivedir}"
+# Build src tgz.
+function build_src {
+  git archive --format=tar.gz --output="${output_dir}/${hbase_name}-src.tar.gz" --prefix="${hbase_name}/" "${git_tag}"
 }
 
-function deploy {
-  MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests -Prelease \
-    -Dmaven.repo.local=${archivedir}/repository
-  MAVEN_OPTS="${mvnopts}" ${mvn} install -DskipTests post-site assembly:single -Prelease \
-    -Dmaven.repo.local=${archivedir}/repository
-  tgz_mover
-  MAVEN_OPTS="${mvnopts}" ${mvn} deploy -DskipTests -Papache-release -Prelease \
-    -Dmaven.repo.local=${archivedir}/repository
+# Build bin tgz
+function build_bin {
+  MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests -Papache-release -Prelease \
+    -Dmaven.repo.local=${output_dir}/repository
+  MAVEN_OPTS="${mvnopts}" ${mvn} install -DskipTests site assembly:single -Papache-release -Prelease \
+    -Dmaven.repo.local=${output_dir}/repository
+  mv ./hbase-assembly/target/hbase-*.tar.gz "${output_dir}"
 }
 
-# Build src tarball
-# run clean separate from assembly:single because it fails to clean shaded modules correctly
+# Make sure all clean.
+git clean -f -x -d
 MAVEN_OPTS="${mvnopts}" ${mvn} clean
-MAVEN_OPTS="${mvnopts}" ${mvn} install -DskipTests assembly:single \
-  -Dassembly.file="$(pwd)/hbase-assembly/src/main/assembly/src.xml" \
-  -Prelease -Dmaven.repo.local=${archivedir}/repository
-
-tgz_mover
 
 # Now do the two builds,  one for hadoop1, then hadoop2
-deploy
-
-echo "DONE"
-echo "Check the content of ${archivedir}.  If good, sign and push to dist.apache.org"
-echo " cd ${archivedir}"
-echo ' for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done'
-echo ' for i in *.tar.gz; do echo $i; gpg --print-md MD5 $i > $i.md5 ; done'
-echo ' for i in *.tar.gz; do echo $i; gpg --print-md SHA512 $i > $i.sha ; done'
+# Run a rat check.
+${mvn} apache-rat:check
+
+#Build src.
+build_src
+
+# Build bin product
+build_bin
+
+# Deploy to mvn repository
+# Depends on build_bin having populated the local repository
+# If the below upload fails, you will probably have to clean the partial
+# upload from repository.apache.org by 'drop'ping it from the staging
+# repository before restart.
+MAVEN_OPTS="${mvnopts}" ${mvn} deploy -DskipTests -Papache-release -Prelease \
+    -Dmaven.repo.local=${output_dir}/repository
+
+# Do sha1 and md5
+cd ${output_dir}
+for i in *.tar.gz; do echo $i; gpg --print-md SHA512 $i > $i.sha ; done
+for i in *.tar.gz; do echo $i; gpg --print-md MD5 $i > $i.md5 ; done
+
+echo "Check the content of ${output_dir}.  If good, sign and push to dist.apache.org"
+echo " cd ${output_dir}"
 echo ' for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i  ; done'
-echo ' rsync -av ${archivedir}/*.gz ${archivedir}/*.mds ${archivedir}/*.asc ~/repos/dist-dev/hbase-VERSION/'
+echo ' rsync -av ${output_dir}/*.gz ${output_dir}/*.md5 ${output_dir}/*.sha ${output_dir}/*.asc ${APACHE_HBASE_DIST_DEV_DIR}/${hbase_name}/'
 echo "Check the content deployed to maven.  If good, close the repo and record links of temporary staging repo"
-echo "If all good tag the RC"
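The checksum loop the updated script runs can be tried stand-alone. The sketch below substitutes coreutils `sha512sum`/`md5sum` for `gpg --print-md` (same `.sha`/`.md5` files produced, different digest formatting) and uses a stand-in artifact name:

```shell
#!/bin/sh
# Stand-alone sketch of the checksum step; the tarball is a stand-in,
# and sha512sum/md5sum replace `gpg --print-md` for portability.
set -e
cd "$(mktemp -d)"
echo demo > hbase-1.4.0-bin.tar.gz
for i in *.tar.gz; do
  sha512sum "$i" > "$i.sha"
  md5sum "$i" > "$i.md5"
done
```

The resulting `.sha` and `.md5` files sit next to each tarball, ready to rsync to dist.apache.org alongside the `.asc` signatures.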

http://git-wip-us.apache.org/repos/asf/hbase/blob/14318d73/hbase-assembly/src/main/assembly/src.xml
----------------------------------------------------------------------
diff --git a/hbase-assembly/src/main/assembly/src.xml b/hbase-assembly/src/main/assembly/src.xml
deleted file mode 100644
index b13967e..0000000
--- a/hbase-assembly/src/main/assembly/src.xml
+++ /dev/null
@@ -1,136 +0,0 @@
-<?xml version="1.0"?>
-<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
-<!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-
-  <!--Copies over all you need to build hbase-->
-  <id>src</id>
-  <formats>
-    <format>tar.gz</format>
-  </formats>
-  <moduleSets>
-    <moduleSet>
-      <!-- Enable access to all projects in the current multimodule build. Eclipse
-        says this is an error, but builds from the command line just fine. -->
-      <useAllReactorProjects>true</useAllReactorProjects>
-      <includes>
-        <include>org.apache.hbase:hbase-annotations</include>
-        <include>org.apache.hbase:hbase-archetypes</include>
-        <include>org.apache.hbase:hbase-assembly</include>
-        <include>org.apache.hbase:hbase-checkstyle</include>
-        <include>org.apache.hbase:hbase-client</include>
-        <include>org.apache.hbase:hbase-common</include>
-        <include>org.apache.hbase:hbase-examples</include>
-        <include>org.apache.hbase:hbase-external-blockcache</include>
-        <include>org.apache.hbase:hbase-hadoop2-compat</include>
-        <include>org.apache.hbase:hbase-hadoop-compat</include>
-        <include>org.apache.hbase:hbase-it</include>
-        <include>org.apache.hbase:hbase-prefix-tree</include>
-        <include>org.apache.hbase:hbase-procedure</include>
-        <include>org.apache.hbase:hbase-protocol</include>
-        <include>org.apache.hbase:hbase-rest</include>
-        <include>org.apache.hbase:hbase-resource-bundle</include>
-        <include>org.apache.hbase:hbase-server</include>
-        <include>org.apache.hbase:hbase-shaded</include>
-        <include>org.apache.hbase:hbase-shell</include>
-        <include>org.apache.hbase:hbase-testing-util</include>
-        <include>org.apache.hbase:hbase-thrift</include>
-      </includes>
-      <!-- Include all the sources in the top directory -->
-      <sources>
-        <excludeSubModuleDirectories>false</excludeSubModuleDirectories>
-        <fileSets>
-          <fileSet>
-            <includes>
-              <include>**</include>
-            </includes>
-            <!--Make sure this excludes is same as the hbase-hadoop2-compat
-                 excludes below-->
-            <excludes>
-              <exclude>target/</exclude>
-              <exclude>test/</exclude>
-              <exclude>.classpath</exclude>
-              <exclude>.project</exclude>
-              <exclude>.settings/</exclude>
-            </excludes>
-          </fileSet>
-        </fileSets>
-      </sources>
-    </moduleSet>
-  </moduleSets>
-  <fileSets>
-    <!--This one is weird.  When we assemble src, it'll be default profile which
-         at the moment is hadoop1.  But we should include the hadoop2 compat module
-         too so can build hadoop2 from src -->
-    <fileSet>
-      <directory>${project.basedir}/../hbase-hadoop2-compat</directory>
-      <outputDirectory>hbase-hadoop2-compat</outputDirectory>
-      <fileMode>0644</fileMode>
-      <directoryMode>0755</directoryMode>
-            <excludes>
-              <exclude>target/</exclude>
-              <exclude>test/</exclude>
-              <exclude>.classpath</exclude>
-              <exclude>.project</exclude>
-              <exclude>.settings/</exclude>
-            </excludes>
-    </fileSet>
-    <!--Include dev tools-->
-    <fileSet>
-      <directory>${project.basedir}/../dev-support</directory>
-      <outputDirectory>dev-support</outputDirectory>
-      <fileMode>0644</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-    <fileSet>
-      <directory>${project.basedir}/../src</directory>
-      <outputDirectory>src</outputDirectory>
-      <fileMode>0644</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-    <!-- Include the top level conf directory -->
-    <fileSet>
-      <directory>${project.basedir}/../conf</directory>
-      <outputDirectory>conf</outputDirectory>
-      <fileMode>0644</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-    <!-- Include top level bin directory -->
-    <fileSet>
-        <directory>${project.basedir}/../bin</directory>
-      <outputDirectory>bin</outputDirectory>
-      <fileMode>0755</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-    <fileSet>
-      <directory>${project.basedir}/..</directory>
-      <outputDirectory>.</outputDirectory>
-      <includes>
-        <include>pom.xml</include>
-        <include>LICENSE.txt</include>
-        <include>NOTICE.txt</include>
-        <include>CHANGES.txt</include>
-        <include>README.txt</include>
-        <include>.pylintrc</include>
-      </includes>
-      <fileMode>0644</fileMode>
-    </fileSet>
-</fileSets>
-</assembly>


[3/9] hbase git commit: HBASE-19429 Release build fails in checkstyle phase of site target (branch-1)

Posted by ap...@apache.org.
HBASE-19429 Release build fails in checkstyle phase of site target (branch-1)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/ba5bd0ae
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/ba5bd0ae
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/ba5bd0ae

Branch: refs/heads/branch-1
Commit: ba5bd0ae5b547b0fc70472def161b42ebefe4d38
Parents: 14318d7
Author: Andrew Purtell <ap...@apache.org>
Authored: Mon Dec 4 18:33:18 2017 -0800
Committer: Andrew Purtell <ap...@apache.org>
Committed: Mon Dec 4 18:40:49 2017 -0800

----------------------------------------------------------------------
 pom.xml | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/ba5bd0ae/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 70e1947..1976c34 100644
--- a/pom.xml
+++ b/pom.xml
@@ -2888,6 +2888,7 @@
           </reportSet>
         </reportSets>
       </plugin>
+      <!--
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-checkstyle-plugin</artifactId>
@@ -2898,6 +2899,7 @@
           <includeTestSourceDirectory>true</includeTestSourceDirectory>
         </configuration>
       </plugin>
+      -->
     </plugins>
   </reporting>
   <distributionManagement>


[4/9] hbase git commit: HBASE-19429 Release build fails in checkstyle phase of site target (branch-1)

Posted by ap...@apache.org.
HBASE-19429 Release build fails in checkstyle phase of site target (branch-1)


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/6fcdc33d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/6fcdc33d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/6fcdc33d

Branch: refs/heads/branch-1.4
Commit: 6fcdc33df5ac40682599f642121842217e742f7c
Parents: 1dba475
Author: Andrew Purtell <ap...@apache.org>
Authored: Mon Dec 4 18:33:18 2017 -0800
Committer: Andrew Purtell <ap...@apache.org>
Committed: Mon Dec 4 18:41:11 2017 -0800

----------------------------------------------------------------------
 pom.xml | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/6fcdc33d/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index b47cdbf..438d77c 100644
--- a/pom.xml
+++ b/pom.xml
@@ -2888,6 +2888,7 @@
           </reportSet>
         </reportSets>
       </plugin>
+      <!--
       <plugin>
         <groupId>org.apache.maven.plugins</groupId>
         <artifactId>maven-checkstyle-plugin</artifactId>
@@ -2898,6 +2899,7 @@
           <includeTestSourceDirectory>true</includeTestSourceDirectory>
         </configuration>
       </plugin>
+      -->
     </plugins>
   </reporting>
   <distributionManagement>
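Commenting the plugin out of the `<reporting>` section, as the two commits above do, is one way to unblock the site build. A lighter-touch alternative for a single run is the maven-checkstyle-plugin's standard skip property (shown here as a hypothetical invocation, not something these commits add):

```shell
# Skip checkstyle for one site build instead of editing pom.xml:
mvn site -Dcheckstyle.skip=true
```

The POM edit is the right call for a release build, though, since it needs no extra flags from whoever runs make_rc.sh.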


[8/9] hbase git commit: Update POMs and CHANGES.txt for 1.4.0RC0

Posted by ap...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/3839a01d/CHANGES.txt
----------------------------------------------------------------------
diff --git a/CHANGES.txt b/CHANGES.txt
index f7403a5..c82f1d3 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,1462 +1,2543 @@
 HBase Change Log
 
-Release Notes - HBase - Version 0.99.2 12/07/2014
+Release Notes - HBase - Version 1.4.0 12/04/2017
 
 ** Sub-task
-    * [HBASE-10671] - Add missing InterfaceAudience annotations for classes in hbase-common and hbase-client modules
-    * [HBASE-11164] - Document and test rolling updates from 0.98 -> 1.0
-    * [HBASE-11915] - Document and test 0.94 -> 1.0.0 update
-    * [HBASE-11964] - Improve spreading replication load from failed regionservers
-    * [HBASE-12075] - Preemptive Fast Fail
-    * [HBASE-12128] - Cache configuration and RpcController selection for Table in Connection
-    * [HBASE-12147] - Porting Online Config Change from 89-fb
-    * [HBASE-12202] - Support DirectByteBuffer usage in HFileBlock
-    * [HBASE-12214] - Visibility Controller in the peer cluster should be able to extract visibility tags from the replicated cells
-    * [HBASE-12288] - Support DirectByteBuffer usage in DataBlock Encoding area
-    * [HBASE-12297] - Support DBB usage in Bloom and HFileIndex area
-    * [HBASE-12313] - Redo the hfile index length optimization so cell-based rather than serialized KV key
-    * [HBASE-12353] - Turn down logging on some spewing unit tests
-    * [HBASE-12354] - Update dependencies in time for 1.0 release
-    * [HBASE-12355] - Update maven plugins
-    * [HBASE-12363] - Improve how KEEP_DELETED_CELLS works with MIN_VERSIONS
-    * [HBASE-12379] - Try surefire 2.18-SNAPSHOT
-    * [HBASE-12400] - Fix refguide so it does connection#getTable rather than new HTable everywhere: first cut!
-    * [HBASE-12404] - Task 5 from parent: Replace internal HTable constructor use with HConnection#getTable (0.98, 0.99)
-    * [HBASE-12471] - Task 4. replace internal ConnectionManager#{delete,get}Connection use with #close, #createConnection (0.98, 0.99) under src/main/java
-    * [HBASE-12517] - Several HConstant members are assignable
-    * [HBASE-12518] - Task 4 polish. Remove CM#{get,delete}Connection
-    * [HBASE-12519] - Remove tabs used as whitespace
-    * [HBASE-12526] - Remove unused imports
-    * [HBASE-12577] - Disable distributed log replay by default
+    * [HBASE-12148] - Remove TimeRangeTracker as point of contention when many threads writing a Store
+    * [HBASE-15160] - Put back HFile's HDFS op latency sampling code and add metrics for monitoring
+    * [HBASE-15342] - create branch-1.3 and update branch-1 poms to 1.4.0-SNAPSHOT
+    * [HBASE-15386] - PREFETCH_BLOCKS_ON_OPEN in HColumnDescriptor is ignored
+    * [HBASE-15484] - Correct the semantic of batch and partial
+    * [HBASE-15662] - Hook up JvmPauseMonitor to REST server
+    * [HBASE-15663] - Hook up JvmPauseMonitor to ThriftServer
+    * [HBASE-15675] - Add more details about region on table.jsp
+    * [HBASE-15691] - Port HBASE-10205 (ConcurrentModificationException in BucketAllocator) to branch-1
+    * [HBASE-15927] - Remove HMaster.assignRegion()
+    * [HBASE-15994] - Allow selection of RpcSchedulers
+    * [HBASE-15995] - Separate replication WAL reading from shipping
+    * [HBASE-16092] - Procedure v2 - complete child procedure support
+    * [HBASE-16236] - Typo in javadoc of InstancePending
+    * [HBASE-16242] - Upgrade Avro to 1.7.7
+    * [HBASE-16280] - Use hash based map in SequenceIdAccounting
+    * [HBASE-16311] - Audit log for delete snapshot operation is missing in case of snapshot owner deleting the same
+    * [HBASE-16336] - Removing peers seems to be leaving spare queues
+    * [HBASE-16398] - optimize HRegion computeHDFSBlocksDistribution
+    * [HBASE-16451] - Procedure v2 - Test WAL protobuf entry size limit
+    * [HBASE-16505] - Save deadline in RpcCallContext according to request's timeout
+    * [HBASE-16530] - Reduce DBE code duplication
+    * [HBASE-16533] - Procedure v2 - Extract chore from the executor
+    * [HBASE-16570] - Compute region locality in parallel at startup
+    * [HBASE-17167] - Pass mvcc to client when scan
+    * [HBASE-17210] - Set timeout on trying rowlock according to client's RPC timeout
+    * [HBASE-17508] - Unify the implementation of small scan and regular scan for sync client
+    * [HBASE-17561] - table status page should escape values that may contain arbitrary characters.
+    * [HBASE-17583] - Add inclusive/exclusive support for startRow and endRow of scan for sync client
+    * [HBASE-17584] - Expose ScanMetrics with ResultScanner rather than Scan
+    * [HBASE-17595] - Add partial result support for small/limited scan
+    * [HBASE-17599] - Use mayHaveMoreCellsInRow instead of isPartial
+    * [HBASE-17793] - Backport ScanResultCache related code to branch-1
+    * [HBASE-17887] - Row-level consistency is broken for read
+    * [HBASE-17925] - mvn assembly:single fails against hadoop3-alpha2
+    * [HBASE-18268] - Eliminate the findbugs warnings for hbase-client
+    * [HBASE-18293] - Only add the spotbugs dependency when jdk8 is active
+    * [HBASE-18295] - The result contains the cells across different rows
+    * [HBASE-18308] - Eliminate the findbugs warnings for hbase-server
+    * [HBASE-18315] - Eliminate the findbugs warnings for hbase-rest
+    * [HBASE-18365] - Eliminate the findbugs warnings for hbase-common
+    * [HBASE-18398] - Snapshot operation fails with FileNotFoundException
+    * [HBASE-18448] - EndPoint example for refreshing HFiles for stores
+    * [HBASE-18458] - Refactor TestRegionServerHostname to make it robust (Port HBASE-17922 to branch-1)
+    * [HBASE-18552] - Backport the server side change in HBASE-18489 to branch-1
+    * [HBASE-18656] - Address issues found by error-prone in hbase-common
+    * [HBASE-18786] - FileNotFoundException should not be silently handled for primary region replicas
+    * [HBASE-18867] - maven enforcer plugin needs update to work with jdk9
+    * [HBASE-18957] - add test that confirms 2 FamilyFilters in a FilterList using MUST_PASS_ONE operator will return results that match either of the FamilyFilters and revert as needed to make it pass.
+    * [HBASE-18980] - Address issues found by error-prone in hbase-hadoop2-compat
+    * [HBASE-19070] - temporarily make the mvnsite nightly test non-voting.
+    * [HBASE-19113] - Restore dropped constants from TableInputFormatBase for compatibility
+    * [HBASE-19131] - Add the ClusterStatus hook and cleanup other hooks which can be replaced by ClusterStatus hook
+    * [HBASE-19182] - Add deprecation in branch-1 for hbase-prefix-tree so some heads up it removed in hbase2
+    * [HBASE-19203] - Update Hadoop version used for build to 2.7.4
+    * [HBASE-19205] - Backport HBASE-18441 ZookeeperWatcher#interruptedException should throw exception
+    * [HBASE-19243] - Start mini cluster once before class for TestFIFOCompactionPolicy
+    * [HBASE-19276] - RegionPlan should correctly implement equals and hashCode
+    * [HBASE-19348] - Fix error-prone errors for branch-1
+    * [HBASE-19354] - [branch-1] Build using a jdk that is beyond ubuntu trusty's openjdk-151
+    * [HBASE-19366] - Backport to branch-1 HBASE-19035 Miss metrics when coprocessor use region scanner to read data
+    * [HBASE-19368] - [nightly] Make xml test non-voting in branch-1.2
 
+** Bug
+    * [HBASE-7621] - REST client (RemoteHTable) doesn't support binary row keys
+    * [HBASE-8758] - Error in RegionCoprocessorHost class preScanner method documentation.
+    * [HBASE-9393] - HBase does not close a closed socket, resulting in many CLOSE_WAIT
+    * [HBASE-12088] - Remove un-used profiles in non-root poms
+    * [HBASE-12091] - Optionally ignore edits for dropped tables for replication.
+    * [HBASE-12949] - Scanner can be stuck in infinite loop if the HFile is corrupted
+    * [HBASE-13860] - Remove units from ServerMetricsTmpl.jamon since values are formatted human readable
+    * [HBASE-14129] - If any regionserver gets shutdown uncleanly during full cluster restart, locality looks to be lost
+    * [HBASE-14329] - Report region in transition only ever operates on one region
+    * [HBASE-14753] - TestShell is not invoked anymore
+    * [HBASE-15109] - HM/RS failed to start when "fs.hdfs.impl.disable.cache" is set to true
+    * [HBASE-15187] - Integrate CSRF prevention filter to REST gateway
+    * [HBASE-15236] - Inconsistent cell reads over multiple bulk-loaded HFiles
+    * [HBASE-15302] - Reenable the other tests disabled by HBASE-14678
+    * [HBASE-15328] - Unvalidated Redirect in HMaster
+    * [HBASE-15497] - Incorrect javadoc for atomicity guarantee of Increment and Append
+    * [HBASE-15528] - Clean up outdated entries in hbase-default.xml
+    * [HBASE-15548] - SyncTable: sourceHashDir is supposed to be optional but won't work without 
+    * [HBASE-15635] - Mean age of Blocks in cache (seconds) on webUI should be greater than zero
+    * [HBASE-15711] - Add client side property to allow logging details for batch errors
+    * [HBASE-15725] - make_patch.sh should add the branch name when -b is passed.
+    * [HBASE-15769] - Perform validation on cluster key for add_peer
+    * [HBASE-15783] - AccessControlConstants#OP_ATTRIBUTE_ACL_STRATEGY_CELL_FIRST not used any more.
+    * [HBASE-15803] - ZooKeeperWatcher's constructor can leak a ZooKeeper instance with throwing ZooKeeperConnectionException when canCreateBaseZNode is true
+    * [HBASE-15815] - Region mover script sometimes reports stuck region where only one server was involved
+    * [HBASE-15844] - We should respect hfile.block.index.cacheonwrite when write intermediate index Block
+    * [HBASE-15845] - Shell Cleanup : move formatter to commands.rb; move one of the two hbase.rb to hbase_constants.rb
+    * [HBASE-15866] - Split hbase.rpc.timeout into *.read.timeout and *.write.timeout
+    * [HBASE-15889] - String case conversions are locale-sensitive, used without locale
+    * [HBASE-15933] - NullPointerException may be thrown from SimpleRegionNormalizer#getRegionSize()
+    * [HBASE-15947] - Classes used only for tests included in main code base
+    * [HBASE-15950] - Fix memstore size estimates to be more tighter
+    * [HBASE-15965] - Shell test changes. Use @shell.command instead of directly calling functions in admin.rb and other libraries.
+    * [HBASE-15990] - The priority value of subsequent coprocessors in the Coprocessor.Priority.SYSTEM list are not incremented by one
+    * [HBASE-16011] - TableSnapshotScanner and TableSnapshotInputFormat can produce duplicate rows
+    * [HBASE-16045] - endtime argument for VerifyReplication was incorrectly specified in usage
+    * [HBASE-16054] - OutOfMemory exception when using AsyncRpcClient with encryption
+    * [HBASE-16055] - PutSortReducer loses any Visibility/acl attribute set on the Puts 
+    * [HBASE-16058] - TestHRegion fails on 1.4 builds
+    * [HBASE-16059] - Region normalizer fails to trigger merge action where one of the regions is empty
+    * [HBASE-16070] - Mapreduce Serialization classes do not have Interface audience
+    * [HBASE-16090] - ResultScanner is not closed in SyncTable#finishRemainingHashRanges()
+    * [HBASE-16091] - Canary takes lot more time when there are delete markers in the table
+    * [HBASE-16118] - TestHFileOutputFormat2 is broken
+    * [HBASE-16122] - PerformanceEvaluation should provide user friendly hint when client threads argument is missing
+    * [HBASE-16125] - RegionMover uses hardcoded, Unix-style tmp folder - breaks Windows
+    * [HBASE-16157] - The incorrect block cache count and size are caused by removing duplicate block key in the LruBlockCache
+    * [HBASE-16159] - OutOfMemory exception when using AsyncRpcClient with encryption to read rpc response
+    * [HBASE-16172] - Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas
+    * [HBASE-16182] - Increase IntegrationTestRpcClient timeout 
+    * [HBASE-16209] - Provide an ExponentialBackOffPolicy sleep between failed region open requests
+    * [HBASE-16235] - TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles
+    * [HBASE-16244] - LocalHBaseCluster start timeout should be configurable
+    * [HBASE-16293] - TestSnapshotFromMaster#testSnapshotHFileArchiving flakey
+    * [HBASE-16309] - TestDefaultCompactSelection.testCompactionRatio is flaky
+    * [HBASE-16345] - RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer Exceptions
+    * [HBASE-16353] - Deprecate / Remove Get.isClosestRowBefore() 
+    * [HBASE-16356] - REST API scanner: row prefix filter and custom filter parameters are mutually exclusive
+    * [HBASE-16359] - NullPointerException in RSRpcServices.openRegion()
+    * [HBASE-16367] - Race between master and region server initialization may lead to premature server abort
+    * [HBASE-16377] - ServerName check is ineffective in region_mover.rb
+    * [HBASE-16409] - Row key for bad row should be properly delimited in VerifyReplication
+    * [HBASE-16444] - CellUtil#estimatedSerializedSizeOfKey() should consider KEY_INFRASTRUCTURE_SIZE
+    * [HBASE-16495] - When accessed via Thrift, all column families have timeToLive equal to -1
+    * [HBASE-16515] - AsyncProcess has incorrect count of tasks if the backoff policy is enabled
+    * [HBASE-16538] - Version mismatch in HBaseConfiguration.checkDefaultsVersion
+    * [HBASE-16540] - Scan should do additional validation on start and stop row
+    * [HBASE-16556] - The read/write timeout are not used in HTable.delete(List), HTable.get(List), and HTable.existsAll(List)
+    * [HBASE-16572] - Sync method in RecoverableZooKeeper failed to pass callback function in
+    * [HBASE-16576] - Shell add_peer doesn't allow setting cluster_key for custom endpoints
+    * [HBASE-16611] - Flakey org.apache.hadoop.hbase.client.TestReplicasClient.testCancelOfMultiGet
+    * [HBASE-16612] - Use array to cache Types for KeyValue.Type.codeToType
+    * [HBASE-16615] - Fix flaky TestScannerHeartbeatMessages
+    * [HBASE-16621] - HBCK should have -fixHFileLinks
+    * [HBASE-16630] - Fragmentation in long running Bucket Cache
+    * [HBASE-16647] - hbck should do offline reference repair before online repair
+    * [HBASE-16653] - Backport HBASE-11393 to all branches which support namespace
+    * [HBASE-16670] - Make RpcServer#processRequest logic more robust
+    * [HBASE-16675] - Average region size may be incorrect when there is region whose RegionLoad cannot be retrieved
+    * [HBASE-16716] - OfflineMetaRepair leaves empty directory inside /hbase/WALs which remains forever
+    * [HBASE-16724] - Snapshot owner can't clone
+    * [HBASE-16739] - Timed out exception message should include encoded region name
+    * [HBASE-16762] - NullPointerException is thrown when constructing sourceTable in verifyrep
+    * [HBASE-16768] - Inconsistent results from the Append/Increment
+    * [HBASE-16771] - VerifyReplication should increase GOODROWS counter if re-comparison passes
+    * [HBASE-16801] - The Append/Increment may return the data from future
+    * [HBASE-16815] - Low scan ratio in RPC queue tuning triggers divide by zero exception
+    * [HBASE-16816] - HMaster.move() should throw exception if region to move is not online
+    * [HBASE-16829] - DemoClient should detect secure mode
+    * [HBASE-16855] - Avoid NPE in MetricsConnection’s construction
+    * [HBASE-16856] - Exception message in SyncRunner.run() should print currentSequence
+    * [HBASE-16870] - Add the metrics of replication sources which were transformed from other dead rs to ReplicationLoad
+    * [HBASE-16886] - hbase-client: scanner with reversed=true and small=true gets no result
+    * [HBASE-16910] - Avoid NPE when starting StochasticLoadBalancer
+    * [HBASE-16938] - TableCFsUpdater maybe failed due to no write permission on peerNode
+    * [HBASE-16939] - ExportSnapshot: set owner and permission on right directory
+    * [HBASE-16948] - Fix inconsistency between HRegion and Region javadoc on getRowLock
+    * [HBASE-16962] - Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API
+    * [HBASE-16983] - TestMultiTableSnapshotInputFormat failing with  Unable to create region directory: /tmp/...
+    * [HBASE-16985] - TestClusterId failed due to wrong hbase rootdir
+    * [HBASE-16992] - The usage of mutation from CP is weird.
+    * [HBASE-16993] - BucketCache throw java.io.IOException: Invalid HFile block magic when configuring hbase.bucketcache.bucket.sizes
+    * [HBASE-17010] - Serial replication should handle daughter regions being assigned to another RS
+    * [HBASE-17020] - keylen in midkey() is not computed correctly
+    * [HBASE-17033] - LogRoller makes a lot of allocations unnecessarily
+    * [HBASE-17039] - SimpleLoadBalancer schedules large amount of invalid region moves
+    * [HBASE-17054] - Compactor#preCreateCoprocScanner should be passed user
+    * [HBASE-17062] - RegionSplitter throws ClassCastException
+    * [HBASE-17069] - RegionServer writes invalid META entries for split daughters in some circumstances
+    * [HBASE-17072] - CPU usage starts to climb up to 90-100% when using G1GC; purge ThreadLocal usage
+    * [HBASE-17095] - The ClientSimpleScanner keeps retrying if the hfile is corrupt or cannot found
+    * [HBASE-17105] - Annotate RegionServerObserver
+    * [HBASE-17112] - Prevent setting timestamp of delta operations the same as previous value's
+    * [HBASE-17116] - [PerformanceEvaluation] Add option to configure block size
+    * [HBASE-17118] - StoreScanner leaked in KeyValueHeap
+    * [HBASE-17127] - Locate region should fail fast if underlying Connection already closed
+    * [HBASE-17131] - Avoid livelock caused by HRegion#processRowsWithLocks
+    * [HBASE-17170] - HBase is also retrying DoNotRetryIOException because of class loader differences.
+    * [HBASE-17171] - IntegrationTestTimeBoundedRequestsWithRegionReplicas fails with obtuse error when readers have no time to run
+    * [HBASE-17187] - DoNotRetryExceptions from coprocessors should bubble up to the application
+    * [HBASE-17206] - FSHLog may roll a new writer successfully with unflushed entries
+    * [HBASE-17256] - Rpc handler monitoring will be removed when the task queue is full
+    * [HBASE-17264] - Processing RIT with offline state will always fail to open the first time
+    * [HBASE-17265] - Region left unassigned in master failover when region failed to open
+    * [HBASE-17275] - Assign timeout may cause region to be unassigned forever
+    * [HBASE-17286] - LICENSE.txt in binary tarball contains only ASL text
+    * [HBASE-17287] - Master becomes a zombie if filesystem object closes
+    * [HBASE-17289] - Avoid adding a replication peer named "lock"
+    * [HBASE-17290] - Potential loss of data for replication of bulk loaded hfiles
+    * [HBASE-17297] - Single Filter in parenthesis cannot be parsed correctly
+    * [HBASE-17302] - The region flush request disappeared from flushQueue
+    * [HBASE-17330] - SnapshotFileCache will always refresh the file cache
+    * [HBASE-17344] - The regionserver web UIs miss the coprocessors of RegionServerCoprocessorHost.
+    * [HBASE-17347] - ExportSnapshot may write snapshot info file to wrong directory when specifying target name
+    * [HBASE-17351] - Enforcer plugin fails with NullPointerException
+    * [HBASE-17352] - Fix hbase-assembly build with bash 4
+    * [HBASE-17357] - PerformanceEvaluation parameters parsing triggers NPE.
+    * [HBASE-17374] - ZKPermissionWatcher crashed when grant after region close
+    * [HBASE-17381] - ReplicationSourceWorkerThread can die due to unhandled exceptions
+    * [HBASE-17387] - Reduce the overhead of exception report in RegionActionResult for multi()
+    * [HBASE-17390] - Online update of configuration for all servers leaves out masters
+    * [HBASE-17426] - Inconsistent environment variable names for enabling JMX
+    * [HBASE-17427] - region_mover.rb may move region onto the same server
+    * [HBASE-17429] - HBase bulkload cannot support HDFS viewFs
+    * [HBASE-17435] - Call to preCommitStoreFile() hook encounters SaslException in secure deployment
+    * [HBASE-17445] - Count size of serialized exceptions in checking max result size quota
+    * [HBASE-17450] - TablePermission#equals throws NPE after namespace support was added
+    * [HBASE-17452] - Failed taking snapshot - region Manifest proto-message too large
+    * [HBASE-17460] - enable_table_replication can not perform cyclic replication of a table
+    * [HBASE-17464] - Fix HBaseTestingUtility.getNewDataTestDirOnTestFS to always return a unique path
+    * [HBASE-17469] - Properly handle empty TableName in TablePermission#readFields and #write
+    * [HBASE-17471] - Region Seqid will be out of order in WAL if using mvccPreAssign
+    * [HBASE-17475] - Stack overflow in AsyncProcess if retry too much
+    * [HBASE-17489] - ClientScanner may send a next request to a RegionScanner which has been exhausted
+    * [HBASE-17501] - NullPointerException after Datanodes Decommissioned and Terminated
+    * [HBASE-17504] - The passed durability of Increment is ignored when syncing WAL
+    * [HBASE-17510] - DefaultMemStore gets the wrong heap size after rollback
+    * [HBASE-17519] - Rollback the removed cells
+    * [HBASE-17522] - RuntimeExceptions from MemoryMXBean should not take down server process
+    * [HBASE-17534] - SecureBulkLoadClient squashes DoNotRetryIOExceptions from the server
+    * [HBASE-17540] - Change SASL server GSSAPI callback log line from DEBUG to TRACE in RegionServer to reduce log volumes in DEBUG mode
+    * [HBASE-17558] - ZK dumping jsp should escape html 
+    * [HBASE-17565] - StochasticLoadBalancer may incorrectly skip balancing due to skewed multiplier sum
+    * [HBASE-17578] - Thrift per-method metrics should still update in the case of exceptions
+    * [HBASE-17587] - Do not Rethrow DoNotRetryIOException as UnknownScannerException
+    * [HBASE-17590] - Drop cache hint should work for StoreFile write path
+    * [HBASE-17597] - TestMetaWithReplicas.testMetaTableReplicaAssignment is flaky
+    * [HBASE-17601] - close() in TableRecordReaderImpl assumes the split has started
+    * [HBASE-17603] - REST API for scan should return 404 when table does not exist
+    * [HBASE-17607] - REST API for scan should return 404 when table does not exist
+    * [HBASE-17611] - Thrift 2 per-call latency metrics are capped at ~ 2 seconds
+    * [HBASE-17616] - Incorrect actions performed by CM
+    * [HBASE-17617] - Backport HBASE-16731 (Inconsistent results from the Get/Scan if we use the empty FilterList) to branch-1
+    * [HBASE-17639] - Do not stop server if ReplicationSourceManager's waitUntilCanBePushed throws InterruptedException
+    * [HBASE-17648] - HBase Table-level synchronization fails between two secured(kerberized) clusters
+    * [HBASE-17649] - REST API for scan should return 410 when table is disabled
+    * [HBASE-17658] - Fix bookkeeping error with max regions for a table
+    * [HBASE-17661] - fix the queue length passed to FastPathBalancedQueueRpcExecutor
+    * [HBASE-17673] - Monitored RPC Handler not shown in the WebUI
+    * [HBASE-17674] - Major compaction may be cancelled in CompactionChecker
+    * [HBASE-17675] - ReplicationEndpoint should choose new sinks if a SaslException occurs
+    * [HBASE-17677] - ServerName parsing from directory name should be more robust to errors from guava's HostAndPort
+    * [HBASE-17682] - Region stuck in merging_new state indefinitely
+    * [HBASE-17688] - MultiRowRangeFilter not working correctly if given same start and stop RowKey
+    * [HBASE-17698] - ReplicationEndpoint choosing sinks
+    * [HBASE-17710] - HBase in standalone mode creates directories with 777 permission
+    * [HBASE-17712] - Remove/Simplify the logic of RegionScannerImpl.handleFileNotFound
+    * [HBASE-17713] - the interface '/version/cluster' with header 'Accept: application/json' return is not JSON but plain text
+    * [HBASE-17717] - Incorrect ZK ACL set for HBase superuser
+    * [HBASE-17718] - Difference between RS's servername and its ephemeral node cause SSH stop working
+    * [HBASE-17722] - Metrics subsystem stop/start messages add a lot of useless bulk to operational logging
+    * [HBASE-17729] - Missing shortcuts for some useful HCD options
+    * [HBASE-17736] - Some options can't be configured by the shell
+    * [HBASE-17746] - TestSimpleRpcScheduler.testCoDelScheduling is broken
+    * [HBASE-17761] - Test TestRemoveRegionMetrics.testMoveRegion fails intermittently because of race condition
+    * [HBASE-17764] - Solve TestMultiSlaveReplication flakiness 
+    * [HBASE-17773] - VerifyReplication tool wrongly emits warning "ERROR: Invalid argument '--recomparesleep=xx'"
+    * [HBASE-17779] - disable_table_replication returns misleading message and does not turn off replication
+    * [HBASE-17780] - BoundedByteBufferPool "At capacity" messages are not actionable
+    * [HBASE-17798] - RpcServer.Listener.Reader can abort due to CancelledKeyException
+    * [HBASE-17803] - PE always re-creates table when we specify the split policy
+    * [HBASE-17816] - HRegion#mutateRowWithLocks should update writeRequestCount metric
+    * [HBASE-17821] - The CompoundConfiguration#toString is wrong
+    * [HBASE-17861] - Regionserver down when checking the permission of staging dir if hbase.rootdir is on S3
+    * [HBASE-17862] - Condition that always returns true
+    * [HBASE-17869] - UnsafeAvailChecker wrongly returns false on ppc
+    * [HBASE-17871] - scan#setBatch(int) call leads wrong result of VerifyReplication
+    * [HBASE-17886] - Fix compatibility of ServerSideScanMetrics
+    * [HBASE-17893] - Allow HBase to build against Hadoop 2.8.0
+    * [HBASE-17904] - Get runs into NoSuchElementException when using Read Replica, with hbase.ipc.client.specificThreadForWriting set to true and hbase.rpc.client.impl set to org.apache.hadoop.hbase.ipc.RpcClientImpl
+    * [HBASE-17930] - Avoid using Canary.sniff in HBaseTestingUtility
+    * [HBASE-17931] - Assign system tables to servers with highest version
+    * [HBASE-17937] - Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call
+    * [HBASE-17958] - Avoid passing unexpected cell to ScanQueryMatcher when optimize SEEK to SKIP
+    * [HBASE-17985] - Inline package manage updates with package installation in Yetus Dockerfile
+    * [HBASE-17991] - Add more details about compaction queue on /dump
+    * [HBASE-17993] - Delete useless info log in RpcServer.processResponse
+    * [HBASE-18000] - Make sure we always return the scanner id with ScanResponse
+    * [HBASE-18005] - read replica: handle the case that region server hosting both primary replica and meta region is down
+    * [HBASE-18014] - A case of Region remain unassigned when table enabled
+    * [HBASE-18024] - HRegion#initializeRegionInternals should not re-create .hregioninfo file when the region directory no longer exists
+    * [HBASE-18025] - CatalogJanitor should collect outdated RegionStates from the AM
+    * [HBASE-18026] - ProtobufUtil seems to do extra array copying
+    * [HBASE-18027] - Replication should respect RPC size limits when batching edits
+    * [HBASE-18030] - Per Cell TTL tags may get duplicated with increments/Append causing tags length overflow
+    * [HBASE-18035] - Meta replica does not give any primaryOperationTimeout to primary meta region
+    * [HBASE-18036] - HBase 1.x : Data locality is not maintained after cluster restart or SSH
+    * [HBASE-18042] - Client Compatibility breaks between versions 1.2 and 1.3
+    * [HBASE-18054] - log when we add/remove failed servers in client
+    * [HBASE-18058] - Zookeeper retry sleep time should have an upper limit
+    * [HBASE-18066] - Get with closest_row_before on "hbase:meta" can return empty Cell during region merge/split
+    * [HBASE-18069] - Fix flaky test TestReplicationAdminWithClusters#testDisableAndEnableReplication
+    * [HBASE-18077] - Update JUnit license to EPL from CPL
+    * [HBASE-18081] - The way we process connection preamble in SimpleRpcServer is broken
+    * [HBASE-18092] - Removing a peer does not properly clean up the ReplicationSourceManager state and metrics
+    * [HBASE-18093] - Overloading the meaning of 'enabled' in Quota Manager to indicate either quota disabled or quota manager not ready is not good
+    * [HBASE-18099] - FlushSnapshotSubprocedure should wait for concurrent Region#flush() to finish
+    * [HBASE-18111] - Replication stuck when cluster connection is closed
+    * [HBASE-18113] - Handle old client without include_stop_row flag when startRow equals endRow
+    * [HBASE-18122] - Scanner id should include ServerName of region server
+    * [HBASE-18125] - HBase shell disregards spaces at the end of a split key in a split file
+    * [HBASE-18129] - truncate_preserve fails when the truncate method doesn't exists on the master
+    * [HBASE-18132] - Low replication should be checked in period in case of datanode rolling upgrade
+    * [HBASE-18137] - Replication gets stuck for empty WALs
+    * [HBASE-18141] - Regionserver fails to shutdown when abort triggered in RegionScannerImpl during RPC call
+    * [HBASE-18142] - Deletion of a cell deletes the previous versions too
+    * [HBASE-18145] - The flush may cause the corrupt data for reading
+    * [HBASE-18149] - The setting rules for table-scope attributes and family-scope attributes should keep consistent
+    * [HBASE-18150] - hbase.version file is created under both hbase.rootdir and hbase.wal.dir
+    * [HBASE-18159] - Use OpenJDK7 instead of Oracle JDK7 in pre commit docker file
+    * [HBASE-18167] - OfflineMetaRepair tool may cause HMaster abort always
+    * [HBASE-18180] - Possible connection leak while closing BufferedMutator in TableOutputFormat
+    * [HBASE-18184] - Add hbase-hadoop2-compat jar as MapReduce job dependency
+    * [HBASE-18185] - IntegrationTestTimeBoundedRequestsWithRegionReplicas unbalanced tests fails with AssertionError
+    * [HBASE-18192] - Replication drops recovered queues on region server shutdown
+    * [HBASE-18197] - Avoid calling job.waitForCompletion(true) twice
+    * [HBASE-18199] - Race in NettyRpcConnection may cause call stuck in BufferCallBeforeInitHandler forever
+    * [HBASE-18212] - In Standalone mode with local filesystem HBase logs Warning message:Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream
+    * [HBASE-18219] - Fix typo in constant HConstants.HBASE_CLIENT_MEAT_REPLICA_SCAN_TIMEOUT
+    * [HBASE-18230] - Generated LICENSE file includes unsubstituted Velocity variables
+    * [HBASE-18233] - We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock
+    * [HBASE-18247] - Hbck to fix the case that replica region shows as key in the meta table
+    * [HBASE-18255] - Time-Delayed HBase Performance Degradation with Java 7
+    * [HBASE-18267] - The result from the postAppend is ignored
+    * [HBASE-18323] - Remove multiple ACLs for the same user in kerberos
+    * [HBASE-18330] - NPE in ReplicationZKLockCleanerChore
+    * [HBASE-18340] - TestXXXProcedure tests hanging or failing on branch-1 (branch-1.4)
+    * [HBASE-18346] - TestRSKilledWhenInitializing failing on branch-1 (branch-1.4)
+    * [HBASE-18358] - Backport HBASE-18099 'FlushSnapshotSubprocedure should wait for concurrent Region#flush() to finish' to branch-1.3
+    * [HBASE-18362] - hbck should not report split replica parent region from meta as errors 
+    * [HBASE-18377] - Error handling for FileNotFoundException should consider RemoteException in openReader()
+    * [HBASE-18390] - Sleep too long when finding region location failed
+    * [HBASE-18401] - Region Replica shows up in meta table after split
+    * [HBASE-18431] - Mitigate compatibility concerns between branch-1.3 and branch-1.4
+    * [HBASE-18437] - Revoke access permissions of a user from a table does not work as expected
+    * [HBASE-18438] - Precommit doesn't warn about unused imports
+    * [HBASE-18441] - ZookeeperWatcher#interruptedException should throw exception
+    * [HBASE-18447] - MetricRegistryInfo#hashCode uses hashCode instead of toHashCode
+    * [HBASE-18461] - Build broken If the username contains a backslash
+    * [HBASE-18470] - Remove the redundant comma from RetriesExhaustedWithDetailsException#getDesc
+    * [HBASE-18471] - The DeleteFamily cell is skipped when StoreScanner seeks to next column
+    * [HBASE-18473] - VC.listLabels() erroneously closes any connection
+    * [HBASE-18476] - HTable#put should call RS#mutate rather than RS#multi
+    * [HBASE-18479] - should apply HBASE-18255 to HBASE_MASTER_OPTS too
+    * [HBASE-18480] - The cost of BaseLoadBalancer.cluster is changed even if the rollback is done
+    * [HBASE-18481] - The autoFlush flag was not used in PE tool
+    * [HBASE-18487] - Minor fixes in row lock implementation
+    * [HBASE-18505] - Our build/yetus personality will run tests on individual modules and then on all (i.e. 'root'). Should do one or other
+    * [HBASE-18512] - Region Server will abort with IllegalStateException if HDFS umask has limited scope
+    * [HBASE-18526] - FIFOCompactionPolicy pre-check uses wrong scope
+    * [HBASE-18568] - Correct metric of numRegions
+    * [HBASE-18572] - Delete can't remove the cells which have no visibility label
+    * [HBASE-18577] - shaded client includes several non-relocated third party dependencies
+    * [HBASE-18587] - Fix Flaky TestFileIOEngine
+    * [HBASE-18589] - branch-1.4 build compile is broken
+    * [HBASE-18607] - fix submit-patch.py to support utf8
+    * [HBASE-18614] - Setting BUCKET_CACHE_COMBINED_KEY to false disables stats on RS UI
+    * [HBASE-18616] - Shell warns about already initialized constants at startup
+    * [HBASE-18617] - FuzzyRowKeyFilter should not modify the filter pairs
+    * [HBASE-18626] - Handle the incompatible change about the replication TableCFs' config
+    * [HBASE-18628] - ZKPermissionWatcher blocks all ZK notifications
+    * [HBASE-18633] - Add more info to understand the source/scenario of large batch requests exceeding threshold
+    * [HBASE-18641] - Include block content verification logic used in lruCache in bucketCache
+    * [HBASE-18644] - Duplicate "compactionQueueLength" metric in Region Server metrics
+    * [HBASE-18647] - Parameter cacheBlocks does not take effect in REST API for scan
+    * [HBASE-18665] - ReversedScannerCallable invokes getRegionLocations incorrectly
+    * [HBASE-18671] - Support Append/Increment in rest api
+    * [HBASE-18679] - YARN may null Counters object and cause an NPE in ITBLL
+    * [HBASE-18743] - HFiles in use by a table which has the same name and namespace with a default table cloned from snapshot may be deleted when that snapshot and default table are deleted
+    * [HBASE-18757] - Fix Improper bitwise & in BucketCache offset calculation
+    * [HBASE-18762] - Canary sink type cast error
+    * [HBASE-18771] - Incorrect StoreFileRefresh leading to split and compaction failures
+    * [HBASE-18789] - Displays the reporting interval of each RS on the Master page
+    * [HBASE-18796] - Admin#isTableAvailable returns incorrect result before daughter regions are opened
+    * [HBASE-18801] - Bulk load cleanup may falsely deem file deletion successful
+    * [HBASE-18810] - TestClientScannerRPCTimeout failing in branch-1 / branch-1.4
+    * [HBASE-18813] - TestCanaryTool fails on branch-1 / branch-1.4 
+    * [HBASE-18818] - TestConnectionImplemenation fails
+    * [HBASE-18830] - TestCanaryTool does not check Canary monitor's error code
+    * [HBASE-18885] - HFileOutputFormat2 hardcodes default FileOutputCommitter
+    * [HBASE-18890] - Backport HBASE-14499 (Master coprocessors shutdown will not happen on master abort) to branch-1
+    * [HBASE-18921] - Result.current() throws ArrayIndexOutOfBoundsException after calling advance()
+    * [HBASE-18923] - TestTableResource flaky on branch-1
+    * [HBASE-18934] - precommit on branch-1 isn't supposed to run against hadoop 3
+    * [HBASE-18940] - branch-2 (and probably others) fail check of generated source artifact
+    * [HBASE-18942] - hbase-hadoop2-compat module ignores hadoop-3 profile
+    * [HBASE-18959] - Backport HBASE-18874 (HMaster abort message will be skipped if Throwable is passed null) to branch-1
+    * [HBASE-18998] - processor.getRowsToLock() always assumes there is some row being locked
+    * [HBASE-19014] - surefire fails; When writing xml report stdout/stderr ... No such file or directory
+    * [HBASE-19020] - TestXmlParsing exception checking relies on a particular xml implementation without declaring it.
+    * [HBASE-19030] - nightly runs should attempt to log test results after archiving
+    * [HBASE-19035] - Miss metrics when coprocessor use region scanner to read data
+    * [HBASE-19038] - precommit mvn install should run from root on patch
+    * [HBASE-19039] - refactor shadedjars test to only run on java changes.
+    * [HBASE-19055] - Backport HBASE-19042 to other active branches
+    * [HBASE-19056] - TestCompactionInDeadRegionServer is top of the flakies charts!
+    * [HBASE-19058] - The wget isn't installed in building docker image
+    * [HBASE-19060] - "Hadoop check" test is running all the time instead of just when changes to java
+    * [HBASE-19061] - enforcer NPE on hbase-shaded-invariants
+    * [HBASE-19066] - Correct the directory of openjdk-8 for jenkins
+    * [HBASE-19072] - Missing break in catch block of InterruptedException in HRegion#waitForFlushes() 
+    * [HBASE-19088] - move_tables_rsgroup will throw an exception when the table is disabled
+    * [HBASE-19094] - NPE in RSGroupStartupWorker.waitForGroupTableOnline during master startup
+    * [HBASE-19098] - Python based compatiblity checker fails if git repo does not have a remote named 'origin'
+    * [HBASE-19102] - TestZooKeeperMainServer fails with KeeperException$ConnectionLossException
+    * [HBASE-19124] - Move HBase-Nightly source artifact creation test from JenkinsFile to a script in dev-support
+    * [HBASE-19129] - TestChoreService is flaky (branch-1 / branch-1.4)
+    * [HBASE-19137] - Nightly test should make junit reports optional rather than attempt archive after reporting.
+    * [HBASE-19138] - Rare failure in TestLruBlockCache
+    * [HBASE-19144] - [RSgroups] Retry assignments in FAILED_OPEN state when servers (re)join the cluster
+    * [HBASE-19150] - TestSnapshotWithAcl is flaky
+    * [HBASE-19156] - Duplicative regions_per_server options on LoadTestTool
+    * [HBASE-19173] - Configure IntegrationTestRSGroup automatically for minicluster mode
+    * [HBASE-19184] - clean up nightly source artifact test to match expectations from switch to git-archive
+    * [HBASE-19188] - Build fails on branch-1 using maven-3.5.2
+    * [HBASE-19194] - TestRSGroupsBase has some always false checks
+    * [HBASE-19195] - More error-prone fixes
+    * [HBASE-19198] - TestIPv6NIOServerSocketChannel fails; unable to bind
+    * [HBASE-19215] - Incorrect exception handling on the client causes incorrect call timeouts and byte buffer allocations on the server
+    * [HBASE-19223] - Remove references to Date Tiered compaction from branch-1.2 and branch-1.1 ref guide
+    * [HBASE-19229] - Nightly script to check source artifact should not do a destructive git operation without opt-in
+    * [HBASE-19245] - MultiTableInputFormatBase#getSplits creates a Connection per Table
+    * [HBASE-19249] - test for "hbase antipatterns" should check _count_ of occurance rather than text of
+    * [HBASE-19250] - TestClientClusterStatus is flaky
+    * [HBASE-19260] - Add lock back to avoid parallel accessing meta to locate region
+    * [HBASE-19285] - Add per-table latency histograms
+    * [HBASE-19300] - TestMultithreadedTableMapper fails in branch-1.4
+    * [HBASE-19325] - Pass a list of server name to postClearDeadServers
+    * [HBASE-19350] - TestMetaWithReplicas is flaky
+    * [HBASE-19376] - Fix more binary compatibility problems with branch-1.4 / branch-1
+    * [HBASE-19379] - TestEndToEndSplitTransaction fails with NPE
+    * [HBASE-19381] - TestGlobalThrottler doesn't make progress (branch-1.4)
+    * [HBASE-19385] - [1.3] TestReplicator failed 1.3 nightly
+    * [HBASE-19388] - Incorrect value is being set for Compaction Pressure in RegionLoadStats object inside HRegion class
+    * [HBASE-19393] - HTTP 413 FULL head while accessing HBase UI using SSL. 
+    * [HBASE-19395] - [branch-1] TestEndToEndSplitTransaction.testMasterOpsWhileSplitting fails with NPE
+    * [HBASE-19396] - Fix flaky test TestHTableMultiplexerFlushCache
+    * [HBASE-19406] - Fix CompactionRequest equals and hashCode
+    * [HBASE-19423] - Replication entries are not filtered correctly when replication scope is set through WAL Co-processor
+    * [HBASE-19429] - Release build fails in checkstyle phase of site target (branch-1)
 
+** Improvement
+    * [HBASE-11013] - Clone Snapshots on Secure Cluster Should provide option to apply Retained User Permissions
+    * [HBASE-12350] - Backport error-prone build support to branch-1 and branch-2
+    * [HBASE-12770] - Don't transfer all the queued hlogs of a dead server to the same alive server
+    * [HBASE-12870] - "Major compaction triggered" and "Skipping major compaction" messages lack the region information
+    * [HBASE-13718] - Add a pretty printed table description to the table detail page of HBase's master
+    * [HBASE-14007] - Writing to table through MR should fail upfront if table does not exist/is disabled
+    * [HBASE-14220] - nightly tests should verify src tgz generates and builds correctly
+    * [HBASE-14548] - Expand how table coprocessor jar and dependency path can be specified
+    * [HBASE-14574] - TableOutputFormat#getRecordWriter javadoc misleads
+    * [HBASE-14871] - Allow specifying the base branch for make_patch
+    * [HBASE-14925] - Develop HBase shell command/tool to list table's region info through command line
+    * [HBASE-14985] - TimeRange constructors should set allTime when appropriate
+    * [HBASE-15191] - CopyTable and VerifyReplication - Option to specify batch size, versions
+    * [HBASE-15243] - Utilize the lowest seek value when all Filters in MUST_PASS_ONE FilterList return SEEK_NEXT_USING_HINT
+    * [HBASE-15429] - Add a split policy for busy regions
+    * [HBASE-15451] - Remove unnecessary wait in MVCC
+    * [HBASE-15496] - Throw RowTooBigException only for user scan/get
+    * [HBASE-15529] - Override needBalance in StochasticLoadBalancer
+    * [HBASE-15571] - Make MasterProcedureManagerHost accessible through MasterServices
+    * [HBASE-15614] - Report metrics from JvmPauseMonitor
+    * [HBASE-15686] - Add override mechanism for the exempt classes when dynamically loading table coprocessor
+    * [HBASE-15727] - Canary Tool for Zookeeper
+    * [HBASE-15802] - ConnectionUtils should use ThreadLocalRandom instead of Random
+    * [HBASE-15816] - Provide client with ability to set priority on Operations 
+    * [HBASE-15842] - SnapshotInfo should display ownership information
+    * [HBASE-15843] - Replace RegionState.getRegionInTransition() Map with a Set
+    * [HBASE-15849] - Shell Cleanup: Simplify handling of commands' runtime
+    * [HBASE-15924] - Enhance hbase services autorestart capability to hbase-daemon.sh 
+    * [HBASE-15941] - HBCK repair should not unsplit healthy splitted region
+    * [HBASE-16008] - A robust way deal with early termination of HBCK
+    * [HBASE-16052] - Improve HBaseFsck Scalability
+    * [HBASE-16108] - RowCounter should support multiple key ranges
+    * [HBASE-16114] - Get regionLocation of required regions only for MR jobs
+    * [HBASE-16116] - Remove redundant pattern *.iml
+    * [HBASE-16147] - Add ruby wrapper for getting compaction state
+    * [HBASE-16188] - Add EventCounter information to log4j properties file
+    * [HBASE-16220] - Demote log level for "HRegionFileSystem - No StoreFiles for" messages to TRACE
+    * [HBASE-16224] - Reduce the number of RPCs for the large PUTs
+    * [HBASE-16225] - Refactor ScanQueryMatcher
+    * [HBASE-16262] - Update RegionsInTransition UI for branch-1
+    * [HBASE-16275] - Change ServerManager#onlineServers from ConcurrentHashMap to ConcurrentSkipListMap
+    * [HBASE-16299] - Update REST API scanner with ability to do reverse scan
+    * [HBASE-16302] - age of last shipped op and age of last applied op should be histograms
+    * [HBASE-16351] - do dependency license check via enforcer plugin
+    * [HBASE-16399] - Provide an API to get list of failed regions and servername in Canary
+    * [HBASE-16419] - check REPLICATION_SCOPE's value more stringently
+    * [HBASE-16423] - Add re-compare option to VerifyReplication to avoid occasional inconsistent rows
+    * [HBASE-16448] - Custom metrics for custom replication endpoints
+    * [HBASE-16455] - Provide API for obtaining all the WAL files
+    * [HBASE-16469] - Several log refactoring/improvement suggestions
+    * [HBASE-16502] - Reduce garbage in BufferedDataBlockEncoder
+    * [HBASE-16508] - Move UnexpectedStateException to common
+    * [HBASE-16541] - Avoid unnecessary cell copy in Result#compareResults
+    * [HBASE-16561] - Add metrics about read/write/scan queue length and active read/write/scan handler count
+    * [HBASE-16562] - ITBLL should fail to start if misconfigured
+    * [HBASE-16585] - Rewrite the delegation token tests with Parameterized pattern
+    * [HBASE-16616] - Rpc handlers stuck on ThreadLocalMap.expungeStaleEntry
+    * [HBASE-16640] - TimeoutBlockingQueue#remove() should return whether the entry is removed
+    * [HBASE-16672] - Add option for bulk load to always copy hfile(s) instead of renaming
+    * [HBASE-16694] - Reduce garbage for onDiskChecksum in HFileBlock
+    * [HBASE-16698] - Performance issue: handlers stuck waiting for CountDownLatch inside WALKey#getWriteEntry under high writing workload
+    * [HBASE-16705] - Eliminate long to Long auto boxing in LongComparator
+    * [HBASE-16708] - Expose endpoint Coprocessor name in "responseTooSlow" log messages
+    * [HBASE-16755] - Honor flush policy under global memstore pressure
+    * [HBASE-16772] - Add verbose option to VerifyReplication for logging good rows
+    * [HBASE-16773] - AccessController should access local region if possible
+    * [HBASE-16832] - Reduce the default number of versions in Meta table for branch-1
+    * [HBASE-16840] - Reuse cell's timestamp and type in ScanQueryMatcher
+    * [HBASE-16894] - Create more than 1 split per region, generalize HBASE-12590
+    * [HBASE-16946] - Provide Raw scan as an option in VerifyReplication 
+    * [HBASE-16947] - Some improvements for DumpReplicationQueues tool
+    * [HBASE-16969] - RegionCoprocessorServiceExec should override the toString() for debugging
+    * [HBASE-16977] - VerifyReplication should log a printable representation of the row keys
+    * [HBASE-17026] - VerifyReplication log should distinguish whether good row key is result of revalidation
+    * [HBASE-17057] - Minor compactions should also drop page cache behind reads
+    * [HBASE-17077] - Don't copy the replication queue belonging to the peer which has been deleted
+    * [HBASE-17088] - Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
+    * [HBASE-17178] - Add region balance throttling
+    * [HBASE-17205] - Add a metric for the duration of region in transition
+    * [HBASE-17211] - Add more details in log when UnknownScannerException thrown in ScannerCallable
+    * [HBASE-17212] - Should add null checker on table name in HTable constructor and RegionServerCallable
+    * [HBASE-17276] - Reduce log spam from WrongRegionException in large multi()'s
+    * [HBASE-17280] - Add mechanism to control hbase cleaner behavior
+    * [HBASE-17292] - Add observer notification before bulk loaded hfile is moved to region directory
+    * [HBASE-17296] - Provide per peer throttling for replication
+    * [HBASE-17318] - Increment does not add new column if the increment amount is zero at first time writing
+    * [HBASE-17332] - Replace HashMap to Array for DataBlockEncoding.idToEncoding
+    * [HBASE-17437] - Support specifying a WAL directory outside of the root directory
+    * [HBASE-17448] - Export metrics from RecoverableZooKeeper
+    * [HBASE-17462] - Use sliding window for read/write request costs in StochasticLoadBalancer
+    * [HBASE-17472] - Correct the semantic of permission grant
+    * [HBASE-17488] - WALEdit should be lazily instantiated
+    * [HBASE-17494] - Guard against cloning family of all cells if no data need be replicated
+    * [HBASE-17505] - Do not issue close scanner request if RS tells us there is no more results for this region
+    * [HBASE-17514] - Warn when Thrift Server 1 is configured for proxy users but not the HTTP transport
+    * [HBASE-17543] - Create additional ReplicationEndpoint WALEntryFilters by configuration
+    * [HBASE-17623] - Reuse the bytes array when building the hfile block
+    * [HBASE-17627] - Active workers metric for thrift
+    * [HBASE-17634] - Clean up the usage of Result.isPartial
+    * [HBASE-17637] - Update progress more frequently in IntegrationTestBigLinkedList.Generator.persist
+    * [HBASE-17689] - Add support for table.existsAll in thrift2 THBaseservice
+    * [HBASE-17716] - Formalize Scan Metric names
+    * [HBASE-17731] - Fractional latency reporting in MultiThreadedAction
+    * [HBASE-17778] - Remove the testing code in the AsyncRequestFutureImpl
+    * [HBASE-17817] - Make Regionservers log which tables it removed coprocessors from when aborting
+    * [HBASE-17831] - Support small scan in thrift2
+    * [HBASE-17835] - Spelling mistakes in the Java source
+    * [HBASE-17877] - Improve HBase's byte[] comparator
+    * [HBASE-17912] - Avoid major compactions on region server startup
+    * [HBASE-17916] - Error message not clear when the permission of staging dir is not as expected
+    * [HBASE-17924] - Consider sorting the row order when processing multi() ops before taking rowlocks
+    * [HBASE-17944] - Removed unused JDK version parsing from ClassSize.
+    * [HBASE-17956] - Raw scan should ignore TTL
+    * [HBASE-17959] - Canary timeout should be configurable on a per-table basis
+    * [HBASE-17962] - Improve documentation on Rest interface
+    * [HBASE-17973] - Create shell command to identify regions with poor locality
+    * [HBASE-17979] - HBase Shell 'list' Command Help Doc Improvements
+    * [HBASE-17995] - improve log messages during snapshot related tests
+    * [HBASE-18021] - Add more info in timed out RetriesExhaustedException for read replica client get processing, 
+    * [HBASE-18023] - Log multi-* requests for more than threshold number of rows
+    * [HBASE-18041] - Add pylintrc file to HBase
+    * [HBASE-18043] - Institute a hard limit for individual cell size that cannot be overridden by clients
+    * [HBASE-18090] - Improve TableSnapshotInputFormat to allow more multiple mappers per region
+    * [HBASE-18094] - Display the return value of the command append
+    * [HBASE-18164] - Much faster locality cost function and candidate generator
+    * [HBASE-18248] - Warn if monitored RPC task has been tied up beyond a configurable threshold
+    * [HBASE-18251] - Remove unnecessary traversing to the first and last keys in the CellSet
+    * [HBASE-18252] - Resolve BaseLoadBalancer bad practice warnings
+    * [HBASE-18286] - Create static empty byte array to save memory
+    * [HBASE-18374] - RegionServer Metrics improvements
+    * [HBASE-18387] - [Thrift] Make principal configurable in DemoClient.java
+    * [HBASE-18426] - nightly job should use independent stages to check supported jdks
+    * [HBASE-18436] - Add client-side hedged read metrics
+    * [HBASE-18469] - Correct RegionServer metric of totalRequestCount
+    * [HBASE-18478] - Allow users to remove RegionFinder from LoadBalancer calculations if no locality possible
+    * [HBASE-18520] - Add jmx value to determine true Master Start time
+    * [HBASE-18522] - Add RowMutations support to Batch
+    * [HBASE-18532] - Improve cache related stats rendered on RS UI
+    * [HBASE-18533] - Expose BucketCache values to be configured
+    * [HBASE-18555] - Remove redundant familyMap.put() from addxxx() of sub-classes of Mutation and Query
+    * [HBASE-18559] - Add histogram to MetricsConnection to track concurrent calls per server
+    * [HBASE-18573] - Update Append and Delete to use Mutation#getCellList(family)
+    * [HBASE-18602] - rsgroup cleanup unassign code
+    * [HBASE-18631] - Allow configuration of ChaosMonkey properties via hbase-site
+    * [HBASE-18652] - Expose individual cache stats in a CombinedCache through JMX
+    * [HBASE-18675] - Making {max,min}SessionTimeout configurable for MiniZooKeeperCluster
+    * [HBASE-18737] - Display configured max size of memstore and cache on RS UI
+    * [HBASE-18740] - Upgrade Zookeeper version to 3.4.10
+    * [HBASE-18746] - Throw exception with job.getStatus().getFailureInfo() when ExportSnapshot fails
+    * [HBASE-18814] - Make ScanMetrics enabled and add counter <HBase Counters, ROWS_SCANNED> into the MapReduce Job over snapshot
+    * [HBASE-18993] - Backport patches in HBASE-18410 to branch-1.x branches.
+    * [HBASE-19051] - Add new split algorithm for num string
+    * [HBASE-19052] - FixedFileTrailer should recognize CellComparatorImpl class in branch-1.x
+    * [HBASE-19091] - Code annotation wrote "BinaryComparator" instead of "LongComparator"
+    * [HBASE-19140] - hbase-cleanup.sh uses deprecated call to remove files in hdfs
+    * [HBASE-19228] - nightly job should gather machine stats.
+    * [HBASE-19239] - Fix findbugs and error-prone warnings (branch-1)
+    * [HBASE-19262] - Revisit checkstyle rules
 
-** Bug
-    * [HBASE-7211] - Improve hbase ref guide for the testing part.
-    * [HBASE-9003] - TableMapReduceUtil should not rely on org.apache.hadoop.util.JarFinder#getJar
-    * [HBASE-9117] - Remove HTablePool and all HConnection pooling related APIs
-    * [HBASE-9157] - ZKUtil.blockUntilAvailable loops forever with non-recoverable errors
-    * [HBASE-9527] - Review all old api that takes a table name as a byte array and ensure none can pass ns + tablename
-    * [HBASE-10536] - ImportTsv should fail fast if any of the column family passed to the job is not present in the table
-    * [HBASE-10780] - HFilePrettyPrinter#processFile should return immediately if file does not exist
-    * [HBASE-11099] - Two situations where we could open a region with smaller sequence number
-    * [HBASE-11562] - CopyTable should provide an option to shuffle the mapper tasks
-    * [HBASE-11835] - Wrong managenement of non expected calls in the client
-    * [HBASE-12017] - Use Connection.createTable() instead of HTable constructors.
-    * [HBASE-12029] - Use Table and RegionLocator in HTable.getRegionLocations() 
-    * [HBASE-12053] - SecurityBulkLoadEndPoint set 777 permission on input data files 
-    * [HBASE-12072] - Standardize retry handling for master operations
-    * [HBASE-12083] - Deprecate new HBaseAdmin() in favor of Connection.getAdmin()
-    * [HBASE-12142] - Truncate command does not preserve ACLs table
-    * [HBASE-12194] - Make TestEncodedSeekers faster
-    * [HBASE-12219] - Cache more efficiently getAll() and get() in FSTableDescriptors
-    * [HBASE-12226] - TestAccessController#testPermissionList failing on master
-    * [HBASE-12229] - NullPointerException in SnapshotTestingUtils
-    * [HBASE-12234] - Make TestMultithreadedTableMapper a little more stable.
-    * [HBASE-12237] - HBaseZeroCopyByteString#wrap() should not be called in hbase-client code
-    * [HBASE-12238] - A few ugly exceptions on startup
-    * [HBASE-12240] - hbase-daemon.sh should remove pid file if process not found running
-    * [HBASE-12241] - The crash of regionServer when taking deadserver's replication queue breaks replication
-    * [HBASE-12242] - Fix new javadoc warnings in Admin, etc.
-    * [HBASE-12246] - Compilation with hadoop-2.3.x and 2.2.x is broken
-    * [HBASE-12247] - Replace setHTable() with initializeTable() in TableInputFormat.
-    * [HBASE-12248] - broken link in hbase shell help
-    * [HBASE-12252] - IntegrationTestBulkLoad fails with illegal partition error
-    * [HBASE-12257] - TestAssignmentManager unsynchronized access to regionPlans
-    * [HBASE-12258] - Make TestHBaseFsck less flaky
-    * [HBASE-12261] - Add checkstyle to HBase build process
-    * [HBASE-12263] - RegionServer listens on localhost in distributed cluster when DNS is unavailable
-    * [HBASE-12265] - HBase shell 'show_filters' points to internal Facebook URL
-    * [HBASE-12274] - Race between RegionScannerImpl#nextInternal() and RegionScannerImpl#close() may produce null pointer exception
-    * [HBASE-12277] - Refactor bulkLoad methods in AccessController to its own interface
-    * [HBASE-12278] - Race condition in TestSecureLoadIncrementalHFilesSplitRecovery
-    * [HBASE-12279] - Generated thrift files were generated with the wrong parameters
-    * [HBASE-12281] - ClonedPrefixTreeCell should implement HeapSize
-    * [HBASE-12285] - Builds are failing, possibly because of SUREFIRE-1091
-    * [HBASE-12294] - Can't build the docs after the hbase-checkstyle module was added
-    * [HBASE-12301] - user_permission command does not show global permissions
-    * [HBASE-12302] - VisibilityClient getAuths does not propagate remote service exception correctly
-    * [HBASE-12304] - CellCounter will throw AIOBE when output directory is not specified
-    * [HBASE-12306] - CellCounter output's wrong value for Total Families Across all Rows in output file
-    * [HBASE-12308] - Fix typo in hbase-rest profile name
-    * [HBASE-12312] - Another couple of createTable race conditions
-    * [HBASE-12314] - Add chaos monkey policy to execute two actions concurrently
-    * [HBASE-12315] - Fix 0.98 Tests after checkstyle got parented
-    * [HBASE-12316] - test-patch.sh (Hadoop-QA) outputs the wrong release audit warnings URL
-    * [HBASE-12318] - Add license header to checkstyle xml files
-    * [HBASE-12319] - Inconsistencies during region recovery due to close/open of a region during recovery
-    * [HBASE-12322] - Add clean up command to ITBLL
-    * [HBASE-12327] - MetricsHBaseServerSourceFactory#createContextName has wrong conditions
-    * [HBASE-12329] - Table create with duplicate column family names quietly succeeds
-    * [HBASE-12334] - Handling of DeserializationException causes needless retry on failure
-    * [HBASE-12336] - RegionServer failed to shutdown for NodeFailoverWorker thread
-    * [HBASE-12337] - Import tool fails with NullPointerException if clusterIds is not initialized
-    * [HBASE-12346] - Scan's default auths behavior under Visibility labels
-    * [HBASE-12352] - Add hbase-annotation-tests to runtime classpath so can run hbase it tests.
-    * [HBASE-12356] - Rpc with region replica does not propagate tracing spans
-    * [HBASE-12359] - MulticastPublisher should specify IPv4/v6 protocol family when creating multicast channel
-    * [HBASE-12366] - Add login code to HBase Canary tool.
-    * [HBASE-12372] - [WINDOWS] Enable log4j configuration in hbase.cmd 
-    * [HBASE-12375] - LoadIncrementalHFiles fails to load data in table when CF name starts with '_'
-    * [HBASE-12377] - HBaseAdmin#deleteTable fails when META region is moved around the same timeframe
-    * [HBASE-12384] - TestTags can hang on fast test hosts
-    * [HBASE-12386] - Replication gets stuck following a transient zookeeper error to remote peer cluster
-    * [HBASE-12398] - Region isn't assigned in an extreme race condition
-    * [HBASE-12399] - Master startup race between metrics and RpcServer
-    * [HBASE-12402] - ZKPermissionWatcher race condition in refreshing the cache leaving stale ACLs and causing AccessDenied
-    * [HBASE-12407] - HConnectionKey doesn't contain CUSTOM_CONTROLLER_CONF_KEY in CONNECTION_PROPERTIES 
-    * [HBASE-12414] - Move HFileLink.exists() to base class
-    * [HBASE-12417] - Scan copy constructor does not retain small attribute
-    * [HBASE-12419] - "Partial cell read caused by EOF" ERRORs on replication source during replication
-    * [HBASE-12420] - BucketCache logged startup message is egregiously large
-    * [HBASE-12423] - Use a non-managed Table in TableOutputFormat
-    * [HBASE-12428] - region_mover.rb script is broken if port is not specified
-    * [HBASE-12440] - Region may remain offline on clean startup under certain race condition
-    * [HBASE-12445] - hbase is removing all remaining cells immediately after the cell marked with marker = KeyValue.Type.DeleteColumn via PUT
-    * [HBASE-12448] - Fix rate reporting in compaction progress DEBUG logging
-    * [HBASE-12449] - Use the max timestamp of current or old cell's timestamp in HRegion.append()
-    * [HBASE-12450] - Unbalance chaos monkey might kill all region servers without starting them back
-    * [HBASE-12459] - Use a non-managed Table in mapred.TableOutputFormat
-    * [HBASE-12460] - Moving Chore to hbase-common module.
-    * [HBASE-12461] - FSVisitor logging is excessive
-    * [HBASE-12464] - meta table region assignment stuck in the FAILED_OPEN state due to region server not fully ready to serve
-    * [HBASE-12478] - HBASE-10141 and MIN_VERSIONS are not compatible
-    * [HBASE-12479] - Backport HBASE-11689 (Track meta in transition) to 0.98 and branch-1
-    * [HBASE-12490] - Replace uses of setAutoFlush(boolean, boolean)
-    * [HBASE-12491] - TableMapReduceUtil.findContainingJar() NPE
-    * [HBASE-12495] - Use interfaces in the shell scripts
-    * [HBASE-12513] - Graceful stop script does not restore the balancer state
-    * [HBASE-12514] - Cleanup HFileOutputFormat legacy code
-    * [HBASE-12520] - Add protected getters on TableInputFormatBase
-    * [HBASE-12533] - staging directories are not deleted after secure bulk load
-    * [HBASE-12536] - Reduce the effective scope of GLOBAL CREATE and ADMIN permission
-    * [HBASE-12537] - HBase should log the remote host on replication error
-    * [HBASE-12539] - HFileLinkCleaner logs are uselessly noisy
-    * [HBASE-12541] - Add misc debug logging to hanging tests in TestHCM and TestBaseLoadBalancer
-    * [HBASE-12544] - ops_mgt.xml missing in branch-1
-    * [HBASE-12550] - Check all storefiles are referenced before splitting
-    * [HBASE-12560] - [WINDOWS] Append the classpath from Hadoop to HBase classpath in bin/hbase.cmd
-    * [HBASE-12576] - Add metrics for rolling the HLog if there are too few DN's in the write pipeline
-    * [HBASE-12580] - Zookeeper instantiated even though we might not need it in the shell
-    * [HBASE-12581] - TestCellACLWithMultipleVersions failing since task 5 HBASE-12404 (HBASE-12404 addendum)
-    * [HBASE-12584] - Fix branch-1 failing since task 5 HBASE-12404 (HBASE-12404 addendum)
-    * [HBASE-12595] - Use Connection.getTable() in table.rb
-    * [HBASE-12600] - Remove REPLAY tag dependency in Distributed Replay Mode
-    * [HBASE-12610] - Close connection in TableInputFormatBase
-    * [HBASE-12611] - Create autoCommit() method and remove clearBufferOnFail
-    * [HBASE-12614] - Potentially unclosed StoreFile(s) in DefaultCompactor#compact()
-    * [HBASE-12616] - We lost the IntegrationTestBigLinkedList COMMANDS in recent usage refactoring
+** New Feature
+    * [HBASE-15134] - Add visibility into Flush and Compaction queues
+    * [HBASE-15576] - Scanning cursor to prevent blocking long time on ResultScanner.next()
+    * [HBASE-15631] - Backport Regionserver Groups (HBASE-6721) to branch-1 
+    * [HBASE-15633] - Backport HBASE-15507 to branch-1
+    * [HBASE-15847] - VerifyReplication prefix filtering
+    * [HBASE-16213] - A new HFileBlock structure for fast random get
+    * [HBASE-16388] - Prevent client threads being blocked by only one slow region server
+    * [HBASE-16677] - Add table size (total store file size) to table page
+    * [HBASE-17181] - Let HBase thrift2 support TThreadedSelectorServer
+    * [HBASE-17737] - Thrift2 proxy should support scan timeRange per column family
+    * [HBASE-17757] - Unify blocksize after encoding to decrease memory fragment 
+    * [HBASE-18060] - Backport to branch-1 HBASE-9774 HBase native metrics and metric collection for coprocessors
+    * [HBASE-18131] - Add an hbase shell command to clear deadserver list in ServerManager
+    * [HBASE-18226] - Disable reverse DNS lookup at HMaster and use the hostname provided by RegionServer
+    * [HBASE-18875] - Thrift server supports read-only mode
+    * [HBASE-19189] - Ad-hoc test job for running a subset of tests lots of times
+    * [HBASE-19326] - Remove decommissioned servers from rsgroup
 
+** Task
+    * [HBASE-4368] - Expose processlist in shell (per regionserver and perhaps by cluster)
+    * [HBASE-14635] - Fix flaky test TestSnapshotCloneIndependence
+    * [HBASE-16335] - update to latest apache parent pom
+    * [HBASE-16459] - Remove unused hbase shell --format option
+    * [HBASE-16584] - Backport the new ipc implementation in HBASE-16432 to branch-1
+    * [HBASE-17609] - Allow for region merging in the UI 
+    * [HBASE-17954] - Switch findbugs implementation to spotbugs
+    * [HBASE-17965] - Canary tool should print the regionserver name on failure
+    * [HBASE-17968] - Update copyright year in NOTICE file
+    * [HBASE-18096] - Limit HFileUtil visibility and add missing annotations
+    * [HBASE-18527] - update nightly builds to compensate for jenkins plugin upgrades
+    * [HBASE-18582] - Correct the docs for Mutation#setCellVisibility
+    * [HBASE-18623] - Frequent failed to parse at EOF warnings from WALEntryStream
+    * [HBASE-18670] - Add .DS_Store to .gitignore
+    * [HBASE-18690] - Replace o.a.h.c.InterfaceAudience by o.a.h.h.c.InterfaceAudience
+    * [HBASE-18833] - Ensure precommit personality is up to date on all active branches
+    * [HBASE-18996] - Backport HBASE-17703 (TestThriftServerCmdLine is flaky in master branch) to branch-1
+    * [HBASE-19097] - update testing to use Apache Yetus Test Patch version 0.6.0
+    * [HBASE-19099] - Evaluate the remaining API compatibility concerns between branch-1.3 and branch-1.4 / branch-1
+    * [HBASE-19217] - Update supplemental-models.xml for jetty-sslengine
+    * [HBASE-19232] - Fix shaded-check-invariants (check-jar-contents) failure on branch-1
+    * [HBASE-19419] - Remove hbase-native-client from branch-1
+    * [HBASE-19420] - Backport HBASE-19152 (Update refguide 'how to build an RC' and the make_rc.sh script) to branch-1
 
+** Test
+    * [HBASE-16349] - TestClusterId may hang during cluster shutdown
+    * [HBASE-16418] - Reduce duration of sleep waiting for region reopen in IntegrationTestBulkLoad#installSlowingCoproc()
+    * [HBASE-16639] - TestProcedureInMemoryChore#testChoreAddAndRemove occasionally fails
+    * [HBASE-16725] - Don't let flushThread hang in TestHRegion
+    * [HBASE-17189] - TestMasterObserver#wasModifyTableActionCalled uses wrong variables
+    * [HBASE-18147] - nightly job to check health of active branches
+    * [HBASE-18979] - TestInterfaceAudienceAnnotations fails on branch-1.3
 
+** Umbrella
+    * [HBASE-18266] - Eliminate the warnings from the spotbugs
 
-** Improvement
-    * [HBASE-2609] - Harmonize the Get and Delete operations
-    * [HBASE-4955] - Use the official versions of surefire & junit
-    * [HBASE-8361] - Bulk load and other utilities should not create tables for user
-    * [HBASE-8572] - Enhance delete_snapshot.rb to call snapshot deletion API with regex
-    * [HBASE-10082] - Describe 'table' output is all on one line, could use better formatting
-    * [HBASE-10483] - Provide API for retrieving info port when hbase.master.info.port is set to 0
-    * [HBASE-11639] - [Visibility controller] Replicate the visibility of Cells as strings
-    * [HBASE-11870] - Optimization : Avoid copy of key and value for tags addition in AC and VC
-    * [HBASE-12161] - Add support for grant/revoke on namespaces in AccessControlClient
-    * [HBASE-12243] - HBaseFsck should auto set ignorePreCheckPermission to true if no fix option is set.
-    * [HBASE-12249] - Script to help you adhere to the patch-naming guidelines
-    * [HBASE-12264] - ImportTsv should fail fast if output is not specified and table does not exist
-    * [HBASE-12271] - Add counters for files skipped during snapshot export
-    * [HBASE-12272] - Generate Thrift code through maven
-    * [HBASE-12328] - Need to separate JvmMetrics for Master and RegionServer
-    * [HBASE-12389] - Reduce the number of versions configured for the ACL table
-    * [HBASE-12390] - Change revision style from svn to git
-    * [HBASE-12411] - Optionally enable p-reads and private readers for compactions
-    * [HBASE-12416] - RegionServerCallable should report what host it was communicating with
-    * [HBASE-12424] - Finer grained logging and metrics for split transactions
-    * [HBASE-12432] - RpcRetryingCaller should log after fixed number of retries like AsyncProcess
-    * [HBASE-12434] - Add a command to compact all the regions in a regionserver
-    * [HBASE-12447] - Add support for setTimeRange for RowCounter and CellCounter
-    * [HBASE-12455] - Add 'description' to bean and attribute output when you do /jmx?description=true
-    * [HBASE-12529] - Use ThreadLocalRandom for RandomQueueBalancer
-    * [HBASE-12569] - Control MaxDirectMemorySize in the same manner as heap size
 
-** New Feature
-    * [HBASE-8707] - Add LongComparator for filter
-    * [HBASE-12286] - [shell] Add server/cluster online load of configuration changes
-    * [HBASE-12361] - Show data locality of region in table page
-    * [HBASE-12496] - A blockedRequestsCount metric
+Release Notes - HBase - Version 1.3.1 04/30/2017
 
+** Sub-task
+    * [HBASE-15386] - PREFETCH_BLOCKS_ON_OPEN in HColumnDescriptor is ignored
+    * [HBASE-17060] - backport HBASE-16570 to 1.3.1
+    * [HBASE-17561] - table status page should escape values that may contain arbitrary characters.
 
+** Bug
+    * [HBASE-14753] - TestShell is not invoked anymore
+    * [HBASE-15328] - Unvalidated Redirect in HMaster
+    * [HBASE-15635] - Mean age of Blocks in cache (seconds) on webUI should be greater than zero
+    * [HBASE-16630] - Fragmentation in long running Bucket Cache
+    * [HBASE-16886] - hbase-client: scanner with reversed=true and small=true gets no result
+    * [HBASE-16939] - ExportSnapshot: set owner and permission on right directory
+    * [HBASE-16948] - Fix inconsistency between HRegion and Region javadoc on getRowLock
+    * [HBASE-17059] - backport HBASE-17039 to 1.3.1
+    * [HBASE-17069] - RegionServer writes invalid META entries for split daughters in some circumstances
+    * [HBASE-17070] - backport HBASE-17020 to 1.3.1
+    * [HBASE-17112] - Prevent setting timestamp of delta operations the same as previous value's
+    * [HBASE-17175] - backport HBASE-17127 to 1.3.1
+    * [HBASE-17187] - DoNotRetryExceptions from coprocessors should bubble up to the application
+    * [HBASE-17227] - Backport HBASE-17206 to branch-1.3
+    * [HBASE-17264] - Processing RIT with offline state will always fail to open the first time
+    * [HBASE-17265] - Region left unassigned in master failover when region failed to open
+    * [HBASE-17275] - Assign timeout may cause region to be unassigned forever
+    * [HBASE-17287] - Master becomes a zombie if filesystem object closes
+    * [HBASE-17289] - Avoid adding a replication peer named "lock"
+    * [HBASE-17357] - PerformanceEvaluation parameters parsing triggers NPE.
+    * [HBASE-17381] - ReplicationSourceWorkerThread can die due to unhandled exceptions
+    * [HBASE-17387] - Reduce the overhead of exception report in RegionActionResult for multi()
+    * [HBASE-17445] - Count size of serialized exceptions in checking max result size quota
+    * [HBASE-17475] - Stack overflow in AsyncProcess if retry too much
+    * [HBASE-17489] - ClientScanner may send a next request to a RegionScanner which has been exhausted
+    * [HBASE-17501] - NullPointerException after Datanodes Decommissioned and Terminated
+    * [HBASE-17522] - RuntimeExceptions from MemoryMXBean should not take down server process
+    * [HBASE-17540] - Change SASL server GSSAPI callback log line from DEBUG to TRACE in RegionServer to reduce log volumes in DEBUG mode
+    * [HBASE-17558] - ZK dumping jsp should escape html 
+    * [HBASE-17572] - HMaster: Caught throwable while processing event C_M_MERGE_REGION
+    * [HBASE-17578] - Thrift per-method metrics should still update in the case of exceptions
+    * [HBASE-17587] - Do not Rethrow DoNotRetryIOException as UnknownScannerException
+    * [HBASE-17590] - Drop cache hint should work for StoreFile write path
+    * [HBASE-17597] - TestMetaWithReplicas.testMetaTableReplicaAssignment is flaky
+    * [HBASE-17601] - close() in TableRecordReaderImpl assumes the split has started
+    * [HBASE-17604] - Backport HBASE-15437 (fix request and response size metrics) to branch-1
+    * [HBASE-17611] - Thrift 2 per-call latency metrics are capped at ~ 2 seconds
+    * [HBASE-17616] - Incorrect actions performed by CM
+    * [HBASE-17649] - REST API for scan should return 410 when table is disabled
+    * [HBASE-17675] - ReplicationEndpoint should choose new sinks if a SaslException occurs 
+    * [HBASE-17677] - ServerName parsing from directory name should be more robust to errors from guava's HostAndPort
+    * [HBASE-17682] - Region stuck in merging_new state indefinitely
+    * [HBASE-17688] - MultiRowRangeFilter not working correctly if given same start and stop RowKey
+    * [HBASE-17698] - ReplicationEndpoint choosing sinks
+    * [HBASE-17716] - Formalize Scan Metric names
+    * [HBASE-17717] - Incorrect ZK ACL set for HBase superuser
+    * [HBASE-17722] - Metrics subsystem stop/start messages add a lot of useless bulk to operational logging
+    * [HBASE-17780] - BoundedByteBufferPool "At capacity" messages are not actionable
+    * [HBASE-17813] - backport HBASE-16983 to branch-1.3
+    * [HBASE-17868] - Backport HBASE-10205 to branch-1.3
+    * [HBASE-17886] - Fix compatibility of ServerSideScanMetrics
 
+** Improvement
+    * [HBASE-12770] - Don't transfer all the queued hlogs of a dead server to the same alive server
+    * [HBASE-15429] - Add a split policy for busy regions
+    * [HBASE-15941] - HBCK repair should not unsplit healthy splitted region
+    * [HBASE-16562] - ITBLL should fail to start if misconfigured
+    * [HBASE-16755] - Honor flush policy under global memstore pressure
+    * [HBASE-16773] - AccessController should access local region if possible
+    * [HBASE-16947] - Some improvements for DumpReplicationQueues tool
+    * [HBASE-16977] - VerifyReplication should log a printable representation of the row keys
+    * [HBASE-17057] - Minor compactions should also drop page cache behind reads
+    * [HBASE-17579] - Backport HBASE-16302 to 1.3.1
+    * [HBASE-17627] - Active workers metric for thrift
+    * [HBASE-17637] - Update progress more frequently in IntegrationTestBigLinkedList.Generator.persist
+    * [HBASE-17837] - Backport HBASE-15314 to branch-1.3
 
+** Task
+    * [HBASE-17609] - Allow for region merging in the UI 
 
 
+Release Notes - HBase - Version 1.3.0 10/24/2016
 
+** Sub-task
+    * [HBASE-13212] - Procedure V2 - master Create/Modify/Delete namespace
+    * [HBASE-13819] - Make RPC layer CellBlock buffer a DirectByteBuffer
+    * [HBASE-13909] - create 1.2 branch
+    * [HBASE-14051] - Undo workarounds in IntegrationTestDDLMasterFailover for client double submit
+    * [HBASE-14212] - Add IT test for procedure-v2-based namespace DDL
+    * [HBASE-14423] - TestStochasticBalancerJmxMetrics.testJmxMetrics_PerTableMode:183 NullPointer
+    * [HBASE-14464] - Removed unused fs code
+    * [HBASE-14575] - Relax region read lock for compactions
+    * [HBASE-14662] - Fix NPE in HFileOutputFormat2
+    * [HBASE-14734] - BindException when setting up MiniKdc
+    * [HBASE-14786] - TestProcedureAdmin hangs
+    * [HBASE-14877] - maven archetype: client application
+    * [HBASE-14878] - maven archetype: client application with shaded jars
+    * [HBASE-14949] - Resolve name conflict when splitting if there are duplicated WAL entries
+    * [HBASE-14955] - OOME: cannot create native thread is back
+    * [HBASE-15105] - Procedure V2 - Procedure Queue with Namespaces
+    * [HBASE-15113] - Procedure v2 - Speedup eviction of sys operation results
+    * [HBASE-15142] - Procedure v2 - Basic WebUI listing the procedures
+    * [HBASE-15144] - Procedure v2 - Web UI displaying Store state
+    * [HBASE-15163] - Add sampling code and metrics for get/scan/multi/mutate count separately
+    * [HBASE-15171] - Avoid counting duplicate kv and generating lots of small hfiles in PutSortReducer
+    * [HBASE-15194] - TestStochasticLoadBalancer.testRegionReplicationOnMidClusterSameHosts flaky on trunk
+    * [HBASE-15202] - Reduce garbage while setting response
+    * [HBASE-15203] - Reduce garbage created by path.toString() during Checksum verification
+    * [HBASE-15204] - Try to estimate the cell count for adding into WALEdit
+    * [HBASE-15232] - Exceptions returned over multi RPC don't automatically trigger region location reloads
+    * [HBASE-15311] - Prevent NPE in BlockCacheViewTmpl
+    * [HBASE-15347] - Update CHANGES.txt for 1.3
+    * [HBASE-15351] - Fix description of hbase.bucketcache.size in hbase-default.xml
+    * [HBASE-15354] - Use same criteria for clearing meta cache for all operations
+    * [HBASE-15365] - Do not write to '/tmp' in TestHBaseConfiguration
+    * [HBASE-15366] - Add doc, trace-level logging, and test around hfileblock
+    * [HBASE-15368] - Add pluggable window support
+    * [HBASE-15371] - Procedure V2 - Completed support parent-child procedure
+    * [HBASE-15373] - DEPRECATED_NAME_OF_NO_LIMIT_THROUGHPUT_CONTROLLER_CLASS value is wrong in CompactionThroughputControllerFactory
+    * [HBASE-15376] - ScanNext metric is size-based while every other per-operation metric is time based
+    * [HBASE-15377] - Per-RS Get metric is time based, per-region metric is size-based
+    * [HBASE-15384] - Avoid using '/tmp' directory in TestBulkLoad
+    * [HBASE-15389] - Write out multiple files when compaction
+    * [HBASE-15390] - Unnecessary MetaCache evictions cause elevated number of requests to meta
+    * [HBASE-15392] - Single Cell Get reads two HFileBlocks
+    * [HBASE-15400] - Use DateTieredCompactor for Date Tiered Compaction
+    * [HBASE-15412] - Add average region size metric
+    * [HBASE-15422] - Procedure v2 - Avoid double yield
+    * [HBASE-15435] - Add WAL (in bytes) written metric
+    * [HBASE-15460] - Fix infer issues in hbase-common
+    * [HBASE-15464] - Flush / Compaction metrics revisited
+    * [HBASE-15477] - Do not save 'next block header' when we cache hfileblocks
+    * [HBASE-15479] - No more garbage or beware of autoboxing
+    * [HBASE-15488] - Add ACL for setting split merge switch
+    * [HBASE-15518] - Add Per-Table metrics back
+    * [HBASE-15524] - Fix NPE in client-side metrics
+    * [HBASE-15527] - Refactor Compactor related classes
+    * [HBASE-15537] - Make multi WAL work with WALs other than FSHLog
+    * [HBASE-15640] - L1 cache doesn't give fair warning that it is showing partial stats only when it hits limit
+    * [HBASE-15658] - RegionServerCallable / RpcRetryingCaller clear meta cache on retries
+    * [HBASE-15665] - Support using different StoreFileComparators for different CompactionPolicies
+    * [HBASE-15671] - Add per-table metrics on memstore, storefile and regionsize
+    * [HBASE-15683] - Min latency in latency histograms are emitted as Long.MAX_VALUE
+    * [HBASE-15713] - Backport "HBASE-15477 Do not save 'next block header' when we cache hfileblocks"
+    * [HBASE-15740] - Replication source.shippedKBs metric is undercounting because it is in KB
+    * [HBASE-15865] - Move TestTableDeleteFamilyHandler and TestTableDescriptorModification handler tests to procedure
+    * [HBASE-15872] - Split TestWALProcedureStore
+    * [HBASE-15878] - Deprecate doBulkLoad(Path hfofDir, final HTable table)  in branch-1 (even though its 'late')
+    * [HBASE-15935] - Have a separate Walker task running concurrently with Generator
+    * [HBASE-15971] - Regression: Random Read/WorkloadC slower in 1.x than 0.98
+    * [HBASE-15984] - Given failure to parse a given WAL that was closed cleanly, replay the WAL.
+    * [HBASE-16023] - Fastpath for the FIFO rpcscheduler
+    * [HBASE-16034] - Fix ProcedureTestingUtility#LoadCounter.setMaxProcId()
+    * [HBASE-16056] - Procedure v2 - fix master crash for FileNotFound
+    * [HBASE-16068] - Procedure v2 - use consts for conf properties in test
+    * [HBASE-16101] - Procedure v2 - Perf Tool for WAL
+    * [HBASE-16146] - Counters are expensive...
+    * [HBASE-16176] - Bug fixes/improvements on HBASE-15650 Remove TimeRangeTracker as point of contention when many threads reading a StoreFile
+    * [HBASE-16180] - Fix ST_WRITE_TO_STATIC_FROM_INSTANCE_METHOD findbugs introduced by parent
+    * [HBASE-16189] - [Rolling Upgrade] 2.0 hfiles cannot be opened by 1.x servers
+    * [HBASE-16194] - Should count in MSLAB chunk allocation into heap size change when adding duplicate cells
+    * [HBASE-16195] - Should not add chunk into chunkQueue if not using chunk pool in HeapMemStoreLAB
+    * [HBASE-16285] - Drop RPC requests if it must be considered as timeout at client
+    * [HBASE-16317] - revert all ESAPI changes
+    * [HBASE-16318] - fail build if license isn't in whitelist
+    * [HBASE-16321] - Ensure findbugs jsr305 jar isn't present
+    * [HBASE-16452] - Procedure v2 - Make ProcedureWALPrettyPrinter extend Tool
+    * [HBASE-16485] - Procedure V2 - Add support to addChildProcedure() as last "step" in StateMachineProcedure
+    * [HBASE-16522] - Procedure v2 - Cache system user and avoid IOException
+    * [HBASE-16970] - Clarify misleading Scan.java comment about caching
+    * [HBASE-17017] - Remove the current per-region latency histogram metrics
+    * [HBASE-17149] - Procedure V2 - Fix nonce submission to avoid unnecessary calling coprocessor multiple times
 
-** Task
-    * [HBASE-10200] - Better error message when HttpServer fails to start due to java.net.BindException
-    * [HBASE-10870] - Deprecate and replace HCD methods that have a 'should' prefix with a 'get' instead
-    * [HBASE-12250] - Adding an endpoint for updating the regionserver config
-    * [HBASE-12344] - Split up TestAdmin
-    * [HBASE-12381] - Add maven enforcer rules for build assumptions
-    * [HBASE-12388] - Document that WALObservers don't get empty edits.
-    * [HBASE-12427] - Change branch-1 version from 0.99.2-SNAPSHOT to 0.99.3-SNAPSHOT
-    * [HBASE-12442] - Bring KeyValue#createFirstOnRow() back to branch-1 as deprecated methods
-    * [HBASE-12456] - Update surefire from 2.18-SNAPSHOT to 2.18
-    * [HBASE-12516] - Clean up master so QA Bot is in known good state
-    * [HBASE-12522] - Backport WAL refactoring to branch-1
+** Bug
+    * [HBASE-11625] - Reading datablock throws "Invalid HFile block magic" and can not switch to hdfs checksum
+    * [HBASE-12865] - WALs may be deleted before they are replicated to peers
+    * [HBASE-13082] - Coarsen StoreScanner locks to RegionScanner
+    * [HBASE-13897] - OOM may occur when Import imports a row with too many KeyValues
+    * [HBASE-14077] - Add package to hbase-protocol protobuf files.
+    * [HBASE-14094] - Procedure.proto can't be compiled to C++
+    * [HBASE-14143] - remove obsolete maven repositories
+    * [HBASE-14162] - Fixing maven target for regenerating thrift classes fails against 0.9.2
+    * [HBASE-14252] - RegionServers fail to start when setting hbase.ipc.server.callqueue.scan.ratio to 0
+    * [HBASE-14256] - Flush task message may be confusing when region is recovered
+    * [HBASE-14349] - pre-commit zombie finder is overly broad
+    * [HBASE-14370] - Use separate thread for calling ZKPermissionWatcher#refreshNodes()
+    * [HBASE-14411] - Fix unit test failures when using multiwal as default WAL provider
+    * [HBASE-14485] - ConnectionImplementation leaks on construction failure
+    * [HBASE-14497] - Reverse Scan threw StackOverflow caused by readPt checking
+    * [HBASE-14525] - Append and increment operation throws NullPointerException on non-existing column families.
+    * [HBASE-14536] - Balancer & SSH interfering with each other leading to unavailability
+    * [HBASE-14604] - Improve MoveCostFunction in StochasticLoadBalancer
+    * [HBASE-14644] - Region in transition metric is broken
+    * [HBASE-14818] - user_permission does not list namespace permissions
+    * [HBASE-14970] - Backport HBASE-13082 and its sub-jira to branch-1
+    * [HBASE-14975] - Don't color the total RIT line yellow if it's zero
+    * [HBASE-15000] - Fix javadoc warn in LoadIncrementalHFiles
+    * [HBASE-15026] - The default value of "hbase.regions.slop" in hbase-default.xml is obsolete
+    * [HBASE-15028] - Minor fix on RegionGroupingProvider
+    * [HBASE-15030] - Deadlock in master TableNamespaceManager while running IntegrationTestDDLMasterFailover
+    * [HBASE-15034] - IntegrationTestDDLMasterFailover does not clean created namespaces
+    * [HBASE-15093] - Replication can report incorrect size of log queue for the global source when multiwal is enabled
+    * [HBASE-15125] - HBaseFsck's adoptHdfsOrphan function creates region with wrong end key boundary
+    * [HBASE-15128] - Disable region splits and merges switch in master
+    * [HBASE-15132] - Master region merge RPC should authorize user request
+    * [HBASE-15137] - CallTimeoutException and CallQueueTooBigException should trigger PFFE
+    * [HBASE-15173] - Execute mergeRegions RPC call as the request user
+    * [HBASE-15234] - ReplicationLogCleaner can abort due to transient ZK issues
+    * [HBASE-15247] - InclusiveStopFilter does not respect reverse Filter property
+    * [HBASE-15287] - mapreduce.RowCounter returns incorrect result with binary row key inputs
+    * [HBASE-15290] - Hbase Rest CheckAndAPI should save other cells along with compared cell
+    * [HBASE-15292] - Refined ZooKeeperWatcher to prevent ZooKeeper's callback while construction
+    * [HBASE-15295] - MutateTableAccess.multiMutate() does not get high priority causing a deadlock
+    * [HBASE-15297] - error message is wrong when a wrong namspace is specified in grant in hbase shell
+    * [HBASE-15319] - clearJmxCache does not take effect actually
+    * [HBASE-15322] - Operations using Unsafe path broken for platforms not having sun.misc.Unsafe
+    * [HBASE-15323] - Hbase Rest CheckAndDeleteAPi should be able to delete more cells
+    * [HBASE-15324] - Jitter may cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split
+    * [HBASE-15325] - ResultScanner allowing partial result will miss the rest of the row if the region is moved between two rpc requests
+    * [HBASE-15327] - Canary will always invoke admin.balancer() in each sniffing period when writeSniffing is enabled
+    * [HBASE-15348] - Fix tests broken by recent metrics re-work
+    * [HBASE-15357] - TableInputFormatBase getSplitKey does not handle signed bytes correctly
+    * [HBASE-15358] - canEnforceTimeLimitFromScope should use timeScope instead of sizeScope
+    * [HBASE-15360] - Fix flaky TestSimpleRpcScheduler
+    * [HBASE-15378] - Scanner cannot handle heartbeat message with no results
+    * [HBASE-15393] - Enable table replication command will fail when parent znode is not default in peer cluster
+    * [HBASE-15397] - Create bulk load replication znode(hfile-refs) in ZK replication queue by default
+    * [HBASE-15405] - Synchronize final results logging single thread in PE, fix wrong defaults in help message
+    * [HBASE-15406] - Split / merge switch left disabled after early termination of hbck
+    * [HBASE-15424] - Add bulk load hfile-refs for replication in ZK after the event is appended in the WAL
+    * [HBASE-15425] - Failing to write bulk load event marker in the WAL is ignored
+    * [HBASE-15430] - Failed taking snapshot - Manifest proto-message too large
+    * [HBASE-15433] - SnapshotManager#restoreSnapshot not update table and region count quota correctly when encountering exception
+    * [HBASE-15439] - getMaximumAllowedTimeBetweenRuns in ScheduledChore ignores the TimeUnit
+    * [HBASE-15441] - Fix WAL splitting when region has moved multiple times
+    * [HBASE-15463] - Region normalizer should check whether split/merge is enabled
+    * [HBASE-15465] - userPermission returned by getUserPermission() for the selected namespace does not have namespace set
+    * [HBASE-15485] - Filter.reset() should not be called between batches
+    * [HBASE-15490] - Remove duplicated CompactionThroughputControllerFactory in branch-1
+    * [HBASE-15504] - Fix Balancer in 1.3 not moving regions off overloaded regionse

<TRUNCATED>

[9/9] hbase git commit: Update POMs and CHANGES.txt for 1.4.0RC0

Posted by ap...@apache.org.
Update POMs and CHANGES.txt for 1.4.0RC0


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/3839a01d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/3839a01d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/3839a01d

Branch: refs/heads/branch-1.4
Commit: 3839a01ddc430f68ad83c6be1317a34cc16cf13d
Parents: 6fcdc33
Author: Andrew Purtell <ap...@apache.org>
Authored: Mon Dec 4 16:31:30 2017 -0800
Committer: Andrew Purtell <ap...@apache.org>
Committed: Mon Dec 4 18:41:57 2017 -0800

----------------------------------------------------------------------
 CHANGES.txt                                     | 3895 +++++++++++-------
 hbase-annotations/pom.xml                       |    2 +-
 .../hbase-archetype-builder/pom.xml             |    2 +-
 hbase-archetypes/hbase-client-project/pom.xml   |    2 +-
 .../hbase-shaded-client-project/pom.xml         |    2 +-
 hbase-archetypes/pom.xml                        |    2 +-
 hbase-assembly/pom.xml                          |    2 +-
 hbase-checkstyle/pom.xml                        |    4 +-
 hbase-client/pom.xml                            |    2 +-
 hbase-common/pom.xml                            |    2 +-
 hbase-error-prone/pom.xml                       |    4 +-
 hbase-examples/pom.xml                          |    2 +-
 hbase-external-blockcache/pom.xml               |    2 +-
 hbase-hadoop-compat/pom.xml                     |    2 +-
 hbase-hadoop2-compat/pom.xml                    |    2 +-
 hbase-it/pom.xml                                |    2 +-
 hbase-metrics-api/pom.xml                       |    2 +-
 hbase-metrics/pom.xml                           |    2 +-
 hbase-prefix-tree/pom.xml                       |    2 +-
 hbase-procedure/pom.xml                         |    2 +-
 hbase-protocol/pom.xml                          |    2 +-
 hbase-resource-bundle/pom.xml                   |    2 +-
 hbase-rest/pom.xml                              |    2 +-
 hbase-rsgroup/pom.xml                           |    2 +-
 hbase-server/pom.xml                            |    2 +-
 .../hbase-shaded-check-invariants/pom.xml       |    2 +-
 hbase-shaded/hbase-shaded-client/pom.xml        |    2 +-
 hbase-shaded/hbase-shaded-server/pom.xml        |    2 +-
 hbase-shaded/pom.xml                            |    2 +-
 hbase-shell/pom.xml                             |    2 +-
 hbase-testing-util/pom.xml                      |    2 +-
 hbase-thrift/pom.xml                            |    2 +-
 pom.xml                                         |    2 +-
 33 files changed, 2522 insertions(+), 1441 deletions(-)
----------------------------------------------------------------------



[5/9] hbase git commit: HBASE-19420 Backport HBASE-19152 Update refguide 'how to build an RC' and the make_rc.sh script

Posted by ap...@apache.org.
http://git-wip-us.apache.org/repos/asf/hbase/blob/1dba475d/src/main/asciidoc/_chapters/developer.adoc
----------------------------------------------------------------------
diff --git a/src/main/asciidoc/_chapters/developer.adoc b/src/main/asciidoc/_chapters/developer.adoc
index c3ba0a2..8b74690 100644
--- a/src/main/asciidoc/_chapters/developer.adoc
+++ b/src/main/asciidoc/_chapters/developer.adoc
@@ -33,40 +33,124 @@ Being familiar with these guidelines will help the HBase committers to use your
 [[getting.involved]]
 == Getting Involved
 
-Apache HBase gets better only when people contribute! If you are looking to contribute to Apache HBase, look for link:https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)[issues in JIRA tagged with the label 'beginner'].
+Apache HBase gets better only when people contribute! If you are looking to contribute to Apache HBase, look for link:https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened)[issues in JIRA tagged with the label 'beginner'].
 These are issues HBase contributors have deemed worthy but not of immediate priority and a good way to ramp on HBase internals.
 See link:http://search-hadoop.com/m/DHED43re96[What label
                 is used for issues that are good on ramps for new contributors?] from the dev mailing list for background.
 
 Before you get started submitting code to HBase, please refer to <<developing,developing>>.
 
-As Apache HBase is an Apache Software Foundation project, see <<asf,asf>>            for more information about how the ASF functions. 
+As Apache HBase is an Apache Software Foundation project, see <<asf,asf>>            for more information about how the ASF functions.
 
 [[mailing.list]]
 === Mailing Lists
 
 Sign up for the dev-list and the user-list.
-See the link:http://hbase.apache.org/mail-lists.html[mailing lists] page.
-Posing questions - and helping to answer other people's questions - is encouraged! There are varying levels of experience on both lists so patience and politeness are encouraged (and please stay on topic.) 
+See the link:https://hbase.apache.org/mail-lists.html[mailing lists] page.
+Posing questions - and helping to answer other people's questions - is encouraged! There are varying levels of experience on both lists so patience and politeness are encouraged (and please stay on topic.)
+
+[[slack]]
+=== Slack
+The Apache HBase project has its own link:http://apache-hbase.slack.com[Slack Channel] for real-time questions
+and discussion. Mail dev@hbase.apache.org to request an invite.
 
 [[irc]]
 === Internet Relay Chat (IRC)
 
+(NOTE: Our IRC channel seems to have been deprecated in favor of the above Slack channel)
+
 For real-time questions and discussions, use the `#hbase` IRC channel on the link:https://freenode.net/[FreeNode] IRC network.
 FreeNode offers a web-based client, but most people prefer a native client, and several clients are available for each operating system.
 
 === Jira
 
-Check for existing issues in link:https://issues.apache.org/jira/browse/HBASE[Jira].
-If it's either a new feature request, enhancement, or a bug, file a ticket. 
+Check for existing issues in link:https://issues.apache.org/jira/projects/HBASE/issues[Jira].
+If it's a new feature request, enhancement, or bug, file a ticket.
+
+We track multiple types of work in JIRA:
+
+- Bug: Something is broken in HBase itself.
+- Test: A test is needed, or a test is broken.
+- New feature: You have an idea for new functionality. It's often best to bring
+  these up on the mailing lists first, and then write up a design specification
+  that you add to the feature request JIRA.
+- Improvement: A feature exists, but could be tweaked or augmented. It's often
+  best to bring these up on the mailing lists first and have a discussion, then
+  summarize or link to the discussion if others seem interested in the
+  improvement.
+- Wish: This is like a new feature, but for something you may not have the
+  background to flesh out yourself.
+
+Bugs and tests have the highest priority and should be actionable.
+
+==== Guidelines for reporting effective issues
+
+* *Search for duplicates*: Your issue may have already been reported. Have a
+  look, realizing that someone else might have worded the summary differently.
++
+Also search the mailing lists, which may have information about your problem
+and how to work around it. Don't file an issue for something that has already
+been discussed and resolved on a mailing list, unless you strongly disagree
+with the resolution *and* are willing to help take the issue forward.
+
+* *Discuss in public*: Use the mailing lists to discuss what you've discovered
+  and see if there is something you've missed. Avoid using back channels, so
+  that you benefit from the experience and expertise of the project as a whole.
+
+* *Don't file on behalf of others*: You might not have all the context, and you
+  don't have as much motivation to see it through as the person who is actually
+  experiencing the bug. It's more helpful in the long term to encourage others
+  to file their own issues. Point them to this material and offer to help out
+  the first time or two.
+
+* *Write a good summary*: A good summary includes information about the problem,
+  the impact on the user or developer, and the area of the code.
+** Good: `Address new license dependencies from hadoop3-alpha4`
+** Room for improvement: `Canary is broken`
++
+If you write a bad title, someone else will rewrite it for you. This is time
+they could have spent working on the issue instead.
+
+* *Give context in the description*: It can be good to think of this in multiple
+  parts:
+** What happens or doesn't happen?
+** How does it impact you?
+** How can someone else reproduce it?
+** What would "fixed" look like?
++
+You don't need to know the answers for all of these, but give as much
+information as you can. If you can provide technical information, such as a
+Git commit SHA that you think might have caused the issue or a build failure
+on builds.apache.org where you think the issue first showed up, share that
+info.
+
+* *Fill in all relevant fields*: These fields help us filter, categorize, and
+  find things.
+
+* *One bug, one issue, one patch*: To help with back-porting, don't split issues
+  or fixes among multiple bugs.
 
-To check for existing issues which you can tackle as a beginner, search for link:https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)[issues in JIRA tagged with the label 'beginner'].
+* *Add value if you can*: Filing issues is great, even if you don't know how to
+  fix them. But providing as much information as possible, being willing to
+  triage and answer questions, and being willing to test potential fixes is even
+  better! We want to fix your issue as quickly as you want it to be fixed.
 
-* .JIRA PrioritiesBlocker: Should only be used if the issue WILL cause data loss or cluster instability reliably.
-* Critical: The issue described can cause data loss or cluster instability in some cases.
-* Major: Important but not tragic issues, like updates to the client API that will add a lot of much-needed functionality or significant bugs that need to be fixed but that don't cause data loss.
-* Minor: Useful enhancements and annoying but not damaging bugs.
-* Trivial: Useful enhancements but generally cosmetic.
+* *Don't be upset if we don't fix it*: Time and resources are finite. In some
+  cases, we may not be able to (or might choose not to) fix an issue, especially
+  if it is an edge case or there is a workaround. Even if it doesn't get fixed,
+  the JIRA is a public record of it, and will help others out if they run into
+  a similar issue in the future.
+
+==== Working on an issue
+
+To check for existing issues which you can tackle as a beginner, search for link:https://issues.apache.org/jira/issues/?jql=project%20%3D%20HBASE%20AND%20labels%20in%20(beginner)%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened)[issues in JIRA tagged with the label 'beginner'].
+
+.JIRA Priorities
+* *Blocker*: Should only be used if the issue WILL cause data loss or cluster instability reliably.
+* *Critical*: The issue described can cause data loss or cluster instability in some cases.
+* *Major*: Important but not tragic issues, like updates to the client API that will add a lot of much-needed functionality or significant bugs that need to be fixed but that don't cause data loss.
+* *Minor*: Useful enhancements and annoying but not damaging bugs.
+* *Trivial*: Useful enhancements but generally cosmetic.
 
 .Code Blocks in Jira Comments
 ====
@@ -89,11 +173,12 @@ GIT is our repository of record for all but the Apache HBase website.
 We used to be on SVN.
 We migrated.
 See link:https://issues.apache.org/jira/browse/INFRA-7768[Migrate Apache HBase SVN Repos to Git].
-Updating hbase.apache.org still requires use of SVN (See <<hbase.org,hbase.org>>). See link:http://hbase.apache.org/source-repository.html[Source Code
-                Management] page for contributor and committer links or seach for HBase on the link:http://git.apache.org/[Apache Git] page.
+See link:https://hbase.apache.org/source-repository.html[Source Code
+                Management] page for contributor and committer links or search for HBase on the link:https://git.apache.org/[Apache Git] page.
 
 == IDEs
 
+[[eclipse]]
 === Eclipse
 
 [[eclipse.code.formatting]]
@@ -102,27 +187,12 @@ Updating hbase.apache.org still requires use of SVN (See <<hbase.org,hbase.org>>
 Under the _dev-support/_ folder, you will find _hbase_eclipse_formatter.xml_.
 We encourage you to have this formatter in place in eclipse when editing HBase code.
 
-.Procedure: Load the HBase Formatter Into Eclipse
-. Open the  menu item.
-. In Preferences, click the  menu item.
-. Click btn:[Import] and browse to the location of the _hbase_eclipse_formatter.xml_ file, which is in the _dev-support/_ directory.
-  Click btn:[Apply].
-. Still in Preferences, click .
-  Be sure the following options are selected:
-+
-* Perform the selected actions on save
-* Format source code
-* Format edited lines
-+
-Click btn:[Apply].
-Close all dialog boxes and return to the main window.
-
+Go to `Preferences->Java->Code Style->Formatter->Import` to load the xml file.
+Go to `Preferences->Java->Editor->Save Actions`, and make sure 'Format source code' and 'Format
+edited lines' are selected.
 
-In addition to the automatic formatting, make sure you follow the style guidelines explained in <<common.patch.feedback,common.patch.feedback>>
-
-Also, no `@author` tags - that's a rule.
-Quality Javadoc comments are appreciated.
-And include the Apache license.
+In addition to the automatic formatting, make sure you follow the style guidelines explained in
+<<common.patch.feedback,common.patch.feedback>>.
 
 [[eclipse.git.plugin]]
 ==== Eclipse Git Plugin
@@ -133,30 +203,30 @@ If you cloned the project via git, download and install the Git plugin (EGit). A
 ==== HBase Project Setup in Eclipse using `m2eclipse`
 
 The easiest way is to use the +m2eclipse+ plugin for Eclipse.
-Eclipse Indigo or newer includes +m2eclipse+, or you can download it from link:http://www.eclipse.org/m2e//. It provides Maven integration for Eclipse, and even lets you use the direct Maven commands from within Eclipse to compile and test your project.
+Eclipse Indigo or newer includes +m2eclipse+, or you can download it from http://www.eclipse.org/m2e/. It provides Maven integration for Eclipse, and even lets you use the direct Maven commands from within Eclipse to compile and test your project.
 
 To import the project, click  and select the HBase root directory. `m2eclipse`                    locates all the hbase modules for you.
 
-If you install +m2eclipse+ and import HBase in your workspace, do the following to fix your eclipse Build Path. 
+If you install +m2eclipse+ and import HBase in your workspace, do the following to fix your eclipse Build Path.
 
 . Remove _target_ folder
 . Add _target/generated-jamon_ and _target/generated-sources/java_ folders.
 . Remove from your Build Path the exclusions on the _src/main/resources_ and _src/test/resources_ to avoid error message in the console, such as the following:
 +
 ----
-Failed to execute goal 
+Failed to execute goal
 org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project hbase:
-'An Ant BuildException has occured: Replace: source file .../target/classes/hbase-default.xml 
+'An Ant BuildException has occurred: Replace: source file .../target/classes/hbase-default.xml
 doesn't exist
 ----
 +
-This will also reduce the eclipse build cycles and make your life easier when developing. 
+This will also reduce the eclipse build cycles and make your life easier when developing.
 
 
 [[eclipse.commandline]]
 ==== HBase Project Setup in Eclipse Using the Command Line
 
-Instead of using `m2eclipse`, you can generate the Eclipse files from the command line. 
+Instead of using `m2eclipse`, you can generate the Eclipse files from the command line.
 
 . First, run the following command, which builds HBase.
   You only need to do this once.
@@ -181,7 +251,7 @@ mvn eclipse:eclipse
 The `$M2_REPO` classpath variable needs to be set up for the project.
 This needs to be set to your local Maven repository, which is usually _~/.m2/repository_
 
-If this classpath variable is not configured, you will see compile errors in Eclipse like this: 
+If this classpath variable is not configured, you will see compile errors in Eclipse like this:
 
 ----
 
@@ -209,14 +279,14 @@ Access restriction: The method getLong(Object, long) from the type Unsafe is not
 [[eclipse.more]]
 ==== Eclipse - More Information
 
-For additional information on setting up Eclipse for HBase development on Windows, see link:http://michaelmorello.blogspot.com/2011/09/hbase-subversion-eclipse-windows.html[Michael Morello's blog] on the topic. 
+For additional information on setting up Eclipse for HBase development on Windows, see link:http://michaelmorello.blogspot.com/2011/09/hbase-subversion-eclipse-windows.html[Michael Morello's blog] on the topic.
 
 === IntelliJ IDEA
 
-You can set up IntelliJ IDEA for similar functinoality as Eclipse.
+You can set up IntelliJ IDEA for similar functionality as Eclipse.
 Follow these steps.
 
-. Select 
+. Select
 . You do not need to select a profile.
   Be sure [label]#Maven project
   required# is selected, and click btn:[Next].
@@ -227,7 +297,7 @@ Using the Eclipse Code Formatter plugin for IntelliJ IDEA, you can import the HB
 
 === Other IDEs
 
-It would be userful to mirror the <<eclipse,eclipse>> set-up instructions for other IDEs.
+It would be useful to mirror the <<eclipse,eclipse>> set-up instructions for other IDEs.
 If you would like to assist, please have a look at link:https://issues.apache.org/jira/browse/HBASE-11704[HBASE-11704].
 
 [[build]]
@@ -237,20 +307,20 @@ If you would like to assist, please have a look at link:https://issues.apache.or
 === Basic Compile
 
 HBase is compiled using Maven.
-You must use Maven 3.x.
+You must use at least Maven 3.0.4.
 To check your Maven version, run the command +mvn -version+.
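As a quick sanity check, the minimum-version comparison can be scripted; this helper is purely illustrative (not part of the HBase tooling) and relies on `sort -V` version ordering:

```shell
# Illustrative helper (not part of HBase): succeeds if the first version
# argument is >= the second, using sort's version-aware ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example: verify a Maven version meets the 3.0.4 minimum.
if version_ge "3.5.0" "3.0.4"; then
  echo "Maven version OK"
fi
```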
 
 .JDK Version Requirements
 [NOTE]
 ====
 Starting with HBase 1.0 you must use Java 7 or later to build from source code.
-See <<java,java>> for more complete information about supported JDK versions. 
+See <<java,java>> for more complete information about supported JDK versions.
 ====
 
 [[maven.build.commands]]
 ==== Maven Build Commands
 
-All commands are executed from the local HBase project directory. 
+All commands are executed from the local HBase project directory.
 
 ===== Package
 
@@ -269,7 +339,7 @@ mvn clean package -DskipTests
 ----
 
 With Eclipse set up as explained above in <<eclipse,eclipse>>, you can also use the menu:Build[] command in Eclipse.
-To create the full installable HBase package takes a little bit more work, so read on. 
+To create the full installable HBase package takes a little bit more work, so read on.
 
 [[maven.build.commands.compile]]
 ===== Compile
@@ -313,38 +383,27 @@ See the <<hbase.unittests.cmds,hbase.unittests.cmds>> section in <<hbase.unittes
 [[maven.build.hadoop]]
 ==== Building against various hadoop versions.
 
-As of 0.96, Apache HBase supports building against Apache Hadoop versions: 1.0.3, 2.0.0-alpha and 3.0.0-SNAPSHOT.
-By default, in 0.96 and earlier, we will build with Hadoop-1.0.x.
-As of 0.98, Hadoop 1.x is deprecated and Hadoop 2.x is the default.
-To change the version to build against, add a hadoop.profile property when you invoke +mvn+:
+HBase supports building against Apache Hadoop versions: 2.y and 3.y (early release artifacts). By default we build against Hadoop 2.x.
+
+To build against a specific release from the Hadoop 2.y line, set e.g. `-Dhadoop-two.version=2.7.4`.
 
 [source,bourne]
 ----
-mvn -Dhadoop.profile=1.0 ...
+mvn -Dhadoop-two.version=2.7.4 ...
 ----
 
-The above will build against whatever explicit hadoop 1.x version we have in our _pom.xml_ as our '1.0' version.
-Tests may not all pass so you may need to pass `-DskipTests` unless you are inclined to fix the failing tests.
-
-.'dependencyManagement.dependencies.dependency.artifactId' fororg.apache.hbase:${compat.module}:test-jar with value '${compat.module}'does not match a valid id pattern
-[NOTE]
-====
-You will see ERRORs like the above title if you pass the _default_ profile; e.g.
-if you pass +hadoop.profile=1.1+ when building 0.96 or +hadoop.profile=2.0+ when building hadoop 0.98; just drop the hadoop.profile stipulation in this case to get your build to run again.
-This seems to be a maven pecularity that is probably fixable but we've not spent the time trying to figure it.
-====
-
-Similarly, for 3.0, you would just replace the profile value.
-Note that Hadoop-3.0.0-SNAPSHOT does not currently have a deployed maven artificat - you will need to build and install your own in your local maven repository if you want to run against this profile. 
-
-In earilier versions of Apache HBase, you can build against older versions of Apache Hadoop, notably, Hadoop 0.22.x and 0.23.x.
-If you are running, for example HBase-0.94 and wanted to build against Hadoop 0.23.x, you would run with:
+To change the major release line of Hadoop we build against, add a hadoop.profile property when you invoke +mvn+:
 
 [source,bourne]
 ----
-mvn -Dhadoop.profile=22 ...
+mvn -Dhadoop.profile=3.0 ...
 ----
 
+The above will build against whatever explicit hadoop 3.y version we have in our _pom.xml_ as our '3.0' version.
+Tests may not all pass so you may need to pass `-DskipTests` unless you are inclined to fix the failing tests.
+
+To pick a particular Hadoop 3.y release, you'd set e.g. `-Dhadoop-three.version=3.0.0-alpha1`.
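Putting the profile and version selection together, the full invocation might look as follows; the helper function and the `3.0.0-alpha1` version are illustrative only:

```shell
# Illustrative only: compose the Maven arguments for building against a
# specific Hadoop 3.y release. The hadoop.profile property selects the
# major line; hadoop-three.version pins the exact release.
hadoop3_build_args() {
  echo "-Dhadoop.profile=3.0 -Dhadoop-three.version=$1 clean install -DskipTests"
}

# You would then run, e.g.:
#   mvn $(hadoop3_build_args 3.0.0-alpha1)
echo "mvn $(hadoop3_build_args 3.0.0-alpha1)"
```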
+
 [[build.protobuf]]
 ==== Build Protobuf
 
@@ -367,7 +426,7 @@ You may also want to define `protoc.path` for the protoc binary, using the follo
 mvn compile -Pcompile-protobuf -Dprotoc.path=/opt/local/bin/protoc
 ----
 
-Read the _hbase-protocol/README.txt_ for more details. 
+Read the _hbase-protocol/README.txt_ for more details.
 
 [[build.thrift]]
 ==== Build Thrift
@@ -415,9 +474,8 @@ mvn -DskipTests package assembly:single deploy
 ==== Build Gotchas
 
 If you see `Unable to find resource 'VM_global_library.vm'`, ignore it.
-Its not an error.
-It is link:http://jira.codehaus.org/browse/MSITE-286[officially
-                        ugly] though. 
+It's not an error.
+It is link:https://issues.apache.org/jira/browse/MSITE-286[officially ugly] though.
 
 [[releasing]]
 == Releasing Apache HBase
@@ -429,27 +487,7 @@ HBase 1.x requires Java 7 to build.
 See <<java,java>> for Java requirements per HBase release.
 ====
 
-=== Building against HBase 0.96-0.98
-
-HBase 0.96.x will run on Hadoop 1.x or Hadoop 2.x.
-HBase 0.98 still runs on both, but HBase 0.98 deprecates use of Hadoop 1.
-HBase 1.x will _not_                run on Hadoop 1.
-In the following procedures, we make a distinction between HBase 1.x builds and the awkward process involved building HBase 0.96/0.98 for either Hadoop 1 or Hadoop 2 targets. 
-
-You must choose which Hadoop to build against.
-It is not possible to build a single HBase binary that runs against both Hadoop 1 and Hadoop 2.
-Hadoop is included in the build, because it is needed to run HBase in standalone mode.
-Therefore, the set of modules included in the tarball changes, depending on the build target.
-To determine which HBase you have, look at the HBase version.
-The Hadoop version is embedded within it.
-
-Maven, our build system, natively does not allow a single product to be built against different dependencies.
-Also, Maven cannot change the set of included modules and write out the correct _pom.xml_ files with appropriate dependencies, even using two build targets, one for Hadoop 1 and another for Hadoop 2.
-A prerequisite step is required, which takes as input the current _pom.xml_s and generates Hadoop 1 or Hadoop 2 versions using a script in the _dev-tools/_ directory, called _generate-hadoopX-poms.sh_                where [replaceable]_X_ is either `1` or `2`.
-You then reference these generated poms when you build.
-For now, just be aware of the difference between HBase 1.x builds and those of HBase 0.96-0.98.
-This difference is important to the build instructions.
-
+[[maven.settings.xml]]
 .Example _~/.m2/settings.xml_ File
 ====
 Publishing to maven requires you sign the artifacts you want to upload.
@@ -497,48 +535,53 @@ For the build to sign them for you, you a properly configured _settings.xml_ in
 
 [[maven.release]]
 === Making a Release Candidate
-
-NOTE: These instructions are for building HBase 1.0.x.
-For building earlier versions, the process is different.
-See this section under the respective release documentation folders. 
-
-.Point Releases
-If you are making a point release (for example to quickly address a critical incompatability or security problem) off of a release branch instead of a development branch, the tagging instructions are slightly different.
-I'll prefix those special steps with _Point Release Only_. 
+Only committers may make releases of hbase artifacts.
 
 .Before You Begin
-Before you make a release candidate, do a practice run by deploying a snapshot.
-Before you start, check to be sure recent builds have been passing for the branch from where you are going to take your release.
-You should also have tried recent branch tips out on a cluster under load, perhaps by running the `hbase-it` integration test suite for a few hours to 'burn in' the near-candidate bits. 
-
-.Point Release Only
+Make sure your environment is properly set up. Maven and Git are the main tooling
+used in the below. You'll need a properly configured _settings.xml_ file in your
+local _~/.m2_ maven repository with logins for apache repos (See <<maven.settings.xml>>).
+You will also need to have a published signing key. Browse the Hadoop
+link:http://wiki.apache.org/hadoop/HowToRelease[How To Release] wiki page on
+how to release. It is a model for most of the instructions below. It often has more
+detail on particular steps, for example, on adding your code signing key to the
+project KEYS file up in Apache or on how to update JIRA in preparation for release.
+
+Before you make a release candidate, do a practice run by deploying a SNAPSHOT.
+Check to be sure recent builds have been passing for the branch from where you
+are going to take your release. You should also have tried recent branch tips
+out on a cluster under load, perhaps by running the `hbase-it` integration test
+suite for a few hours to 'burn in' the near-candidate bits.
+
+
+.Specifying the Heap Space for Maven
 [NOTE]
 ====
-At this point you should tag the previous release branch (ex: 0.96.1) with the new point release tag (e.g.
-0.96.1.1 tag). Any commits with changes for the point release should be appled to the new tag. 
-====
-
-The Hadoop link:http://wiki.apache.org/hadoop/HowToRelease[How To
-                    Release] wiki page is used as a model for most of the instructions below, and may have more detail on particular sections, so it is worth review.
-
-.Specifying the Heap Space for Maven on OSX
-[NOTE]
-====
-On OSX, you may need to specify the heap space for Maven commands, by setting the `MAVEN_OPTS` variable to `-Xmx3g`.
+You may run into OutOfMemoryErrors building, particularly building the site and
+documentation. Up the heap for Maven by setting the `MAVEN_OPTS` variable.
 You can prefix the variable to the Maven command, as in the following example:
 
 ----
-MAVEN_OPTS="-Xmx2g" mvn package
+MAVEN_OPTS="-Xmx4g -XX:MaxPermSize=256m" mvn package
 ----
 
 You could also set this in an environment variable or alias in your shell.
 ====
 
 
-NOTE: The script _dev-support/make_rc.sh_ automates many of these steps.
-It does not do the modification of the _CHANGES.txt_                    for the release, the close of the staging repository in Apache Maven (human intervention is needed here), the checking of the produced artifacts to ensure they are 'good' -- e.g.
-extracting the produced tarballs, verifying that they look right, then starting HBase and checking that everything is running correctly, then the signing and pushing of the tarballs to link:http://people.apache.org[people.apache.org].
-The script handles everything else, and comes in handy.
+[NOTE]
+====
+The script _dev-support/make_rc.sh_ automates many of the below steps.
+It will checkout a tag, clean the checkout, build src and bin tarballs,
+and deploy the built jars to repository.apache.org.
+It does NOT do the modification of the _CHANGES.txt_ for the release,
+the checking of the produced artifacts to ensure they are 'good' --
+e.g. extracting the produced tarballs, verifying that they
+look right, then starting HBase and checking that everything is running
+correctly -- or the signing and pushing of the tarballs to
+link:https://people.apache.org[people.apache.org].
+Take a look. Modify/improve as you see fit.
+====
 
 .Procedure: Release Procedure
 . Update the _CHANGES.txt_ file and the POM files.
@@ -546,118 +589,188 @@ The script handles everything else, and comes in handy.
 Update _CHANGES.txt_ with the changes since the last release.
 Make sure the URL to the JIRA points to the proper location which lists fixes for this release.
 Adjust the version in all the POM files appropriately.
-If you are making a release candidate, you must remove the `-SNAPSHOT` label from all versions.
+If you are making a release candidate, you must remove the `-SNAPSHOT` label from all versions
+in all pom.xml files.
 If you are running this recipe to publish a snapshot, you must keep the `-SNAPSHOT` suffix on the hbase version.
-The link:http://mojo.codehaus.org/versions-maven-plugin/[Versions
-                            Maven Plugin] can be of use here.
+The link:http://www.mojohaus.org/versions-maven-plugin/[Versions Maven Plugin] can be of use here.
 To set a version in all the many poms of the hbase multi-module project, use a command like the following:
 +
 [source,bourne]
 ----
-
-$ mvn clean org.codehaus.mojo:versions-maven-plugin:1.3.1:set -DnewVersion=0.96.0
+$ mvn clean org.codehaus.mojo:versions-maven-plugin:2.5:set -DnewVersion=1.5.0
 ----
 +
-Checkin the _CHANGES.txt_ and any version changes.
+Make sure all versions in poms are changed! Check in the _CHANGES.txt_ and any maven version changes.
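One way to double-check that no pom was missed is to grep the checkout for any remaining `-SNAPSHOT` version strings; a small sketch:

```shell
# Sketch: report pom.xml files under a directory that still carry a
# -SNAPSHOT string; prints nothing when every pom has been updated.
find_stale_poms() {
  find "$1" -name pom.xml -exec grep -l -- '-SNAPSHOT' {} +
}

# Usage against an HBase checkout:
#   find_stale_poms /path/to/hbase
```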
 
 . Update the documentation.
 +
-Update the documentation under _src/main/docbkx_.
-This usually involves copying the latest from trunk and making version-particular adjustments to suit this release candidate version. 
+Update the documentation under _src/main/asciidoc_.
+This usually involves copying the latest from master branch and making version-particular
+adjustments to suit this release candidate version.
 
-. Build the source tarball.
+. Clean the checkout dir
 +
-Now, build the source tarball.
-This tarball is Hadoop-version-independent.
-It is just the pure source code and documentation without a particular hadoop taint, etc.
-Add the `-Prelease` profile when building.
-It checks files for licenses and will fail the build if unlicensed files are present.
+[source,bourne]
+----
+
+$ mvn clean
+$ git clean -f -x -d
+----
+
+
+. Run Apache-Rat to check that licenses are good
 +
 [source,bourne]
 ----
 
-$ mvn clean install -DskipTests assembly:single -Dassembly.file=hbase-assembly/src/main/assembly/src.xml -Prelease
+$ mvn apache-rat:check
 ----
 +
-Extract the tarball and make sure it looks good.
-A good test for the src tarball being 'complete' is to see if you can build new tarballs from this source bundle.
-If the source tarball is good, save it off to a _version directory_, a directory somewhere where you are collecting all of the tarballs you will publish as part of the release candidate.
-For example if you were building a hbase-0.96.0 release candidate, you might call the directory _hbase-0.96.0RC0_.
-Later you will publish this directory as our release candidate up on http://people.apache.org/~YOU. 
+If the above fails, check the rat log.
 
-. Build the binary tarball.
 +
-Next, build the binary tarball.
-Add the `-Prelease`                        profile when building.
-It checks files for licenses and will fail the build if unlicensed files are present.
-Do it in two steps.
+[source,bourne]
+----
+$ grep 'Rat check' patchprocess/mvn_apache_rat.log
+----
 +
-* First install into the local repository
+
+. Create a release tag.
+Presuming you have run basic tests, the rat check passes, and all is
+looking good, now is the time to tag the release candidate (you
+can always remove the tag if you need to redo). To tag, do
+what follows, substituting in the version appropriate to your build.
+All tags should be signed tags; i.e. pass the _-s_ option (see
+link:https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work[Signing Your Work]
+for how to set up your git environment for signing).
+
 +
 [source,bourne]
 ----
 
-$ mvn clean install -DskipTests -Prelease
+$ git tag -s 1.5.0-RC0 -m "Tagging the 1.5.0 first Release Candidate (Candidates start at zero)"
 ----
 
-* Next, generate documentation and assemble the tarball.
+Or, if you are making a release, tags should have a _rel/_ prefix to ensure
+they are preserved in the Apache repo as in:
+
+[source,bourne]
+----
+$ git tag -s rel/1.5.0 -m "Tagging the 1.5.0 Release"
+----
+
+Push the (specific) tag (only) so others have access.
 +
 [source,bourne]
 ----
 
+$ git push origin 1.5.0-RC0
+----
++
+For how to delete tags, see
+link:http://www.manikrathee.com/how-to-delete-a-tag-in-git.html[How to Delete a Tag]. Covers
+deleting tags that have not yet been pushed to the remote Apache
+repo as well as delete of tags pushed to Apache.
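As a self-contained illustration, the local-delete step can be tried out in a throwaway repository; the tag name is the example RC tag from above, and the demo skips signing (real RC tags use `-s`, which needs a configured GPG key):

```shell
# Throwaway-repo demo: create an RC tag, then delete it locally.
# Deleting the remote copy (not run here) would be:
#   git push origin :refs/tags/1.5.0-RC0
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "seed"
git tag 1.5.0-RC0     # unsigned here for the demo; real RC tags pass -s
git tag -d 1.5.0-RC0  # remove the local tag
git tag               # lists remaining tags; none are left
```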
+
+
+. Build the source tarball.
++
+Now, build the source tarball. Let's presume we are building the source
+tarball for the tag _1.5.0-RC0_ into _/tmp/hbase-1.5.0-RC0/_
+(This step requires that the mvn and git clean steps described above have just been done).
++
+[source,bourne]
+----
+$ git archive --format=tar.gz --output="/tmp/hbase-1.5.0-RC0/hbase-1.5.0-src.tar.gz" --prefix="hbase-1.5.0/" $git_tag
+----
+
+Above we generate the hbase-1.5.0-src.tar.gz tarball into the
+_/tmp/hbase-1.5.0-RC0_ build output directory (We don't want the _RC0_ in the name or prefix.
+These bits are currently a release candidate but if the VOTE passes, they will become the release so we do not taint
+the artifact names with _RCX_).
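The RC-free artifact name and prefix can be derived from the tag mechanically; a small illustrative sketch (not part of _make_rc.sh_):

```shell
# Illustrative: derive the release version (and hence the src tarball
# name and archive prefix) from an RC tag by stripping the -RCn suffix.
git_tag="1.5.0-RC0"
version=$(echo "$git_tag" | sed 's/-RC[0-9]*$//')
echo "hbase-${version}-src.tar.gz"   # tarball name carries no RC marker
```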
+
+. Build the binary tarball.
+Next, build the binary tarball. Add the `-Prelease` profile when building.
+It runs the license apache-rat check among other rules that help ensure
+all is wholesome. Do it in two steps.
+
+First install into the local repository
+
+[source,bourne]
+----
+
+$ mvn clean install -DskipTests -Prelease
+----
+
+Next, generate documentation and assemble the tarball. Be warned,
+this next step can take a good while, a couple of hours generating site
+documentation.
+
+[source,bourne]
+----
+
 $ mvn install -DskipTests site assembly:single -Prelease
 ----
 
 +
-Otherwise, the build complains that hbase modules are not in the maven repository when you try to do it at once, especially on fresh repository.
+Otherwise, the build complains that hbase modules are not in the maven repository
+when you try to do it all in one step, especially on a fresh repository.
 It seems that you need the install goal in both steps.
 +
-Extract the generated tarball and check it out.
+Extract the generated tarball -- you'll find it under
+_hbase-assembly/target_ and check it out.
 Look at the documentation, see if it runs, etc.
-If good, copy the tarball to the above mentioned _version directory_. 
+If good, copy the tarball beside the source tarball in the
+build output directory.
 
-. Create a new tag.
-+
-.Point Release Only
-[NOTE]
-====
-The following step that creates a new tag can be skipped since you've already created the point release tag
-====
-+
-Tag the release at this point since it looks good.
-If you find an issue later, you can delete the tag and start over.
-Release needs to be tagged for the next step.
 
 . Deploy to the Maven Repository.
 +
-Next, deploy HBase to the Apache Maven repository, using the `apache-release` profile instead of the `release` profile when running the `mvn deploy` command.
-This profile invokes the Apache pom referenced by our pom files, and also signs your artifacts published to Maven, as long as the _settings.xml_ is configured correctly, as described in <<mvn.settings.file,mvn.settings.file>>.
+Next, deploy HBase to the Apache Maven repository. Add the
+`apache-release` profile when running the `mvn deploy` command.
+This profile comes from the Apache parent pom referenced by our pom files.
+It does signing of your artifacts published to Maven, as long as the
+_settings.xml_ is configured correctly, as described in <<maven.settings.xml>>.
+This step depends on the local repository having been populated
+by the just-previous bin tarball build.
+
 +
 [source,bourne]
 ----
 
-$ mvn deploy -DskipTests -Papache-release
+$ mvn deploy -DskipTests -Papache-release -Prelease
 ----
 +
 This command copies all artifacts up to a temporary staging Apache mvn repository in an 'open' state.
-More work needs to be done on these maven artifacts to make them generally available. 
+More work needs to be done on these maven artifacts to make them generally available.
 +
-We do not release HBase tarball to the Apache Maven repository. To avoid deploying the tarball, do not include the `assembly:single` goal in your `mvn deploy` command. Check the deployed artifacts as described in the next section.
+We do not release HBase tarball to the Apache Maven repository. To avoid deploying the tarball, do not
+include the `assembly:single` goal in your `mvn deploy` command. Check the deployed artifacts as described in the next section.
+
+.make_rc.sh
+[NOTE]
+====
+If you run the _dev-support/make_rc.sh_ script, this is as far as it takes you.
+To finish the release, take up the script from here on out.
+====
 
 . Make the Release Candidate available.
 +
 The artifacts are in the maven repository in the staging area in the 'open' state.
 While in this 'open' state you can check out what you've published to make sure all is good.
-To do this, login at link:http://repository.apache.org[repository.apache.org]                        using your Apache ID.
-Find your artifacts in the staging repository.
-Browse the content.
-Make sure all artifacts made it up and that the poms look generally good.
-If it checks out, 'close' the repo.
-This will make the artifacts publically available.
-You will receive an email with the URL to give out for the temporary staging repository for others to use trying out this new release candidate.
-Include it in the email that announces the release candidate.
-Folks will need to add this repo URL to their local poms or to their local _settings.xml_ file to pull the published release candidate artifacts.
-If the published artifacts are incomplete or have problems, just delete the 'open' staged artifacts.
+To do this, log in to Apache's Nexus at link:https://repository.apache.org[repository.apache.org] using your Apache ID.
+Find your artifacts in the staging repository. Click on 'Staging Repositories' and look for a new one ending in "hbase" with a status of 'Open'; select it.
+Use the tree view to expand the list of repository contents and inspect if the artifacts you expect are present. Check the POMs.
+As long as the staging repo is open you can re-upload if something is missing or built incorrectly.
++
+If something is seriously wrong and you would like to back out the upload, you can use the 'Drop' button to drop and delete the staging repository.
+Sometimes the upload fails in the middle. This is another reason you might have to 'Drop' the upload from the staging repository.
++
+If it checks out, close the repo using the 'Close' button. The repository must be closed before a public URL to it becomes available. It may take a few minutes for the repository to close. Once complete you'll see a public URL to the repository in the Nexus UI. You may also receive an email with the URL. Provide the URL to the temporary staging repository in the email that announces the release candidate.
+(Folks will need to add this repo URL to their local poms or to their local _settings.xml_ file to pull the published release candidate artifacts.)
++
+When the release vote concludes successfully, return here and click the 'Release' button to release the artifacts to central. The release process will automatically drop and delete the staging repository.
 +
 .hbase-downstreamer
 [NOTE]
@@ -665,60 +778,57 @@ If the published artifacts are incomplete or have problems, just delete the 'ope
 See the link:https://github.com/saintstack/hbase-downstreamer[hbase-downstreamer] test for a simple example of a project that is downstream of HBase and depends on it.
 Check it out and run its simple test to make sure maven artifacts are properly deployed to the maven repository.
 Be sure to edit the pom to point to the proper staging repository.
-Make sure you are pulling from the repository when tests run and that you are not getting from your local repository, by either passing the `-U` flag or deleting your local repo content and check maven is pulling from remote out of the staging repository. 
+Make sure you are pulling from the repository when tests run and that you are not getting from your local repository, by either passing the `-U` flag or deleting your local repo content and check maven is pulling from remote out of the staging repository.
 ====
-+
-See link:http://www.apache.org/dev/publishing-maven-artifacts.html[Publishing Maven Artifacts] for some pointers on this maven staging process.
-+
-NOTE: We no longer publish using the maven release plugin.
-Instead we do +mvn deploy+.
-It seems to give us a backdoor to maven release publishing.
-If there is no _-SNAPSHOT_                            on the version string, then we are 'deployed' to the apache maven repository staging directory from which we can publish URLs for candidates and later, if they pass, publish as release (if a _-SNAPSHOT_ on the version string, deploy will put the artifacts up into apache snapshot repos). 
-+
+
+See link:https://www.apache.org/dev/publishing-maven-artifacts.html[Publishing Maven Artifacts] for some pointers on this maven staging process.
+
 If the HBase version ends in `-SNAPSHOT`, the artifacts go elsewhere.
 They are put into the Apache snapshots repository directly and are immediately available.
 Making a SNAPSHOT release, this is what you want to happen.
 
-. If you used the _make_rc.sh_ script instead of doing
-  the above manually, do your sanity checks now.
-+
-At this stage, you have two tarballs in your 'version directory' and a set of artifacts in a staging area of the maven repository, in the 'closed' state.
-These are publicly accessible in a temporary staging repository whose URL you should have gotten in an email.
-The above mentioned script, _make_rc.sh_ does all of the above for you minus the check of the artifacts built, the closing of the staging repository up in maven, and the tagging of the release.
-If you run the script, do your checks at this stage verifying the src and bin tarballs and checking what is up in staging using hbase-downstreamer project.
-Tag before you start the build.
-You can always delete it if the build goes haywire. 
-
-. Sign, upload, and 'stage' your version directory to link:http://people.apache.org[people.apache.org] (TODO:
-  There is a new location to stage releases using svnpubsub.  See
-  (link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please delete old releases from mirroring system]).
-+
-If all checks out, next put the _version directory_ up on link:http://people.apache.org[people.apache.org].
-You will need to sign and fingerprint them before you push them up.
-In the _version directory_ run the following commands: 
-+
+At this stage, you have two tarballs in your 'build output directory' and a set of artifacts in a staging area of the maven repository, in the 'closed' state.
+Next sign, fingerprint and then 'stage' your release candidate build output directory via svnpubsub by committing
+your directory to link:https://dist.apache.org/repos/dist/dev/hbase/[The 'dev' distribution directory] (See comments on link:https://issues.apache.org/jira/browse/HBASE-10554[HBASE-10554 Please delete old releases from mirroring system] but in essence it is an svn checkout of https://dist.apache.org/repos/dist/dev/hbase -- releases are at https://dist.apache.org/repos/dist/release/hbase). In the _version directory_ run the following commands:
+
 [source,bourne]
 ----
 
-$ for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done
 $ for i in *.tar.gz; do echo $i; gpg --print-md MD5 $i > $i.md5 ; done
 $ for i in *.tar.gz; do echo $i; gpg --print-md SHA512 $i > $i.sha ; done
 $ for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i  ; done
 $ cd ..
-# Presuming our 'version directory' is named 0.96.0RC0, now copy it up to people.apache.org.
-$ rsync -av 0.96.0RC0 people.apache.org:public_html
+# Presuming our 'build output directory' is named 1.5.0RC0, copy it to the svn checkout of the dist dev dir
+# in this case named hbase.dist.dev.svn
+$ cd /Users/stack/checkouts/hbase.dist.dev.svn
+$ svn info
+Path: .
+Working Copy Root Path: /Users/stack/checkouts/hbase.dist.dev.svn
+URL: https://dist.apache.org/repos/dist/dev/hbase
+Repository Root: https://dist.apache.org/repos/dist
+Repository UUID: 0d268c88-bc11-4956-87df-91683dc98e59
+Revision: 15087
+Node Kind: directory
+Schedule: normal
+Last Changed Author: ndimiduk
+Last Changed Rev: 15045
+Last Changed Date: 2016-08-28 11:13:36 -0700 (Sun, 28 Aug 2016)
+$ mv 1.5.0RC0 /Users/stack/checkouts/hbase.dist.dev.svn
+$ svn add 1.5.0RC0
+$ svn commit ...
 ----
 +
-Make sure the link:http://people.apache.org[people.apache.org] directory is showing and that the mvn repo URLs are good.
-Announce the release candidate on the mailing list and call a vote. 
+Ensure it actually gets published by checking link:https://dist.apache.org/repos/dist/dev/hbase/[https://dist.apache.org/repos/dist/dev/hbase/].
+
+Announce the release candidate on the mailing list and call a vote.
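Before (or while) the vote runs, the staged checksums can be sanity-checked mechanically. A sketch in a scratch directory with a stand-in artifact, using coreutils `sha512sum` (an assumed alternative format whose `-c` mode verifies in one pass; the `gpg --print-md` output shown above would instead be compared against a fresh `gpg --print-md` run via `diff`):

```shell
# Scratch demo: generate sha512sum-format checksums for every tarball and
# verify them in one pass. The tarball below is a stand-in for a real artifact.
set -e
workdir=$(mktemp -d)
cd "$workdir"
echo "fake release bits" > hbase-1.5.0RC0-bin.tar.gz
for i in *.tar.gz; do
  sha512sum "$i" > "$i.sha512"
done
sha512sum -c *.sha512   # each line should end in ": OK"
```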
 
 
 [[maven.snapshot]]
 === Publishing a SNAPSHOT to maven
 
-Make sure your _settings.xml_ is set up properly, as in <<mvn.settings.file,mvn.settings.file>>.
+Make sure your _settings.xml_ is set up properly (see <<maven.settings.xml>>).
 Make sure the hbase version includes `-SNAPSHOT` as a suffix.
-Following is an example of publishing SNAPSHOTS of a release that had an hbase version of 0.96.0 in its poms.
+Following is an example of publishing SNAPSHOTS of a release that had an hbase version of 1.5.0 in its poms.
 
 [source,bourne]
 ----
@@ -729,7 +839,7 @@ Following is an example of publishing SNAPSHOTS of a release that had an hbase v
 
 The _make_rc.sh_ script mentioned above (see <<maven.release,maven.release>>) can help you publish `SNAPSHOTS`.
 Make sure your `hbase.version` has a `-SNAPSHOT`                suffix before running the script.
-It will put a snapshot up into the apache snapshot repository for you. 
+It will put a snapshot up into the apache snapshot repository for you.
 
 [[hbase.rc.voting]]
 == Voting on Release Candidates
@@ -744,7 +854,7 @@ PMC members, please read this WIP doc on policy voting for a release candidate,
                 requirements of the ASF policy on releases._ Regards the latter, run +mvn apache-rat:check+ to verify all files are suitably licensed.
 See link:http://search-hadoop.com/m/DHED4dhFaU[HBase, mail # dev - On
                 recent discussion clarifying ASF release policy].
-for how we arrived at this process. 
+for how we arrived at this process.
 
 [[documentation]]
 == Generating the HBase Reference Guide
@@ -752,10 +862,10 @@ for how we arrived at this process.
 The manual is marked up using Asciidoc.
 We then use the link:http://asciidoctor.org/docs/asciidoctor-maven-plugin/[Asciidoctor maven plugin] to transform the markup to html.
 This plugin is run when you specify the +site+ goal as in when you run +mvn site+.
-See <<appendix_contributing_to_documentation,appendix contributing to documentation>> for more information on building the documentation. 
+See <<appendix_contributing_to_documentation,appendix contributing to documentation>> for more information on building the documentation.
 
 [[hbase.org]]
-== Updating link:http://hbase.apache.org[hbase.apache.org]
+== Updating link:https://hbase.apache.org[hbase.apache.org]
 
 [[hbase.org.site.contributing]]
 === Contributing to hbase.apache.org
@@ -763,26 +873,9 @@ See <<appendix_contributing_to_documentation,appendix contributing to documentat
 See <<appendix_contributing_to_documentation,appendix contributing to documentation>> for more information on contributing to the documentation or website.
 
 [[hbase.org.site.publishing]]
-=== Publishing link:http://hbase.apache.org[hbase.apache.org]
+=== Publishing link:https://hbase.apache.org[hbase.apache.org]
 
-As of link:https://issues.apache.org/jira/browse/INFRA-5680[INFRA-5680 Migrate apache hbase website], to publish the website, build it using Maven, and then deploy it over a checkout of _https://svn.apache.org/repos/asf/hbase/hbase.apache.org/trunk_                and check in your changes.
-The script _dev-scripts/publish_hbase_website.sh_ is provided to automate this process and to be sure that stale files are removed from SVN.
-Review the script even if you decide to publish the website manually.
-Use the script as follows:
-
-----
-$ publish_hbase_website.sh -h
-Usage: publish_hbase_website.sh [-i | -a] [-g <dir>] [-s <dir>]
- -h          Show this message
- -i          Prompts the user for input
- -a          Does not prompt the user. Potentially dangerous.
- -g          The local location of the HBase git repository
- -s          The local location of the HBase svn checkout
- Either --interactive or --silent is required.
- Edit the script to set default Git and SVN directories.
-----
-
-NOTE: The SVN commit takes a long time.
+See <<website_publish>> for instructions on publishing the website and documentation.
 
 [[hbase.tests]]
 == Tests
@@ -806,7 +899,7 @@ For any other module, for example `hbase-common`, the tests must be strict unit
 
 The HBase shell and its tests are predominantly written in jruby.
 In order to make these tests run as a part of the standard build, there is a single JUnit test, `TestShell`, that takes care of loading the jruby implemented tests and running them.
-You can run all of these tests from the top level with: 
+You can run all of these tests from the top level with:
 
 [source,bourne]
 ----
@@ -816,7 +909,7 @@ You can run all of these tests from the top level with:
 
 Alternatively, you may limit the shell tests that run using the system variable `shell.test`.
 This value should specify the ruby literal equivalent of a particular test case by name.
-For example, the tests that cover the shell commands for altering tables are contained in the test case `AdminAlterTableTest`        and you can run them with: 
+For example, the tests that cover the shell commands for altering tables are contained in the test case `AdminAlterTableTest`        and you can run them with:
 
 [source,bourne]
 ----
@@ -826,7 +919,7 @@ For example, the tests that cover the shell commands for altering tables are con
 
 You may also use a link:http://docs.ruby-doc.com/docs/ProgrammingRuby/html/language.html#UJ[Ruby Regular Expression
       literal] (in the `/pattern/` style) to select a set of test cases.
-You can run all of the HBase admin related tests, including both the normal administration and the security administration, with the command: 
+You can run all of the HBase admin related tests, including both the normal administration and the security administration, with the command:
 
 [source,bourne]
 ----
@@ -834,7 +927,7 @@ You can run all of the HBase admin related tests, including both the normal admi
       mvn clean test -Dtest=TestShell -Dshell.test=/.*Admin.*Test/
 ----
 
-In the event of a test failure, you can see details by examining the XML version of the surefire report results 
+In the event of a test failure, you can see details by examining the XML version of the surefire report results
 
 [source,bourne]
 ----
@@ -876,7 +969,8 @@ Also, keep in mind that if you are running tests in the `hbase-server` module yo
 [[hbase.unittests]]
 === Unit Tests
 
-Apache HBase unit tests are subdivided into four categories: small, medium, large, and integration with corresponding JUnit link:http://www.junit.org/node/581[categories]: `SmallTests`, `MediumTests`, `LargeTests`, `IntegrationTests`.
+Apache HBase test cases are subdivided into four categories: small, medium, large, and
+integration with corresponding JUnit link:https://github.com/junit-team/junit4/wiki/Categories[categories]: `SmallTests`, `MediumTests`, `LargeTests`, `IntegrationTests`.
 JUnit categories are denoted using java annotations and look like this in your unit test code.
 
 [source,java]
@@ -891,51 +985,53 @@ public class TestHRegionInfo {
 }
 ----
 
-The above example shows how to mark a unit test as belonging to the `small` category.
-All unit tests in HBase have a categorization. 
+The above example shows how to mark a test case as belonging to the `small` category.
+All test cases in HBase should have a categorization.
 
-The first three categories, `small`, `medium`, and `large`, are for tests run when you type `$ mvn test`.
+The first three categories, `small`, `medium`, and `large`, are for test cases which run when you
+type `$ mvn test`.
 In other words, these three categorizations are for HBase unit tests.
 The `integration` category is not for unit tests, but for integration tests.
 These are run when you invoke `$ mvn verify`.
 Integration tests are described in <<integration.tests,integration.tests>>.
 
-HBase uses a patched maven surefire plugin and maven profiles to implement its unit test characterizations. 
+HBase uses a patched maven surefire plugin and maven profiles to implement its unit test characterizations.
 
-Keep reading to figure which annotation of the set small, medium, and large to put on your new HBase unit test. 
+Keep reading to figure out which annotation of the set small, medium, and large to put on your new
+HBase test case.
 
 .Categorizing Tests
 Small Tests (((SmallTests)))::
-  _Small_ tests are executed in a shared JVM.
-  We put in this category all the tests that can be executed quickly in a shared JVM.
-  The maximum execution time for a small test is 15 seconds, and small tests should not use a (mini)cluster.
+  _Small_ test cases are executed in a shared JVM and individual test cases should run in 15 seconds
+   or less; i.e. a link:https://en.wikipedia.org/wiki/JUnit[JUnit test fixture], a java object made
+   up of test methods, should finish in under 15 seconds. These test cases cannot use a mini cluster.
+   These are run as part of patch pre-commit.
 
 Medium Tests (((MediumTests)))::
-  _Medium_ tests represent tests that must be executed before proposing a patch.
-  They are designed to run in less than 30 minutes altogether, and are quite stable in their results.
-  They are designed to last less than 50 seconds individually.
-  They can use a cluster, and each of them is executed in a separate JVM. 
+  _Medium_ test cases are executed in a separate JVM and individual test cases should run in 50 seconds
+   or less. Together, they should take less than 30 minutes, and are quite stable in their results.
+   These test cases can use a mini cluster. These are run as part of patch pre-commit.
 
 Large Tests (((LargeTests)))::
-  _Large_ tests are everything else.
+  _Large_ test cases are everything else.
   They are typically large-scale tests, regression tests for specific bugs, timeout tests, performance tests.
   They are executed before a commit on the pre-integration machines.
-  They can be run on the developer machine as well. 
+  They can be run on the developer machine as well.
 
 Integration Tests (((IntegrationTests)))::
   _Integration_ tests are system level tests.
-  See <<integration.tests,integration.tests>> for more info. 
+  See <<integration.tests,integration.tests>> for more info.
 
 [[hbase.unittests.cmds]]
 === Running tests
 
 [[hbase.unittests.cmds.test]]
-==== Default: small and medium category tests 
+==== Default: small and medium category tests
 
 Running `mvn test` will execute all small tests in a single JVM (no fork) and then medium tests in a separate JVM for each test instance.
 Medium tests are NOT executed if there is an error in a small test.
 Large tests are NOT executed.
-There is one report for small tests, and one report for medium tests if they are executed. 
+There is one report for small tests, and one report for medium tests if they are executed.
 
 [[hbase.unittests.cmds.test.runalltests]]
 ==== Running all tests
@@ -943,38 +1039,38 @@ There is one report for small tests, and one report for medium tests if they are
 Running `mvn test -P runAllTests` will execute small tests in a single JVM then medium and large tests in a separate JVM for each test.
 Medium and large tests are NOT executed if there is an error in a small test.
 Large tests are NOT executed if there is an error in a small or medium test.
-There is one report for small tests, and one report for medium and large tests if they are executed. 
+There is one report for small tests, and one report for medium and large tests if they are executed.
 
 [[hbase.unittests.cmds.test.localtests.mytest]]
 ==== Running a single test or all tests in a package
 
-To run an individual test, e.g. `MyTest`, rum `mvn test -Dtest=MyTest` You can also pass multiple, individual tests as a comma-delimited list: 
+To run an individual test, e.g. `MyTest`, run `mvn test -Dtest=MyTest`. You can also pass multiple, individual tests as a comma-delimited list:
 [source,bash]
 ----
 mvn test  -Dtest=MyTest1,MyTest2,MyTest3
 ----
-You can also pass a package, which will run all tests under the package: 
+You can also pass a package, which will run all tests under the package:
 [source,bash]
 ----
 mvn test '-Dtest=org.apache.hadoop.hbase.client.*'
-----                
+----
 
 When `-Dtest` is specified, the `localTests` profile will be used.
 It will use the official release of maven surefire, rather than our custom surefire plugin, and the old connector (The HBase build uses a patched version of the maven surefire plugin). Each junit test is executed in a separate JVM (A fork per test class). There is no parallelization when tests are running in this mode.
 You will see a new message at the end of the report: `"[INFO] Tests are skipped"`.
 It's harmless.
-However, you need to make sure the sum of `Tests run:` in the `Results:` section of test reports matching the number of tests you specified because no error will be reported when a non-existent test case is specified. 
+However, you need to make sure the sum of `Tests run:` in the `Results:` section of test reports matches the number of tests you specified, because no error is reported when a non-existent test case is specified.
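That cross-check can be mechanized. A sketch that sums the counts from the surefire text reports (the default `target/surefire-reports` layout is assumed; demo files stand in for real reports):

```shell
# Sum the "Tests run:" counts across surefire text reports, to compare against
# the number of tests you asked for. The demo files below stand in for
# target/surefire-reports/*.txt.
set -e
reports=$(mktemp -d)
printf 'Results:\n\nTests run: 3, Failures: 0, Errors: 0, Skipped: 0\n' > "$reports/TestA.txt"
printf 'Results:\n\nTests run: 2, Failures: 0, Errors: 0, Skipped: 0\n' > "$reports/TestB.txt"
grep -h 'Tests run:' "$reports"/*.txt \
  | awk -F'[ ,]' '{sum += $3} END {print "total tests run: " sum}'
```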
 
 [[hbase.unittests.cmds.test.profiles]]
 ==== Other test invocation permutations
 
-Running `mvn test -P runSmallTests` will execute "small" tests only, using a single JVM. 
+Running `mvn test -P runSmallTests` will execute "small" tests only, using a single JVM.
 
-Running `mvn test -P runMediumTests` will execute "medium" tests only, launching a new JVM for each test-class. 
+Running `mvn test -P runMediumTests` will execute "medium" tests only, launching a new JVM for each test-class.
 
-Running `mvn test -P runLargeTests` will execute "large" tests only, launching a new JVM for each test-class. 
+Running `mvn test -P runLargeTests` will execute "large" tests only, launching a new JVM for each test-class.
 
-For convenience, you can run `mvn test -P runDevTests` to execute both small and medium tests, using a single JVM. 
+For convenience, you can run `mvn test -P runDevTests` to execute both small and medium tests, using a single JVM.
 
 [[hbase.unittests.test.faster]]
 ==== Running tests faster
@@ -996,7 +1092,7 @@ $ sudo mkdir /ram2G
 sudo mount -t tmpfs -o size=2048M tmpfs /ram2G
 ----
 
-You can then use it to run all HBase tests on 2.0 with the command: 
+You can then use it to run all HBase tests on 2.0 with the command:
 
 ----
 mvn test
@@ -1004,7 +1100,7 @@ mvn test
                         -Dtest.build.data.basedirectory=/ram2G
 ----
 
-On earlier versions, use: 
+On earlier versions, use:
 
 ----
 mvn test
@@ -1023,7 +1119,7 @@ It must be executed from the directory which contains the _pom.xml_.
 For example running +./dev-support/hbasetests.sh+ will execute small and medium tests.
 Running +./dev-support/hbasetests.sh
                         runAllTests+ will execute all tests.
-Running +./dev-support/hbasetests.sh replayFailed+ will rerun the failed tests a second time, in a separate jvm and without parallelisation. 
+Running +./dev-support/hbasetests.sh replayFailed+ will rerun the failed tests a second time, in a separate jvm and without parallelisation.
 
 [[hbase.unittests.resource.checker]]
 ==== Test Resource Checker(((Test ResourceChecker)))
@@ -1033,7 +1129,7 @@ Check the _*-out.txt_ files). The resources counted are the number of threads, t
 If the number has increased, it adds a _LEAK?_ comment in the logs.
 As you can have an HBase instance running in the background, some threads can be deleted/created without any specific action in the test.
 However, if the test does not work as expected, or if the test should not impact these resources, it's worth checking these log lines [computeroutput]+...hbase.ResourceChecker(157): before...+                    and [computeroutput]+...hbase.ResourceChecker(157): after...+.
-For example: 
+For example:
 
 ----
 2012-09-26 09:22:15,315 INFO [pool-1-thread-1]
@@ -1061,9 +1157,7 @@ ConnectionCount=1 (was 1)
 
 * All tests must be categorized, if not they could be skipped.
 * All tests should be written to be as fast as possible.
-* Small category tests should last less than 15 seconds, and must not have any side effect.
-* Medium category tests should last less than 50 seconds.
-* Large category tests should last less than 3 minutes.
+* See <<hbase.unittests,hbase.unittests>> for test case categories and corresponding timeouts.
   This should ensure a good parallelization for people using it, and ease the analysis when the test fails.
 
 [[hbase.tests.sleeps]]
@@ -1076,10 +1170,10 @@ This allows understanding what the test is waiting for.
 Moreover, the test will work whatever the machine performance is.
 Sleep should be minimal to be as fast as possible.
 Waiting for a variable should be done in a 40ms sleep loop.
-Waiting for a socket operation should be done in a 200 ms sleep loop. 
+Waiting for a socket operation should be done in a 200 ms sleep loop.
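The recommended shape, sketched here in shell for illustration (in Java test code the same pattern is a loop around `Thread.sleep(40)` guarded by an overall deadline): poll the condition in a short sleep loop with a timeout, rather than sleeping once for a fixed guess.

```shell
# Poll-with-timeout instead of a single fixed sleep: check the condition every
# 40ms and give up after an overall deadline. The background touch of a temp
# file is a stand-in for "the thing the test is waiting for".
set -e
flag=$(mktemp -u)
( sleep 0.2; touch "$flag" ) &
tries=0
until [ -e "$flag" ]; do
  tries=$((tries + 1))
  if [ "$tries" -gt 250 ]; then   # ~10s deadline at 40ms per poll
    echo "timed out waiting for condition"
    exit 1
  fi
  sleep 0.04                      # the 40ms poll interval recommended above
done
echo "condition met"
```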
 
 [[hbase.tests.cluster]]
-==== Tests using a cluster 
+==== Tests using a cluster
 
 Tests using a HRegion do not have to start a cluster: A region can use the local file system.
 Starting/stopping a cluster costs around 10 seconds.
@@ -1087,7 +1181,50 @@ They should not be started per test method but per test class.
 Started cluster must be shutdown using [method]+HBaseTestingUtility#shutdownMiniCluster+, which cleans the directories.
 As much as possible, tests should use the default settings for the cluster.
 When they don't, they should document it.
-This will allow to share the cluster later. 
+This will make it possible to share the cluster later.
+
+[[hbase.tests.example.code]]
+==== Tests Skeleton Code
+
+Here is a test skeleton with categorization and a category-based timeout rule that you can copy and paste and use as the basis for your test contribution.
+[source,java]
+----
+/**
+ * Describe what this testcase tests. Talk about resources initialized in @BeforeClass (before
+ * any test is run) and before each test is run, etc.
+ */
+// Specify the category as explained in <<hbase.unittests,hbase.unittests>>.
+@Category(SmallTests.class)
+public class TestExample {
+  // Replace the TestExample.class in the below with the name of your test fixture class.
+  private static final Log LOG = LogFactory.getLog(TestExample.class);
+
+  // Handy test rule that allows you subsequently get the name of the current method. See
+  // down in 'testExampleFoo()' where we use it to log current test's name.
+  @Rule public TestName testName = new TestName();
+
+  // The below rule does two things. It decides the timeout based on the category
+  // (small/medium/large) of the testcase. This @Rule requires that the full testcase runs
+  // within this timeout irrespective of individual test methods' times. The second
+  // feature is we'll dump in the log when the test is done a count of threads still
+  // running.
+  @Rule public TestRule timeout = CategoryBasedTimeout.builder().
+    withTimeout(this.getClass()).withLookingForStuckThread(true).build();
+
+  @Before
+  public void setUp() throws Exception {
+  }
+
+  @After
+  public void tearDown() throws Exception {
+  }
+
+  @Test
+  public void testExampleFoo() {
+    LOG.info("Running test " + testName.getMethodName());
+  }
+}
+----
 
 [[integration.tests]]
 === Integration Tests
@@ -1095,16 +1232,16 @@ This will allow to share the cluster later.
 HBase integration/system tests are tests that are beyond HBase unit tests.
 They are generally long-lasting, sizeable (the test can be asked to load 1M rows or 1B rows), targetable (they can take configuration that will point them at the ready-made cluster they are to run against; integration tests do not include cluster start/stop code), and they verify success through public APIs only; they do not attempt to examine server internals to assert success or failure.
 Integration tests are what you would run when you need more elaborate proofing of a release candidate beyond what unit tests can do.
-They are not generally run on the Apache Continuous Integration build server, however, some sites opt to run integration tests as a part of their continuous testing on an actual cluster. 
+They are not generally run on the Apache Continuous Integration build server, however, some sites opt to run integration tests as a part of their continuous testing on an actual cluster.
 
 Integration tests currently live under the _src/test_                directory in the hbase-it submodule and will match the regex: _**/IntegrationTest*.java_.
-All integration tests are also annotated with `@Category(IntegrationTests.class)`. 
+All integration tests are also annotated with `@Category(IntegrationTests.class)`.
 
 Integration tests can be run in two modes: using a mini cluster, or against an actual distributed cluster.
 Maven failsafe is used to run the tests using the mini cluster.
 IntegrationTestsDriver class is used for executing the tests against a distributed cluster.
 Integration tests SHOULD NOT assume that they are running against a mini cluster, and SHOULD NOT use private API's to access cluster state.
-To interact with the distributed or mini cluster uniformly, `IntegrationTestingUtility`, and `HBaseCluster` classes, and public client API's can be used. 
+To interact with the distributed or mini cluster uniformly, `IntegrationTestingUtility`, and `HBaseCluster` classes, and public client API's can be used.
 
 On a distributed cluster, integration tests that use ChaosMonkey or otherwise manipulate services thru cluster manager (e.g.
 restart regionservers) use SSH to do it.
@@ -1118,15 +1255,15 @@ The argument 1 (%1$s) is SSH options set the via opts setting or via environment
 ----
 /usr/bin/ssh %1$s %2$s%3$s%4$s "su hbase - -c \"%5$s\""
 ----
-That way, to kill RS (for example) integration tests may run: 
+That way, to kill RS (for example) integration tests may run:
 [source,bash]
 ----
 {/usr/bin/ssh some-hostname "su hbase - -c \"ps aux | ... | kill ...\""}
 ----
-The command is logged in the test logs, so you can verify it is correct for your environment. 
+The command is logged in the test logs, so you can verify it is correct for your environment.
 
 To disable the running of Integration Tests, pass the following profile on the command line `-PskipIntegrationTests`.
-For example, 
+For example,
 [source]
 ----
 $ mvn clean install test -Dtest=TestZooKeeper  -PskipIntegrationTests
@@ -1136,7 +1273,7 @@ $ mvn clean install test -Dtest=TestZooKeeper  -PskipIntegrationTests
 ==== Running integration tests against mini cluster
 
 HBase 0.92 added a `verify` maven target.
-Invoking it, for example by doing `mvn verify`, will run all the phases up to and including the verify phase via the maven link:http://maven.apache.org/plugins/maven-failsafe-plugin/[failsafe
+Invoking it, for example by doing `mvn verify`, will run all the phases up to and including the verify phase via the maven link:https://maven.apache.org/plugins/maven-failsafe-plugin/[failsafe
                         plugin], running all the above mentioned HBase unit tests as well as tests that are in the HBase integration test group.
 After you have completed +mvn install -DskipTests+ You can run just the integration tests by invoking:
 
@@ -1148,9 +1285,9 @@ mvn verify
 ----
 
 If you just want to run the integration tests in top-level, you need to run two commands.
-First: +mvn failsafe:integration-test+ This actually runs ALL the integration tests. 
+First: +mvn failsafe:integration-test+ This actually runs ALL the integration tests.
 
-NOTE: This command will always output `BUILD SUCCESS` even if there are test failures. 
+NOTE: This command will always output `BUILD SUCCESS` even if there are test failures.
 
 At this point, you could grep the output by hand looking for failed tests.
 However, maven will do this for us; just use: +mvn
@@ -1161,19 +1298,19 @@ However, maven will do this for us; just use: +mvn
 
 This is very similar to how you specify running a subset of unit tests (see above), but use the property `it.test` instead of `test`.
 To just run `IntegrationTestClassXYZ.java`, use: +mvn
-                            failsafe:integration-test -Dit.test=IntegrationTestClassXYZ+                        The next thing you might want to do is run groups of integration tests, say all integration tests that are named IntegrationTestClassX*.java: +mvn failsafe:integration-test -Dit.test=*ClassX*+ This runs everything that is an integration test that matches *ClassX*. This means anything matching: "**/IntegrationTest*ClassX*". You can also run multiple groups of integration tests using comma-delimited lists (similar to unit tests). Using a list of matches still supports full regex matching for each of the groups.This would look something like: +mvn
-                            failsafe:integration-test -Dit.test=*ClassX*, *ClassY+                    
+                            failsafe:integration-test -Dit.test=IntegrationTestClassXYZ+                        The next thing you might want to do is run groups of integration tests, say all integration tests that are named IntegrationTestClassX*.java: +mvn failsafe:integration-test -Dit.test=*ClassX*+ This runs everything that is an integration test that matches *ClassX*. This means anything matching: "**/IntegrationTest*ClassX*". You can also run multiple groups of integration tests using comma-delimited lists (similar to unit tests). Using a list of matches still supports full regex matching for each of the groups. This would look something like: +mvn
+                            failsafe:integration-test -Dit.test=*ClassX*, *ClassY+
 
 [[maven.build.commands.integration.tests.distributed]]
 ==== Running integration tests against distributed cluster
 
 If you have an already-setup HBase cluster, you can launch the integration tests by invoking the class `IntegrationTestsDriver`.
 You may have to run test-compile first.
-The configuration will be picked by the bin/hbase script. 
+The configuration will be picked by the bin/hbase script.
 [source,bourne]
 ----
 mvn test-compile
----- 
+----
 Then launch the tests with:
 
 [source,bourne]
@@ -1186,26 +1323,30 @@ Running the IntegrationTestsDriver without any argument will launch tests found
 See the usage, by passing -h, to see how to filter test classes.
 You can pass a regex which is checked against the full class name; so, part of class name can be used.
 IntegrationTestsDriver uses Junit to run the tests.
-Currently there is no support for running integration tests against a distributed cluster using maven (see link:https://issues.apache.org/jira/browse/HBASE-6201[HBASE-6201]). 
+Currently there is no support for running integration tests against a distributed cluster using maven (see link:https://issues.apache.org/jira/browse/HBASE-6201[HBASE-6201]).
 
 The tests interact with the distributed cluster by using the methods in the `DistributedHBaseCluster` (implementing `HBaseCluster`) class, which in turn uses a pluggable `ClusterManager`.
 Concrete implementations provide actual functionality for carrying out deployment-specific and environment-dependent tasks (SSH, etc). The default `ClusterManager` is `HBaseClusterManager`, which uses SSH to remotely execute start/stop/kill/signal commands, and assumes some posix commands (ps, etc). Also assumes the user running the test has enough "power" to start/stop servers on the remote machines.
 By default, it picks up `HBASE_SSH_OPTS`, `HBASE_HOME`, `HBASE_CONF_DIR` from the env, and uses `bin/hbase-daemon.sh` to carry out the actions.
-Currently tarball deployments, deployments which uses _hbase-daemons.sh_, and link:http://incubator.apache.org/ambari/[Apache Ambari]                    deployments are supported.
+Currently tarball deployments, deployments which use _hbase-daemons.sh_, and link:https://incubator.apache.org/ambari/[Apache Ambari] deployments are supported.
 _/etc/init.d/_ scripts are not supported for now, but it can be easily added.
-For other deployment options, a ClusterManager can be implemented and plugged in. 
+For other deployment options, a ClusterManager can be implemented and plugged in.
 
 [[maven.build.commands.integration.tests.destructive]]
-==== Destructive integration / system tests
+==== Destructive integration / system tests (ChaosMonkey)
+
+HBase 0.96 introduced a tool named `ChaosMonkey`, modeled after
+link:https://netflix.github.io/chaosmonkey/[the same-named tool by Netflix].
+ChaosMonkey simulates real-world
+faults in a running cluster by killing or disconnecting random servers, or injecting
+other failures into the environment. You can use ChaosMonkey as a stand-alone tool
+to run a policy while other tests are running. In some environments, ChaosMonkey is
+always running, in order to constantly check that high availability and fault tolerance
+are working as expected.
 
-In 0.96, a tool named `ChaosMonkey` has been introduced.
-It is modeled after the link:http://techblog.netflix.com/2012/07/chaos-monkey-released-into-wild.html[same-named tool by Netflix].
-Some of the tests use ChaosMonkey to simulate faults in the running cluster in the way of killing random servers, disconnecting servers, etc.
-ChaosMonkey can also be used as a stand-alone tool to run a (misbehaving) policy while you are running other tests. 
+ChaosMonkey defines *Actions* and *Policies*.
 
-ChaosMonkey defines Action's and Policy's.
-Actions are sequences of events.
-We have at least the following actions:
+Actions:: Actions are predefined sequences of events, such as the following:
 
 * Restart active master (sleep 5 sec)
 * Restart random regionserver (sleep 5 sec)
@@ -1215,23 +1356,17 @@ We have at least the following actions:
 * Batch restart of 50% of regionservers (sleep 5 sec)
 * Rolling restart of 100% of regionservers (sleep 5 sec)
 
-Policies on the other hand are responsible for executing the actions based on a strategy.
-The default policy is to execute a random action every minute based on predefined action weights.
-ChaosMonkey executes predefined named policies until it is stopped.
-More than one policy can be active at any time. 
+Policies:: A policy is a strategy for executing one or more actions. The default policy
+executes a random action every minute based on predefined action weights.
+A given policy will be executed until ChaosMonkey is interrupted.
 
-To run ChaosMonkey as a standalone tool deploy your HBase cluster as usual.
-ChaosMonkey uses the configuration from the bin/hbase script, thus no extra configuration needs to be done.
-You can invoke the ChaosMonkey by running:
-
-[source,bourne]
-----
-bin/hbase org.apache.hadoop.hbase.util.ChaosMonkey
-----
-
-This will output smt like: 
+Most ChaosMonkey actions are configured to have reasonable defaults, so you can run
+ChaosMonkey against an existing cluster without any additional configuration. The
+following example runs ChaosMonkey with the default configuration:
 
+[source,bash]
 ----
+$ bin/hbase org.apache.hadoop.hbase.util.ChaosMonkey
 
 12/11/19 23:21:57 INFO util.ChaosMonkey: Using ChaosMonkey Policy: class org.apache.hadoop.hbase.util.ChaosMonkey$PeriodicRandomActionPolicy, period:60000
 12/11/19 23:21:57 INFO util.ChaosMonkey: Sleeping for 26953 to add jitter
@@ -1270,31 +1405,38 @@ This will output smt like:
 12/11/19 23:24:27 INFO util.ChaosMonkey: Started region server:rs3.example.com,60020,1353367027826. Reported num of rs:6
 ----
 
-As you can see from the log, ChaosMonkey started the default PeriodicRandomActionPolicy, which is configured with all the available actions, and ran RestartActiveMaster and RestartRandomRs actions.
-ChaosMonkey tool, if run from command line, will keep on running until the process is killed. 
+The output indicates that ChaosMonkey started the default `PeriodicRandomActionPolicy`
+policy, which is configured with all the available actions. It chose to run `RestartActiveMaster` and `RestartRandomRs` actions.
+
+==== Available Policies
+HBase ships with several ChaosMonkey policies, available in the
+`hbase/hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/policies/` directory.
 
 [[chaos.monkey.properties]]
-==== Passing individual Chaos Monkey per-test Settings/Properties
+==== Configuring Individual ChaosMonkey Actions
 
-Since HBase version 1.0.0 (link:https://issues.apache.org/jira/browse/HBASE-11348[HBASE-11348]), the chaos monkeys is used to run integration tests can be configured per test run.
-Users can create a java properties file and and pass this to the chaos monkey with timing configurations.
-The properties file needs to be in the HBase classpath.
-The various properties that can be configured and their default values can be found listed in the `org.apache.hadoop.hbase.chaos.factories.MonkeyConstants`                    class.
-If any chaos monkey configuration is missing from the property file, then the default values are assumed.
-For example:
+Since HBase version 1.0.0 (link:https://issues.apache.org/jira/browse/HBASE-11348[HBASE-11348]),
+ChaosMonkey integration tests can be configured per test run.
+Create a Java properties file in the HBase classpath and pass it to ChaosMonkey using
+the `-monkeyProps` configuration flag. Configurable properties, along with their default
+values if applicable, are listed in the `org.apache.hadoop.hbase.chaos.factories.MonkeyConstants`
+class. For properties that have defaults, you can override them by including them
+in your properties file.
+
+The following example uses a properties file called <<monkey.properties,monkey.properties>>.
 
 [source,bourne]
 ----
-
-$bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic -monkeyProps monkey.properties
+$ bin/hbase org.apache.hadoop.hbase.IntegrationTestIngest -m slowDeterministic -monkeyProps monkey.properties
 ----
 
 The above command will start the integration tests and chaos monkey, passing it the properties file _monkey.properties_.
 Here is an example chaos monkey file:
 
+[[monkey.properties]]
+.Example ChaosMonkey Properties File
 [source]
 ----
-
 sdm.action1.period=120000
 sdm.action2.period=40000
 move.regions.sleep.time=80000
@@ -1303,14 +1445,43 @@ move.regions.sleep.time=80000
 batch.restart.rs.ratio=0.4f
 ----
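As an aside, the interval settings in a file like the one above are in milliseconds. A quick sanity check before a long test run can be sketched in the shell; the file name and the awk filter here are illustrative, not part of HBase:

```shell
# Recreate the example settings file from this section (illustrative name).
cat > monkey.properties <<'EOF'
sdm.action1.period=120000
sdm.action2.period=40000
move.regions.sleep.time=80000
batch.restart.rs.ratio=0.4f
EOF

# Print each millisecond-valued setting with its equivalent in seconds,
# e.g. "sdm.action1.period = 120000 ms (120 s)".
awk -F= '/period|sleep.time/ {printf "%s = %s ms (%d s)\n", $1, $2, $2/1000}' monkey.properties
```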
 
+HBase 1.0.2 and newer add the ability to restart HBase's underlying ZooKeeper quorum or
+HDFS nodes. To use these actions, you need to configure some new properties in your
+ChaosMonkey properties file, which may be `hbase-site.xml` or a different properties file.
+These properties have no reasonable defaults because they are deployment-specific.
+
+[source,xml]
+----
+<property>
+  <name>hbase.it.clustermanager.hadoop.home</name>
+  <value>$HADOOP_HOME</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.zookeeper.home</name>
+  <value>$ZOOKEEPER_HOME</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.hbase.user</name>
+  <value>hbase</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.hadoop.hdfs.user</name>
+  <value>hdfs</value>
+</property>
+<property>
+  <name>hbase.it.clustermanager.zookeeper.user</name>
+  <value>zookeeper</value>
+</property>
+----
+
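If you keep these deployment-specific settings in a flat properties file rather than in `hbase-site.xml`, a minimal sketch of generating and checking such a file might look like the following. The paths and user names below are examples only, not defaults:

```shell
# Illustrative: write the deployment-specific ChaosMonkey settings as a
# flat properties file. Paths and user names are examples, not defaults.
cat > chaos-cluster.properties <<'EOF'
hbase.it.clustermanager.hadoop.home=/opt/hadoop
hbase.it.clustermanager.zookeeper.home=/opt/zookeeper
hbase.it.clustermanager.hbase.user=hbase
hbase.it.clustermanager.hadoop.hdfs.user=hdfs
hbase.it.clustermanager.zookeeper.user=zookeeper
EOF

# Confirm all five cluster-manager settings are present before launching
# a destructive test run.
grep -c '^hbase.it.clustermanager' chaos-cluster.properties   # prints 5
```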
 [[developing]]
 == Developer Guidelines
 
-=== Codelines
+=== Branches
 
-Most development is done on the master branch, which is named `master` in the Git repository.
-Previously, HBase used Subversion, in which the master branch was called `TRUNK`.
-Branches exist for minor releases, and important features and bug fixes are often back-ported.
+We use Git for source code management, and the latest development happens on the `master`
+branch. Branches exist for past major/minor/maintenance releases, and important features
+and bug fixes are often back-ported to them.
 
 === Release Managers
 
@@ -1326,25 +1497,29 @@ NOTE: End-of-life releases are not included in this list.
 |===
 | Release
 | Release Manager
-| 0.98
-| Andrew Purtell
 
-| 1.0
-| Enis Soztutar
+| 1.1
+| Nick Dimiduk
+
+| 1.2
+| Sean Busbey
+
+| 1.3
+| Mikhail Antonov
+
 |===
 
 [[code.standards]]
 === Code Standards
 
-See <<eclipse.code.formatting,eclipse.code.formatting>> and <<common.patch.feedback,common.patch.feedback>>. 
 
 ==== Interface Classifications
 
 Interfaces are classified both by audience and by stability level.
 These labels appear at the head of a class.
-The conventions followed by HBase are inherited by its parent project, Hadoop. 
+The conventions followed by HBase are inherited from its parent project, Hadoop.
 
-The following interface classifications are commonly used: 
+The following interface classifications are commonly used:
 
 .InterfaceAudience
 `@InterfaceAudience.Public`::
@@ -1358,8 +1533,6 @@ The following interface classifications are commonly used:
 
 `@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)`::
   APIs for HBase coprocessor writers.
-  As of HBase 0.92/0.94/0.96/0.98 this api is still unstable.
-  No guarantees on compatibility with future versions.
 
 No `@InterfaceAudience` Classification::
   Packages without an `@InterfaceAudience` label are considered private.
@@ -1368,7 +1541,7 @@ No `@InterfaceAudience` Classification::
 .Excluding Non-Public Interfaces from API Documentation
 [NOTE]
 ====
-Only interfaces classified `@InterfaceAudience.Public` should be included in API documentation (Javadoc). Committers must add new package excludes `ExcludePackageNames` section of the _pom.xml_ for new packages which do not contain public classes. 
+Only interfaces classified `@InterfaceAudience.Public` should be included in API documentation (Javadoc). Committers must add new package excludes to the `ExcludePackageNames` section of the _pom.xml_ for new packages which do not contain public classes.
 ====
 
 .@InterfaceStability
@@ -1386,7 +1559,7 @@ Only interfaces classified `@InterfaceAudience.Public` should be included in API
 No `@InterfaceStability` Label::
   Public classes with no `@InterfaceStability` label are discouraged, and should be considered implicitly unstable.
 
-If you are unclear about how to mark packages, ask on the development list. 
+If you are unclear about how to mark packages, ask on the development list.
 
 [[common.patch.feedback]]
 ==== Code Formatting Conventions
@@ -1396,6 +1569,8 @@ These guidelines have been developed based upon common feedback on patches from
 
 See the link:http://www.oracle.com/technetwork/java/index-135089.html[Code
                     Conventions for the Java Programming Language] for more information on coding conventions in Java.
+See <<eclipse.code.formatting,eclipse.code.formatting>> to set up Eclipse to check for some of
+these guidelines automatically.
 
 [[common.patch.feedback.space.invaders]]
 ===== Space Invaders
@@ -1470,7 +1645,6 @@ Bar bar = foo.veryLongMethodWithManyArguments(
 [[common.patch.feedback.trailingspaces]]
 ===== Trailing Spaces
 
-Trailing spaces are a common problem.
 Be sure there is a line break after the end of your code, and avoid lines with nothing but whitespace.
 This makes diffs more meaningful.
 You can configure your IDE to help with this.
@@ -1484,21 +1658,22 @@ Bar bar = foo.getBar();     <--- imagine there is an extra space(s) after the se
 [[common.patch.feedback.javadoc]]
 ===== API Documentation (Javadoc)
 
-This is also a very common feedback item.
 Don't forget Javadoc!
 
 Javadoc warnings are checked during precommit.
 If the precommit tool gives you a '-1', please fix the javadoc issue.
-Your patch won't be committed if it adds such warnings. 
+Your patch won't be committed if it adds such warnings.
+
+Also, no `@author` tags - that's a rule.
 
 [[common.patch.feedback.findbugs]]
 ===== Findbugs
 
 `Findbugs` is used to detect common bug patterns.
-It is checked during the precommit build by Apache's Jenkins.
+It is checked during the precommit build.
 If errors are found, please fix them.
-You can run findbugs locally with +mvn
-                            findbugs:findbugs+, which will generate the `findbugs` files locally.
+You can run findbugs locally with `mvn findbugs:findbugs`, which will generate the `findbugs` files locally.
 Sometimes, you may have to write code smarter than `findbugs`.
 You can annotate your code to tell `findbugs` you know what you're doing, by annotating your class with the following annotation:
 
@@ -1509,38 +1684,42 @@ value="HE_EQUALS_USE_HASHCODE",
 justification="I know what I'm doing")
 ----
 
-It is important to use the Apache-licensed version of the annotations. 
+It is important to use the Apache-licensed version of the annotations. That generally means using
+annotations in the `edu.umd.cs.findbugs.annotations` package so that we can rely on the cleanroom
+reimplementation rather than annotations in the `javax.annotations` package.
 
 [[common.patch.feedback.javadoc.defaults]]
 ===== Javadoc - Useless Defaults
 
-Don't just leave the @param arguments the way your IDE generated them.:
+Don't just leave Javadoc tags the way your IDE generated them, or fill them with redundant information.
 
 [source,java]
 ----
 
   /**
-   *
-   * @param bar             <---- don't do this!!!!
-   * @return                <---- or this!!!!
+   * @param table                              <---- don't leave them empty!
+   * @param region An HRegion object.          <---- don't fill redundant information!
+   * @return Foo Object foo just created.      <---- Not useful information
+   * @throws SomeException                     <---- Not useful. Function declarations already tell that!
+   * @throws BarException when something went wrong  <---- really?
    */
-  public Foo getFoo(Bar bar);
+  public Foo createFoo(Bar bar);
 ----
 
-Either add something descriptive to the @`param` and @`return` lines, or just remove them.
+Either add something descriptive to the tags, or just remove them.
 The preference is to add something descriptive and useful.
 
 [[common.patch.feedback.onething]]
 ===== One Thing At A Time, Folks
 
-If you submit a patch for one thing, don't do auto-reformatting or unrelated reformatting of code on a completely different area of code. 
+If you submit a patch for one thing, don't do auto-reformatting or unrelated reformatting in a completely different area of the code.
 
-Likewise, don't add unrelated cleanup or refactorings outside the scope of your Jira. 
+Likewise, don't add unrelated cleanup or refactorings outside the scope of your Jira.
 
 [[common.patch.feedback.tests]]
 ===== Ambiguous Unit Tests
 
-Make sure that you're clear about what you are testing in your unit tests and why. 
+Make sure that you're clear about what you are testing in your unit tests and why.
 
 [[common.patch.feedback.writable]]
 ===== Implementing Writable
@@ -1548,24 +1727,38 @@ Make sure that you're clear about what you are testing in your unit tests and wh
 .Applies pre-0.96 only
 [NOTE]
 ====
-In 0.96, HBase moved to protocol buffers (protobufs). The below section on Writables applies to 0.94.x and previous, not to 0.96 and beyond. 
+In 0.96, HBase moved to protocol buffers (protobufs). The below section on Writables applies to 0.94.x and previous, not to 0.96 and beyond.
 ====
 
 Every class returned by RegionServers must implement the `Writable` interface.
-If you are creating a new class that needs to implement this interface, do not forget the default constructor. 
+If you are creating a new class that needs to implement this interface, do not forget the default constructor.
+
+==== Garbage-Collection Conserving Guidelines
+
+The following guidelines were borrowed from http://engineering.linkedin.com/performance/linkedin-feed-faster-less-jvm-garbage.
+Keep them in mind to keep preventable garbage collection to a minimum. Have a look
+at the blog post for some great examples of how to refactor your code according to
+these guidelines.
+
+- Be careful with Iterators
+- Estimate the size of a collection when initializing
+- Defer expression evaluation
+- Compile the regex patterns in advance
+- Cache it if you can
+- String Interns are useful but dangerous
 
 [[design.invariants]]
 === Invariants
 
 We don't have many but what we have we list below.
-All are subject to challenge of course but until then, please hold to the rules of the road. 
+All are subject to challenge of course but until then, please hold to the rules of the road.
 
 [[design.invariants.zk.data]]
 ==== No permanent state in ZooKeeper
 
 ZooKeeper state should be transient (treat it like memory). If ZooKeeper state is deleted, HBase should be able to recover and essentially be in the same state.
 
-* .ExceptionsThere are currently a few exceptions that we need to fix around whether a table is enabled or disabled.
+* Exceptions: There are currently a few exceptions that we need to fix around whether a table is enabled or disabled.
 * Replication data is currently stored only in ZooKeeper.
   Deleting ZooKeeper data related to replication may cause replication to be disabled.
   Do not delete the replication tree, _/hbase/replication/_.
@@ -1579,14 +1772,14 @@ Follow progress on this issue at link:https://issues.apache.org/jira/browse/HBAS
 
 If you are developing Apache HBase, frequently it is useful to test your changes against a more-real cluster than what you find in unit tests.
 In this case, HBase can be run directly from the source in local-mode.
-All you need to do is run: 
+All you need to do is run:
 
 [source,bourne]
 ----
 ${HBASE_HOME}/bin/start-hbase.sh
 ----
 
-This will spin up a full local-cluster, just as if you had packaged up HBase and installed it on your machine. 
+This will spin up a full local-cluster, just as if you had packaged up HBase and installed it on your machine.
 
 Keep in mind that you will need to have installed HBase into your local maven repository for the in-situ cluster to work properly.
 That is, you will need to run:
@@ -1607,27 +1800,25 @@ HBase exposes metrics using the Hadoop Metrics 2 system, so adding a new metric
 Unfortunately the API of metri

<TRUNCATED>

[6/9] hbase git commit: HBASE-19420 Backport HBASE-19152 Update refguide 'how to build an RC' and the make_rc.sh script

Posted by ap...@apache.org.
HBASE-19420 Backport HBASE-19152 Update refguide 'how to build an RC' and the make_rc.sh script

Removes src.xml used building src tgz via hbase-assembly.

Use git archive instead going forward. Updates developer release candidate
documentation and the make_rc.sh script.

Slight modifications to developer.adoc for branch-1


Project: http://git-wip-us.apache.org/repos/asf/hbase/repo
Commit: http://git-wip-us.apache.org/repos/asf/hbase/commit/1dba475d
Tree: http://git-wip-us.apache.org/repos/asf/hbase/tree/1dba475d
Diff: http://git-wip-us.apache.org/repos/asf/hbase/diff/1dba475d

Branch: refs/heads/branch-1.4
Commit: 1dba475da6dbba5c04ca34ba906732368bfa7090
Parents: 5f58e61
Author: Andrew Purtell <ap...@apache.org>
Authored: Mon Dec 4 12:17:15 2017 -0800
Committer: Andrew Purtell <ap...@apache.org>
Committed: Mon Dec 4 18:41:11 2017 -0800

----------------------------------------------------------------------
 dev-support/make_rc.sh                     |   97 +-
 hbase-assembly/src/main/assembly/src.xml   |  136 ---
 src/main/asciidoc/_chapters/developer.adoc | 1168 +++++++++++++----------
 3 files changed, 751 insertions(+), 650 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hbase/blob/1dba475d/dev-support/make_rc.sh
----------------------------------------------------------------------
diff --git a/dev-support/make_rc.sh b/dev-support/make_rc.sh
index b88a984..19f906f 100755
--- a/dev-support/make_rc.sh
+++ b/dev-support/make_rc.sh
@@ -28,8 +28,17 @@
 
 set -e
 
-devsupport=`dirname "$0"`
-devsupport=`cd "$devsupport">/dev/null; pwd`
+# Script checks out a tag, cleans the checkout and then builds src and bin
+# tarballs. It then deploys to the apache maven repository.
+# Presumes run from git dir.
+
+# Need a git tag to build.
+if [ "$1" = "" ]
+then
+  echo "Usage: $0 TAG_TO_PACKAGE"
+  exit 1
+fi
+git_tag=$1
 
 # Set mvn and mvnopts
 mvn=mvn
@@ -41,45 +50,67 @@ if [ "$MAVEN_OPTS" != "" ]; then
   mvnopts="${MAVEN_OPTS}"
 fi
 
-# Make a dir to save tgzs in.
+# Ensure we are inside a git repo before making progress
+# The below will fail if outside git.
+git -C . rev-parse
+
+# Checkout git_tag
+git checkout "${git_tag}"
+
+# Get mvn project version
+#shellcheck disable=SC2016
+version=$(${mvn} -q -N -Dexec.executable="echo" -Dexec.args='${project.version}' exec:exec)
+hbase_name="hbase-${version}"
+
+# Make a dir to save tgzs into.
 d=`date -u +"%Y%m%dT%H%M%SZ"`
-archivedir="$(pwd)/../`basename $0`.$d"
-echo "Archive dir ${archivedir}"
-mkdir -p "${archivedir}"
+output_dir="${TMPDIR:-/tmp}/${hbase_name}.$d"
+mkdir -p "${output_dir}"
+
 
-function tgz_mover {
-  mv ./hbase-assembly/target/hbase-*.tar.gz "${archivedir}"
+# Build src tgz.
+function build_src {
+  git archive --format=tar.gz --output="${output_dir}/${hbase_name}-src.tar.gz" --prefix="${hbase_name}/" "${git_tag}"
 }
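As an aside, the `git archive` invocation used by `build_src` above can be tried in isolation. The following sketch builds a prefixed source tarball from a scratch repository, so it has no dependency on the HBase tree; the repo name, tag, and file are illustrative:

```shell
# Demonstrate the `git archive` approach in a scratch repository.
# All names here (demo, rel/1.0.0, README.txt) are illustrative.
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
echo "demo" > README.txt
git add README.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "initial"
git tag rel/1.0.0

# Archive the tag into a tar.gz whose entries live under a version prefix,
# as build_src does for the real source tarball.
git archive --format=tar.gz --output="$workdir/demo-1.0.0-src.tar.gz" \
  --prefix="demo-1.0.0/" rel/1.0.0

# The listing includes demo-1.0.0/README.txt under the prefix.
tar tzf "$workdir/demo-1.0.0-src.tar.gz"
```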
 
-function deploy {
-  MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests -Prelease \
-    -Dmaven.repo.local=${archivedir}/repository
-  MAVEN_OPTS="${mvnopts}" ${mvn} install -DskipTests post-site assembly:single -Prelease \
-    -Dmaven.repo.local=${archivedir}/repository
-  tgz_mover
-  MAVEN_OPTS="${mvnopts}" ${mvn} deploy -DskipTests -Papache-release -Prelease \
-    -Dmaven.repo.local=${archivedir}/repository
+# Build bin tgz
+function build_bin {
+  MAVEN_OPTS="${mvnopts}" ${mvn} clean install -DskipTests -Papache-release -Prelease \
+    -Dmaven.repo.local=${output_dir}/repository
+  MAVEN_OPTS="${mvnopts}" ${mvn} install -DskipTests site assembly:single -Papache-release -Prelease \
+    -Dmaven.repo.local=${output_dir}/repository
+  mv ./hbase-assembly/target/hbase-*.tar.gz "${output_dir}"
 }
 
-# Build src tarball
-# run clean separate from assembly:single because it fails to clean shaded modules correctly
+# Make sure all is clean.
+git clean -f -x -d
 MAVEN_OPTS="${mvnopts}" ${mvn} clean
-MAVEN_OPTS="${mvnopts}" ${mvn} install -DskipTests assembly:single \
-  -Dassembly.file="$(pwd)/hbase-assembly/src/main/assembly/src.xml" \
-  -Prelease -Dmaven.repo.local=${archivedir}/repository
-
-tgz_mover
 
 # Now do the two builds,  one for hadoop1, then hadoop2
-deploy
-
-echo "DONE"
-echo "Check the content of ${archivedir}.  If good, sign and push to dist.apache.org"
-echo " cd ${archivedir}"
-echo ' for i in *.tar.gz; do echo $i; gpg --print-mds $i > $i.mds ; done'
-echo ' for i in *.tar.gz; do echo $i; gpg --print-md MD5 $i > $i.md5 ; done'
-echo ' for i in *.tar.gz; do echo $i; gpg --print-md SHA512 $i > $i.sha ; done'
+# Run a rat check.
+${mvn} apache-rat:check
+
+# Build src.
+build_src
+
+# Build bin product
+build_bin
+
+# Deploy to mvn repository
+# Depends on build_bin having populated the local repository
+# If the below upload fails, you will probably have to clean the partial
+# upload from repository.apache.org by 'drop'ping it from the staging
+# repository before restart.
+MAVEN_OPTS="${mvnopts}" ${mvn} deploy -DskipTests -Papache-release -Prelease \
+    -Dmaven.repo.local=${output_dir}/repository
+
+# Do sha512 and md5
+cd ${output_dir}
+for i in *.tar.gz; do echo $i; gpg --print-md SHA512 $i > $i.sha ; done
+for i in *.tar.gz; do echo $i; gpg --print-md MD5 $i > $i.md5 ; done
+
+echo "Check the content of ${output_dir}.  If good, sign and push to dist.apache.org"
+echo " cd ${output_dir}"
 echo ' for i in *.tar.gz; do echo $i; gpg --armor --output $i.asc --detach-sig $i  ; done'
-echo ' rsync -av ${archivedir}/*.gz ${archivedir}/*.mds ${archivedir}/*.asc ~/repos/dist-dev/hbase-VERSION/'
+echo ' rsync -av ${output_dir}/*.gz ${output_dir}/*.md5 ${output_dir}/*.sha ${output_dir}/*.asc ${APACHE_HBASE_DIST_DEV_DIR}/${hbase_name}/'
 echo "Check the content deployed to maven.  If good, close the repo and record links of temporary staging repo"
-echo "If all good tag the RC"

http://git-wip-us.apache.org/repos/asf/hbase/blob/1dba475d/hbase-assembly/src/main/assembly/src.xml
----------------------------------------------------------------------
diff --git a/hbase-assembly/src/main/assembly/src.xml b/hbase-assembly/src/main/assembly/src.xml
deleted file mode 100644
index b13967e..0000000
--- a/hbase-assembly/src/main/assembly/src.xml
+++ /dev/null
@@ -1,136 +0,0 @@
-<?xml version="1.0"?>
-<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.1 http://maven.apache.org/xsd/assembly-1.1.1.xsd">
-<!--
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- *     http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
--->
-
-  <!--Copies over all you need to build hbase-->
-  <id>src</id>
-  <formats>
-    <format>tar.gz</format>
-  </formats>
-  <moduleSets>
-    <moduleSet>
-      <!-- Enable access to all projects in the current multimodule build. Eclipse
-        says this is an error, but builds from the command line just fine. -->
-      <useAllReactorProjects>true</useAllReactorProjects>
-      <includes>
-        <include>org.apache.hbase:hbase-annotations</include>
-        <include>org.apache.hbase:hbase-archetypes</include>
-        <include>org.apache.hbase:hbase-assembly</include>
-        <include>org.apache.hbase:hbase-checkstyle</include>
-        <include>org.apache.hbase:hbase-client</include>
-        <include>org.apache.hbase:hbase-common</include>
-        <include>org.apache.hbase:hbase-examples</include>
-        <include>org.apache.hbase:hbase-external-blockcache</include>
-        <include>org.apache.hbase:hbase-hadoop2-compat</include>
-        <include>org.apache.hbase:hbase-hadoop-compat</include>
-        <include>org.apache.hbase:hbase-it</include>
-        <include>org.apache.hbase:hbase-prefix-tree</include>
-        <include>org.apache.hbase:hbase-procedure</include>
-        <include>org.apache.hbase:hbase-protocol</include>
-        <include>org.apache.hbase:hbase-rest</include>
-        <include>org.apache.hbase:hbase-resource-bundle</include>
-        <include>org.apache.hbase:hbase-server</include>
-        <include>org.apache.hbase:hbase-shaded</include>
-        <include>org.apache.hbase:hbase-shell</include>
-        <include>org.apache.hbase:hbase-testing-util</include>
-        <include>org.apache.hbase:hbase-thrift</include>
-      </includes>
-      <!-- Include all the sources in the top directory -->
-      <sources>
-        <excludeSubModuleDirectories>false</excludeSubModuleDirectories>
-        <fileSets>
-          <fileSet>
-            <includes>
-              <include>**</include>
-            </includes>
-            <!--Make sure this excludes is same as the hbase-hadoop2-compat
-                 excludes below-->
-            <excludes>
-              <exclude>target/</exclude>
-              <exclude>test/</exclude>
-              <exclude>.classpath</exclude>
-              <exclude>.project</exclude>
-              <exclude>.settings/</exclude>
-            </excludes>
-          </fileSet>
-        </fileSets>
-      </sources>
-    </moduleSet>
-  </moduleSets>
-  <fileSets>
-    <!--This one is weird.  When we assemble src, it'll be default profile which
-         at the moment is hadoop1.  But we should include the hadoop2 compat module
-         too so can build hadoop2 from src -->
-    <fileSet>
-      <directory>${project.basedir}/../hbase-hadoop2-compat</directory>
-      <outputDirectory>hbase-hadoop2-compat</outputDirectory>
-      <fileMode>0644</fileMode>
-      <directoryMode>0755</directoryMode>
-            <excludes>
-              <exclude>target/</exclude>
-              <exclude>test/</exclude>
-              <exclude>.classpath</exclude>
-              <exclude>.project</exclude>
-              <exclude>.settings/</exclude>
-            </excludes>
-    </fileSet>
-    <!--Include dev tools-->
-    <fileSet>
-      <directory>${project.basedir}/../dev-support</directory>
-      <outputDirectory>dev-support</outputDirectory>
-      <fileMode>0644</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-    <fileSet>
-      <directory>${project.basedir}/../src</directory>
-      <outputDirectory>src</outputDirectory>
-      <fileMode>0644</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-    <!-- Include the top level conf directory -->
-    <fileSet>
-      <directory>${project.basedir}/../conf</directory>
-      <outputDirectory>conf</outputDirectory>
-      <fileMode>0644</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-    <!-- Include top level bin directory -->
-    <fileSet>
-        <directory>${project.basedir}/../bin</directory>
-      <outputDirectory>bin</outputDirectory>
-      <fileMode>0755</fileMode>
-      <directoryMode>0755</directoryMode>
-    </fileSet>
-    <fileSet>
-      <directory>${project.basedir}/..</directory>
-      <outputDirectory>.</outputDirectory>
-      <includes>
-        <include>pom.xml</include>
-        <include>LICENSE.txt</include>
-        <include>NOTICE.txt</include>
-        <include>CHANGES.txt</include>
-        <include>README.txt</include>
-        <include>.pylintrc</include>
-      </includes>
-      <fileMode>0644</fileMode>
-    </fileSet>
-</fileSets>
-</assembly>