Posted to commits@accumulo.apache.org by ct...@apache.org on 2016/01/09 04:38:04 UTC

[01/19] accumulo git commit: ACCUMULO-4103 Add jdk8 profile for findbugs

Repository: accumulo
Updated Branches:
  refs/heads/1.6 05811af38 -> c8c0cf7f9
  refs/heads/1.7 d505843e1 -> 0ccba14f8
  refs/heads/master c252d1a6e -> 8ff2ca81c


ACCUMULO-4103 Add jdk8 profile for findbugs

* Automatically set findbugs.version for jdk8


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/f38d5e7f
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/f38d5e7f
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/f38d5e7f

Branch: refs/heads/1.6
Commit: f38d5e7f69d21eec6d197e2109575bbc60b3eae0
Parents: 05811af
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 14:30:35 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 15:34:07 2016 -0500

----------------------------------------------------------------------
 pom.xml | 9 +++++++++
 1 file changed, 9 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/f38d5e7f/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 833bf44..ea40f31 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1398,5 +1398,14 @@
         <slf4j.version>1.7.5</slf4j.version>
       </properties>
     </profile>
+    <profile>
+      <id>jdk8</id>
+      <activation>
+        <jdk>[1.8,)</jdk>
+      </activation>
+      <properties>
+        <findbugs.version>3.0.1</findbugs.version>
+      </properties>
+    </profile>
   </profiles>
 </project>
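
For reference, Maven activates this profile by matching the java.version
system property against the range [1.8,). A minimal standalone sketch (not
part of the commit, and simplified to classic "1.x.y_z" version strings) of
the equivalent check:

    public class JdkRangeCheck {
      public static void main(String[] args) {
        // Maven's <jdk>[1.8,)</jdk> activation matches against java.version,
        // e.g. "1.7.0_80" (outside the range) or "1.8.0_66" (inside it).
        String[] parts = System.getProperty("java.version").split("[._]");
        int major = Integer.parseInt(parts[0]);
        int minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
        System.out.println("jdk8 profile active: " + (major > 1 || minor >= 8));
      }
    }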


[05/19] accumulo git commit: ACCUMULO-4102 Configure javadoc plugin for jdk8

Posted by ct...@apache.org.
ACCUMULO-4102 Configure javadoc plugin for jdk8


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/4169a12b
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/4169a12b
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/4169a12b

Branch: refs/heads/1.7
Commit: 4169a12b52c6e6744a975eab60f8b29fdcb2f22b
Parents: f38d5e7
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 17:43:57 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 17:43:57 2016 -0500

----------------------------------------------------------------------
 pom.xml | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/4169a12b/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index ea40f31..6138dbc 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1401,11 +1401,28 @@
     <profile>
       <id>jdk8</id>
       <activation>
-        <jdk>[1.8,)</jdk>
+        <jdk>[1.8,1.9)</jdk>
       </activation>
       <properties>
         <findbugs.version>3.0.1</findbugs.version>
       </properties>
+      <build>
+        <pluginManagement>
+          <plugins>
+            <plugin>
+              <groupId>org.apache.maven.plugins</groupId>
+              <artifactId>maven-javadoc-plugin</artifactId>
+              <configuration>
+                <encoding>${project.reporting.outputEncoding}</encoding>
+                <quiet>true</quiet>
+                <javadocVersion>1.8</javadocVersion>
+                <additionalJOption>-J-Xmx512m</additionalJOption>
+                <additionalparam>-Xdoclint:all,-Xdoclint:-missing</additionalparam>
+              </configuration>
+            </plugin>
+          </plugins>
+        </pluginManagement>
+      </build>
     </profile>
   </profiles>
 </project>
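
The added -Xdoclint:all,-Xdoclint:-missing keeps JDK 8's doclint checks for
malformed HTML and bad references while tolerating missing comments. Doclint
rejects self-closing tags like <br /> and unescaped '>' in javadoc, which is
exactly what the javadoc fixes in the merge commit below address. An
illustrative sketch (not from the repository) of doclint-clean javadoc:

    public class DoclintClean {
      /**
       * Returns the larger argument.<br>
       * If a &gt; b the result is a; otherwise it is b.
       */
      public static int max(int a, int b) {
        return a > b ? a : b;
      }
    }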


[07/19] accumulo git commit: ACCUMULO-4102 Bump maven-plugin-plugin to 3.4

Posted by ct...@apache.org.
ACCUMULO-4102 Bump maven-plugin-plugin to 3.4

* Bump maven-plugin-plugin so the generated HelpMojo doesn't have
  javadoc problems (especially on JDK8)


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/7cc81374
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/7cc81374
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/7cc81374

Branch: refs/heads/1.6
Commit: 7cc81374233b0f8ba3a243f6084eecce9d6a1e6f
Parents: 4169a12
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 20:45:03 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 20:45:49 2016 -0500

----------------------------------------------------------------------
 pom.xml | 5 +++++
 1 file changed, 5 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/7cc81374/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 6138dbc..f04aa53 100644
--- a/pom.xml
+++ b/pom.xml
@@ -904,6 +904,11 @@
             </execution>
           </executions>
         </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-plugin-plugin</artifactId>
+          <version>3.4</version>
+        </plugin>
       </plugins>
     </pluginManagement>
     <plugins>


[14/19] accumulo git commit: Merge branch 'javadoc-jdk8-1.6' into javadoc-jdk8-1.7

Posted by ct...@apache.org.
Merge branch 'javadoc-jdk8-1.6' into javadoc-jdk8-1.7

* Merge to 1.7 branch, with additional javadoc fixes so the build works
* Prevent merging the maven-plugin-plugin version 3.4 specification (as it only applied to the 1.6 branch)


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/6becfbd3
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/6becfbd3
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/6becfbd3

Branch: refs/heads/1.7
Commit: 6becfbd3852dc10f46658827d064f7d1e9ee6c45
Parents: d505843 c8c0cf7
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 22:04:57 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 22:04:57 2016 -0500

----------------------------------------------------------------------
 .../core/bloomfilter/DynamicBloomFilter.java    |  4 +--
 .../accumulo/core/client/BatchWriterConfig.java | 10 +++---
 .../core/client/ConditionalWriterConfig.java    |  4 +--
 .../client/mapred/AccumuloFileOutputFormat.java |  4 +--
 .../mapreduce/AccumuloFileOutputFormat.java     |  4 +--
 .../lib/impl/FileOutputConfigurator.java        |  4 +--
 .../lib/util/FileOutputConfigurator.java        |  4 +--
 .../security/tokens/AuthenticationToken.java    |  2 +-
 .../core/constraints/VisibilityConstraint.java  |  1 -
 .../java/org/apache/accumulo/core/data/Key.java |  2 +-
 .../org/apache/accumulo/core/data/Range.java    |  6 ++--
 .../file/blockfile/cache/CachedBlockQueue.java  |  2 +-
 .../core/file/blockfile/cache/ClassSize.java    |  4 +--
 .../accumulo/core/file/rfile/bcfile/Utils.java  | 35 +++++++++++---------
 .../user/WholeColumnFamilyIterator.java         |  4 +--
 .../core/metadata/ServicerForMetadataTable.java |  2 +-
 .../core/metadata/ServicerForRootTable.java     |  2 +-
 .../core/metadata/ServicerForUserTables.java    |  2 +-
 .../core/metadata/schema/MetadataSchema.java    |  2 +-
 .../core/replication/ReplicationSchema.java     |  6 ++--
 .../core/security/ColumnVisibility.java         |  8 ++---
 .../security/crypto/CryptoModuleParameters.java |  7 +---
 .../accumulo/core/conf/config-header.html       | 12 +++----
 .../examples/simple/filedata/ChunkCombiner.java | 18 +++++-----
 pom.xml                                         | 26 +++++++++++++++
 .../apache/accumulo/server/ServerConstants.java |  2 +-
 .../server/master/balancer/GroupBalancer.java   |  4 +--
 .../master/balancer/RegexGroupBalancer.java     |  6 ++--
 .../server/security/SecurityOperation.java      |  6 ++--
 .../server/security/UserImpersonation.java      |  2 +-
 .../server/security/SystemCredentialsTest.java  |  2 +-
 .../replication/SequentialWorkAssigner.java     |  2 +-
 .../monitor/servlets/DefaultServlet.java        |  2 +-
 .../monitor/servlets/ReplicationServlet.java    |  2 +-
 .../monitor/servlets/TablesServlet.java         |  4 +--
 .../tserver/compaction/CompactionStrategy.java  |  6 ++--
 .../test/replication/merkle/package-info.java   |  9 ++---
 .../replication/merkle/skvi/DigestIterator.java |  2 +-
 38 files changed, 124 insertions(+), 100 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
index 6ceefad,320ecf4..3421f76
--- a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
@@@ -49,10 -48,8 +49,10 @@@ public class BatchWriterConfig implemen
    private static final Integer DEFAULT_MAX_WRITE_THREADS = 3;
    private Integer maxWriteThreads = null;
  
 +  private Durability durability = Durability.DEFAULT;
 +
    /**
-    * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br />
+    * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br>
     * If set to a value smaller than a single mutation, then it will {@link BatchWriter#flush()} after each added mutation. Must be non-negative.
     *
     * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
index 91bc22f,0000000..648d044
mode 100644,000000..100644
--- a/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
+++ b/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
@@@ -1,93 -1,0 +1,92 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.core.constraints;
 +
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +
 +import java.util.Collections;
 +import java.util.HashSet;
 +import java.util.List;
 +
 +import org.apache.accumulo.core.data.ColumnUpdate;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.security.ColumnVisibility;
 +import org.apache.accumulo.core.security.VisibilityEvaluator;
 +import org.apache.accumulo.core.security.VisibilityParseException;
 +import org.apache.accumulo.core.util.BadArgumentException;
 +
 +/**
 + * A constraint that checks the visibility of columns against the actor's authorizations. Violation codes:
-  * <p>
 + * <ul>
 + * <li>1 = failure to parse visibility expression</li>
 + * <li>2 = insufficient authorization</li>
 + * </ul>
 + */
 +public class VisibilityConstraint implements Constraint {
 +
 +  @Override
 +  public String getViolationDescription(short violationCode) {
 +    switch (violationCode) {
 +      case 1:
 +        return "Malformed column visibility";
 +      case 2:
 +        return "User does not have authorization on column visibility";
 +    }
 +
 +    return null;
 +  }
 +
 +  @Override
 +  public List<Short> check(Environment env, Mutation mutation) {
 +    List<ColumnUpdate> updates = mutation.getUpdates();
 +
 +    HashSet<String> ok = null;
 +    if (updates.size() > 1)
 +      ok = new HashSet<String>();
 +
 +    VisibilityEvaluator ve = null;
 +
 +    for (ColumnUpdate update : updates) {
 +
 +      byte[] cv = update.getColumnVisibility();
 +      if (cv.length > 0) {
 +        String key = null;
 +        if (ok != null && ok.contains(key = new String(cv, UTF_8)))
 +          continue;
 +
 +        try {
 +
 +          if (ve == null)
 +            ve = new VisibilityEvaluator(env.getAuthorizationsContainer());
 +
 +          if (!ve.evaluate(new ColumnVisibility(cv)))
 +            return Collections.singletonList(Short.valueOf((short) 2));
 +
 +        } catch (BadArgumentException bae) {
 +          return Collections.singletonList(new Short((short) 1));
 +        } catch (VisibilityParseException e) {
 +          return Collections.singletonList(new Short((short) 1));
 +        }
 +
 +        if (ok != null)
 +          ok.add(key);
 +      }
 +    }
 +
 +    return null;
 +  }
 +}
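
A usage sketch (not part of the commit; the table name "mytable" and the
Connector parameter are assumptions) showing how this constraint is attached
to a table through the public client API:

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.constraints.VisibilityConstraint;

    public class EnableVisibilityConstraint {
      static void enable(Connector conn) throws Exception {
        // After this call, mutations whose column visibility fails to parse
        // violate code 1, and those beyond the writer's authorizations code 2.
        int id = conn.tableOperations().addConstraint("mytable",
            VisibilityConstraint.class.getName());
        System.out.println("constraint registered under id " + id);
      }
    }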

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/data/Key.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/data/Key.java
index f88ddaa,f605c98..758436d
--- a/core/src/main/java/org/apache/accumulo/core/data/Key.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Key.java
@@@ -786,23 -660,6 +786,23 @@@ public class Key implements WritableCom
      return appendPrintableString(ba, offset, len, maxLen, new StringBuilder()).toString();
    }
  
 +  /**
 +   * Appends ASCII printable characters to a string, based on the given byte array, treating the bytes as ASCII characters. If a byte can be converted to a
-    * ASCII printable character it is appended as is; otherwise, it is appended as a character code, e.g., %05; for byte value 5. If len > maxlen, the string
++   * ASCII printable character it is appended as is; otherwise, it is appended as a character code, e.g., %05; for byte value 5. If len &gt; maxlen, the string
 +   * includes a "TRUNCATED" note at the end.
 +   *
 +   * @param ba
 +   *          byte array
 +   * @param offset
 +   *          offset to start with in byte array (inclusive)
 +   * @param len
 +   *          number of bytes to print
 +   * @param maxLen
 +   *          maximum number of bytes to convert to printable form
 +   * @param sb
 +   *          <code>StringBuilder</code> to append to
 +   * @return given <code>StringBuilder</code>
 +   */
    public static StringBuilder appendPrintableString(byte ba[], int offset, int len, int maxLen, StringBuilder sb) {
      int plen = Math.min(len, maxLen);
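
A small sketch of the behavior documented above (the expected output is
inferred from the javadoc, not a captured run):

    import org.apache.accumulo.core.data.Key;

    public class PrintableDemo {
      public static void main(String[] args) {
        byte[] ba = new byte[] {5, 'A', 'B'};
        // 'A' and 'B' are printable ASCII and pass through; byte 5 is not,
        // so per the javadoc it is rendered as the character code %05;
        System.out.println(Key.appendPrintableString(ba, 0, ba.length, ba.length, new StringBuilder()));
        // expected: %05;AB
      }
    }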
  

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/data/Range.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/data/Range.java
index 0fcfee6,7ccfe3d..c114e2b
--- a/core/src/main/java/org/apache/accumulo/core/data/Range.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Range.java
@@@ -555,17 -506,14 +555,17 @@@ public class Range implements WritableC
    }
  
    /**
-    * Creates a new range that is bounded by the columns passed in. The start key in the returned range will have a column >= to the minimum column. The end key
-    * in the returned range will have a column <= the max column.
 -   * Creates a new range that is bounded by the columns passed in. The stary key in the returned range will have a column &gt;= to the minimum column. The end
++   * Creates a new range that is bounded by the columns passed in. The start key in the returned range will have a column &gt;= to the minimum column. The end
+    * key in the returned range will have a column &lt;= the max column.
     *
 +   * @param min
 +   *          minimum column
 +   * @param max
 +   *          maximum column
     * @return a column bounded range
     * @throws IllegalArgumentException
 -   *           if min &gt; max
 +   *           if the minimum column compares greater than the maximum column
     */
 -
    public Range bound(Column min, Column max) {
  
      if (min.compareTo(max) > 0) {
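
A usage sketch of the contract documented above (the row and column family
names are illustrative):

    import static java.nio.charset.StandardCharsets.UTF_8;

    import org.apache.accumulo.core.data.Column;
    import org.apache.accumulo.core.data.Range;

    public class BoundDemo {
      public static void main(String[] args) {
        Column min = new Column("cf1".getBytes(UTF_8), new byte[0], new byte[0]);
        Column max = new Column("cf2".getBytes(UTF_8), new byte[0], new byte[0]);
        // Clamp the range to columns within ["cf1", "cf2"]; swapping min and
        // max would throw IllegalArgumentException per the javadoc above.
        System.out.println(new Range("r1", "r2").bound(min, max));
      }
    }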

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
index 525e2a2,af48770..5a96c20
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
@@@ -16,10 -16,11 +16,10 @@@
   */
  package org.apache.accumulo.core.metadata;
  
 -import org.apache.accumulo.core.client.Instance;
 -import org.apache.accumulo.core.security.Credentials;
 +import org.apache.accumulo.core.client.impl.ClientContext;
  
  /**
-  * A metadata servicer for the metadata table (which holds metadata for user tables).<br />
+  * A metadata servicer for the metadata table (which holds metadata for user tables).<br>
   * The metadata table's metadata is serviced in the root table.
   */
  class ServicerForMetadataTable extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
index 73a943d,b279d01..32b5824
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
@@@ -22,11 -22,11 +22,11 @@@ import org.apache.accumulo.core.client.
  import org.apache.accumulo.core.client.AccumuloSecurityException;
  import org.apache.accumulo.core.client.Instance;
  import org.apache.accumulo.core.client.TableNotFoundException;
 -import org.apache.accumulo.core.data.KeyExtent;
 -import org.apache.accumulo.core.security.Credentials;
 +import org.apache.accumulo.core.client.impl.ClientContext;
 +import org.apache.accumulo.core.data.impl.KeyExtent;
  
  /**
-  * A metadata servicer for the root table.<br />
+  * A metadata servicer for the root table.<br>
   * The root table's metadata is serviced in zookeeper.
   */
  class ServicerForRootTable extends MetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
index 5efa8a6,607dfbd..73f9188
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
@@@ -16,10 -16,11 +16,10 @@@
   */
  package org.apache.accumulo.core.metadata;
  
 -import org.apache.accumulo.core.client.Instance;
 -import org.apache.accumulo.core.security.Credentials;
 +import org.apache.accumulo.core.client.impl.ClientContext;
  
  /**
-  * A metadata servicer for user tables.<br />
+  * A metadata servicer for user tables.<br>
   * Metadata for user tables are serviced in the metadata table.
   */
  class ServicerForUserTables extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
index 6baae17,f20fce1..3970c49
--- a/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
@@@ -227,55 -233,4 +227,55 @@@ public class MetadataSchema 
  
    }
  
 +  /**
 +   * Holds references to files that need replication
 +   * <p>
-    * <code>~replhdfs://localhost:8020/accumulo/wal/tserver+port/WAL stat:local_table_id [] -> protobuf</code>
++   * <code>~replhdfs://localhost:8020/accumulo/wal/tserver+port/WAL stat:local_table_id [] -&gt; protobuf</code>
 +   */
 +  public static class ReplicationSection {
 +    public static final Text COLF = new Text("stat");
 +    private static final ArrayByteSequence COLF_BYTE_SEQ = new ArrayByteSequence(COLF.toString());
 +    private static final Section section = new Section(RESERVED_PREFIX + "repl", true, RESERVED_PREFIX + "repm", false);
 +
 +    public static Range getRange() {
 +      return section.getRange();
 +    }
 +
 +    public static String getRowPrefix() {
 +      return section.getRowPrefix();
 +    }
 +
 +    /**
 +     * Extract the table ID from the colfam into the given {@link Text}
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @param buff
 +     *          Text to place table ID into
 +     */
 +    public static void getTableId(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +
 +      k.getColumnQualifier(buff);
 +    }
 +
 +    /**
 +     * Extract the file name from the row suffix into the given {@link Text}
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @param buff
 +     *          Text to place file name into
 +     */
 +    public static void getFile(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +      Preconditions.checkArgument(COLF_BYTE_SEQ.equals(k.getColumnFamilyData()), "Given metadata replication status key with incorrect colfam");
 +
 +      k.getRow(buff);
 +
 +      buff.set(buff.getBytes(), section.getRowPrefix().length(), buff.getLength() - section.getRowPrefix().length());
 +    }
 +  }
  }
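
A sketch (not part of the commit; the WAL path and table id are illustrative)
of reading the row layout documented above back out of a Key:

    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.metadata.schema.MetadataSchema.ReplicationSection;
    import org.apache.hadoop.io.Text;

    public class ReplicationSectionDemo {
      public static void main(String[] args) {
        Key k = new Key(ReplicationSection.getRowPrefix()
            + "hdfs://localhost:8020/accumulo/wal/tserver+port/WAL", "stat", "2");
        Text file = new Text();
        ReplicationSection.getFile(k, file); // strips the reserved row prefix
        Text tableId = new Text();
        ReplicationSection.getTableId(k, tableId); // reads the colqual
        System.out.println(file + " -> table " + tableId);
      }
    }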

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
index ed46130,0000000..b352957
mode 100644,000000..100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
@@@ -1,299 -1,0 +1,299 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.core.replication;
 +
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +
 +import java.nio.charset.CharacterCodingException;
 +
 +import org.apache.accumulo.core.client.ScannerBase;
 +import org.apache.accumulo.core.client.lexicoder.ULongLexicoder;
 +import org.apache.accumulo.core.data.ArrayByteSequence;
 +import org.apache.accumulo.core.data.ByteSequence;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.hadoop.fs.Path;
 +import org.apache.hadoop.io.Text;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.google.common.base.Preconditions;
 +
 +/**
 + *
 + */
 +public class ReplicationSchema {
 +  private static final Logger log = LoggerFactory.getLogger(ReplicationSchema.class);
 +
 +  /**
 +   * Portion of a file that must be replicated to the given target: peer and some identifying location on that peer, e.g. remote table ID
 +   * <p>
-    * <code>hdfs://localhost:8020/accumulo/wal/tserver+port/WAL work:serialized_ReplicationTarget [] -> Status Protobuf</code>
++   * <code>hdfs://localhost:8020/accumulo/wal/tserver+port/WAL work:serialized_ReplicationTarget [] -&gt; Status Protobuf</code>
 +   */
 +  public static class WorkSection {
 +    public static final Text NAME = new Text("work");
 +    private static final ByteSequence BYTE_SEQ_NAME = new ArrayByteSequence("work");
 +
 +    public static void getFile(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +      Preconditions.checkArgument(BYTE_SEQ_NAME.equals(k.getColumnFamilyData()), "Given replication work key with incorrect colfam");
 +      _getFile(k, buff);
 +    }
 +
 +    public static ReplicationTarget getTarget(Key k) {
 +      return getTarget(k, new Text());
 +    }
 +
 +    public static ReplicationTarget getTarget(Key k, Text buff) {
 +      Preconditions.checkArgument(BYTE_SEQ_NAME.equals(k.getColumnFamilyData()), "Given replication work key with incorrect colfam");
 +      k.getColumnQualifier(buff);
 +
 +      return ReplicationTarget.from(buff);
 +    }
 +
 +    /**
 +     * Limit the scanner to only pull replication work records
 +     */
 +    public static void limit(ScannerBase scanner) {
 +      scanner.fetchColumnFamily(NAME);
 +    }
 +
 +    public static Mutation add(Mutation m, Text serializedTarget, Value v) {
 +      m.put(NAME, serializedTarget, v);
 +      return m;
 +    }
 +  }
 +
 +  /**
 +   * Holds replication markers tracking status for files
 +   * <p>
-    * <code>hdfs://localhost:8020/accumulo/wal/tserver+port/WAL repl:local_table_id [] -> Status Protobuf</code>
++   * <code>hdfs://localhost:8020/accumulo/wal/tserver+port/WAL repl:local_table_id [] -&gt; Status Protobuf</code>
 +   */
 +  public static class StatusSection {
 +    public static final Text NAME = new Text("repl");
 +    private static final ByteSequence BYTE_SEQ_NAME = new ArrayByteSequence("repl");
 +
 +    /**
 +     * Extract the table ID from the key (inefficiently if called repeatedly)
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @return The table ID
 +     * @see #getTableId(Key,Text)
 +     */
 +    public static String getTableId(Key k) {
 +      Text buff = new Text();
 +      getTableId(k, buff);
 +      return buff.toString();
 +    }
 +
 +    /**
 +     * Extract the table ID from the key into the given {@link Text}
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @param buff
 +     *          Text to place table ID into
 +     */
 +    public static void getTableId(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +
 +      k.getColumnQualifier(buff);
 +    }
 +
 +    /**
 +     * Extract the file name from the row suffix into the given {@link Text}
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @param buff
 +     *          Text to place file name into
 +     */
 +    public static void getFile(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +      Preconditions.checkArgument(BYTE_SEQ_NAME.equals(k.getColumnFamilyData()), "Given replication status key with incorrect colfam");
 +
 +      _getFile(k, buff);
 +    }
 +
 +    /**
 +     * Limit the scanner to only return Status records
 +     */
 +    public static void limit(ScannerBase scanner) {
 +      scanner.fetchColumnFamily(NAME);
 +    }
 +
 +    public static Mutation add(Mutation m, Text tableId, Value v) {
 +      m.put(NAME, tableId, v);
 +      return m;
 +    }
 +  }
 +
 +  /**
 +   * Holds the order in which files needed for replication were closed. The intent is to be able to guarantee that files which were closed earlier were
 +   * replicated first and we don't replay data in the wrong order on our peers
 +   * <p>
-    * <code>encodedTimeOfClosure\x00hdfs://localhost:8020/accumulo/wal/tserver+port/WAL order:source_table_id [] -> Status Protobuf</code>
++   * <code>encodedTimeOfClosure\x00hdfs://localhost:8020/accumulo/wal/tserver+port/WAL order:source_table_id [] -&gt; Status Protobuf</code>
 +   */
 +  public static class OrderSection {
 +    public static final Text NAME = new Text("order");
 +    public static final Text ROW_SEPARATOR = new Text(new byte[] {0});
 +    private static final ULongLexicoder longEncoder = new ULongLexicoder();
 +
 +    /**
 +     * Extract the table ID from the given key (inefficiently if called repeatedly)
 +     *
 +     * @param k
 +     *          OrderSection Key
 +     * @return source table id
 +     */
 +    public static String getTableId(Key k) {
 +      Text buff = new Text();
 +      getTableId(k, buff);
 +      return buff.toString();
 +    }
 +
 +    /**
 +     * Extract the table ID from the given key
 +     *
 +     * @param k
 +     *          OrderSection key
 +     * @param buff
 +     *          Text to place table ID into
 +     */
 +    public static void getTableId(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +
 +      k.getColumnQualifier(buff);
 +    }
 +
 +    /**
 +     * Limit the scanner to only return Order records
 +     */
 +    public static void limit(ScannerBase scanner) {
 +      scanner.fetchColumnFamily(NAME);
 +    }
 +
 +    /**
 +     * Creates the Mutation for the Order section for the given file and time
 +     *
 +     * @param file
 +     *          Filename
 +     * @param timeInMillis
 +     *          Time in millis that the file was closed
 +     * @return Mutation for the Order section
 +     */
 +    public static Mutation createMutation(String file, long timeInMillis) {
 +      Preconditions.checkNotNull(file);
 +      Preconditions.checkArgument(timeInMillis >= 0, "timeInMillis must be greater than zero");
 +
 +      // Encode the time so it sorts properly
 +      byte[] rowPrefix = longEncoder.encode(timeInMillis);
 +      Text row = new Text(rowPrefix);
 +
 +      // Normalize the file using Path
 +      Path p = new Path(file);
 +      String pathString = p.toUri().toString();
 +
 +      log.trace("Normalized {} into {}", file, pathString);
 +
 +      // Append the file as a suffix to the row
 +      row.append((ROW_SEPARATOR + pathString).getBytes(UTF_8), 0, pathString.length() + ROW_SEPARATOR.getLength());
 +
 +      // Make the mutation and add the column update
 +      return new Mutation(row);
 +    }
 +
 +    /**
 +     * Add a column update to the given mutation with the provided tableId and value
 +     *
 +     * @param m
 +     *          Mutation for OrderSection
 +     * @param tableId
 +     *          Source table id
 +     * @param v
 +     *          Serialized Status msg
 +     * @return The original Mutation
 +     */
 +    public static Mutation add(Mutation m, Text tableId, Value v) {
 +      m.put(NAME, tableId, v);
 +      return m;
 +    }
 +
 +    public static long getTimeClosed(Key k) {
 +      return getTimeClosed(k, new Text());
 +    }
 +
 +    public static long getTimeClosed(Key k, Text buff) {
 +      k.getRow(buff);
 +      int offset = 0;
 +      // find the last offset
 +      while (true) {
 +        int nextOffset = buff.find(ROW_SEPARATOR.toString(), offset + 1);
 +        if (-1 == nextOffset) {
 +          break;
 +        }
 +        offset = nextOffset;
 +      }
 +
 +      if (-1 == offset) {
 +        throw new IllegalArgumentException("Row does not contain expected separator for OrderSection");
 +      }
 +
 +      byte[] encodedLong = new byte[offset];
 +      System.arraycopy(buff.getBytes(), 0, encodedLong, 0, offset);
 +      return longEncoder.decode(encodedLong);
 +    }
 +
 +    public static String getFile(Key k) {
 +      Text buff = new Text();
 +      return getFile(k, buff);
 +    }
 +
 +    public static String getFile(Key k, Text buff) {
 +      k.getRow(buff);
 +      int offset = 0;
 +      // find the last offset
 +      while (true) {
 +        int nextOffset = buff.find(ROW_SEPARATOR.toString(), offset + 1);
 +        if (-1 == nextOffset) {
 +          break;
 +        }
 +        offset = nextOffset;
 +      }
 +
 +      if (-1 == offset) {
 +        throw new IllegalArgumentException("Row does not contain expected separator for OrderSection");
 +      }
 +
 +      try {
 +        return Text.decode(buff.getBytes(), offset + 1, buff.getLength() - (offset + 1));
 +      } catch (CharacterCodingException e) {
 +        throw new IllegalArgumentException("Could not decode file path", e);
 +      }
 +    }
 +  }
 +
 +  private static void _getFile(Key k, Text buff) {
 +    k.getRow(buff);
 +  }
 +}
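
A round-trip sketch (not part of the commit; the WAL path, table id, and
timestamp are illustrative) of the OrderSection row format described above:

    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Mutation;
    import org.apache.accumulo.core.replication.ReplicationSchema.OrderSection;
    import org.apache.hadoop.io.Text;

    public class OrderSectionDemo {
      public static void main(String[] args) {
        // Row is encodedTimeOfClosure \x00 WAL-path, so rows sort by close time.
        Mutation m = OrderSection.createMutation(
            "hdfs://localhost:8020/accumulo/wal/tserver+port/WAL", 1452297600000L);
        Key k = new Key(new Text(m.getRow()), OrderSection.NAME, new Text("2"));
        // Decode both halves of the row back out.
        System.out.println(OrderSection.getTimeClosed(k) + " "
            + OrderSection.getFile(k));
      }
    }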

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/resources/org/apache/accumulo/core/conf/config-header.html
----------------------------------------------------------------------
diff --cc core/src/main/resources/org/apache/accumulo/core/conf/config-header.html
index 9c770b1,8270ad2..49291fc
--- a/core/src/main/resources/org/apache/accumulo/core/conf/config-header.html
+++ b/core/src/main/resources/org/apache/accumulo/core/conf/config-header.html
@@@ -28,23 -28,23 +28,23 @@@
    below (from highest to lowest):</p>
    <table>
     <tr><th>Location</th><th>Description</th></tr>
--   <tr class='highlight'><td><b>Zookeeper<br/>table properties</b></td>
--       <td>Table properties are applied to the entire cluster when set in zookeeper using the accumulo API or shell.  While table properties take precedent over system properties, both will override properties set in accumulo-site.xml<br/><br/>
++   <tr class='highlight'><td><b>Zookeeper<br />table properties</b></td>
++       <td>Table properties are applied to the entire cluster when set in zookeeper using the accumulo API or shell.  While table properties take precedent over system properties, both will override properties set in accumulo-site.xml<br /><br />
             Table properties consist of all properties with the table.* prefix.  Table properties are configured on a per-table basis using the following shell commmand:
          <pre>config -t TABLE -s PROPERTY=VALUE</pre></td>
     </tr>
--   <tr><td><b>Zookeeper<br/>system properties</b></td>
++   <tr><td><b>Zookeeper<br />system properties</b></td>
        <td>System properties are applied to the entire cluster when set in zookeeper using the accumulo API or shell.  System properties consist of all properties with a 'yes' in the 'Zookeeper Mutable' column in the table below.  They are set with the following shell command:
          <pre>config -s PROPERTY=VALUE</pre>
--      If a table.* property is set using this method, the value will apply to all tables except those configured on per-table basis (which have higher precedence).<br/><br/>
++      If a table.* property is set using this method, the value will apply to all tables except those configured on per-table basis (which have higher precedence).<br /><br />
        While most system properties take effect immediately, some require a restart of the process which is indicated in 'Zookeeper Mutable'.</td>
     </tr>
     <tr class='highlight'><td><b>accumulo-site.xml</b></td>
--       <td>Accumulo processes (master, tserver, etc) read their local accumulo-site.xml on start up.  Therefore, changes made to accumulo-site.xml must rsynced across the cluster and processes must be restarted to apply changes.<br/><br/>
++       <td>Accumulo processes (master, tserver, etc) read their local accumulo-site.xml on start up.  Therefore, changes made to accumulo-site.xml must rsynced across the cluster and processes must be restarted to apply changes.<br /><br />
             Certain properties (indicated by a 'no' in 'Zookeeper Mutable') cannot be set in zookeeper and only set in this file.  The accumulo-site.xml also allows you to configure tablet servers with different settings.</td>
     </tr>
     <tr><td><b>Default</b></td>
--        <td>All properties have a default value in the source code.  This value has the lowest precedence and is overriden if set in accumulo-site.xml or zookeeper.<br/><br/>While the default value is usually optimal, there are cases where a change can increase query and ingest performance.</td>
++        <td>All properties have a default value in the source code.  This value has the lowest precedence and is overriden if set in accumulo-site.xml or zookeeper.<br /><br />While the default value is usually optimal, there are cases where a change can increase query and ingest performance.</td>
     </tr>
    </table>
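
For reference, a sketch of the equivalent client API calls for the shell
commands shown above ("conn" is an assumed, already-authenticated Connector;
the table name and property values are illustrative):

    import org.apache.accumulo.core.client.Connector;

    public class SetPropertyDemo {
      static void configure(Connector conn) throws Exception {
        // System-wide property stored in zookeeper ("config -s PROPERTY=VALUE");
        // a table.* property set here applies to all tables without an override.
        conn.instanceOperations().setProperty("table.file.max", "20");
        // Per-table override, higher precedence ("config -t TABLE -s PROPERTY=VALUE").
        conn.tableOperations().setProperty("mytable", "table.file.max", "30");
      }
    }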
  

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/pom.xml
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
----------------------------------------------------------------------
diff --cc server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
index fb4e0d9,0000000..9734528
mode 100644,000000..100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
@@@ -1,788 -1,0 +1,788 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.accumulo.server.master.balancer;
 +
 +import java.util.ArrayList;
 +import java.util.Collection;
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.Iterator;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Objects;
 +import java.util.Set;
 +import java.util.SortedMap;
 +
 +import org.apache.accumulo.core.client.IsolatedScanner;
 +import org.apache.accumulo.core.client.RowIterator;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.data.impl.KeyExtent;
 +import org.apache.accumulo.core.master.thrift.TabletServerStatus;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.util.ComparablePair;
 +import org.apache.accumulo.core.util.MapCounter;
 +import org.apache.accumulo.core.util.Pair;
 +import org.apache.accumulo.server.master.state.TServerInstance;
 +import org.apache.accumulo.server.master.state.TabletMigration;
 +import org.apache.commons.lang.mutable.MutableInt;
 +import org.apache.hadoop.io.Text;
 +
 +import com.google.common.base.Function;
 +import com.google.common.base.Preconditions;
 +import com.google.common.collect.HashBasedTable;
 +import com.google.common.collect.HashMultimap;
 +import com.google.common.collect.Iterators;
 +import com.google.common.collect.Multimap;
 +import com.google.common.collect.Table;
 +
 +/**
 + * A balancer that evenly spreads groups of tablets across all tablet servers. This balancer accomplishes the following two goals:
 + *
 + * <ul>
-  * <li/>Evenly spreads each group across all tservers.
-  * <li/>Minimizes the total number of groups on each tserver.
++ * <li>Evenly spreads each group across all tservers.
++ * <li>Minimizes the total number of groups on each tserver.
 + * </ul>
 + *
 + * <p>
 + * To use this balancer you must extend it and implement {@link #getPartitioner()}. See {@link RegexGroupBalancer} as an example.
 + */
 +
 +public abstract class GroupBalancer extends TabletBalancer {
 +
 +  private final String tableId;
 +  private final Text textTableId;
 +  private long lastRun = 0;
 +
 +  /**
 +   * @return A function that groups tablets into named groups.
 +   */
 +  protected abstract Function<KeyExtent,String> getPartitioner();
 +
 +  public GroupBalancer(String tableId) {
 +    this.tableId = tableId;
 +    this.textTableId = new Text(tableId);
 +  }
 +
 +  protected Iterable<Pair<KeyExtent,Location>> getLocationProvider() {
 +    return new MetadataLocationProvider();
 +  }
 +
 +  /**
 +   * The amount of time to wait between balancing.
 +   */
 +  protected long getWaitTime() {
 +    return 60000;
 +  }
 +
 +  /**
 +   * The maximum number of migrations to perform in a single pass.
 +   */
 +  protected int getMaxMigrations() {
 +    return 1000;
 +  }
 +
 +  /**
 +   * @return Examine current tserver and migrations and return true if balancing should occur.
 +   */
 +  protected boolean shouldBalance(SortedMap<TServerInstance,TabletServerStatus> current, Set<KeyExtent> migrations) {
 +
 +    if (current.size() < 2) {
 +      return false;
 +    }
 +
 +    for (KeyExtent keyExtent : migrations) {
 +      if (keyExtent.getTableId().equals(textTableId)) {
 +        return false;
 +      }
 +    }
 +
 +    return true;
 +  }
 +
 +  @Override
 +  public void getAssignments(SortedMap<TServerInstance,TabletServerStatus> current, Map<KeyExtent,TServerInstance> unassigned,
 +      Map<KeyExtent,TServerInstance> assignments) {
 +
 +    if (current.size() == 0) {
 +      return;
 +    }
 +
 +    Function<KeyExtent,String> partitioner = getPartitioner();
 +
 +    List<ComparablePair<String,KeyExtent>> tabletsByGroup = new ArrayList<>();
 +    for (Entry<KeyExtent,TServerInstance> entry : unassigned.entrySet()) {
 +      TServerInstance last = entry.getValue();
 +      if (last != null) {
 +        // Maintain locality
 +        String fakeSessionID = " ";
 +        TServerInstance simple = new TServerInstance(last.getLocation(), fakeSessionID);
 +        Iterator<TServerInstance> find = current.tailMap(simple).keySet().iterator();
 +        if (find.hasNext()) {
 +          TServerInstance tserver = find.next();
 +          if (tserver.host().equals(last.host())) {
 +            assignments.put(entry.getKey(), tserver);
 +            continue;
 +          }
 +        }
 +      }
 +
 +      tabletsByGroup.add(new ComparablePair<String,KeyExtent>(partitioner.apply(entry.getKey()), entry.getKey()));
 +    }
 +
 +    Collections.sort(tabletsByGroup);
 +
 +    Iterator<TServerInstance> tserverIter = Iterators.cycle(current.keySet());
 +    for (ComparablePair<String,KeyExtent> pair : tabletsByGroup) {
 +      KeyExtent ke = pair.getSecond();
 +      assignments.put(ke, tserverIter.next());
 +    }
 +
 +  }
 +
 +  @Override
 +  public long balance(SortedMap<TServerInstance,TabletServerStatus> current, Set<KeyExtent> migrations, List<TabletMigration> migrationsOut) {
 +
 +    // The terminology extra and expected are used in this code. Expected tablets is the number of tablets a tserver must have for a given group and is
 +    // numInGroup/numTservers. Extra tablets are any tablets more than the number expected for a given group. If numInGroup % numTservers > 0, then a tserver
 +    // may have one extra tablet for a group.
 +    //
 +    // Assume we have 4 tservers and group A has 11 tablets.
 +    // * expected tablets : group A is expected to have 2 tablets on each tserver
 +    // * extra tablets : group A may have an additional tablet on each tserver. Group A has a total of 3 extra tablets.
 +    //
 +    // This balancer also evens out the extra tablets across all groups. The terminology extraExpected and extraExtra is used to describe these tablets.
 +    // ExtraExpected is totalExtra/numTservers. ExtraExtra is totalExtra%numTservers. Each tserver should have at least expectedExtra extra tablets and at most
 +    // one extraExtra tablets. All extra tablets on a tserver must be from different groups.
 +    //
 +    // Assume we have 6 tservers and three groups (G1, G2, G3) with 9 tablets each. Each tserver is expected to have one tablet from each group and could
 +    // possibly have 2 tablets from a group. Below is an illustration of an ideal balancing of extra tablets. To understand the illustration, the first column
 +    // shows tserver T1 with 2 tablets from G1, 1 tablet from G2, and two tablets from G3. EE means empty, placed there so Eclipse formatting would not mess up
 +    // the table.
 +    //
 +    // T1 | T2 | T3 | T4 | T5 | T6
 +    // ---+----+----+----+----+-----
 +    // G3 | G2 | G3 | EE | EE | EE <-- extra extra tablets
 +    // G1 | G1 | G1 | G2 | G3 | G2 <-- extra expected tablets.
 +    // G1 | G1 | G1 | G1 | G1 | G1 <-- expected tablets for group 1
 +    // G2 | G2 | G2 | G2 | G2 | G2 <-- expected tablets for group 2
 +    // G3 | G3 | G3 | G3 | G3 | G3 <-- expected tablets for group 3
 +    //
 +    // Do not want to balance the extra tablets like the following. There are two problems with this. First, extra tablets are not evenly spread. Since there are
 +    // a total of 9 extra tablets, every tserver is expected to have at least one extra tablet. Second, tserver T1 has two extra tablets for group G1. This
 +    // violates the principle that a tserver can only have one extra tablet for a given group.
 +    //
 +    // T1 | T2 | T3 | T4 | T5 | T6
 +    // ---+----+----+----+----+-----
 +    // G1 | EE | EE | EE | EE | EE <--- one extra tablets from group 1
 +    // G3 | G3 | G3 | EE | EE | EE <--- three extra tablets from group 3
 +    // G2 | G2 | G2 | EE | EE | EE <--- three extra tablets from group 2
 +    // G1 | G1 | EE | EE | EE | EE <--- two extra tablets from group 1
 +    // G1 | G1 | G1 | G1 | G1 | G1 <-- expected tablets for group 1
 +    // G2 | G2 | G2 | G2 | G2 | G2 <-- expected tablets for group 2
 +    // G3 | G3 | G3 | G3 | G3 | G3 <-- expected tablets for group 3
 +
 +    if (!shouldBalance(current, migrations)) {
 +      return 5000;
 +    }
 +
 +    if (System.currentTimeMillis() - lastRun < getWaitTime()) {
 +      return 5000;
 +    }
 +
 +    MapCounter<String> groupCounts = new MapCounter<>();
 +    Map<TServerInstance,TserverGroupInfo> tservers = new HashMap<>();
 +
 +    for (TServerInstance tsi : current.keySet()) {
 +      tservers.put(tsi, new TserverGroupInfo(tsi));
 +    }
 +
 +    Function<KeyExtent,String> partitioner = getPartitioner();
 +
 +    // collect stats about current state
 +    for (Pair<KeyExtent,Location> entry : getLocationProvider()) {
 +      String group = partitioner.apply(entry.getFirst());
 +      Location loc = entry.getSecond();
 +
 +      if (loc.equals(Location.NONE) || !tservers.containsKey(loc.getTserverInstance())) {
 +        return 5000;
 +      }
 +
 +      groupCounts.increment(group, 1);
 +      TserverGroupInfo tgi = tservers.get(loc.getTserverInstance());
 +      tgi.addGroup(group);
 +    }
 +
 +    Map<String,Integer> expectedCounts = new HashMap<>();
 +
 +    int totalExtra = 0;
 +    for (String group : groupCounts.keySet()) {
 +      long groupCount = groupCounts.get(group);
 +      totalExtra += groupCount % current.size();
 +      expectedCounts.put(group, (int) (groupCount / current.size()));
 +    }
 +
 +    // The number of extra tablets from all groups that each tserver must have.
 +    int expectedExtra = totalExtra / current.size();
 +    int maxExtraGroups = expectedExtra + 1;
 +
 +    expectedCounts = Collections.unmodifiableMap(expectedCounts);
 +    tservers = Collections.unmodifiableMap(tservers);
 +
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      tgi.finishedAdding(expectedCounts);
 +    }
 +
 +    Moves moves = new Moves();
 +
 +    // The order of the following steps is important, because as ordered each step should not move any tablets moved by a previous step.
 +    balanceExpected(tservers, moves);
 +    if (moves.size() < getMaxMigrations()) {
 +      balanceExtraExpected(tservers, expectedExtra, moves);
 +      if (moves.size() < getMaxMigrations()) {
 +        boolean cont = balanceExtraMultiple(tservers, maxExtraGroups, moves);
 +        if (cont && moves.size() < getMaxMigrations()) {
 +          balanceExtraExtra(tservers, maxExtraGroups, moves);
 +        }
 +      }
 +    }
 +
 +    populateMigrations(tservers.keySet(), migrationsOut, moves);
 +
 +    lastRun = System.currentTimeMillis();
 +
 +    return 5000;
 +  }
 +
 +  public static class Location {
 +    public static final Location NONE = new Location();
 +    private final TServerInstance tserverInstance;
 +
 +    public Location() {
 +      this(null);
 +    }
 +
 +    public Location(TServerInstance tsi) {
 +      tserverInstance = tsi;
 +    }
 +
 +    public TServerInstance getTserverInstance() {
 +      return tserverInstance;
 +    }
 +
 +    @Override
 +    public int hashCode() {
 +      return Objects.hashCode(tserverInstance);
 +    }
 +
 +    @Override
 +    public boolean equals(Object o) {
 +      if (o instanceof Location) {
 +        Location ol = ((Location) o);
 +        if (tserverInstance == ol.tserverInstance) {
 +          return true;
 +        }
 +        return tserverInstance.equals(ol.tserverInstance);
 +      }
 +      return false;
 +    }
 +  }
 +
 +  static class TserverGroupInfo {
 +
 +    private Map<String,Integer> expectedCounts;
 +    private final Map<String,MutableInt> initialCounts = new HashMap<>();
 +    private final Map<String,Integer> extraCounts = new HashMap<>();
 +    private final Map<String,Integer> expectedDeficits = new HashMap<>();
 +
 +    private final TServerInstance tsi;
 +    private boolean finishedAdding = false;
 +
 +    TserverGroupInfo(TServerInstance tsi) {
 +      this.tsi = tsi;
 +    }
 +
 +    public void addGroup(String group) {
 +      Preconditions.checkState(!finishedAdding);
 +
 +      MutableInt mi = initialCounts.get(group);
 +      if (mi == null) {
 +        mi = new MutableInt();
 +        initialCounts.put(group, mi);
 +      }
 +
 +      mi.increment();
 +    }
 +
 +    public void finishedAdding(Map<String,Integer> expectedCounts) {
 +      Preconditions.checkState(!finishedAdding);
 +      finishedAdding = true;
 +      this.expectedCounts = expectedCounts;
 +
 +      for (Entry<String,Integer> entry : expectedCounts.entrySet()) {
 +        String group = entry.getKey();
 +        int expected = entry.getValue();
 +
 +        MutableInt count = initialCounts.get(group);
 +        int num = count == null ? 0 : count.intValue();
 +
 +        if (num < expected) {
 +          expectedDeficits.put(group, expected - num);
 +        } else if (num > expected) {
 +          extraCounts.put(group, num - expected);
 +        }
 +      }
 +
 +    }
 +
 +    public void moveOff(String group, int num) {
 +      Preconditions.checkArgument(num > 0);
 +      Preconditions.checkState(finishedAdding);
 +
 +      Integer extraCount = extraCounts.get(group);
 +
 +      Preconditions.checkArgument(extraCount != null && extraCount >= num, "group=%s num=%s extraCount=%s", group, num, extraCount);
 +
 +      MutableInt initialCount = initialCounts.get(group);
 +
 +      Preconditions.checkArgument(initialCount.intValue() >= num);
 +
 +      initialCount.subtract(num);
 +
 +      if (extraCount - num == 0) {
 +        extraCounts.remove(group);
 +      } else {
 +        extraCounts.put(group, extraCount - num);
 +      }
 +    }
 +
 +    public void moveTo(String group, int num) {
 +      Preconditions.checkArgument(num > 0);
 +      Preconditions.checkArgument(expectedCounts.containsKey(group));
 +      Preconditions.checkState(finishedAdding);
 +
 +      Integer deficit = expectedDeficits.get(group);
 +      if (deficit != null) {
 +        if (num >= deficit) {
 +          expectedDeficits.remove(group);
 +          num -= deficit;
 +        } else {
 +          expectedDeficits.put(group, deficit - num);
 +          num = 0;
 +        }
 +      }
 +
 +      if (num > 0) {
 +        Integer extra = extraCounts.get(group);
 +        if (extra == null) {
 +          extra = 0;
 +        }
 +
 +        extraCounts.put(group, extra + num);
 +      }
 +
 +      // TODO could check extra constraints
 +    }
 +
 +    public Map<String,Integer> getExpectedDeficits() {
 +      Preconditions.checkState(finishedAdding);
 +      return Collections.unmodifiableMap(expectedDeficits);
 +    }
 +
 +    public Map<String,Integer> getExtras() {
 +      Preconditions.checkState(finishedAdding);
 +      return Collections.unmodifiableMap(extraCounts);
 +    }
 +
 +    public TServerInstance getTserverInstance() {
 +      return tsi;
 +    }
 +
 +    @Override
 +    public int hashCode() {
 +      return tsi.hashCode();
 +    }
 +
 +    @Override
 +    public boolean equals(Object o) {
 +      if (o instanceof TserverGroupInfo) {
 +        TserverGroupInfo otgi = (TserverGroupInfo) o;
 +        return tsi.equals(otgi.tsi);
 +      }
 +
 +      return false;
 +    }
 +
 +    @Override
 +    public String toString() {
 +      return tsi.toString();
 +    }
 +
 +  }
 +
 +  private static class Move {
 +    TserverGroupInfo dest;
 +    int count;
 +
 +    public Move(TserverGroupInfo dest, int num) {
 +      this.dest = dest;
 +      this.count = num;
 +    }
 +  }
 +
 +  private static class Moves {
 +
 +    private final Table<TServerInstance,String,List<Move>> moves = HashBasedTable.create();
 +    private int totalMoves = 0;
 +
 +    public void move(String group, int num, TserverGroupInfo src, TserverGroupInfo dest) {
 +      Preconditions.checkArgument(num > 0);
 +      Preconditions.checkArgument(!src.equals(dest));
 +
 +      src.moveOff(group, num);
 +      dest.moveTo(group, num);
 +
 +      List<Move> srcMoves = moves.get(src.getTserverInstance(), group);
 +      if (srcMoves == null) {
 +        srcMoves = new ArrayList<>();
 +        moves.put(src.getTserverInstance(), group, srcMoves);
 +      }
 +
 +      srcMoves.add(new Move(dest, num));
 +      totalMoves += num;
 +    }
 +
 +    public TServerInstance removeMove(TServerInstance src, String group) {
 +      List<Move> srcMoves = moves.get(src, group);
 +      if (srcMoves == null) {
 +        return null;
 +      }
 +
 +      Move move = srcMoves.get(srcMoves.size() - 1);
 +      TServerInstance ret = move.dest.getTserverInstance();
 +      totalMoves--;
 +
 +      move.count--;
 +      if (move.count == 0) {
 +        srcMoves.remove(srcMoves.size() - 1);
 +        if (srcMoves.size() == 0) {
 +          moves.remove(src, group);
 +        }
 +      }
 +
 +      return ret;
 +    }
 +
 +    public int size() {
 +      return totalMoves;
 +    }
 +  }
 +
 +  private void balanceExtraExtra(Map<TServerInstance,TserverGroupInfo> tservers, int maxExtraGroups, Moves moves) {
 +    Table<String,TServerInstance,TserverGroupInfo> surplusExtra = HashBasedTable.create();
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      Map<String,Integer> extras = tgi.getExtras();
 +      if (extras.size() > maxExtraGroups) {
 +        for (String group : extras.keySet()) {
 +          surplusExtra.put(group, tgi.getTserverInstance(), tgi);
 +        }
 +      }
 +    }
 +
 +    ArrayList<Pair<String,TServerInstance>> serversGroupsToRemove = new ArrayList<>();
 +    ArrayList<TServerInstance> serversToRemove = new ArrayList<>();
 +
 +    for (TserverGroupInfo destTgi : tservers.values()) {
 +      if (surplusExtra.size() == 0) {
 +        break;
 +      }
 +
 +      Map<String,Integer> extras = destTgi.getExtras();
 +      if (extras.size() < maxExtraGroups) {
 +        serversToRemove.clear();
 +        serversGroupsToRemove.clear();
 +        for (String group : surplusExtra.rowKeySet()) {
 +          if (!extras.containsKey(group)) {
 +            TserverGroupInfo srcTgi = surplusExtra.row(group).values().iterator().next();
 +
 +            moves.move(group, 1, srcTgi, destTgi);
 +
 +            if (srcTgi.getExtras().size() <= maxExtraGroups) {
 +              serversToRemove.add(srcTgi.getTserverInstance());
 +            } else {
 +              serversGroupsToRemove.add(new Pair<String,TServerInstance>(group, srcTgi.getTserverInstance()));
 +            }
 +
 +            if (destTgi.getExtras().size() >= maxExtraGroups || moves.size() >= getMaxMigrations()) {
 +              break;
 +            }
 +          }
 +        }
 +
 +        if (serversToRemove.size() > 0) {
 +          surplusExtra.columnKeySet().removeAll(serversToRemove);
 +        }
 +
 +        for (Pair<String,TServerInstance> pair : serversGroupsToRemove) {
 +          surplusExtra.remove(pair.getFirst(), pair.getSecond());
 +        }
 +
 +        if (moves.size() >= getMaxMigrations()) {
 +          break;
 +        }
 +      }
 +    }
 +  }
 +
 +  private boolean balanceExtraMultiple(Map<TServerInstance,TserverGroupInfo> tservers, int maxExtraGroups, Moves moves) {
 +    Multimap<String,TserverGroupInfo> extraMultiple = HashMultimap.create();
 +
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      Map<String,Integer> extras = tgi.getExtras();
 +      for (Entry<String,Integer> entry : extras.entrySet()) {
 +        if (entry.getValue() > 1) {
 +          extraMultiple.put(entry.getKey(), tgi);
 +        }
 +      }
 +    }
 +
 +    balanceExtraMultiple(tservers, maxExtraGroups, moves, extraMultiple, false);
 +    if (moves.size() < getMaxMigrations() && extraMultiple.size() > 0) {
 +      // no place to move so must exceed maxExtra temporarily... subsequent balancer calls will smooth things out
 +      balanceExtraMultiple(tservers, maxExtraGroups, moves, extraMultiple, true);
 +      return false;
 +    } else {
 +      return true;
 +    }
 +  }
 +
 +  private void balanceExtraMultiple(Map<TServerInstance,TserverGroupInfo> tservers, int maxExtraGroups, Moves moves,
 +      Multimap<String,TserverGroupInfo> extraMultiple, boolean alwaysAdd) {
 +
 +    ArrayList<Pair<String,TserverGroupInfo>> serversToRemove = new ArrayList<>();
 +    for (TserverGroupInfo destTgi : tservers.values()) {
 +      Map<String,Integer> extras = destTgi.getExtras();
 +      if (alwaysAdd || extras.size() < maxExtraGroups) {
 +        serversToRemove.clear();
 +        for (String group : extraMultiple.keySet()) {
 +          if (!extras.containsKey(group)) {
 +            Collection<TserverGroupInfo> sources = extraMultiple.get(group);
 +            Iterator<TserverGroupInfo> iter = sources.iterator();
 +            TserverGroupInfo srcTgi = iter.next();
 +
 +            int num = srcTgi.getExtras().get(group);
 +
 +            moves.move(group, 1, srcTgi, destTgi);
 +
 +            if (num == 2) {
 +              serversToRemove.add(new Pair<String,TserverGroupInfo>(group, srcTgi));
 +            }
 +
 +            if (destTgi.getExtras().size() >= maxExtraGroups || moves.size() >= getMaxMigrations()) {
 +              break;
 +            }
 +          }
 +        }
 +
 +        for (Pair<String,TserverGroupInfo> pair : serversToRemove) {
 +          extraMultiple.remove(pair.getFirst(), pair.getSecond());
 +        }
 +
 +        if (extraMultiple.size() == 0 || moves.size() >= getMaxMigrations()) {
 +          break;
 +        }
 +      }
 +    }
 +  }
 +
 +  private void balanceExtraExpected(Map<TServerInstance,TserverGroupInfo> tservers, int expectedExtra, Moves moves) {
 +
 +    Table<String,TServerInstance,TserverGroupInfo> extraSurplus = HashBasedTable.create();
 +
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      Map<String,Integer> extras = tgi.getExtras();
 +      if (extras.size() > expectedExtra) {
 +        for (String group : extras.keySet()) {
 +          extraSurplus.put(group, tgi.getTserverInstance(), tgi);
 +        }
 +      }
 +    }
 +
 +    ArrayList<TServerInstance> emptyServers = new ArrayList<>();
 +    ArrayList<Pair<String,TServerInstance>> emptyServerGroups = new ArrayList<>();
 +    for (TserverGroupInfo destTgi : tservers.values()) {
 +      if (extraSurplus.size() == 0) {
 +        break;
 +      }
 +
 +      Map<String,Integer> extras = destTgi.getExtras();
 +      if (extras.size() < expectedExtra) {
 +        emptyServers.clear();
 +        emptyServerGroups.clear();
 +        nextGroup: for (String group : extraSurplus.rowKeySet()) {
 +          if (!extras.containsKey(group)) {
 +            Iterator<TserverGroupInfo> iter = extraSurplus.row(group).values().iterator();
 +            TserverGroupInfo srcTgi = iter.next();
 +
 +            while (srcTgi.getExtras().size() <= expectedExtra) {
 +              if (iter.hasNext()) {
 +                srcTgi = iter.next();
 +              } else {
 +                continue nextGroup;
 +              }
 +            }
 +
 +            moves.move(group, 1, srcTgi, destTgi);
 +
 +            if (srcTgi.getExtras().size() <= expectedExtra) {
 +              emptyServers.add(srcTgi.getTserverInstance());
 +            } else if (srcTgi.getExtras().get(group) == null) {
 +              emptyServerGroups.add(new Pair<String,TServerInstance>(group, srcTgi.getTserverInstance()));
 +            }
 +
 +            if (destTgi.getExtras().size() >= expectedExtra || moves.size() >= getMaxMigrations()) {
 +              break;
 +            }
 +          }
 +        }
 +
 +        if (emptyServers.size() > 0) {
 +          extraSurplus.columnKeySet().removeAll(emptyServers);
 +        }
 +
 +        for (Pair<String,TServerInstance> pair : emptyServerGroups) {
 +          extraSurplus.remove(pair.getFirst(), pair.getSecond());
 +        }
 +
 +        if (moves.size() >= getMaxMigrations()) {
 +          break;
 +        }
 +      }
 +    }
 +  }
 +
 +  private void balanceExpected(Map<TServerInstance,TserverGroupInfo> tservers, Moves moves) {
 +    Multimap<String,TserverGroupInfo> groupDeficits = HashMultimap.create();
 +    Multimap<String,TserverGroupInfo> groupSurplus = HashMultimap.create();
 +
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      for (String group : tgi.getExpectedDeficits().keySet()) {
 +        groupDeficits.put(group, tgi);
 +      }
 +
 +      for (String group : tgi.getExtras().keySet()) {
 +        groupSurplus.put(group, tgi);
 +      }
 +    }
 +
 +    for (String group : groupDeficits.keySet()) {
 +      Collection<TserverGroupInfo> deficitServers = groupDeficits.get(group);
 +      for (TserverGroupInfo deficitTsi : deficitServers) {
 +        int numToMove = deficitTsi.getExpectedDeficits().get(group);
 +
 +        Iterator<TserverGroupInfo> surplusIter = groupSurplus.get(group).iterator();
 +        while (numToMove > 0) {
 +          TserverGroupInfo surplusTsi = surplusIter.next();
 +
 +          int available = surplusTsi.getExtras().get(group);
 +
 +          if (numToMove >= available) {
 +            surplusIter.remove();
 +          }
 +
 +          int transfer = Math.min(numToMove, available);
 +
 +          numToMove -= transfer;
 +
 +          moves.move(group, transfer, surplusTsi, deficitTsi);
 +          if (moves.size() >= getMaxMigrations()) {
 +            return;
 +          }
 +        }
 +      }
 +    }
 +  }
 +
 +  private void populateMigrations(Set<TServerInstance> current, List<TabletMigration> migrationsOut, Moves moves) {
 +    if (moves.size() == 0) {
 +      return;
 +    }
 +
 +    Function<KeyExtent,String> partitioner = getPartitioner();
 +
 +    for (Pair<KeyExtent,Location> entry : getLocationProvider()) {
 +      String group = partitioner.apply(entry.getFirst());
 +      Location loc = entry.getSecond();
 +
 +      if (loc.equals(Location.NONE) || !current.contains(loc.getTserverInstance())) {
 +        migrationsOut.clear();
 +        return;
 +      }
 +
 +      TServerInstance dest = moves.removeMove(loc.getTserverInstance(), group);
 +      if (dest != null) {
 +        migrationsOut.add(new TabletMigration(entry.getFirst(), loc.getTserverInstance(), dest));
 +        if (moves.size() == 0) {
 +          break;
 +        }
 +      }
 +    }
 +  }
 +
 +  static class LocationFunction implements Function<Iterator<Entry<Key,Value>>,Pair<KeyExtent,Location>> {
 +    @Override
 +    public Pair<KeyExtent,Location> apply(Iterator<Entry<Key,Value>> input) {
 +      Location loc = Location.NONE;
 +      KeyExtent extent = null;
 +      while (input.hasNext()) {
 +        Entry<Key,Value> entry = input.next();
 +        if (entry.getKey().getColumnFamily().equals(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME)) {
 +          loc = new Location(new TServerInstance(entry.getValue(), entry.getKey().getColumnQualifier()));
 +        } else if (MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.hasColumns(entry.getKey())) {
 +          extent = new KeyExtent(entry.getKey().getRow(), entry.getValue());
 +        }
 +      }
 +
 +      return new Pair<KeyExtent,Location>(extent, loc);
 +    }
 +
 +  }
 +
 +  class MetadataLocationProvider implements Iterable<Pair<KeyExtent,Location>> {
 +
 +    @Override
 +    public Iterator<Pair<KeyExtent,Location>> iterator() {
 +      try {
 +        Scanner scanner = new IsolatedScanner(context.getConnector().createScanner(MetadataTable.NAME, Authorizations.EMPTY));
 +        scanner.fetchColumnFamily(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME);
 +        MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.fetch(scanner);
 +        scanner.setRange(MetadataSchema.TabletsSection.getRange(tableId));
 +
 +        RowIterator rowIter = new RowIterator(scanner);
 +
 +        return Iterators.transform(rowIter, new LocationFunction());
 +      } catch (Exception e) {
 +        throw new RuntimeException(e);
 +      }
 +    }
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
----------------------------------------------------------------------
diff --cc server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
index 724a606,0000000..0d07a77
mode 100644,000000..100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
@@@ -1,96 -1,0 +1,96 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.accumulo.server.master.balancer;
 +
 +import java.util.Map;
 +import java.util.regex.Matcher;
 +import java.util.regex.Pattern;
 +
 +import org.apache.accumulo.core.conf.AccumuloConfiguration;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.impl.KeyExtent;
 +import org.apache.hadoop.io.Text;
 +
 +import com.google.common.base.Function;
 +
 +/**
 + * A {@link GroupBalancer} that groups tablets using a configurable regex. To use this balancer, configure the following settings for your table, then set
 + * this balancer as the table's load balancer.
 + *
 + * <ul>
-  * <li/>Set {@code table.custom.balancer.group.regex.pattern} to a regular expression. This regular expression must have one group. The regex is applied to the
++ * <li>Set {@code table.custom.balancer.group.regex.pattern} to a regular expression. This regular expression must have one group. The regex is applied to the
 + * tablet end row and whatever the regex group matches is used as the group. For example with a regex of {@code (\d\d).*} and an end row of {@code 12abc}, the
 + * group for the tablet would be {@code 12}.
-  * <li/>Set {@code table.custom.balancer.group.regex.default} to a default group. This group is returned for the last tablet in the table and tablets for which
++ * <li>Set {@code table.custom.balancer.group.regex.default} to a default group. This group is returned for the last tablet in the table and tablets for which
 + * the regex does not match.
-  * <li/>Optionally set {@code table.custom.balancer.group.regex.wait.time} to time (can use time suffixes). This determines how long to wait between balancing.
++ * <li>Optionally set {@code table.custom.balancer.group.regex.wait.time} to a time (can use time suffixes). This determines how long to wait between balancing.
 + * Since this balancer scans the metadata table, you may want to set this higher for large tables.
 + * </ul>
 + */
 +
 +public class RegexGroupBalancer extends GroupBalancer {
 +
 +  public static final String REGEX_PROPERTY = Property.TABLE_ARBITRARY_PROP_PREFIX.getKey() + "balancer.group.regex.pattern";
 +  public static final String DEFAUT_GROUP_PROPERTY = Property.TABLE_ARBITRARY_PROP_PREFIX.getKey() + "balancer.group.regex.default";
 +  public static final String WAIT_TIME_PROPERTY = Property.TABLE_ARBITRARY_PROP_PREFIX.getKey() + "balancer.group.regex.wait.time";
 +
 +  private final String tableId;
 +
 +  public RegexGroupBalancer(String tableId) {
 +    super(tableId);
 +    this.tableId = tableId;
 +  }
 +
 +  @Override
 +  protected long getWaitTime() {
 +    Map<String,String> customProps = configuration.getTableConfiguration(tableId).getAllPropertiesWithPrefix(Property.TABLE_ARBITRARY_PROP_PREFIX);
 +    if (customProps.containsKey(WAIT_TIME_PROPERTY)) {
 +      return AccumuloConfiguration.getTimeInMillis(customProps.get(WAIT_TIME_PROPERTY));
 +    }
 +
 +    return super.getWaitTime();
 +  }
 +
 +  @Override
 +  protected Function<KeyExtent,String> getPartitioner() {
 +
 +    Map<String,String> customProps = configuration.getTableConfiguration(tableId).getAllPropertiesWithPrefix(Property.TABLE_ARBITRARY_PROP_PREFIX);
 +    String regex = customProps.get(REGEX_PROPERTY);
 +    final String defaultGroup = customProps.get(DEFAUT_GROUP_PROPERTY);
 +
 +    final Pattern pattern = Pattern.compile(regex);
 +
 +    return new Function<KeyExtent,String>() {
 +
 +      @Override
 +      public String apply(KeyExtent input) {
 +        Text er = input.getEndRow();
 +        if (er == null) {
 +          return defaultGroup;
 +        }
 +
 +        Matcher matcher = pattern.matcher(er.toString());
 +        if (matcher.matches() && matcher.groupCount() == 1) {
 +          return matcher.group(1);
 +        }
 +
 +        return defaultGroup;
 +      }
 +    };
 +  }
 +}

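For illustration, a minimal sketch (not part of this commit) of wiring this balancer up through the client API; imports and exception handling are omitted,
"conn" is assumed to be an existing Connector, and the table name, regex, and wait time are illustrative assumptions:

    // Hypothetical sketch: property keys come from RegexGroupBalancer above.
    // Assumes end rows of table "metrics" begin with a two-digit group id.
    TableOperations tops = conn.tableOperations();
    tops.setProperty("metrics", RegexGroupBalancer.REGEX_PROPERTY, "(\\d\\d).*");
    tops.setProperty("metrics", RegexGroupBalancer.DEFAUT_GROUP_PROPERTY, "none");
    tops.setProperty("metrics", RegexGroupBalancer.WAIT_TIME_PROPERTY, "2m");
    tops.setProperty("metrics", Property.TABLE_LOAD_BALANCER.getKey(), RegexGroupBalancer.class.getName());
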
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
----------------------------------------------------------------------
diff --cc server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
index fada1ad,0000000..2a1fd00
mode 100644,000000..100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
@@@ -1,228 -1,0 +1,228 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.server.security;
 +
 +import java.util.Arrays;
 +import java.util.Collection;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.Iterator;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import org.apache.accumulo.core.conf.AccumuloConfiguration;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.commons.lang.StringUtils;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * When SASL is enabled, this parses properties from the site configuration to build up a set of all users capable of impersonating another user, the users
 + * which may be impersonated, and the hosts from which the impersonator may issue requests.
 + *
-  * <code>rpc_user=>{allowed_accumulo_users=[...], allowed_client_hosts=[...]</code>
++ * <code>rpc_user=&gt;{allowed_accumulo_users=[...], allowed_client_hosts=[...]}</code>
 + *
 + * @see Property#INSTANCE_RPC_SASL_PROXYUSERS
 + */
 +public class UserImpersonation {
 +
 +  private static final Logger log = LoggerFactory.getLogger(UserImpersonation.class);
 +  private static final Set<String> ALWAYS_TRUE = new AlwaysTrueSet<>();
 +  private static final String ALL = "*", USERS = "users", HOSTS = "hosts";
 +
 +  public static class AlwaysTrueSet<T> implements Set<T> {
 +
 +    @Override
 +    public int size() {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean isEmpty() {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean contains(Object o) {
 +      return true;
 +    }
 +
 +    @Override
 +    public Iterator<T> iterator() {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public Object[] toArray() {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public <E> E[] toArray(E[] a) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean add(T e) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean remove(Object o) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean containsAll(Collection<?> c) {
 +      return true;
 +    }
 +
 +    @Override
 +    public boolean addAll(Collection<? extends T> c) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean retainAll(Collection<?> c) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean removeAll(Collection<?> c) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public void clear() {
 +      throw new UnsupportedOperationException();
 +    }
 +  }
 +
 +  public static class UsersWithHosts {
 +    private Set<String> users = new HashSet<>(), hosts = new HashSet<>();
 +    private boolean allUsers, allHosts;
 +
 +    public UsersWithHosts() {
 +      allUsers = allHosts = false;
 +    }
 +
 +    public UsersWithHosts(Set<String> users, Set<String> hosts) {
 +      this();
 +      this.users = users;
 +      this.hosts = hosts;
 +    }
 +
 +    public Set<String> getUsers() {
 +      if (allUsers) {
 +        return ALWAYS_TRUE;
 +      }
 +      return users;
 +    }
 +
 +    public Set<String> getHosts() {
 +      if (allHosts) {
 +        return ALWAYS_TRUE;
 +      }
 +      return hosts;
 +    }
 +
 +    public boolean acceptsAllUsers() {
 +      return allUsers;
 +    }
 +
 +    public void setAcceptAllUsers(boolean allUsers) {
 +      this.allUsers = allUsers;
 +    }
 +
 +    public boolean acceptsAllHosts() {
 +      return allHosts;
 +    }
 +
 +    public void setAcceptAllHosts(boolean allHosts) {
 +      this.allHosts = allHosts;
 +    }
 +
 +    public void setUsers(Set<String> users) {
 +      this.users = users;
 +      allUsers = false;
 +    }
 +
 +    public void setHosts(Set<String> hosts) {
 +      this.hosts = hosts;
 +      allHosts = false;
 +    }
 +  }
 +
 +  private final Map<String,UsersWithHosts> proxyUsers;
 +
 +  public UserImpersonation(AccumuloConfiguration conf) {
 +    Map<String,String> entries = conf.getAllPropertiesWithPrefix(Property.INSTANCE_RPC_SASL_PROXYUSERS);
 +    proxyUsers = new HashMap<>();
 +    final String configKey = Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey();
 +    for (Entry<String,String> entry : entries.entrySet()) {
 +      String aclKey = entry.getKey().substring(configKey.length());
 +      int index = aclKey.lastIndexOf('.');
 +
 +      if (-1 == index) {
 +        throw new RuntimeException("Expected 2 elements in key suffix: " + aclKey);
 +      }
 +
 +      final String remoteUser = aclKey.substring(0, index).trim(), usersOrHosts = aclKey.substring(index + 1).trim();
 +      UsersWithHosts usersWithHosts = proxyUsers.get(remoteUser);
 +      if (null == usersWithHosts) {
 +        usersWithHosts = new UsersWithHosts();
 +        proxyUsers.put(remoteUser, usersWithHosts);
 +      }
 +
 +      if (USERS.equals(usersOrHosts)) {
 +        String userString = entry.getValue().trim();
 +        if (ALL.equals(userString)) {
 +          usersWithHosts.setAcceptAllUsers(true);
 +        } else if (!usersWithHosts.acceptsAllUsers()) {
 +          Set<String> users = usersWithHosts.getUsers();
 +          if (null == users) {
 +            users = new HashSet<>();
 +            usersWithHosts.setUsers(users);
 +          }
 +          String[] userValues = StringUtils.split(userString, ',');
 +          users.addAll(Arrays.<String> asList(userValues));
 +        }
 +      } else if (HOSTS.equals(usersOrHosts)) {
 +        String hostsString = entry.getValue().trim();
 +        if (ALL.equals(hostsString)) {
 +          usersWithHosts.setAcceptAllHosts(true);
 +        } else if (!usersWithHosts.acceptsAllHosts()) {
 +          Set<String> hosts = usersWithHosts.getHosts();
 +          if (null == hosts) {
 +            hosts = new HashSet<>();
 +            usersWithHosts.setHosts(hosts);
 +          }
 +          String[] hostValues = StringUtils.split(hostsString, ',');
 +          hosts.addAll(Arrays.<String> asList(hostValues));
 +        }
 +      } else {
 +        log.debug("Ignoring key " + aclKey);
 +      }
 +    }
 +  }
 +
 +  public UsersWithHosts get(String remoteUser) {
 +    return proxyUsers.get(remoteUser);
 +  }
 +
 +}

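For illustration, a minimal sketch (not part of this commit) of how these proxy-user properties drive UserImpersonation, using an in-memory configuration;
imports are omitted, and the rpc user "webserver" and the allowed users are illustrative assumptions:

    // Hypothetical sketch: populate the INSTANCE_RPC_SASL_PROXYUSERS-prefixed
    // properties parsed above, then query the resulting impersonation rules.
    Map<String,String> props = new HashMap<>();
    String prefix = Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey();
    props.put(prefix + "webserver.users", "alice,bob");
    props.put(prefix + "webserver.hosts", "*");
    UserImpersonation impersonation = new UserImpersonation(new ConfigurationCopy(props));
    UserImpersonation.UsersWithHosts allowed = impersonation.get("webserver");
    assert allowed.getUsers().contains("alice");
    assert allowed.acceptsAllHosts();
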
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
----------------------------------------------------------------------
diff --cc server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
index 1af908b,a4c5fd6..274ec76
--- a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
@@@ -56,8 -53,8 +56,8 @@@ public class SystemCredentialsTest 
    }
  
    /**
 -   * This is a test to ensure the string literal in {@link ConnectorImpl#ConnectorImpl(Instance, Credentials)} is kept up-to-date if we move the
 -   * {@link SystemToken}<br>
 +   * This is a test to ensure the string literal in {@link ConnectorImpl#ConnectorImpl(org.apache.accumulo.core.client.impl.ClientContext)} is kept up-to-date
-    * if we move the {@link SystemToken}<br/>
++   * if we move the {@link SystemToken}<br>
     * This check will not be needed after ACCUMULO-1578
     */
    @Test

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
----------------------------------------------------------------------
diff --cc server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
index e30e9ac,0000000..f24da7e
mode 100644,000000..100644
--- a/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
@@@ -1,227 -1,0 +1,227 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.master.replication;
 +
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.Iterator;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.conf.AccumuloConfiguration;
 +import org.apache.accumulo.core.replication.ReplicationConstants;
 +import org.apache.accumulo.core.replication.ReplicationTarget;
 +import org.apache.accumulo.core.zookeeper.ZooUtil;
 +import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
 +import org.apache.hadoop.fs.Path;
 +import org.apache.zookeeper.KeeperException;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
-  * Creates work in ZK which is <code>filename.serialized_ReplicationTarget => filename</code>, but replicates files in the order in which they were created.
++ * Creates work in ZK which is <code>filename.serialized_ReplicationTarget =&gt; filename</code>, but replicates files in the order in which they were created.
 + * <p>
 + * The intent is to ensure that WALs are replayed on the peer in the same order as they were applied on the primary.
 + */
 +public class SequentialWorkAssigner extends DistributedWorkQueueWorkAssigner {
 +  private static final Logger log = LoggerFactory.getLogger(SequentialWorkAssigner.class);
 +  private static final String NAME = "Sequential Work Assigner";
 +
 +  // @formatter:off
 +  /*
 +   * {
 +   *    peer1 => {sourceTableId1 => work_queue_key1, sourceTableId2 => work_queue_key2, ...}
 +   *    peer2 => {sourceTableId1 => work_queue_key1, sourceTableId3 => work_queue_key4, ...}
 +   *    ...
 +   * }
 +   */
 +  // @formatter:on
 +  private Map<String,Map<String,String>> queuedWorkByPeerName;
 +
 +  public SequentialWorkAssigner() {}
 +
 +  public SequentialWorkAssigner(AccumuloConfiguration conf, Connector conn) {
 +    configure(conf, conn);
 +  }
 +
 +  @Override
 +  public String getName() {
 +    return NAME;
 +  }
 +
 +  protected Map<String,Map<String,String>> getQueuedWork() {
 +    return queuedWorkByPeerName;
 +  }
 +
 +  protected void setQueuedWork(Map<String,Map<String,String>> queuedWork) {
 +    this.queuedWorkByPeerName = queuedWork;
 +  }
 +
 +  /**
 +   * Initialize the queuedWork set with the work already sent out
 +   */
 +  @Override
 +  protected void initializeQueuedWork() {
 +    if (null != queuedWorkByPeerName) {
 +      return;
 +    }
 +
 +    queuedWorkByPeerName = new HashMap<>();
 +    List<String> existingWork;
 +    try {
 +      existingWork = workQueue.getWorkQueued();
 +    } catch (KeeperException | InterruptedException e) {
 +      throw new RuntimeException("Error reading existing queued replication work", e);
 +    }
 +
 +    log.info("Restoring replication work queue state from zookeeper");
 +
 +    for (String work : existingWork) {
 +      Entry<String,ReplicationTarget> entry = DistributedWorkQueueWorkAssignerHelper.fromQueueKey(work);
 +      String filename = entry.getKey();
 +      String peerName = entry.getValue().getPeerName();
 +      String sourceTableId = entry.getValue().getSourceTableId();
 +
 +      log.debug("In progress replication of {} from table with ID {} to peer {}", filename, sourceTableId, peerName);
 +
 +      Map<String,String> replicationForPeer = queuedWorkByPeerName.get(peerName);
 +      if (null == replicationForPeer) {
 +        replicationForPeer = new HashMap<>();
 +        queuedWorkByPeerName.put(peerName, replicationForPeer);
 +      }
 +
 +      replicationForPeer.put(sourceTableId, work);
 +    }
 +  }
 +
 +  /**
 +   * Iterate over the queued work to remove entries that have been completed.
 +   */
 +  @Override
 +  protected void cleanupFinishedWork() {
 +    final Iterator<Entry<String,Map<String,String>>> queuedWork = queuedWorkByPeerName.entrySet().iterator();
 +    final String instanceId = conn.getInstance().getInstanceID();
 +
 +    int elementsRemoved = 0;
 +    // Check the status of all the work we've queued up
 +    while (queuedWork.hasNext()) {
 +      // {peer -> {tableId -> workKey, tableId -> workKey, ... }, peer -> ...}
 +      Entry<String,Map<String,String>> workForPeer = queuedWork.next();
 +
 +      // TableID to workKey (filename and ReplicationTarget)
 +      Map<String,String> queuedReplication = workForPeer.getValue();
 +
 +      Iterator<Entry<String,String>> iter = queuedReplication.entrySet().iterator();
 +      // Loop over every target we need to replicate this file to, removing the target when
 +      // the replication task has finished
 +      while (iter.hasNext()) {
 +        // tableID -> workKey
 +        Entry<String,String> entry = iter.next();
 +        // Null equates to the work for this target was finished
 +        if (null == zooCache.get(ZooUtil.getRoot(instanceId) + ReplicationConstants.ZOO_WORK_QUEUE + "/" + entry.getValue())) {
 +          log.debug("Removing {} from work assignment state", entry.getValue());
 +          iter.remove();
 +          elementsRemoved++;
 +        }
 +      }
 +    }
 +
 +    log.info("Removed {} elements from internal workqueue state because the work was complete", elementsRemoved);
 +  }
 +
 +  @Override
 +  protected int getQueueSize() {
 +    return queuedWorkByPeerName.size();
 +  }
 +
 +  @Override
 +  protected boolean shouldQueueWork(ReplicationTarget target) {
 +    Map<String,String> queuedWorkForPeer = this.queuedWorkByPeerName.get(target.getPeerName());
 +    if (null == queuedWorkForPeer) {
 +      return true;
 +    }
 +
 +    String queuedWork = queuedWorkForPeer.get(target.getSourceTableId());
 +
 +    // If we have no work for the local table to the given peer, submit some!
 +    return null == queuedWork;
 +  }
 +
 +  @Override
 +  protected boolean queueWork(Path path, ReplicationTarget target) {
 +    String queueKey = DistributedWorkQueueWorkAssignerHelper.getQueueKey(path.getName(), target);
 +    Map<String,String> workForPeer = this.queuedWorkByPeerName.get(target.getPeerName());
 +    if (null == workForPeer) {
 +      workForPeer = new HashMap<>();
 +      this.queuedWorkByPeerName.put(target.getPeerName(), workForPeer);
 +    }
 +
 +    String queuedWork = workForPeer.get(target.getSourceTableId());
 +    if (null == queuedWork) {
 +      try {
 +        workQueue.addWork(queueKey, path.toString());
 +        workForPeer.put(target.getSourceTableId(), queueKey);
 +      } catch (KeeperException | InterruptedException e) {
 +        log.warn("Could not queue work for {} to {}", path, target, e);
 +        return false;
 +      }
 +
 +      return true;
 +    } else if (queuedWork.startsWith(path.getName())) {
 +      log.debug("Not re-queueing work for {} as it has already been queued for replication to {}", path, target);
 +      return false;
 +    } else {
 +      log.debug("Not queueing {} for work as {} must be replicated to {} first", path, queuedWork, target.getPeerName());
 +      return false;
 +    }
 +  }
 +
 +  @Override
 +  protected Set<String> getQueuedWork(ReplicationTarget target) {
 +    Map<String,String> queuedWorkForPeer = this.queuedWorkByPeerName.get(target.getPeerName());
 +    if (null == queuedWorkForPeer) {
 +      return Collections.emptySet();
 +    }
 +
 +    String queuedWork = queuedWorkForPeer.get(target.getSourceTableId());
 +    if (null == queuedWork) {
 +      return Collections.emptySet();
 +    } else {
 +      return Collections.singleton(queuedWork);
 +    }
 +  }
 +
 +  @Override
 +  protected void removeQueuedWork(ReplicationTarget target, String queueKey) {
 +    Map<String,String> queuedWorkForPeer = this.queuedWorkByPeerName.get(target.getPeerName());
 +    if (null == queuedWorkForPeer) {
 +      log.warn("removeQueuedWork called when no work was queued for {}", target.getPeerName());
 +      return;
 +    }
 +
 +    String queuedWork = queuedWorkForPeer.get(target.getSourceTableId());
 +    if (null == queuedWork) {
 +      log.warn("removeQueuedWork called when no work was queued for {} from table with ID {}", target.getPeerName(), target.getSourceTableId());
 +    } else if (queuedWork.equals(queueKey)) {
 +      queuedWorkForPeer.remove(target.getSourceTableId());
 +    } else {
 +      log.warn("removeQueuedWork called on {} with differing queueKeys, expected {} but was {}", target, queueKey, queuedWork);
 +    }
 +  }
 +}

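For illustration, a minimal sketch (not part of this commit) of the queue-key round trip relied on above; the filename, peer name, remote identifier, and
source table id are illustrative assumptions, and imports are omitted:

    // Hypothetical sketch: encode a work-queue key with the helper used by
    // queueWork() above, then decode it as initializeQueuedWork() does.
    ReplicationTarget target = new ReplicationTarget("peer1", "remoteTable", "2");
    String queueKey = DistributedWorkQueueWorkAssignerHelper.getQueueKey("wal-0001", target);
    Entry<String,ReplicationTarget> parsed = DistributedWorkQueueWorkAssignerHelper.fromQueueKey(queueKey);
    assert "wal-0001".equals(parsed.getKey());
    assert target.equals(parsed.getValue());
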
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
----------------------------------------------------------------------


[06/19] accumulo git commit: ACCUMULO-4102 Configure javadoc plugin for jdk8

Posted by ct...@apache.org.
ACCUMULO-4102 Configure javadoc plugin for jdk8


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/4169a12b
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/4169a12b
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/4169a12b

Branch: refs/heads/master
Commit: 4169a12b52c6e6744a975eab60f8b29fdcb2f22b
Parents: f38d5e7
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 17:43:57 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 17:43:57 2016 -0500

----------------------------------------------------------------------
 pom.xml | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/4169a12b/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index ea40f31..6138dbc 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1401,11 +1401,28 @@
     <profile>
       <id>jdk8</id>
       <activation>
-        <jdk>[1.8,)</jdk>
+        <jdk>[1.8,1.9)</jdk>
       </activation>
       <properties>
         <findbugs.version>3.0.1</findbugs.version>
       </properties>
+      <build>
+        <pluginManagement>
+          <plugins>
+            <plugin>
+              <groupId>org.apache.maven.plugins</groupId>
+              <artifactId>maven-javadoc-plugin</artifactId>
+              <configuration>
+                <encoding>${project.reporting.outputEncoding}</encoding>
+                <quiet>true</quiet>
+                <javadocVersion>1.8</javadocVersion>
+                <additionalJOption>-J-Xmx512m</additionalJOption>
+                <additionalparam>-Xdoclint:all,-Xdoclint:-missing</additionalparam>
+              </configuration>
+            </plugin>
+          </plugins>
+        </pluginManagement>
+      </build>
     </profile>
   </profiles>
 </project>


[15/19] accumulo git commit: Merge branch 'javadoc-jdk8-1.6' into javadoc-jdk8-1.7

Posted by ct...@apache.org.
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
----------------------------------------------------------------------
diff --cc server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
index 01bd23a,0000000..bf582c7
mode 100644,000000..100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
@@@ -1,167 -1,0 +1,167 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.monitor.servlets;
 +
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import javax.servlet.http.HttpServletRequest;
 +import javax.servlet.http.HttpServletResponse;
 +
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.admin.TableOperations;
 +import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
 +import org.apache.accumulo.core.replication.ReplicationConstants;
 +import org.apache.accumulo.core.replication.ReplicationTable;
 +import org.apache.accumulo.core.replication.ReplicationTarget;
 +import org.apache.accumulo.core.zookeeper.ZooUtil;
 +import org.apache.accumulo.monitor.Monitor;
 +import org.apache.accumulo.monitor.util.Table;
 +import org.apache.accumulo.monitor.util.celltypes.NumberType;
 +import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
 +import org.apache.accumulo.server.replication.ReplicationUtil;
 +import org.apache.accumulo.server.zookeeper.DistributedWorkQueue;
 +import org.apache.zookeeper.KeeperException;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * Monitor servlet that displays the replication status of the instance and any in-progress replication work.
 + */
 +public class ReplicationServlet extends BasicServlet {
 +  private static final Logger log = LoggerFactory.getLogger(ReplicationServlet.class);
 +
 +  private static final long serialVersionUID = 1L;
 +
 +  // transient because it's not serializable and servlets are serializable
 +  private transient volatile ReplicationUtil replicationUtil = null;
 +
 +  private synchronized ReplicationUtil getReplicationUtil() {
 +    // make transient replicationUtil available as needed
 +    if (replicationUtil == null) {
 +      replicationUtil = new ReplicationUtil(Monitor.getContext());
 +    }
 +    return replicationUtil;
 +  }
 +
 +  @Override
 +  protected String getTitle(HttpServletRequest req) {
 +    return "Replication Overview";
 +  }
 +
 +  @Override
 +  protected void pageBody(HttpServletRequest req, HttpServletResponse response, StringBuilder sb) throws Exception {
 +    final Connector conn = Monitor.getContext().getConnector();
 +    final MasterMonitorInfo mmi = Monitor.getMmi();
 +
 +    // The total number of "slots" we have to replicate data
 +    int totalWorkQueueSize = getReplicationUtil().getMaxReplicationThreads(mmi);
 +
 +    TableOperations tops = conn.tableOperations();
 +    if (!ReplicationTable.isOnline(conn)) {
 +      banner(sb, "", "Replication table is offline");
 +      return;
 +    }
 +
 +    Table replicationStats = new Table("replicationStats", "Replication Status");
 +    replicationStats.addSortableColumn("Table");
 +    replicationStats.addSortableColumn("Peer");
 +    replicationStats.addSortableColumn("Remote Identifier");
 +    replicationStats.addSortableColumn("ReplicaSystem Type");
 +    replicationStats.addSortableColumn("Files needing replication", new NumberType<Long>(), null);
 +
 +    Map<String,String> peers = getReplicationUtil().getPeers();
 +
 +    // The total set of configured targets
 +    Set<ReplicationTarget> allConfiguredTargets = getReplicationUtil().getReplicationTargets();
 +
 +    // Number of files per target we have to replicate
 +    Map<ReplicationTarget,Long> targetCounts = getReplicationUtil().getPendingReplications();
 +
 +    Map<String,String> tableNameToId = tops.tableIdMap();
 +    Map<String,String> tableIdToName = getReplicationUtil().invert(tableNameToId);
 +
 +    long filesPendingOverAllTargets = 0L;
 +    for (ReplicationTarget configuredTarget : allConfiguredTargets) {
 +      String tableName = tableIdToName.get(configuredTarget.getSourceTableId());
 +      if (null == tableName) {
 +        log.trace("Could not determine table name from id {}", configuredTarget.getSourceTableId());
 +        continue;
 +      }
 +
 +      String replicaSystemClass = peers.get(configuredTarget.getPeerName());
 +      if (null == replicaSystemClass) {
 +        log.trace("Could not determine configured ReplicaSystem for {}", configuredTarget.getPeerName());
 +        continue;
 +      }
 +
 +      Long numFiles = targetCounts.get(configuredTarget);
 +
 +      if (null == numFiles) {
 +        replicationStats.addRow(tableName, configuredTarget.getPeerName(), configuredTarget.getRemoteIdentifier(), replicaSystemClass, 0);
 +      } else {
 +        replicationStats.addRow(tableName, configuredTarget.getPeerName(), configuredTarget.getRemoteIdentifier(), replicaSystemClass, numFiles);
 +        filesPendingOverAllTargets += numFiles;
 +      }
 +    }
 +
 +    // Up to 2x the number of slots for replication available, WARN
 +    // More than 2x the number of slots for replication available, ERROR
 +    NumberType<Long> filesPendingFormat = new NumberType<Long>(Long.valueOf(0), Long.valueOf(2 * totalWorkQueueSize), Long.valueOf(0),
 +        Long.valueOf(4 * totalWorkQueueSize));
 +
 +    String utilization = filesPendingFormat.format(filesPendingOverAllTargets);
 +
-     sb.append("<div><center><br/><span class=\"table-caption\">Total files pending replication: ").append(utilization).append("</span></center></div>");
++    sb.append("<div><center><br /><span class=\"table-caption\">Total files pending replication: ").append(utilization).append("</span></center></div>");
 +
 +    replicationStats.generate(req, sb);
 +
 +    // Make a table for the replication data in progress
 +    Table replicationInProgress = new Table("replicationInProgress", "In-Progress Replication");
 +    replicationInProgress.addSortableColumn("File");
 +    replicationInProgress.addSortableColumn("Peer");
 +    replicationInProgress.addSortableColumn("Source Table ID");
 +    replicationInProgress.addSortableColumn("Peer Identifier");
 +    replicationInProgress.addUnsortableColumn("Status");
 +
 +    // Read the files from the workqueue in zk
 +    String zkRoot = ZooUtil.getRoot(Monitor.getContext().getInstance());
 +    final String workQueuePath = zkRoot + ReplicationConstants.ZOO_WORK_QUEUE;
 +
 +    DistributedWorkQueue workQueue = new DistributedWorkQueue(workQueuePath, Monitor.getContext().getConfiguration());
 +
 +    try {
 +      for (String queueKey : workQueue.getWorkQueued()) {
 +        Entry<String,ReplicationTarget> queueKeyPair = DistributedWorkQueueWorkAssignerHelper.fromQueueKey(queueKey);
 +        String filename = queueKeyPair.getKey();
 +        ReplicationTarget target = queueKeyPair.getValue();
 +
 +        String path = getReplicationUtil().getAbsolutePath(conn, workQueuePath, queueKey);
 +        String progress = getReplicationUtil().getProgress(conn, path, target);
 +
 +        // Add a row in the table
 +        replicationInProgress.addRow(null == path ? ".../" + filename : path, target.getPeerName(), target.getSourceTableId(), target.getRemoteIdentifier(),
 +            progress);
 +      }
 +    } catch (KeeperException | InterruptedException e) {
 +      log.warn("Could not calculate replication in progress", e);
 +    }
 +
 +    replicationInProgress.generate(req, sb);
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionStrategy.java
----------------------------------------------------------------------
diff --cc server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionStrategy.java
index 40cb604,fa0b68b..0e0089a
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionStrategy.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionStrategy.java
@@@ -43,12 -43,6 +43,12 @@@ public abstract class CompactionStrateg
     * {@link #getCompactionPlan(MajorCompactionRequest)}) that it does not need to. Any state stored during shouldCompact will no longer exist when
     * {@link #gatherInformation(MajorCompactionRequest)} and {@link #getCompactionPlan(MajorCompactionRequest)} are called.
     *
-    * <P>
++   * <p>
 +   * Called while holding the tablet lock, so it should not be doing any blocking.
 +   *
-    * <P>
++   * <p>
 +   * Since no blocking should be done in this method, it is unexpected that this method will throw IOException. However, since it is in the API, it cannot be
 +   * easily removed.
     */
    public abstract boolean shouldCompact(MajorCompactionRequest request) throws IOException;
  
@@@ -64,10 -58,6 +64,10 @@@
    /**
     * Get the plan for compacting a tablets files. Called while holding the tablet lock, so it should not be doing any blocking.
     *
-    * <P>
++   * <p>
 +   * Since no blocking should be done in this method, it is unexpected that this method will throw IOException. However, since it is in the API, it cannot be
 +   * easily removed.
 +   *
     * @param request
     *          basic details about the tablet
     * @return the plan for a major compaction, or null to cancel the compaction.

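For illustration, a minimal strategy honoring the non-blocking contract described above (not part of this commit; the ten-file threshold is an arbitrary
assumption, and imports from org.apache.accumulo.tserver.compaction are omitted):

    // Hypothetical sketch: compact only when a tablet has more than ten files.
    // Both methods merely inspect the request, so nothing blocks under the tablet lock.
    public class TenFileCompactionStrategy extends CompactionStrategy {
      @Override
      public boolean shouldCompact(MajorCompactionRequest request) {
        return request.getFiles().size() > 10;
      }

      @Override
      public CompactionPlan getCompactionPlan(MajorCompactionRequest request) {
        if (request.getFiles().size() <= 10) {
          return null; // cancel the compaction
        }
        CompactionPlan plan = new CompactionPlan();
        plan.inputFiles.addAll(request.getFiles().keySet());
        return plan;
      }
    }
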
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/test/src/main/java/org/apache/accumulo/test/replication/merkle/package-info.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/replication/merkle/package-info.java
index 6afcdf5,0000000..fd19658
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/merkle/package-info.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/merkle/package-info.java
@@@ -1,38 -1,0 +1,39 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +/**
 + * A <a href="http://en.wikipedia.org/wiki/Merkle_tree">Merkle tree</a> is a hash tree and can be used to evaluate equality over large
 + * files with the ability to ascertain what portions of the files differ. Each leaf of the Merkle tree is some hash of a
 + * portion of the file, with each leaf corresponding to some "range" within the source file. As such, if all leaves are
 + * considered as ranges of the source file, the "sum" of all leaves creates a contiguous range over the entire file.
-  * <P>
++ * <p>
 + * The parent of a set of child nodes (typically two, in a binary tree; however this is not required) is the hash of the concatenation of
 + * the hashes of the children. We can construct a full tree by walking up the tree, creating parents from children, until we have a root
 + * node. To check equality of two files that each have a Merkle tree built, we can simply compare the values at the roots of
 + * the two trees to know whether or not the files are the same.
-  * <P>
++ * <p>
 + * Additionally, in the situation where we have two files which we expect to be the same but are not, we can walk back down
 + * the tree, finding subtrees that are equal and subtrees that are not. Subtrees that are equal correspond to portions of
 + * the files which are identical, where subtrees that are not equal correspond to discrepancies between the two files.
-  * <P>
++ * <p>
 + * We can apply this concept to Accumulo, treating a table as a file, and ranges within a file as an Accumulo Range. We can
 + * then compute the hashes over each of these Ranges and compute the entire Merkle tree to determine if two tables are
 + * equivalent.
 + *
 + * @since 1.7.0
 + */
- package org.apache.accumulo.test.replication.merkle;
++package org.apache.accumulo.test.replication.merkle;
++

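For illustration, the parent-node construction described above, sketched as a small helper (not part of this commit; the SHA-1 algorithm choice is an
assumption):

    import java.security.MessageDigest;
    import java.security.NoSuchAlgorithmException;

    class MerkleParentSketch {
      // Hypothetical sketch: a parent node's hash is the digest of the
      // concatenation of its children's hashes.
      static byte[] parentHash(byte[] leftChildHash, byte[] rightChildHash) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        md.update(leftChildHash);
        md.update(rightChildHash);
        return md.digest();
      }
    }
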
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/test/src/main/java/org/apache/accumulo/test/replication/merkle/skvi/DigestIterator.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/replication/merkle/skvi/DigestIterator.java
index 5fa9a5f,0000000..769241e
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/merkle/skvi/DigestIterator.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/merkle/skvi/DigestIterator.java
@@@ -1,149 -1,0 +1,149 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.replication.merkle.skvi;
 +
 +import java.io.ByteArrayOutputStream;
 +import java.io.DataOutputStream;
 +import java.io.IOException;
 +import java.security.MessageDigest;
 +import java.security.NoSuchAlgorithmException;
 +import java.util.Collection;
 +import java.util.Map;
 +
 +import org.apache.accumulo.core.data.ByteSequence;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Range;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.iterators.IteratorEnvironment;
 +import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 +
 +/**
 + * {@link SortedKeyValueIterator} which attempts to compute a hash over some range of Key-Value pairs.
-  * <P>
++ * <p>
 + * For the purposes of constructing a Merkle tree, this class will only generate a meaningful result if the (Batch)Scanner will compute a single digest over a
 + * Range. If the (Batch)Scanner stops and restarts in the middle of a session, incorrect values will be returned and the merkle tree will be invalid.
 + */
 +public class DigestIterator implements SortedKeyValueIterator<Key,Value> {
 +  public static final String HASH_NAME_KEY = "hash.name";
 +
 +  private MessageDigest digest;
 +  private Key topKey;
 +  private Value topValue;
 +  private SortedKeyValueIterator<Key,Value> source;
 +
 +  @Override
 +  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
 +    String hashName = options.get(HASH_NAME_KEY);
 +    if (null == hashName) {
 +      throw new IOException(HASH_NAME_KEY + " must be provided as option");
 +    }
 +
 +    try {
 +      this.digest = MessageDigest.getInstance(hashName);
 +    } catch (NoSuchAlgorithmException e) {
 +      throw new IOException(e);
 +    }
 +
 +    this.topKey = null;
 +    this.topValue = null;
 +    this.source = source;
 +  }
 +
 +  @Override
 +  public boolean hasTop() {
 +    return null != topKey;
 +  }
 +
 +  @Override
 +  public void next() throws IOException {
 +    // We can't call next() if we already consumed it all
 +    if (!this.source.hasTop()) {
 +      this.topKey = null;
 +      this.topValue = null;
 +      return;
 +    }
 +
 +    this.source.next();
 +
 +    consume();
 +  }
 +
 +  @Override
 +  public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
 +    this.source.seek(range, columnFamilies, inclusive);
 +
 +    consume();
 +  }
 +
 +  protected void consume() throws IOException {
 +    digest.reset();
 +    ByteArrayOutputStream baos = new ByteArrayOutputStream();
 +    DataOutputStream dos = new DataOutputStream(baos);
 +
 +    if (!this.source.hasTop()) {
 +      this.topKey = null;
 +      this.topValue = null;
 +
 +      return;
 +    }
 +
 +    Key lastKeySeen = null;
 +    while (this.source.hasTop()) {
 +      baos.reset();
 +
 +      Key currentKey = this.source.getTopKey();
 +      lastKeySeen = currentKey;
 +
 +      currentKey.write(dos);
 +      this.source.getTopValue().write(dos);
 +
 +      digest.update(baos.toByteArray());
 +
 +      this.source.next();
 +    }
 +
 +    this.topKey = lastKeySeen;
 +    this.topValue = new Value(digest.digest());
 +  }
 +
 +  @Override
 +  public Key getTopKey() {
 +    return topKey;
 +  }
 +
 +  @Override
 +  public Value getTopValue() {
 +    return topValue;
 +  }
 +
 +  @Override
 +  public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
 +    DigestIterator copy = new DigestIterator();
 +    try {
 +      copy.digest = MessageDigest.getInstance(digest.getAlgorithm());
 +    } catch (NoSuchAlgorithmException e) {
 +      throw new RuntimeException(e);
 +    }
 +
 +    copy.topKey = this.topKey;
 +    copy.topValue = this.topValue;
 +    copy.source = this.source.deepCopy(env);
 +
 +    return copy;
 +  }
 +
 +}
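
For illustration only (not part of this commit), here is a minimal sketch of attaching this iterator to a scanner. The iterator priority of 50 and the SHA-1 algorithm name are assumptions made for the example:

  import org.apache.accumulo.core.client.Connector;
  import org.apache.accumulo.core.client.IteratorSetting;
  import org.apache.accumulo.core.client.Scanner;
  import org.apache.accumulo.core.client.TableNotFoundException;
  import org.apache.accumulo.core.data.Range;
  import org.apache.accumulo.core.security.Authorizations;
  import org.apache.accumulo.test.replication.merkle.skvi.DigestIterator;

  public class DigestScanSketch {
    // Attaches DigestIterator so the scanner emits a single digest for the scanned range.
    static Scanner digestScanner(Connector conn, String table) throws TableNotFoundException {
      Scanner scanner = conn.createScanner(table, Authorizations.EMPTY);
      IteratorSetting setting = new IteratorSetting(50, "digest", DigestIterator.class); // priority 50 is arbitrary
      setting.addOption(DigestIterator.HASH_NAME_KEY, "SHA-1"); // any MessageDigest algorithm name works
      scanner.addScanIterator(setting);
      scanner.setRange(new Range("a", "m")); // one digest is produced over this whole range
      return scanner;
    }
  }

As the class javadoc notes, the digest is only meaningful if the scan session covers the whole range without restarting.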


[09/19] accumulo git commit: ACCUMULO-4102 Bump maven-plugin-plugin to 3.4

Posted by ct...@apache.org.
ACCUMULO-4102 Bump maven-plugin-plugin to 3.4

* Bump maven-plugin-plugin so the generated HelpMojo doesn't have
  javadoc problems (especially on JDK8)


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/7cc81374
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/7cc81374
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/7cc81374

Branch: refs/heads/1.7
Commit: 7cc81374233b0f8ba3a243f6084eecce9d6a1e6f
Parents: 4169a12
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 20:45:03 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 20:45:49 2016 -0500

----------------------------------------------------------------------
 pom.xml | 5 +++++
 1 file changed, 5 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/7cc81374/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 6138dbc..f04aa53 100644
--- a/pom.xml
+++ b/pom.xml
@@ -904,6 +904,11 @@
             </execution>
           </executions>
         </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-plugin-plugin</artifactId>
+          <version>3.4</version>
+        </plugin>
       </plugins>
     </pluginManagement>
     <plugins>


[13/19] accumulo git commit: Merge branch 'javadoc-jdk8-1.6' into javadoc-jdk8-1.7

Posted by ct...@apache.org.
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
----------------------------------------------------------------------
diff --cc server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
index 01bd23a,0000000..bf582c7
mode 100644,000000..100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/ReplicationServlet.java
@@@ -1,167 -1,0 +1,167 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.monitor.servlets;
 +
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import javax.servlet.http.HttpServletRequest;
 +import javax.servlet.http.HttpServletResponse;
 +
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.admin.TableOperations;
 +import org.apache.accumulo.core.master.thrift.MasterMonitorInfo;
 +import org.apache.accumulo.core.replication.ReplicationConstants;
 +import org.apache.accumulo.core.replication.ReplicationTable;
 +import org.apache.accumulo.core.replication.ReplicationTarget;
 +import org.apache.accumulo.core.zookeeper.ZooUtil;
 +import org.apache.accumulo.monitor.Monitor;
 +import org.apache.accumulo.monitor.util.Table;
 +import org.apache.accumulo.monitor.util.celltypes.NumberType;
 +import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
 +import org.apache.accumulo.server.replication.ReplicationUtil;
 +import org.apache.accumulo.server.zookeeper.DistributedWorkQueue;
 +import org.apache.zookeeper.KeeperException;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + *
 + */
 +public class ReplicationServlet extends BasicServlet {
 +  private static final Logger log = LoggerFactory.getLogger(ReplicationServlet.class);
 +
 +  private static final long serialVersionUID = 1L;
 +
 +  // transient because it's not serializable and servlets are serializable
 +  private transient volatile ReplicationUtil replicationUtil = null;
 +
 +  private synchronized ReplicationUtil getReplicationUtil() {
 +    // make transient replicationUtil available as needed
 +    if (replicationUtil == null) {
 +      replicationUtil = new ReplicationUtil(Monitor.getContext());
 +    }
 +    return replicationUtil;
 +  }
 +
 +  @Override
 +  protected String getTitle(HttpServletRequest req) {
 +    return "Replication Overview";
 +  }
 +
 +  @Override
 +  protected void pageBody(HttpServletRequest req, HttpServletResponse response, StringBuilder sb) throws Exception {
 +    final Connector conn = Monitor.getContext().getConnector();
 +    final MasterMonitorInfo mmi = Monitor.getMmi();
 +
 +    // The total number of "slots" we have to replicate data
 +    int totalWorkQueueSize = getReplicationUtil().getMaxReplicationThreads(mmi);
 +
 +    TableOperations tops = conn.tableOperations();
 +    if (!ReplicationTable.isOnline(conn)) {
 +      banner(sb, "", "Replication table is offline");
 +      return;
 +    }
 +
 +    Table replicationStats = new Table("replicationStats", "Replication Status");
 +    replicationStats.addSortableColumn("Table");
 +    replicationStats.addSortableColumn("Peer");
 +    replicationStats.addSortableColumn("Remote Identifier");
 +    replicationStats.addSortableColumn("ReplicaSystem Type");
 +    replicationStats.addSortableColumn("Files needing replication", new NumberType<Long>(), null);
 +
 +    Map<String,String> peers = getReplicationUtil().getPeers();
 +
 +    // The total set of configured targets
 +    Set<ReplicationTarget> allConfiguredTargets = getReplicationUtil().getReplicationTargets();
 +
 +    // Number of files per target we have to replicate
 +    Map<ReplicationTarget,Long> targetCounts = getReplicationUtil().getPendingReplications();
 +
 +    Map<String,String> tableNameToId = tops.tableIdMap();
 +    Map<String,String> tableIdToName = getReplicationUtil().invert(tableNameToId);
 +
 +    long filesPendingOverAllTargets = 0l;
 +    for (ReplicationTarget configuredTarget : allConfiguredTargets) {
 +      String tableName = tableIdToName.get(configuredTarget.getSourceTableId());
 +      if (null == tableName) {
 +        log.trace("Could not determine table name from id {}", configuredTarget.getSourceTableId());
 +        continue;
 +      }
 +
 +      String replicaSystemClass = peers.get(configuredTarget.getPeerName());
 +      if (null == replicaSystemClass) {
 +        log.trace("Could not determine configured ReplicaSystem for {}", configuredTarget.getPeerName());
 +        continue;
 +      }
 +
 +      Long numFiles = targetCounts.get(configuredTarget);
 +
 +      if (null == numFiles) {
 +        replicationStats.addRow(tableName, configuredTarget.getPeerName(), configuredTarget.getRemoteIdentifier(), replicaSystemClass, 0);
 +      } else {
 +        replicationStats.addRow(tableName, configuredTarget.getPeerName(), configuredTarget.getRemoteIdentifier(), replicaSystemClass, numFiles);
 +        filesPendingOverAllTargets += numFiles;
 +      }
 +    }
 +
 +    // Up to 2x the number of slots for replication available, WARN
 +    // More than 2x the number of slots for replication available, ERROR
 +    NumberType<Long> filesPendingFormat = new NumberType<Long>(Long.valueOf(0), Long.valueOf(2 * totalWorkQueueSize), Long.valueOf(0),
 +        Long.valueOf(4 * totalWorkQueueSize));
 +
 +    String utilization = filesPendingFormat.format(filesPendingOverAllTargets);
 +
-     sb.append("<div><center><br/><span class=\"table-caption\">Total files pending replication: ").append(utilization).append("</span></center></div>");
++    sb.append("<div><center><br /><span class=\"table-caption\">Total files pending replication: ").append(utilization).append("</span></center></div>");
 +
 +    replicationStats.generate(req, sb);
 +
 +    // Make a table for the replication data in progress
 +    Table replicationInProgress = new Table("replicationInProgress", "In-Progress Replication");
 +    replicationInProgress.addSortableColumn("File");
 +    replicationInProgress.addSortableColumn("Peer");
 +    replicationInProgress.addSortableColumn("Source Table ID");
 +    replicationInProgress.addSortableColumn("Peer Identifier");
 +    replicationInProgress.addUnsortableColumn("Status");
 +
 +    // Read the files from the workqueue in zk
 +    String zkRoot = ZooUtil.getRoot(Monitor.getContext().getInstance());
 +    final String workQueuePath = zkRoot + ReplicationConstants.ZOO_WORK_QUEUE;
 +
 +    DistributedWorkQueue workQueue = new DistributedWorkQueue(workQueuePath, Monitor.getContext().getConfiguration());
 +
 +    try {
 +      for (String queueKey : workQueue.getWorkQueued()) {
 +        Entry<String,ReplicationTarget> queueKeyPair = DistributedWorkQueueWorkAssignerHelper.fromQueueKey(queueKey);
 +        String filename = queueKeyPair.getKey();
 +        ReplicationTarget target = queueKeyPair.getValue();
 +
 +        String path = getReplicationUtil().getAbsolutePath(conn, workQueuePath, queueKey);
 +        String progress = getReplicationUtil().getProgress(conn, path, target);
 +
 +        // Add a row in the table
 +        replicationInProgress.addRow(null == path ? ".../" + filename : path, target.getPeerName(), target.getSourceTableId(), target.getRemoteIdentifier(),
 +            progress);
 +      }
 +    } catch (KeeperException | InterruptedException e) {
 +      log.warn("Could not calculate replication in progress", e);
 +    }
 +
 +    replicationInProgress.generate(req, sb);
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionStrategy.java
----------------------------------------------------------------------
diff --cc server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionStrategy.java
index 40cb604,fa0b68b..0e0089a
--- a/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionStrategy.java
+++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/compaction/CompactionStrategy.java
@@@ -43,12 -43,6 +43,12 @@@ public abstract class CompactionStrateg
     * {@link #getCompactionPlan(MajorCompactionRequest)}) that it does not need to. Any state stored during shouldCompact will no longer exist when
     * {@link #gatherInformation(MajorCompactionRequest)} and {@link #getCompactionPlan(MajorCompactionRequest)} are called.
     *
-    * <P>
++   * <p>
 +   * Called while holding the tablet lock, so it should not be doing any blocking.
 +   *
-    * <P>
++   * <p>
 +   * Since no blocking should be done in this method, it is unexpected that this method will throw an IOException. However, since it is in the API, it cannot
 +   * be easily removed.
     */
    public abstract boolean shouldCompact(MajorCompactionRequest request) throws IOException;
  
@@@ -64,10 -58,6 +64,10 @@@
    /**
     * Get the plan for compacting a tablets files. Called while holding the tablet lock, so it should not be doing any blocking.
     *
-    * <P>
++   * <p>
 +   * Since no blocking should be done in this method, it is unexpected that this method will throw an IOException. However, since it is in the API, it cannot
 +   * be easily removed.
 +   *
     * @param request
     *          basic details about the tablet
     * @return the plan for a major compaction, or null to cancel the compaction.
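
As a sketch of this contract (assuming the tserver compaction API shown in this diff: an abstract CompactionStrategy with shouldCompact and getCompactionPlan, and a CompactionPlan exposing a public inputFiles list), a hypothetical strategy that never blocks and always compacts every file might look like:

  import org.apache.accumulo.tserver.compaction.CompactionPlan;
  import org.apache.accumulo.tserver.compaction.CompactionStrategy;
  import org.apache.accumulo.tserver.compaction.MajorCompactionRequest;

  public class CompactEverythingStrategy extends CompactionStrategy {
    @Override
    public boolean shouldCompact(MajorCompactionRequest request) {
      return true; // constant-time answer; nothing blocking while the tablet lock is held
    }

    @Override
    public CompactionPlan getCompactionPlan(MajorCompactionRequest request) {
      CompactionPlan plan = new CompactionPlan();
      plan.inputFiles.addAll(request.getFiles().keySet()); // compact all of the tablet's files
      return plan;
    }
  }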

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/test/src/main/java/org/apache/accumulo/test/replication/merkle/package-info.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/replication/merkle/package-info.java
index 6afcdf5,0000000..fd19658
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/merkle/package-info.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/merkle/package-info.java
@@@ -1,38 -1,0 +1,39 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +/**
 + * A <a href="http://en.wikipedia.org/wiki/Merkle_tree">Merkle tree</a> is a hash tree and can be used to evaluate equality over large
 + * files with the ability to ascertain what portions of the files differ. Each leaf of the Merkle tree is some hash of a
 + * portion of the file, with each leaf corresponding to some "range" within the source file. As such, if all leaves are
 + * considered as ranges of the source file, the "sum" of all leaves creates a contiguous range over the entire file.
-  * <P>
++ * <p>
 + * The parent of any nodes (typically in a binary tree, though this is not required) is the hash of the concatenation of
 + * its children's hashes. We can construct a full tree by walking up the tree, creating parents from children, until we
 + * have a root node. To check equality of two files that each have a Merkle tree built, we can simply compare the value at
 + * the root of the Merkle tree to know whether or not the files are the same.
-  * <P>
++ * <p>
 + * Additionally, when we have two files which we expect to be the same but are not, we can walk back down
 + * the tree, finding subtrees that are equal and subtrees that are not. Subtrees that are equal correspond to portions of
 + * the files which are identical, where subtrees that are not equal correspond to discrepancies between the two files.
-  * <P>
++ * <p>
 + * We can apply this concept to Accumulo, treating a table as a file, and ranges within a file as an Accumulo Range. We can
 + * then compute the hashes over each of these Ranges and compute the entire Merkle tree to determine if two tables are
 + * equivalent.
 + *
 + * @since 1.7.0
 + */
- package org.apache.accumulo.test.replication.merkle;
++package org.apache.accumulo.test.replication.merkle;
++
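
To make the parent-from-children construction concrete, here is a minimal, self-contained sketch (not the test suite's implementation) that folds a list of leaf hashes up to a root:

  import java.security.MessageDigest;
  import java.security.NoSuchAlgorithmException;
  import java.util.ArrayList;
  import java.util.List;

  public class MerkleRootSketch {
    // Repeatedly hashes the concatenation of sibling hashes until one root remains.
    static byte[] root(List<byte[]> leafHashes, String hashName) throws NoSuchAlgorithmException {
      if (leafHashes.isEmpty()) {
        throw new IllegalArgumentException("no leaves");
      }
      List<byte[]> level = new ArrayList<byte[]>(leafHashes);
      while (level.size() > 1) {
        List<byte[]> parents = new ArrayList<byte[]>();
        for (int i = 0; i < level.size(); i += 2) {
          MessageDigest digest = MessageDigest.getInstance(hashName);
          digest.update(level.get(i));
          if (i + 1 < level.size()) {
            digest.update(level.get(i + 1)); // a lone trailing node is simply re-hashed
          }
          parents.add(digest.digest());
        }
        level = parents;
      }
      return level.get(0); // equal roots imply equal files; unequal subtrees localize the differences
    }
  }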

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/test/src/main/java/org/apache/accumulo/test/replication/merkle/skvi/DigestIterator.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/replication/merkle/skvi/DigestIterator.java
index 5fa9a5f,0000000..769241e
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/merkle/skvi/DigestIterator.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/merkle/skvi/DigestIterator.java
@@@ -1,149 -1,0 +1,149 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.replication.merkle.skvi;
 +
 +import java.io.ByteArrayOutputStream;
 +import java.io.DataOutputStream;
 +import java.io.IOException;
 +import java.security.MessageDigest;
 +import java.security.NoSuchAlgorithmException;
 +import java.util.Collection;
 +import java.util.Map;
 +
 +import org.apache.accumulo.core.data.ByteSequence;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Range;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.iterators.IteratorEnvironment;
 +import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 +
 +/**
 + * {@link SortedKeyValueIterator} which attempts to compute a hash over some range of Key-Value pairs.
-  * <P>
++ * <p>
 + * For the purposes of constructing a Merkle tree, this class will only generate a meaningful result if the (Batch)Scanner will compute a single digest over a
 + * Range. If the (Batch)Scanner stops and restarts in the middle of a session, incorrect values will be returned and the Merkle tree will be invalid.
 + */
 +public class DigestIterator implements SortedKeyValueIterator<Key,Value> {
 +  public static final String HASH_NAME_KEY = "hash.name";
 +
 +  private MessageDigest digest;
 +  private Key topKey;
 +  private Value topValue;
 +  private SortedKeyValueIterator<Key,Value> source;
 +
 +  @Override
 +  public void init(SortedKeyValueIterator<Key,Value> source, Map<String,String> options, IteratorEnvironment env) throws IOException {
 +    String hashName = options.get(HASH_NAME_KEY);
 +    if (null == hashName) {
 +      throw new IOException(HASH_NAME_KEY + " must be provided as option");
 +    }
 +
 +    try {
 +      this.digest = MessageDigest.getInstance(hashName);
 +    } catch (NoSuchAlgorithmException e) {
 +      throw new IOException(e);
 +    }
 +
 +    this.topKey = null;
 +    this.topValue = null;
 +    this.source = source;
 +  }
 +
 +  @Override
 +  public boolean hasTop() {
 +    return null != topKey;
 +  }
 +
 +  @Override
 +  public void next() throws IOException {
 +    // We can't call next() if we already consumed it all
 +    if (!this.source.hasTop()) {
 +      this.topKey = null;
 +      this.topValue = null;
 +      return;
 +    }
 +
 +    this.source.next();
 +
 +    consume();
 +  }
 +
 +  @Override
 +  public void seek(Range range, Collection<ByteSequence> columnFamilies, boolean inclusive) throws IOException {
 +    this.source.seek(range, columnFamilies, inclusive);
 +
 +    consume();
 +  }
 +
 +  protected void consume() throws IOException {
 +    digest.reset();
 +    ByteArrayOutputStream baos = new ByteArrayOutputStream();
 +    DataOutputStream dos = new DataOutputStream(baos);
 +
 +    if (!this.source.hasTop()) {
 +      this.topKey = null;
 +      this.topValue = null;
 +
 +      return;
 +    }
 +
 +    Key lastKeySeen = null;
 +    while (this.source.hasTop()) {
 +      baos.reset();
 +
 +      Key currentKey = this.source.getTopKey();
 +      lastKeySeen = currentKey;
 +
 +      currentKey.write(dos);
 +      this.source.getTopValue().write(dos);
 +
 +      digest.update(baos.toByteArray());
 +
 +      this.source.next();
 +    }
 +
 +    this.topKey = lastKeySeen;
 +    this.topValue = new Value(digest.digest());
 +  }
 +
 +  @Override
 +  public Key getTopKey() {
 +    return topKey;
 +  }
 +
 +  @Override
 +  public Value getTopValue() {
 +    return topValue;
 +  }
 +
 +  @Override
 +  public SortedKeyValueIterator<Key,Value> deepCopy(IteratorEnvironment env) {
 +    DigestIterator copy = new DigestIterator();
 +    try {
 +      copy.digest = MessageDigest.getInstance(digest.getAlgorithm());
 +    } catch (NoSuchAlgorithmException e) {
 +      throw new RuntimeException(e);
 +    }
 +
 +    copy.topKey = this.topKey;
 +    copy.topValue = this.topValue;
 +    copy.source = this.source.deepCopy(env);
 +
 +    return copy;
 +  }
 +
 +}


[04/19] accumulo git commit: ACCUMULO-4102 Configure javadoc plugin for jdk8

Posted by ct...@apache.org.
ACCUMULO-4102 Configure javadoc plugin for jdk8


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/4169a12b
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/4169a12b
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/4169a12b

Branch: refs/heads/1.6
Commit: 4169a12b52c6e6744a975eab60f8b29fdcb2f22b
Parents: f38d5e7
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 17:43:57 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 17:43:57 2016 -0500

----------------------------------------------------------------------
 pom.xml | 19 ++++++++++++++++++-
 1 file changed, 18 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/4169a12b/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index ea40f31..6138dbc 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1401,11 +1401,28 @@
     <profile>
       <id>jdk8</id>
       <activation>
-        <jdk>[1.8,)</jdk>
+        <jdk>[1.8,1.9)</jdk>
       </activation>
       <properties>
         <findbugs.version>3.0.1</findbugs.version>
       </properties>
+      <build>
+        <pluginManagement>
+          <plugins>
+            <plugin>
+              <groupId>org.apache.maven.plugins</groupId>
+              <artifactId>maven-javadoc-plugin</artifactId>
+              <configuration>
+                <encoding>${project.reporting.outputEncoding}</encoding>
+                <quiet>true</quiet>
+                <javadocVersion>1.8</javadocVersion>
+                <additionalJOption>-J-Xmx512m</additionalJOption>
+                <additionalparam>-Xdoclint:all,-Xdoclint:-missing</additionalparam>
+              </configuration>
+            </plugin>
+          </plugins>
+        </pluginManagement>
+      </build>
     </profile>
   </profiles>
 </project>


[19/19] accumulo git commit: Merge branch 'javadoc-jdk8-1.7'

Posted by ct...@apache.org.
Merge branch 'javadoc-jdk8-1.7'


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/8ff2ca81
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/8ff2ca81
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/8ff2ca81

Branch: refs/heads/master
Commit: 8ff2ca81cd6b2e7ddc76197bd60cfea64eac465f
Parents: c252d1a 0ccba14
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 22:35:43 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 22:35:43 2016 -0500

----------------------------------------------------------------------
 .../core/bloomfilter/DynamicBloomFilter.java    |  4 +--
 .../accumulo/core/client/BatchWriterConfig.java | 10 +++---
 .../core/client/ConditionalWriterConfig.java    |  4 +--
 .../accumulo/core/client/ScannerBase.java       |  2 --
 .../client/mapred/AccumuloFileOutputFormat.java |  4 +--
 .../mapreduce/AccumuloFileOutputFormat.java     |  4 +--
 .../lib/impl/FileOutputConfigurator.java        |  4 +--
 .../lib/util/FileOutputConfigurator.java        |  4 +--
 .../security/tokens/AuthenticationToken.java    |  2 +-
 .../core/constraints/VisibilityConstraint.java  |  1 -
 .../java/org/apache/accumulo/core/data/Key.java |  2 +-
 .../org/apache/accumulo/core/data/Range.java    |  6 ++--
 .../file/blockfile/cache/CachedBlockQueue.java  |  2 +-
 .../core/file/blockfile/cache/ClassSize.java    |  4 +--
 .../accumulo/core/file/rfile/bcfile/Utils.java  | 35 +++++++++++---------
 .../core/iterators/IteratorEnvironment.java     |  2 --
 .../user/WholeColumnFamilyIterator.java         |  4 +--
 .../core/metadata/ServicerForMetadataTable.java |  2 +-
 .../core/metadata/ServicerForRootTable.java     |  2 +-
 .../core/metadata/ServicerForUserTables.java    |  2 +-
 .../core/metadata/schema/MetadataSchema.java    |  2 +-
 .../core/replication/ReplicationSchema.java     |  6 ++--
 .../accumulo/core/sample/RowColumnSampler.java  |  4 +--
 .../core/security/ColumnVisibility.java         |  8 ++---
 .../security/crypto/CryptoModuleParameters.java |  7 +---
 .../org/apache/accumulo/core/util/OpTimer.java  |  7 ++--
 .../accumulo/core/conf/config-header.html       | 12 +++----
 .../examples/simple/filedata/ChunkCombiner.java | 18 +++++-----
 pom.xml                                         | 23 +++++++++++++
 .../apache/accumulo/server/ServerConstants.java |  2 +-
 .../server/master/balancer/GroupBalancer.java   |  4 +--
 .../master/balancer/RegexGroupBalancer.java     |  6 ++--
 .../server/security/SecurityOperation.java      |  6 ++--
 .../server/security/UserImpersonation.java      |  2 +-
 .../server/security/SystemCredentialsTest.java  |  2 +-
 .../replication/SequentialWorkAssigner.java     |  2 +-
 .../monitor/servlets/DefaultServlet.java        |  2 +-
 .../monitor/servlets/ReplicationServlet.java    |  2 +-
 .../monitor/servlets/TablesServlet.java         |  4 +--
 .../tserver/compaction/CompactionStrategy.java  |  6 ++--
 .../accumulo/test/functional/ScanIdIT.java      | 11 +++---
 .../test/replication/merkle/package-info.java   |  9 ++---
 .../replication/merkle/skvi/DigestIterator.java |  2 +-
 43 files changed, 135 insertions(+), 112 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
index aed67bc,b5692d2..51f6fae
--- a/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ScannerBase.java
@@@ -175,98 -174,4 +175,96 @@@ public interface ScannerBase extends It
     * @return The authorizations set on the scanner instance
     */
    Authorizations getAuthorizations();
 +
 +  /**
 +   * Setting this will cause the scanner to read sample data, as long as that sample data was generated with the given configuration. By default this is not set
 +   * and all data is read.
 +   *
 +   * <p>
 +   * One way to use this method is as follows, where the sampler configuration is obtained from the table configuration. Sample data can be generated in many
 +   * different ways, so it's important to verify the sample data configuration meets expectations.
 +   *
-    * <p>
-    *
 +   * <pre>
 +   * <code>
 +   *   // could cache this if creating many scanners to avoid RPCs.
 +   *   SamplerConfiguration samplerConfig = connector.tableOperations().getSamplerConfiguration(table);
 +   *   // verify table's sample data is generated in an expected way before using
 +   *   userCode.verifySamplerConfig(samplerConfig);
 +   *   scanner.setSamplerConfiguration(samplerConfig);
 +   * </code>
 +   * </pre>
 +   *
 +   * <p>
 +   * Of course this is not the only way to obtain a {@link SamplerConfiguration}, it could be a constant, configuration, etc.
 +   *
 +   * <p>
 +   * If sample data is not present or sample data was generated with a different configuration, then the scanner iterator will throw a
 +   * {@link SampleNotPresentException}. Also if a table's sampler configuration is changed while a scanner is iterating over a table, a
 +   * {@link SampleNotPresentException} may be thrown.
 +   *
 +   * @since 1.8.0
 +   */
 +  void setSamplerConfiguration(SamplerConfiguration samplerConfig);
 +
 +  /**
 +   * @return currently set sampler configuration. Returns null if no sampler configuration is set.
 +   * @since 1.8.0
 +   */
 +  SamplerConfiguration getSamplerConfiguration();
 +
 +  /**
 +   * Clears sampler configuration making a scanner read all data. After calling this, {@link #getSamplerConfiguration()} should return null.
 +   *
 +   * @since 1.8.0
 +   */
 +  void clearSamplerConfiguration();
 +
 +  /**
 +   * This setting determines how long a scanner will wait to fill the returned batch. By default, a scanner waits until the batch is full.
 +   *
 +   * <p>
 +   * Setting the timeout to zero (with any time unit) or {@link Long#MAX_VALUE} (with {@link TimeUnit#MILLISECONDS}) means no timeout.
 +   *
 +   * @param timeOut
 +   *          the length of the timeout
 +   * @param timeUnit
 +   *          the units of the timeout
 +   * @since 1.8.0
 +   */
 +  void setBatchTimeout(long timeOut, TimeUnit timeUnit);
 +
 +  /**
 +   * Returns the timeout to fill a batch in the given TimeUnit.
 +   *
 +   * @return the batch timeout configured for this scanner
 +   * @since 1.8.0
 +   */
 +  long getBatchTimeout(TimeUnit timeUnit);
 +
 +  /**
 +   * Sets the name of the classloader context on this scanner. See the administration chapter of the user manual for details on how to configure and use
 +   * classloader contexts.
 +   *
 +   * @param classLoaderContext
 +   *          name of the classloader context
 +   * @throws NullPointerException
 +   *           if context is null
 +   * @since 1.8.0
 +   */
 +  void setClassLoaderContext(String classLoaderContext);
 +
 +  /**
 +   * Clears the current classloader context set on this scanner
 +   *
 +   * @since 1.8.0
 +   */
 +  void clearClassLoaderContext();
 +
 +  /**
 +   * Returns the name of the current classloader context set on this scanner
 +   *
 +   * @return name of the current context
 +   * @since 1.8.0
 +   */
 +  String getClassLoaderContext();
  }
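
A small illustration of the batch timeout described above; the 500 ms figure is an arbitrary example:

  import java.util.concurrent.TimeUnit;

  import org.apache.accumulo.core.client.ScannerBase;

  public class BatchTimeoutSketch {
    static void configure(ScannerBase scanner) {
      // Return a partial batch after 500 ms instead of waiting for a full one.
      scanner.setBatchTimeout(500, TimeUnit.MILLISECONDS);
      // A zero timeout (with any unit) disables it again, restoring the default behavior.
    }
  }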

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/core/src/main/java/org/apache/accumulo/core/iterators/IteratorEnvironment.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/iterators/IteratorEnvironment.java
index 5dbafa6,5a53e93..5c265e2
--- a/core/src/main/java/org/apache/accumulo/core/iterators/IteratorEnvironment.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/IteratorEnvironment.java
@@@ -39,52 -37,4 +39,50 @@@ public interface IteratorEnvironment 
    void registerSideChannel(SortedKeyValueIterator<Key,Value> iter);
  
    Authorizations getAuthorizations();
 +
 +  /**
 +   * Returns a new iterator environment object that can be used to create deep copies over sample data. The new object created will use the current sampling
 +   * configuration for the table. The existing iterator environment object will not be modified.
 +   *
 +   * <p>
 +   * Since sample data could be created in many different ways, a good practice for an iterator is to verify the sampling configuration is as expected.
 +   *
-    * <p>
-    *
 +   * <pre>
 +   * <code>
 +   *   class MyIter implements SortedKeyValueIterator&lt;Key,Value&gt; {
 +   *     SortedKeyValueIterator&lt;Key,Value&gt; source;
 +   *     SortedKeyValueIterator&lt;Key,Value&gt; sampleIter;
 +   *     &#64;Override
 +   *     void init(SortedKeyValueIterator&lt;Key,Value&gt; source, Map&lt;String,String&gt; options, IteratorEnvironment env) {
 +   *       IteratorEnvironment sampleEnv = env.cloneWithSamplingEnabled();
 +   *       //do some sanity checks on sampling config
 +   *       validateSamplingConfiguration(sampleEnv.getSamplerConfiguration());
 +   *       sampleIter = source.deepCopy(sampleEnv);
 +   *       this.source = source;
 +   *     }
 +   *   }
 +   * </code>
 +   * </pre>
 +   *
 +   * @throws SampleNotPresentException
 +   *           when sampling is not configured for table.
 +   * @since 1.8.0
 +   */
 +  IteratorEnvironment cloneWithSamplingEnabled();
 +
 +  /**
 +   * There are at least two conditions under which sampling will be enabled for an environment. One is when sampling is enabled for the scan that starts
 +   * everything. Another is a deep copy made with an environment obtained by calling {@link #cloneWithSamplingEnabled()}.
 +   *
 +   * @return true if sampling is enabled for this environment.
 +   * @since 1.8.0
 +   */
 +  boolean isSamplingEnabled();
 +
 +  /**
 +   *
 +   * @return the sampling configuration if sampling is enabled for this environment, otherwise null.
 +   * @since 1.8.0
 +   */
 +  SamplerConfiguration getSamplerConfiguration();
  }

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/core/src/main/java/org/apache/accumulo/core/sample/RowColumnSampler.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/sample/RowColumnSampler.java
index ad68cf6,0000000..c3464ab
mode 100644,000000..100644
--- a/core/src/main/java/org/apache/accumulo/core/sample/RowColumnSampler.java
+++ b/core/src/main/java/org/apache/accumulo/core/sample/RowColumnSampler.java
@@@ -1,124 -1,0 +1,124 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.accumulo.core.sample;
 +
 +import java.util.Set;
 +
 +import org.apache.accumulo.core.client.admin.SamplerConfiguration;
 +import org.apache.accumulo.core.data.ByteSequence;
 +import org.apache.accumulo.core.data.Key;
 +
 +import com.google.common.collect.ImmutableSet;
 +import com.google.common.hash.HashCode;
 +import com.google.common.hash.HashFunction;
 +import com.google.common.hash.Hasher;
 +
 +/**
 + * This sampler can hash any subset of a Key's fields. The fields that are hashed for the sample are determined by the configuration options passed in
 + * {@link #init(SamplerConfiguration)}. The following key values are valid options.
 + *
-  * <UL>
++ * <ul>
 + * <li>row=true|false
 + * <li>family=true|false
 + * <li>qualifier=true|false
 + * <li>visibility=true|false
-  * </UL>
++ * </ul>
 + *
 + * <p>
 + * If not specified in the options, fields default to false.
 + *
 + * <p>
 + * To determine what options are valid for hashing see {@link AbstractHashSampler}
 + *
 + * <p>
 + * To configure Accumulo to generate sample data on one thousandth of the column qualifiers, the following SamplerConfiguration could be created and used to
 + * configure a table.
 + *
 + * <p>
 + * {@code new SamplerConfiguration(RowColumnSampler.class.getName()).setOptions(ImmutableMap.of("hasher","murmur3_32","modulus","1009","qualifier","true"))}
 + *
 + * <p>
 + * With this configuration, if a column qualifier is selected then all key values containing that column qualifier will end up in the sample data.
 + *
 + * @since 1.8.0
 + */
 +
 +public class RowColumnSampler extends AbstractHashSampler {
 +
 +  private boolean row = true;
 +  private boolean family = true;
 +  private boolean qualifier = true;
 +  private boolean visibility = true;
 +
 +  private static final Set<String> VALID_OPTIONS = ImmutableSet.of("row", "family", "qualifier", "visibility");
 +
 +  private boolean hashField(SamplerConfiguration config, String field) {
 +    String optValue = config.getOptions().get(field);
 +    if (optValue != null) {
 +      return Boolean.parseBoolean(optValue);
 +    }
 +
 +    return false;
 +  }
 +
 +  @Override
 +  protected boolean isValidOption(String option) {
 +    return super.isValidOption(option) || VALID_OPTIONS.contains(option);
 +  }
 +
 +  @Override
 +  public void init(SamplerConfiguration config) {
 +    super.init(config);
 +
 +    row = hashField(config, "row");
 +    family = hashField(config, "family");
 +    qualifier = hashField(config, "qualifier");
 +    visibility = hashField(config, "visibility");
 +
 +    if (!row && !family && !qualifier && !visibility) {
 +      throw new IllegalStateException("Must hash at least one key field");
 +    }
 +  }
 +
 +  private void putByteSquence(ByteSequence data, Hasher hasher) {
 +    hasher.putBytes(data.getBackingArray(), data.offset(), data.length());
 +  }
 +
 +  @Override
 +  protected HashCode hash(HashFunction hashFunction, Key k) {
 +    Hasher hasher = hashFunction.newHasher();
 +
 +    if (row) {
 +      putByteSquence(k.getRowData(), hasher);
 +    }
 +
 +    if (family) {
 +      putByteSquence(k.getColumnFamilyData(), hasher);
 +    }
 +
 +    if (qualifier) {
 +      putByteSquence(k.getColumnQualifierData(), hasher);
 +    }
 +
 +    if (visibility) {
 +      putByteSquence(k.getColumnVisibilityData(), hasher);
 +    }
 +
 +    return hasher.hash();
 +  }
 +}
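
Tying the javadoc's one-in-a-thousand qualifier example to a table, a sketch that assumes TableOperations.setSamplerConfiguration from the same 1.8.0 sampling work:

  import com.google.common.collect.ImmutableMap;

  import org.apache.accumulo.core.client.AccumuloException;
  import org.apache.accumulo.core.client.AccumuloSecurityException;
  import org.apache.accumulo.core.client.Connector;
  import org.apache.accumulo.core.client.TableNotFoundException;
  import org.apache.accumulo.core.client.admin.SamplerConfiguration;
  import org.apache.accumulo.core.sample.RowColumnSampler;

  public class SamplerSetupSketch {
    // Generates sample data on roughly one thousandth of column qualifiers, per the class javadoc.
    static void enableSampling(Connector conn, String table)
        throws AccumuloException, AccumuloSecurityException, TableNotFoundException {
      SamplerConfiguration config = new SamplerConfiguration(RowColumnSampler.class.getName())
          .setOptions(ImmutableMap.of("hasher", "murmur3_32", "modulus", "1009", "qualifier", "true"));
      conn.tableOperations().setSamplerConfiguration(table, config);
    }
  }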

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/core/src/main/java/org/apache/accumulo/core/util/OpTimer.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/util/OpTimer.java
index 0fb8cc0,564a824..33ece1a
--- a/core/src/main/java/org/apache/accumulo/core/util/OpTimer.java
+++ b/core/src/main/java/org/apache/accumulo/core/util/OpTimer.java
@@@ -57,75 -41,12 +57,78 @@@ public class OpTimer 
      return this;
    }
  
 -  public void stop(String msg) {
 -    if (log.isEnabledFor(level)) {
 -      long t2 = System.currentTimeMillis();
 -      String duration = String.format("%.3f secs", (t2 - t1) / 1000.0);
 -      msg = msg.replace("%DURATION%", duration);
 -      log.log(level, "tid=" + Thread.currentThread().getId() + " oid=" + opid + "  " + msg);
 +  /**
 +   * Stop the timer instance.
 +   *
 +   * @return this instance for fluent chaining.
 +   * @throws IllegalStateException
 +   *           if stop is called on instance that is not running.
 +   */
 +  public OpTimer stop() throws IllegalStateException {
 +    if (!isStarted) {
 +      throw new IllegalStateException("OpTimer is already stopped");
      }
 +    long now = System.nanoTime();
 +    isStarted = false;
 +    currentElapsedNanos += now - startNanos;
 +    return this;
    }
 +
 +  /**
 +   * Stops the timer instance and resets the current elapsed time to 0.
 +   *
 +   * @return this instance for fluent chaining
 +   */
 +  public OpTimer reset() {
 +    currentElapsedNanos = 0;
 +    isStarted = false;
 +    return this;
 +  }
 +
 +  /**
 +   * Converts the current timer value to a specific unit. Conversion to coarser granularities truncates with loss of precision.
 +   *
 +   * @param timeUnit
 +   *          the time unit to convert to.
 +   * @return the truncated time in the specified time unit.
 +   */
 +  public long now(TimeUnit timeUnit) {
 +    return timeUnit.convert(now(), TimeUnit.NANOSECONDS);
 +  }
 +
 +  /**
 +   * Returns the current elapsed time scaled to the provided time unit. This method does not truncate like {@link #now(TimeUnit)} but returns the value as a
-    * double. </p> Note: this method is not included in the hadoop 2.7 org.apache.hadoop.util.StopWatch class. If that class is adopted, then provisions will be
-    * required to replace this method.
++   * double.
++   *
++   * <p>
++   * Note: this method is not included in the hadoop 2.7 org.apache.hadoop.util.StopWatch class. If that class is adopted, then provisions will be required to
++   * replace this method.
 +   *
 +   * @param timeUnit
 +   *          the time unit to scale the elapsed time to.
 +   * @return the elapsed time of this instance scaled to the provided time unit.
 +   */
 +  public double scale(TimeUnit timeUnit) {
 +    return (double) now() / TimeUnit.NANOSECONDS.convert(1L, timeUnit);
 +  }
 +
 +  /**
 +   * Returns current timer elapsed time as nanoseconds.
 +   *
 +   * @return elapsed time in nanoseconds.
 +   */
 +  public long now() {
 +    return isStarted ? System.nanoTime() - startNanos + currentElapsedNanos : currentElapsedNanos;
 +  }
 +
 +  /**
 +   * Return the current elapsed time in nanoseconds as a string.
 +   *
 +   * @return timer elapsed time as nanoseconds.
 +   */
 +  @Override
 +  public String toString() {
 +    return String.valueOf(now());
 +  }
 +
  }
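
A short usage sketch of the reworked fluent API, assuming the no-argument constructor; the sleep stands in for the operation being timed:

  import java.util.concurrent.TimeUnit;

  import org.apache.accumulo.core.util.OpTimer;

  public class OpTimerSketch {
    static void time() throws InterruptedException {
      OpTimer timer = new OpTimer().start();
      Thread.sleep(25); // stand-in for the measured operation
      timer.stop();
      long millis = timer.now(TimeUnit.MILLISECONDS); // truncating conversion
      double secs = timer.scale(TimeUnit.SECONDS);    // fractional conversion
      System.out.printf("took %d ms (%.3f s)%n", millis, secs);
    }
  }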

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/pom.xml
----------------------------------------------------------------------
diff --cc pom.xml
index 0f8ec29,644f506..4149d7a
--- a/pom.xml
+++ b/pom.xml
@@@ -1399,31 -1405,27 +1399,54 @@@
        </properties>
      </profile>
      <profile>
+       <id>jdk8</id>
+       <activation>
+         <jdk>[1.8,1.9)</jdk>
+       </activation>
+       <build>
+         <pluginManagement>
+           <plugins>
+             <plugin>
+               <groupId>org.apache.maven.plugins</groupId>
+               <artifactId>maven-javadoc-plugin</artifactId>
+               <configuration>
+                 <encoding>${project.reporting.outputEncoding}</encoding>
+                 <quiet>true</quiet>
+                 <javadocVersion>1.8</javadocVersion>
+                 <additionalJOption>-J-Xmx512m</additionalJOption>
+                 <additionalparam>-Xdoclint:all,-Xdoclint:-missing</additionalparam>
+               </configuration>
+             </plugin>
+           </plugins>
+         </pluginManagement>
+       </build>
+     </profile>
++    <profile>
 +      <id>performanceTests</id>
 +      <build>
 +        <pluginManagement>
 +          <plugins>
 +            <!-- Add an additional execution for performance tests -->
 +            <plugin>
 +              <groupId>org.apache.maven.plugins</groupId>
 +              <artifactId>maven-failsafe-plugin</artifactId>
 +              <executions>
 +                <execution>
 +                  <!-- Run only the performance tests -->
 +                  <id>run-performance-tests</id>
 +                  <goals>
 +                    <goal>integration-test</goal>
 +                    <goal>verify</goal>
 +                  </goals>
 +                  <configuration>
 +                    <groups>${accumulo.performanceTests}</groups>
 +                  </configuration>
 +                </execution>
 +              </executions>
 +            </plugin>
 +          </plugins>
 +        </pluginManagement>
 +      </build>
 +    </profile>
    </profiles>
  </project>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
----------------------------------------------------------------------
diff --cc server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
index 57c68c4,274ec76..a29e3dc
--- a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
@@@ -58,16 -55,9 +58,16 @@@ public class SystemCredentialsTest 
      }
    }
  
 +  @Before
 +  public void setupInstance() {
 +    inst = EasyMock.createMock(Instance.class);
 +    EasyMock.expect(inst.getInstanceID()).andReturn(UUID.nameUUIDFromBytes(new byte[] {1, 2, 3, 4, 5, 6, 7, 8, 9, 0}).toString()).anyTimes();
 +    EasyMock.replay(inst);
 +  }
 +
    /**
     * This is a test to ensure the string literal in {@link ConnectorImpl#ConnectorImpl(org.apache.accumulo.core.client.impl.ClientContext)} is kept up-to-date
-    * if we move the {@link SystemToken}<br/>
+    * if we move the {@link SystemToken}<br>
     * This check will not be needed after ACCUMULO-1578
     */
    @Test

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/8ff2ca81/test/src/main/java/org/apache/accumulo/test/functional/ScanIdIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/ScanIdIT.java
index 7830939,0000000..4f78b77
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/ScanIdIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ScanIdIT.java
@@@ -1,387 -1,0 +1,390 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static com.google.common.base.Charsets.UTF_8;
 +import static org.junit.Assert.assertNotNull;
 +import static org.junit.Assert.assertTrue;
 +import static org.junit.Assert.fail;
 +
 +import java.util.EnumSet;
 +import java.util.HashSet;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Random;
 +import java.util.Set;
 +import java.util.SortedSet;
 +import java.util.TreeSet;
 +import java.util.concurrent.ConcurrentHashMap;
 +import java.util.concurrent.CountDownLatch;
 +import java.util.concurrent.ExecutorService;
 +import java.util.concurrent.Executors;
 +import java.util.concurrent.TimeUnit;
 +import java.util.concurrent.atomic.AtomicBoolean;
 +
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.IteratorSetting;
 +import org.apache.accumulo.core.client.MutationsRejectedException;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.TableNotFoundException;
 +import org.apache.accumulo.core.client.admin.ActiveScan;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Range;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.iterators.IteratorUtil;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.security.ColumnVisibility;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
 +import org.apache.hadoop.io.Text;
 +import org.junit.Test;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 +
 +/**
 + * ACCUMULO-2641 Integration test. ACCUMULO-2641 Adds scan id to thrift protocol so that {@code org.apache.accumulo.core.client.admin.ActiveScan.getScanid()}
 + * returns a unique scan id.
++ *
 + * <p>
-  * <p/>
 + * The test uses the Minicluster and the {@code org.apache.accumulo.test.functional.SlowIterator} to create multiple scan sessions. The test exercises multiple
 + * tablet servers with splits and multiple ranges to force the scans to occur across multiple tablet servers for completeness.
-  * <p/>
++ *
++ * <p>
 + * This patch modified thrift; the TraceRepoDeserializationTest test seems to fail unless the following is added:
-  * <p/>
++ *
++ * <p>
 + * private static final long serialVersionUID = -4659975753252858243l;
-  * <p/>
++ *
++ * <p>
 + * back into org.apache.accumulo.trace.thrift.TInfo until that test signature is regenerated.
 + */
 +public class ScanIdIT extends AccumuloClusterHarness {
 +
 +  private static final Logger log = LoggerFactory.getLogger(ScanIdIT.class);
 +
 +  private static final int NUM_SCANNERS = 8;
 +
 +  private static final int NUM_DATA_ROWS = 100;
 +
 +  private static final Random random = new Random();
 +
 +  private static final ExecutorService pool = Executors.newFixedThreadPool(NUM_SCANNERS);
 +
 +  private static final AtomicBoolean testInProgress = new AtomicBoolean(true);
 +
 +  private static final Map<Integer,Value> resultsByWorker = new ConcurrentHashMap<Integer,Value>();
 +
 +  @Override
 +  protected int defaultTimeoutSeconds() {
 +    return 60;
 +  }
 +
 +  /**
 +   * @throws Exception
 +   *           any exception is a test failure.
 +   */
 +  @Test
 +  public void testScanId() throws Exception {
 +
 +    final String tableName = getUniqueNames(1)[0];
 +    Connector conn = getConnector();
 +    conn.tableOperations().create(tableName);
 +
 +    addSplits(conn, tableName);
 +
 +    log.info("Splits added");
 +
 +    generateSampleData(conn, tableName);
 +
 +    log.info("Generated data for {}", tableName);
 +
 +    attachSlowIterator(conn, tableName);
 +
 +    CountDownLatch latch = new CountDownLatch(NUM_SCANNERS);
 +
 +    for (int scannerIndex = 0; scannerIndex < NUM_SCANNERS; scannerIndex++) {
 +      ScannerThread st = new ScannerThread(conn, scannerIndex, tableName, latch);
 +      pool.submit(st);
 +    }
 +
 +    // wait for scanners to report a result.
 +    while (testInProgress.get()) {
 +
 +      if (resultsByWorker.size() < NUM_SCANNERS) {
 +        log.trace("Results reported {}", resultsByWorker.size());
 +        sleepUninterruptibly(750, TimeUnit.MILLISECONDS);
 +      } else {
 +        // each worker has reported at least one result.
 +        testInProgress.set(false);
 +
 +        log.debug("Final result count {}", resultsByWorker.size());
 +
 +        // delay to allow scanners to react to end of test and cleanly close.
 +        sleepUninterruptibly(1, TimeUnit.SECONDS);
 +      }
 +
 +    }
 +
 +    // all scanner have reported at least 1 result, so check for unique scan ids.
 +    Set<Long> scanIds = new HashSet<Long>();
 +
 +    List<String> tservers = conn.instanceOperations().getTabletServers();
 +
 +    log.debug("tablet servers {}", tservers.toString());
 +
 +    for (String tserver : tservers) {
 +
 +      List<ActiveScan> activeScans = null;
 +      for (int i = 0; i < 10; i++) {
 +        try {
 +          activeScans = conn.instanceOperations().getActiveScans(tserver);
 +          break;
 +        } catch (AccumuloException e) {
 +          if (e.getCause() instanceof TableNotFoundException) {
 +            log.debug("Got TableNotFoundException, will retry");
 +            Thread.sleep(200);
 +            continue;
 +          }
 +          throw e;
 +        }
 +      }
 +
 +      assertNotNull("Repeatedly got exception trying to active scans", activeScans);
 +
 +      log.debug("TServer {} has {} active scans", tserver, activeScans.size());
 +
 +      for (ActiveScan scan : activeScans) {
 +        log.debug("Tserver {} scan id {}", tserver, scan.getScanid());
 +        scanIds.add(scan.getScanid());
 +      }
 +    }
 +
 +    assertTrue("Expected at least " + NUM_SCANNERS + " scanIds, but saw " + scanIds.size(), NUM_SCANNERS <= scanIds.size());
 +
 +  }
 +
 +  /**
 +   * Runs scanner in separate thread to allow multiple scanners to execute in parallel.
 +   * <p>
 +   * The thread run method is terminated when the testInProgress flag is set to false.
 +   */
 +  private static class ScannerThread implements Runnable {
 +
 +    private final Connector connector;
 +    private Scanner scanner = null;
 +    private final int workerIndex;
 +    private final String tablename;
 +    private final CountDownLatch latch;
 +
 +    public ScannerThread(final Connector connector, final int workerIndex, final String tablename, final CountDownLatch latch) {
 +      this.connector = connector;
 +      this.workerIndex = workerIndex;
 +      this.tablename = tablename;
 +      this.latch = latch;
 +    }
 +
 +    /**
 +     * Executes the scan across the sample data and puts scan results into the result map until the testInProgress flag is set to false.
 +     */
 +    @Override
 +    public void run() {
 +
 +      latch.countDown();
 +      try {
 +        latch.await();
 +      } catch (InterruptedException e) {
 +        log.error("Thread interrupted with id {}", workerIndex);
 +        Thread.currentThread().interrupt();
 +        return;
 +      }
 +
 +      log.debug("Creating scanner in worker thread {}", workerIndex);
 +
 +      try {
 +
 +        scanner = connector.createScanner(tablename, new Authorizations());
 +
 +        // Never start readahead
 +        scanner.setReadaheadThreshold(Long.MAX_VALUE);
 +        scanner.setBatchSize(1);
 +
 +        // create different ranges to try to hit more than one tablet.
 +        scanner.setRange(new Range(new Text(Integer.toString(workerIndex)), new Text("9")));
 +
 +      } catch (TableNotFoundException e) {
 +        throw new IllegalStateException("Initialization failure. Could not create scanner", e);
 +      }
 +
 +      scanner.fetchColumnFamily(new Text("fam1"));
 +
 +      for (Map.Entry<Key,Value> entry : scanner) {
 +
 +        // exit when success condition is met.
 +        if (!testInProgress.get()) {
 +          scanner.clearScanIterators();
 +          scanner.close();
 +
 +          return;
 +        }
 +
 +        Text row = entry.getKey().getRow();
 +
 +        log.debug("worker {}, row {}", workerIndex, row.toString());
 +
 +        if (entry.getValue() != null) {
 +
 +          Value prevValue = resultsByWorker.put(workerIndex, entry.getValue());
 +
 +          // value should always be increasing
 +          if (prevValue != null) {
 +
 +            log.trace("worker {} values {}", workerIndex, String.format("%1$s < %2$s", prevValue, entry.getValue()));
 +
 +            assertTrue(prevValue.compareTo(entry.getValue()) > 0);
 +          }
 +        } else {
 +          log.info("Scanner returned null");
 +          fail("Scanner returned unexpected null value");
 +        }
 +
 +      }
 +
 +      log.debug("Scanner ran out of data. (info only, not an error) ");
 +
 +    }
 +  }
 +
 +  /**
 +   * Creates splits on the table and forces migration by taking the table offline and then bringing it back online for the test.
 +   *
 +   * @param conn
 +   *          Accumulo connector to the test cluster or MAC instance.
 +   * @param tableName
 +   *          name of the table to split.
 +   */
 +  private void addSplits(final Connector conn, final String tableName) {
 +
 +    SortedSet<Text> splits = createSplits();
 +
 +    try {
 +
 +      conn.tableOperations().addSplits(tableName, splits);
 +
 +      conn.tableOperations().offline(tableName, true);
 +
 +      sleepUninterruptibly(2, TimeUnit.SECONDS);
 +      conn.tableOperations().online(tableName, true);
 +
 +      for (Text split : conn.tableOperations().listSplits(tableName)) {
 +        log.trace("Split {}", split);
 +      }
 +
 +    } catch (AccumuloSecurityException e) {
 +      throw new IllegalStateException("Initialization failed. Could not add splits to " + tableName, e);
 +    } catch (TableNotFoundException e) {
 +      throw new IllegalStateException("Initialization failed. Could not add splits to " + tableName, e);
 +    } catch (AccumuloException e) {
 +      throw new IllegalStateException("Initialization failed. Could not add splits to " + tableName, e);
 +    }
 +
 +  }
 +
 +  /**
 +   * Creates splits to distribute data across multiple tservers.
 +   *
 +   * @return splits in a sorted set for addSplits.
 +   */
 +  private SortedSet<Text> createSplits() {
 +
 +    SortedSet<Text> splits = new TreeSet<Text>();
 +
 +    for (int split = 0; split < 10; split++) {
 +      splits.add(new Text(Integer.toString(split)));
 +    }
 +
 +    return splits;
 +  }
 +
 +  /**
 +   * Generates some sample data, using random row ids to distribute rows across the splits.
 +   * <p>
 +   * The primary goal is to determine that each scanner is assigned a unique scan id. This test also checks that the count value for fam1 increases if a
 +   * scanner reads multiple values, but that is a secondary consideration, included for completeness.
 +   *
 +   * @param connector
 +   *          Accumulo connector to the test cluster or MAC instance.
 +   * @param tablename
 +   *          name of the table to populate.
 +   */
 +  private void generateSampleData(Connector connector, final String tablename) {
 +
 +    try {
 +
 +      BatchWriter bw = connector.createBatchWriter(tablename, new BatchWriterConfig());
 +
 +      ColumnVisibility vis = new ColumnVisibility("public");
 +
 +      for (int i = 0; i < NUM_DATA_ROWS; i++) {
 +
 +        Text rowId = new Text(String.format("%d", ((random.nextInt(10) * 100) + i)));
 +
 +        Mutation m = new Mutation(rowId);
 +        m.put(new Text("fam1"), new Text("count"), new Value(Integer.toString(i).getBytes(UTF_8)));
 +        m.put(new Text("fam1"), new Text("positive"), vis, new Value(Integer.toString(NUM_DATA_ROWS - i).getBytes(UTF_8)));
 +        m.put(new Text("fam1"), new Text("negative"), vis, new Value(Integer.toString(i - NUM_DATA_ROWS).getBytes(UTF_8)));
 +
 +        log.trace("Added row {}", rowId);
 +
 +        bw.addMutation(m);
 +      }
 +
 +      bw.close();
 +    } catch (TableNotFoundException ex) {
 +      throw new IllegalStateException("Initialization failed. Could not create test data", ex);
 +    } catch (MutationsRejectedException ex) {
 +      throw new IllegalStateException("Initialization failed. Could not create test data", ex);
 +    }
 +  }
 +
 +  /**
 +   * Attaches the test slow iterator so that we have time to read the scan id without creating a large dataset. Uses fairly large sleep and seek-sleep times
 +   * because we are not concerned with how much data is read, and we do not read all of the data - the test stops once each scanner reports a scan id.
 +   *
 +   * @param connector
 +   *          Accumulo connector to the test cluster or MAC instance.
 +   * @param tablename
 +   *          name of the table to attach the slow iterator to.
 +   */
 +  private void attachSlowIterator(Connector connector, final String tablename) {
 +    try {
 +
 +      IteratorSetting slowIter = new IteratorSetting(50, "slowIter", "org.apache.accumulo.test.functional.SlowIterator");
 +      slowIter.addOption("sleepTime", "200");
 +      slowIter.addOption("seekSleepTime", "200");
 +
 +      connector.tableOperations().attachIterator(tablename, slowIter, EnumSet.of(IteratorUtil.IteratorScope.scan));
 +
 +    } catch (AccumuloException ex) {
 +      throw new IllegalStateException("Initialization failed. Could not attach slow iterator", ex);
 +    } catch (TableNotFoundException ex) {
 +      throw new IllegalStateException("Initialization failed. Could not attach slow iterator", ex);
 +    } catch (AccumuloSecurityException ex) {
 +      throw new IllegalStateException("Initialization failed. Could not attach slow iterator", ex);
 +    }
 +  }
 +
 +}
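
For reference, the client-side pattern this test exercises - enumerating scan ids
across all tablet servers - looks roughly like the following outside of a test harness
(a sketch against the public client API used above; exception handling trimmed for
brevity, since getActiveScans can throw AccumuloException or AccumuloSecurityException):

    Set<Long> scanIds = new HashSet<Long>();
    for (String tserver : connector.instanceOperations().getTabletServers()) {
      // one ActiveScan per scan session currently running on that tablet server
      for (ActiveScan scan : connector.instanceOperations().getActiveScans(tserver)) {
        scanIds.add(scan.getScanid());
      }
    }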


[11/19] accumulo git commit: ACCUMULO-4102 Fix bad javadocs

Posted by ct...@apache.org.
ACCUMULO-4102 Fix bad javadocs


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/c8c0cf7f
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/c8c0cf7f
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/c8c0cf7f

Branch: refs/heads/master
Commit: c8c0cf7f90023a49cbb2b790f30819810bed0bf9
Parents: 7cc8137
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 19:50:42 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 20:48:51 2016 -0500

----------------------------------------------------------------------
 .../core/bloomfilter/DynamicBloomFilter.java    |  4 +--
 .../accumulo/core/client/BatchWriterConfig.java | 10 +++---
 .../core/client/ConditionalWriterConfig.java    |  4 +--
 .../client/mapred/AccumuloFileOutputFormat.java |  4 +--
 .../mapreduce/AccumuloFileOutputFormat.java     |  6 ++--
 .../lib/impl/FileOutputConfigurator.java        |  4 +--
 .../lib/util/FileOutputConfigurator.java        |  4 +--
 .../security/tokens/AuthenticationToken.java    |  2 +-
 .../core/conf/AccumuloConfiguration.java        |  2 +-
 .../org/apache/accumulo/core/data/Range.java    |  8 ++---
 .../file/blockfile/cache/CachedBlockQueue.java  |  2 +-
 .../core/file/blockfile/cache/ClassSize.java    |  4 +--
 .../accumulo/core/file/rfile/bcfile/Utils.java  | 35 +++++++++++---------
 .../user/WholeColumnFamilyIterator.java         |  4 +--
 .../core/metadata/ServicerForMetadataTable.java |  2 +-
 .../core/metadata/ServicerForRootTable.java     |  2 +-
 .../core/metadata/ServicerForUserTables.java    |  2 +-
 .../core/security/ColumnVisibility.java         |  8 ++---
 .../core/security/VisibilityConstraint.java     |  1 -
 .../security/crypto/CryptoModuleParameters.java |  7 +---
 .../examples/simple/filedata/ChunkCombiner.java | 18 +++++-----
 .../apache/accumulo/server/ServerConstants.java |  2 +-
 .../server/security/SecurityOperation.java      |  6 ++--
 .../server/security/SystemCredentialsTest.java  |  2 +-
 .../monitor/servlets/DefaultServlet.java        |  2 +-
 .../monitor/servlets/TablesServlet.java         |  4 +--
 26 files changed, 73 insertions(+), 76 deletions(-)
----------------------------------------------------------------------
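
The recurring fix in the hunks below: JDK 8's javadoc tool runs doclint by default,
which rejects HTML that earlier JDKs tolerated. Self-closing void tags such as <br />
become <br>, and literal <, >, and & in comment text are escaped as entities. A minimal
sketch of the rule on a hypothetical method (not taken from this commit):

    /**
     * Returns the smallest multiple of 8 that is &gt;= num.<br>
     * Inputs &amp; outputs are non-negative ints.
     */
    static int align8(int num) {
      return (num + 7) & ~7; // round up to the next multiple of 8
    }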


http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java b/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
index 740bdda..11e765a 100644
--- a/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
@@ -60,8 +60,8 @@ import org.apache.hadoop.util.bloom.Key;
  * <p>
  * A dynamic Bloom filter (DBF) makes use of a <code>s * m</code> bit matrix but each of the <code>s</code> rows is a standard Bloom filter. The creation
  * process of a DBF is iterative. At the start, the DBF is a <code>1 * m</code> bit matrix, i.e., it is composed of a single standard Bloom filter. It assumes
- * that <code>n<sub>r</sub></code> elements are recorded in the initial bit vector, where <code>n<sub>r</sub> <= n</code> (<code>n</code> is the cardinality of
- * the set <code>A</code> to record in the filter).
+ * that <code>n<sub>r</sub></code> elements are recorded in the initial bit vector, where <code>n<sub>r</sub> &lt;= n</code> (<code>n</code> is the cardinality
+ * of the set <code>A</code> to record in the filter).
  * <p>
  * As the size of <code>A</code> grows during the execution of the application, several keys must be inserted in the DBF. When inserting a key into the DBF, one
  * must first get an active Bloom filter in the matrix. A Bloom filter is active when the number of recorded keys, <code>n<sub>r</sub></code>, is strictly less

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
index 08eb853..320ecf4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
@@ -49,7 +49,7 @@ public class BatchWriterConfig implements Writable {
   private Integer maxWriteThreads = null;
 
   /**
-   * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br />
+   * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br>
    * If set to a value smaller than a single mutation, then it will {@link BatchWriter#flush()} after each added mutation. Must be non-negative.
    *
    * <p>
@@ -69,11 +69,11 @@ public class BatchWriterConfig implements Writable {
   }
 
   /**
-   * Sets the maximum amount of time to hold the data in memory before flushing it to servers.<br />
+   * Sets the maximum amount of time to hold the data in memory before flushing it to servers.<br>
    * For no maximum, set to zero, or {@link Long#MAX_VALUE} with {@link TimeUnit#MILLISECONDS}.
    *
    * <p>
-   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br />
+   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br>
    * If this truncation would result in making the value zero when it was specified as non-zero, then a minimum value of one {@link TimeUnit#MILLISECONDS} will
    * be used.
    *
@@ -101,11 +101,11 @@ public class BatchWriterConfig implements Writable {
   }
 
   /**
-   * Sets the maximum amount of time an unresponsive server will be re-tried. When this timeout is exceeded, the {@link BatchWriter} should throw an exception.<br />
+   * Sets the maximum amount of time an unresponsive server will be re-tried. When this timeout is exceeded, the {@link BatchWriter} should throw an exception.<br>
    * For no timeout, set to zero, or {@link Long#MAX_VALUE} with {@link TimeUnit#MILLISECONDS}.
    *
    * <p>
-   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br />
+   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br>
    * If this truncation would result in making the value zero when it was specified as non-zero, then a minimum value of one {@link TimeUnit#MILLISECONDS} will
    * be used.
    *
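
Taken together, the setters documented above configure a writer along these lines (a
usage sketch; the sizes and times are illustrative, not recommendations):

    BatchWriterConfig cfg = new BatchWriterConfig();
    cfg.setMaxMemory(10 * 1024 * 1024);                     // flush once ~10 MB is batched
    cfg.setMaxLatency(30, TimeUnit.SECONDS);                // hold data at most 30 seconds
    cfg.setTimeout(Long.MAX_VALUE, TimeUnit.MILLISECONDS);  // i.e. no timeout
    BatchWriter writer = connector.createBatchWriter("mytable", cfg); // throws TableNotFoundException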

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
index 360a302..627a580 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
@@ -51,11 +51,11 @@ public class ConditionalWriterConfig {
 
   /**
    * Sets the maximum amount of time an unresponsive server will be re-tried. When this timeout is exceeded, the {@link ConditionalWriter} should return the
-   * mutation with an exception.<br />
+   * mutation with an exception.<br>
    * For no timeout, set to zero, or {@link Long#MAX_VALUE} with {@link TimeUnit#MILLISECONDS}.
    *
    * <p>
-   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br />
+   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br>
    * If this truncation would result in making the value zero when it was specified as non-zero, then a minimum value of one {@link TimeUnit#MILLISECONDS} will
    * be used.
    *

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
index 2f2b4b2..236fae5 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
@@ -41,7 +41,7 @@ import org.apache.hadoop.util.Progressable;
 import org.apache.log4j.Logger;
 
 /**
- * This class allows MapReduce jobs to write output in the Accumulo data file format.<br />
+ * This class allows MapReduce jobs to write output in the Accumulo data file format.<br>
  * Care should be taken to write only sorted data (sorted by {@link Key}), as this is an important requirement of Accumulo data files.
  *
  * <p>
@@ -81,7 +81,7 @@ public class AccumuloFileOutputFormat extends FileOutputFormat<Key,Value> {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
index 500f072..8c389d4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
@@ -20,7 +20,6 @@ import java.io.IOException;
 import java.util.Arrays;
 
 import org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator;
-import org.apache.accumulo.core.util.HadoopCompatUtil;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.ArrayByteSequence;
@@ -29,6 +28,7 @@ import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.file.FileOperations;
 import org.apache.accumulo.core.file.FileSKVWriter;
 import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.util.HadoopCompatUtil;
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
@@ -40,7 +40,7 @@ import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
 import org.apache.log4j.Logger;
 
 /**
- * This class allows MapReduce jobs to write output in the Accumulo data file format.<br />
+ * This class allows MapReduce jobs to write output in the Accumulo data file format.<br>
  * Care should be taken to write only sorted data (sorted by {@link Key}), as this is an important requirement of Accumulo data files.
  *
  * <p>
@@ -80,7 +80,7 @@ public class AccumuloFileOutputFormat extends FileOutputFormat<Key,Value> {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
index 882c6d3..f0f67b2 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
@@ -39,7 +39,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br />
+   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br>
    * These properties correspond to the supported public static setter methods available to this class.
    *
    * @param property
@@ -120,7 +120,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
index d43ecda..b4f6b8a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
@@ -39,7 +39,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br />
+   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br>
    * These properties correspond to the supported public static setter methods available to this class.
    *
    * @param property
@@ -95,7 +95,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
index 7836ea5..5c20555 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
@@ -39,7 +39,7 @@ import org.apache.hadoop.io.Writable;
 public interface AuthenticationToken extends Writable, Destroyable, Cloneable {
 
   /**
-   * A utility class to serialize/deserialize {@link AuthenticationToken} objects.<br/>
+   * A utility class to serialize/deserialize {@link AuthenticationToken} objects.<br>
    * Unfortunately, these methods are provided in an inner-class, to avoid breaking the interface API.
    *
    * @since 1.6.0

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
index 33b7aef..5da92cb 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
@@ -80,7 +80,7 @@ public abstract class AccumuloConfiguration implements Iterable<Entry<String,Str
   }
 
   /**
-   * This method returns all properties in a map of string->string under the given prefix property.
+   * This method returns all properties in a map of string-&gt;string under the given prefix property.
    *
    * @param property
    *          the prefix property, and must be of type PropertyType.PREFIX

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/data/Range.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/data/Range.java b/core/src/main/java/org/apache/accumulo/core/data/Range.java
index b832c33..7ccfe3d 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Range.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Range.java
@@ -423,7 +423,7 @@ public class Range implements WritableComparable<Range> {
 
         if (range.infiniteStopKey || (cmp = range.stop.compareTo(currentRange.stop)) > 0 || (cmp == 0 && range.stopKeyInclusive)) {
           currentRange = new Range(currentRange.getStartKey(), currentStartKeyInclusive, range.getEndKey(), range.stopKeyInclusive);
-        }/* else currentRange contains ral.get(i) */
+        } /* else currentRange contains ral.get(i) */
       } else {
         ret.add(currentRange);
         currentRange = range;
@@ -506,12 +506,12 @@ public class Range implements WritableComparable<Range> {
   }
 
   /**
-   * Creates a new range that is bounded by the columns passed in. The stary key in the returned range will have a column >= to the minimum column. The end key
-   * in the returned range will have a column <= the max column.
+   * Creates a new range that is bounded by the columns passed in. The start key in the returned range will have a column &gt;= to the minimum column. The end
+   * key in the returned range will have a column &lt;= the max column.
    *
    * @return a column bounded range
    * @throws IllegalArgumentException
-   *           if min > max
+   *           if min &gt; max
    */
 
   public Range bound(Column min, Column max) {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
index 0c35b98..9b52635 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
@@ -21,7 +21,7 @@ import java.util.LinkedList;
 import java.util.PriorityQueue;
 
 /**
- * A memory-bound queue that will grow until an element brings total size >= maxSize. From then on, only entries that are sorted larger than the smallest
+ * A memory-bound queue that will grow until an element brings total size &gt;= maxSize. From then on, only entries that are sorted larger than the smallest
  * current entry will be inserted/replaced.
  *
  * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
index f898a8f..f15e28f 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
@@ -251,7 +251,7 @@ public class ClassSize {
    *
    * @param num
    *          number to align to 8
-   * @return smallest number >= input that is a multiple of 8
+   * @return smallest number &gt;= input that is a multiple of 8
    */
   public static int align(int num) {
     return (int) (align((long) num));
@@ -262,7 +262,7 @@ public class ClassSize {
    *
    * @param num
    *          number to align to 8
-   * @return smallest number >= input that is a multiple of 8
+   * @return smallest number &gt;= input that is a multiple of 8
    */
   public static long align(long num) {
     // The 7 comes from that the alignSize is 8 which is the number of bytes

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
index 84b861b..46afc0b 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
@@ -54,18 +54,21 @@ public final class Utils {
    * Encoding a Long integer into a variable-length encoding format.
    * <ul>
    * <li>if n in [-32, 127): encode in one byte with the actual value. Otherwise,
-   * <li>if n in [-20*2^8, 20*2^8): encode in two bytes: byte[0] = n/256 - 52; byte[1]=n&0xff. Otherwise,
-   * <li>if n IN [-16*2^16, 16*2^16): encode in three bytes: byte[0]=n/2^16 - 88; byte[1]=(n>>8)&0xff; byte[2]=n&0xff. Otherwise,
-   * <li>if n in [-8*2^24, 8*2^24): encode in four bytes: byte[0]=n/2^24 - 112; byte[1] = (n>>16)&0xff; byte[2] = (n>>8)&0xff; byte[3]=n&0xff. Otherwise:
-   * <li>if n in [-2^31, 2^31): encode in five bytes: byte[0]=-125; byte[1] = (n>>24)&0xff; byte[2]=(n>>16)&0xff; byte[3]=(n>>8)&0xff; byte[4]=n&0xff;
-   * <li>if n in [-2^39, 2^39): encode in six bytes: byte[0]=-124; byte[1] = (n>>32)&0xff; byte[2]=(n>>24)&0xff; byte[3]=(n>>16)&0xff; byte[4]=(n>>8)&0xff;
-   * byte[5]=n&0xff
-   * <li>if n in [-2^47, 2^47): encode in seven bytes: byte[0]=-123; byte[1] = (n>>40)&0xff; byte[2]=(n>>32)&0xff; byte[3]=(n>>24)&0xff; byte[4]=(n>>16)&0xff;
-   * byte[5]=(n>>8)&0xff; byte[6]=n&0xff;
-   * <li>if n in [-2^55, 2^55): encode in eight bytes: byte[0]=-122; byte[1] = (n>>48)&0xff; byte[2] = (n>>40)&0xff; byte[3]=(n>>32)&0xff; byte[4]=(n>>24)&0xff;
-   * byte[5]=(n>>16)&0xff; byte[6]=(n>>8)&0xff; byte[7]=n&0xff;
-   * <li>if n in [-2^63, 2^63): encode in nine bytes: byte[0]=-121; byte[1] = (n>>54)&0xff; byte[2] = (n>>48)&0xff; byte[3] = (n>>40)&0xff;
-   * byte[4]=(n>>32)&0xff; byte[5]=(n>>24)&0xff; byte[6]=(n>>16)&0xff; byte[7]=(n>>8)&0xff; byte[8]=n&0xff;
+   * <li>if n in [-20*2^8, 20*2^8): encode in two bytes: byte[0] = n/256 - 52; byte[1]=n&amp;0xff. Otherwise,
+   * <li>if n IN [-16*2^16, 16*2^16): encode in three bytes: byte[0]=n/2^16 - 88; byte[1]=(n&gt;&gt;8)&amp;0xff; byte[2]=n&amp;0xff. Otherwise,
+   * <li>if n in [-8*2^24, 8*2^24): encode in four bytes: byte[0]=n/2^24 - 112; byte[1] = (n&gt;&gt;16)&amp;0xff; byte[2] = (n&gt;&gt;8)&amp;0xff;
+   * byte[3]=n&amp;0xff. Otherwise:
+   * <li>if n in [-2^31, 2^31): encode in five bytes: byte[0]=-125; byte[1] = (n&gt;&gt;24)&amp;0xff; byte[2]=(n&gt;&gt;16)&amp;0xff;
+   * byte[3]=(n&gt;&gt;8)&amp;0xff; byte[4]=n&amp;0xff;
+   * <li>if n in [-2^39, 2^39): encode in six bytes: byte[0]=-124; byte[1] = (n&gt;&gt;32)&amp;0xff; byte[2]=(n&gt;&gt;24)&amp;0xff;
+   * byte[3]=(n&gt;&gt;16)&amp;0xff; byte[4]=(n&gt;&gt;8)&amp;0xff; byte[5]=n&amp;0xff
+   * <li>if n in [-2^47, 2^47): encode in seven bytes: byte[0]=-123; byte[1] = (n&gt;&gt;40)&amp;0xff; byte[2]=(n&gt;&gt;32)&amp;0xff;
+   * byte[3]=(n&gt;&gt;24)&amp;0xff; byte[4]=(n&gt;&gt;16)&amp;0xff; byte[5]=(n&gt;&gt;8)&amp;0xff; byte[6]=n&amp;0xff;
+   * <li>if n in [-2^55, 2^55): encode in eight bytes: byte[0]=-122; byte[1] = (n&gt;&gt;48)&amp;0xff; byte[2] = (n&gt;&gt;40)&amp;0xff;
+   * byte[3]=(n&gt;&gt;32)&amp;0xff; byte[4]=(n&gt;&gt;24)&amp;0xff; byte[5]=(n&gt;&gt;16)&amp;0xff; byte[6]=(n&gt;&gt;8)&amp;0xff; byte[7]=n&amp;0xff;
+   * <li>if n in [-2^63, 2^63): encode in nine bytes: byte[0]=-121; byte[1] = (n&gt;&gt;54)&amp;0xff; byte[2] = (n&gt;&gt;48)&amp;0xff; byte[3] =
+   * (n&gt;&gt;40)&amp;0xff; byte[4]=(n&gt;&gt;32)&amp;0xff; byte[5]=(n&gt;&gt;24)&amp;0xff; byte[6]=(n&gt;&gt;16)&amp;0xff; byte[7]=(n&gt;&gt;8)&amp;0xff;
+   * byte[8]=n&amp;0xff;
    * </ul>
    *
    * @param out
@@ -159,10 +162,10 @@ public final class Utils {
   /**
    * Decoding the variable-length integer. Suppose the value of the first byte is FB, and the following bytes are NB[*].
    * <ul>
-   * <li>if (FB >= -32), return (long)FB;
-   * <li>if (FB in [-72, -33]), return (FB+52)<<8 + NB[0]&0xff;
-   * <li>if (FB in [-104, -73]), return (FB+88)<<16 + (NB[0]&0xff)<<8 + NB[1]&0xff;
-   * <li>if (FB in [-120, -105]), return (FB+112)<<24 + (NB[0]&0xff)<<16 + (NB[1]&0xff)<<8 + NB[2]&0xff;
+   * <li>if (FB &gt;= -32), return (long)FB;
+   * <li>if (FB in [-72, -33]), return (FB+52)&lt;&lt;8 + NB[0]&amp;0xff;
+   * <li>if (FB in [-104, -73]), return (FB+88)&lt;&lt;16 + (NB[0]&amp;0xff)&lt;&lt;8 + NB[1]&amp;0xff;
+   * <li>if (FB in [-120, -105]), return (FB+112)&lt;&lt;24 + (NB[0]&amp;0xff)&lt;&lt;16 + (NB[1]&amp;0xff)&lt;&lt;8 + NB[2]&amp;0xff;
    * <li>if (FB in [-128, -121]), return interpret NB[FB+129] as a signed big-endian integer.
    * </ul>
    *
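
For illustration, the decode rules above translate to straight-line Java along these
lines (a sketch assuming a java.io.DataInput source; this illustrates the documented
format and is not the actual implementation in Utils.java):

    static long readVLongSketch(DataInput in) throws IOException {
      int fb = in.readByte();                      // FB, sign-extended
      if (fb >= -32)
        return fb;                                 // one-byte encoding
      if (fb >= -72)
        return ((long) (fb + 52) << 8) | (in.readByte() & 0xffL);
      if (fb >= -104)
        return ((long) (fb + 88) << 16) | ((in.readByte() & 0xffL) << 8) | (in.readByte() & 0xffL);
      if (fb >= -120)
        return ((long) (fb + 112) << 24) | ((in.readByte() & 0xffL) << 16)
            | ((in.readByte() & 0xffL) << 8) | (in.readByte() & 0xffL);
      int len = fb + 129;                          // remaining bytes, 4..8 in practice
      long v = in.readByte();                      // first remaining byte keeps its sign
      for (int i = 1; i < len; i++)
        v = (v << 8) | (in.readByte() & 0xffL);
      return v;
    }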

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
index 25f30a8..8e7a385 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
@@ -180,9 +180,9 @@ public class WholeColumnFamilyIterator implements SortedKeyValueIterator<Key,Val
   /**
    *
    * @param currentRow
-   *          All keys & cf have this in their row portion (do not modify!).
+   *          All keys and cf have this in their row portion (do not modify!).
    * @param keys
-   *          One key for each key & cf group in the row, ordered as they are given by the source iterator (do not modify!).
+   *          One key for each key and cf group in the row, ordered as they are given by the source iterator (do not modify!).
    * @param values
    *          One value for each key in keys, ordered to correspond to the ordering in keys (do not modify!).
    * @return true if we want to keep the row, false if we want to skip it

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
index b642cb8..af48770 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
@@ -20,7 +20,7 @@ import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.security.Credentials;
 
 /**
- * A metadata servicer for the metadata table (which holds metadata for user tables).<br />
+ * A metadata servicer for the metadata table (which holds metadata for user tables).<br>
  * The metadata table's metadata is serviced in the root table.
  */
 class ServicerForMetadataTable extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
index 205adc9..b279d01 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
@@ -26,7 +26,7 @@ import org.apache.accumulo.core.data.KeyExtent;
 import org.apache.accumulo.core.security.Credentials;
 
 /**
- * A metadata servicer for the root table.<br />
+ * A metadata servicer for the root table.<br>
  * The root table's metadata is serviced in zookeeper.
  */
 class ServicerForRootTable extends MetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
index d4827f2..607dfbd 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
@@ -20,7 +20,7 @@ import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.security.Credentials;
 
 /**
- * A metadata servicer for user tables.<br />
+ * A metadata servicer for user tables.<br>
  * Metadata for user tables are serviced in the metadata table.
  */
 class ServicerForUserTables extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java b/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
index 842e6f9..26d1cd0 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
@@ -37,7 +37,7 @@ import org.apache.hadoop.io.WritableComparator;
  * Validate the column visibility is a valid expression and set the visibility for a Mutation. See {@link ColumnVisibility#ColumnVisibility(byte[])} for the
  * definition of an expression.
  *
- * <P>
+ * <p>
  * The expression is a sequence of characters from the set [A-Za-z0-9_-.] along with the binary operators "&amp;" and "|" indicating that both operands are
  * necessary, or the either is necessary. The following are valid expressions for visibility:
  *
@@ -48,7 +48,7 @@ import org.apache.hadoop.io.WritableComparator;
  * orange|(red&amp;yellow)
  * </pre>
  *
- * <P>
+ * <p>
  * The following are not valid expressions for visibility:
  *
  * <pre>
@@ -61,13 +61,13 @@ import org.apache.hadoop.io.WritableComparator;
  * dog|!cat
  * </pre>
  *
- * <P>
+ * <p>
  * In addition to the base set of visibilities, any character can be used in the expression if it is quoted. If the quoted term contains '&quot;' or '\', then
  * escape the character with '\'. The {@link #quote(String)} method can be used to properly quote and escape terms automatically. The following is an example of
  * a quoted term:
  *
  * <pre>
- * &quot;A#C&quot;<span />&amp;<span />B
+ * &quot;A#C&quot; &amp; B
  * </pre>
  */
 public class ColumnVisibility {
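
As a usage note, the quoting described above is what ColumnVisibility.quote produces; a
minimal sketch:

    // quote() wraps the term and escapes '"' and '\', so "A#C" becomes "\"A#C\""
    String expr = ColumnVisibility.quote("A#C") + "&B";
    ColumnVisibility vis = new ColumnVisibility(expr);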

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java b/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
index 67175c0..d9d13d7 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
@@ -29,7 +29,6 @@ import org.apache.accumulo.core.util.BadArgumentException;
 
 /**
  * A constraint that checks the visibility of columns against the actor's authorizations. Violation codes:
- * <p>
  * <ul>
  * <li>1 = failure to parse visibility expression</li>
  * <li>2 = insufficient authorization</li>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
index b9bf253..a7bb93d 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
@@ -195,9 +195,7 @@ public class CryptoModuleParameters {
   /**
    * Sets the name of the random number generator to use. The default for this for the baseline JCE implementation is "SHA1PRNG".
    * <p>
-   *
-   * <p>
-   * For <b>encryption</b>, this value is <b>required</b>. <br>
+   * For <b>encryption</b>, this value is <b>required</b>.<br>
    * For <b>decryption</b>, this value is often obtained from the underlying cipher stream.
    *
    * @param randomNumberGenerator
@@ -275,7 +273,6 @@ public class CryptoModuleParameters {
 
   /**
    * Sets the encrypted version of the plaintext key ({@link CryptoModuleParameters#getPlaintextKey()}). Generally this operation will be done either by:
-   * <p>
    * <ul>
    * <li>the code reading an encrypted stream and coming across the encrypted version of one of these keys, OR
    * <li>the {@link CryptoModuleParameters#getKeyEncryptionStrategyClass()} that encrypted the plaintext key (see
@@ -285,11 +282,9 @@ public class CryptoModuleParameters {
    * For <b>encryption</b>, this value is generally not required, but is usually set by the underlying module during encryption. <br>
    * For <b>decryption</b>, this value is <b>usually required</b>.
    *
-   *
    * @param encryptedKey
    *          the encrypted value of the plaintext key
    */
-
   public void setEncryptedKey(byte[] encryptedKey) {
     this.encryptedKey = encryptedKey;
   }

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
----------------------------------------------------------------------
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
index ca77b39..0ffeca0 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
@@ -34,20 +34,20 @@ import org.apache.hadoop.io.Text;
  * This iterator dedupes chunks and sets their visibilities to the combined visibility of the refs columns. For example, it would combine
  *
  * <pre>
- *    row1 refs uid1\0a A&B V0
- *    row1 refs uid2\0b C&D V0
- *    row1 ~chunk 0 A&B V1
- *    row1 ~chunk 0 C&D V1
- *    row1 ~chunk 0 E&F V1
- *    row1 ~chunk 0 G&H V1
+ *    row1 refs uid1\0a A&amp;B V0
+ *    row1 refs uid2\0b C&amp;D V0
+ *    row1 ~chunk 0 A&amp;B V1
+ *    row1 ~chunk 0 C&amp;D V1
+ *    row1 ~chunk 0 E&amp;F V1
+ *    row1 ~chunk 0 G&amp;H V1
  * </pre>
  *
  * into the following
  *
  * <pre>
- *    row1 refs uid1\0a A&B V0
- *    row1 refs uid2\0b C&D V0
- *    row1 ~chunk 0 (A&B)|(C&D) V1
+ *    row1 refs uid1\0a A&amp;B V0
+ *    row1 refs uid2\0b C&amp;D V0
+ *    row1 ~chunk 0 (A&amp;B)|(C&amp;D) V1
  * </pre>
  *
  * {@link VisibilityCombiner} is used to combine the visibilities.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
----------------------------------------------------------------------
diff --git a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
index 2b654ca..137a3fe 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
@@ -40,7 +40,7 @@ public class ServerConstants {
   public static final String INSTANCE_ID_DIR = "instance_id";
 
   /**
-   * current version (3) reflects additional namespace operations (ACCUMULO-802) in version 1.6.0 <br />
+   * current version (3) reflects additional namespace operations (ACCUMULO-802) in version 1.6.0<br>
    * (versions should never be negative)
    */
   public static final Integer WIRE_VERSION = 3;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
----------------------------------------------------------------------
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java b/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
index 6f34247..273c9de 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
@@ -255,7 +255,7 @@ public class SecurityOperation {
   }
 
   /**
-   * Checks if a user has a system permission<br/>
+   * Checks if a user has a system permission<br>
    * This cannot check if a system user has permission.
    *
    * @return true if a user exists and has permission; false otherwise
@@ -289,7 +289,7 @@ public class SecurityOperation {
   }
 
   /**
-   * Checks if a user has a table permission<br/>
+   * Checks if a user has a table permission<br>
    * This cannot check if a system user has permission.
    *
    * @return true if a user exists and has permission; false otherwise
@@ -312,7 +312,7 @@ public class SecurityOperation {
   }
 
   /**
-   * Checks if a user has a namespace permission<br/>
+   * Checks if a user has a namespace permission<br>
    * This cannot check if a system user has permission.
    *
    * @return true if a user exists and has permission; false otherwise

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
----------------------------------------------------------------------
diff --git a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
index a4db195..a4c5fd6 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
@@ -54,7 +54,7 @@ public class SystemCredentialsTest {
 
   /**
    * This is a test to ensure the string literal in {@link ConnectorImpl#ConnectorImpl(Instance, Credentials)} is kept up-to-date if we move the
-   * {@link SystemToken}<br/>
+   * {@link SystemToken}<br>
    * This check will not be needed after ACCUMULO-1578
    */
   @Test

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
index bb7e690..ef2f872 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
@@ -201,7 +201,7 @@ public class DefaultServlet extends BasicServlet {
     sb.append("</td>\n");
 
     sb.append("</tr></table>\n");
-    sb.append("<br/>\n");
+    sb.append("<br />\n");
 
     sb.append("<p/><table class=\"noborder\">\n");
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
index 19633b8..224ba91 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
@@ -92,9 +92,9 @@ public class TablesServlet extends BasicServlet {
     tableList.addSortableColumn("Entries<br />In&nbsp;Memory", new NumberType<Long>(),
         "The total number of key/value pairs stored in memory and not yet written to disk");
     tableList.addSortableColumn("Ingest", new NumberType<Long>(), "The number of Key/Value pairs inserted.  Note that deletes are 'inserted'.");
-    tableList.addSortableColumn("Entries<br/>Read", new NumberType<Long>(),
+    tableList.addSortableColumn("Entries<br />Read", new NumberType<Long>(),
         "The number of Key/Value pairs read on the server side.  Not all key values read may be returned to client because of filtering.");
-    tableList.addSortableColumn("Entries<br/>Returned", new NumberType<Long>(),
+    tableList.addSortableColumn("Entries<br />Returned", new NumberType<Long>(),
         "The number of Key/Value pairs returned to clients during queries.  This is <b>not</b> the number of scans.");
     tableList.addSortableColumn("Hold&nbsp;Time", new DurationType(0l, 0l),
         "The amount of time that ingest operations are suspended while waiting for data to be written to disk.");


[12/19] accumulo git commit: ACCUMULO-4102 Fix bad javadocs

Posted by ct...@apache.org.
ACCUMULO-4102 Fix bad javadocs


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/c8c0cf7f
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/c8c0cf7f
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/c8c0cf7f

Branch: refs/heads/1.7
Commit: c8c0cf7f90023a49cbb2b790f30819810bed0bf9
Parents: 7cc8137
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 19:50:42 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 20:48:51 2016 -0500

----------------------------------------------------------------------
 .../core/bloomfilter/DynamicBloomFilter.java    |  4 +--
 .../accumulo/core/client/BatchWriterConfig.java | 10 +++---
 .../core/client/ConditionalWriterConfig.java    |  4 +--
 .../client/mapred/AccumuloFileOutputFormat.java |  4 +--
 .../mapreduce/AccumuloFileOutputFormat.java     |  6 ++--
 .../lib/impl/FileOutputConfigurator.java        |  4 +--
 .../lib/util/FileOutputConfigurator.java        |  4 +--
 .../security/tokens/AuthenticationToken.java    |  2 +-
 .../core/conf/AccumuloConfiguration.java        |  2 +-
 .../org/apache/accumulo/core/data/Range.java    |  8 ++---
 .../file/blockfile/cache/CachedBlockQueue.java  |  2 +-
 .../core/file/blockfile/cache/ClassSize.java    |  4 +--
 .../accumulo/core/file/rfile/bcfile/Utils.java  | 35 +++++++++++---------
 .../user/WholeColumnFamilyIterator.java         |  4 +--
 .../core/metadata/ServicerForMetadataTable.java |  2 +-
 .../core/metadata/ServicerForRootTable.java     |  2 +-
 .../core/metadata/ServicerForUserTables.java    |  2 +-
 .../core/security/ColumnVisibility.java         |  8 ++---
 .../core/security/VisibilityConstraint.java     |  1 -
 .../security/crypto/CryptoModuleParameters.java |  7 +---
 .../examples/simple/filedata/ChunkCombiner.java | 18 +++++-----
 .../apache/accumulo/server/ServerConstants.java |  2 +-
 .../server/security/SecurityOperation.java      |  6 ++--
 .../server/security/SystemCredentialsTest.java  |  2 +-
 .../monitor/servlets/DefaultServlet.java        |  2 +-
 .../monitor/servlets/TablesServlet.java         |  4 +--
 26 files changed, 73 insertions(+), 76 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java b/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
index 740bdda..11e765a 100644
--- a/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
@@ -60,8 +60,8 @@ import org.apache.hadoop.util.bloom.Key;
  * <p>
  * A dynamic Bloom filter (DBF) makes use of a <code>s * m</code> bit matrix but each of the <code>s</code> rows is a standard Bloom filter. The creation
  * process of a DBF is iterative. At the start, the DBF is a <code>1 * m</code> bit matrix, i.e., it is composed of a single standard Bloom filter. It assumes
- * that <code>n<sub>r</sub></code> elements are recorded in the initial bit vector, where <code>n<sub>r</sub> <= n</code> (<code>n</code> is the cardinality of
- * the set <code>A</code> to record in the filter).
+ * that <code>n<sub>r</sub></code> elements are recorded in the initial bit vector, where <code>n<sub>r</sub> &lt;= n</code> (<code>n</code> is the cardinality
+ * of the set <code>A</code> to record in the filter).
  * <p>
  * As the size of <code>A</code> grows during the execution of the application, several keys must be inserted in the DBF. When inserting a key into the DBF, one
  * must first get an active Bloom filter in the matrix. A Bloom filter is active when the number of recorded keys, <code>n<sub>r</sub></code>, is strictly less

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
index 08eb853..320ecf4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
@@ -49,7 +49,7 @@ public class BatchWriterConfig implements Writable {
   private Integer maxWriteThreads = null;
 
   /**
-   * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br />
+   * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br>
    * If set to a value smaller than a single mutation, then it will {@link BatchWriter#flush()} after each added mutation. Must be non-negative.
    *
    * <p>
@@ -69,11 +69,11 @@ public class BatchWriterConfig implements Writable {
   }
 
   /**
-   * Sets the maximum amount of time to hold the data in memory before flushing it to servers.<br />
+   * Sets the maximum amount of time to hold the data in memory before flushing it to servers.<br>
    * For no maximum, set to zero, or {@link Long#MAX_VALUE} with {@link TimeUnit#MILLISECONDS}.
    *
    * <p>
-   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br />
+   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br>
    * If this truncation would result in making the value zero when it was specified as non-zero, then a minimum value of one {@link TimeUnit#MILLISECONDS} will
    * be used.
    *
@@ -101,11 +101,11 @@ public class BatchWriterConfig implements Writable {
   }
 
   /**
-   * Sets the maximum amount of time an unresponsive server will be re-tried. When this timeout is exceeded, the {@link BatchWriter} should throw an exception.<br />
+   * Sets the maximum amount of time an unresponsive server will be re-tried. When this timeout is exceeded, the {@link BatchWriter} should throw an exception.<br>
    * For no timeout, set to zero, or {@link Long#MAX_VALUE} with {@link TimeUnit#MILLISECONDS}.
    *
    * <p>
-   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br />
+   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br>
    * If this truncation would result in making the value zero when it was specified as non-zero, then a minimum value of one {@link TimeUnit#MILLISECONDS} will
    * be used.
    *

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
index 360a302..627a580 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
@@ -51,11 +51,11 @@ public class ConditionalWriterConfig {
 
   /**
    * Sets the maximum amount of time an unresponsive server will be re-tried. When this timeout is exceeded, the {@link ConditionalWriter} should return the
-   * mutation with an exception.<br />
+   * mutation with an exception.<br>
    * For no timeout, set to zero, or {@link Long#MAX_VALUE} with {@link TimeUnit#MILLISECONDS}.
    *
    * <p>
-   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br />
+   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br>
    * If this truncation would result in making the value zero when it was specified as non-zero, then a minimum value of one {@link TimeUnit#MILLISECONDS} will
    * be used.
    *

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
index 2f2b4b2..236fae5 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
@@ -41,7 +41,7 @@ import org.apache.hadoop.util.Progressable;
 import org.apache.log4j.Logger;
 
 /**
- * This class allows MapReduce jobs to write output in the Accumulo data file format.<br />
+ * This class allows MapReduce jobs to write output in the Accumulo data file format.<br>
  * Care should be taken to write only sorted data (sorted by {@link Key}), as this is an important requirement of Accumulo data files.
  *
  * <p>
@@ -81,7 +81,7 @@ public class AccumuloFileOutputFormat extends FileOutputFormat<Key,Value> {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

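As a usage illustration of the block-size setting described above (a sketch, not part of this patch; it assumes the class's static setDataBlockSize setter and the setOutputPath helper inherited from FileOutputFormat):

    import org.apache.accumulo.core.client.mapred.AccumuloFileOutputFormat;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class RFileOutputSketch {
      static void configure(JobConf job) {
        job.setOutputFormat(AccumuloFileOutputFormat.class);
        AccumuloFileOutputFormat.setOutputPath(job, new Path("/tmp/rfiles"));
        // Larger data blocks favor long sequential scans; smaller blocks favor random lookups.
        AccumuloFileOutputFormat.setDataBlockSize(job, 256 * 1024);
      }
    }
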
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
index 500f072..8c389d4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
@@ -20,7 +20,6 @@ import java.io.IOException;
 import java.util.Arrays;
 
 import org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator;
-import org.apache.accumulo.core.util.HadoopCompatUtil;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.ArrayByteSequence;
@@ -29,6 +28,7 @@ import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.file.FileOperations;
 import org.apache.accumulo.core.file.FileSKVWriter;
 import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.util.HadoopCompatUtil;
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
@@ -40,7 +40,7 @@ import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
 import org.apache.log4j.Logger;
 
 /**
- * This class allows MapReduce jobs to write output in the Accumulo data file format.<br />
+ * This class allows MapReduce jobs to write output in the Accumulo data file format.<br>
  * Care should be taken to write only sorted data (sorted by {@link Key}), as this is an important requirement of Accumulo data files.
  *
  * <p>
@@ -80,7 +80,7 @@ public class AccumuloFileOutputFormat extends FileOutputFormat<Key,Value> {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
index 882c6d3..f0f67b2 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
@@ -39,7 +39,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br />
+   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br>
    * These properties correspond to the supported public static setter methods available to this class.
    *
    * @param property
@@ -120,7 +120,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
index d43ecda..b4f6b8a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
@@ -39,7 +39,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br />
+   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br>
    * These properties correspond to the supported public static setter methods available to this class.
    *
    * @param property
@@ -95,7 +95,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
index 7836ea5..5c20555 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
@@ -39,7 +39,7 @@ import org.apache.hadoop.io.Writable;
 public interface AuthenticationToken extends Writable, Destroyable, Cloneable {
 
   /**
-   * A utility class to serialize/deserialize {@link AuthenticationToken} objects.<br/>
+   * A utility class to serialize/deserialize {@link AuthenticationToken} objects.<br>
    * Unfortunately, these methods are provided in an inner-class, to avoid breaking the interface API.
    *
    * @since 1.6.0

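A minimal round-trip sketch of the serializer inner class mentioned above (assuming its serialize/deserialize method names and the stock PasswordToken; not part of this patch):

    import org.apache.accumulo.core.client.security.tokens.AuthenticationToken.AuthenticationTokenSerializer;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;

    public class TokenRoundTrip {
      public static void main(String[] args) {
        // serialize a token to bytes, e.g. to ship it inside a Hadoop job configuration
        byte[] bytes = AuthenticationTokenSerializer.serialize(new PasswordToken("secret"));
        // ...and reconstitute it on the other side
        PasswordToken token = AuthenticationTokenSerializer.deserialize(PasswordToken.class, bytes);
      }
    }
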
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
index 33b7aef..5da92cb 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
@@ -80,7 +80,7 @@ public abstract class AccumuloConfiguration implements Iterable<Entry<String,Str
   }
 
   /**
-   * This method returns all properties in a map of string->string under the given prefix property.
+   * This method returns all properties in a map of string-&gt;string under the given prefix property.
    *
    * @param property
    *          the prefix property, and must be of type PropertyType.PREFIX

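To illustrate the prefix lookup this javadoc documents (a sketch; it assumes the getAllPropertiesWithPrefix method the comment belongs to and the default-configuration accessor):

    import java.util.Map;

    import org.apache.accumulo.core.conf.AccumuloConfiguration;
    import org.apache.accumulo.core.conf.Property;

    public class PrefixLookupSketch {
      public static void main(String[] args) {
        AccumuloConfiguration conf = AccumuloConfiguration.getDefaultConfiguration();
        // collects every property under the table.iterator.* prefix into a String->String map
        Map<String,String> iterProps = conf.getAllPropertiesWithPrefix(Property.TABLE_ITERATOR_PREFIX);
        System.out.println(iterProps);
      }
    }
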
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/data/Range.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/data/Range.java b/core/src/main/java/org/apache/accumulo/core/data/Range.java
index b832c33..7ccfe3d 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Range.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Range.java
@@ -423,7 +423,7 @@ public class Range implements WritableComparable<Range> {
 
         if (range.infiniteStopKey || (cmp = range.stop.compareTo(currentRange.stop)) > 0 || (cmp == 0 && range.stopKeyInclusive)) {
           currentRange = new Range(currentRange.getStartKey(), currentStartKeyInclusive, range.getEndKey(), range.stopKeyInclusive);
-        }/* else currentRange contains ral.get(i) */
+        } /* else currentRange contains ral.get(i) */
       } else {
         ret.add(currentRange);
         currentRange = range;
@@ -506,12 +506,12 @@ public class Range implements WritableComparable<Range> {
   }
 
   /**
-   * Creates a new range that is bounded by the columns passed in. The stary key in the returned range will have a column >= to the minimum column. The end key
-   * in the returned range will have a column <= the max column.
+   * Creates a new range that is bounded by the columns passed in. The start key in the returned range will have a column &gt;= the minimum column. The end
+   * key in the returned range will have a column &lt;= the max column.
    *
    * @return a column bounded range
    * @throws IllegalArgumentException
-   *           if min > max
+   *           if min &gt; max
    */
 
   public Range bound(Column min, Column max) {

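A hypothetical use of bound(Column, Column) as documented above (the column names and null qualifier/visibility fields here are illustrative assumptions, not part of this patch):

    import static java.nio.charset.StandardCharsets.UTF_8;

    import org.apache.accumulo.core.data.Column;
    import org.apache.accumulo.core.data.Range;

    public class BoundSketch {
      public static void main(String[] args) {
        Range row = new Range("row1");  // the whole row
        Column min = new Column("attr".getBytes(UTF_8), null, null);
        Column max = new Column("meta".getBytes(UTF_8), null, null);
        // clip the range so returned keys fall between the min and max columns
        Range bounded = row.bound(min, max);
        System.out.println(bounded);
      }
    }
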
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
index 0c35b98..9b52635 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
@@ -21,7 +21,7 @@ import java.util.LinkedList;
 import java.util.PriorityQueue;
 
 /**
- * A memory-bound queue that will grow until an element brings total size >= maxSize. From then on, only entries that are sorted larger than the smallest
+ * A memory-bound queue that will grow until an element brings total size &gt;= maxSize. From then on, only entries that are sorted larger than the smallest
  * current entry will be inserted/replaced.
  *
  * <p>

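The policy that javadoc describes, as a simplified stand-alone sketch (sizes stand in for cached blocks; the real class orders blocks by access priority, so this is an assumption-laden illustration, not the actual implementation):

    import java.util.PriorityQueue;

    public class BoundedQueueSketch {
      private final PriorityQueue<Long> heap = new PriorityQueue<>(); // smallest entry at the head
      private final long maxSize;
      private long totalSize = 0;

      public BoundedQueueSketch(long maxSize) { this.maxSize = maxSize; }

      public void add(long entry) {
        if (totalSize < maxSize) {            // growing phase: accept everything
          heap.add(entry);
          totalSize += entry;
        } else if (!heap.isEmpty() && entry > heap.peek()) {
          totalSize += entry - heap.poll();   // steady state: replace the smallest entry
          heap.add(entry);
        }
      }
    }
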
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
index f898a8f..f15e28f 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
@@ -251,7 +251,7 @@ public class ClassSize {
    *
    * @param num
    *          number to align to 8
-   * @return smallest number >= input that is a multiple of 8
+   * @return smallest number &gt;= input that is a multiple of 8
    */
   public static int align(int num) {
     return (int) (align((long) num));
@@ -262,7 +262,7 @@ public class ClassSize {
    *
    * @param num
    *          number to align to 8
-   * @return smallest number >= input that is a multiple of 8
+   * @return smallest number &gt;= input that is a multiple of 8
    */
   public static long align(long num) {
     // The 7 comes from that the alignSize is 8 which is the number of bytes

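The rounding contract above ("smallest number >= input that is a multiple of 8") boils down to one line of bit arithmetic; a self-contained sketch with sample values:

    public class AlignSketch {
      // add 7, then clear the low three bits: rounds up to the next multiple of 8
      static long align(long num) {
        return (num + 7L) & ~7L;
      }

      public static void main(String[] args) {
        System.out.println(align(0));   // 0
        System.out.println(align(1));   // 8
        System.out.println(align(8));   // 8
        System.out.println(align(9));   // 16
      }
    }
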
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
index 84b861b..46afc0b 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
@@ -54,18 +54,21 @@ public final class Utils {
    * Encoding a Long integer into a variable-length encoding format.
    * <ul>
    * <li>if n in [-32, 127): encode in one byte with the actual value. Otherwise,
-   * <li>if n in [-20*2^8, 20*2^8): encode in two bytes: byte[0] = n/256 - 52; byte[1]=n&0xff. Otherwise,
-   * <li>if n IN [-16*2^16, 16*2^16): encode in three bytes: byte[0]=n/2^16 - 88; byte[1]=(n>>8)&0xff; byte[2]=n&0xff. Otherwise,
-   * <li>if n in [-8*2^24, 8*2^24): encode in four bytes: byte[0]=n/2^24 - 112; byte[1] = (n>>16)&0xff; byte[2] = (n>>8)&0xff; byte[3]=n&0xff. Otherwise:
-   * <li>if n in [-2^31, 2^31): encode in five bytes: byte[0]=-125; byte[1] = (n>>24)&0xff; byte[2]=(n>>16)&0xff; byte[3]=(n>>8)&0xff; byte[4]=n&0xff;
-   * <li>if n in [-2^39, 2^39): encode in six bytes: byte[0]=-124; byte[1] = (n>>32)&0xff; byte[2]=(n>>24)&0xff; byte[3]=(n>>16)&0xff; byte[4]=(n>>8)&0xff;
-   * byte[5]=n&0xff
-   * <li>if n in [-2^47, 2^47): encode in seven bytes: byte[0]=-123; byte[1] = (n>>40)&0xff; byte[2]=(n>>32)&0xff; byte[3]=(n>>24)&0xff; byte[4]=(n>>16)&0xff;
-   * byte[5]=(n>>8)&0xff; byte[6]=n&0xff;
-   * <li>if n in [-2^55, 2^55): encode in eight bytes: byte[0]=-122; byte[1] = (n>>48)&0xff; byte[2] = (n>>40)&0xff; byte[3]=(n>>32)&0xff; byte[4]=(n>>24)&0xff;
-   * byte[5]=(n>>16)&0xff; byte[6]=(n>>8)&0xff; byte[7]=n&0xff;
-   * <li>if n in [-2^63, 2^63): encode in nine bytes: byte[0]=-121; byte[1] = (n>>54)&0xff; byte[2] = (n>>48)&0xff; byte[3] = (n>>40)&0xff;
-   * byte[4]=(n>>32)&0xff; byte[5]=(n>>24)&0xff; byte[6]=(n>>16)&0xff; byte[7]=(n>>8)&0xff; byte[8]=n&0xff;
+   * <li>if n in [-20*2^8, 20*2^8): encode in two bytes: byte[0] = n/256 - 52; byte[1]=n&amp;0xff. Otherwise,
+   * <li>if n IN [-16*2^16, 16*2^16): encode in three bytes: byte[0]=n/2^16 - 88; byte[1]=(n&gt;&gt;8)&amp;0xff; byte[2]=n&amp;0xff. Otherwise,
+   * <li>if n in [-8*2^24, 8*2^24): encode in four bytes: byte[0]=n/2^24 - 112; byte[1] = (n&gt;&gt;16)&amp;0xff; byte[2] = (n&gt;&gt;8)&amp;0xff;
+   * byte[3]=n&amp;0xff. Otherwise:
+   * <li>if n in [-2^31, 2^31): encode in five bytes: byte[0]=-125; byte[1] = (n&gt;&gt;24)&amp;0xff; byte[2]=(n&gt;&gt;16)&amp;0xff;
+   * byte[3]=(n&gt;&gt;8)&amp;0xff; byte[4]=n&amp;0xff;
+   * <li>if n in [-2^39, 2^39): encode in six bytes: byte[0]=-124; byte[1] = (n&gt;&gt;32)&amp;0xff; byte[2]=(n&gt;&gt;24)&amp;0xff;
+   * byte[3]=(n&gt;&gt;16)&amp;0xff; byte[4]=(n&gt;&gt;8)&amp;0xff; byte[5]=n&amp;0xff
+   * <li>if n in [-2^47, 2^47): encode in seven bytes: byte[0]=-123; byte[1] = (n&gt;&gt;40)&amp;0xff; byte[2]=(n&gt;&gt;32)&amp;0xff;
+   * byte[3]=(n&gt;&gt;24)&amp;0xff; byte[4]=(n&gt;&gt;16)&amp;0xff; byte[5]=(n&gt;&gt;8)&amp;0xff; byte[6]=n&amp;0xff;
+   * <li>if n in [-2^55, 2^55): encode in eight bytes: byte[0]=-122; byte[1] = (n&gt;&gt;48)&amp;0xff; byte[2] = (n&gt;&gt;40)&amp;0xff;
+   * byte[3]=(n&gt;&gt;32)&amp;0xff; byte[4]=(n&gt;&gt;24)&amp;0xff; byte[5]=(n&gt;&gt;16)&amp;0xff; byte[6]=(n&gt;&gt;8)&amp;0xff; byte[7]=n&amp;0xff;
+   * <li>if n in [-2^63, 2^63): encode in nine bytes: byte[0]=-121; byte[1] = (n&gt;&gt;54)&amp;0xff; byte[2] = (n&gt;&gt;48)&amp;0xff; byte[3] =
+   * (n&gt;&gt;40)&amp;0xff; byte[4]=(n&gt;&gt;32)&amp;0xff; byte[5]=(n&gt;&gt;24)&amp;0xff; byte[6]=(n&gt;&gt;16)&amp;0xff; byte[7]=(n&gt;&gt;8)&amp;0xff;
+   * byte[8]=n&amp;0xff;
    * </ul>
    *
    * @param out
@@ -159,10 +162,10 @@ public final class Utils {
   /**
    * Decoding the variable-length integer. Suppose the value of the first byte is FB, and the following bytes are NB[*].
    * <ul>
-   * <li>if (FB >= -32), return (long)FB;
-   * <li>if (FB in [-72, -33]), return (FB+52)<<8 + NB[0]&0xff;
-   * <li>if (FB in [-104, -73]), return (FB+88)<<16 + (NB[0]&0xff)<<8 + NB[1]&0xff;
-   * <li>if (FB in [-120, -105]), return (FB+112)<<24 + (NB[0]&0xff)<<16 + (NB[1]&0xff)<<8 + NB[2]&0xff;
+   * <li>if (FB &gt;= -32), return (long)FB;
+   * <li>if (FB in [-72, -33]), return (FB+52)&lt;&lt;8 + NB[0]&amp;0xff;
+   * <li>if (FB in [-104, -73]), return (FB+88)&lt;&lt;16 + (NB[0]&amp;0xff)&lt;&lt;8 + NB[1]&amp;0xff;
+   * <li>if (FB in [-120, -105]), return (FB+112)&lt;&lt;24 + (NB[0]&amp;0xff)&lt;&lt;16 + (NB[1]&amp;0xff)&lt;&lt;8 + NB[2]&amp;0xff;
    * <li>if (FB in [-128, -121]), return interpret NB[FB+129] as a signed big-endian integer.
    * </ul>
    *

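To make the first two branches of this encoding (and their decoding counterparts) concrete, a self-contained sketch; note the javadoc's n/256 is floor division, i.e. an arithmetic shift, and the longer encodings are omitted here:

    public class VIntSketch {
      static byte[] encodeSmall(long n) {
        if (n >= -32 && n < 127)
          return new byte[] {(byte) n};                                  // one byte: the value itself
        if (n >= -20 * 256 && n < 20 * 256)
          return new byte[] {(byte) ((n >> 8) - 52), (byte) (n & 0xff)}; // two bytes
        throw new IllegalArgumentException("longer encodings omitted in this sketch");
      }

      static long decodeSmall(byte[] b) {
        long fb = b[0];
        if (fb >= -32)
          return fb;                                   // one-byte case
        if (fb >= -72 && fb <= -33)
          return ((fb + 52) << 8) + (b[1] & 0xff);     // two-byte case
        throw new IllegalArgumentException("longer encodings omitted in this sketch");
      }

      public static void main(String[] args) {
        System.out.println(decodeSmall(encodeSmall(300)));   // 300
        System.out.println(decodeSmall(encodeSmall(-300)));  // -300
      }
    }
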
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
index 25f30a8..8e7a385 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
@@ -180,9 +180,9 @@ public class WholeColumnFamilyIterator implements SortedKeyValueIterator<Key,Val
   /**
    *
    * @param currentRow
-   *          All keys & cf have this in their row portion (do not modify!).
+   *          All keys and cf have this in their row portion (do not modify!).
    * @param keys
-   *          One key for each key & cf group in the row, ordered as they are given by the source iterator (do not modify!).
+   *          One key for each key and cf group in the row, ordered as they are given by the source iterator (do not modify!).
    * @param values
    *          One value for each key in keys, ordered to correspond to the ordering in keys (do not modify!).
    * @return true if we want to keep the row, false if we want to skip it

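A hypothetical subclass overriding the hook documented above (assuming the method is the class's protected filter(Text, List<Key>, List<Value>); not part of this patch):

    import java.util.List;

    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Value;
    import org.apache.accumulo.core.iterators.user.WholeColumnFamilyIterator;
    import org.apache.hadoop.io.Text;

    public class NonEmptyGroupIterator extends WholeColumnFamilyIterator {
      @Override
      protected boolean filter(Text currentRow, List<Key> keys, List<Value> values) {
        // keep a row/column-family group only if it carries at least one non-empty value
        for (Value v : values)
          if (v.getSize() > 0)
            return true;
        return false;
      }
    }
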
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
index b642cb8..af48770 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
@@ -20,7 +20,7 @@ import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.security.Credentials;
 
 /**
- * A metadata servicer for the metadata table (which holds metadata for user tables).<br />
+ * A metadata servicer for the metadata table (which holds metadata for user tables).<br>
  * The metadata table's metadata is serviced in the root table.
  */
 class ServicerForMetadataTable extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
index 205adc9..b279d01 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
@@ -26,7 +26,7 @@ import org.apache.accumulo.core.data.KeyExtent;
 import org.apache.accumulo.core.security.Credentials;
 
 /**
- * A metadata servicer for the root table.<br />
+ * A metadata servicer for the root table.<br>
  * The root table's metadata is serviced in zookeeper.
  */
 class ServicerForRootTable extends MetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
index d4827f2..607dfbd 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
@@ -20,7 +20,7 @@ import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.security.Credentials;
 
 /**
- * A metadata servicer for user tables.<br />
+ * A metadata servicer for user tables.<br>
  * Metadata for user tables are serviced in the metadata table.
  */
 class ServicerForUserTables extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java b/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
index 842e6f9..26d1cd0 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
@@ -37,7 +37,7 @@ import org.apache.hadoop.io.WritableComparator;
  * Validate the column visibility is a valid expression and set the visibility for a Mutation. See {@link ColumnVisibility#ColumnVisibility(byte[])} for the
  * definition of an expression.
  *
- * <P>
+ * <p>
 * The expression is a sequence of characters from the set [A-Za-z0-9_-.] along with the binary operators "&amp;" and "|" indicating that both operands are necessary, or that either is necessary. The following are valid expressions for visibility:
  * necessary, or the either is necessary. The following are valid expressions for visibility:
  *
@@ -48,7 +48,7 @@ import org.apache.hadoop.io.WritableComparator;
  * orange|(red&amp;yellow)
  * </pre>
  *
- * <P>
+ * <p>
  * The following are not valid expressions for visibility:
  *
  * <pre>
@@ -61,13 +61,13 @@ import org.apache.hadoop.io.WritableComparator;
  * dog|!cat
  * </pre>
  *
- * <P>
+ * <p>
  * In addition to the base set of visibilities, any character can be used in the expression if it is quoted. If the quoted term contains '&quot;' or '\', then
  * escape the character with '\'. The {@link #quote(String)} method can be used to properly quote and escape terms automatically. The following is an example of
  * a quoted term:
  *
  * <pre>
- * &quot;A#C&quot;<span />&amp;<span />B
+ * &quot;A#C&quot; &amp; B
  * </pre>
  */
 public class ColumnVisibility {

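The quoting rule described above in a short sketch (quote() escapes '"' and '\' and wraps the term so characters outside the base set parse; sketch only, not part of this patch):

    import org.apache.accumulo.core.security.ColumnVisibility;

    public class QuoteSketch {
      public static void main(String[] args) {
        String term = ColumnVisibility.quote("A#C");       // yields the quoted term "A#C"
        ColumnVisibility vis = new ColumnVisibility(term + "&B");
        // a scan now needs both the A#C and B authorizations to see the data
        System.out.println(vis);
      }
    }
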
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java b/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
index 67175c0..d9d13d7 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
@@ -29,7 +29,6 @@ import org.apache.accumulo.core.util.BadArgumentException;
 
 /**
  * A constraint that checks the visibility of columns against the actor's authorizations. Violation codes:
- * <p>
  * <ul>
  * <li>1 = failure to parse visibility expression</li>
  * <li>2 = insufficient authorization</li>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
index b9bf253..a7bb93d 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
@@ -195,9 +195,7 @@ public class CryptoModuleParameters {
   /**
    * Sets the name of the random number generator to use. The default for this for the baseline JCE implementation is "SHA1PRNG".
    * <p>
-   *
-   * <p>
-   * For <b>encryption</b>, this value is <b>required</b>. <br>
+   * For <b>encryption</b>, this value is <b>required</b>.<br>
    * For <b>decryption</b>, this value is often obtained from the underlying cipher stream.
    *
    * @param randomNumberGenerator
@@ -275,7 +273,6 @@ public class CryptoModuleParameters {
 
   /**
    * Sets the encrypted version of the plaintext key ({@link CryptoModuleParameters#getPlaintextKey()}). Generally this operation will be done either by:
-   * <p>
    * <ul>
    * <li>the code reading an encrypted stream and coming across the encrypted version of one of these keys, OR
    * <li>the {@link CryptoModuleParameters#getKeyEncryptionStrategyClass()} that encrypted the plaintext key (see
@@ -285,11 +282,9 @@ public class CryptoModuleParameters {
    * For <b>encryption</b>, this value is generally not required, but is usually set by the underlying module during encryption. <br>
    * For <b>decryption</b>, this value is <b>usually required</b>.
    *
-   *
    * @param encryptedKey
    *          the encrypted value of the plaintext key
    */
-
   public void setEncryptedKey(byte[] encryptedKey) {
     this.encryptedKey = encryptedKey;
   }

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
----------------------------------------------------------------------
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
index ca77b39..0ffeca0 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
@@ -34,20 +34,20 @@ import org.apache.hadoop.io.Text;
  * This iterator dedupes chunks and sets their visibilities to the combined visibility of the refs columns. For example, it would combine
  *
  * <pre>
- *    row1 refs uid1\0a A&B V0
- *    row1 refs uid2\0b C&D V0
- *    row1 ~chunk 0 A&B V1
- *    row1 ~chunk 0 C&D V1
- *    row1 ~chunk 0 E&F V1
- *    row1 ~chunk 0 G&H V1
+ *    row1 refs uid1\0a A&amp;B V0
+ *    row1 refs uid2\0b C&amp;D V0
+ *    row1 ~chunk 0 A&amp;B V1
+ *    row1 ~chunk 0 C&amp;D V1
+ *    row1 ~chunk 0 E&amp;F V1
+ *    row1 ~chunk 0 G&amp;H V1
  * </pre>
  *
  * into the following
  *
  * <pre>
- *    row1 refs uid1\0a A&B V0
- *    row1 refs uid2\0b C&D V0
- *    row1 ~chunk 0 (A&B)|(C&D) V1
+ *    row1 refs uid1\0a A&amp;B V0
+ *    row1 refs uid2\0b C&amp;D V0
+ *    row1 ~chunk 0 (A&amp;B)|(C&amp;D) V1
  * </pre>
  *
 * {@link VisibilityCombiner} is used to combine the visibilities.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
----------------------------------------------------------------------
diff --git a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
index 2b654ca..137a3fe 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
@@ -40,7 +40,7 @@ public class ServerConstants {
   public static final String INSTANCE_ID_DIR = "instance_id";
 
   /**
-   * current version (3) reflects additional namespace operations (ACCUMULO-802) in version 1.6.0 <br />
+   * current version (3) reflects additional namespace operations (ACCUMULO-802) in version 1.6.0<br>
    * (versions should never be negative)
    */
   public static final Integer WIRE_VERSION = 3;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
----------------------------------------------------------------------
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java b/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
index 6f34247..273c9de 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
@@ -255,7 +255,7 @@ public class SecurityOperation {
   }
 
   /**
-   * Checks if a user has a system permission<br/>
+   * Checks if a user has a system permission<br>
    * This cannot check if a system user has permission.
    *
    * @return true if a user exists and has permission; false otherwise
@@ -289,7 +289,7 @@ public class SecurityOperation {
   }
 
   /**
-   * Checks if a user has a table permission<br/>
+   * Checks if a user has a table permission<br>
    * This cannot check if a system user has permission.
    *
    * @return true if a user exists and has permission; false otherwise
@@ -312,7 +312,7 @@ public class SecurityOperation {
   }
 
   /**
-   * Checks if a user has a namespace permission<br/>
+   * Checks if a user has a namespace permission<br>
    * This cannot check if a system user has permission.
    *
    * @return true if a user exists and has permission; false otherwise

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
----------------------------------------------------------------------
diff --git a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
index a4db195..a4c5fd6 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
@@ -54,7 +54,7 @@ public class SystemCredentialsTest {
 
   /**
    * This is a test to ensure the string literal in {@link ConnectorImpl#ConnectorImpl(Instance, Credentials)} is kept up-to-date if we move the
-   * {@link SystemToken}<br/>
+   * {@link SystemToken}<br>
    * This check will not be needed after ACCUMULO-1578
    */
   @Test

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
index bb7e690..ef2f872 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
@@ -201,7 +201,7 @@ public class DefaultServlet extends BasicServlet {
     sb.append("</td>\n");
 
     sb.append("</tr></table>\n");
-    sb.append("<br/>\n");
+    sb.append("<br />\n");
 
     sb.append("<p/><table class=\"noborder\">\n");
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
index 19633b8..224ba91 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
@@ -92,9 +92,9 @@ public class TablesServlet extends BasicServlet {
     tableList.addSortableColumn("Entries<br />In&nbsp;Memory", new NumberType<Long>(),
         "The total number of key/value pairs stored in memory and not yet written to disk");
     tableList.addSortableColumn("Ingest", new NumberType<Long>(), "The number of Key/Value pairs inserted.  Note that deletes are 'inserted'.");
-    tableList.addSortableColumn("Entries<br/>Read", new NumberType<Long>(),
+    tableList.addSortableColumn("Entries<br />Read", new NumberType<Long>(),
         "The number of Key/Value pairs read on the server side.  Not all key values read may be returned to client because of filtering.");
-    tableList.addSortableColumn("Entries<br/>Returned", new NumberType<Long>(),
+    tableList.addSortableColumn("Entries<br />Returned", new NumberType<Long>(),
         "The number of Key/Value pairs returned to clients during queries.  This is <b>not</b> the number of scans.");
     tableList.addSortableColumn("Hold&nbsp;Time", new DurationType(0l, 0l),
         "The amount of time that ingest operations are suspended while waiting for data to be written to disk.");


[18/19] accumulo git commit: ACCUMULO-4103 Remove unnecessary findbugs.version 1.7 branch

Posted by ct...@apache.org.
ACCUMULO-4103 Remove unnecessary findbugs.version 1.7 branch

* findbugs.version defaults to 3.0.1 in 1.7 pom, which works with JDK7
  and JDK8, so no need to put it in the JDK8 profile.


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/0ccba14f
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/0ccba14f
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/0ccba14f

Branch: refs/heads/1.7
Commit: 0ccba14f8daf2352a12cd8f6a97b18373131a792
Parents: 6becfbd
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 22:12:28 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 22:12:28 2016 -0500

----------------------------------------------------------------------
 pom.xml | 3 ---
 1 file changed, 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/0ccba14f/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 55bbaab..644f506 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1409,9 +1409,6 @@
       <activation>
         <jdk>[1.8,1.9)</jdk>
       </activation>
-      <properties>
-        <findbugs.version>3.0.1</findbugs.version>
-      </properties>
       <build>
         <pluginManagement>
           <plugins>


[10/19] accumulo git commit: ACCUMULO-4102 Fix bad javadocs

Posted by ct...@apache.org.
ACCUMULO-4102 Fix bad javadocs


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/c8c0cf7f
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/c8c0cf7f
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/c8c0cf7f

Branch: refs/heads/1.6
Commit: c8c0cf7f90023a49cbb2b790f30819810bed0bf9
Parents: 7cc8137
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 19:50:42 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 20:48:51 2016 -0500

----------------------------------------------------------------------
 .../core/bloomfilter/DynamicBloomFilter.java    |  4 +--
 .../accumulo/core/client/BatchWriterConfig.java | 10 +++---
 .../core/client/ConditionalWriterConfig.java    |  4 +--
 .../client/mapred/AccumuloFileOutputFormat.java |  4 +--
 .../mapreduce/AccumuloFileOutputFormat.java     |  6 ++--
 .../lib/impl/FileOutputConfigurator.java        |  4 +--
 .../lib/util/FileOutputConfigurator.java        |  4 +--
 .../security/tokens/AuthenticationToken.java    |  2 +-
 .../core/conf/AccumuloConfiguration.java        |  2 +-
 .../org/apache/accumulo/core/data/Range.java    |  8 ++---
 .../file/blockfile/cache/CachedBlockQueue.java  |  2 +-
 .../core/file/blockfile/cache/ClassSize.java    |  4 +--
 .../accumulo/core/file/rfile/bcfile/Utils.java  | 35 +++++++++++---------
 .../user/WholeColumnFamilyIterator.java         |  4 +--
 .../core/metadata/ServicerForMetadataTable.java |  2 +-
 .../core/metadata/ServicerForRootTable.java     |  2 +-
 .../core/metadata/ServicerForUserTables.java    |  2 +-
 .../core/security/ColumnVisibility.java         |  8 ++---
 .../core/security/VisibilityConstraint.java     |  1 -
 .../security/crypto/CryptoModuleParameters.java |  7 +---
 .../examples/simple/filedata/ChunkCombiner.java | 18 +++++-----
 .../apache/accumulo/server/ServerConstants.java |  2 +-
 .../server/security/SecurityOperation.java      |  6 ++--
 .../server/security/SystemCredentialsTest.java  |  2 +-
 .../monitor/servlets/DefaultServlet.java        |  2 +-
 .../monitor/servlets/TablesServlet.java         |  4 +--
 26 files changed, 73 insertions(+), 76 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java b/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
index 740bdda..11e765a 100644
--- a/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
+++ b/core/src/main/java/org/apache/accumulo/core/bloomfilter/DynamicBloomFilter.java
@@ -60,8 +60,8 @@ import org.apache.hadoop.util.bloom.Key;
  * <p>
  * A dynamic Bloom filter (DBF) makes use of a <code>s * m</code> bit matrix but each of the <code>s</code> rows is a standard Bloom filter. The creation
  * process of a DBF is iterative. At the start, the DBF is a <code>1 * m</code> bit matrix, i.e., it is composed of a single standard Bloom filter. It assumes
- * that <code>n<sub>r</sub></code> elements are recorded in the initial bit vector, where <code>n<sub>r</sub> <= n</code> (<code>n</code> is the cardinality of
- * the set <code>A</code> to record in the filter).
+ * that <code>n<sub>r</sub></code> elements are recorded in the initial bit vector, where <code>n<sub>r</sub> &lt;= n</code> (<code>n</code> is the cardinality
+ * of the set <code>A</code> to record in the filter).
  * <p>
  * As the size of <code>A</code> grows during the execution of the application, several keys must be inserted in the DBF. When inserting a key into the DBF, one
  * must first get an active Bloom filter in the matrix. A Bloom filter is active when the number of recorded keys, <code>n<sub>r</sub></code>, is strictly less

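As an orientation aid for the growth behavior described above, a sketch (the constructor arguments are assumed to follow Hadoop's bloom-filter convention of vector size, hash count, hash type, and n_r; not part of this patch):

    import static java.nio.charset.StandardCharsets.UTF_8;

    import org.apache.accumulo.core.bloomfilter.DynamicBloomFilter;
    import org.apache.hadoop.util.bloom.Key;
    import org.apache.hadoop.util.hash.Hash;

    public class DbfSketch {
      public static void main(String[] args) {
        // one 1024-bit row with 3 hash functions; a new row is appended once n_r = 100 keys are recorded
        DynamicBloomFilter dbf = new DynamicBloomFilter(1024, 3, Hash.MURMUR_HASH, 100);
        dbf.add(new Key("k1".getBytes(UTF_8)));
        System.out.println(dbf.membershipTest(new Key("k1".getBytes(UTF_8)))); // true
      }
    }
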
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
index 08eb853..320ecf4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
@@ -49,7 +49,7 @@ public class BatchWriterConfig implements Writable {
   private Integer maxWriteThreads = null;
 
   /**
-   * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br />
+   * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br>
    * If set to a value smaller than a single mutation, then it will {@link BatchWriter#flush()} after each added mutation. Must be non-negative.
    *
    * <p>
@@ -69,11 +69,11 @@ public class BatchWriterConfig implements Writable {
   }
 
   /**
-   * Sets the maximum amount of time to hold the data in memory before flushing it to servers.<br />
+   * Sets the maximum amount of time to hold the data in memory before flushing it to servers.<br>
    * For no maximum, set to zero, or {@link Long#MAX_VALUE} with {@link TimeUnit#MILLISECONDS}.
    *
    * <p>
-   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br />
+   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br>
    * If this truncation would result in making the value zero when it was specified as non-zero, then a minimum value of one {@link TimeUnit#MILLISECONDS} will
    * be used.
    *
@@ -101,11 +101,11 @@ public class BatchWriterConfig implements Writable {
   }
 
   /**
-   * Sets the maximum amount of time an unresponsive server will be re-tried. When this timeout is exceeded, the {@link BatchWriter} should throw an exception.<br />
+   * Sets the maximum amount of time an unresponsive server will be re-tried. When this timeout is exceeded, the {@link BatchWriter} should throw an exception.<br>
    * For no timeout, set to zero, or {@link Long#MAX_VALUE} with {@link TimeUnit#MILLISECONDS}.
    *
    * <p>
-   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br />
+   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br>
    * If this truncation would result in making the value zero when it was specified as non-zero, then a minimum value of one {@link TimeUnit#MILLISECONDS} will
    * be used.
    *

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
index 360a302..627a580 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
@@ -51,11 +51,11 @@ public class ConditionalWriterConfig {
 
   /**
    * Sets the maximum amount of time an unresponsive server will be re-tried. When this timeout is exceeded, the {@link ConditionalWriter} should return the
-   * mutation with an exception.<br />
+   * mutation with an exception.<br>
    * For no timeout, set to zero, or {@link Long#MAX_VALUE} with {@link TimeUnit#MILLISECONDS}.
    *
    * <p>
-   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br />
+   * {@link TimeUnit#MICROSECONDS} or {@link TimeUnit#NANOSECONDS} will be truncated to the nearest {@link TimeUnit#MILLISECONDS}.<br>
    * If this truncation would result in making the value zero when it was specified as non-zero, then a minimum value of one {@link TimeUnit#MILLISECONDS} will
    * be used.
    *

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
index 2f2b4b2..236fae5 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
@@ -41,7 +41,7 @@ import org.apache.hadoop.util.Progressable;
 import org.apache.log4j.Logger;
 
 /**
- * This class allows MapReduce jobs to write output in the Accumulo data file format.<br />
+ * This class allows MapReduce jobs to write output in the Accumulo data file format.<br>
  * Care should be taken to write only sorted data (sorted by {@link Key}), as this is an important requirement of Accumulo data files.
  *
  * <p>
@@ -81,7 +81,7 @@ public class AccumuloFileOutputFormat extends FileOutputFormat<Key,Value> {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
index 500f072..8c389d4 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
@@ -20,7 +20,6 @@ import java.io.IOException;
 import java.util.Arrays;
 
 import org.apache.accumulo.core.client.mapreduce.lib.impl.FileOutputConfigurator;
-import org.apache.accumulo.core.util.HadoopCompatUtil;
 import org.apache.accumulo.core.conf.AccumuloConfiguration;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.data.ArrayByteSequence;
@@ -29,6 +28,7 @@ import org.apache.accumulo.core.data.Value;
 import org.apache.accumulo.core.file.FileOperations;
 import org.apache.accumulo.core.file.FileSKVWriter;
 import org.apache.accumulo.core.security.ColumnVisibility;
+import org.apache.accumulo.core.util.HadoopCompatUtil;
 import org.apache.commons.collections.map.LRUMap;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
@@ -40,7 +40,7 @@ import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
 import org.apache.log4j.Logger;
 
 /**
- * This class allows MapReduce jobs to write output in the Accumulo data file format.<br />
+ * This class allows MapReduce jobs to write output in the Accumulo data file format.<br>
  * Care should be taken to write only sorted data (sorted by {@link Key}), as this is an important requirement of Accumulo data files.
  *
  * <p>
@@ -80,7 +80,7 @@ public class AccumuloFileOutputFormat extends FileOutputFormat<Key,Value> {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
index 882c6d3..f0f67b2 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/impl/FileOutputConfigurator.java
@@ -39,7 +39,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br />
+   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br>
    * These properties correspond to the supported public static setter methods available to this class.
    *
    * @param property
@@ -120,7 +120,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
index d43ecda..b4f6b8a 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/mapreduce/lib/util/FileOutputConfigurator.java
@@ -39,7 +39,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br />
+   * The supported Accumulo properties we set in this OutputFormat, that change the behavior of the RecordWriter.<br>
    * These properties correspond to the supported public static setter methods available to this class.
    *
    * @param property
@@ -95,7 +95,7 @@ public class FileOutputConfigurator extends ConfiguratorBase {
   }
 
   /**
-   * Sets the size for data blocks within each file.<br />
+   * Sets the size for data blocks within each file.<br>
    * Data blocks are a span of key/value pairs stored in the file that are compressed and indexed as a group.
    *
    * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
index 7836ea5..5c20555 100644
--- a/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/security/tokens/AuthenticationToken.java
@@ -39,7 +39,7 @@ import org.apache.hadoop.io.Writable;
 public interface AuthenticationToken extends Writable, Destroyable, Cloneable {
 
   /**
-   * A utility class to serialize/deserialize {@link AuthenticationToken} objects.<br/>
+   * A utility class to serialize/deserialize {@link AuthenticationToken} objects.<br>
    * Unfortunately, these methods are provided in an inner-class, to avoid breaking the interface API.
    *
    * @since 1.6.0

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
index 33b7aef..5da92cb 100644
--- a/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
+++ b/core/src/main/java/org/apache/accumulo/core/conf/AccumuloConfiguration.java
@@ -80,7 +80,7 @@ public abstract class AccumuloConfiguration implements Iterable<Entry<String,Str
   }
 
   /**
-   * This method returns all properties in a map of string->string under the given prefix property.
+   * This method returns all properties in a map of string-&gt;string under the given prefix property.
    *
    * @param property
    *          the prefix property, and must be of type PropertyType.PREFIX

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/data/Range.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/data/Range.java b/core/src/main/java/org/apache/accumulo/core/data/Range.java
index b832c33..7ccfe3d 100644
--- a/core/src/main/java/org/apache/accumulo/core/data/Range.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Range.java
@@ -423,7 +423,7 @@ public class Range implements WritableComparable<Range> {
 
         if (range.infiniteStopKey || (cmp = range.stop.compareTo(currentRange.stop)) > 0 || (cmp == 0 && range.stopKeyInclusive)) {
           currentRange = new Range(currentRange.getStartKey(), currentStartKeyInclusive, range.getEndKey(), range.stopKeyInclusive);
-        }/* else currentRange contains ral.get(i) */
+        } /* else currentRange contains ral.get(i) */
       } else {
         ret.add(currentRange);
         currentRange = range;
@@ -506,12 +506,12 @@ public class Range implements WritableComparable<Range> {
   }
 
   /**
-   * Creates a new range that is bounded by the columns passed in. The stary key in the returned range will have a column >= to the minimum column. The end key
-   * in the returned range will have a column <= the max column.
+   * Creates a new range that is bounded by the columns passed in. The start key in the returned range will have a column &gt;= the minimum column. The end
+   * key in the returned range will have a column &lt;= the max column.
    *
    * @return a column bounded range
    * @throws IllegalArgumentException
-   *           if min > max
+   *           if min &gt; max
    */
 
   public Range bound(Column min, Column max) {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
index 0c35b98..9b52635 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
@@ -21,7 +21,7 @@ import java.util.LinkedList;
 import java.util.PriorityQueue;
 
 /**
- * A memory-bound queue that will grow until an element brings total size >= maxSize. From then on, only entries that are sorted larger than the smallest
+ * A memory-bound queue that will grow until an element brings total size &gt;= maxSize. From then on, only entries that are sorted larger than the smallest
  * current entry will be inserted/replaced.
  *
  * <p>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
index f898a8f..f15e28f 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
@@ -251,7 +251,7 @@ public class ClassSize {
    *
    * @param num
    *          number to align to 8
-   * @return smallest number >= input that is a multiple of 8
+   * @return smallest number &gt;= input that is a multiple of 8
    */
   public static int align(int num) {
     return (int) (align((long) num));
@@ -262,7 +262,7 @@ public class ClassSize {
    *
    * @param num
    *          number to align to 8
-   * @return smallest number >= input that is a multiple of 8
+   * @return smallest number &gt;= input that is a multiple of 8
    */
   public static long align(long num) {
     // The 7 comes from that the alignSize is 8 which is the number of bytes

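For reference, the alignment rule documented here (not part of this commit) is the classic add-then-mask computation; the shift-based arithmetic in the method is equivalent:

    public class AlignExample {
      static long align(long num) {
        return (num + 7L) & ~7L; // smallest multiple of 8 that is >= num
      }

      public static void main(String[] args) {
        System.out.println(align(0));  // 0
        System.out.println(align(13)); // 16
        System.out.println(align(16)); // 16
      }
    }
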
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
index 84b861b..46afc0b 100644
--- a/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
+++ b/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
@@ -54,18 +54,21 @@ public final class Utils {
    * Encoding a Long integer into a variable-length encoding format.
    * <ul>
    * <li>if n in [-32, 127): encode in one byte with the actual value. Otherwise,
-   * <li>if n in [-20*2^8, 20*2^8): encode in two bytes: byte[0] = n/256 - 52; byte[1]=n&0xff. Otherwise,
-   * <li>if n IN [-16*2^16, 16*2^16): encode in three bytes: byte[0]=n/2^16 - 88; byte[1]=(n>>8)&0xff; byte[2]=n&0xff. Otherwise,
-   * <li>if n in [-8*2^24, 8*2^24): encode in four bytes: byte[0]=n/2^24 - 112; byte[1] = (n>>16)&0xff; byte[2] = (n>>8)&0xff; byte[3]=n&0xff. Otherwise:
-   * <li>if n in [-2^31, 2^31): encode in five bytes: byte[0]=-125; byte[1] = (n>>24)&0xff; byte[2]=(n>>16)&0xff; byte[3]=(n>>8)&0xff; byte[4]=n&0xff;
-   * <li>if n in [-2^39, 2^39): encode in six bytes: byte[0]=-124; byte[1] = (n>>32)&0xff; byte[2]=(n>>24)&0xff; byte[3]=(n>>16)&0xff; byte[4]=(n>>8)&0xff;
-   * byte[5]=n&0xff
-   * <li>if n in [-2^47, 2^47): encode in seven bytes: byte[0]=-123; byte[1] = (n>>40)&0xff; byte[2]=(n>>32)&0xff; byte[3]=(n>>24)&0xff; byte[4]=(n>>16)&0xff;
-   * byte[5]=(n>>8)&0xff; byte[6]=n&0xff;
-   * <li>if n in [-2^55, 2^55): encode in eight bytes: byte[0]=-122; byte[1] = (n>>48)&0xff; byte[2] = (n>>40)&0xff; byte[3]=(n>>32)&0xff; byte[4]=(n>>24)&0xff;
-   * byte[5]=(n>>16)&0xff; byte[6]=(n>>8)&0xff; byte[7]=n&0xff;
-   * <li>if n in [-2^63, 2^63): encode in nine bytes: byte[0]=-121; byte[1] = (n>>54)&0xff; byte[2] = (n>>48)&0xff; byte[3] = (n>>40)&0xff;
-   * byte[4]=(n>>32)&0xff; byte[5]=(n>>24)&0xff; byte[6]=(n>>16)&0xff; byte[7]=(n>>8)&0xff; byte[8]=n&0xff;
+   * <li>if n in [-20*2^8, 20*2^8): encode in two bytes: byte[0] = n/256 - 52; byte[1]=n&amp;0xff. Otherwise,
+   * <li>if n IN [-16*2^16, 16*2^16): encode in three bytes: byte[0]=n/2^16 - 88; byte[1]=(n&gt;&gt;8)&amp;0xff; byte[2]=n&amp;0xff. Otherwise,
+   * <li>if n in [-8*2^24, 8*2^24): encode in four bytes: byte[0]=n/2^24 - 112; byte[1] = (n&gt;&gt;16)&amp;0xff; byte[2] = (n&gt;&gt;8)&amp;0xff;
+   * byte[3]=n&amp;0xff. Otherwise:
+   * <li>if n in [-2^31, 2^31): encode in five bytes: byte[0]=-125; byte[1] = (n&gt;&gt;24)&amp;0xff; byte[2]=(n&gt;&gt;16)&amp;0xff;
+   * byte[3]=(n&gt;&gt;8)&amp;0xff; byte[4]=n&amp;0xff;
+   * <li>if n in [-2^39, 2^39): encode in six bytes: byte[0]=-124; byte[1] = (n&gt;&gt;32)&amp;0xff; byte[2]=(n&gt;&gt;24)&amp;0xff;
+   * byte[3]=(n&gt;&gt;16)&amp;0xff; byte[4]=(n&gt;&gt;8)&amp;0xff; byte[5]=n&amp;0xff
+   * <li>if n in [-2^47, 2^47): encode in seven bytes: byte[0]=-123; byte[1] = (n&gt;&gt;40)&amp;0xff; byte[2]=(n&gt;&gt;32)&amp;0xff;
+   * byte[3]=(n&gt;&gt;24)&amp;0xff; byte[4]=(n&gt;&gt;16)&amp;0xff; byte[5]=(n&gt;&gt;8)&amp;0xff; byte[6]=n&amp;0xff;
+   * <li>if n in [-2^55, 2^55): encode in eight bytes: byte[0]=-122; byte[1] = (n&gt;&gt;48)&amp;0xff; byte[2] = (n&gt;&gt;40)&amp;0xff;
+   * byte[3]=(n&gt;&gt;32)&amp;0xff; byte[4]=(n&gt;&gt;24)&amp;0xff; byte[5]=(n&gt;&gt;16)&amp;0xff; byte[6]=(n&gt;&gt;8)&amp;0xff; byte[7]=n&amp;0xff;
+   * <li>if n in [-2^63, 2^63): encode in nine bytes: byte[0]=-121; byte[1] = (n&gt;&gt;54)&amp;0xff; byte[2] = (n&gt;&gt;48)&amp;0xff; byte[3] =
+   * (n&gt;&gt;40)&amp;0xff; byte[4]=(n&gt;&gt;32)&amp;0xff; byte[5]=(n&gt;&gt;24)&amp;0xff; byte[6]=(n&gt;&gt;16)&amp;0xff; byte[7]=(n&gt;&gt;8)&amp;0xff;
+   * byte[8]=n&amp;0xff;
    * </ul>
    *
    * @param out
@@ -159,10 +162,10 @@ public final class Utils {
   /**
    * Decoding the variable-length integer. Suppose the value of the first byte is FB, and the following bytes are NB[*].
    * <ul>
-   * <li>if (FB >= -32), return (long)FB;
-   * <li>if (FB in [-72, -33]), return (FB+52)<<8 + NB[0]&0xff;
-   * <li>if (FB in [-104, -73]), return (FB+88)<<16 + (NB[0]&0xff)<<8 + NB[1]&0xff;
-   * <li>if (FB in [-120, -105]), return (FB+112)<<24 + (NB[0]&0xff)<<16 + (NB[1]&0xff)<<8 + NB[2]&0xff;
+   * <li>if (FB &gt;= -32), return (long)FB;
+   * <li>if (FB in [-72, -33]), return (FB+52)&lt;&lt;8 + NB[0]&amp;0xff;
+   * <li>if (FB in [-104, -73]), return (FB+88)&lt;&lt;16 + (NB[0]&amp;0xff)&lt;&lt;8 + NB[1]&amp;0xff;
+   * <li>if (FB in [-120, -105]), return (FB+112)&lt;&lt;24 + (NB[0]&amp;0xff)&lt;&lt;16 + (NB[1]&amp;0xff)&lt;&lt;8 + NB[2]&amp;0xff;
    * <li>if (FB in [-128, -121]), return interpret NB[FB+129] as a signed big-endian integer.
    * </ul>
    *

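For readers working through the decode table above by hand, a sketch of its first two rules only (not part of this commit; real callers should use the readVLong/writeVLong methods in this class):

    import java.io.DataInput;
    import java.io.IOException;

    public class VIntSketch {
      static long readPrefix(DataInput in) throws IOException {
        byte fb = in.readByte();
        if (fb >= -32) {
          // one-byte encoding: the byte is the value, in [-32, 127)
          return fb;
        } else if (fb >= -72) {
          // two-byte encoding: (FB+52)<<8 + NB[0]&0xff
          return ((fb + 52) << 8) | (in.readByte() & 0xff);
        }
        throw new IOException("three-byte and longer encodings omitted from this sketch");
      }
    }
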
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
index 25f30a8..8e7a385 100644
--- a/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
+++ b/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
@@ -180,9 +180,9 @@ public class WholeColumnFamilyIterator implements SortedKeyValueIterator<Key,Val
   /**
    *
    * @param currentRow
-   *          All keys & cf have this in their row portion (do not modify!).
+   *          All keys and cf have this in their row portion (do not modify!).
    * @param keys
-   *          One key for each key & cf group in the row, ordered as they are given by the source iterator (do not modify!).
+   *          One key for each key and cf group in the row, ordered as they are given by the source iterator (do not modify!).
    * @param values
    *          One value for each key in keys, ordered to correspond to the ordering in keys (do not modify!).
    * @return true if we want to keep the row, false if we want to skip it

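A sketch of how this hook might be overridden (not part of this commit; the method name filter and the protected modifier are assumptions taken from the surrounding source, mirroring WholeRowIterator):

    import java.util.List;

    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Value;
    import org.apache.accumulo.core.iterators.user.WholeColumnFamilyIterator;
    import org.apache.hadoop.io.Text;

    public class NonEmptyRowIterator extends WholeColumnFamilyIterator {
      @Override
      protected boolean filter(Text currentRow, List<Key> keys, List<Value> values) {
        // Keep the row only if at least one key/cf group survived upstream filtering.
        return !keys.isEmpty();
      }
    }
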
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
index b642cb8..af48770 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
@@ -20,7 +20,7 @@ import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.security.Credentials;
 
 /**
- * A metadata servicer for the metadata table (which holds metadata for user tables).<br />
+ * A metadata servicer for the metadata table (which holds metadata for user tables).<br>
  * The metadata table's metadata is serviced in the root table.
  */
 class ServicerForMetadataTable extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
index 205adc9..b279d01 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
@@ -26,7 +26,7 @@ import org.apache.accumulo.core.data.KeyExtent;
 import org.apache.accumulo.core.security.Credentials;
 
 /**
- * A metadata servicer for the root table.<br />
+ * A metadata servicer for the root table.<br>
  * The root table's metadata is serviced in zookeeper.
  */
 class ServicerForRootTable extends MetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
index d4827f2..607dfbd 100644
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
@@ -20,7 +20,7 @@ import org.apache.accumulo.core.client.Instance;
 import org.apache.accumulo.core.security.Credentials;
 
 /**
- * A metadata servicer for user tables.<br />
+ * A metadata servicer for user tables.<br>
  * Metadata for user tables is serviced in the metadata table.
  */
 class ServicerForUserTables extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java b/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
index 842e6f9..26d1cd0 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
@@ -37,7 +37,7 @@ import org.apache.hadoop.io.WritableComparator;
  * Validate the column visibility is a valid expression and set the visibility for a Mutation. See {@link ColumnVisibility#ColumnVisibility(byte[])} for the
  * definition of an expression.
  *
- * <P>
+ * <p>
  * The expression is a sequence of characters from the set [A-Za-z0-9_-.] along with the binary operators "&amp;" and "|" indicating that both operands are
  * necessary, or the either is necessary. The following are valid expressions for visibility:
  *
@@ -48,7 +48,7 @@ import org.apache.hadoop.io.WritableComparator;
  * orange|(red&amp;yellow)
  * </pre>
  *
- * <P>
+ * <p>
  * The following are not valid expressions for visibility:
  *
  * <pre>
@@ -61,13 +61,13 @@ import org.apache.hadoop.io.WritableComparator;
  * dog|!cat
  * </pre>
  *
- * <P>
+ * <p>
  * In addition to the base set of visibilities, any character can be used in the expression if it is quoted. If the quoted term contains '&quot;' or '\', then
  * escape the character with '\'. The {@link #quote(String)} method can be used to properly quote and escape terms automatically. The following is an example of
  * a quoted term:
  *
  * <pre>
- * &quot;A#C&quot;<span />&amp;<span />B
+ * &quot;A#C&quot; &amp; B
  * </pre>
  */
 public class ColumnVisibility {

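To make the expression and quoting rules concrete, a short sketch (not part of this commit) exercising the examples shown above:

    import org.apache.accumulo.core.security.ColumnVisibility;

    public class VisibilityExample {
      public static void main(String[] args) {
        new ColumnVisibility("orange|(red&yellow)");  // valid: parses cleanly

        // quote() adds the surrounding quotes and any needed '\' escapes
        String quoted = ColumnVisibility.quote("A#C");
        new ColumnVisibility(quoted + "&B");          // the quoted-term example above

        // new ColumnVisibility("A|B&C");             // invalid: mixed operators
        //                                            // without parentheses throw
      }
    }
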
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java b/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
index 67175c0..d9d13d7 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/VisibilityConstraint.java
@@ -29,7 +29,6 @@ import org.apache.accumulo.core.util.BadArgumentException;
 
 /**
  * A constraint that checks the visibility of columns against the actor's authorizations. Violation codes:
- * <p>
  * <ul>
  * <li>1 = failure to parse visibility expression</li>
  * <li>2 = insufficient authorization</li>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
----------------------------------------------------------------------
diff --git a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
index b9bf253..a7bb93d 100644
--- a/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
+++ b/core/src/main/java/org/apache/accumulo/core/security/crypto/CryptoModuleParameters.java
@@ -195,9 +195,7 @@ public class CryptoModuleParameters {
   /**
    * Sets the name of the random number generator to use. The default for this for the baseline JCE implementation is "SHA1PRNG".
    * <p>
-   *
-   * <p>
-   * For <b>encryption</b>, this value is <b>required</b>. <br>
+   * For <b>encryption</b>, this value is <b>required</b>.<br>
    * For <b>decryption</b>, this value is often obtained from the underlying cipher stream.
    *
    * @param randomNumberGenerator
@@ -275,7 +273,6 @@ public class CryptoModuleParameters {
 
   /**
    * Sets the encrypted version of the plaintext key ({@link CryptoModuleParameters#getPlaintextKey()}). Generally this operation will be done either by:
-   * <p>
    * <ul>
    * <li>the code reading an encrypted stream and coming across the encrypted version of one of these keys, OR
    * <li>the {@link CryptoModuleParameters#getKeyEncryptionStrategyClass()} that encrypted the plaintext key (see
@@ -285,11 +282,9 @@ public class CryptoModuleParameters {
    * For <b>encryption</b>, this value is generally not required, but is usually set by the underlying module during encryption. <br>
    * For <b>decryption</b>, this value is <b>usually required</b>.
    *
-   *
    * @param encryptedKey
    *          the encrypted value of the plaintext key
    */
-
   public void setEncryptedKey(byte[] encryptedKey) {
     this.encryptedKey = encryptedKey;
   }

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
----------------------------------------------------------------------
diff --git a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
index ca77b39..0ffeca0 100644
--- a/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
+++ b/examples/simple/src/main/java/org/apache/accumulo/examples/simple/filedata/ChunkCombiner.java
@@ -34,20 +34,20 @@ import org.apache.hadoop.io.Text;
  * This iterator dedupes chunks and sets their visibilities to the combined visibility of the refs columns. For example, it would combine
  *
  * <pre>
- *    row1 refs uid1\0a A&B V0
- *    row1 refs uid2\0b C&D V0
- *    row1 ~chunk 0 A&B V1
- *    row1 ~chunk 0 C&D V1
- *    row1 ~chunk 0 E&F V1
- *    row1 ~chunk 0 G&H V1
+ *    row1 refs uid1\0a A&amp;B V0
+ *    row1 refs uid2\0b C&amp;D V0
+ *    row1 ~chunk 0 A&amp;B V1
+ *    row1 ~chunk 0 C&amp;D V1
+ *    row1 ~chunk 0 E&amp;F V1
+ *    row1 ~chunk 0 G&amp;H V1
  * </pre>
  *
  * into the following
  *
  * <pre>
- *    row1 refs uid1\0a A&B V0
- *    row1 refs uid2\0b C&D V0
- *    row1 ~chunk 0 (A&B)|(C&D) V1
+ *    row1 refs uid1\0a A&amp;B V0
+ *    row1 refs uid2\0b C&amp;D V0
+ *    row1 ~chunk 0 (A&amp;B)|(C&amp;D) V1
  * </pre>
  *
  * {@link VisibilityCombiner} is used to combine the visibilities.

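The visibility merge shown in the before/after blocks (the real combination is done by VisibilityCombiner; this reduction is not part of this commit) amounts to parenthesizing each refs visibility and joining with '|':

    import java.util.Arrays;
    import java.util.List;

    public class CombineVisibilities {
      static String combine(List<String> visibilities) {
        StringBuilder sb = new StringBuilder();
        for (String v : visibilities) {
          if (sb.length() > 0)
            sb.append('|');
          sb.append('(').append(v).append(')');
        }
        return sb.toString();
      }

      public static void main(String[] args) {
        // prints (A&B)|(C&D), matching the combined ~chunk row above
        System.out.println(combine(Arrays.asList("A&B", "C&D")));
      }
    }
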
http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
----------------------------------------------------------------------
diff --git a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
index 2b654ca..137a3fe 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
@@ -40,7 +40,7 @@ public class ServerConstants {
   public static final String INSTANCE_ID_DIR = "instance_id";
 
   /**
-   * current version (3) reflects additional namespace operations (ACCUMULO-802) in version 1.6.0 <br />
+   * current version (3) reflects additional namespace operations (ACCUMULO-802) in version 1.6.0<br>
    * (versions should never be negative)
    */
   public static final Integer WIRE_VERSION = 3;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
----------------------------------------------------------------------
diff --git a/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java b/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
index 6f34247..273c9de 100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
@@ -255,7 +255,7 @@ public class SecurityOperation {
   }
 
   /**
-   * Checks if a user has a system permission<br/>
+   * Checks if a user has a system permission<br>
    * This cannot check if a system user has permission.
    *
    * @return true if a user exists and has permission; false otherwise
@@ -289,7 +289,7 @@ public class SecurityOperation {
   }
 
   /**
-   * Checks if a user has a table permission<br/>
+   * Checks if a user has a table permission<br>
    * This cannot check if a system user has permission.
    *
    * @return true if a user exists and has permission; false otherwise
@@ -312,7 +312,7 @@ public class SecurityOperation {
   }
 
   /**
-   * Checks if a user has a namespace permission<br/>
+   * Checks if a user has a namespace permission<br>
    * This cannot check if a system user has permission.
    *
    * @return true if a user exists and has permission; false otherwise

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
----------------------------------------------------------------------
diff --git a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
index a4db195..a4c5fd6 100644
--- a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
@@ -54,7 +54,7 @@ public class SystemCredentialsTest {
 
   /**
    * This is a test to ensure the string literal in {@link ConnectorImpl#ConnectorImpl(Instance, Credentials)} is kept up-to-date if we move the
-   * {@link SystemToken}<br/>
+   * {@link SystemToken}<br>
    * This check will not be needed after ACCUMULO-1578
    */
   @Test

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
index bb7e690..ef2f872 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
@@ -201,7 +201,7 @@ public class DefaultServlet extends BasicServlet {
     sb.append("</td>\n");
 
     sb.append("</tr></table>\n");
-    sb.append("<br/>\n");
+    sb.append("<br />\n");
 
     sb.append("<p/><table class=\"noborder\">\n");
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/c8c0cf7f/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
----------------------------------------------------------------------
diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
index 19633b8..224ba91 100644
--- a/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
+++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/TablesServlet.java
@@ -92,9 +92,9 @@ public class TablesServlet extends BasicServlet {
     tableList.addSortableColumn("Entries<br />In&nbsp;Memory", new NumberType<Long>(),
         "The total number of key/value pairs stored in memory and not yet written to disk");
     tableList.addSortableColumn("Ingest", new NumberType<Long>(), "The number of Key/Value pairs inserted.  Note that deletes are 'inserted'.");
-    tableList.addSortableColumn("Entries<br/>Read", new NumberType<Long>(),
+    tableList.addSortableColumn("Entries<br />Read", new NumberType<Long>(),
         "The number of Key/Value pairs read on the server side.  Not all key values read may be returned to client because of filtering.");
-    tableList.addSortableColumn("Entries<br/>Returned", new NumberType<Long>(),
+    tableList.addSortableColumn("Entries<br />Returned", new NumberType<Long>(),
         "The number of Key/Value pairs returned to clients during queries.  This is <b>not</b> the number of scans.");
     tableList.addSortableColumn("Hold&nbsp;Time", new DurationType(0l, 0l),
         "The amount of time that ingest operations are suspended while waiting for data to be written to disk.");


[16/19] accumulo git commit: Merge branch 'javadoc-jdk8-1.6' into javadoc-jdk8-1.7

Posted by ct...@apache.org.
Merge branch 'javadoc-jdk8-1.6' into javadoc-jdk8-1.7

* Merge to 1.7 branch, with additional javadoc fixes so build works
* Prevent merging maven-plugin-plugin version 3.4 specification (as it only applied to 1.6 branch)


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/6becfbd3
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/6becfbd3
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/6becfbd3

Branch: refs/heads/master
Commit: 6becfbd3852dc10f46658827d064f7d1e9ee6c45
Parents: d505843 c8c0cf7
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 22:04:57 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 22:04:57 2016 -0500

----------------------------------------------------------------------
 .../core/bloomfilter/DynamicBloomFilter.java    |  4 +--
 .../accumulo/core/client/BatchWriterConfig.java | 10 +++---
 .../core/client/ConditionalWriterConfig.java    |  4 +--
 .../client/mapred/AccumuloFileOutputFormat.java |  4 +--
 .../mapreduce/AccumuloFileOutputFormat.java     |  4 +--
 .../lib/impl/FileOutputConfigurator.java        |  4 +--
 .../lib/util/FileOutputConfigurator.java        |  4 +--
 .../security/tokens/AuthenticationToken.java    |  2 +-
 .../core/constraints/VisibilityConstraint.java  |  1 -
 .../java/org/apache/accumulo/core/data/Key.java |  2 +-
 .../org/apache/accumulo/core/data/Range.java    |  6 ++--
 .../file/blockfile/cache/CachedBlockQueue.java  |  2 +-
 .../core/file/blockfile/cache/ClassSize.java    |  4 +--
 .../accumulo/core/file/rfile/bcfile/Utils.java  | 35 +++++++++++---------
 .../user/WholeColumnFamilyIterator.java         |  4 +--
 .../core/metadata/ServicerForMetadataTable.java |  2 +-
 .../core/metadata/ServicerForRootTable.java     |  2 +-
 .../core/metadata/ServicerForUserTables.java    |  2 +-
 .../core/metadata/schema/MetadataSchema.java    |  2 +-
 .../core/replication/ReplicationSchema.java     |  6 ++--
 .../core/security/ColumnVisibility.java         |  8 ++---
 .../security/crypto/CryptoModuleParameters.java |  7 +---
 .../accumulo/core/conf/config-header.html       | 12 +++----
 .../examples/simple/filedata/ChunkCombiner.java | 18 +++++-----
 pom.xml                                         | 26 +++++++++++++++
 .../apache/accumulo/server/ServerConstants.java |  2 +-
 .../server/master/balancer/GroupBalancer.java   |  4 +--
 .../master/balancer/RegexGroupBalancer.java     |  6 ++--
 .../server/security/SecurityOperation.java      |  6 ++--
 .../server/security/UserImpersonation.java      |  2 +-
 .../server/security/SystemCredentialsTest.java  |  2 +-
 .../replication/SequentialWorkAssigner.java     |  2 +-
 .../monitor/servlets/DefaultServlet.java        |  2 +-
 .../monitor/servlets/ReplicationServlet.java    |  2 +-
 .../monitor/servlets/TablesServlet.java         |  4 +--
 .../tserver/compaction/CompactionStrategy.java  |  6 ++--
 .../test/replication/merkle/package-info.java   |  9 ++---
 .../replication/merkle/skvi/DigestIterator.java |  2 +-
 38 files changed, 124 insertions(+), 100 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
index 6ceefad,320ecf4..3421f76
--- a/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
+++ b/core/src/main/java/org/apache/accumulo/core/client/BatchWriterConfig.java
@@@ -49,10 -48,8 +49,10 @@@ public class BatchWriterConfig implemen
    private static final Integer DEFAULT_MAX_WRITE_THREADS = 3;
    private Integer maxWriteThreads = null;
  
 +  private Durability durability = Durability.DEFAULT;
 +
    /**
-    * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br />
+    * Sets the maximum memory to batch before writing. The smaller this value, the more frequently the {@link BatchWriter} will write.<br>
     * If set to a value smaller than a single mutation, then it will {@link BatchWriter#flush()} after each added mutation. Must be non-negative.
     *
     * <p>

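A usage sketch for the setter documented above (not part of this commit; the fluent setters and getter are from the public BatchWriterConfig API):

    import org.apache.accumulo.core.client.BatchWriterConfig;

    public class WriterConfigExample {
      public static void main(String[] args) {
        BatchWriterConfig cfg = new BatchWriterConfig()
            .setMaxMemory(10 * 1024 * 1024)  // buffer ~10 MB before writing
            .setMaxWriteThreads(3);          // the default noted in the source
        System.out.println(cfg.getMaxMemory());
      }
    }
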
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/client/ConditionalWriterConfig.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/client/mapred/AccumuloFileOutputFormat.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/client/mapreduce/AccumuloFileOutputFormat.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
index 91bc22f,0000000..648d044
mode 100644,000000..100644
--- a/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
+++ b/core/src/main/java/org/apache/accumulo/core/constraints/VisibilityConstraint.java
@@@ -1,93 -1,0 +1,92 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.core.constraints;
 +
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +
 +import java.util.Collections;
 +import java.util.HashSet;
 +import java.util.List;
 +
 +import org.apache.accumulo.core.data.ColumnUpdate;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.security.ColumnVisibility;
 +import org.apache.accumulo.core.security.VisibilityEvaluator;
 +import org.apache.accumulo.core.security.VisibilityParseException;
 +import org.apache.accumulo.core.util.BadArgumentException;
 +
 +/**
 + * A constraint that checks the visibility of columns against the actor's authorizations. Violation codes:
-  * <p>
 + * <ul>
 + * <li>1 = failure to parse visibility expression</li>
 + * <li>2 = insufficient authorization</li>
 + * </ul>
 + */
 +public class VisibilityConstraint implements Constraint {
 +
 +  @Override
 +  public String getViolationDescription(short violationCode) {
 +    switch (violationCode) {
 +      case 1:
 +        return "Malformed column visibility";
 +      case 2:
 +        return "User does not have authorization on column visibility";
 +    }
 +
 +    return null;
 +  }
 +
 +  @Override
 +  public List<Short> check(Environment env, Mutation mutation) {
 +    List<ColumnUpdate> updates = mutation.getUpdates();
 +
 +    HashSet<String> ok = null;
 +    if (updates.size() > 1)
 +      ok = new HashSet<String>();
 +
 +    VisibilityEvaluator ve = null;
 +
 +    for (ColumnUpdate update : updates) {
 +
 +      byte[] cv = update.getColumnVisibility();
 +      if (cv.length > 0) {
 +        String key = null;
 +        if (ok != null && ok.contains(key = new String(cv, UTF_8)))
 +          continue;
 +
 +        try {
 +
 +          if (ve == null)
 +            ve = new VisibilityEvaluator(env.getAuthorizationsContainer());
 +
 +          if (!ve.evaluate(new ColumnVisibility(cv)))
 +            return Collections.singletonList(Short.valueOf((short) 2));
 +
 +        } catch (BadArgumentException bae) {
 +          return Collections.singletonList(new Short((short) 1));
 +        } catch (VisibilityParseException e) {
 +          return Collections.singletonList(new Short((short) 1));
 +        }
 +
 +        if (ok != null)
 +          ok.add(key);
 +      }
 +    }
 +
 +    return null;
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/data/Key.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/data/Key.java
index f88ddaa,f605c98..758436d
--- a/core/src/main/java/org/apache/accumulo/core/data/Key.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Key.java
@@@ -786,23 -660,6 +786,23 @@@ public class Key implements WritableCom
      return appendPrintableString(ba, offset, len, maxLen, new StringBuilder()).toString();
    }
  
 +  /**
 +   * Appends ASCII printable characters to a string, based on the given byte array, treating the bytes as ASCII characters. If a byte can be converted to an
-    * ASCII printable character it is appended as is; otherwise, it is appended as a character code, e.g., %05; for byte value 5. If len > maxlen, the string
++   * ASCII printable character it is appended as is; otherwise, it is appended as a character code, e.g., %05; for byte value 5. If len &gt; maxlen, the string
 +   * includes a "TRUNCATED" note at the end.
 +   *
 +   * @param ba
 +   *          byte array
 +   * @param offset
 +   *          offset to start with in byte array (inclusive)
 +   * @param len
 +   *          number of bytes to print
 +   * @param maxLen
 +   *          maximum number of bytes to convert to printable form
 +   * @param sb
 +   *          <code>StringBuilder</code> to append to
 +   * @return given <code>StringBuilder</code>
 +   */
    public static StringBuilder appendPrintableString(byte ba[], int offset, int len, int maxLen, StringBuilder sb) {
      int plen = Math.min(len, maxLen);
  

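A quick sketch of the rendering this javadoc describes (not part of this commit; the escape format is taken from the '%05;' example in the javadoc above):

    import org.apache.accumulo.core.data.Key;

    public class PrintableExample {
      public static void main(String[] args) {
        byte[] ba = new byte[] {'a', 5, 'b'};
        // printable bytes pass through; byte value 5 becomes the %05; escape
        StringBuilder sb = Key.appendPrintableString(ba, 0, ba.length, ba.length, new StringBuilder());
        System.out.println(sb); // a%05;b
      }
    }
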
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/data/Range.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/data/Range.java
index 0fcfee6,7ccfe3d..c114e2b
--- a/core/src/main/java/org/apache/accumulo/core/data/Range.java
+++ b/core/src/main/java/org/apache/accumulo/core/data/Range.java
@@@ -555,17 -506,14 +555,17 @@@ public class Range implements WritableC
    }
  
    /**
-    * Creates a new range that is bounded by the columns passed in. The start key in the returned range will have a column >= to the minimum column. The end key
-    * in the returned range will have a column <= the max column.
 -   * Creates a new range that is bounded by the columns passed in. The stary key in the returned range will have a column &gt;= to the minimum column. The end
++   * Creates a new range that is bounded by the columns passed in. The start key in the returned range will have a column &gt;= to the minimum column. The end
+    * key in the returned range will have a column &lt;= the max column.
     *
 +   * @param min
 +   *          minimum column
 +   * @param max
 +   *          maximum column
     * @return a column bounded range
     * @throws IllegalArgumentException
 -   *           if min &gt; max
 +   *           if the minimum column compares greater than the maximum column
     */
 -
    public Range bound(Column min, Column max) {
  
      if (min.compareTo(max) > 0) {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/CachedBlockQueue.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/file/blockfile/cache/ClassSize.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/file/rfile/bcfile/Utils.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/iterators/user/WholeColumnFamilyIterator.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
index 525e2a2,af48770..5a96c20
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForMetadataTable.java
@@@ -16,10 -16,11 +16,10 @@@
   */
  package org.apache.accumulo.core.metadata;
  
 -import org.apache.accumulo.core.client.Instance;
 -import org.apache.accumulo.core.security.Credentials;
 +import org.apache.accumulo.core.client.impl.ClientContext;
  
  /**
-  * A metadata servicer for the metadata table (which holds metadata for user tables).<br />
+  * A metadata servicer for the metadata table (which holds metadata for user tables).<br>
   * The metadata table's metadata is serviced in the root table.
   */
  class ServicerForMetadataTable extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
index 73a943d,b279d01..32b5824
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForRootTable.java
@@@ -22,11 -22,11 +22,11 @@@ import org.apache.accumulo.core.client.
  import org.apache.accumulo.core.client.AccumuloSecurityException;
  import org.apache.accumulo.core.client.Instance;
  import org.apache.accumulo.core.client.TableNotFoundException;
 -import org.apache.accumulo.core.data.KeyExtent;
 -import org.apache.accumulo.core.security.Credentials;
 +import org.apache.accumulo.core.client.impl.ClientContext;
 +import org.apache.accumulo.core.data.impl.KeyExtent;
  
  /**
-  * A metadata servicer for the root table.<br />
+  * A metadata servicer for the root table.<br>
   * The root table's metadata is serviced in zookeeper.
   */
  class ServicerForRootTable extends MetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
index 5efa8a6,607dfbd..73f9188
--- a/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/ServicerForUserTables.java
@@@ -16,10 -16,11 +16,10 @@@
   */
  package org.apache.accumulo.core.metadata;
  
 -import org.apache.accumulo.core.client.Instance;
 -import org.apache.accumulo.core.security.Credentials;
 +import org.apache.accumulo.core.client.impl.ClientContext;
  
  /**
-  * A metadata servicer for user tables.<br />
+  * A metadata servicer for user tables.<br>
   * Metadata for user tables is serviced in the metadata table.
   */
  class ServicerForUserTables extends TableMetadataServicer {

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
index 6baae17,f20fce1..3970c49
--- a/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
+++ b/core/src/main/java/org/apache/accumulo/core/metadata/schema/MetadataSchema.java
@@@ -227,55 -233,4 +227,55 @@@ public class MetadataSchema 
  
    }
  
 +  /**
 +   * Holds references to files that need replication
 +   * <p>
-    * <code>~replhdfs://localhost:8020/accumulo/wal/tserver+port/WAL stat:local_table_id [] -> protobuf</code>
++   * <code>~replhdfs://localhost:8020/accumulo/wal/tserver+port/WAL stat:local_table_id [] -&gt; protobuf</code>
 +   */
 +  public static class ReplicationSection {
 +    public static final Text COLF = new Text("stat");
 +    private static final ArrayByteSequence COLF_BYTE_SEQ = new ArrayByteSequence(COLF.toString());
 +    private static final Section section = new Section(RESERVED_PREFIX + "repl", true, RESERVED_PREFIX + "repm", false);
 +
 +    public static Range getRange() {
 +      return section.getRange();
 +    }
 +
 +    public static String getRowPrefix() {
 +      return section.getRowPrefix();
 +    }
 +
 +    /**
 +     * Extract the table ID from the colfam into the given {@link Text}
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @param buff
 +     *          Text to place table ID into
 +     */
 +    public static void getTableId(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +
 +      k.getColumnQualifier(buff);
 +    }
 +
 +    /**
 +     * Extract the file name from the row suffix into the given {@link Text}
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @param buff
 +     *          Text to place file name into
 +     */
 +    public static void getFile(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +      Preconditions.checkArgument(COLF_BYTE_SEQ.equals(k.getColumnFamilyData()), "Given metadata replication status key with incorrect colfam");
 +
 +      k.getRow(buff);
 +
 +      buff.set(buff.getBytes(), section.getRowPrefix().length(), buff.getLength() - section.getRowPrefix().length());
 +    }
 +  }
  }

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
----------------------------------------------------------------------
diff --cc core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
index ed46130,0000000..b352957
mode 100644,000000..100644
--- a/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
+++ b/core/src/main/java/org/apache/accumulo/core/replication/ReplicationSchema.java
@@@ -1,299 -1,0 +1,299 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.core.replication;
 +
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +
 +import java.nio.charset.CharacterCodingException;
 +
 +import org.apache.accumulo.core.client.ScannerBase;
 +import org.apache.accumulo.core.client.lexicoder.ULongLexicoder;
 +import org.apache.accumulo.core.data.ArrayByteSequence;
 +import org.apache.accumulo.core.data.ByteSequence;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.hadoop.fs.Path;
 +import org.apache.hadoop.io.Text;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.google.common.base.Preconditions;
 +
 +/**
 + *
 + */
 +public class ReplicationSchema {
 +  private static final Logger log = LoggerFactory.getLogger(ReplicationSchema.class);
 +
 +  /**
 +   * Portion of a file that must be replicated to the given target: peer and some identifying location on that peer, e.g. remote table ID
 +   * <p>
-    * <code>hdfs://localhost:8020/accumulo/wal/tserver+port/WAL work:serialized_ReplicationTarget [] -> Status Protobuf</code>
++   * <code>hdfs://localhost:8020/accumulo/wal/tserver+port/WAL work:serialized_ReplicationTarget [] -&gt; Status Protobuf</code>
 +   */
 +  public static class WorkSection {
 +    public static final Text NAME = new Text("work");
 +    private static final ByteSequence BYTE_SEQ_NAME = new ArrayByteSequence("work");
 +
 +    public static void getFile(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +      Preconditions.checkArgument(BYTE_SEQ_NAME.equals(k.getColumnFamilyData()), "Given replication work key with incorrect colfam");
 +      _getFile(k, buff);
 +    }
 +
 +    public static ReplicationTarget getTarget(Key k) {
 +      return getTarget(k, new Text());
 +    }
 +
 +    public static ReplicationTarget getTarget(Key k, Text buff) {
 +      Preconditions.checkArgument(BYTE_SEQ_NAME.equals(k.getColumnFamilyData()), "Given replication work key with incorrect colfam");
 +      k.getColumnQualifier(buff);
 +
 +      return ReplicationTarget.from(buff);
 +    }
 +
 +    /**
 +     * Limit the scanner to only pull replication work records
 +     */
 +    public static void limit(ScannerBase scanner) {
 +      scanner.fetchColumnFamily(NAME);
 +    }
 +
 +    public static Mutation add(Mutation m, Text serializedTarget, Value v) {
 +      m.put(NAME, serializedTarget, v);
 +      return m;
 +    }
 +  }
 +
 +  /**
 +   * Holds replication markers tracking status for files
 +   * <p>
-    * <code>hdfs://localhost:8020/accumulo/wal/tserver+port/WAL repl:local_table_id [] -> Status Protobuf</code>
++   * <code>hdfs://localhost:8020/accumulo/wal/tserver+port/WAL repl:local_table_id [] -&gt; Status Protobuf</code>
 +   */
 +  public static class StatusSection {
 +    public static final Text NAME = new Text("repl");
 +    private static final ByteSequence BYTE_SEQ_NAME = new ArrayByteSequence("repl");
 +
 +    /**
 +     * Extract the table ID from the key (inefficiently if called repeatedly)
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @return The table ID
 +     * @see #getTableId(Key,Text)
 +     */
 +    public static String getTableId(Key k) {
 +      Text buff = new Text();
 +      getTableId(k, buff);
 +      return buff.toString();
 +    }
 +
 +    /**
 +     * Extract the table ID from the key into the given {@link Text}
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @param buff
 +     *          Text to place table ID into
 +     */
 +    public static void getTableId(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +
 +      k.getColumnQualifier(buff);
 +    }
 +
 +    /**
 +     * Extract the file name from the row suffix into the given {@link Text}
 +     *
 +     * @param k
 +     *          Key to extract from
 +     * @param buff
 +     *          Text to place file name into
 +     */
 +    public static void getFile(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +      Preconditions.checkArgument(BYTE_SEQ_NAME.equals(k.getColumnFamilyData()), "Given replication status key with incorrect colfam");
 +
 +      _getFile(k, buff);
 +    }
 +
 +    /**
 +     * Limit the scanner to only return Status records
 +     */
 +    public static void limit(ScannerBase scanner) {
 +      scanner.fetchColumnFamily(NAME);
 +    }
 +
 +    public static Mutation add(Mutation m, Text tableId, Value v) {
 +      m.put(NAME, tableId, v);
 +      return m;
 +    }
 +  }
 +
 +  /**
 +   * Holds the order in which files needed for replication were closed. The intent is to be able to guarantee that files which were closed earlier were
 +   * replicated first and we don't replay data in the wrong order on our peers
 +   * <p>
-    * <code>encodedTimeOfClosure\x00hdfs://localhost:8020/accumulo/wal/tserver+port/WAL order:source_table_id [] -> Status Protobuf</code>
++   * <code>encodedTimeOfClosure\x00hdfs://localhost:8020/accumulo/wal/tserver+port/WAL order:source_table_id [] -&gt; Status Protobuf</code>
 +   */
 +  public static class OrderSection {
 +    public static final Text NAME = new Text("order");
 +    public static final Text ROW_SEPARATOR = new Text(new byte[] {0});
 +    private static final ULongLexicoder longEncoder = new ULongLexicoder();
 +
 +    /**
 +     * Extract the table ID from the given key (inefficiently if called repeatedly)
 +     *
 +     * @param k
 +     *          OrderSection Key
 +     * @return source table id
 +     */
 +    public static String getTableId(Key k) {
 +      Text buff = new Text();
 +      getTableId(k, buff);
 +      return buff.toString();
 +    }
 +
 +    /**
 +     * Extract the table ID from the given key
 +     *
 +     * @param k
 +     *          OrderSection key
 +     * @param buff
 +     *          Text to place table ID into
 +     */
 +    public static void getTableId(Key k, Text buff) {
 +      Preconditions.checkNotNull(k);
 +      Preconditions.checkNotNull(buff);
 +
 +      k.getColumnQualifier(buff);
 +    }
 +
 +    /**
 +     * Limit the scanner to only return Order records
 +     */
 +    public static void limit(ScannerBase scanner) {
 +      scanner.fetchColumnFamily(NAME);
 +    }
 +
 +    /**
 +     * Creates the Mutation for the Order section for the given file and time
 +     *
 +     * @param file
 +     *          Filename
 +     * @param timeInMillis
 +     *          Time in millis that the file was closed
 +     * @return Mutation for the Order section
 +     */
 +    public static Mutation createMutation(String file, long timeInMillis) {
 +      Preconditions.checkNotNull(file);
 +      Preconditions.checkArgument(timeInMillis >= 0, "timeInMillis must be greater than zero");
 +
 +      // Encode the time so it sorts properly
 +      byte[] rowPrefix = longEncoder.encode(timeInMillis);
 +      Text row = new Text(rowPrefix);
 +
 +      // Normalize the file using Path
 +      Path p = new Path(file);
 +      String pathString = p.toUri().toString();
 +
 +      log.trace("Normalized {} into {}", file, pathString);
 +
 +      // Append the file as a suffix to the row
 +      row.append((ROW_SEPARATOR + pathString).getBytes(UTF_8), 0, pathString.length() + ROW_SEPARATOR.getLength());
 +
 +      // Make the mutation and add the column update
 +      return new Mutation(row);
 +    }
 +
 +    /**
 +     * Add a column update to the given mutation with the provided tableId and value
 +     *
 +     * @param m
 +     *          Mutation for OrderSection
 +     * @param tableId
 +     *          Source table id
 +     * @param v
 +     *          Serialized Status msg
 +     * @return The original Mutation
 +     */
 +    public static Mutation add(Mutation m, Text tableId, Value v) {
 +      m.put(NAME, tableId, v);
 +      return m;
 +    }
 +
 +    public static long getTimeClosed(Key k) {
 +      return getTimeClosed(k, new Text());
 +    }
 +
 +    public static long getTimeClosed(Key k, Text buff) {
 +      k.getRow(buff);
 +      int offset = 0;
 +      // find the last offset
 +      while (true) {
 +        int nextOffset = buff.find(ROW_SEPARATOR.toString(), offset + 1);
 +        if (-1 == nextOffset) {
 +          break;
 +        }
 +        offset = nextOffset;
 +      }
 +
 +      if (-1 == offset) {
 +        throw new IllegalArgumentException("Row does not contain expected separator for OrderSection");
 +      }
 +
 +      byte[] encodedLong = new byte[offset];
 +      System.arraycopy(buff.getBytes(), 0, encodedLong, 0, offset);
 +      return longEncoder.decode(encodedLong);
 +    }
 +
 +    public static String getFile(Key k) {
 +      Text buff = new Text();
 +      return getFile(k, buff);
 +    }
 +
 +    public static String getFile(Key k, Text buff) {
 +      k.getRow(buff);
 +      int offset = 0;
 +      // find the last offset
 +      while (true) {
 +        int nextOffset = buff.find(ROW_SEPARATOR.toString(), offset + 1);
 +        if (-1 == nextOffset) {
 +          break;
 +        }
 +        offset = nextOffset;
 +      }
 +
 +      if (-1 == offset) {
 +        throw new IllegalArgumentException("Row does not contain expected separator for OrderSection");
 +      }
 +
 +      try {
 +        return Text.decode(buff.getBytes(), offset + 1, buff.getLength() - (offset + 1));
 +      } catch (CharacterCodingException e) {
 +        throw new IllegalArgumentException("Could not decode file path", e);
 +      }
 +    }
 +  }
 +
 +  private static void _getFile(Key k, Text buff) {
 +    k.getRow(buff);
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/java/org/apache/accumulo/core/security/ColumnVisibility.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/core/src/main/resources/org/apache/accumulo/core/conf/config-header.html
----------------------------------------------------------------------
diff --cc core/src/main/resources/org/apache/accumulo/core/conf/config-header.html
index 9c770b1,8270ad2..49291fc
--- a/core/src/main/resources/org/apache/accumulo/core/conf/config-header.html
+++ b/core/src/main/resources/org/apache/accumulo/core/conf/config-header.html
@@@ -28,23 -28,23 +28,23 @@@
    below (from highest to lowest):</p>
    <table>
     <tr><th>Location</th><th>Description</th></tr>
--   <tr class='highlight'><td><b>Zookeeper<br/>table properties</b></td>
--       <td>Table properties are applied to the entire cluster when set in zookeeper using the accumulo API or shell.  While table properties take precedence over system properties, both will override properties set in accumulo-site.xml<br/><br/>
++   <tr class='highlight'><td><b>Zookeeper<br />table properties</b></td>
++       <td>Table properties are applied to the entire cluster when set in zookeeper using the accumulo API or shell.  While table properties take precedence over system properties, both will override properties set in accumulo-site.xml<br /><br />
           Table properties consist of all properties with the table.* prefix.  Table properties are configured on a per-table basis using the following shell command:
          <pre>config -t TABLE -s PROPERTY=VALUE</pre></td>
     </tr>
--   <tr><td><b>Zookeeper<br/>system properties</b></td>
++   <tr><td><b>Zookeeper<br />system properties</b></td>
        <td>System properties are applied to the entire cluster when set in zookeeper using the accumulo API or shell.  System properties consist of all properties with a 'yes' in the 'Zookeeper Mutable' column in the table below.  They are set with the following shell command:
          <pre>config -s PROPERTY=VALUE</pre>
--      If a table.* property is set using this method, the value will apply to all tables except those configured on a per-table basis (which have higher precedence).<br/><br/>
++      If a table.* property is set using this method, the value will apply to all tables except those configured on a per-table basis (which have higher precedence).<br /><br />
        While most system properties take effect immediately, some require a restart of the process which is indicated in 'Zookeeper Mutable'.</td>
     </tr>
     <tr class='highlight'><td><b>accumulo-site.xml</b></td>
--       <td>Accumulo processes (master, tserver, etc) read their local accumulo-site.xml on start up.  Therefore, changes made to accumulo-site.xml must be rsynced across the cluster and processes must be restarted to apply changes.<br/><br/>
++       <td>Accumulo processes (master, tserver, etc) read their local accumulo-site.xml on start up.  Therefore, changes made to accumulo-site.xml must be rsynced across the cluster and processes must be restarted to apply changes.<br /><br />
             Certain properties (indicated by a 'no' in 'Zookeeper Mutable') cannot be set in zookeeper and only set in this file.  The accumulo-site.xml also allows you to configure tablet servers with different settings.</td>
     </tr>
     <tr><td><b>Default</b></td>
--        <td>All properties have a default value in the source code.  This value has the lowest precedence and is overridden if set in accumulo-site.xml or zookeeper.<br/><br/>While the default value is usually optimal, there are cases where a change can increase query and ingest performance.</td>
++        <td>All properties have a default value in the source code.  This value has the lowest precedence and is overridden if set in accumulo-site.xml or zookeeper.<br /><br />While the default value is usually optimal, there are cases where a change can increase query and ingest performance.</td>
     </tr>
    </table>
  

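The precedence rules in the table above can also be driven from Java rather than the shell; a sketch (not part of this commit) using the client API of this era, with "mytable" and the block-cache property chosen only for illustration:

    import org.apache.accumulo.core.client.AccumuloException;
    import org.apache.accumulo.core.client.AccumuloSecurityException;
    import org.apache.accumulo.core.client.Connector;

    public class SetPropertyExample {
      static void configure(Connector conn) throws AccumuloException, AccumuloSecurityException {
        // per-table property: highest precedence, stored in zookeeper
        conn.tableOperations().setProperty("mytable", "table.cache.block.enable", "true");
        // system property: applies to tables without a per-table override
        conn.instanceOperations().setProperty("table.cache.block.enable", "true");
      }
    }
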
http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/pom.xml
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/ServerConstants.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
----------------------------------------------------------------------
diff --cc server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
index fb4e0d9,0000000..9734528
mode 100644,000000..100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/GroupBalancer.java
@@@ -1,788 -1,0 +1,788 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.accumulo.server.master.balancer;
 +
 +import java.util.ArrayList;
 +import java.util.Collection;
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.Iterator;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Objects;
 +import java.util.Set;
 +import java.util.SortedMap;
 +
 +import org.apache.accumulo.core.client.IsolatedScanner;
 +import org.apache.accumulo.core.client.RowIterator;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.data.impl.KeyExtent;
 +import org.apache.accumulo.core.master.thrift.TabletServerStatus;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.util.ComparablePair;
 +import org.apache.accumulo.core.util.MapCounter;
 +import org.apache.accumulo.core.util.Pair;
 +import org.apache.accumulo.server.master.state.TServerInstance;
 +import org.apache.accumulo.server.master.state.TabletMigration;
 +import org.apache.commons.lang.mutable.MutableInt;
 +import org.apache.hadoop.io.Text;
 +
 +import com.google.common.base.Function;
 +import com.google.common.base.Preconditions;
 +import com.google.common.collect.HashBasedTable;
 +import com.google.common.collect.HashMultimap;
 +import com.google.common.collect.Iterators;
 +import com.google.common.collect.Multimap;
 +import com.google.common.collect.Table;
 +
 +/**
 + * A balancer that evenly spreads groups of tablets across all tablet servers. This balancer accomplishes the following two goals:
 + *
 + * <ul>
-  * <li/>Evenly spreads each group across all tservers.
-  * <li/>Minimizes the total number of groups on each tserver.
++ * <li>Evenly spreads each group across all tservers.
++ * <li>Minimizes the total number of groups on each tserver.
 + * </ul>
 + *
 + * <p>
 + * To use this balancer you must extend it and implement {@link #getPartitioner()}. See {@link RegexGroupBalancer} as an example.
 + */
 +
 +public abstract class GroupBalancer extends TabletBalancer {
 +
 +  private final String tableId;
 +  private final Text textTableId;
 +  private long lastRun = 0;
 +
 +  /**
 +   * @return A function that groups tablets into named groups.
 +   */
 +  protected abstract Function<KeyExtent,String> getPartitioner();
 +
 +  public GroupBalancer(String tableId) {
 +    this.tableId = tableId;
 +    this.textTableId = new Text(tableId);
 +  }
 +
 +  protected Iterable<Pair<KeyExtent,Location>> getLocationProvider() {
 +    return new MetadataLocationProvider();
 +  }
 +
 +  /**
 +   * The amount of time to wait between balancing.
 +   */
 +  protected long getWaitTime() {
 +    return 60000;
 +  }
 +
 +  /**
 +   * The maximum number of migrations to perform in a single pass.
 +   */
 +  protected int getMaxMigrations() {
 +    return 1000;
 +  }
 +
 +  /**
 +   * @return Examine current tserver and migrations and return true if balancing should occur.
 +   */
 +  protected boolean shouldBalance(SortedMap<TServerInstance,TabletServerStatus> current, Set<KeyExtent> migrations) {
 +
 +    if (current.size() < 2) {
 +      return false;
 +    }
 +
 +    for (KeyExtent keyExtent : migrations) {
 +      if (keyExtent.getTableId().equals(textTableId)) {
 +        return false;
 +      }
 +    }
 +
 +    return true;
 +  }
 +
 +  @Override
 +  public void getAssignments(SortedMap<TServerInstance,TabletServerStatus> current, Map<KeyExtent,TServerInstance> unassigned,
 +      Map<KeyExtent,TServerInstance> assignments) {
 +
 +    if (current.size() == 0) {
 +      return;
 +    }
 +
 +    Function<KeyExtent,String> partitioner = getPartitioner();
 +
 +    List<ComparablePair<String,KeyExtent>> tabletsByGroup = new ArrayList<>();
 +    for (Entry<KeyExtent,TServerInstance> entry : unassigned.entrySet()) {
 +      TServerInstance last = entry.getValue();
 +      if (last != null) {
 +        // Maintain locality
 +        String fakeSessionID = " ";
 +        TServerInstance simple = new TServerInstance(last.getLocation(), fakeSessionID);
 +        Iterator<TServerInstance> find = current.tailMap(simple).keySet().iterator();
 +        if (find.hasNext()) {
 +          TServerInstance tserver = find.next();
 +          if (tserver.host().equals(last.host())) {
 +            assignments.put(entry.getKey(), tserver);
 +            continue;
 +          }
 +        }
 +      }
 +
 +      tabletsByGroup.add(new ComparablePair<String,KeyExtent>(partitioner.apply(entry.getKey()), entry.getKey()));
 +    }
 +
 +    Collections.sort(tabletsByGroup);
 +
 +    Iterator<TServerInstance> tserverIter = Iterators.cycle(current.keySet());
 +    for (ComparablePair<String,KeyExtent> pair : tabletsByGroup) {
 +      KeyExtent ke = pair.getSecond();
 +      assignments.put(ke, tserverIter.next());
 +    }
 +
 +  }
 +
 +  @Override
 +  public long balance(SortedMap<TServerInstance,TabletServerStatus> current, Set<KeyExtent> migrations, List<TabletMigration> migrationsOut) {
 +
 +    // The terminology extra and expected are used in this code. Expected tablets is the number of tablets a tserver must have for a given group and is
 +    // numInGroup/numTservers. Extra tablets are any tablets more than the number expected for a given group. If numInGroup % numTservers > 0, then a tserver
 +    // may have one extra tablet for a group.
 +    //
 +    // Assume we have 4 tservers and group A has 11 tablets.
 +    // * expected tablets : group A is expected to have 2 tablets on each tserver
 +    // * extra tablets : group A may have an additional tablet on each tserver. Group A has a total of 3 extra tablets.
 +    //
 +    // This balancer also evens out the extra tablets across all groups. The terminology extraExpected and extraExtra is used to describe these tablets.
 +    // ExtraExpected is totalExtra/numTservers. ExtraExtra is totalExtra%numTservers. Each tserver should have at least expectedExtra extra tablets and at most
 +    // one extraExtra tablet. All extra tablets on a tserver must be from different groups.
 +    //
 +    // Assume we have 6 tservers and three groups (G1, G2, G3) with 9 tablets each. Each tserver is expected to have one tablet from each group and could
 +    // possibly have 2 tablets from a group. Below is an illustration of an ideal balancing of extra tablets. To understand the illustration, the first column
 +    // shows tserver T1 with 2 tablets from G1, 1 tablet from G2, and two tablets from G3. EE means empty; it is there so Eclipse formatting would not mess
 +    // up the table.
 +    //
 +    // T1 | T2 | T3 | T4 | T5 | T6
 +    // ---+----+----+----+----+-----
 +    // G3 | G2 | G3 | EE | EE | EE <-- extra extra tablets
 +    // G1 | G1 | G1 | G2 | G3 | G2 <-- extra expected tablets.
 +    // G1 | G1 | G1 | G1 | G1 | G1 <-- expected tablets for group 1
 +    // G2 | G2 | G2 | G2 | G2 | G2 <-- expected tablets for group 2
 +    // G3 | G3 | G3 | G3 | G3 | G3 <-- expected tablets for group 3
 +    //
 +    // Do not want to balance the extra tablets like the following. There are two problems with this. First, extra tablets are not evenly spread. Since there are
 +    // a total of 9 extra tablets, every tserver is expected to have at least one extra tablet. Second, tserver T1 has two extra tablets for group G1. This
 +    // violates the principle that a tserver can only have one extra tablet for a given group.
 +    //
 +    // T1 | T2 | T3 | T4 | T5 | T6
 +    // ---+----+----+----+----+-----
 +    // G1 | EE | EE | EE | EE | EE <--- one extra tablets from group 1
 +    // G3 | G3 | G3 | EE | EE | EE <--- three extra tablets from group 3
 +    // G2 | G2 | G2 | EE | EE | EE <--- three extra tablets from group 2
 +    // G1 | G1 | EE | EE | EE | EE <--- two extra tablets from group 1
 +    // G1 | G1 | G1 | G1 | G1 | G1 <-- expected tablets for group 1
 +    // G2 | G2 | G2 | G2 | G2 | G2 <-- expected tablets for group 2
 +    // G3 | G3 | G3 | G3 | G3 | G3 <-- expected tablets for group 3
 +
 +    if (!shouldBalance(current, migrations)) {
 +      return 5000;
 +    }
 +
 +    if (System.currentTimeMillis() - lastRun < getWaitTime()) {
 +      return 5000;
 +    }
 +
 +    MapCounter<String> groupCounts = new MapCounter<>();
 +    Map<TServerInstance,TserverGroupInfo> tservers = new HashMap<>();
 +
 +    for (TServerInstance tsi : current.keySet()) {
 +      tservers.put(tsi, new TserverGroupInfo(tsi));
 +    }
 +
 +    Function<KeyExtent,String> partitioner = getPartitioner();
 +
 +    // collect stats about current state
 +    for (Pair<KeyExtent,Location> entry : getLocationProvider()) {
 +      String group = partitioner.apply(entry.getFirst());
 +      Location loc = entry.getSecond();
 +
 +      if (loc.equals(Location.NONE) || !tservers.containsKey(loc.getTserverInstance())) {
 +        return 5000;
 +      }
 +
 +      groupCounts.increment(group, 1);
 +      TserverGroupInfo tgi = tservers.get(loc.getTserverInstance());
 +      tgi.addGroup(group);
 +    }
 +
 +    Map<String,Integer> expectedCounts = new HashMap<>();
 +
 +    int totalExtra = 0;
 +    for (String group : groupCounts.keySet()) {
 +      long groupCount = groupCounts.get(group);
 +      totalExtra += groupCount % current.size();
 +      expectedCounts.put(group, (int) (groupCount / current.size()));
 +    }
 +
 +    // The number of extra tablets from all groups that each tserver must have.
 +    int expectedExtra = totalExtra / current.size();
 +    int maxExtraGroups = expectedExtra + 1;
 +
 +    expectedCounts = Collections.unmodifiableMap(expectedCounts);
 +    tservers = Collections.unmodifiableMap(tservers);
 +
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      tgi.finishedAdding(expectedCounts);
 +    }
 +
 +    Moves moves = new Moves();
 +
 +    // The order of the following steps is important, because as ordered each step should not move any tablets moved by a previous step.
 +    balanceExpected(tservers, moves);
 +    if (moves.size() < getMaxMigrations()) {
 +      balanceExtraExpected(tservers, expectedExtra, moves);
 +      if (moves.size() < getMaxMigrations()) {
 +        boolean cont = balanceExtraMultiple(tservers, maxExtraGroups, moves);
 +        if (cont && moves.size() < getMaxMigrations()) {
 +          balanceExtraExtra(tservers, maxExtraGroups, moves);
 +        }
 +      }
 +    }
 +
 +    populateMigrations(tservers.keySet(), migrationsOut, moves);
 +
 +    lastRun = System.currentTimeMillis();
 +
 +    return 5000;
 +  }
 +
 +  public static class Location {
 +    public static final Location NONE = new Location();
 +    private final TServerInstance tserverInstance;
 +
 +    public Location() {
 +      this(null);
 +    }
 +
 +    public Location(TServerInstance tsi) {
 +      tserverInstance = tsi;
 +    }
 +
 +    public TServerInstance getTserverInstance() {
 +      return tserverInstance;
 +    }
 +
 +    @Override
 +    public int hashCode() {
 +      return Objects.hashCode(tserverInstance);
 +    }
 +
 +    @Override
 +    public boolean equals(Object o) {
 +      if (o instanceof Location) {
 +        Location ol = ((Location) o);
 +        if (tserverInstance == ol.tserverInstance) {
 +          return true;
 +        }
 +        return tserverInstance.equals(ol.tserverInstance);
 +      }
 +      return false;
 +    }
 +  }
 +
 +  static class TserverGroupInfo {
 +
 +    private Map<String,Integer> expectedCounts;
 +    private final Map<String,MutableInt> initialCounts = new HashMap<>();
 +    private final Map<String,Integer> extraCounts = new HashMap<>();
 +    private final Map<String,Integer> expectedDeficits = new HashMap<>();
 +
 +    private final TServerInstance tsi;
 +    private boolean finishedAdding = false;
 +
 +    TserverGroupInfo(TServerInstance tsi) {
 +      this.tsi = tsi;
 +    }
 +
 +    public void addGroup(String group) {
 +      Preconditions.checkState(!finishedAdding);
 +
 +      MutableInt mi = initialCounts.get(group);
 +      if (mi == null) {
 +        mi = new MutableInt();
 +        initialCounts.put(group, mi);
 +      }
 +
 +      mi.increment();
 +    }
 +
 +    public void finishedAdding(Map<String,Integer> expectedCounts) {
 +      Preconditions.checkState(!finishedAdding);
 +      finishedAdding = true;
 +      this.expectedCounts = expectedCounts;
 +
 +      for (Entry<String,Integer> entry : expectedCounts.entrySet()) {
 +        String group = entry.getKey();
 +        int expected = entry.getValue();
 +
 +        MutableInt count = initialCounts.get(group);
 +        int num = count == null ? 0 : count.intValue();
 +
 +        if (num < expected) {
 +          expectedDeficits.put(group, expected - num);
 +        } else if (num > expected) {
 +          extraCounts.put(group, num - expected);
 +        }
 +      }
 +
 +    }
 +
 +    public void moveOff(String group, int num) {
 +      Preconditions.checkArgument(num > 0);
 +      Preconditions.checkState(finishedAdding);
 +
 +      Integer extraCount = extraCounts.get(group);
 +
 +      Preconditions.checkArgument(extraCount != null && extraCount >= num, "group=%s num=%s extraCount=%s", group, num, extraCount);
 +
 +      MutableInt initialCount = initialCounts.get(group);
 +
 +      Preconditions.checkArgument(initialCount.intValue() >= num);
 +
 +      initialCount.subtract(num);
 +
 +      if (extraCount - num == 0) {
 +        extraCounts.remove(group);
 +      } else {
 +        extraCounts.put(group, extraCount - num);
 +      }
 +    }
 +
 +    public void moveTo(String group, int num) {
 +      Preconditions.checkArgument(num > 0);
 +      Preconditions.checkArgument(expectedCounts.containsKey(group));
 +      Preconditions.checkState(finishedAdding);
 +
 +      Integer deficit = expectedDeficits.get(group);
 +      if (deficit != null) {
 +        if (num >= deficit) {
 +          expectedDeficits.remove(group);
 +          num -= deficit;
 +        } else {
 +          expectedDeficits.put(group, deficit - num);
 +          num = 0;
 +        }
 +      }
 +
 +      if (num > 0) {
 +        Integer extra = extraCounts.get(group);
 +        if (extra == null) {
 +          extra = 0;
 +        }
 +
 +        extraCounts.put(group, extra + num);
 +      }
 +
 +      // TODO could check extra constraints
 +    }
 +
 +    public Map<String,Integer> getExpectedDeficits() {
 +      Preconditions.checkState(finishedAdding);
 +      return Collections.unmodifiableMap(expectedDeficits);
 +    }
 +
 +    public Map<String,Integer> getExtras() {
 +      Preconditions.checkState(finishedAdding);
 +      return Collections.unmodifiableMap(extraCounts);
 +    }
 +
 +    public TServerInstance getTserverInstance() {
 +      return tsi;
 +    }
 +
 +    @Override
 +    public int hashCode() {
 +      return tsi.hashCode();
 +    }
 +
 +    @Override
 +    public boolean equals(Object o) {
 +      if (o instanceof TserverGroupInfo) {
 +        TserverGroupInfo otgi = (TserverGroupInfo) o;
 +        return tsi.equals(otgi.tsi);
 +      }
 +
 +      return false;
 +    }
 +
 +    @Override
 +    public String toString() {
 +      return tsi.toString();
 +    }
 +
 +  }
 +
 +  private static class Move {
 +    TserverGroupInfo dest;
 +    int count;
 +
 +    public Move(TserverGroupInfo dest, int num) {
 +      this.dest = dest;
 +      this.count = num;
 +    }
 +  }
 +
 +  private static class Moves {
 +
 +    private final Table<TServerInstance,String,List<Move>> moves = HashBasedTable.create();
 +    private int totalMoves = 0;
 +
 +    public void move(String group, int num, TserverGroupInfo src, TserverGroupInfo dest) {
 +      Preconditions.checkArgument(num > 0);
 +      Preconditions.checkArgument(!src.equals(dest));
 +
 +      src.moveOff(group, num);
 +      dest.moveTo(group, num);
 +
 +      List<Move> srcMoves = moves.get(src.getTserverInstance(), group);
 +      if (srcMoves == null) {
 +        srcMoves = new ArrayList<>();
 +        moves.put(src.getTserverInstance(), group, srcMoves);
 +      }
 +
 +      srcMoves.add(new Move(dest, num));
 +      totalMoves += num;
 +    }
 +
 +    public TServerInstance removeMove(TServerInstance src, String group) {
 +      List<Move> srcMoves = moves.get(src, group);
 +      if (srcMoves == null) {
 +        return null;
 +      }
 +
 +      Move move = srcMoves.get(srcMoves.size() - 1);
 +      TServerInstance ret = move.dest.getTserverInstance();
 +      totalMoves--;
 +
 +      move.count--;
 +      if (move.count == 0) {
 +        srcMoves.remove(srcMoves.size() - 1);
 +        if (srcMoves.size() == 0) {
 +          moves.remove(src, group);
 +        }
 +      }
 +
 +      return ret;
 +    }
 +
 +    public int size() {
 +      return totalMoves;
 +    }
 +  }
 +
 +  private void balanceExtraExtra(Map<TServerInstance,TserverGroupInfo> tservers, int maxExtraGroups, Moves moves) {
 +    Table<String,TServerInstance,TserverGroupInfo> surplusExtra = HashBasedTable.create();
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      Map<String,Integer> extras = tgi.getExtras();
 +      if (extras.size() > maxExtraGroups) {
 +        for (String group : extras.keySet()) {
 +          surplusExtra.put(group, tgi.getTserverInstance(), tgi);
 +        }
 +      }
 +    }
 +
 +    ArrayList<Pair<String,TServerInstance>> serversGroupsToRemove = new ArrayList<>();
 +    ArrayList<TServerInstance> serversToRemove = new ArrayList<>();
 +
 +    for (TserverGroupInfo destTgi : tservers.values()) {
 +      if (surplusExtra.size() == 0) {
 +        break;
 +      }
 +
 +      Map<String,Integer> extras = destTgi.getExtras();
 +      if (extras.size() < maxExtraGroups) {
 +        serversToRemove.clear();
 +        serversGroupsToRemove.clear();
 +        for (String group : surplusExtra.rowKeySet()) {
 +          if (!extras.containsKey(group)) {
 +            TserverGroupInfo srcTgi = surplusExtra.row(group).values().iterator().next();
 +
 +            moves.move(group, 1, srcTgi, destTgi);
 +
 +            if (srcTgi.getExtras().size() <= maxExtraGroups) {
 +              serversToRemove.add(srcTgi.getTserverInstance());
 +            } else {
 +              serversGroupsToRemove.add(new Pair<String,TServerInstance>(group, srcTgi.getTserverInstance()));
 +            }
 +
 +            if (destTgi.getExtras().size() >= maxExtraGroups || moves.size() >= getMaxMigrations()) {
 +              break;
 +            }
 +          }
 +        }
 +
 +        if (serversToRemove.size() > 0) {
 +          surplusExtra.columnKeySet().removeAll(serversToRemove);
 +        }
 +
 +        for (Pair<String,TServerInstance> pair : serversGroupsToRemove) {
 +          surplusExtra.remove(pair.getFirst(), pair.getSecond());
 +        }
 +
 +        if (moves.size() >= getMaxMigrations()) {
 +          break;
 +        }
 +      }
 +    }
 +  }
 +
 +  private boolean balanceExtraMultiple(Map<TServerInstance,TserverGroupInfo> tservers, int maxExtraGroups, Moves moves) {
 +    Multimap<String,TserverGroupInfo> extraMultiple = HashMultimap.create();
 +
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      Map<String,Integer> extras = tgi.getExtras();
 +      for (Entry<String,Integer> entry : extras.entrySet()) {
 +        if (entry.getValue() > 1) {
 +          extraMultiple.put(entry.getKey(), tgi);
 +        }
 +      }
 +    }
 +
 +    balanceExtraMultiple(tservers, maxExtraGroups, moves, extraMultiple, false);
 +    if (moves.size() < getMaxMigrations() && extraMultiple.size() > 0) {
 +      // no place to move so must exceed maxExtra temporarily... subsequent balancer calls will smooth things out
 +      balanceExtraMultiple(tservers, maxExtraGroups, moves, extraMultiple, true);
 +      return false;
 +    } else {
 +      return true;
 +    }
 +  }
 +
 +  private void balanceExtraMultiple(Map<TServerInstance,TserverGroupInfo> tservers, int maxExtraGroups, Moves moves,
 +      Multimap<String,TserverGroupInfo> extraMultiple, boolean alwaysAdd) {
 +
 +    ArrayList<Pair<String,TserverGroupInfo>> serversToRemove = new ArrayList<>();
 +    for (TserverGroupInfo destTgi : tservers.values()) {
 +      Map<String,Integer> extras = destTgi.getExtras();
 +      if (alwaysAdd || extras.size() < maxExtraGroups) {
 +        serversToRemove.clear();
 +        for (String group : extraMultiple.keySet()) {
 +          if (!extras.containsKey(group)) {
 +            Collection<TserverGroupInfo> sources = extraMultiple.get(group);
 +            Iterator<TserverGroupInfo> iter = sources.iterator();
 +            TserverGroupInfo srcTgi = iter.next();
 +
 +            int num = srcTgi.getExtras().get(group);
 +
 +            moves.move(group, 1, srcTgi, destTgi);
 +
 +            if (num == 2) {
 +              serversToRemove.add(new Pair<String,TserverGroupInfo>(group, srcTgi));
 +            }
 +
 +            if (destTgi.getExtras().size() >= maxExtraGroups || moves.size() >= getMaxMigrations()) {
 +              break;
 +            }
 +          }
 +        }
 +
 +        for (Pair<String,TserverGroupInfo> pair : serversToRemove) {
 +          extraMultiple.remove(pair.getFirst(), pair.getSecond());
 +        }
 +
 +        if (extraMultiple.size() == 0 || moves.size() >= getMaxMigrations()) {
 +          break;
 +        }
 +      }
 +    }
 +  }
 +
 +  private void balanceExtraExpected(Map<TServerInstance,TserverGroupInfo> tservers, int expectedExtra, Moves moves) {
 +
 +    Table<String,TServerInstance,TserverGroupInfo> extraSurplus = HashBasedTable.create();
 +
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      Map<String,Integer> extras = tgi.getExtras();
 +      if (extras.size() > expectedExtra) {
 +        for (String group : extras.keySet()) {
 +          extraSurplus.put(group, tgi.getTserverInstance(), tgi);
 +        }
 +      }
 +    }
 +
 +    ArrayList<TServerInstance> emptyServers = new ArrayList<>();
 +    ArrayList<Pair<String,TServerInstance>> emptyServerGroups = new ArrayList<>();
 +    for (TserverGroupInfo destTgi : tservers.values()) {
 +      if (extraSurplus.size() == 0) {
 +        break;
 +      }
 +
 +      Map<String,Integer> extras = destTgi.getExtras();
 +      if (extras.size() < expectedExtra) {
 +        emptyServers.clear();
 +        emptyServerGroups.clear();
 +        nextGroup: for (String group : extraSurplus.rowKeySet()) {
 +          if (!extras.containsKey(group)) {
 +            Iterator<TserverGroupInfo> iter = extraSurplus.row(group).values().iterator();
 +            TserverGroupInfo srcTgi = iter.next();
 +
 +            while (srcTgi.getExtras().size() <= expectedExtra) {
 +              if (iter.hasNext()) {
 +                srcTgi = iter.next();
 +              } else {
 +                continue nextGroup;
 +              }
 +            }
 +
 +            moves.move(group, 1, srcTgi, destTgi);
 +
 +            if (srcTgi.getExtras().size() <= expectedExtra) {
 +              emptyServers.add(srcTgi.getTserverInstance());
 +            } else if (srcTgi.getExtras().get(group) == null) {
 +              emptyServerGroups.add(new Pair<String,TServerInstance>(group, srcTgi.getTserverInstance()));
 +            }
 +
 +            if (destTgi.getExtras().size() >= expectedExtra || moves.size() >= getMaxMigrations()) {
 +              break;
 +            }
 +          }
 +        }
 +
 +        if (emptyServers.size() > 0) {
 +          extraSurplus.columnKeySet().removeAll(emptyServers);
 +        }
 +
 +        for (Pair<String,TServerInstance> pair : emptyServerGroups) {
 +          extraSurplus.remove(pair.getFirst(), pair.getSecond());
 +        }
 +
 +        if (moves.size() >= getMaxMigrations()) {
 +          break;
 +        }
 +      }
 +    }
 +  }
 +
 +  private void balanceExpected(Map<TServerInstance,TserverGroupInfo> tservers, Moves moves) {
 +    Multimap<String,TserverGroupInfo> groupDefecits = HashMultimap.create();
 +    Multimap<String,TserverGroupInfo> groupSurplus = HashMultimap.create();
 +
 +    for (TserverGroupInfo tgi : tservers.values()) {
 +      for (String group : tgi.getExpectedDeficits().keySet()) {
 +        groupDefecits.put(group, tgi);
 +      }
 +
 +      for (String group : tgi.getExtras().keySet()) {
 +        groupSurplus.put(group, tgi);
 +      }
 +    }
 +
 +    for (String group : groupDefecits.keySet()) {
 +      Collection<TserverGroupInfo> defecitServers = groupDefecits.get(group);
 +      for (TserverGroupInfo defecitTsi : defecitServers) {
 +        int numToMove = defecitTsi.getExpectedDeficits().get(group);
 +
 +        Iterator<TserverGroupInfo> surplusIter = groupSurplus.get(group).iterator();
 +        while (numToMove > 0) {
 +          TserverGroupInfo surplusTsi = surplusIter.next();
 +
 +          int available = surplusTsi.getExtras().get(group);
 +
 +          if (numToMove >= available) {
 +            surplusIter.remove();
 +          }
 +
 +          int transfer = Math.min(numToMove, available);
 +
 +          numToMove -= transfer;
 +
 +          moves.move(group, transfer, surplusTsi, defecitTsi);
 +          if (moves.size() >= getMaxMigrations()) {
 +            return;
 +          }
 +        }
 +      }
 +    }
 +  }
 +
 +  private void populateMigrations(Set<TServerInstance> current, List<TabletMigration> migrationsOut, Moves moves) {
 +    if (moves.size() == 0) {
 +      return;
 +    }
 +
 +    Function<KeyExtent,String> partitioner = getPartitioner();
 +
 +    for (Pair<KeyExtent,Location> entry : getLocationProvider()) {
 +      String group = partitioner.apply(entry.getFirst());
 +      Location loc = entry.getSecond();
 +
 +      if (loc.equals(Location.NONE) || !current.contains(loc.getTserverInstance())) {
 +        migrationsOut.clear();
 +        return;
 +      }
 +
 +      TServerInstance dest = moves.removeMove(loc.getTserverInstance(), group);
 +      if (dest != null) {
 +        migrationsOut.add(new TabletMigration(entry.getFirst(), loc.getTserverInstance(), dest));
 +        if (moves.size() == 0) {
 +          break;
 +        }
 +      }
 +    }
 +  }
 +
 +  static class LocationFunction implements Function<Iterator<Entry<Key,Value>>,Pair<KeyExtent,Location>> {
 +    @Override
 +    public Pair<KeyExtent,Location> apply(Iterator<Entry<Key,Value>> input) {
 +      Location loc = Location.NONE;
 +      KeyExtent extent = null;
 +      while (input.hasNext()) {
 +        Entry<Key,Value> entry = input.next();
 +        if (entry.getKey().getColumnFamily().equals(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME)) {
 +          loc = new Location(new TServerInstance(entry.getValue(), entry.getKey().getColumnQualifier()));
 +        } else if (MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.hasColumns(entry.getKey())) {
 +          extent = new KeyExtent(entry.getKey().getRow(), entry.getValue());
 +        }
 +      }
 +
 +      return new Pair<KeyExtent,Location>(extent, loc);
 +    }
 +
 +  }
 +
 +  class MetadataLocationProvider implements Iterable<Pair<KeyExtent,Location>> {
 +
 +    @Override
 +    public Iterator<Pair<KeyExtent,Location>> iterator() {
 +      try {
 +        Scanner scanner = new IsolatedScanner(context.getConnector().createScanner(MetadataTable.NAME, Authorizations.EMPTY));
 +        scanner.fetchColumnFamily(MetadataSchema.TabletsSection.CurrentLocationColumnFamily.NAME);
 +        MetadataSchema.TabletsSection.TabletColumnFamily.PREV_ROW_COLUMN.fetch(scanner);
 +        scanner.setRange(MetadataSchema.TabletsSection.getRange(tableId));
 +
 +        RowIterator rowIter = new RowIterator(scanner);
 +
 +        return Iterators.transform(rowIter, new LocationFunction());
 +      } catch (Exception e) {
 +        throw new RuntimeException(e);
 +      }
 +    }
 +  }
 +}
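
As the class javadoc above notes, GroupBalancer is used by extending it and implementing getPartitioner(). The following is a minimal sketch under that contract; the class name and grouping scheme are hypothetical, grouping tablets by the first byte of the end row.

import org.apache.accumulo.core.data.impl.KeyExtent;
import org.apache.accumulo.server.master.balancer.GroupBalancer;
import org.apache.hadoop.io.Text;

import com.google.common.base.Function;

public class FirstByteGroupBalancer extends GroupBalancer {

  public FirstByteGroupBalancer(String tableId) {
    super(tableId);
  }

  @Override
  protected Function<KeyExtent,String> getPartitioner() {
    return new Function<KeyExtent,String>() {
      @Override
      public String apply(KeyExtent input) {
        Text er = input.getEndRow();
        // The last tablet in a table has a null end row; give it a fixed catch-all group.
        if (er == null || er.getLength() == 0) {
          return "default";
        }
        return String.format("%02x", er.getBytes()[0] & 0xff);
      }
    };
  }
}
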

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
----------------------------------------------------------------------
diff --cc server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
index 724a606,0000000..0d07a77
mode 100644,000000..100644
--- a/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/master/balancer/RegexGroupBalancer.java
@@@ -1,96 -1,0 +1,96 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +
 +package org.apache.accumulo.server.master.balancer;
 +
 +import java.util.Map;
 +import java.util.regex.Matcher;
 +import java.util.regex.Pattern;
 +
 +import org.apache.accumulo.core.conf.AccumuloConfiguration;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.impl.KeyExtent;
 +import org.apache.hadoop.io.Text;
 +
 +import com.google.common.base.Function;
 +
 +/**
 + * A {@link GroupBalancer} that groups tablets using a configurable regex. To use this balancer, configure the following settings for your table, then configure
 + * this balancer for your table.
 + *
 + * <ul>
-  * <li/>Set {@code table.custom.balancer.group.regex.pattern} to a regular expression. This regular expression must have one group. The regex is applied to the
++ * <li>Set {@code table.custom.balancer.group.regex.pattern} to a regular expression. This regular expression must have one group. The regex is applied to the
 + * tablet end row and whatever the regex group matches is used as the group. For example with a regex of {@code (\d\d).*} and an end row of {@code 12abc}, the
 + * group for the tablet would be {@code 12}.
-  * <li/>Set {@code table.custom.balancer.group.regex.default} to a default group. This group is returned for the last tablet in the table and tablets for which
++ * <li>Set {@code table.custom.balancer.group.regex.default} to a default group. This group is returned for the last tablet in the table and tablets for which
 + * the regex does not match.
-  * <li/>Optionally set {@code table.custom.balancer.group.regex.wait.time} to time (can use time suffixes). This determines how long to wait between balancing.
++ * <li>Optionally set {@code table.custom.balancer.group.regex.wait.time} to a time (time suffixes can be used). This determines how long to wait between balancing.
 + * Since this balancer scans the metadata table, you may want to set this higher for large tables.
 + * </ul>
 + */
 +
 +public class RegexGroupBalancer extends GroupBalancer {
 +
 +  public static final String REGEX_PROPERTY = Property.TABLE_ARBITRARY_PROP_PREFIX.getKey() + "balancer.group.regex.pattern";
 +  public static final String DEFAUT_GROUP_PROPERTY = Property.TABLE_ARBITRARY_PROP_PREFIX.getKey() + "balancer.group.regex.default";
 +  public static final String WAIT_TIME_PROPERTY = Property.TABLE_ARBITRARY_PROP_PREFIX.getKey() + "balancer.group.regex.wait.time";
 +
 +  private final String tableId;
 +
 +  public RegexGroupBalancer(String tableId) {
 +    super(tableId);
 +    this.tableId = tableId;
 +  }
 +
 +  @Override
 +  protected long getWaitTime() {
 +    Map<String,String> customProps = configuration.getTableConfiguration(tableId).getAllPropertiesWithPrefix(Property.TABLE_ARBITRARY_PROP_PREFIX);
 +    if (customProps.containsKey(WAIT_TIME_PROPERTY)) {
 +      return AccumuloConfiguration.getTimeInMillis(customProps.get(WAIT_TIME_PROPERTY));
 +    }
 +
 +    return super.getWaitTime();
 +  }
 +
 +  @Override
 +  protected Function<KeyExtent,String> getPartitioner() {
 +
 +    Map<String,String> customProps = configuration.getTableConfiguration(tableId).getAllPropertiesWithPrefix(Property.TABLE_ARBITRARY_PROP_PREFIX);
 +    String regex = customProps.get(REGEX_PROPERTY);
 +    final String defaultGroup = customProps.get(DEFAUT_GROUP_PROPERTY);
 +
 +    final Pattern pattern = Pattern.compile(regex);
 +
 +    return new Function<KeyExtent,String>() {
 +
 +      @Override
 +      public String apply(KeyExtent input) {
 +        Text er = input.getEndRow();
 +        if (er == null) {
 +          return defaultGroup;
 +        }
 +
 +        Matcher matcher = pattern.matcher(er.toString());
 +        if (matcher.matches() && matcher.groupCount() == 1) {
 +          return matcher.group(1);
 +        }
 +
 +        return defaultGroup;
 +      }
 +    };
 +  }
 +}
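
Configuring this balancer for a table follows directly from the javadoc above. Below is a hedged sketch using the client API; the table name and group values are examples, and table.balancer is assumed to be the standard per-table balancer property.

import org.apache.accumulo.core.client.AccumuloException;
import org.apache.accumulo.core.client.AccumuloSecurityException;
import org.apache.accumulo.core.client.Connector;

public class RegexBalancerSetupSketch {
  static void enable(Connector conn, String table) throws AccumuloException, AccumuloSecurityException {
    // Point the table at the balancer (assumes the standard table.balancer property).
    conn.tableOperations().setProperty(table, "table.balancer",
        "org.apache.accumulo.server.master.balancer.RegexGroupBalancer");
    // Group by the first two digits of the end row, as in the javadoc example.
    conn.tableOperations().setProperty(table, "table.custom.balancer.group.regex.pattern", "(\\d\\d).*");
    // Group used for the last tablet and for end rows the regex does not match.
    conn.tableOperations().setProperty(table, "table.custom.balancer.group.regex.default", "lastGroup");
    // Optional: wait two minutes between balancing passes (time suffixes allowed).
    conn.tableOperations().setProperty(table, "table.custom.balancer.group.regex.wait.time", "2m");
  }
}
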

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/security/SecurityOperation.java
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
----------------------------------------------------------------------
diff --cc server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
index fada1ad,0000000..2a1fd00
mode 100644,000000..100644
--- a/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
+++ b/server/base/src/main/java/org/apache/accumulo/server/security/UserImpersonation.java
@@@ -1,228 -1,0 +1,228 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.server.security;
 +
 +import java.util.Arrays;
 +import java.util.Collection;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.Iterator;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import org.apache.accumulo.core.conf.AccumuloConfiguration;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.commons.lang.StringUtils;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * When SASL is enabled, this parses properties from the site configuration to build up a set of all users capable of impersonating another user, the users
 + * which may be impersonated, and the hosts from which the impersonator may issue requests.
 + *
-  * <code>rpc_user=>{allowed_accumulo_users=[...], allowed_client_hosts=[...]</code>
++ * <code>rpc_user=&gt;{allowed_accumulo_users=[...], allowed_client_hosts=[...]}</code>
 + *
 + * @see Property#INSTANCE_RPC_SASL_PROXYUSERS
 + */
 +public class UserImpersonation {
 +
 +  private static final Logger log = LoggerFactory.getLogger(UserImpersonation.class);
 +  private static final Set<String> ALWAYS_TRUE = new AlwaysTrueSet<>();
 +  private static final String ALL = "*", USERS = "users", HOSTS = "hosts";
 +
 +  public static class AlwaysTrueSet<T> implements Set<T> {
 +
 +    @Override
 +    public int size() {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean isEmpty() {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean contains(Object o) {
 +      return true;
 +    }
 +
 +    @Override
 +    public Iterator<T> iterator() {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public Object[] toArray() {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public <E> E[] toArray(E[] a) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean add(T e) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean remove(Object o) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean containsAll(Collection<?> c) {
 +      return true;
 +    }
 +
 +    @Override
 +    public boolean addAll(Collection<? extends T> c) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean retainAll(Collection<?> c) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public boolean removeAll(Collection<?> c) {
 +      throw new UnsupportedOperationException();
 +    }
 +
 +    @Override
 +    public void clear() {
 +      throw new UnsupportedOperationException();
 +    }
 +  }
 +
 +  public static class UsersWithHosts {
 +    private Set<String> users = new HashSet<>(), hosts = new HashSet<>();
 +    private boolean allUsers, allHosts;
 +
 +    public UsersWithHosts() {
 +      allUsers = allHosts = false;
 +    }
 +
 +    public UsersWithHosts(Set<String> users, Set<String> hosts) {
 +      this();
 +      this.users = users;
 +      this.hosts = hosts;
 +    }
 +
 +    public Set<String> getUsers() {
 +      if (allUsers) {
 +        return ALWAYS_TRUE;
 +      }
 +      return users;
 +    }
 +
 +    public Set<String> getHosts() {
 +      if (allHosts) {
 +        return ALWAYS_TRUE;
 +      }
 +      return hosts;
 +    }
 +
 +    public boolean acceptsAllUsers() {
 +      return allUsers;
 +    }
 +
 +    public void setAcceptAllUsers(boolean allUsers) {
 +      this.allUsers = allUsers;
 +    }
 +
 +    public boolean acceptsAllHosts() {
 +      return allHosts;
 +    }
 +
 +    public void setAcceptAllHosts(boolean allHosts) {
 +      this.allHosts = allHosts;
 +    }
 +
 +    public void setUsers(Set<String> users) {
 +      this.users = users;
 +      allUsers = false;
 +    }
 +
 +    public void setHosts(Set<String> hosts) {
 +      this.hosts = hosts;
 +      allHosts = false;
 +    }
 +  }
 +
 +  private final Map<String,UsersWithHosts> proxyUsers;
 +
 +  public UserImpersonation(AccumuloConfiguration conf) {
 +    Map<String,String> entries = conf.getAllPropertiesWithPrefix(Property.INSTANCE_RPC_SASL_PROXYUSERS);
 +    proxyUsers = new HashMap<>();
 +    final String configKey = Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey();
 +    for (Entry<String,String> entry : entries.entrySet()) {
 +      String aclKey = entry.getKey().substring(configKey.length());
 +      int index = aclKey.lastIndexOf('.');
 +
 +      if (-1 == index) {
 +        throw new RuntimeException("Expected 2 elements in key suffix: " + aclKey);
 +      }
 +
 +      final String remoteUser = aclKey.substring(0, index).trim(), usersOrHosts = aclKey.substring(index + 1).trim();
 +      UsersWithHosts usersWithHosts = proxyUsers.get(remoteUser);
 +      if (null == usersWithHosts) {
 +        usersWithHosts = new UsersWithHosts();
 +        proxyUsers.put(remoteUser, usersWithHosts);
 +      }
 +
 +      if (USERS.equals(usersOrHosts)) {
 +        String userString = entry.getValue().trim();
 +        if (ALL.equals(userString)) {
 +          usersWithHosts.setAcceptAllUsers(true);
 +        } else if (!usersWithHosts.acceptsAllUsers()) {
 +          Set<String> users = usersWithHosts.getUsers();
 +          if (null == users) {
 +            users = new HashSet<>();
 +            usersWithHosts.setUsers(users);
 +          }
 +          String[] userValues = StringUtils.split(userString, ',');
 +          users.addAll(Arrays.<String> asList(userValues));
 +        }
 +      } else if (HOSTS.equals(usersOrHosts)) {
 +        String hostsString = entry.getValue().trim();
 +        if (ALL.equals(hostsString)) {
 +          usersWithHosts.setAcceptAllHosts(true);
 +        } else if (!usersWithHosts.acceptsAllHosts()) {
 +          Set<String> hosts = usersWithHosts.getHosts();
 +          if (null == hosts) {
 +            hosts = new HashSet<>();
 +            usersWithHosts.setHosts(hosts);
 +          }
 +          String[] hostValues = StringUtils.split(hostsString, ',');
 +          hosts.addAll(Arrays.<String> asList(hostValues));
 +        }
 +      } else {
 +        log.debug("Ignoring key " + aclKey);
 +      }
 +    }
 +  }
 +
 +  public UsersWithHosts get(String remoteUser) {
 +    return proxyUsers.get(remoteUser);
 +  }
 +
 +}
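
The property layout parsed by the constructor above can be exercised directly. Below is a small sketch under stated assumptions: the user and host values are made up, and the INSTANCE_RPC_SASL_PROXYUSERS key is assumed to end with a dot so that the remaining suffix is remote_user.users or remote_user.hosts, matching the substring logic in the constructor.

import org.apache.accumulo.core.conf.ConfigurationCopy;
import org.apache.accumulo.core.conf.Property;
import org.apache.accumulo.server.security.UserImpersonation;
import org.apache.accumulo.server.security.UserImpersonation.UsersWithHosts;

public class ImpersonationSketch {
  public static void main(String[] args) {
    String prefix = Property.INSTANCE_RPC_SASL_PROXYUSERS.getKey();
    ConfigurationCopy conf = new ConfigurationCopy();
    // "proxy" may impersonate alice and bob, from any client host.
    conf.set(prefix + "proxy.users", "alice,bob");
    conf.set(prefix + "proxy.hosts", "*");

    UsersWithHosts uwh = new UserImpersonation(conf).get("proxy");
    System.out.println(uwh.getUsers().contains("alice")); // true
    System.out.println(uwh.acceptsAllHosts());            // true
  }
}
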

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
----------------------------------------------------------------------
diff --cc server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
index 1af908b,a4c5fd6..274ec76
--- a/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
+++ b/server/base/src/test/java/org/apache/accumulo/server/security/SystemCredentialsTest.java
@@@ -56,8 -53,8 +56,8 @@@ public class SystemCredentialsTest 
    }
  
    /**
 -   * This is a test to ensure the string literal in {@link ConnectorImpl#ConnectorImpl(Instance, Credentials)} is kept up-to-date if we move the
 -   * {@link SystemToken}<br>
 +   * This is a test to ensure the string literal in {@link ConnectorImpl#ConnectorImpl(org.apache.accumulo.core.client.impl.ClientContext)} is kept up-to-date
-    * if we move the {@link SystemToken}<br/>
++   * if we move the {@link SystemToken}<br>
     * This check will not be needed after ACCUMULO-1578
     */
    @Test

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
----------------------------------------------------------------------
diff --cc server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
index e30e9ac,0000000..f24da7e
mode 100644,000000..100644
--- a/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
+++ b/server/master/src/main/java/org/apache/accumulo/master/replication/SequentialWorkAssigner.java
@@@ -1,227 -1,0 +1,227 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.master.replication;
 +
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.Iterator;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.conf.AccumuloConfiguration;
 +import org.apache.accumulo.core.replication.ReplicationConstants;
 +import org.apache.accumulo.core.replication.ReplicationTarget;
 +import org.apache.accumulo.core.zookeeper.ZooUtil;
 +import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;
 +import org.apache.hadoop.fs.Path;
 +import org.apache.zookeeper.KeeperException;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
-  * Creates work in ZK which is <code>filename.serialized_ReplicationTarget => filename</code>, but replicates files in the order in which they were created.
++ * Creates work in ZK which is <code>filename.serialized_ReplicationTarget =&gt; filename</code>, but replicates files in the order in which they were created.
 + * <p>
 + * The intent is to ensure that WALs are replayed in the same order on the peer as they were applied on the primary.
 + */
 +public class SequentialWorkAssigner extends DistributedWorkQueueWorkAssigner {
 +  private static final Logger log = LoggerFactory.getLogger(SequentialWorkAssigner.class);
 +  private static final String NAME = "Sequential Work Assigner";
 +
 +  // @formatter:off
 +  /*
 +   * {
 +   *    peer1 => {sourceTableId1 => work_queue_key1, sourceTableId2 => work_queue_key2, ...}
 +   *    peer2 => {sourceTableId1 => work_queue_key1, sourceTableId3 => work_queue_key4, ...}
 +   *    ...
 +   * }
 +   */
 +  // @formatter:on
 +  private Map<String,Map<String,String>> queuedWorkByPeerName;
 +
 +  public SequentialWorkAssigner() {}
 +
 +  public SequentialWorkAssigner(AccumuloConfiguration conf, Connector conn) {
 +    configure(conf, conn);
 +  }
 +
 +  @Override
 +  public String getName() {
 +    return NAME;
 +  }
 +
 +  protected Map<String,Map<String,String>> getQueuedWork() {
 +    return queuedWorkByPeerName;
 +  }
 +
 +  protected void setQueuedWork(Map<String,Map<String,String>> queuedWork) {
 +    this.queuedWorkByPeerName = queuedWork;
 +  }
 +
 +  /**
 +   * Initialize the queuedWork set with the work already sent out
 +   */
 +  @Override
 +  protected void initializeQueuedWork() {
 +    if (null != queuedWorkByPeerName) {
 +      return;
 +    }
 +
 +    queuedWorkByPeerName = new HashMap<>();
 +    List<String> existingWork;
 +    try {
 +      existingWork = workQueue.getWorkQueued();
 +    } catch (KeeperException | InterruptedException e) {
 +      throw new RuntimeException("Error reading existing queued replication work", e);
 +    }
 +
 +    log.info("Restoring replication work queue state from zookeeper");
 +
 +    for (String work : existingWork) {
 +      Entry<String,ReplicationTarget> entry = DistributedWorkQueueWorkAssignerHelper.fromQueueKey(work);
 +      String filename = entry.getKey();
 +      String peerName = entry.getValue().getPeerName();
 +      String sourceTableId = entry.getValue().getSourceTableId();
 +
 +      log.debug("In progress replication of {} from table with ID {} to peer {}", filename, sourceTableId, peerName);
 +
 +      Map<String,String> replicationForPeer = queuedWorkByPeerName.get(peerName);
 +      if (null == replicationForPeer) {
 +        replicationForPeer = new HashMap<>();
 +        queuedWorkByPeerName.put(peerName, replicationForPeer); // keyed by peer name: peer -> (sourceTableId -> queued work key)
 +      }
 +
 +      replicationForPeer.put(sourceTableId, work);
 +    }
 +  }
 +
 +  /**
 +   * Iterate over the queued work to remove entries that have been completed.
 +   */
 +  @Override
 +  protected void cleanupFinishedWork() {
 +    final Iterator<Entry<String,Map<String,String>>> queuedWork = queuedWorkByPeerName.entrySet().iterator();
 +    final String instanceId = conn.getInstance().getInstanceID();
 +
 +    int elementsRemoved = 0;
 +    // Check the status of all the work we've queued up
 +    while (queuedWork.hasNext()) {
 +      // {peer -> {tableId -> workKey, tableId -> workKey, ... }, peer -> ...}
 +      Entry<String,Map<String,String>> workForPeer = queuedWork.next();
 +
 +      // TableID to workKey (filename and ReplicationTarget)
 +      Map<String,String> queuedReplication = workForPeer.getValue();
 +
 +      Iterator<Entry<String,String>> iter = queuedReplication.entrySet().iterator();
 +      // Loop over every target we need to replicate this file to, removing the target when
 +      // the replication task has finished
 +      while (iter.hasNext()) {
 +        // tableID -> workKey
 +        Entry<String,String> entry = iter.next();
 +        // Null equates to the work for this target was finished
 +        if (null == zooCache.get(ZooUtil.getRoot(instanceId) + ReplicationConstants.ZOO_WORK_QUEUE + "/" + entry.getValue())) {
 +          log.debug("Removing {} from work assignment state", entry.getValue());
 +          iter.remove();
 +          elementsRemoved++;
 +        }
 +      }
 +    }
 +
 +    log.info("Removed {} elements from internal workqueue state because the work was complete", elementsRemoved);
 +  }
 +
 +  @Override
 +  protected int getQueueSize() {
 +    return queuedWorkByPeerName.size();
 +  }
 +
 +  @Override
 +  protected boolean shouldQueueWork(ReplicationTarget target) {
 +    Map<String,String> queuedWorkForPeer = this.queuedWorkByPeerName.get(target.getPeerName());
 +    if (null == queuedWorkForPeer) {
 +      return true;
 +    }
 +
 +    String queuedWork = queuedWorkForPeer.get(target.getSourceTableId());
 +
 +    // If we have no work for the local table to the given peer, submit some!
 +    return null == queuedWork;
 +  }
 +
 +  @Override
 +  protected boolean queueWork(Path path, ReplicationTarget target) {
 +    String queueKey = DistributedWorkQueueWorkAssignerHelper.getQueueKey(path.getName(), target);
 +    Map<String,String> workForPeer = this.queuedWorkByPeerName.get(target.getPeerName());
 +    if (null == workForPeer) {
 +      workForPeer = new HashMap<>();
 +      this.queuedWorkByPeerName.put(target.getPeerName(), workForPeer);
 +    }
 +
 +    String queuedWork = workForPeer.get(target.getSourceTableId());
 +    if (null == queuedWork) {
 +      try {
 +        workQueue.addWork(queueKey, path.toString());
 +        workForPeer.put(target.getSourceTableId(), queueKey);
 +      } catch (KeeperException | InterruptedException e) {
 +        log.warn("Could not queue work for {} to {}", path, target, e);
 +        return false;
 +      }
 +
 +      return true;
 +    } else if (queuedWork.startsWith(path.getName())) {
 +      log.debug("Not re-queueing work for {} as it has already been queued for replication to {}", path, target);
 +      return false;
 +    } else {
 +      log.debug("Not queueing {} for work as {} must be replicated to {} first", path, queuedWork, target.getPeerName());
 +      return false;
 +    }
 +  }
 +
 +  @Override
 +  protected Set<String> getQueuedWork(ReplicationTarget target) {
 +    Map<String,String> queuedWorkForPeer = this.queuedWorkByPeerName.get(target.getPeerName());
 +    if (null == queuedWorkForPeer) {
 +      return Collections.emptySet();
 +    }
 +
 +    String queuedWork = queuedWorkForPeer.get(target.getSourceTableId());
 +    if (null == queuedWork) {
 +      return Collections.emptySet();
 +    } else {
 +      return Collections.singleton(queuedWork);
 +    }
 +  }
 +
 +  @Override
 +  protected void removeQueuedWork(ReplicationTarget target, String queueKey) {
 +    Map<String,String> queuedWorkForPeer = this.queuedWorkByPeerName.get(target.getPeerName());
 +    if (null == queuedWorkForPeer) {
 +      log.warn("removeQueuedWork called when no work was queued for {}", target.getPeerName());
 +      return;
 +    }
 +
 +    String queuedWork = queuedWorkForPeer.get(target.getSourceTableId());
 +    if (queuedWork.equals(queueKey)) {
 +      queuedWorkForPeer.remove(target.getSourceTableId());
 +    } else {
 +      log.warn("removeQueuedWork called on {} with differing queueKeys, expected {} but was {}", target, queueKey, queuedWork);
 +      return;
 +    }
 +  }
 +}
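
The queue keys this assigner stores round-trip through DistributedWorkQueueWorkAssignerHelper, as initializeQueuedWork() and queueWork() above rely on. Below is a hedged sketch of that round trip; the file name, peer, and table ID are made up, and the three-argument ReplicationTarget constructor (peer name, remote identifier, source table ID) is an assumption.

import java.util.Map.Entry;

import org.apache.accumulo.core.replication.ReplicationTarget;
import org.apache.accumulo.server.replication.DistributedWorkQueueWorkAssignerHelper;

public class QueueKeySketch {
  public static void main(String[] args) {
    // Assumed constructor order: peer name, remote identifier, source table ID.
    ReplicationTarget target = new ReplicationTarget("peer1", "remoteTable", "2");
    // Encode filename + target the way queueWork() does before workQueue.addWork().
    String queueKey = DistributedWorkQueueWorkAssignerHelper.getQueueKey("wal-0001", target);

    // Decode it the way initializeQueuedWork() does when restoring state from zookeeper.
    Entry<String,ReplicationTarget> parsed = DistributedWorkQueueWorkAssignerHelper.fromQueueKey(queueKey);
    System.out.println(parsed.getKey());                      // wal-0001
    System.out.println(parsed.getValue().getPeerName());      // peer1
    System.out.println(parsed.getValue().getSourceTableId()); // 2
  }
}
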

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6becfbd3/server/monitor/src/main/java/org/apache/accumulo/monitor/servlets/DefaultServlet.java
----------------------------------------------------------------------


[02/19] accumulo git commit: ACCUMULO-4103 Add jdk8 profile for findbugs

Posted by ct...@apache.org.
ACCUMULO-4103 Add jdk8 profile for findbugs

* Automatically set findbugs.version for jdk8


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/f38d5e7f
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/f38d5e7f
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/f38d5e7f

Branch: refs/heads/1.7
Commit: f38d5e7f69d21eec6d197e2109575bbc60b3eae0
Parents: 05811af
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 14:30:35 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 15:34:07 2016 -0500

----------------------------------------------------------------------
 pom.xml | 9 +++++++++
 1 file changed, 9 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/f38d5e7f/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 833bf44..ea40f31 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1398,5 +1398,14 @@
         <slf4j.version>1.7.5</slf4j.version>
       </properties>
     </profile>
+    <profile>
+      <id>jdk8</id>
+      <activation>
+        <jdk>[1.8,)</jdk>
+      </activation>
+      <properties>
+        <findbugs.version>3.0.1</findbugs.version>
+      </properties>
+    </profile>
   </profiles>
 </project>



[17/19] accumulo git commit: ACCUMULO-4203 Remove unnecessary findbugs.version 1.7 branch

Posted by ct...@apache.org.
ACCUMULO-4203 Remove unnecessary findbugs.version 1.7 branch

* findbugs.version defaults to 3.0.1 in 1.7 pom, which works with JDK7
  and JDK8, so no need to put it in the JDK8 profile.


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/0ccba14f
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/0ccba14f
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/0ccba14f

Branch: refs/heads/master
Commit: 0ccba14f8daf2352a12cd8f6a97b18373131a792
Parents: 6becfbd
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 22:12:28 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 22:12:28 2016 -0500

----------------------------------------------------------------------
 pom.xml | 3 ---
 1 file changed, 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/0ccba14f/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 55bbaab..644f506 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1409,9 +1409,6 @@
       <activation>
         <jdk>[1.8,1.9)</jdk>
       </activation>
-      <properties>
-        <findbugs.version>3.0.1</findbugs.version>
-      </properties>
       <build>
         <pluginManagement>
           <plugins>


[08/19] accumulo git commit: ACCUMULO-4102 Bump maven-plugin-plugin to 3.4

Posted by ct...@apache.org.
ACCUMULO-4102 Bump maven-plugin-plugin to 3.4

* Bump maven-plugin-plugin so the generated HelpMojo doesn't have
  javadoc problems (especially on JDK8)


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/7cc81374
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/7cc81374
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/7cc81374

Branch: refs/heads/master
Commit: 7cc81374233b0f8ba3a243f6084eecce9d6a1e6f
Parents: 4169a12
Author: Christopher Tubbs <ct...@apache.org>
Authored: Fri Jan 8 20:45:03 2016 -0500
Committer: Christopher Tubbs <ct...@apache.org>
Committed: Fri Jan 8 20:45:49 2016 -0500

----------------------------------------------------------------------
 pom.xml | 5 +++++
 1 file changed, 5 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/7cc81374/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 6138dbc..f04aa53 100644
--- a/pom.xml
+++ b/pom.xml
@@ -904,6 +904,11 @@
             </execution>
           </executions>
         </plugin>
+        <plugin>
+          <groupId>org.apache.maven.plugins</groupId>
+          <artifactId>maven-plugin-plugin</artifactId>
+          <version>3.4</version>
+        </plugin>
       </plugins>
     </pluginManagement>
     <plugins>