Posted to commits@accumulo.apache.org by el...@apache.org on 2016/08/31 04:24:24 UTC

[01/10] accumulo git commit: ACCUMULO-4423 Annotate integration tests with categories

Repository: accumulo
Updated Branches:
  refs/heads/1.7 2be85ade3 -> 661dac336
  refs/heads/1.8 602799787 -> d28a3ee3e
  refs/heads/master 673fdb9ea -> 159560979


ACCUMULO-4423 Annotate integration tests with categories

Differentiates tests which always use a minicluster from those
which can use either a minicluster or a standalone cluster.
Out-of-the-box test invocation should be unchanged.

Includes updated documentation to TESTING.md as well.

Closes apache/accumulo#144
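
The categorization mechanism this commit introduces can be illustrated with a small, self-contained sketch. This is not Accumulo code: the `Category`-style annotation below is a stand-in for JUnit's `org.junit.experimental.categories.Category`, and the `Fake*IT` classes are hypothetical, showing only how a runner can match a test class against a requested category via an annotation of marker interfaces.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.Arrays;

public class CategoryFilterSketch {
  // Stand-in for org.junit.experimental.categories.Category
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  public @interface Category {
    Class<?>[] value();
  }

  // Marker interfaces mirroring AnyClusterTest / MiniClusterOnlyTest
  public interface AnyClusterTest {}
  public interface MiniClusterOnlyTest {}

  // Hypothetical test classes, annotated the same way the commit annotates ITs
  @Category(MiniClusterOnlyTest.class)
  public static class FakeKerberosIT {}

  @Category(AnyClusterTest.class)
  public static class FakeReadWriteIT {}

  /** True if the test class declares the requested category (or a subtype of it). */
  public static boolean inGroup(Class<?> testClass, Class<?> group) {
    Category c = testClass.getAnnotation(Category.class);
    if (c == null) {
      return false;
    }
    return Arrays.stream(c.value()).anyMatch(group::isAssignableFrom);
  }

  public static void main(String[] args) {
    System.out.println(inGroup(FakeKerberosIT.class, MiniClusterOnlyTest.class)); // true
    System.out.println(inGroup(FakeKerberosIT.class, AnyClusterTest.class));      // false
  }
}
```

In the real build, this matching is done by the failsafe plugin's `groups`/`excludeGroups` support rather than by hand-rolled reflection.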


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/661dac33
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/661dac33
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/661dac33

Branch: refs/heads/1.7
Commit: 661dac33648fb8bb311434720563c322611c1f12
Parents: 2be85ad
Author: Josh Elser <el...@apache.org>
Authored: Tue Aug 30 16:23:48 2016 -0400
Committer: Josh Elser <el...@apache.org>
Committed: Tue Aug 30 23:27:09 2016 -0400

----------------------------------------------------------------------
 TESTING.md                                      | 25 ++++++++++++--------
 pom.xml                                         | 24 ++++++++++++++++++-
 .../accumulo/harness/AccumuloClusterIT.java     |  3 +++
 .../accumulo/harness/SharedMiniClusterIT.java   |  3 +++
 .../org/apache/accumulo/test/NamespacesIT.java  |  3 +++
 .../test/categories/AnyClusterTest.java         | 25 ++++++++++++++++++++
 .../test/categories/MiniClusterOnlyTest.java    | 24 +++++++++++++++++++
 .../accumulo/test/categories/package-info.java  | 21 ++++++++++++++++
 .../accumulo/test/functional/ClassLoaderIT.java |  3 +++
 .../test/functional/ConfigurableMacIT.java      |  3 +++
 .../accumulo/test/functional/KerberosIT.java    |  3 +++
 .../test/functional/KerberosProxyIT.java        |  3 +++
 .../test/functional/KerberosRenewalIT.java      |  3 +++
 .../accumulo/test/functional/PermissionsIT.java |  3 +++
 .../accumulo/test/functional/TableIT.java       |  3 +++
 .../test/replication/KerberosReplicationIT.java |  3 +++
 trace/pom.xml                                   |  6 +++++
 17 files changed, 147 insertions(+), 11 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/TESTING.md
----------------------------------------------------------------------
diff --git a/TESTING.md b/TESTING.md
index de484ee..125110b 100644
--- a/TESTING.md
+++ b/TESTING.md
@@ -47,23 +47,27 @@ but are checking for regressions that were previously seen in the codebase. Thes
 resources, at least another gigabyte of memory over what Maven itself requires. As such, it's recommended to have at
 least 3-4GB of free memory and 10GB of free disk space.
 
-## Accumulo for testing
+## Test Categories
 
-The primary reason these tests take so much longer than the unit tests is that most are using an Accumulo instance to
-perform the test. It's a necessary evil; however, there are things we can do to improve this.
+Accumulo uses JUnit Category annotations to categorize certain integration tests based on their runtime requirements.
+Presently there are two different categories:
 
-## MiniAccumuloCluster
+### MiniAccumuloCluster (`MiniClusterOnlyTest`)
 
-By default, these tests will use a MiniAccumuloCluster which is a multi-process "implementation" of Accumulo, managed
-through Java interfaces. This MiniAccumuloCluster has the ability to use the local filesystem or Apache Hadoop's
+These tests use MiniAccumuloCluster (MAC) which is a multi-process "implementation" of Accumulo, managed
+through Java APIs. This MiniAccumuloCluster has the ability to use the local filesystem or Apache Hadoop's
 MiniDFSCluster, as well as starting one to many tablet servers. MiniAccumuloCluster tends to be a very useful tool in
 that it can automatically provide a workable instance that mimics how an actual deployment functions.
 
 The downside of using MiniAccumuloCluster is that a significant portion of each test is now devoted to starting and
 stopping the MiniAccumuloCluster.  While this is a surefire way to isolate tests from interfering with one another, it
-increases the actual runtime of the test by, on average, 10x.
+increases the actual runtime of the test by, on average, 10x. Sometimes a test requires the use of MAC because it
+is destructive or needs a special environment setup (e.g. Kerberos).
 
-## Standalone Cluster
+By default, these tests are run during the `integration-test` lifecycle phase using `mvn verify`. These tests can
+also be run at the `test` lifecycle phase using `mvn package -Pminicluster-unit-tests`.
+
+### Standalone Cluster (`AnyClusterTest`)
 
 An alternative to the MiniAccumuloCluster for testing, a standalone Accumulo cluster can also be configured for use by
 most tests. This requires a manual step of building and deploying the Accumulo cluster by hand. The build can then be
@@ -75,7 +79,9 @@ Use of a standalone cluster can be enabled using system properties on the Maven
 providing a Java properties file on the Maven command line. The use of a properties file is recommended since it is
 typically a fixed file per standalone cluster you want to run the tests against.
 
-### Configuration
+These tests will always run during the `integration-test` lifecycle phase using `mvn verify`.
+
+## Configuration for Standalone clusters
 
 The following properties can be used to configure a standalone cluster:
 
@@ -128,4 +134,3 @@ at a time, for example the [Continuous Ingest][1] and [Randomwalk test][2] suite
 [3]: https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html
 [4]: http://maven.apache.org/surefire/maven-surefire-plugin/
 [5]: http://maven.apache.org/surefire/maven-failsafe-plugin/
-
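
The group selection that the pom.xml changes below wire into failsafe (`groups` / `excludeGroups`) follows a simple rule: a test runs only if it carries at least one included category and none of the excluded ones. The following is a hedged, stand-alone sketch of that rule using plain string names; it models the behavior rather than reproducing failsafe's implementation.

```java
import java.util.Set;

public class GroupSelectionSketch {
  /**
   * Simplified model of failsafe's groups/excludedGroups filtering:
   * run the test if it declares at least one category in 'groups'
   * and no category in 'excluded'. Not the real failsafe logic.
   */
  public static boolean shouldRun(Set<String> testCategories, Set<String> groups, Set<String> excluded) {
    boolean included = testCategories.stream().anyMatch(groups::contains);
    boolean banned = testCategories.stream().anyMatch(excluded::contains);
    return included && !banned;
  }

  public static void main(String[] args) {
    // Default build: both groups included, nothing excluded -> everything runs.
    System.out.println(shouldRun(Set.of("AnyClusterTest"),
        Set.of("AnyClusterTest", "MiniClusterOnlyTest"), Set.of()));
    // only-minicluster-tests profile: groups=MiniClusterOnlyTest, excluded=AnyClusterTest.
    System.out.println(shouldRun(Set.of("AnyClusterTest"),
        Set.of("MiniClusterOnlyTest"), Set.of("AnyClusterTest")));
    System.out.println(shouldRun(Set.of("MiniClusterOnlyTest"),
        Set.of("MiniClusterOnlyTest"), Set.of("AnyClusterTest")));
  }
}
```

Under the `only-minicluster-tests` profile, an `AnyClusterTest`-annotated IT is filtered out while a `MiniClusterOnlyTest` IT still runs, which is exactly the property swap the profiles below perform.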

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 0f57f62..d6393d2 100644
--- a/pom.xml
+++ b/pom.xml
@@ -115,6 +115,10 @@
     <url>https://builds.apache.org/view/A-D/view/Accumulo/</url>
   </ciManagement>
   <properties>
+    <accumulo.anyClusterTests>org.apache.accumulo.test.categories.AnyClusterTest</accumulo.anyClusterTests>
+    <accumulo.it.excludedGroups />
+    <accumulo.it.groups>${accumulo.anyClusterTests},${accumulo.miniclusterTests}</accumulo.it.groups>
+    <accumulo.miniclusterTests>org.apache.accumulo.test.categories.MiniClusterOnlyTest</accumulo.miniclusterTests>
     <!-- used for filtering the java source with the current version -->
     <accumulo.release.version>${project.version}</accumulo.release.version>
     <assembly.tarLongFileMode>posix</assembly.tarLongFileMode>
@@ -240,7 +244,7 @@
       <dependency>
         <groupId>junit</groupId>
         <artifactId>junit</artifactId>
-        <version>4.11</version>
+        <version>4.12</version>
       </dependency>
       <dependency>
         <groupId>log4j</groupId>
@@ -1006,6 +1010,10 @@
               <goal>integration-test</goal>
               <goal>verify</goal>
             </goals>
+            <configuration>
+              <excludeGroups>${accumulo.it.excludedGroups}</excludeGroups>
+              <groups>${accumulo.it.groups}</groups>
+            </configuration>
           </execution>
         </executions>
       </plugin>
@@ -1399,5 +1407,19 @@
         </pluginManagement>
       </build>
     </profile>
+    <profile>
+      <id>only-minicluster-tests</id>
+      <properties>
+        <accumulo.it.excludedGroups>${accumulo.anyClusterTests}</accumulo.it.excludedGroups>
+        <accumulo.it.groups>${accumulo.miniclusterTests}</accumulo.it.groups>
+      </properties>
+    </profile>
+    <profile>
+      <id>standalone-capable-tests</id>
+      <properties>
+        <accumulo.it.excludedGroups>${accumulo.miniclusterTests}</accumulo.it.excludedGroups>
+        <accumulo.it.groups>${accumulo.anyClusterTests}</accumulo.it.groups>
+      </properties>
+    </profile>
   </profiles>
 </project>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java b/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
index e2b35f4..436ceb5 100644
--- a/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
+++ b/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
@@ -43,6 +43,7 @@ import org.apache.accumulo.harness.conf.AccumuloMiniClusterConfiguration;
 import org.apache.accumulo.harness.conf.StandaloneAccumuloClusterConfiguration;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.categories.AnyClusterTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -52,12 +53,14 @@ import org.junit.AfterClass;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.BeforeClass;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 /**
  * General Integration-Test base class that provides access to an Accumulo instance for testing. This instance could be MAC or a standalone instance.
  */
+@Category(AnyClusterTest.class)
 public abstract class AccumuloClusterIT extends AccumuloIT implements MiniClusterConfigurationCallback, ClusterUsers {
   private static final Logger log = LoggerFactory.getLogger(AccumuloClusterIT.class);
   private static final String TRUE = Boolean.toString(true);

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java b/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
index f66a192..644055f 100644
--- a/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
+++ b/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
@@ -31,9 +31,11 @@ import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -48,6 +50,7 @@ import org.slf4j.LoggerFactory;
  * a method annotated with the {@link org.junit.BeforeClass} JUnit annotation and {@link #stopMiniCluster()} in a method annotated with the
  * {@link org.junit.AfterClass} JUnit annotation.
  */
+@Category(MiniClusterOnlyTest.class)
 public abstract class SharedMiniClusterIT extends AccumuloIT implements ClusterUsers {
   private static final Logger log = LoggerFactory.getLogger(SharedMiniClusterIT.class);
   public static final String TRUE = Boolean.toString(true);

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java b/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
index aaa6a6e..6ec2127 100644
--- a/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
@@ -77,14 +77,17 @@ import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.examples.simple.constraints.NumericValueConstraint;
 import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.io.Text;
 import org.junit.After;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 // Testing default namespace configuration with inheritance requires altering the system state and restoring it back to normal
 // Punt on this for now and just let it use a minicluster.
+@Category(MiniClusterOnlyTest.class)
 public class NamespacesIT extends AccumuloClusterIT {
 
   private Connector c;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java b/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java
new file mode 100644
index 0000000..765057e
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.categories;
+
+/**
+ * Interface to be used with JUnit Category annotation to denote that the IntegrationTest can be used with any kind of cluster (a MiniAccumuloCluster or a
+ * StandaloneAccumuloCluster).
+ */
+public interface AnyClusterTest {
+
+}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java b/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
new file mode 100644
index 0000000..1a972ef
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.categories;
+
+/**
+ * Interface to be used with JUnit Category annotation to denote that the IntegrationTest requires the use of a MiniAccumuloCluster.
+ */
+public interface MiniClusterOnlyTest {
+
+}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/categories/package-info.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/categories/package-info.java b/test/src/test/java/org/apache/accumulo/test/categories/package-info.java
new file mode 100644
index 0000000..e7071fc
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/categories/package-info.java
@@ -0,0 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * JUnit categories for the various types of Accumulo integration tests.
+ */
+package org.apache.accumulo.test.categories;
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java b/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
index 4b51bd2..d09e2a6 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
@@ -40,13 +40,16 @@ import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.harness.AccumuloClusterIT;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.hamcrest.CoreMatchers;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
+@Category(MiniClusterOnlyTest.class)
 public class ClassLoaderIT extends AccumuloClusterIT {
 
   private static final long ZOOKEEPER_PROPAGATION_TIME = 10 * 1000;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java b/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
index 53eb8e4..6d04610 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
@@ -40,12 +40,14 @@ import org.apache.accumulo.minicluster.MiniAccumuloCluster;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ZooKeeperBindException;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.accumulo.test.util.CertUtils;
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.zookeeper.KeeperException;
 import org.junit.After;
 import org.junit.Before;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -53,6 +55,7 @@ import org.slf4j.LoggerFactory;
  * General Integration-Test base class that provides access to a {@link MiniAccumuloCluster} for testing. Tests using these typically do very disruptive things
  * to the instance, and require specific configuration. Most tests don't need this level of control and should extend {@link AccumuloClusterIT} instead.
  */
+@Category(MiniClusterOnlyTest.class)
 public class ConfigurableMacIT extends AccumuloIT {
   public static final Logger log = LoggerFactory.getLogger(ConfigurableMacIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java b/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
index 612718d..a3da827 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
@@ -68,6 +68,7 @@ import org.apache.accumulo.harness.TestingKdc;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.minikdc.MiniKdc;
@@ -77,6 +78,7 @@ import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -86,6 +88,7 @@ import com.google.common.collect.Sets;
 /**
  * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java b/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
index af6310c..2bef539 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
@@ -56,6 +56,7 @@ import org.apache.accumulo.proxy.thrift.ScanResult;
 import org.apache.accumulo.proxy.thrift.TimeType;
 import org.apache.accumulo.proxy.thrift.WriterOptions;
 import org.apache.accumulo.server.util.PortUtils;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -71,6 +72,7 @@ import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Rule;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.junit.rules.ExpectedException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -78,6 +80,7 @@ import org.slf4j.LoggerFactory;
 /**
  * Tests impersonation of clients by the proxy over SASL
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosProxyIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosProxyIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java b/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
index 28c1dfc..07e0662 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
@@ -45,6 +45,7 @@ import org.apache.accumulo.harness.MiniClusterHarness;
 import org.apache.accumulo.harness.TestingKdc;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.minikdc.MiniKdc;
@@ -54,6 +55,7 @@ import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -62,6 +64,7 @@ import com.google.common.collect.Iterables;
 /**
  * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosRenewalIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosRenewalIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java b/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
index 4aea354..6967a48 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
@@ -53,15 +53,18 @@ import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.io.Text;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 // This test verifies the default permissions so a clean instance must be used. A shared instance might
 // not be representative of a fresh installation.
+@Category(MiniClusterOnlyTest.class)
 public class PermissionsIT extends AccumuloClusterIT {
   private static final Logger log = LoggerFactory.getLogger(PermissionsIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java b/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
index 3061b87..0bfdc00 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
@@ -39,15 +39,18 @@ import org.apache.accumulo.harness.AccumuloClusterIT;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
 import org.hamcrest.CoreMatchers;
 import org.junit.Assume;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 import com.google.common.collect.Iterators;
 
+@Category(MiniClusterOnlyTest.class)
 public class TableIT extends AccumuloClusterIT {
 
   @Override

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java b/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
index be9e320..933dfb8 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
@@ -41,6 +41,7 @@ import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ProcessReference;
 import org.apache.accumulo.server.replication.ReplicaSystemFactory;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.accumulo.test.functional.KerberosIT;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
@@ -54,6 +55,7 @@ import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -62,6 +64,7 @@ import com.google.common.collect.Iterators;
 /**
  * Ensure that replication occurs using keytabs instead of password (not to mention SASL)
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosReplicationIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/trace/pom.xml
----------------------------------------------------------------------
diff --git a/trace/pom.xml b/trace/pom.xml
index 2b79288..2d93a84 100644
--- a/trace/pom.xml
+++ b/trace/pom.xml
@@ -34,5 +34,11 @@
       <groupId>org.apache.htrace</groupId>
       <artifactId>htrace-core</artifactId>
     </dependency>
+    <!-- Otherwise failsafe will complain about the configured groups -->
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 </project>


[08/10] accumulo git commit: Merge branch '1.7' into 1.8

Posted by el...@apache.org.
http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/categories/AnyClusterTest.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/categories/AnyClusterTest.java
index 0000000,0000000..765057e
new file mode 100644
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/categories/AnyClusterTest.java
@@@ -1,0 -1,0 +1,25 @@@
++/*
++ * Licensed to the Apache Software Foundation (ASF) under one or more
++ * contributor license agreements.  See the NOTICE file distributed with
++ * this work for additional information regarding copyright ownership.
++ * The ASF licenses this file to you under the Apache License, Version 2.0
++ * (the "License"); you may not use this file except in compliance with
++ * the License.  You may obtain a copy of the License at
++ *
++ * http://www.apache.org/licenses/LICENSE-2.0
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS,
++ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++ * See the License for the specific language governing permissions and
++ * limitations under the License.
++ */
++package org.apache.accumulo.test.categories;
++
++/**
++ * Interface to be used with JUnit Category annotation to denote that the IntegrationTest can be used with any kind of cluster (a MiniAccumuloCluster or a
++ * StandaloneAccumuloCluster).
++ */
++public interface AnyClusterTest {
++
++}
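
The category interfaces above are plain marker types; JUnit's category mechanism matches a test class against a requested group by annotation lookup plus assignability. A minimal self-contained sketch of that matching logic, using stand-in types rather than the real JUnit `@Category` annotation (the `Category`, `ExampleIT`, and `inGroup` names below are illustrative, not part of this commit):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Marker interfaces standing in for the new test categories
interface AnyClusterTest {}
interface MiniClusterOnlyTest {}

// Stand-in for JUnit's @Category annotation
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface Category {
  Class<?>[] value();
}

// A test class tagged the way this commit tags KerberosIT et al.
@Category(MiniClusterOnlyTest.class)
class ExampleIT {}

public class CategoryFilter {
  // Returns true when the test class declares the requested category
  // (or a subtype of it), mirroring how category runners filter tests.
  static boolean inGroup(Class<?> testClass, Class<?> group) {
    Category c = testClass.getAnnotation(Category.class);
    if (c == null) {
      return false;
    }
    for (Class<?> declared : c.value()) {
      if (group.isAssignableFrom(declared)) {
        return true;
      }
    }
    return false;
  }
}
```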

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
index 0000000,0000000..1a972ef
new file mode 100644
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
@@@ -1,0 -1,0 +1,24 @@@
++/*
++ * Licensed to the Apache Software Foundation (ASF) under one or more
++ * contributor license agreements.  See the NOTICE file distributed with
++ * this work for additional information regarding copyright ownership.
++ * The ASF licenses this file to you under the Apache License, Version 2.0
++ * (the "License"); you may not use this file except in compliance with
++ * the License.  You may obtain a copy of the License at
++ *
++ * http://www.apache.org/licenses/LICENSE-2.0
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS,
++ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++ * See the License for the specific language governing permissions and
++ * limitations under the License.
++ */
++package org.apache.accumulo.test.categories;
++
++/**
++ * Interface to be used with JUnit Category annotation to denote that the IntegrationTest requires the use of a MiniAccumuloCluster.
++ */
++public interface MiniClusterOnlyTest {
++
++}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/categories/package-info.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/categories/package-info.java
index 0000000,0000000..e7071fc
new file mode 100644
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/categories/package-info.java
@@@ -1,0 -1,0 +1,21 @@@
++/*
++ * Licensed to the Apache Software Foundation (ASF) under one or more
++ * contributor license agreements.  See the NOTICE file distributed with
++ * this work for additional information regarding copyright ownership.
++ * The ASF licenses this file to you under the Apache License, Version 2.0
++ * (the "License"); you may not use this file except in compliance with
++ * the License.  You may obtain a copy of the License at
++ *
++ * http://www.apache.org/licenses/LICENSE-2.0
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS,
++ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++ * See the License for the specific language governing permissions and
++ * limitations under the License.
++ */
++/**
++ * JUnit categories for the various types of Accumulo integration tests.
++ */
++package org.apache.accumulo.test.categories;
++

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
index 29f2780,0000000..8dbbc12
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
@@@ -1,121 -1,0 +1,124 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.IOException;
 +import java.io.InputStream;
 +import java.util.Collections;
 +import java.util.EnumSet;
 +import java.util.Iterator;
 +import java.util.Map.Entry;
 +import java.util.concurrent.TimeUnit;
 +
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.IteratorSetting;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.iterators.Combiner;
 +import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
  +import org.apache.hadoop.fs.FSDataOutputStream;
 +import org.apache.hadoop.fs.FileSystem;
 +import org.apache.hadoop.fs.Path;
 +import org.hamcrest.CoreMatchers;
 +import org.junit.Assume;
 +import org.junit.Before;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +
 +import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 +
++@Category(MiniClusterOnlyTest.class)
 +public class ClassLoaderIT extends AccumuloClusterHarness {
 +
 +  private static final long ZOOKEEPER_PROPAGATION_TIME = 10 * 1000;
 +
 +  @Override
 +  protected int defaultTimeoutSeconds() {
 +    return 2 * 60;
 +  }
 +
 +  private String rootPath;
 +
 +  @Before
 +  public void checkCluster() {
 +    Assume.assumeThat(getClusterType(), CoreMatchers.is(ClusterType.MINI));
 +    MiniAccumuloClusterImpl mac = (MiniAccumuloClusterImpl) getCluster();
 +    rootPath = mac.getConfig().getDir().getAbsolutePath();
 +  }
 +
 +  private static void copyStreamToFileSystem(FileSystem fs, String jarName, Path path) throws IOException {
 +    byte[] buffer = new byte[10 * 1024];
 +    try (FSDataOutputStream dest = fs.create(path); InputStream stream = ClassLoaderIT.class.getResourceAsStream(jarName)) {
 +      while (true) {
 +        int n = stream.read(buffer, 0, buffer.length);
 +        if (n <= 0) {
 +          break;
 +        }
 +        dest.write(buffer, 0, n);
 +      }
 +    }
 +  }
 +
 +  @Test
 +  public void test() throws Exception {
 +    Connector c = getConnector();
 +    String tableName = getUniqueNames(1)[0];
 +    c.tableOperations().create(tableName);
 +    BatchWriter bw = c.createBatchWriter(tableName, new BatchWriterConfig());
 +    Mutation m = new Mutation("row1");
 +    m.put("cf", "col1", "Test");
 +    bw.addMutation(m);
 +    bw.close();
 +    scanCheck(c, tableName, "Test");
 +    FileSystem fs = getCluster().getFileSystem();
 +    Path jarPath = new Path(rootPath + "/lib/ext/Test.jar");
 +    copyStreamToFileSystem(fs, "/TestCombinerX.jar", jarPath);
 +    sleepUninterruptibly(1, TimeUnit.SECONDS);
 +    IteratorSetting is = new IteratorSetting(10, "TestCombiner", "org.apache.accumulo.test.functional.TestCombiner");
 +    Combiner.setColumns(is, Collections.singletonList(new IteratorSetting.Column("cf")));
 +    c.tableOperations().attachIterator(tableName, is, EnumSet.of(IteratorScope.scan));
 +    sleepUninterruptibly(ZOOKEEPER_PROPAGATION_TIME, TimeUnit.MILLISECONDS);
 +    scanCheck(c, tableName, "TestX");
 +    fs.delete(jarPath, true);
 +    copyStreamToFileSystem(fs, "/TestCombinerY.jar", jarPath);
 +    sleepUninterruptibly(5, TimeUnit.SECONDS);
 +    scanCheck(c, tableName, "TestY");
 +    fs.delete(jarPath, true);
 +  }
 +
 +  private void scanCheck(Connector c, String tableName, String expected) throws Exception {
 +    Scanner bs = c.createScanner(tableName, Authorizations.EMPTY);
 +    Iterator<Entry<Key,Value>> iterator = bs.iterator();
 +    assertTrue(iterator.hasNext());
 +    Entry<Key,Value> next = iterator.next();
 +    assertFalse(iterator.hasNext());
 +    assertEquals(expected, next.getValue().toString());
 +  }
 +
 +}
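
The `copyStreamToFileSystem` helper above is a plain fixed-buffer copy loop specialized to Hadoop's `FSDataOutputStream`. The same pattern over generic java.io streams, as a self-contained sketch (the `StreamCopy` class name is illustrative, not part of this commit):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopy {
  // Same fixed-buffer loop as copyStreamToFileSystem, but over
  // generic streams; returns the number of bytes copied.
  static long copy(InputStream in, OutputStream out) throws IOException {
    byte[] buffer = new byte[10 * 1024];
    long total = 0;
    while (true) {
      int n = in.read(buffer, 0, buffer.length);
      if (n <= 0) {
        break;
      }
      out.write(buffer, 0, n);
      total += n;
    }
    return total;
  }
}
```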

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
index 85246bf,0000000..71777bf
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
@@@ -1,185 -1,0 +1,188 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.BufferedOutputStream;
 +import java.io.File;
 +import java.io.FileOutputStream;
 +import java.io.IOException;
 +import java.io.OutputStream;
 +import java.util.Map;
 +
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.ClientConfiguration;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.Instance;
 +import org.apache.accumulo.core.client.ZooKeeperInstance;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.util.MonitorUtil;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.minicluster.MiniAccumuloCluster;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 +import org.apache.accumulo.minicluster.impl.ZooKeeperBindException;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.accumulo.test.util.CertUtils;
 +import org.apache.commons.io.FileUtils;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.Path;
 +import org.apache.zookeeper.KeeperException;
 +import org.junit.After;
 +import org.junit.Before;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * General Integration-Test base class that provides access to a {@link MiniAccumuloCluster} for testing. Tests extending this class typically do very disruptive
 + * things to the instance, and require specific configuration. Most tests don't need this level of control and should extend {@link AccumuloClusterHarness} instead.
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class ConfigurableMacBase extends AccumuloITBase {
 +  public static final Logger log = LoggerFactory.getLogger(ConfigurableMacBase.class);
 +
 +  protected MiniAccumuloClusterImpl cluster;
 +
 +  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {}
 +
 +  protected void beforeClusterStart(MiniAccumuloConfigImpl cfg) throws Exception {}
 +
 +  protected static final String ROOT_PASSWORD = "testRootPassword1";
 +
 +  public static void configureForEnvironment(MiniAccumuloConfigImpl cfg, Class<?> testClass, File folder) {
 +    if ("true".equals(System.getProperty("org.apache.accumulo.test.functional.useSslForIT"))) {
 +      configureForSsl(cfg, folder);
 +    }
 +    if ("true".equals(System.getProperty("org.apache.accumulo.test.functional.useCredProviderForIT"))) {
 +      cfg.setUseCredentialProvider(true);
 +    }
 +  }
 +
 +  protected static void configureForSsl(MiniAccumuloConfigImpl cfg, File sslDir) {
 +    Map<String,String> siteConfig = cfg.getSiteConfig();
 +    if ("true".equals(siteConfig.get(Property.INSTANCE_RPC_SSL_ENABLED.getKey()))) {
 +      // already enabled; don't mess with it
 +      return;
 +    }
 +
 +    // create parent directories, and ensure sslDir is empty
 +    assertTrue(sslDir.mkdirs() || sslDir.isDirectory());
 +    FileUtils.deleteQuietly(sslDir);
 +    assertTrue(sslDir.mkdir());
 +
 +    File rootKeystoreFile = new File(sslDir, "root-" + cfg.getInstanceName() + ".jks");
 +    File localKeystoreFile = new File(sslDir, "local-" + cfg.getInstanceName() + ".jks");
 +    File publicTruststoreFile = new File(sslDir, "public-" + cfg.getInstanceName() + ".jks");
 +    final String rootKeystorePassword = "root_keystore_password", truststorePassword = "truststore_password";
 +    try {
 +      new CertUtils(Property.RPC_SSL_KEYSTORE_TYPE.getDefaultValue(), "o=Apache Accumulo,cn=MiniAccumuloCluster", "RSA", 2048, "sha1WithRSAEncryption")
 +          .createAll(rootKeystoreFile, localKeystoreFile, publicTruststoreFile, cfg.getInstanceName(), rootKeystorePassword, cfg.getRootPassword(),
 +              truststorePassword);
 +    } catch (Exception e) {
 +      throw new RuntimeException("error creating MAC keystore", e);
 +    }
 +
 +    siteConfig.put(Property.INSTANCE_RPC_SSL_ENABLED.getKey(), "true");
 +    siteConfig.put(Property.RPC_SSL_KEYSTORE_PATH.getKey(), localKeystoreFile.getAbsolutePath());
 +    siteConfig.put(Property.RPC_SSL_KEYSTORE_PASSWORD.getKey(), cfg.getRootPassword());
 +    siteConfig.put(Property.RPC_SSL_TRUSTSTORE_PATH.getKey(), publicTruststoreFile.getAbsolutePath());
 +    siteConfig.put(Property.RPC_SSL_TRUSTSTORE_PASSWORD.getKey(), truststorePassword);
 +    cfg.setSiteConfig(siteConfig);
 +  }
 +
 +  @Before
 +  public void setUp() throws Exception {
 +    createMiniAccumulo();
 +    Exception lastException = null;
 +    for (int i = 0; i < 3; i++) {
 +      try {
 +        cluster.start();
 +        return;
 +      } catch (ZooKeeperBindException e) {
 +        lastException = e;
 +        log.warn("Failed to start MiniAccumuloCluster, presumably due to ZooKeeper issues", lastException);
 +        Thread.sleep(3000);
 +        createMiniAccumulo();
 +      }
 +    }
 +    throw new RuntimeException("Failed to start MiniAccumuloCluster after three attempts", lastException);
 +  }
 +
 +  private void createMiniAccumulo() throws Exception {
 +    // createTestDir will give us an empty directory; we don't need to clean it up ourselves
 +    File baseDir = createTestDir(this.getClass().getName() + "_" + this.testName.getMethodName());
 +    MiniAccumuloConfigImpl cfg = new MiniAccumuloConfigImpl(baseDir, ROOT_PASSWORD);
 +    String nativePathInDevTree = NativeMapIT.nativeMapLocation().getAbsolutePath();
 +    String nativePathInMapReduce = new File(System.getProperty("user.dir")).toString();
 +    cfg.setNativeLibPaths(nativePathInDevTree, nativePathInMapReduce);
 +    cfg.setProperty(Property.GC_FILE_ARCHIVE, Boolean.TRUE.toString());
 +    Configuration coreSite = new Configuration(false);
 +    configure(cfg, coreSite);
 +    cfg.setProperty(Property.TSERV_NATIVEMAP_ENABLED, Boolean.TRUE.toString());
 +    configureForEnvironment(cfg, getClass(), getSslDir(baseDir));
 +    cluster = new MiniAccumuloClusterImpl(cfg);
 +    if (coreSite.size() > 0) {
 +      File csFile = new File(cluster.getConfig().getConfDir(), "core-site.xml");
 +      if (csFile.exists()) {
 +        coreSite.addResource(new Path(csFile.getAbsolutePath()));
 +      }
 +      File tmp = new File(csFile.getAbsolutePath() + ".tmp");
 +      OutputStream out = new BufferedOutputStream(new FileOutputStream(tmp));
 +      coreSite.writeXml(out);
 +      out.close();
 +      assertTrue(tmp.renameTo(csFile));
 +    }
 +    beforeClusterStart(cfg);
 +  }
 +
 +  @After
 +  public void tearDown() throws Exception {
 +    if (cluster != null)
 +      try {
 +        cluster.stop();
 +      } catch (Exception e) {
 +        // ignored
 +      }
 +  }
 +
 +  protected MiniAccumuloClusterImpl getCluster() {
 +    return cluster;
 +  }
 +
 +  protected Connector getConnector() throws AccumuloException, AccumuloSecurityException {
 +    return getCluster().getConnector("root", new PasswordToken(ROOT_PASSWORD));
 +  }
 +
 +  protected Process exec(Class<?> clazz, String... args) throws IOException {
 +    return getCluster().exec(clazz, args);
 +  }
 +
 +  protected String getMonitor() throws KeeperException, InterruptedException {
 +    Instance instance = new ZooKeeperInstance(getCluster().getClientConfig());
 +    return MonitorUtil.getLocation(instance);
 +  }
 +
 +  protected ClientConfiguration getClientConfig() throws Exception {
 +    return new ClientConfiguration(getCluster().getConfig().getClientConfFile());
 +  }
 +
 +}
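
The `setUp` above retries `cluster.start()` a fixed number of times on `ZooKeeperBindException`, sleeping and recreating the cluster between attempts, then rethrows the last failure. That retry-with-sleep shape as a generic, self-contained sketch (the `Retry` class and `withRetries` name are illustrative, not part of this commit):

```java
import java.util.concurrent.Callable;

public class Retry {
  // Mirrors the setUp() pattern: attempt the action a fixed number of
  // times, sleeping between attempts, and rethrow the last failure if
  // every attempt fails.
  static <T> T withRetries(int attempts, long sleepMillis, Callable<T> action) throws Exception {
    Exception last = null;
    for (int i = 0; i < attempts; i++) {
      try {
        return action.call();
      } catch (Exception e) {
        last = e;
        Thread.sleep(sleepMillis);
      }
    }
    throw new RuntimeException("Failed after " + attempts + " attempts", last);
  }
}
```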

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
index e636daa,0000000..1bdc71a
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
@@@ -1,656 -1,0 +1,659 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertNotNull;
 +import static org.junit.Assert.assertTrue;
 +import static org.junit.Assert.fail;
 +
 +import java.io.File;
 +import java.lang.reflect.UndeclaredThrowableException;
 +import java.security.PrivilegedExceptionAction;
 +import java.util.Arrays;
 +import java.util.Collections;
 +import java.util.HashSet;
 +import java.util.Iterator;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.concurrent.TimeUnit;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.BatchScanner;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.TableExistsException;
 +import org.apache.accumulo.core.client.TableNotFoundException;
 +import org.apache.accumulo.core.client.admin.CompactionConfig;
 +import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
 +import org.apache.accumulo.core.client.impl.AuthenticationTokenIdentifier;
 +import org.apache.accumulo.core.client.impl.DelegationTokenImpl;
 +import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Range;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.metadata.RootTable;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.security.ColumnVisibility;
 +import org.apache.accumulo.core.security.SystemPermission;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 +import org.apache.accumulo.harness.MiniClusterHarness;
 +import org.apache.accumulo.harness.TestingKdc;
 +import org.apache.accumulo.minicluster.ServerType;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.minikdc.MiniKdc;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.google.common.collect.Iterables;
 +import com.google.common.collect.Sets;
 +
 +/**
 + * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class KerberosIT extends AccumuloITBase {
 +  private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
 +
 +  private static TestingKdc kdc;
 +  private static String krbEnabledForITs = null;
 +  private static ClusterUser rootUser;
 +
 +  @BeforeClass
 +  public static void startKdc() throws Exception {
 +    kdc = new TestingKdc();
 +    kdc.start();
 +    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
 +    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
 +    }
 +    rootUser = kdc.getRootUser();
 +  }
 +
 +  @AfterClass
 +  public static void stopKdc() throws Exception {
 +    if (null != kdc) {
 +      kdc.stop();
 +    }
 +    if (null != krbEnabledForITs) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
 +    }
 +    UserGroupInformation.setConfiguration(new Configuration(false));
 +  }
 +
 +  @Override
 +  public int defaultTimeoutSeconds() {
 +    return 60 * 5;
 +  }
 +
 +  private MiniAccumuloClusterImpl mac;
 +
 +  @Before
 +  public void startMac() throws Exception {
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +    mac = harness.create(this, new PasswordToken("unused"), kdc, new MiniClusterConfigurationCallback() {
 +
 +      @Override
 +      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
 +        Map<String,String> site = cfg.getSiteConfig();
 +        site.put(Property.INSTANCE_ZK_TIMEOUT.getKey(), "15s");
 +        cfg.setSiteConfig(site);
 +      }
 +
 +    });
 +
 +    mac.getConfig().setNumTservers(1);
 +    mac.start();
 +    // Enabled kerberos auth
 +    Configuration conf = new Configuration(false);
 +    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +    UserGroupInformation.setConfiguration(conf);
 +  }
 +
 +  @After
 +  public void stopMac() throws Exception {
 +    if (null != mac) {
 +      mac.stop();
 +    }
 +  }
 +
 +  @Test
 +  public void testAdminUser() throws Exception {
 +    // Login as the client (provided to `accumulo init` as the "root" user)
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        final Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +
 +        // The "root" user should have all system permissions
 +        for (SystemPermission perm : SystemPermission.values()) {
 +          assertTrue("Expected user to have permission: " + perm, conn.securityOperations().hasSystemPermission(conn.whoami(), perm));
 +        }
 +
 +        // and the ability to modify the root and metadata tables
 +        for (String table : Arrays.asList(RootTable.NAME, MetadataTable.NAME)) {
 +          assertTrue(conn.securityOperations().hasTablePermission(conn.whoami(), table, TablePermission.ALTER_TABLE));
 +        }
 +        return null;
 +      }
 +    });
 +  }
 +
 +  @Test
 +  public void testNewUser() throws Exception {
 +    String newUser = testName.getMethodName();
 +    final File newUserKeytab = new File(kdc.getKeytabDir(), newUser + ".keytab");
 +    if (newUserKeytab.exists() && !newUserKeytab.delete()) {
 +      log.warn("Unable to delete {}", newUserKeytab);
 +    }
 +
 +    // Create a new user
 +    kdc.createPrincipal(newUserKeytab, newUser);
 +
 +    final String newQualifiedUser = kdc.qualifyUser(newUser);
 +    final HashSet<String> users = Sets.newHashSet(rootUser.getPrincipal());
 +
 +    // Login as the "root" user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        // Make sure the system user doesn't exist -- this will force some RPC to happen server-side
 +        createTableWithDataAndCompact(conn);
 +
 +        assertEquals(users, conn.securityOperations().listLocalUsers());
 +
 +        return null;
 +      }
 +    });
 +    // Switch to a new user
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(newQualifiedUser, newUserKeytab.getAbsolutePath());
 +    log.info("Logged in as {}", newQualifiedUser);
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(newQualifiedUser, new KerberosToken());
 +        log.info("Created connector as {}", newQualifiedUser);
 +        assertEquals(newQualifiedUser, conn.whoami());
 +
 +        // The new user should have no system permissions
 +        for (SystemPermission perm : SystemPermission.values()) {
 +          assertFalse(conn.securityOperations().hasSystemPermission(newQualifiedUser, perm));
 +        }
 +
 +        users.add(newQualifiedUser);
 +
 +        // Same users as before, plus the new user we just created
 +        assertEquals(users, conn.securityOperations().listLocalUsers());
 +        return null;
 +      }
 +
 +    });
 +  }
 +
 +  @Test
 +  public void testUserPrivilegesThroughGrant() throws Exception {
 +    String user1 = testName.getMethodName();
 +    final File user1Keytab = new File(kdc.getKeytabDir(), user1 + ".keytab");
 +    if (user1Keytab.exists() && !user1Keytab.delete()) {
 +      log.warn("Unable to delete {}", user1Keytab);
 +    }
 +
 +    // Create some new users
 +    kdc.createPrincipal(user1Keytab, user1);
 +
 +    final String qualifiedUser1 = kdc.qualifyUser(user1);
 +
 +    // Log in as user1
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user1, user1Keytab.getAbsolutePath());
 +    log.info("Logged in as {}", user1);
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        // Indirectly creates this user when we use it
 +        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 +        log.info("Created connector as {}", qualifiedUser1);
 +
 +        // The new user should have no system permissions
 +        for (SystemPermission perm : SystemPermission.values()) {
 +          assertFalse(conn.securityOperations().hasSystemPermission(qualifiedUser1, perm));
 +        }
 +
 +        return null;
 +      }
 +    });
 +
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        conn.securityOperations().grantSystemPermission(qualifiedUser1, SystemPermission.CREATE_TABLE);
 +        return null;
 +      }
 +    });
 +
 +    // Switch back to the original user
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user1, user1Keytab.getAbsolutePath());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 +
 +        // Shouldn't throw an exception since we granted the create table permission
 +        final String table = testName.getMethodName() + "_user_table";
 +        conn.tableOperations().create(table);
 +
 +        // Make sure we can actually use the table we made
 +        BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
 +        Mutation m = new Mutation("a");
 +        m.put("b", "c", "d");
 +        bw.addMutation(m);
 +        bw.close();
 +
 +        conn.tableOperations().compact(table, new CompactionConfig().setWait(true).setFlush(true));
 +        return null;
 +      }
 +    });
 +  }
 +
 +  @Test
 +  public void testUserPrivilegesForTable() throws Exception {
 +    String user1 = testName.getMethodName();
 +    final File user1Keytab = new File(kdc.getKeytabDir(), user1 + ".keytab");
 +    if (user1Keytab.exists() && !user1Keytab.delete()) {
 +      log.warn("Unable to delete {}", user1Keytab);
 +    }
 +
 +    // Create some new users -- cannot contain realm
 +    kdc.createPrincipal(user1Keytab, user1);
 +
 +    final String qualifiedUser1 = kdc.qualifyUser(user1);
 +
 +    // Log in as user1
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(qualifiedUser1, user1Keytab.getAbsolutePath());
 +    log.info("Logged in as {}", qualifiedUser1);
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        // Indirectly creates this user when we use it
 +        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 +        log.info("Created connector as {}", qualifiedUser1);
 +
 +        // The new user should have no system permissions
 +        for (SystemPermission perm : SystemPermission.values()) {
 +          assertFalse(conn.securityOperations().hasSystemPermission(qualifiedUser1, perm));
 +        }
 +        return null;
 +      }
 +
 +    });
 +
 +    final String table = testName.getMethodName() + "_user_table";
 +    final String viz = "viz";
 +
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        conn.tableOperations().create(table);
 +        // Give our unprivileged user permission on the table we made for them
 +        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.READ);
 +        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.WRITE);
 +        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.ALTER_TABLE);
 +        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.DROP_TABLE);
 +        conn.securityOperations().changeUserAuthorizations(qualifiedUser1, new Authorizations(viz));
 +        return null;
 +      }
 +    });
 +
 +    // Switch back to the original user
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(qualifiedUser1, user1Keytab.getAbsolutePath());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 +
 +        // Make sure we can actually use the table we made
 +
 +        // Write data
 +        final long ts = 1000L;
 +        BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
 +        Mutation m = new Mutation("a");
 +        m.put("b", "c", new ColumnVisibility(viz.getBytes()), ts, "d");
 +        bw.addMutation(m);
 +        bw.close();
 +
 +        // Compact
 +        conn.tableOperations().compact(table, new CompactionConfig().setWait(true).setFlush(true));
 +
 +        // Alter
 +        conn.tableOperations().setProperty(table, Property.TABLE_BLOOM_ENABLED.getKey(), "true");
 +
 +        // Read (and proper authorizations)
 +        Scanner s = conn.createScanner(table, new Authorizations(viz));
 +        Iterator<Entry<Key,Value>> iter = s.iterator();
 +        assertTrue("No results from iterator", iter.hasNext());
 +        Entry<Key,Value> entry = iter.next();
 +        assertEquals(new Key("a", "b", "c", viz, ts), entry.getKey());
 +        assertEquals(new Value("d".getBytes()), entry.getValue());
 +        assertFalse("Had more results from iterator", iter.hasNext());
 +        return null;
 +      }
 +    });
 +  }
 +
 +  @Test
 +  public void testDelegationToken() throws Exception {
 +    final String tableName = getUniqueNames(1)[0];
 +
 +    // Login as the "root" user
 +    UserGroupInformation root = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    final int numRows = 100, numColumns = 10;
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    final AuthenticationToken delegationToken = root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        conn.tableOperations().create(tableName);
 +        BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
 +        for (int r = 0; r < numRows; r++) {
 +          Mutation m = new Mutation(Integer.toString(r));
 +          for (int c = 0; c < numColumns; c++) {
 +            String col = Integer.toString(c);
 +            m.put(col, col, col);
 +          }
 +          bw.addMutation(m);
 +        }
 +        bw.close();
 +
 +        return conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +      }
 +    });
 +
 +    // The above login with keytab doesn't have a way to logout, so make a fake user that won't have krb credentials
 +    UserGroupInformation userWithoutPrivs = UserGroupInformation.createUserForTesting("fake_user", new String[0]);
 +    int recordsSeen = userWithoutPrivs.doAs(new PrivilegedExceptionAction<Integer>() {
 +      @Override
 +      public Integer run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), delegationToken);
 +
 +        BatchScanner bs = conn.createBatchScanner(tableName, Authorizations.EMPTY, 2);
 +        bs.setRanges(Collections.singleton(new Range()));
 +        int recordsSeen = Iterables.size(bs);
 +        bs.close();
 +        return recordsSeen;
 +      }
 +    });
 +
 +    assertEquals(numRows * numColumns, recordsSeen);
 +  }
 +
 +  @Test
 +  public void testDelegationTokenAsDifferentUser() throws Exception {
 +    // Login as the "root" user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    final AuthenticationToken delegationToken = ugi.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +        return conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +      }
 +    });
 +
 +    // make a fake user that won't have krb credentials
 +    UserGroupInformation userWithoutPrivs = UserGroupInformation.createUserForTesting("fake_user", new String[0]);
 +    try {
 +      // Use the delegation token to try to log in as a different user
 +      userWithoutPrivs.doAs(new PrivilegedExceptionAction<Void>() {
 +        @Override
 +        public Void run() throws Exception {
 +          mac.getConnector("some_other_user", delegationToken);
 +          return null;
 +        }
 +      });
 +      fail("Using a delegation token as a different user should throw an exception");
 +    } catch (UndeclaredThrowableException e) {
 +      Throwable cause = e.getCause();
 +      assertNotNull(cause);
 +      // We should get an AccumuloSecurityException from trying to use a delegation token for the wrong user
 +      assertTrue("Expected cause to be AccumuloSecurityException, but was " + cause.getClass(), cause instanceof AccumuloSecurityException);
 +    }
 +  }
 +
 +  @Test
 +  public void testGetDelegationTokenDenied() throws Exception {
 +    String newUser = testName.getMethodName();
 +    final File newUserKeytab = new File(kdc.getKeytabDir(), newUser + ".keytab");
 +    if (newUserKeytab.exists() && !newUserKeytab.delete()) {
 +      log.warn("Unable to delete {}", newUserKeytab);
 +    }
 +
 +    // Create a new user
 +    kdc.createPrincipal(newUserKeytab, newUser);
 +
 +    final String qualifiedNewUser = kdc.qualifyUser(newUser);
 +
 +    // Login as a normal user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(qualifiedNewUser, newUserKeytab.getAbsolutePath());
 +    try {
 +      ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +        @Override
 +        public Void run() throws Exception {
 +          // As the new, unprivileged user, request a delegation token (this should be denied)
 +          Connector conn = mac.getConnector(qualifiedNewUser, new KerberosToken());
 +          log.info("Created connector as {}", qualifiedNewUser);
 +          assertEquals(qualifiedNewUser, conn.whoami());
 +
 +          conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +          return null;
 +        }
 +      });
 +      fail("Expected getDelegationToken to be denied for an unprivileged user");
 +    } catch (UndeclaredThrowableException ex) {
 +      assertTrue(ex.getCause() instanceof AccumuloSecurityException);
 +    }
 +  }
 +
 +  @Test
 +  public void testRestartedMasterReusesSecretKey() throws Exception {
 +    // Login as the "root" user
 +    UserGroupInformation root = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    final AuthenticationToken delegationToken1 = root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        AuthenticationToken token = conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +
 +        assertTrue("Could not get tables with delegation token", mac.getConnector(rootUser.getPrincipal(), token).tableOperations().list().size() > 0);
 +
 +        return token;
 +      }
 +    });
 +
 +    log.info("Stopping master");
 +    mac.getClusterControl().stop(ServerType.MASTER);
 +    Thread.sleep(5000);
 +    log.info("Restarting master");
 +    mac.getClusterControl().start(ServerType.MASTER);
 +
 +    // Make sure our original token is still good
 +    root.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), delegationToken1);
 +
 +        assertTrue("Could not get tables with delegation token", conn.tableOperations().list().size() > 0);
 +
 +        return null;
 +      }
 +    });
 +
 +    // Get a new token, so we can compare the keyId on the second to the first
 +    final AuthenticationToken delegationToken2 = root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        AuthenticationToken token = conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +
 +        assertTrue("Could not get tables with delegation token", mac.getConnector(rootUser.getPrincipal(), token).tableOperations().list().size() > 0);
 +
 +        return token;
 +      }
 +    });
 +
 +    // A restarted master should reuse the same secret key after a restart if the secret key hasn't expired (1 day by default)
 +    DelegationTokenImpl dt1 = (DelegationTokenImpl) delegationToken1;
 +    DelegationTokenImpl dt2 = (DelegationTokenImpl) delegationToken2;
 +    assertEquals(dt1.getIdentifier().getKeyId(), dt2.getIdentifier().getKeyId());
 +  }
 +
 +  @Test(expected = AccumuloException.class)
 +  public void testDelegationTokenWithInvalidLifetime() throws Throwable {
 +    // Login as the "root" user
 +    UserGroupInformation root = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    try {
 +      root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +        @Override
 +        public AuthenticationToken run() throws Exception {
 +          Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +          log.info("Created connector as {}", rootUser.getPrincipal());
 +          assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +          // Should fail
 +          return conn.securityOperations().getDelegationToken(new DelegationTokenConfig().setTokenLifetime(Long.MAX_VALUE, TimeUnit.MILLISECONDS));
 +        }
 +      });
 +    } catch (UndeclaredThrowableException e) {
 +      Throwable cause = e.getCause();
 +      if (null != cause) {
 +        throw cause;
 +      } else {
 +        throw e;
 +      }
 +    }
 +  }
 +
 +  @Test
 +  public void testDelegationTokenWithReducedLifetime() throws Throwable {
 +    // Login as the "root" user
 +    UserGroupInformation root = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    final AuthenticationToken dt = root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        return conn.securityOperations().getDelegationToken(new DelegationTokenConfig().setTokenLifetime(5, TimeUnit.MINUTES));
 +      }
 +    });
 +
 +    AuthenticationTokenIdentifier identifier = ((DelegationTokenImpl) dt).getIdentifier();
 +    assertTrue("Expected identifier to expire in no more than 5 minutes: " + identifier,
 +        identifier.getExpirationDate() - identifier.getIssueDate() <= (5 * 60 * 1000));
 +  }
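The lifetime bound asserted above is plain millisecond arithmetic; a minimal JDK-only sketch, using illustrative timestamps rather than values from a real token identifier:

```java
import java.util.concurrent.TimeUnit;

class TokenLifetimeCheck {
  public static void main(String[] args) {
    // Hypothetical issue/expiration timestamps (ms since epoch); names are illustrative only.
    long issueDate = 0L;
    long expirationDate = TimeUnit.MINUTES.toMillis(5); // 300000 ms
    long requestedLifetimeMs = TimeUnit.MINUTES.toMillis(5);
    // The same bound the test asserts: granted lifetime never exceeds the requested one.
    boolean withinBound = (expirationDate - issueDate) <= requestedLifetimeMs;
    System.out.println(withinBound); // prints "true"
  }
}
```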
 +
 +  @Test(expected = AccumuloSecurityException.class)
 +  public void testRootUserHasIrrevocablePermissions() throws Exception {
 +    // Login as the client (provided to `accumulo init` as the "root" user)
 +    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +
 +    final Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +
 +    // The server-side implementation should prevent the revocation of the 'root' user's systems permissions
 +    // because once they're gone, it's possible that they could never be restored.
 +    conn.securityOperations().revokeSystemPermission(rootUser.getPrincipal(), SystemPermission.GRANT);
 +  }
 +
 +  /**
 +   * Creates a table, adds a record to it, and then compacts the table. A simple way to make sure that the system user exists (since the master does an RPC to
 +   * the tserver which will create the system user if it doesn't already exist).
 +   */
 +  private void createTableWithDataAndCompact(Connector conn) throws TableNotFoundException, AccumuloSecurityException, AccumuloException, TableExistsException {
 +    final String table = testName.getMethodName() + "_table";
 +    conn.tableOperations().create(table);
 +    BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
 +    Mutation m = new Mutation("a");
 +    m.put("b", "c", "d");
 +    bw.addMutation(m);
 +    bw.close();
 +    conn.tableOperations().compact(table, new CompactionConfig().setFlush(true).setWait(true));
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
index 2337f91,0000000..7264a42
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
@@@ -1,482 -1,0 +1,485 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.File;
 +import java.io.FileWriter;
 +import java.io.IOException;
 +import java.net.ConnectException;
 +import java.net.InetAddress;
 +import java.nio.ByteBuffer;
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Properties;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.rpc.UGIAssumingTransport;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 +import org.apache.accumulo.harness.MiniClusterHarness;
 +import org.apache.accumulo.harness.TestingKdc;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 +import org.apache.accumulo.proxy.Proxy;
 +import org.apache.accumulo.proxy.ProxyServer;
 +import org.apache.accumulo.proxy.thrift.AccumuloProxy;
 +import org.apache.accumulo.proxy.thrift.AccumuloProxy.Client;
 +import org.apache.accumulo.proxy.thrift.AccumuloSecurityException;
 +import org.apache.accumulo.proxy.thrift.ColumnUpdate;
 +import org.apache.accumulo.proxy.thrift.Key;
 +import org.apache.accumulo.proxy.thrift.KeyValue;
 +import org.apache.accumulo.proxy.thrift.ScanOptions;
 +import org.apache.accumulo.proxy.thrift.ScanResult;
 +import org.apache.accumulo.proxy.thrift.TimeType;
 +import org.apache.accumulo.proxy.thrift.WriterOptions;
 +import org.apache.accumulo.server.util.PortUtils;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.apache.thrift.protocol.TCompactProtocol;
 +import org.apache.thrift.transport.TSaslClientTransport;
 +import org.apache.thrift.transport.TSocket;
 +import org.apache.thrift.transport.TTransportException;
 +import org.hamcrest.Description;
 +import org.hamcrest.TypeSafeMatcher;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
 +import org.junit.Rule;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.junit.rules.ExpectedException;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * Tests impersonation of clients by the proxy over SASL
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class KerberosProxyIT extends AccumuloITBase {
 +  private static final Logger log = LoggerFactory.getLogger(KerberosProxyIT.class);
 +
 +  @Rule
 +  public ExpectedException thrown = ExpectedException.none();
 +
 +  private static TestingKdc kdc;
 +  private static String krbEnabledForITs = null;
 +  private static File proxyKeytab;
 +  private static String hostname, proxyPrimary, proxyPrincipal;
 +
 +  @Override
 +  protected int defaultTimeoutSeconds() {
 +    return 60 * 5;
 +  }
 +
 +  @BeforeClass
 +  public static void startKdc() throws Exception {
 +    kdc = new TestingKdc();
 +    kdc.start();
 +    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
 +    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
 +    }
 +
 +    // Create a principal+keytab for the proxy
 +    proxyKeytab = new File(kdc.getKeytabDir(), "proxy.keytab");
 +    hostname = InetAddress.getLocalHost().getCanonicalHostName();
 +    // Set the primary because the client needs to know it
 +    proxyPrimary = "proxy";
 +    // Qualify with an instance
 +    proxyPrincipal = proxyPrimary + "/" + hostname;
 +    kdc.createPrincipal(proxyKeytab, proxyPrincipal);
 +    // Tack on the realm too
 +    proxyPrincipal = kdc.qualifyUser(proxyPrincipal);
 +  }
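The principal assembled above follows the standard Kerberos server-principal form, primary/instance@REALM; a tiny sketch of the same string assembly, with placeholder host and realm values (a real TestingKdc supplies the realm via qualifyUser()):

```java
class PrincipalSketch {
  public static void main(String[] args) {
    // Kerberos server principals take the form primary/instance@REALM.
    String primary = "proxy";                      // the service name clients must know
    String hostname = "host.example.com";          // illustrative; startKdc() uses the canonical local hostname
    String realm = "EXAMPLE.COM";                  // illustrative; appended by qualifyUser() in the test
    String principal = primary + "/" + hostname;   // "proxy/host.example.com"
    String qualified = principal + "@" + realm;    // fully-qualified principal
    System.out.println(qualified); // prints "proxy/host.example.com@EXAMPLE.COM"
  }
}
```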
 +
 +  @AfterClass
 +  public static void stopKdc() throws Exception {
 +    if (null != kdc) {
 +      kdc.stop();
 +    }
 +    if (null != krbEnabledForITs) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
 +    }
 +    UserGroupInformation.setConfiguration(new Configuration(false));
 +  }
 +
 +  private MiniAccumuloClusterImpl mac;
 +  private Process proxyProcess;
 +  private int proxyPort;
 +
 +  @Before
 +  public void startMac() throws Exception {
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +    mac = harness.create(getClass().getName(), testName.getMethodName(), new PasswordToken("unused"), new MiniClusterConfigurationCallback() {
 +
 +      @Override
 +      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
 +        cfg.setNumTservers(1);
 +        Map<String,String> siteCfg = cfg.getSiteConfig();
 +        // Allow the proxy to impersonate the client user, but no one else
 +        siteCfg.put(Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION.getKey(), proxyPrincipal + ":" + kdc.getRootUser().getPrincipal());
 +        siteCfg.put(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION.getKey(), "*");
 +        cfg.setSiteConfig(siteCfg);
 +      }
 +
 +    }, kdc);
 +
 +    mac.start();
 +    MiniAccumuloConfigImpl cfg = mac.getConfig();
 +
 +    // Generate Proxy configuration and start the proxy
 +    proxyProcess = startProxy(cfg);
 +
 +    // Enable Kerberos auth
 +    Configuration conf = new Configuration(false);
 +    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +    UserGroupInformation.setConfiguration(conf);
 +
 +    boolean success = false;
 +    ClusterUser rootUser = kdc.getRootUser();
 +    // Rely on the junit timeout rule
 +    while (!success) {
 +      UserGroupInformation ugi;
 +      try {
 +        ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +      } catch (IOException ex) {
 +        log.info("Login as root is failing", ex);
 +        Thread.sleep(3000);
 +        continue;
 +      }
 +
 +      TSocket socket = new TSocket(hostname, proxyPort);
 +      log.info("Connecting to proxy with server primary '" + proxyPrimary + "' running on " + hostname);
 +      TSaslClientTransport transport = new TSaslClientTransport("GSSAPI", null, proxyPrimary, hostname, Collections.singletonMap("javax.security.sasl.qop",
 +          "auth"), null, socket);
 +
 +      final UGIAssumingTransport ugiTransport = new UGIAssumingTransport(transport, ugi);
 +
 +      try {
 +        // UGI transport will perform the doAs for us
 +        ugiTransport.open();
 +        success = true;
 +      } catch (TTransportException e) {
 +        Throwable cause = e.getCause();
 +        if (cause instanceof ConnectException) {
 +          log.info("Proxy not yet up, waiting");
 +          Thread.sleep(3000);
 +          proxyProcess = checkProxyAndRestart(proxyProcess, cfg);
 +          continue;
 +        }
 +      } finally {
 +        if (null != ugiTransport) {
 +          ugiTransport.close();
 +        }
 +      }
 +    }
 +
 +    assertTrue("Failed to connect to the proxy repeatedly", success);
 +  }
 +
 +  /**
 +   * Starts the thrift proxy using the given MAConfig.
 +   *
 +   * @param cfg
 +   *          configuration for MAC
 +   * @return Process for the thrift proxy
 +   */
 +  private Process startProxy(MiniAccumuloConfigImpl cfg) throws IOException {
 +    File proxyPropertiesFile = generateNewProxyConfiguration(cfg);
 +    return mac.exec(Proxy.class, "-p", proxyPropertiesFile.getCanonicalPath());
 +  }
 +
 +  /**
 +   * Generates a proxy configuration file for the MAC instance. Implicitly updates {@link #proxyPort} when choosing the port the proxy will listen on.
 +   *
 +   * @param cfg
 +   *          The MAC configuration
 +   * @return The proxy's configuration file
 +   */
 +  private File generateNewProxyConfiguration(MiniAccumuloConfigImpl cfg) throws IOException {
 +    // Chooses a new port for the proxy as side-effect
 +    proxyPort = PortUtils.getRandomFreePort();
 +
 +    // Proxy configuration
 +    File proxyPropertiesFile = new File(cfg.getConfDir(), "proxy.properties");
 +    if (proxyPropertiesFile.exists()) {
 +      assertTrue("Failed to delete proxy.properties file", proxyPropertiesFile.delete());
 +    }
 +    Properties proxyProperties = new Properties();
 +    proxyProperties.setProperty("useMockInstance", "false");
 +    proxyProperties.setProperty("useMiniAccumulo", "false");
 +    proxyProperties.setProperty("protocolFactory", TCompactProtocol.Factory.class.getName());
 +    proxyProperties.setProperty("tokenClass", KerberosToken.class.getName());
 +    proxyProperties.setProperty("port", Integer.toString(proxyPort));
 +    proxyProperties.setProperty("maxFrameSize", "16M");
 +    proxyProperties.setProperty("instance", mac.getInstanceName());
 +    proxyProperties.setProperty("zookeepers", mac.getZooKeepers());
 +    proxyProperties.setProperty("thriftServerType", "sasl");
 +    proxyProperties.setProperty("kerberosPrincipal", proxyPrincipal);
 +    proxyProperties.setProperty("kerberosKeytab", proxyKeytab.getCanonicalPath());
 +
 +    // Write out the proxy.properties file
 +    FileWriter writer = new FileWriter(proxyPropertiesFile);
 +    proxyProperties.store(writer, "Configuration for Accumulo proxy");
 +    writer.close();
 +
 +    log.info("Created configuration for proxy listening on {}", proxyPort);
 +
 +    return proxyPropertiesFile;
 +  }
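The proxy configuration is an ordinary java.util.Properties file; a trimmed-down sketch of the write/read round trip (the keys and values here are illustrative placeholders, not a complete proxy configuration):

```java
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.Properties;

class ProxyPropsSketch {
  public static void main(String[] args) throws IOException {
    // Write a couple of representative entries, as generateNewProxyConfiguration() does.
    Properties props = new Properties();
    props.setProperty("port", "42424");            // illustrative port
    props.setProperty("thriftServerType", "sasl"); // same key the test sets for Kerberos
    File f = File.createTempFile("proxy", ".properties");
    try (FileWriter writer = new FileWriter(f)) {
      props.store(writer, "Configuration for Accumulo proxy");
    }
    // Read it back to show the round trip is lossless.
    Properties loaded = new Properties();
    try (FileReader reader = new FileReader(f)) {
      loaded.load(reader);
    }
    System.out.println(loaded.getProperty("thriftServerType")); // prints "sasl"
    f.delete();
  }
}
```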
 +
 +  /**
 +   * Restarts the thrift proxy if the previous instance is no longer running. If the proxy is still running, this method does nothing.
 +   *
 +   * @param proxy
 +   *          The thrift proxy process
 +   * @param cfg
 +   *          The MAC configuration
 +   * @return The process for the Proxy, either the previous instance or a new instance.
 +   */
 +  private Process checkProxyAndRestart(Process proxy, MiniAccumuloConfigImpl cfg) throws IOException {
 +    try {
 +      // Get the return code
 +      proxy.exitValue();
 +    } catch (IllegalThreadStateException e) {
 +      log.info("Proxy is still running");
 +      // OK, process is still running, don't restart
 +      return proxy;
 +    }
 +
 +    log.info("Restarting proxy because it is no longer alive");
 +
 +    // We got a return code which means the proxy exited. We'll assume this is because it failed
 +    // to bind the port due to the known race condition between choosing a port and having the
 +    // proxy bind it.
 +    return startProxy(cfg);
 +  }
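The race described above exists because probing for a free port and binding it happen in two separate steps; a JDK-only sketch of the probe half (the ServerSocket(0) pattern is analogous to, not the actual implementation of, PortUtils.getRandomFreePort()):

```java
import java.io.IOException;
import java.net.ServerSocket;

class FreePortRace {
  // Bind port 0 to let the OS pick a free port, read it, then release it.
  static int randomFreePort() throws IOException {
    try (ServerSocket ss = new ServerSocket(0)) {
      return ss.getLocalPort();
    }
  }

  public static void main(String[] args) throws IOException {
    int port = randomFreePort();
    // The port was free when probed, but between returning here and a server binding
    // it, another process can take it -- the race checkProxyAndRestart() compensates
    // for by restarting the proxy when it exits early.
    System.out.println(port > 0); // prints "true"
  }
}
```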
 +
 +  @After
 +  public void stopMac() throws Exception {
 +    if (null != proxyProcess) {
 +      log.info("Destroying proxy process");
 +      proxyProcess.destroy();
 +      log.info("Waiting for proxy termination");
 +      proxyProcess.waitFor();
 +      log.info("Proxy terminated");
 +    }
 +    if (null != mac) {
 +      mac.stop();
 +    }
 +  }
 +
 +  @Test
 +  public void testProxyClient() throws Exception {
 +    ClusterUser rootUser = kdc.getRootUser();
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +
 +    TSocket socket = new TSocket(hostname, proxyPort);
 +    log.info("Connecting to proxy with server primary '" + proxyPrimary + "' running on " + hostname);
 +    TSaslClientTransport transport = new TSaslClientTransport("GSSAPI", null, proxyPrimary, hostname, Collections.singletonMap("javax.security.sasl.qop",
 +        "auth"), null, socket);
 +
 +    final UGIAssumingTransport ugiTransport = new UGIAssumingTransport(transport, ugi);
 +
 +    // UGI transport will perform the doAs for us
 +    ugiTransport.open();
 +
 +    AccumuloProxy.Client.Factory factory = new AccumuloProxy.Client.Factory();
 +    Client client = factory.getClient(new TCompactProtocol(ugiTransport), new TCompactProtocol(ugiTransport));
 +
 +    // Will fail if the proxy can impersonate the client
 +    ByteBuffer login = client.login(rootUser.getPrincipal(), Collections.<String,String> emptyMap());
 +
 +    // For all of the below actions, the proxy user doesn't have permission to do any of them, but the client user does.
 +    // The fact that any of them actually run tells us that impersonation is working.
 +
 +    // Create a table
 +    String table = "table";
 +    if (!client.tableExists(login, table)) {
 +      client.createTable(login, table, true, TimeType.MILLIS);
 +    }
 +
 +    // Write two records to the table
 +    String writer = client.createWriter(login, table, new WriterOptions());
 +    Map<ByteBuffer,List<ColumnUpdate>> updates = new HashMap<>();
 +    ColumnUpdate update = new ColumnUpdate(ByteBuffer.wrap("cf1".getBytes(UTF_8)), ByteBuffer.wrap("cq1".getBytes(UTF_8)));
 +    update.setValue(ByteBuffer.wrap("value1".getBytes(UTF_8)));
 +    updates.put(ByteBuffer.wrap("row1".getBytes(UTF_8)), Collections.<ColumnUpdate> singletonList(update));
 +    update = new ColumnUpdate(ByteBuffer.wrap("cf2".getBytes(UTF_8)), ByteBuffer.wrap("cq2".getBytes(UTF_8)));
 +    update.setValue(ByteBuffer.wrap("value2".getBytes(UTF_8)));
 +    updates.put(ByteBuffer.wrap("row2".getBytes(UTF_8)), Collections.<ColumnUpdate> singletonList(update));
 +    client.update(writer, updates);
 +
 +    // Flush and close the writer
 +    client.flush(writer);
 +    client.closeWriter(writer);
 +
 +    // Open a scanner to the table
 +    String scanner = client.createScanner(login, table, new ScanOptions());
 +    ScanResult results = client.nextK(scanner, 10);
 +    assertEquals(2, results.getResults().size());
 +
 +    // Check the first key-value
 +    KeyValue kv = results.getResults().get(0);
 +    Key k = kv.key;
 +    ByteBuffer v = kv.value;
 +    assertEquals(ByteBuffer.wrap("row1".getBytes(UTF_8)), k.row);
 +    assertEquals(ByteBuffer.wrap("cf1".getBytes(UTF_8)), k.colFamily);
 +    assertEquals(ByteBuffer.wrap("cq1".getBytes(UTF_8)), k.colQualifier);
 +    assertEquals(ByteBuffer.wrap(new byte[0]), k.colVisibility);
 +    assertEquals(ByteBuffer.wrap("value1".getBytes(UTF_8)), v);
 +
 +    // And then the second
 +    kv = results.getResults().get(1);
 +    k = kv.key;
 +    v = kv.value;
 +    assertEquals(ByteBuffer.wrap("row2".getBytes(UTF_8)), k.row);
 +    assertEquals(ByteBuffer.wrap("cf2".getBytes(UTF_8)), k.colFamily);
 +    assertEquals(ByteBuffer.wrap("cq2".getBytes(UTF_8)), k.colQualifier);
 +    assertEquals(ByteBuffer.wrap(new byte[0]), k.colVisibility);
 +    assertEquals(ByteBuffer.wrap("value2".getBytes(UTF_8)), v);
 +
 +    // Close the scanner
 +    client.closeScanner(scanner);
 +
 +    ugiTransport.close();
 +  }
 +
 +  @Test
 +  public void testDisallowedClientForImpersonation() throws Exception {
 +    String user = testName.getMethodName();
 +    File keytab = new File(kdc.getKeytabDir(), user + ".keytab");
 +    kdc.createPrincipal(keytab, user);
 +
 +    // Login as the new user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user, keytab.getAbsolutePath());
 +
 +    log.info("Logged in as " + ugi);
 +
 +    // Expect an AccumuloSecurityException
 +    thrown.expect(AccumuloSecurityException.class);
 +    // Error msg would look like:
 +    //
 +    // org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_CREDENTIALS for user Principal in credentials object should match kerberos
 +    // principal.
 +    // Expected 'proxy/hw10447.local@EXAMPLE.COM' but was 'testDisallowedClientForImpersonation@EXAMPLE.COM' - Username or Password is Invalid)
 +    thrown.expect(new ThriftExceptionMatchesPattern(".*Error BAD_CREDENTIALS.*"));
 +    thrown.expect(new ThriftExceptionMatchesPattern(".*Expected '" + proxyPrincipal + "' but was '" + kdc.qualifyUser(user) + "'.*"));
 +
 +    TSocket socket = new TSocket(hostname, proxyPort);
 +    log.info("Connecting to proxy with server primary '" + proxyPrimary + "' running on " + hostname);
 +
 +    // Should fail to open the transport
 +    TSaslClientTransport transport = new TSaslClientTransport("GSSAPI", null, proxyPrimary, hostname, Collections.singletonMap("javax.security.sasl.qop",
 +        "auth"), null, socket);
 +
 +    final UGIAssumingTransport ugiTransport = new UGIAssumingTransport(transport, ugi);
 +
 +    // UGI transport will perform the doAs for us
 +    ugiTransport.open();
 +
 +    AccumuloProxy.Client.Factory factory = new AccumuloProxy.Client.Factory();
 +    Client client = factory.getClient(new TCompactProtocol(ugiTransport), new TCompactProtocol(ugiTransport));
 +
 +    // Will fail because the proxy can't impersonate this user (per the site configuration)
 +    try {
 +      client.login(kdc.qualifyUser(user), Collections.<String,String> emptyMap());
 +    } finally {
 +      if (null != ugiTransport) {
 +        ugiTransport.close();
 +      }
 +    }
 +  }
 +
 +  @Test
 +  public void testMismatchPrincipals() throws Exception {
 +    ClusterUser rootUser = kdc.getRootUser();
 +    // Should get an AccumuloSecurityException and the given message
 +    thrown.expect(AccumuloSecurityException.class);
 +    thrown.expect(new ThriftExceptionMatchesPattern(ProxyServer.RPC_ACCUMULO_PRINCIPAL_MISMATCH_MSG));
 +
 +    // Make a new user
 +    String user = testName.getMethodName();
 +    File keytab = new File(kdc.getKeytabDir(), user + ".keytab");
 +    kdc.createPrincipal(keytab, user);
 +
 +    // Login as the new user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user, keytab.getAbsolutePath());
 +
 +    log.info("Logged in as " + ugi);
 +
 +    TSocket socket = new TSocket(hostname, proxyPort);
 +    log.info("Connecting to proxy with server primary '" + proxyPrimary + "' running on " + hostname);
 +
 +    // Should fail to open the transport
 +    TSaslClientTransport transport = new TSaslClientTransport("GSSAPI", null, proxyPrimary, hostname, Collections.singletonMap("javax.security.sasl.qop",
 +        "auth"), null, socket);
 +
 +    final UGIAssumingTransport ugiTransport = new UGIAssumingTransport(transport, ugi);
 +
 +    // UGI transport will perform the doAs for us
 +    ugiTransport.open();
 +
 +    AccumuloProxy.Client.Factory factory = new AccumuloProxy.Client.Factory();
 +    Client client = factory.getClient(new TCompactProtocol(ugiTransport), new TCompactProtocol(ugiTransport));
 +
 +    // The proxy needs to recognize that the requested principal isn't the same as the SASL principal and fail
 +    // Accumulo should let this through -- we need to rely on the proxy to reject the request before talking to Accumulo
 +    try {
 +      client.login(rootUser.getPrincipal(), Collections.<String,String> emptyMap());
 +    } finally {
 +      if (null != ugiTransport) {
 +        ugiTransport.close();
 +      }
 +    }
 +  }
 +
 +  private static class ThriftExceptionMatchesPattern extends TypeSafeMatcher<AccumuloSecurityException> {
 +    private String pattern;
 +
 +    public ThriftExceptionMatchesPattern(String pattern) {
 +      this.pattern = pattern;
 +    }
 +
 +    @Override
 +    protected boolean matchesSafely(AccumuloSecurityException item) {
 +      return item.isSetMsg() && item.msg.matches(pattern);
 +    }
 +
 +    @Override
 +    public void describeTo(Description description) {
 +      description.appendText("matches pattern ").appendValue(pattern);
 +    }
 +
 +    @Override
 +    protected void describeMismatchSafely(AccumuloSecurityException item, Description mismatchDescription) {
 +      mismatchDescription.appendText("does not match");
 +    }
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
index 142a8bb,0000000..0e60501
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
@@@ -1,188 -1,0 +1,191 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertEquals;
 +
 +import java.util.Map;
 +import java.util.Map.Entry;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.TableExistsException;
 +import org.apache.accumulo.core.client.TableNotFoundException;
 +import org.apache.accumulo.core.client.admin.CompactionConfig;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.PartialKey;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 +import org.apache.accumulo.harness.MiniClusterHarness;
 +import org.apache.accumulo.harness.TestingKdc;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.minikdc.MiniKdc;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.google.common.collect.Iterables;
 +
 +/**
 + * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class KerberosRenewalIT extends AccumuloITBase {
 +  private static final Logger log = LoggerFactory.getLogger(KerberosRenewalIT.class);
 +
 +  private static TestingKdc kdc;
 +  private static String krbEnabledForITs = null;
 +  private static ClusterUser rootUser;
 +
 +  private static final long TICKET_LIFETIME = 6 * 60 * 1000; // Anything less seems to fail when generating the ticket
 +  private static final long TICKET_TEST_LIFETIME = 8 * 60 * 1000; // Run a test for 8 mins
 +  private static final long TEST_DURATION = 9 * 60 * 1000; // The test should finish within 9 mins
 +
 +  @BeforeClass
 +  public static void startKdc() throws Exception {
 +    // Configure the KDC with a short ticket lifetime (TICKET_LIFETIME) so that renewals are exercised during the test
 +    kdc = new TestingKdc(TestingKdc.computeKdcDir(), TestingKdc.computeKeytabDir(), TICKET_LIFETIME);
 +    kdc.start();
 +    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
 +    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
 +    }
 +    rootUser = kdc.getRootUser();
 +  }
 +
 +  @AfterClass
 +  public static void stopKdc() throws Exception {
 +    if (null != kdc) {
 +      kdc.stop();
 +    }
 +    if (null != krbEnabledForITs) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
 +    }
 +  }
 +
 +  @Override
 +  public int defaultTimeoutSeconds() {
 +    return (int) TEST_DURATION / 1000;
 +  }
 +
 +  private MiniAccumuloClusterImpl mac;
 +
 +  @Before
 +  public void startMac() throws Exception {
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +    mac = harness.create(this, new PasswordToken("unused"), kdc, new MiniClusterConfigurationCallback() {
 +
 +      @Override
 +      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
 +        Map<String,String> site = cfg.getSiteConfig();
 +        site.put(Property.INSTANCE_ZK_TIMEOUT.getKey(), "15s");
 +        // Reduce the period just to make sure we trigger renewal fast
 +        site.put(Property.GENERAL_KERBEROS_RENEWAL_PERIOD.getKey(), "5s");
 +        cfg.setSiteConfig(site);
 +      }
 +
 +    });
 +
 +    mac.getConfig().setNumTservers(1);
 +    mac.start();
 +    // Enable Kerberos auth
 +    Configuration conf = new Configuration(false);
 +    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +    UserGroupInformation.setConfiguration(conf);
 +  }
 +
 +  @After
 +  public void stopMac() throws Exception {
 +    if (null != mac) {
 +      mac.stop();
 +    }
 +  }
 +
 +  // Intentionally setting the Test annotation timeout. We do not want to scale the timeout.
 +  @Test(timeout = TEST_DURATION)
 +  public void testReadAndWriteThroughTicketLifetime() throws Exception {
 +    // Attempt to use Accumulo for a duration of time that exceeds the Kerberos ticket lifetime.
 +    // This is a functional test to verify that Accumulo services renew their ticket.
 +    // If the test doesn't finish on its own, this signifies that Accumulo services failed
 +    // and the test should fail. If Accumulo services renew their ticket, the test case
 +    // should exit gracefully on its own.
 +
 +    // Login as the "root" user
 +    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +    log.info("Created connector as {}", rootUser.getPrincipal());
 +    assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +    long duration = 0;
 +    long last = System.currentTimeMillis();
 +    // Make sure a couple of renewals happen
 +    while (duration < TICKET_TEST_LIFETIME) {
 +      // Create a table, write a record, compact, read the record, drop the table.
 +      createReadWriteDrop(conn);
 +      // Wait a bit between iterations
 +      Thread.sleep(5000);
 +
 +      // Update the duration
 +      long now = System.currentTimeMillis();
 +      duration += now - last;
 +      last = now;
 +    }
 +  }
 +
 +  /**
 +   * Creates a table, writes a record to it, compacts the table, reads the record back, and drops the table. Also a simple way to make sure that the system
 +   * user exists (since the master does an RPC to the tserver which will create the system user if it doesn't already exist).
 +   */
 +  private void createReadWriteDrop(Connector conn) throws TableNotFoundException, AccumuloSecurityException, AccumuloException, TableExistsException {
 +    final String table = testName.getMethodName() + "_table";
 +    conn.tableOperations().create(table);
 +    BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
 +    Mutation m = new Mutation("a");
 +    m.put("b", "c", "d");
 +    bw.addMutation(m);
 +    bw.close();
 +    conn.tableOperations().compact(table, new CompactionConfig().setFlush(true).setWait(true));
 +    Scanner s = conn.createScanner(table, Authorizations.EMPTY);
 +    Entry<Key,Value> entry = Iterables.getOnlyElement(s);
 +    assertEquals("Did not find the expected key", 0, new Key("a", "b", "c").compareTo(entry.getKey(), PartialKey.ROW_COLFAM_COLQUAL));
 +    assertEquals("d", entry.getValue().toString());
 +    conn.tableOperations().delete(table);
 +  }
 +}


[07/10] accumulo git commit: Merge branch '1.7' into 1.8

Posted by el...@apache.org.
http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
index 2fc256b,0000000..c7fc709
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
@@@ -1,707 -1,0 +1,710 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.IOException;
 +import java.util.Arrays;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.Iterator;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.ClientConfiguration;
 +import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.MutationsRejectedException;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.TableExistsException;
 +import org.apache.accumulo.core.client.TableNotFoundException;
 +import org.apache.accumulo.core.client.security.SecurityErrorCode;
 +import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.security.SystemPermission;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.io.Text;
 +import org.junit.Assume;
 +import org.junit.Before;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +// This test verifies the default permissions so a clean instance must be used. A shared instance might
 +// not be representative of a fresh installation.
++@Category(MiniClusterOnlyTest.class)
 +public class PermissionsIT extends AccumuloClusterHarness {
 +  private static final Logger log = LoggerFactory.getLogger(PermissionsIT.class);
 +
 +  @Override
 +  public int defaultTimeoutSeconds() {
 +    return 60;
 +  }
 +
 +  @Before
 +  public void limitToMini() throws Exception {
 +    Assume.assumeTrue(ClusterType.MINI == getClusterType());
 +    Connector c = getConnector();
 +    Set<String> users = c.securityOperations().listLocalUsers();
 +    ClusterUser user = getUser(0);
 +    if (users.contains(user.getPrincipal())) {
 +      c.securityOperations().dropLocalUser(user.getPrincipal());
 +    }
 +  }
 +
 +  private void loginAs(ClusterUser user) throws IOException {
 +    // Force a re-login as the provided user
 +    user.getToken();
 +  }
 +
 +  @Test
 +  public void systemPermissionsTest() throws Exception {
 +    ClusterUser testUser = getUser(0), rootUser = getAdminUser();
 +
 +    // verify that the test is being run by root
 +    Connector c = getConnector();
 +    verifyHasOnlyTheseSystemPermissions(c, c.whoami(), SystemPermission.values());
 +
 +    // create the test user
 +    String principal = testUser.getPrincipal();
 +    AuthenticationToken token = testUser.getToken();
 +    PasswordToken passwordToken = null;
 +    if (token instanceof PasswordToken) {
 +      passwordToken = (PasswordToken) token;
 +    }
 +    loginAs(rootUser);
 +    c.securityOperations().createLocalUser(principal, passwordToken);
 +    loginAs(testUser);
 +    Connector test_user_conn = c.getInstance().getConnector(principal, token);
 +    loginAs(rootUser);
 +    verifyHasNoSystemPermissions(c, principal, SystemPermission.values());
 +
 +    // test each permission
 +    for (SystemPermission perm : SystemPermission.values()) {
 +      log.debug("Verifying the " + perm + " permission");
 +
 +      // test permission before and after granting it
 +      String tableNamePrefix = getUniqueNames(1)[0];
 +      testMissingSystemPermission(tableNamePrefix, c, rootUser, test_user_conn, testUser, perm);
 +      loginAs(rootUser);
 +      c.securityOperations().grantSystemPermission(principal, perm);
 +      verifyHasOnlyTheseSystemPermissions(c, principal, perm);
 +      testGrantedSystemPermission(tableNamePrefix, c, rootUser, test_user_conn, testUser, perm);
 +      loginAs(rootUser);
 +      c.securityOperations().revokeSystemPermission(principal, perm);
 +      verifyHasNoSystemPermissions(c, principal, perm);
 +    }
 +  }
 +
 +  static Map<String,String> map(Iterable<Entry<String,String>> i) {
 +    Map<String,String> result = new HashMap<>();
 +    for (Entry<String,String> e : i) {
 +      result.put(e.getKey(), e.getValue());
 +    }
 +    return result;
 +  }
 +
 +  private void testMissingSystemPermission(String tableNamePrefix, Connector root_conn, ClusterUser rootUser, Connector test_user_conn, ClusterUser testUser,
 +      SystemPermission perm) throws Exception {
 +    String tableName, user, password = "password", namespace;
 +    boolean passwordBased = testUser.getPassword() != null;
 +    log.debug("Confirming that the lack of the " + perm + " permission properly restricts the user");
 +
 +    // test permission prior to granting it
 +    switch (perm) {
 +      case CREATE_TABLE:
 +        tableName = tableNamePrefix + "__CREATE_TABLE_WITHOUT_PERM_TEST__";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().create(tableName);
 +          throw new IllegalStateException("Should NOT be able to create a table");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || root_conn.tableOperations().list().contains(tableName))
 +            throw e;
 +        }
 +        break;
 +      case DROP_TABLE:
 +        tableName = tableNamePrefix + "__DROP_TABLE_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.tableOperations().create(tableName);
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().delete(tableName);
 +          throw new IllegalStateException("Should NOT be able to delete a table");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.tableOperations().list().contains(tableName))
 +            throw e;
 +        }
 +        break;
 +      case ALTER_TABLE:
 +        tableName = tableNamePrefix + "__ALTER_TABLE_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.tableOperations().create(tableName);
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().setProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +          throw new IllegalStateException("Should NOT be able to set a table property");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || map(root_conn.tableOperations().getProperties(tableName)).get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +            throw e;
 +        }
 +        loginAs(rootUser);
 +        root_conn.tableOperations().setProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().removeProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey());
 +          throw new IllegalStateException("Should NOT be able to remove a table property");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || !map(root_conn.tableOperations().getProperties(tableName)).get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +            throw e;
 +        }
 +        String table2 = tableName + "2";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().rename(tableName, table2);
 +          throw new IllegalStateException("Should NOT be able to rename a table");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.tableOperations().list().contains(tableName)
 +              || root_conn.tableOperations().list().contains(table2))
 +            throw e;
 +        }
 +        break;
 +      case CREATE_USER:
 +        user = "__CREATE_USER_WITHOUT_PERM_TEST__";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +          throw new IllegalStateException("Should NOT be able to create a user");
 +        } catch (AccumuloSecurityException e) {
 +          AuthenticationToken userToken = testUser.getToken();
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || (userToken instanceof PasswordToken && root_conn.securityOperations().authenticateUser(user, userToken)))
 +            throw e;
 +        }
 +        break;
 +      case DROP_USER:
 +        user = "__DROP_USER_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.securityOperations().dropLocalUser(user);
 +          throw new IllegalStateException("Should NOT be able to delete a user");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.securityOperations().listLocalUsers().contains(user)) {
 +            log.info("Failed to authenticate as " + user);
 +            throw e;
 +          }
 +        }
 +        break;
 +      case ALTER_USER:
 +        user = "__ALTER_USER_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.securityOperations().changeUserAuthorizations(user, new Authorizations("A", "B"));
 +          throw new IllegalStateException("Should NOT be able to alter a user");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.securityOperations().getUserAuthorizations(user).isEmpty())
 +            throw e;
 +        }
 +        break;
 +      case SYSTEM:
 +        // test for system permission would go here
 +        break;
 +      case CREATE_NAMESPACE:
 +        namespace = "__CREATE_NAMESPACE_WITHOUT_PERM_TEST__";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().create(namespace);
 +          throw new IllegalStateException("Should NOT be able to create a namespace");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || root_conn.namespaceOperations().list().contains(namespace))
 +            throw e;
 +        }
 +        break;
 +      case DROP_NAMESPACE:
 +        namespace = "__DROP_NAMESPACE_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().create(namespace);
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().delete(namespace);
 +          throw new IllegalStateException("Should NOT be able to delete a namespace");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.namespaceOperations().list().contains(namespace))
 +            throw e;
 +        }
 +        break;
 +      case ALTER_NAMESPACE:
 +        namespace = "__ALTER_NAMESPACE_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().create(namespace);
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().setProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +          throw new IllegalStateException("Should NOT be able to set a namespace property");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || map(root_conn.namespaceOperations().getProperties(namespace)).get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +            throw e;
 +        }
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().setProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().removeProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey());
 +          throw new IllegalStateException("Should NOT be able to remove a namespace property");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || !map(root_conn.namespaceOperations().getProperties(namespace)).get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +            throw e;
 +        }
 +        String namespace2 = namespace + "2";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().rename(namespace, namespace2);
 +          throw new IllegalStateException("Should NOT be able to rename a namespace");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.namespaceOperations().list().contains(namespace)
 +              || root_conn.namespaceOperations().list().contains(namespace2))
 +            throw e;
 +        }
 +        break;
 +      case OBTAIN_DELEGATION_TOKEN:
 +        ClientConfiguration clientConf = cluster.getClientConfig();
 +        if (clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
 +          // TODO Try to obtain a delegation token without the permission
 +        }
 +        break;
 +      case GRANT:
 +        loginAs(testUser);
 +        try {
 +          test_user_conn.securityOperations().grantSystemPermission(testUser.getPrincipal(), SystemPermission.GRANT);
 +          throw new IllegalStateException("Should NOT be able to grant System.GRANT to yourself");
 +        } catch (AccumuloSecurityException e) {
 +          // Expected
 +          loginAs(rootUser);
 +          assertFalse(root_conn.securityOperations().hasSystemPermission(testUser.getPrincipal(), SystemPermission.GRANT));
 +        }
 +        break;
 +      default:
 +        throw new IllegalArgumentException("Unrecognized System Permission: " + perm);
 +    }
 +  }
 +
 +  private void testGrantedSystemPermission(String tableNamePrefix, Connector root_conn, ClusterUser rootUser, Connector test_user_conn, ClusterUser testUser,
 +      SystemPermission perm) throws Exception {
 +    String tableName, user, password = "password", namespace;
 +    boolean passwordBased = testUser.getPassword() != null;
 +    log.debug("Confirming that the presence of the " + perm + " permission properly permits the user");
 +
 +    // test permission after granting it
 +    switch (perm) {
 +      case CREATE_TABLE:
 +        tableName = tableNamePrefix + "__CREATE_TABLE_WITH_PERM_TEST__";
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().create(tableName);
 +        loginAs(rootUser);
 +        if (!root_conn.tableOperations().list().contains(tableName))
 +          throw new IllegalStateException("Should be able to create a table");
 +        break;
 +      case DROP_TABLE:
 +        tableName = tableNamePrefix + "__DROP_TABLE_WITH_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.tableOperations().create(tableName);
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().delete(tableName);
 +        loginAs(rootUser);
 +        if (root_conn.tableOperations().list().contains(tableName))
 +          throw new IllegalStateException("Should be able to delete a table");
 +        break;
 +      case ALTER_TABLE:
 +        tableName = tableNamePrefix + "__ALTER_TABLE_WITH_PERM_TEST__";
 +        String table2 = tableName + "2";
 +        loginAs(rootUser);
 +        root_conn.tableOperations().create(tableName);
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().setProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +        loginAs(rootUser);
 +        Map<String,String> properties = map(root_conn.tableOperations().getProperties(tableName));
 +        if (!properties.get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +          throw new IllegalStateException("Should be able to set a table property");
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().removeProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey());
 +        loginAs(rootUser);
 +        properties = map(root_conn.tableOperations().getProperties(tableName));
 +        if (properties.get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +          throw new IllegalStateException("Should be able to remove a table property");
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().rename(tableName, table2);
 +        loginAs(rootUser);
 +        if (root_conn.tableOperations().list().contains(tableName) || !root_conn.tableOperations().list().contains(table2))
 +          throw new IllegalStateException("Should be able to rename a table");
 +        break;
 +      case CREATE_USER:
 +        user = "__CREATE_USER_WITH_PERM_TEST__";
 +        loginAs(testUser);
 +        test_user_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        loginAs(rootUser);
 +        if (passwordBased && !root_conn.securityOperations().authenticateUser(user, new PasswordToken(password)))
 +          throw new IllegalStateException("Should be able to create a user");
 +        break;
 +      case DROP_USER:
 +        user = "__DROP_USER_WITH_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        loginAs(testUser);
 +        test_user_conn.securityOperations().dropLocalUser(user);
 +        loginAs(rootUser);
 +        if (passwordBased && root_conn.securityOperations().authenticateUser(user, new PasswordToken(password)))
 +          throw new IllegalStateException("Should be able to delete a user");
 +        break;
 +      case ALTER_USER:
 +        user = "__ALTER_USER_WITH_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        loginAs(testUser);
 +        test_user_conn.securityOperations().changeUserAuthorizations(user, new Authorizations("A", "B"));
 +        loginAs(rootUser);
 +        if (root_conn.securityOperations().getUserAuthorizations(user).isEmpty())
 +          throw new IllegalStateException("Should be able to alter a user");
 +        break;
 +      case SYSTEM:
 +        // test for system permission would go here
 +        break;
 +      case CREATE_NAMESPACE:
 +        namespace = "__CREATE_NAMESPACE_WITH_PERM_TEST__";
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().create(namespace);
 +        loginAs(rootUser);
 +        if (!root_conn.namespaceOperations().list().contains(namespace))
 +          throw new IllegalStateException("Should be able to create a namespace");
 +        break;
 +      case DROP_NAMESPACE:
 +        namespace = "__DROP_NAMESPACE_WITH_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().create(namespace);
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().delete(namespace);
 +        loginAs(rootUser);
 +        if (root_conn.namespaceOperations().list().contains(namespace))
 +          throw new IllegalStateException("Should be able to delete a namespace");
 +        break;
 +      case ALTER_NAMESPACE:
 +        namespace = "__ALTER_NAMESPACE_WITH_PERM_TEST__";
 +        String namespace2 = namespace + "2";
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().create(namespace);
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().setProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +        loginAs(rootUser);
 +        Map<String,String> nsProperties = map(root_conn.namespaceOperations().getProperties(namespace));
 +        if (!nsProperties.get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +          throw new IllegalStateException("Should be able to set a namespace property");
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().removeProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey());
 +        loginAs(rootUser);
 +        nsProperties = map(root_conn.namespaceOperations().getProperties(namespace));
 +        if (nsProperties.get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +          throw new IllegalStateException("Should be able to remove a namespace property");
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().rename(namespace, namespace2);
 +        loginAs(rootUser);
 +        if (root_conn.namespaceOperations().list().contains(namespace) || !root_conn.namespaceOperations().list().contains(namespace2))
 +          throw new IllegalStateException("Should be able to rename a namespace");
 +        break;
 +      case OBTAIN_DELEGATION_TOKEN:
 +        ClientConfiguration clientConf = cluster.getClientConfig();
 +        if (clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
 +          // TODO Try to obtain a delegation token with the permission
 +        }
 +        break;
 +      case GRANT:
 +        loginAs(rootUser);
 +        root_conn.securityOperations().grantSystemPermission(testUser.getPrincipal(), SystemPermission.GRANT);
 +        loginAs(testUser);
 +        test_user_conn.securityOperations().grantSystemPermission(testUser.getPrincipal(), SystemPermission.CREATE_TABLE);
 +        loginAs(rootUser);
 +        assertTrue("Test user should have CREATE_TABLE",
 +            root_conn.securityOperations().hasSystemPermission(testUser.getPrincipal(), SystemPermission.CREATE_TABLE));
 +        assertTrue("Test user should have GRANT", root_conn.securityOperations().hasSystemPermission(testUser.getPrincipal(), SystemPermission.GRANT));
 +        root_conn.securityOperations().revokeSystemPermission(testUser.getPrincipal(), SystemPermission.CREATE_TABLE);
 +        break;
 +      default:
 +        throw new IllegalArgumentException("Unrecognized System Permission: " + perm);
 +    }
 +  }
 +
 +  private void verifyHasOnlyTheseSystemPermissions(Connector root_conn, String user, SystemPermission... perms) throws AccumuloException,
 +      AccumuloSecurityException {
 +    List<SystemPermission> permList = Arrays.asList(perms);
 +    for (SystemPermission p : SystemPermission.values()) {
 +      if (permList.contains(p)) {
 +        // should have these
 +        if (!root_conn.securityOperations().hasSystemPermission(user, p))
 +          throw new IllegalStateException(user + " SHOULD have system permission " + p);
 +      } else {
 +        // should not have these
 +        if (root_conn.securityOperations().hasSystemPermission(user, p))
 +          throw new IllegalStateException(user + " SHOULD NOT have system permission " + p);
 +      }
 +    }
 +  }
 +
 +  private void verifyHasNoSystemPermissions(Connector root_conn, String user, SystemPermission... perms) throws AccumuloException, AccumuloSecurityException {
 +    for (SystemPermission p : perms)
 +      if (root_conn.securityOperations().hasSystemPermission(user, p))
 +        throw new IllegalStateException(user + " SHOULD NOT have system permission " + p);
 +  }
 +
 +  @Test
 +  public void tablePermissionTest() throws Exception {
 +    // create the test user
 +    ClusterUser testUser = getUser(0), rootUser = getAdminUser();
 +
 +    String principal = testUser.getPrincipal();
 +    AuthenticationToken token = testUser.getToken();
 +    PasswordToken passwordToken = null;
 +    if (token instanceof PasswordToken) {
 +      passwordToken = (PasswordToken) token;
 +    }
 +    loginAs(rootUser);
 +    Connector c = getConnector();
 +    c.securityOperations().createLocalUser(principal, passwordToken);
 +    loginAs(testUser);
 +    Connector test_user_conn = c.getInstance().getConnector(principal, token);
 +
 +    // check for read-only access to metadata table
 +    loginAs(rootUser);
 +    verifyHasOnlyTheseTablePermissions(c, c.whoami(), MetadataTable.NAME, TablePermission.READ, TablePermission.ALTER_TABLE);
 +    verifyHasOnlyTheseTablePermissions(c, principal, MetadataTable.NAME, TablePermission.READ);
 +    String tableName = getUniqueNames(1)[0] + "__TABLE_PERMISSION_TEST__";
 +
 +    // test each permission
 +    for (TablePermission perm : TablePermission.values()) {
 +      log.debug("Verifying the " + perm + " permission");
 +
 +      // test permission before and after granting it
 +      createTestTable(c, principal, tableName);
 +      loginAs(testUser);
 +      testMissingTablePermission(test_user_conn, testUser, perm, tableName);
 +      loginAs(rootUser);
 +      c.securityOperations().grantTablePermission(principal, tableName, perm);
 +      verifyHasOnlyTheseTablePermissions(c, principal, tableName, perm);
 +      loginAs(testUser);
 +      testGrantedTablePermission(test_user_conn, testUser, perm, tableName);
 +
 +      loginAs(rootUser);
 +      createTestTable(c, principal, tableName);
 +      c.securityOperations().revokeTablePermission(principal, tableName, perm);
 +      verifyHasNoTablePermissions(c, principal, tableName, perm);
 +    }
 +  }
 +
 +  private void createTestTable(Connector c, String testUser, String tableName) throws Exception {
 +    if (!c.tableOperations().exists(tableName)) {
 +      // create the test table
 +      c.tableOperations().create(tableName);
 +      // put in some initial data
 +      BatchWriter writer = c.createBatchWriter(tableName, new BatchWriterConfig());
 +      Mutation m = new Mutation(new Text("row"));
 +      m.put(new Text("cf"), new Text("cq"), new Value("val".getBytes()));
 +      writer.addMutation(m);
 +      writer.close();
 +
 +      // verify proper permissions for creator and test user
 +      verifyHasOnlyTheseTablePermissions(c, c.whoami(), tableName, TablePermission.values());
 +      verifyHasNoTablePermissions(c, testUser, tableName, TablePermission.values());
 +
 +    }
 +  }
 +
 +  private void testMissingTablePermission(Connector test_user_conn, ClusterUser testUser, TablePermission perm, String tableName) throws Exception {
 +    Scanner scanner;
 +    BatchWriter writer;
 +    Mutation m;
 +    log.debug("Confirming that the lack of the " + perm + " permission properly restricts the user");
 +
 +    // test permission prior to granting it
 +    switch (perm) {
 +      case READ:
 +        try {
 +          scanner = test_user_conn.createScanner(tableName, Authorizations.EMPTY);
 +          int i = 0;
 +          for (Entry<Key,Value> entry : scanner)
 +            i += 1 + entry.getKey().getRowData().length();
 +          if (i != 0)
 +            throw new IllegalStateException("Should NOT be able to read from the table");
 +        } catch (RuntimeException e) {
 +          AccumuloSecurityException se = (AccumuloSecurityException) e.getCause();
 +          if (se.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw se;
 +        }
 +        break;
 +      case WRITE:
 +        try {
 +          writer = test_user_conn.createBatchWriter(tableName, new BatchWriterConfig());
 +          m = new Mutation(new Text("row"));
 +          m.put(new Text("a"), new Text("b"), new Value("c".getBytes()));
 +          writer.addMutation(m);
 +          try {
 +            writer.close();
 +          } catch (MutationsRejectedException e1) {
 +            if (e1.getSecurityErrorCodes().size() > 0)
 +              throw new AccumuloSecurityException(test_user_conn.whoami(), org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode.PERMISSION_DENIED, e1);
 +          }
 +          throw new IllegalStateException("Should NOT be able to write to a table");
 +        } catch (AccumuloSecurityException e) {
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw e;
 +        }
 +        break;
 +      case BULK_IMPORT:
 +        // test for bulk import permission would go here
 +        break;
 +      case ALTER_TABLE:
 +        Map<String,Set<Text>> groups = new HashMap<>();
 +        groups.put("tgroup", new HashSet<>(Arrays.asList(new Text("t1"), new Text("t2"))));
 +        try {
 +          test_user_conn.tableOperations().setLocalityGroups(tableName, groups);
 +          throw new IllegalStateException("User should not be able to set locality groups");
 +        } catch (AccumuloSecurityException e) {
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw e;
 +        }
 +        break;
 +      case DROP_TABLE:
 +        try {
 +          test_user_conn.tableOperations().delete(tableName);
 +          throw new IllegalStateException("User should not be able to delete the table");
 +        } catch (AccumuloSecurityException e) {
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw e;
 +        }
 +        break;
 +      case GRANT:
 +        try {
 +          test_user_conn.securityOperations().grantTablePermission(getAdminPrincipal(), tableName, TablePermission.GRANT);
 +          throw new IllegalStateException("User should not be able to grant permissions");
 +        } catch (AccumuloSecurityException e) {
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw e;
 +        }
 +        break;
 +      default:
 +        throw new IllegalArgumentException("Unrecognized table Permission: " + perm);
 +    }
 +  }
 +
 +  private void testGrantedTablePermission(Connector test_user_conn, ClusterUser normalUser, TablePermission perm, String tableName) throws AccumuloException,
 +      TableExistsException, AccumuloSecurityException, TableNotFoundException, MutationsRejectedException {
 +    Scanner scanner;
 +    BatchWriter writer;
 +    Mutation m;
 +    log.debug("Confirming that the presence of the " + perm + " permission properly permits the user");
 +
 +    // test permission after granting it
 +    switch (perm) {
 +      case READ:
 +        scanner = test_user_conn.createScanner(tableName, Authorizations.EMPTY);
 +        Iterator<Entry<Key,Value>> iter = scanner.iterator();
 +        while (iter.hasNext())
 +          iter.next();
 +        break;
 +      case WRITE:
 +        writer = test_user_conn.createBatchWriter(tableName, new BatchWriterConfig());
 +        m = new Mutation(new Text("row"));
 +        m.put(new Text("a"), new Text("b"), new Value("c".getBytes()));
 +        writer.addMutation(m);
 +        writer.close();
 +        break;
 +      case BULK_IMPORT:
 +        // test for bulk import permission would go here
 +        break;
 +      case ALTER_TABLE:
 +        Map<String,Set<Text>> groups = new HashMap<>();
 +        groups.put("tgroup", new HashSet<>(Arrays.asList(new Text("t1"), new Text("t2"))));
 +        test_user_conn.tableOperations().setLocalityGroups(tableName, groups);
 +        break;
 +      case DROP_TABLE:
 +        test_user_conn.tableOperations().delete(tableName);
 +        break;
 +      case GRANT:
 +        test_user_conn.securityOperations().grantTablePermission(getAdminPrincipal(), tableName, TablePermission.GRANT);
 +        break;
 +      default:
 +        throw new IllegalArgumentException("Unrecognized table Permission: " + perm);
 +    }
 +  }
 +
 +  private void verifyHasOnlyTheseTablePermissions(Connector root_conn, String user, String table, TablePermission... perms) throws AccumuloException,
 +      AccumuloSecurityException {
 +    List<TablePermission> permList = Arrays.asList(perms);
 +    for (TablePermission p : TablePermission.values()) {
 +      if (permList.contains(p)) {
 +        // should have these
 +        if (!root_conn.securityOperations().hasTablePermission(user, table, p))
 +          throw new IllegalStateException(user + " SHOULD have table permission " + p + " for table " + table);
 +      } else {
 +        // should not have these
 +        if (root_conn.securityOperations().hasTablePermission(user, table, p))
 +          throw new IllegalStateException(user + " SHOULD NOT have table permission " + p + " for table " + table);
 +      }
 +    }
 +  }
 +
 +  private void verifyHasNoTablePermissions(Connector root_conn, String user, String table, TablePermission... perms) throws AccumuloException,
 +      AccumuloSecurityException {
 +    for (TablePermission p : perms)
 +      if (root_conn.securityOperations().hasTablePermission(user, table, p))
 +        throw new IllegalStateException(user + " SHOULD NOT have table permission " + p + " for table " + table);
 +  }
 +}
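
The PermissionsIT methods above all follow the same deny-then-grant-then-revoke flow: attempt an operation without the permission and expect a security error, grant the permission and expect success, then revoke and verify the permission is gone. A minimal, self-contained sketch of that pattern is below; the `PermissionPatternSketch` and `PermissionDeniedException` names are hypothetical illustrations, not Accumulo API.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy in-memory permission store illustrating the deny-then-grant-then-revoke
// verification pattern used by PermissionsIT (hypothetical, not Accumulo API).
public class PermissionPatternSketch {

  // Stand-in for AccumuloSecurityException with PERMISSION_DENIED.
  public static class PermissionDeniedException extends RuntimeException {
    public PermissionDeniedException(String msg) { super(msg); }
  }

  private final Map<String,Set<String>> grants = new HashMap<>();

  public void grant(String user, String perm) {
    grants.computeIfAbsent(user, u -> new HashSet<>()).add(perm);
  }

  public void revoke(String user, String perm) {
    grants.getOrDefault(user, new HashSet<>()).remove(perm);
  }

  public boolean hasPermission(String user, String perm) {
    return grants.getOrDefault(user, new HashSet<>()).contains(perm);
  }

  // The guarded operation: denied unless the caller holds the permission.
  public void createTable(String user) {
    if (!hasPermission(user, "CREATE_TABLE"))
      throw new PermissionDeniedException("PERMISSION_DENIED");
  }

  public static void main(String[] args) {
    PermissionPatternSketch store = new PermissionPatternSketch();

    // 1. Without the permission, the operation must be denied.
    boolean denied = false;
    try {
      store.createTable("testUser");
    } catch (PermissionDeniedException e) {
      denied = true;
    }
    if (!denied)
      throw new IllegalStateException("Should NOT be able to create a table");

    // 2. After granting, the same operation must succeed.
    store.grant("testUser", "CREATE_TABLE");
    store.createTable("testUser");

    // 3. After revoking, the permission must be gone again.
    store.revoke("testUser", "CREATE_TABLE");
    if (store.hasPermission("testUser", "CREATE_TABLE"))
      throw new IllegalStateException("testUser SHOULD NOT have CREATE_TABLE");
  }
}
```
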

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
index 504a5d9,0000000..22fbf18
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
@@@ -1,107 -1,0 +1,110 @@@
 +package org.apache.accumulo.test.functional;
 +
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertNull;
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.FileNotFoundException;
 +
 +import org.apache.accumulo.cluster.AccumuloCluster;
 +import org.apache.accumulo.core.cli.BatchWriterOpts;
 +import org.apache.accumulo.core.cli.ScannerOpts;
 +import org.apache.accumulo.core.client.ClientConfiguration;
 +import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.admin.TableOperations;
 +import org.apache.accumulo.core.data.impl.KeyExtent;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.test.TestIngest;
 +import org.apache.accumulo.test.VerifyIngest;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.fs.FileSystem;
 +import org.apache.hadoop.fs.Path;
 +import org.hamcrest.CoreMatchers;
 +import org.junit.Assume;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +
 +import com.google.common.collect.Iterators;
 +
++@Category(MiniClusterOnlyTest.class)
 +public class TableIT extends AccumuloClusterHarness {
 +
 +  @Override
 +  protected int defaultTimeoutSeconds() {
 +    return 2 * 60;
 +  }
 +
 +  @Test
 +  public void test() throws Exception {
 +    Assume.assumeThat(getClusterType(), CoreMatchers.is(ClusterType.MINI));
 +
 +    AccumuloCluster cluster = getCluster();
 +    MiniAccumuloClusterImpl mac = (MiniAccumuloClusterImpl) cluster;
 +    String rootPath = mac.getConfig().getDir().getAbsolutePath();
 +
 +    Connector c = getConnector();
 +    TableOperations to = c.tableOperations();
 +    String tableName = getUniqueNames(1)[0];
 +    to.create(tableName);
 +
 +    TestIngest.Opts opts = new TestIngest.Opts();
 +    VerifyIngest.Opts vopts = new VerifyIngest.Opts();
 +    ClientConfiguration clientConfig = getCluster().getClientConfig();
 +    if (clientConfig.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
 +      opts.updateKerberosCredentials(clientConfig);
 +      vopts.updateKerberosCredentials(clientConfig);
 +    } else {
 +      opts.setPrincipal(getAdminPrincipal());
 +      vopts.setPrincipal(getAdminPrincipal());
 +    }
 +
 +    opts.setTableName(tableName);
 +    TestIngest.ingest(c, opts, new BatchWriterOpts());
 +    to.flush(tableName, null, null, true);
 +    vopts.setTableName(tableName);
 +    VerifyIngest.verifyIngest(c, vopts, new ScannerOpts());
 +    String id = to.tableIdMap().get(tableName);
 +    Scanner s = c.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
 +    s.setRange(new KeyExtent(id, null, null).toMetadataRange());
 +    s.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
 +    assertTrue(Iterators.size(s.iterator()) > 0);
 +
 +    FileSystem fs = getCluster().getFileSystem();
 +    assertTrue(fs.listStatus(new Path(rootPath + "/accumulo/tables/" + id)).length > 0);
 +    to.delete(tableName);
 +    assertEquals(0, Iterators.size(s.iterator()));
 +    try {
 +      assertEquals(0, fs.listStatus(new Path(rootPath + "/accumulo/tables/" + id)).length);
 +    } catch (FileNotFoundException ex) {
 +      // that's fine, too
 +    }
 +    assertNull(to.tableIdMap().get(tableName));
 +    to.create(tableName);
 +    TestIngest.ingest(c, opts, new BatchWriterOpts());
 +    VerifyIngest.verifyIngest(c, vopts, new ScannerOpts());
 +    to.delete(tableName);
 +  }
 +
 +}
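
TableIT above exercises the table-id lifecycle: creating a table assigns an id, deleting it removes the id mapping (hence the `assertNull(to.tableIdMap().get(tableName))`), and recreating the same table name yields a fresh id with a clean metadata and file footprint. A minimal sketch of that lifecycle follows; the `TableIdLifecycleSketch` class is a hypothetical illustration, not Accumulo's `TableOperations`.

```java
import java.util.HashMap;
import java.util.Map;

// Toy registry illustrating the create/delete/recreate table-id lifecycle
// that TableIT verifies (hypothetical, not Accumulo API).
public class TableIdLifecycleSketch {
  private final Map<String,String> tableIdMap = new HashMap<>();
  private int nextId = 1;

  // Creating a table always assigns a brand-new id.
  public String create(String tableName) {
    String id = Integer.toString(nextId++);
    tableIdMap.put(tableName, id);
    return id;
  }

  // Deleting removes the name-to-id mapping entirely.
  public void delete(String tableName) {
    tableIdMap.remove(tableName);
  }

  // Returns the current id, or null if the table does not exist.
  public String tableIdMap(String tableName) {
    return tableIdMap.get(tableName);
  }

  public static void main(String[] args) {
    TableIdLifecycleSketch to = new TableIdLifecycleSketch();
    String id = to.create("t");
    to.delete("t");
    if (to.tableIdMap("t") != null)
      throw new IllegalStateException("id mapping should be gone after delete");
    String id2 = to.create("t");
    if (id.equals(id2))
      throw new IllegalStateException("recreated table should get a new id");
  }
}
```
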

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
index 4559195,0000000..32df894
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
@@@ -1,243 -1,0 +1,246 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.replication;
 +
 +import java.security.PrivilegedExceptionAction;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 +import org.apache.accumulo.harness.MiniClusterHarness;
 +import org.apache.accumulo.harness.TestingKdc;
 +import org.apache.accumulo.master.replication.SequentialWorkAssigner;
 +import org.apache.accumulo.minicluster.ServerType;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 +import org.apache.accumulo.minicluster.impl.ProcessReference;
 +import org.apache.accumulo.server.replication.ReplicaSystemFactory;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.accumulo.test.functional.KerberosIT;
 +import org.apache.accumulo.tserver.TabletServer;
 +import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.fs.RawLocalFileSystem;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Assert;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.google.common.collect.Iterators;
 +
 +/**
 + * Ensure that replication occurs using keytabs instead of password (not to mention SASL)
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class KerberosReplicationIT extends AccumuloITBase {
 +  private static final Logger log = LoggerFactory.getLogger(KerberosReplicationIT.class);
 +
 +  private static TestingKdc kdc;
 +  private static String krbEnabledForITs = null;
 +  private static ClusterUser rootUser;
 +
 +  @BeforeClass
 +  public static void startKdc() throws Exception {
 +    kdc = new TestingKdc();
 +    kdc.start();
 +    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
 +    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
 +    }
 +    rootUser = kdc.getRootUser();
 +  }
 +
 +  @AfterClass
 +  public static void stopKdc() throws Exception {
 +    if (null != kdc) {
 +      kdc.stop();
 +    }
 +    if (null != krbEnabledForITs) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
 +    }
 +  }
 +
 +  private MiniAccumuloClusterImpl primary, peer;
 +  private static final String PRIMARY_NAME = "primary", PEER_NAME = "peer";
 +
 +  @Override
 +  protected int defaultTimeoutSeconds() {
 +    return 60 * 3;
 +  }
 +
 +  private MiniClusterConfigurationCallback getConfigCallback(final String name) {
 +    return new MiniClusterConfigurationCallback() {
 +      @Override
 +      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
 +        cfg.setNumTservers(1);
 +        cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
 +        cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, "2M");
 +        cfg.setProperty(Property.GC_CYCLE_START, "1s");
 +        cfg.setProperty(Property.GC_CYCLE_DELAY, "5s");
 +        cfg.setProperty(Property.REPLICATION_WORK_ASSIGNMENT_SLEEP, "1s");
 +        cfg.setProperty(Property.MASTER_REPLICATION_SCAN_INTERVAL, "1s");
 +        cfg.setProperty(Property.REPLICATION_NAME, name);
 +        cfg.setProperty(Property.REPLICATION_MAX_UNIT_SIZE, "8M");
 +        cfg.setProperty(Property.REPLICATION_WORK_ASSIGNER, SequentialWorkAssigner.class.getName());
 +        cfg.setProperty(Property.TSERV_TOTAL_MUTATION_QUEUE_MAX, "1M");
 +        coreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
 +        coreSite.set("fs.defaultFS", "file:///");
 +      }
 +    };
 +  }
 +
 +  @Before
 +  public void setup() throws Exception {
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +
 +    // Create a primary and a peer instance, both with the same "root" user
 +    primary = harness.create(getClass().getName(), testName.getMethodName(), new PasswordToken("unused"), getConfigCallback(PRIMARY_NAME), kdc);
 +    primary.start();
 +
 +    peer = harness.create(getClass().getName(), testName.getMethodName() + "_peer", new PasswordToken("unused"), getConfigCallback(PEER_NAME), kdc);
 +    peer.start();
 +
 +    // Enable kerberos auth
 +    Configuration conf = new Configuration(false);
 +    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +    UserGroupInformation.setConfiguration(conf);
 +  }
 +
 +  @After
 +  public void teardown() throws Exception {
 +    if (null != peer) {
 +      peer.stop();
 +    }
 +    if (null != primary) {
 +      primary.stop();
 +    }
 +    UserGroupInformation.setConfiguration(new Configuration(false));
 +  }
 +
 +  @Test
 +  public void dataReplicatedToCorrectTable() throws Exception {
 +    // Login as the root user
 +    final UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().toURI().toString());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        log.info("testing {}", ugi);
 +        final KerberosToken token = new KerberosToken();
 +        final Connector primaryConn = primary.getConnector(rootUser.getPrincipal(), token);
 +        final Connector peerConn = peer.getConnector(rootUser.getPrincipal(), token);
 +
 +        ClusterUser replicationUser = kdc.getClientPrincipal(0);
 +
 +        // Create user for replication to the peer
 +        peerConn.securityOperations().createLocalUser(replicationUser.getPrincipal(), null);
 +
 +        primaryConn.instanceOperations().setProperty(Property.REPLICATION_PEER_USER.getKey() + PEER_NAME, replicationUser.getPrincipal());
 +        primaryConn.instanceOperations().setProperty(Property.REPLICATION_PEER_KEYTAB.getKey() + PEER_NAME, replicationUser.getKeytab().getAbsolutePath());
 +
 +        // ...peer = AccumuloReplicaSystem,instanceName,zookeepers
 +        primaryConn.instanceOperations().setProperty(
 +            Property.REPLICATION_PEERS.getKey() + PEER_NAME,
 +            ReplicaSystemFactory.getPeerConfigurationValue(AccumuloReplicaSystem.class,
 +                AccumuloReplicaSystem.buildConfiguration(peerConn.getInstance().getInstanceName(), peerConn.getInstance().getZooKeepers())));
 +
 +        String primaryTable1 = "primary", peerTable1 = "peer";
 +
 +        // Create tables
 +        primaryConn.tableOperations().create(primaryTable1);
 +        String masterTableId1 = primaryConn.tableOperations().tableIdMap().get(primaryTable1);
 +        Assert.assertNotNull(masterTableId1);
 +
 +        peerConn.tableOperations().create(peerTable1);
 +        String peerTableId1 = peerConn.tableOperations().tableIdMap().get(peerTable1);
 +        Assert.assertNotNull(peerTableId1);
 +
 +        // Grant write permission
 +        peerConn.securityOperations().grantTablePermission(replicationUser.getPrincipal(), peerTable1, TablePermission.WRITE);
 +
 +        // Replicate this table to the peerClusterName in a table with the peerTableId table id
 +        primaryConn.tableOperations().setProperty(primaryTable1, Property.TABLE_REPLICATION.getKey(), "true");
 +        primaryConn.tableOperations().setProperty(primaryTable1, Property.TABLE_REPLICATION_TARGET.getKey() + PEER_NAME, peerTableId1);
 +
 +        // Write some data to table1
 +        BatchWriter bw = primaryConn.createBatchWriter(primaryTable1, new BatchWriterConfig());
 +        long masterTable1Records = 0L;
 +        for (int rows = 0; rows < 2500; rows++) {
 +          Mutation m = new Mutation(primaryTable1 + rows);
 +          for (int cols = 0; cols < 100; cols++) {
 +            String value = Integer.toString(cols);
 +            m.put(value, "", value);
 +            masterTable1Records++;
 +          }
 +          bw.addMutation(m);
 +        }
 +
 +        bw.close();
 +
 +        log.info("Wrote all data to primary cluster");
 +
 +        Set<String> filesFor1 = primaryConn.replicationOperations().referencedFiles(primaryTable1);
 +
 +        // Restart the tserver to force a close on the WAL
 +        for (ProcessReference proc : primary.getProcesses().get(ServerType.TABLET_SERVER)) {
 +          primary.killProcess(ServerType.TABLET_SERVER, proc);
 +        }
 +        primary.exec(TabletServer.class);
 +
 +        log.info("Restarted the tserver");
 +
 +        // Read the data -- the tserver is back up and running and tablets are assigned
 +        Iterators.size(primaryConn.createScanner(primaryTable1, Authorizations.EMPTY).iterator());
 +
 +        // Wait for both tables to be replicated
 +        log.info("Waiting for {} for {}", filesFor1, primaryTable1);
 +        primaryConn.replicationOperations().drain(primaryTable1, filesFor1);
 +
 +        long countTable = 0L;
 +        for (Entry<Key,Value> entry : peerConn.createScanner(peerTable1, Authorizations.EMPTY)) {
 +          countTable++;
 +          Assert.assertTrue("Found unexpected key-value " + entry.getKey().toStringNoTruncate() + " " + entry.getValue(), entry.getKey().getRow().toString()
 +              .startsWith(primaryTable1));
 +        }
 +
 +        log.info("Found {} records in {}", countTable, peerTable1);
 +        Assert.assertEquals(masterTable1Records, countTable);
 +
 +        return null;
 +      }
 +    });
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/trace/pom.xml
----------------------------------------------------------------------


[05/10] accumulo git commit: Merge branch '1.7' into 1.8

Posted by el...@apache.org.
http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/categories/AnyClusterTest.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/categories/AnyClusterTest.java
index 0000000,0000000..765057e
new file mode 100644
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/categories/AnyClusterTest.java
@@@ -1,0 -1,0 +1,25 @@@
++/*
++ * Licensed to the Apache Software Foundation (ASF) under one or more
++ * contributor license agreements.  See the NOTICE file distributed with
++ * this work for additional information regarding copyright ownership.
++ * The ASF licenses this file to you under the Apache License, Version 2.0
++ * (the "License"); you may not use this file except in compliance with
++ * the License.  You may obtain a copy of the License at
++ *
++ * http://www.apache.org/licenses/LICENSE-2.0
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS,
++ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++ * See the License for the specific language governing permissions and
++ * limitations under the License.
++ */
++package org.apache.accumulo.test.categories;
++
++/**
++ * Interface to be used with the JUnit Category annotation to denote that the integration test can be run against any kind of cluster (a MiniAccumuloCluster
++ * or a StandaloneAccumuloCluster).
++ */
++public interface AnyClusterTest {
++
++}
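Category interfaces like the one above are plain marker types: JUnit's Categories runner includes a test class when the value of its @Category annotation is assignable to the requested group, which is also why an interface hierarchy can be used to nest groups. The following self-contained sketch shows that matching rule with a stand-in annotation (the nested Category annotation and the *LikeIT class names here are illustrative, not JUnit's or Accumulo's own types):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class CategoryMatchDemo {
  // Stand-ins for the marker interfaces introduced by this commit.
  interface AnyClusterTest {}
  interface MiniClusterOnlyTest {}

  // Stand-in for org.junit.experimental.categories.Category.
  @Retention(RetentionPolicy.RUNTIME)
  @interface Category {
    Class<?>[] value();
  }

  @Category(MiniClusterOnlyTest.class)
  static class KerberosLikeIT {}

  @Category(AnyClusterTest.class)
  static class NamespacesLikeIT {}

  /** True if testClass carries a category assignable to the requested group. */
  static boolean runsInGroup(Class<?> testClass, Class<?> requestedGroup) {
    Category cat = testClass.getAnnotation(Category.class);
    if (cat == null) {
      return false; // an untagged class is excluded when a group filter is active
    }
    for (Class<?> tagged : cat.value()) {
      if (requestedGroup.isAssignableFrom(tagged)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // A mini-cluster-only test is skipped when only AnyClusterTest is requested.
    System.out.println(runsInGroup(KerberosLikeIT.class, AnyClusterTest.class));   // false
    System.out.println(runsInGroup(NamespacesLikeIT.class, AnyClusterTest.class)); // true
  }
}
```

In the real build, the requested group comes from the surefire/failsafe configuration added to pom.xml rather than from code.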

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
index 0000000,0000000..1a972ef
new file mode 100644
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
@@@ -1,0 -1,0 +1,24 @@@
++/*
++ * Licensed to the Apache Software Foundation (ASF) under one or more
++ * contributor license agreements.  See the NOTICE file distributed with
++ * this work for additional information regarding copyright ownership.
++ * The ASF licenses this file to you under the Apache License, Version 2.0
++ * (the "License"); you may not use this file except in compliance with
++ * the License.  You may obtain a copy of the License at
++ *
++ * http://www.apache.org/licenses/LICENSE-2.0
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS,
++ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++ * See the License for the specific language governing permissions and
++ * limitations under the License.
++ */
++package org.apache.accumulo.test.categories;
++
++/**
++ * Interface to be used with the JUnit Category annotation to denote that the integration test requires a MiniAccumuloCluster.
++ */
++public interface MiniClusterOnlyTest {
++
++}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/categories/package-info.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/categories/package-info.java
index 0000000,0000000..e7071fc
new file mode 100644
--- /dev/null
+++ b/test/src/main/java/org/apache/accumulo/test/categories/package-info.java
@@@ -1,0 -1,0 +1,21 @@@
++/*
++ * Licensed to the Apache Software Foundation (ASF) under one or more
++ * contributor license agreements.  See the NOTICE file distributed with
++ * this work for additional information regarding copyright ownership.
++ * The ASF licenses this file to you under the Apache License, Version 2.0
++ * (the "License"); you may not use this file except in compliance with
++ * the License.  You may obtain a copy of the License at
++ *
++ * http://www.apache.org/licenses/LICENSE-2.0
++ *
++ * Unless required by applicable law or agreed to in writing, software
++ * distributed under the License is distributed on an "AS IS" BASIS,
++ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
++ * See the License for the specific language governing permissions and
++ * limitations under the License.
++ */
++/**
++ * JUnit categories for the various types of Accumulo integration tests.
++ */
++package org.apache.accumulo.test.categories;
++

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
index 29f2780,0000000..8dbbc12
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
@@@ -1,121 -1,0 +1,124 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.IOException;
 +import java.io.InputStream;
 +import java.util.Collections;
 +import java.util.EnumSet;
 +import java.util.Iterator;
 +import java.util.Map.Entry;
 +import java.util.concurrent.TimeUnit;
 +
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.IteratorSetting;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.iterators.Combiner;
 +import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.fs.FSDataOutputStream;
 +import org.apache.hadoop.fs.FileSystem;
 +import org.apache.hadoop.fs.Path;
 +import org.hamcrest.CoreMatchers;
 +import org.junit.Assume;
 +import org.junit.Before;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +
 +import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 +
++@Category(MiniClusterOnlyTest.class)
 +public class ClassLoaderIT extends AccumuloClusterHarness {
 +
 +  private static final long ZOOKEEPER_PROPAGATION_TIME = 10 * 1000;
 +
 +  @Override
 +  protected int defaultTimeoutSeconds() {
 +    return 2 * 60;
 +  }
 +
 +  private String rootPath;
 +
 +  @Before
 +  public void checkCluster() {
 +    Assume.assumeThat(getClusterType(), CoreMatchers.is(ClusterType.MINI));
 +    MiniAccumuloClusterImpl mac = (MiniAccumuloClusterImpl) getCluster();
 +    rootPath = mac.getConfig().getDir().getAbsolutePath();
 +  }
 +
 +  private static void copyStreamToFileSystem(FileSystem fs, String jarName, Path path) throws IOException {
 +    byte[] buffer = new byte[10 * 1024];
 +    try (FSDataOutputStream dest = fs.create(path); InputStream stream = ClassLoaderIT.class.getResourceAsStream(jarName)) {
 +      while (true) {
 +        int n = stream.read(buffer, 0, buffer.length);
 +        if (n <= 0) {
 +          break;
 +        }
 +        dest.write(buffer, 0, n);
 +      }
 +    }
 +  }
 +
 +  @Test
 +  public void test() throws Exception {
 +    Connector c = getConnector();
 +    String tableName = getUniqueNames(1)[0];
 +    c.tableOperations().create(tableName);
 +    BatchWriter bw = c.createBatchWriter(tableName, new BatchWriterConfig());
 +    Mutation m = new Mutation("row1");
 +    m.put("cf", "col1", "Test");
 +    bw.addMutation(m);
 +    bw.close();
 +    scanCheck(c, tableName, "Test");
 +    FileSystem fs = getCluster().getFileSystem();
 +    Path jarPath = new Path(rootPath + "/lib/ext/Test.jar");
 +    copyStreamToFileSystem(fs, "/TestCombinerX.jar", jarPath);
 +    sleepUninterruptibly(1, TimeUnit.SECONDS);
 +    IteratorSetting is = new IteratorSetting(10, "TestCombiner", "org.apache.accumulo.test.functional.TestCombiner");
 +    Combiner.setColumns(is, Collections.singletonList(new IteratorSetting.Column("cf")));
 +    c.tableOperations().attachIterator(tableName, is, EnumSet.of(IteratorScope.scan));
 +    sleepUninterruptibly(ZOOKEEPER_PROPAGATION_TIME, TimeUnit.MILLISECONDS);
 +    scanCheck(c, tableName, "TestX");
 +    fs.delete(jarPath, true);
 +    copyStreamToFileSystem(fs, "/TestCombinerY.jar", jarPath);
 +    sleepUninterruptibly(5, TimeUnit.SECONDS);
 +    scanCheck(c, tableName, "TestY");
 +    fs.delete(jarPath, true);
 +  }
 +
 +  private void scanCheck(Connector c, String tableName, String expected) throws Exception {
 +    Scanner bs = c.createScanner(tableName, Authorizations.EMPTY);
 +    Iterator<Entry<Key,Value>> iterator = bs.iterator();
 +    assertTrue(iterator.hasNext());
 +    Entry<Key,Value> next = iterator.next();
 +    assertFalse(iterator.hasNext());
 +    assertEquals(expected, next.getValue().toString());
 +  }
 +
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
index 85246bf,0000000..71777bf
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/ConfigurableMacBase.java
@@@ -1,185 -1,0 +1,188 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.BufferedOutputStream;
 +import java.io.File;
 +import java.io.FileOutputStream;
 +import java.io.IOException;
 +import java.io.OutputStream;
 +import java.util.Map;
 +
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.ClientConfiguration;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.Instance;
 +import org.apache.accumulo.core.client.ZooKeeperInstance;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.util.MonitorUtil;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.minicluster.MiniAccumuloCluster;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 +import org.apache.accumulo.minicluster.impl.ZooKeeperBindException;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.accumulo.test.util.CertUtils;
 +import org.apache.commons.io.FileUtils;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.Path;
 +import org.apache.zookeeper.KeeperException;
 +import org.junit.After;
 +import org.junit.Before;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * General integration test base class that provides access to a {@link MiniAccumuloCluster} for testing. Tests extending this class typically do very
 + * disruptive things to the instance and require specific configuration. Most tests don't need this level of control and should extend
 + * {@link AccumuloClusterHarness} instead.
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class ConfigurableMacBase extends AccumuloITBase {
 +  public static final Logger log = LoggerFactory.getLogger(ConfigurableMacBase.class);
 +
 +  protected MiniAccumuloClusterImpl cluster;
 +
 +  protected void configure(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {}
 +
 +  protected void beforeClusterStart(MiniAccumuloConfigImpl cfg) throws Exception {}
 +
 +  protected static final String ROOT_PASSWORD = "testRootPassword1";
 +
 +  public static void configureForEnvironment(MiniAccumuloConfigImpl cfg, Class<?> testClass, File folder) {
 +    if ("true".equals(System.getProperty("org.apache.accumulo.test.functional.useSslForIT"))) {
 +      configureForSsl(cfg, folder);
 +    }
 +    if ("true".equals(System.getProperty("org.apache.accumulo.test.functional.useCredProviderForIT"))) {
 +      cfg.setUseCredentialProvider(true);
 +    }
 +  }
 +
 +  protected static void configureForSsl(MiniAccumuloConfigImpl cfg, File sslDir) {
 +    Map<String,String> siteConfig = cfg.getSiteConfig();
 +    if ("true".equals(siteConfig.get(Property.INSTANCE_RPC_SSL_ENABLED.getKey()))) {
 +      // already enabled; don't mess with it
 +      return;
 +    }
 +
 +    // create parent directories, and ensure sslDir is empty
 +    assertTrue(sslDir.mkdirs() || sslDir.isDirectory());
 +    FileUtils.deleteQuietly(sslDir);
 +    assertTrue(sslDir.mkdir());
 +
 +    File rootKeystoreFile = new File(sslDir, "root-" + cfg.getInstanceName() + ".jks");
 +    File localKeystoreFile = new File(sslDir, "local-" + cfg.getInstanceName() + ".jks");
 +    File publicTruststoreFile = new File(sslDir, "public-" + cfg.getInstanceName() + ".jks");
 +    final String rootKeystorePassword = "root_keystore_password", truststorePassword = "truststore_password";
 +    try {
 +      new CertUtils(Property.RPC_SSL_KEYSTORE_TYPE.getDefaultValue(), "o=Apache Accumulo,cn=MiniAccumuloCluster", "RSA", 2048, "sha1WithRSAEncryption")
 +          .createAll(rootKeystoreFile, localKeystoreFile, publicTruststoreFile, cfg.getInstanceName(), rootKeystorePassword, cfg.getRootPassword(),
 +              truststorePassword);
 +    } catch (Exception e) {
 +      throw new RuntimeException("error creating MAC keystore", e);
 +    }
 +
 +    siteConfig.put(Property.INSTANCE_RPC_SSL_ENABLED.getKey(), "true");
 +    siteConfig.put(Property.RPC_SSL_KEYSTORE_PATH.getKey(), localKeystoreFile.getAbsolutePath());
 +    siteConfig.put(Property.RPC_SSL_KEYSTORE_PASSWORD.getKey(), cfg.getRootPassword());
 +    siteConfig.put(Property.RPC_SSL_TRUSTSTORE_PATH.getKey(), publicTruststoreFile.getAbsolutePath());
 +    siteConfig.put(Property.RPC_SSL_TRUSTSTORE_PASSWORD.getKey(), truststorePassword);
 +    cfg.setSiteConfig(siteConfig);
 +  }
 +
 +  @Before
 +  public void setUp() throws Exception {
 +    createMiniAccumulo();
 +    Exception lastException = null;
 +    for (int i = 0; i < 3; i++) {
 +      try {
 +        cluster.start();
 +        return;
 +      } catch (ZooKeeperBindException e) {
 +        lastException = e;
 +        log.warn("Failed to start MiniAccumuloCluster, presumably due to ZooKeeper issues", lastException);
 +        Thread.sleep(3000);
 +        createMiniAccumulo();
 +      }
 +    }
 +    throw new RuntimeException("Failed to start MiniAccumuloCluster after three attempts", lastException);
 +  }
 +
 +  private void createMiniAccumulo() throws Exception {
 +    // createTestDir will give us an empty directory; we don't need to clean it up ourselves
 +    File baseDir = createTestDir(this.getClass().getName() + "_" + this.testName.getMethodName());
 +    MiniAccumuloConfigImpl cfg = new MiniAccumuloConfigImpl(baseDir, ROOT_PASSWORD);
 +    String nativePathInDevTree = NativeMapIT.nativeMapLocation().getAbsolutePath();
 +    String nativePathInMapReduce = new File(System.getProperty("user.dir")).toString();
 +    cfg.setNativeLibPaths(nativePathInDevTree, nativePathInMapReduce);
 +    cfg.setProperty(Property.GC_FILE_ARCHIVE, Boolean.TRUE.toString());
 +    Configuration coreSite = new Configuration(false);
 +    configure(cfg, coreSite);
 +    cfg.setProperty(Property.TSERV_NATIVEMAP_ENABLED, Boolean.TRUE.toString());
 +    configureForEnvironment(cfg, getClass(), getSslDir(baseDir));
 +    cluster = new MiniAccumuloClusterImpl(cfg);
 +    if (coreSite.size() > 0) {
 +      File csFile = new File(cluster.getConfig().getConfDir(), "core-site.xml");
 +      if (csFile.exists()) {
 +        coreSite.addResource(new Path(csFile.getAbsolutePath()));
 +      }
 +      File tmp = new File(csFile.getAbsolutePath() + ".tmp");
 +      OutputStream out = new BufferedOutputStream(new FileOutputStream(tmp));
 +      coreSite.writeXml(out);
 +      out.close();
 +      assertTrue(tmp.renameTo(csFile));
 +    }
 +    beforeClusterStart(cfg);
 +  }
 +
 +  @After
 +  public void tearDown() throws Exception {
 +    if (cluster != null)
 +      try {
 +        cluster.stop();
 +      } catch (Exception e) {
 +        // ignored
 +      }
 +  }
 +
 +  protected MiniAccumuloClusterImpl getCluster() {
 +    return cluster;
 +  }
 +
 +  protected Connector getConnector() throws AccumuloException, AccumuloSecurityException {
 +    return getCluster().getConnector("root", new PasswordToken(ROOT_PASSWORD));
 +  }
 +
 +  protected Process exec(Class<?> clazz, String... args) throws IOException {
 +    return getCluster().exec(clazz, args);
 +  }
 +
 +  protected String getMonitor() throws KeeperException, InterruptedException {
 +    Instance instance = new ZooKeeperInstance(getCluster().getClientConfig());
 +    return MonitorUtil.getLocation(instance);
 +  }
 +
 +  protected ClientConfiguration getClientConfig() throws Exception {
 +    return new ClientConfiguration(getCluster().getConfig().getClientConfFile());
 +  }
 +
 +}
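The setUp() method above retries cluster startup three times, sleeping between attempts, because a ZooKeeperBindException is a transient port-bind failure rather than a real configuration error. That recreate-and-retry pattern can be sketched generically; this standalone version is a simplification (the RetrySketch and BindException names are illustrative, not Accumulo types, and the real code also rebuilds the MiniAccumuloCluster before each retry):

```java
import java.util.concurrent.Callable;

public class RetrySketch {
  /** Stand-in for minicluster's ZooKeeperBindException (a transient port-bind failure). */
  static class BindException extends Exception {}

  /**
   * Run start(), retrying up to maxAttempts times when a BindException is thrown,
   * mirroring ConfigurableMacBase.setUp(): sleep, retry, then give up loudly.
   */
  static <T> T startWithRetries(Callable<T> start, int maxAttempts, long sleepMillis) throws Exception {
    Exception last = null;
    for (int i = 0; i < maxAttempts; i++) {
      try {
        return start.call();
      } catch (BindException e) {
        last = e;
        Thread.sleep(sleepMillis);
      }
    }
    throw new RuntimeException("Failed to start after " + maxAttempts + " attempts", last);
  }

  public static void main(String[] args) throws Exception {
    final int[] calls = {0};
    // Fails twice with a bind error, then succeeds on the third attempt.
    String result = startWithRetries(() -> {
      if (++calls[0] < 3) {
        throw new BindException();
      }
      return "started";
    }, 3, 1);
    System.out.println(result + " after " + calls[0] + " attempts");
  }
}
```

Only the bind exception is retried; any other startup failure propagates immediately, which keeps genuine misconfigurations visible.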

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
index e636daa,0000000..1bdc71a
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/KerberosIT.java
@@@ -1,656 -1,0 +1,659 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertNotNull;
 +import static org.junit.Assert.assertTrue;
 +import static org.junit.Assert.fail;
 +
 +import java.io.File;
 +import java.lang.reflect.UndeclaredThrowableException;
 +import java.security.PrivilegedExceptionAction;
 +import java.util.Arrays;
 +import java.util.Collections;
 +import java.util.HashSet;
 +import java.util.Iterator;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.concurrent.TimeUnit;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.BatchScanner;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.TableExistsException;
 +import org.apache.accumulo.core.client.TableNotFoundException;
 +import org.apache.accumulo.core.client.admin.CompactionConfig;
 +import org.apache.accumulo.core.client.admin.DelegationTokenConfig;
 +import org.apache.accumulo.core.client.impl.AuthenticationTokenIdentifier;
 +import org.apache.accumulo.core.client.impl.DelegationTokenImpl;
 +import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Range;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.metadata.RootTable;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.security.ColumnVisibility;
 +import org.apache.accumulo.core.security.SystemPermission;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 +import org.apache.accumulo.harness.MiniClusterHarness;
 +import org.apache.accumulo.harness.TestingKdc;
 +import org.apache.accumulo.minicluster.ServerType;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.minikdc.MiniKdc;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.google.common.collect.Iterables;
 +import com.google.common.collect.Sets;
 +
 +/**
 + * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class KerberosIT extends AccumuloITBase {
 +  private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
 +
 +  private static TestingKdc kdc;
 +  private static String krbEnabledForITs = null;
 +  private static ClusterUser rootUser;
 +
 +  @BeforeClass
 +  public static void startKdc() throws Exception {
 +    kdc = new TestingKdc();
 +    kdc.start();
 +    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
 +    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
 +    }
 +    rootUser = kdc.getRootUser();
 +  }
 +
 +  @AfterClass
 +  public static void stopKdc() throws Exception {
 +    if (null != kdc) {
 +      kdc.stop();
 +    }
 +    if (null != krbEnabledForITs) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
 +    }
 +    UserGroupInformation.setConfiguration(new Configuration(false));
 +  }
 +
 +  @Override
 +  public int defaultTimeoutSeconds() {
 +    return 60 * 5;
 +  }
 +
 +  private MiniAccumuloClusterImpl mac;
 +
 +  @Before
 +  public void startMac() throws Exception {
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +    mac = harness.create(this, new PasswordToken("unused"), kdc, new MiniClusterConfigurationCallback() {
 +
 +      @Override
 +      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
 +        Map<String,String> site = cfg.getSiteConfig();
 +        site.put(Property.INSTANCE_ZK_TIMEOUT.getKey(), "15s");
 +        cfg.setSiteConfig(site);
 +      }
 +
 +    });
 +
 +    mac.getConfig().setNumTservers(1);
 +    mac.start();
 +    // Enable kerberos auth
 +    Configuration conf = new Configuration(false);
 +    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +    UserGroupInformation.setConfiguration(conf);
 +  }
 +
 +  @After
 +  public void stopMac() throws Exception {
 +    if (null != mac) {
 +      mac.stop();
 +    }
 +  }
 +
 +  @Test
 +  public void testAdminUser() throws Exception {
 +    // Login as the client (provided to `accumulo init` as the "root" user)
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        final Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +
 +        // The "root" user should have all system permissions
 +        for (SystemPermission perm : SystemPermission.values()) {
 +          assertTrue("Expected user to have permission: " + perm, conn.securityOperations().hasSystemPermission(conn.whoami(), perm));
 +        }
 +
 +        // and the ability to modify the root and metadata tables
 +        for (String table : Arrays.asList(RootTable.NAME, MetadataTable.NAME)) {
 +          assertTrue(conn.securityOperations().hasTablePermission(conn.whoami(), table, TablePermission.ALTER_TABLE));
 +        }
 +        return null;
 +      }
 +    });
 +  }
 +
 +  @Test
 +  public void testNewUser() throws Exception {
 +    String newUser = testName.getMethodName();
 +    final File newUserKeytab = new File(kdc.getKeytabDir(), newUser + ".keytab");
 +    if (newUserKeytab.exists() && !newUserKeytab.delete()) {
 +      log.warn("Unable to delete {}", newUserKeytab);
 +    }
 +
 +    // Create a new user
 +    kdc.createPrincipal(newUserKeytab, newUser);
 +
 +    final String newQualifiedUser = kdc.qualifyUser(newUser);
 +    final HashSet<String> users = Sets.newHashSet(rootUser.getPrincipal());
 +
 +    // Login as the "root" user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        // Force some server-side RPCs, which create the system user if it doesn't already exist
 +        createTableWithDataAndCompact(conn);
 +
 +        // The system user should not be reported as a local user
 +        assertEquals(users, conn.securityOperations().listLocalUsers());
 +
 +        return null;
 +      }
 +    });
 +    // Switch to a new user
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(newQualifiedUser, newUserKeytab.getAbsolutePath());
 +    log.info("Logged in as {}", newQualifiedUser);
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(newQualifiedUser, new KerberosToken());
 +        log.info("Created connector as {}", newQualifiedUser);
 +        assertEquals(newQualifiedUser, conn.whoami());
 +
 +        // The new user should have no system permissions
 +        for (SystemPermission perm : SystemPermission.values()) {
 +          assertFalse(conn.securityOperations().hasSystemPermission(newQualifiedUser, perm));
 +        }
 +
 +        users.add(newQualifiedUser);
 +
 +        // Same users as before, plus the new user we just created
 +        assertEquals(users, conn.securityOperations().listLocalUsers());
 +        return null;
 +      }
 +
 +    });
 +  }
 +
 +  @Test
 +  public void testUserPrivilegesThroughGrant() throws Exception {
 +    String user1 = testName.getMethodName();
 +    final File user1Keytab = new File(kdc.getKeytabDir(), user1 + ".keytab");
 +    if (user1Keytab.exists() && !user1Keytab.delete()) {
 +      log.warn("Unable to delete {}", user1Keytab);
 +    }
 +
 +    // Create some new users
 +    kdc.createPrincipal(user1Keytab, user1);
 +
 +    final String qualifiedUser1 = kdc.qualifyUser(user1);
 +
 +    // Log in as user1
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user1, user1Keytab.getAbsolutePath());
 +    log.info("Logged in as {}", user1);
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        // Indirectly creates this user when we use it
 +        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 +        log.info("Created connector as {}", qualifiedUser1);
 +
 +        // The new user should have no system permissions
 +        for (SystemPermission perm : SystemPermission.values()) {
 +          assertFalse(conn.securityOperations().hasSystemPermission(qualifiedUser1, perm));
 +        }
 +
 +        return null;
 +      }
 +    });
 +
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        conn.securityOperations().grantSystemPermission(qualifiedUser1, SystemPermission.CREATE_TABLE);
 +        return null;
 +      }
 +    });
 +
 +    // Switch back to the original user
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user1, user1Keytab.getAbsolutePath());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 +
 +        // Shouldn't throw an exception since we granted the create table permission
 +        final String table = testName.getMethodName() + "_user_table";
 +        conn.tableOperations().create(table);
 +
 +        // Make sure we can actually use the table we made
 +        BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
 +        Mutation m = new Mutation("a");
 +        m.put("b", "c", "d");
 +        bw.addMutation(m);
 +        bw.close();
 +
 +        conn.tableOperations().compact(table, new CompactionConfig().setWait(true).setFlush(true));
 +        return null;
 +      }
 +    });
 +  }
 +
 +  @Test
 +  public void testUserPrivilegesForTable() throws Exception {
 +    String user1 = testName.getMethodName();
 +    final File user1Keytab = new File(kdc.getKeytabDir(), user1 + ".keytab");
 +    if (user1Keytab.exists() && !user1Keytab.delete()) {
 +      log.warn("Unable to delete {}", user1Keytab);
 +    }
 +
 +    // Create some new users -- cannot contain realm
 +    kdc.createPrincipal(user1Keytab, user1);
 +
 +    final String qualifiedUser1 = kdc.qualifyUser(user1);
 +
 +    // Log in as user1
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(qualifiedUser1, user1Keytab.getAbsolutePath());
 +    log.info("Logged in as {}", user1);
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        // Indirectly creates this user when we use it
 +        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 +        log.info("Created connector as {}", qualifiedUser1);
 +
 +        // The new user should have no system permissions
 +        for (SystemPermission perm : SystemPermission.values()) {
 +          assertFalse(conn.securityOperations().hasSystemPermission(qualifiedUser1, perm));
 +        }
 +        return null;
 +      }
 +
 +    });
 +
 +    final String table = testName.getMethodName() + "_user_table";
 +    final String viz = "viz";
 +
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        conn.tableOperations().create(table);
 +        // Give our unprivileged user permission on the table we made for them
 +        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.READ);
 +        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.WRITE);
 +        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.ALTER_TABLE);
 +        conn.securityOperations().grantTablePermission(qualifiedUser1, table, TablePermission.DROP_TABLE);
 +        conn.securityOperations().changeUserAuthorizations(qualifiedUser1, new Authorizations(viz));
 +        return null;
 +      }
 +    });
 +
 +    // Switch back to the original user
 +    ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(qualifiedUser1, user1Keytab.getAbsolutePath());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(qualifiedUser1, new KerberosToken());
 +
 +        // Make sure we can actually use the table we made
 +
 +        // Write data
 +        final long ts = 1000L;
 +        BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
 +        Mutation m = new Mutation("a");
 +        m.put("b", "c", new ColumnVisibility(viz.getBytes()), ts, "d");
 +        bw.addMutation(m);
 +        bw.close();
 +
 +        // Compact
 +        conn.tableOperations().compact(table, new CompactionConfig().setWait(true).setFlush(true));
 +
 +        // Alter
 +        conn.tableOperations().setProperty(table, Property.TABLE_BLOOM_ENABLED.getKey(), "true");
 +
 +        // Read (and proper authorizations)
 +        Scanner s = conn.createScanner(table, new Authorizations(viz));
 +        Iterator<Entry<Key,Value>> iter = s.iterator();
 +        assertTrue("No results from iterator", iter.hasNext());
 +        Entry<Key,Value> entry = iter.next();
 +        assertEquals(new Key("a", "b", "c", viz, ts), entry.getKey());
 +        assertEquals(new Value("d".getBytes()), entry.getValue());
 +        assertFalse("Had more results from iterator", iter.hasNext());
 +        return null;
 +      }
 +    });
 +  }
 +
 +  @Test
 +  public void testDelegationToken() throws Exception {
 +    final String tableName = getUniqueNames(1)[0];
 +
 +    // Login as the "root" user
 +    UserGroupInformation root = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    final int numRows = 100, numColumns = 10;
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    final AuthenticationToken delegationToken = root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        conn.tableOperations().create(tableName);
 +        BatchWriter bw = conn.createBatchWriter(tableName, new BatchWriterConfig());
 +        for (int r = 0; r < numRows; r++) {
 +          Mutation m = new Mutation(Integer.toString(r));
 +          for (int c = 0; c < numColumns; c++) {
 +            String col = Integer.toString(c);
 +            m.put(col, col, col);
 +          }
 +          bw.addMutation(m);
 +        }
 +        bw.close();
 +
 +        return conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +      }
 +    });
 +
 +    // The above login with keytab doesn't have a way to logout, so make a fake user that won't have krb credentials
 +    UserGroupInformation userWithoutPrivs = UserGroupInformation.createUserForTesting("fake_user", new String[0]);
 +    int recordsSeen = userWithoutPrivs.doAs(new PrivilegedExceptionAction<Integer>() {
 +      @Override
 +      public Integer run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), delegationToken);
 +
 +        BatchScanner bs = conn.createBatchScanner(tableName, Authorizations.EMPTY, 2);
 +        bs.setRanges(Collections.singleton(new Range()));
 +        int recordsSeen = Iterables.size(bs);
 +        bs.close();
 +        return recordsSeen;
 +      }
 +    });
 +
 +    assertEquals(numRows * numColumns, recordsSeen);
 +  }
 +
 +  @Test
 +  public void testDelegationTokenAsDifferentUser() throws Exception {
 +    // Login as the "root" user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    final AuthenticationToken delegationToken = ugi.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +        return conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +      }
 +    });
 +
 +    // make a fake user that won't have krb credentials
 +    UserGroupInformation userWithoutPrivs = UserGroupInformation.createUserForTesting("fake_user", new String[0]);
 +    try {
 +      // Use the delegation token to try to log in as a different user
 +      userWithoutPrivs.doAs(new PrivilegedExceptionAction<Void>() {
 +        @Override
 +        public Void run() throws Exception {
 +          mac.getConnector("some_other_user", delegationToken);
 +          return null;
 +        }
 +      });
 +      fail("Using a delegation token as a different user should throw an exception");
 +    } catch (UndeclaredThrowableException e) {
 +      Throwable cause = e.getCause();
 +      assertNotNull(cause);
 +      // We should get an AccumuloSecurityException from trying to use a delegation token for the wrong user
 +      assertTrue("Expected cause to be AccumuloSecurityException, but was " + cause.getClass(), cause instanceof AccumuloSecurityException);
 +    }
 +  }
 +
 +  @Test
 +  public void testGetDelegationTokenDenied() throws Exception {
 +    String newUser = testName.getMethodName();
 +    final File newUserKeytab = new File(kdc.getKeytabDir(), newUser + ".keytab");
 +    if (newUserKeytab.exists() && !newUserKeytab.delete()) {
 +      log.warn("Unable to delete {}", newUserKeytab);
 +    }
 +
 +    // Create a new user
 +    kdc.createPrincipal(newUserKeytab, newUser);
 +
 +    final String qualifiedNewUser = kdc.qualifyUser(newUser);
 +
 +    // Login as a normal user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(qualifiedNewUser, newUserKeytab.getAbsolutePath());
 +    try {
 +      ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +        @Override
 +        public Void run() throws Exception {
 +          // As the new (unprivileged) user, try to get a delegation token
 +          Connector conn = mac.getConnector(qualifiedNewUser, new KerberosToken());
 +          log.info("Created connector as {}", qualifiedNewUser);
 +          assertEquals(qualifiedNewUser, conn.whoami());
 +
 +          conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +          return null;
 +        }
 +      });
 +      fail("Expected getDelegationToken to be denied for an unprivileged user");
 +    } catch (UndeclaredThrowableException ex) {
 +      assertTrue(ex.getCause() instanceof AccumuloSecurityException);
 +    }
 +  }
 +
 +  @Test
 +  public void testRestartedMasterReusesSecretKey() throws Exception {
 +    // Login as the "root" user
 +    UserGroupInformation root = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    final AuthenticationToken delegationToken1 = root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        AuthenticationToken token = conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +
 +        assertTrue("Could not get tables with delegation token", mac.getConnector(rootUser.getPrincipal(), token).tableOperations().list().size() > 0);
 +
 +        return token;
 +      }
 +    });
 +
 +    log.info("Stopping master");
 +    mac.getClusterControl().stop(ServerType.MASTER);
 +    Thread.sleep(5000);
 +    log.info("Restarting master");
 +    mac.getClusterControl().start(ServerType.MASTER);
 +
 +    // Make sure our original token is still good
 +    root.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), delegationToken1);
 +
 +        assertTrue("Could not get tables with delegation token", conn.tableOperations().list().size() > 0);
 +
 +        return null;
 +      }
 +    });
 +
 +    // Get a new token, so we can compare the keyId on the second to the first
 +    final AuthenticationToken delegationToken2 = root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        AuthenticationToken token = conn.securityOperations().getDelegationToken(new DelegationTokenConfig());
 +
 +        assertTrue("Could not get tables with delegation token", mac.getConnector(rootUser.getPrincipal(), token).tableOperations().list().size() > 0);
 +
 +        return token;
 +      }
 +    });
 +
 +    // A restarted master should reuse the same secret key if it hasn't expired (1 day by default)
 +    DelegationTokenImpl dt1 = (DelegationTokenImpl) delegationToken1;
 +    DelegationTokenImpl dt2 = (DelegationTokenImpl) delegationToken2;
 +    assertEquals(dt1.getIdentifier().getKeyId(), dt2.getIdentifier().getKeyId());
 +  }
 +
 +  @Test(expected = AccumuloException.class)
 +  public void testDelegationTokenWithInvalidLifetime() throws Throwable {
 +    // Login as the "root" user
 +    UserGroupInformation root = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    try {
 +      root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +        @Override
 +        public AuthenticationToken run() throws Exception {
 +          Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +          log.info("Created connector as {}", rootUser.getPrincipal());
 +          assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +          // Should fail
 +          return conn.securityOperations().getDelegationToken(new DelegationTokenConfig().setTokenLifetime(Long.MAX_VALUE, TimeUnit.MILLISECONDS));
 +        }
 +      });
 +    } catch (UndeclaredThrowableException e) {
 +      Throwable cause = e.getCause();
 +      if (null != cause) {
 +        throw cause;
 +      } else {
 +        throw e;
 +      }
 +    }
 +  }
 +
 +  @Test
 +  public void testDelegationTokenWithReducedLifetime() throws Throwable {
 +    // Login as the "root" user
 +    UserGroupInformation root = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    // As the "root" user, open up the connection and get a delegation token
 +    final AuthenticationToken dt = root.doAs(new PrivilegedExceptionAction<AuthenticationToken>() {
 +      @Override
 +      public AuthenticationToken run() throws Exception {
 +        Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +        log.info("Created connector as {}", rootUser.getPrincipal());
 +        assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +        return conn.securityOperations().getDelegationToken(new DelegationTokenConfig().setTokenLifetime(5, TimeUnit.MINUTES));
 +      }
 +    });
 +
 +    AuthenticationTokenIdentifier identifier = ((DelegationTokenImpl) dt).getIdentifier();
 +    assertTrue("Expected identifier to expire in no more than 5 minutes: " + identifier,
 +        identifier.getExpirationDate() - identifier.getIssueDate() <= (5 * 60 * 1000));
 +  }
 +
 +  @Test(expected = AccumuloSecurityException.class)
 +  public void testRootUserHasIrrevocablePermissions() throws Exception {
 +    // Login as the client (provided to `accumulo init` as the "root" user)
 +    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +
 +    final Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +
 +    // The server-side implementation should prevent the revocation of the 'root' user's systems permissions
 +    // because once they're gone, it's possible that they could never be restored.
 +    conn.securityOperations().revokeSystemPermission(rootUser.getPrincipal(), SystemPermission.GRANT);
 +  }
 +
 +  /**
 +   * Creates a table, adds a record to it, and then compacts the table. This is a simple way to make sure that the system user exists (since the master does
 +   * an RPC to the tserver, which will create the system user if it doesn't already exist).
 +   */
 +  private void createTableWithDataAndCompact(Connector conn) throws TableNotFoundException, AccumuloSecurityException, AccumuloException, TableExistsException {
 +    final String table = testName.getMethodName() + "_table";
 +    conn.tableOperations().create(table);
 +    BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
 +    Mutation m = new Mutation("a");
 +    m.put("b", "c", "d");
 +    bw.addMutation(m);
 +    bw.close();
 +    conn.tableOperations().compact(table, new CompactionConfig().setFlush(true).setWait(true));
 +  }
 +}
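Several of the tests above catch UndeclaredThrowableException from ugi.doAs(...) and inspect getCause() for the underlying AccumuloSecurityException. A minimal, self-contained sketch of that unwrap pattern follows; it uses a JDK dynamic proxy to reproduce the same wrapping that UserGroupInformation.doAs performs, and the Securityish exception is a hypothetical stand-in for AccumuloSecurityException:

```java
import java.lang.reflect.Proxy;
import java.lang.reflect.UndeclaredThrowableException;

public class UnwrapExample {
  /** Hypothetical checked exception standing in for AccumuloSecurityException. */
  static class Securityish extends Exception {
    Securityish(String msg) {
      super(msg);
    }
  }

  /** Like the callers of PrivilegedExceptionAction, this interface declares no checked exceptions. */
  interface Action {
    Object run();
  }

  /** Invokes an action that throws an undeclared checked exception, then unwraps the real cause. */
  static String demo() {
    // The handler throws a checked exception the interface doesn't declare, so the
    // JDK wraps it in UndeclaredThrowableException -- the same shape the tests see
    // from ugi.doAs when run() fails with a checked exception.
    Action action = (Action) Proxy.newProxyInstance(Action.class.getClassLoader(),
        new Class<?>[] {Action.class}, (proxy, method, args) -> {
          throw new Securityish("denied");
        });
    try {
      action.run();
      return "no exception";
    } catch (UndeclaredThrowableException e) {
      // Unwrap and inspect the cause, mirroring the assertions in the tests above
      Throwable cause = e.getCause();
      return cause.getClass().getSimpleName() + ": " + cause.getMessage();
    }
  }

  public static void main(String[] args) {
    System.out.println(demo()); // prints "Securityish: denied"
  }
}
```

This is also why testDelegationTokenWithInvalidLifetime rethrows e.getCause() when it is non-null: the @Test(expected = ...) annotation must match the unwrapped cause, not the wrapper.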

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
index 2337f91,0000000..7264a42
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
@@@ -1,482 -1,0 +1,485 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.File;
 +import java.io.FileWriter;
 +import java.io.IOException;
 +import java.net.ConnectException;
 +import java.net.InetAddress;
 +import java.nio.ByteBuffer;
 +import java.util.Collections;
 +import java.util.HashMap;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Properties;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.rpc.UGIAssumingTransport;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 +import org.apache.accumulo.harness.MiniClusterHarness;
 +import org.apache.accumulo.harness.TestingKdc;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 +import org.apache.accumulo.proxy.Proxy;
 +import org.apache.accumulo.proxy.ProxyServer;
 +import org.apache.accumulo.proxy.thrift.AccumuloProxy;
 +import org.apache.accumulo.proxy.thrift.AccumuloProxy.Client;
 +import org.apache.accumulo.proxy.thrift.AccumuloSecurityException;
 +import org.apache.accumulo.proxy.thrift.ColumnUpdate;
 +import org.apache.accumulo.proxy.thrift.Key;
 +import org.apache.accumulo.proxy.thrift.KeyValue;
 +import org.apache.accumulo.proxy.thrift.ScanOptions;
 +import org.apache.accumulo.proxy.thrift.ScanResult;
 +import org.apache.accumulo.proxy.thrift.TimeType;
 +import org.apache.accumulo.proxy.thrift.WriterOptions;
 +import org.apache.accumulo.server.util.PortUtils;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.apache.thrift.protocol.TCompactProtocol;
 +import org.apache.thrift.transport.TSaslClientTransport;
 +import org.apache.thrift.transport.TSocket;
 +import org.apache.thrift.transport.TTransportException;
 +import org.hamcrest.Description;
 +import org.hamcrest.TypeSafeMatcher;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
 +import org.junit.Rule;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.junit.rules.ExpectedException;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * Tests impersonation of clients by the proxy over SASL
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class KerberosProxyIT extends AccumuloITBase {
 +  private static final Logger log = LoggerFactory.getLogger(KerberosProxyIT.class);
 +
 +  @Rule
 +  public ExpectedException thrown = ExpectedException.none();
 +
 +  private static TestingKdc kdc;
 +  private static String krbEnabledForITs = null;
 +  private static File proxyKeytab;
 +  private static String hostname, proxyPrimary, proxyPrincipal;
 +
 +  @Override
 +  protected int defaultTimeoutSeconds() {
 +    return 60 * 5;
 +  }
 +
 +  @BeforeClass
 +  public static void startKdc() throws Exception {
 +    kdc = new TestingKdc();
 +    kdc.start();
 +    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
 +    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
 +    }
 +
 +    // Create a principal+keytab for the proxy
 +    proxyKeytab = new File(kdc.getKeytabDir(), "proxy.keytab");
 +    hostname = InetAddress.getLocalHost().getCanonicalHostName();
 +    // Set the primary because the client needs to know it
 +    proxyPrimary = "proxy";
 +    // Qualify with an instance
 +    proxyPrincipal = proxyPrimary + "/" + hostname;
 +    kdc.createPrincipal(proxyKeytab, proxyPrincipal);
 +    // Tack on the realm too
 +    proxyPrincipal = kdc.qualifyUser(proxyPrincipal);
 +  }
 +
 +  @AfterClass
 +  public static void stopKdc() throws Exception {
 +    if (null != kdc) {
 +      kdc.stop();
 +    }
 +    if (null != krbEnabledForITs) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
 +    }
 +    UserGroupInformation.setConfiguration(new Configuration(false));
 +  }
 +
 +  private MiniAccumuloClusterImpl mac;
 +  private Process proxyProcess;
 +  private int proxyPort;
 +
 +  @Before
 +  public void startMac() throws Exception {
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +    mac = harness.create(getClass().getName(), testName.getMethodName(), new PasswordToken("unused"), new MiniClusterConfigurationCallback() {
 +
 +      @Override
 +      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
 +        cfg.setNumTservers(1);
 +        Map<String,String> siteCfg = cfg.getSiteConfig();
 +        // Allow the proxy to impersonate the client user, but no one else
 +        siteCfg.put(Property.INSTANCE_RPC_SASL_ALLOWED_USER_IMPERSONATION.getKey(), proxyPrincipal + ":" + kdc.getRootUser().getPrincipal());
 +        siteCfg.put(Property.INSTANCE_RPC_SASL_ALLOWED_HOST_IMPERSONATION.getKey(), "*");
 +        cfg.setSiteConfig(siteCfg);
 +      }
 +
 +    }, kdc);
 +
 +    mac.start();
 +    MiniAccumuloConfigImpl cfg = mac.getConfig();
 +
 +    // Generate Proxy configuration and start the proxy
 +    proxyProcess = startProxy(cfg);
 +
 +    // Enable Kerberos auth
 +    Configuration conf = new Configuration(false);
 +    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +    UserGroupInformation.setConfiguration(conf);
 +
 +    boolean success = false;
 +    ClusterUser rootUser = kdc.getRootUser();
 +    // Rely on the junit timeout rule
 +    while (!success) {
 +      UserGroupInformation ugi;
 +      try {
 +        ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +      } catch (IOException ex) {
 +        log.info("Login as root is failing", ex);
 +        Thread.sleep(3000);
 +        continue;
 +      }
 +
 +      TSocket socket = new TSocket(hostname, proxyPort);
 +      log.info("Connecting to proxy with server primary '" + proxyPrimary + "' running on " + hostname);
 +      TSaslClientTransport transport = new TSaslClientTransport("GSSAPI", null, proxyPrimary, hostname, Collections.singletonMap("javax.security.sasl.qop",
 +          "auth"), null, socket);
 +
 +      final UGIAssumingTransport ugiTransport = new UGIAssumingTransport(transport, ugi);
 +
 +      try {
 +        // UGI transport will perform the doAs for us
 +        ugiTransport.open();
 +        success = true;
 +      } catch (TTransportException e) {
 +        Throwable cause = e.getCause();
 +        if (cause instanceof ConnectException) {
 +          log.info("Proxy not yet up, waiting");
 +          Thread.sleep(3000);
 +          proxyProcess = checkProxyAndRestart(proxyProcess, cfg);
 +          continue;
 +        }
 +      } finally {
 +        if (null != ugiTransport) {
 +          ugiTransport.close();
 +        }
 +      }
 +    }
 +
 +    assertTrue("Failed to connect to the proxy repeatedly", success);
 +  }
 +
 +  /**
 +   * Starts the thrift proxy using the given MAConfig.
 +   *
 +   * @param cfg
 +   *          configuration for MAC
 +   * @return Process for the thrift proxy
 +   */
 +  private Process startProxy(MiniAccumuloConfigImpl cfg) throws IOException {
 +    File proxyPropertiesFile = generateNewProxyConfiguration(cfg);
 +    return mac.exec(Proxy.class, "-p", proxyPropertiesFile.getCanonicalPath());
 +  }
 +
 +  /**
 +   * Generates a proxy configuration file for the MAC instance. Implicitly updates {@link #proxyPort} when choosing the port the proxy will listen on.
 +   *
 +   * @param cfg
 +   *          The MAC configuration
 +   * @return The proxy's configuration file
 +   */
 +  private File generateNewProxyConfiguration(MiniAccumuloConfigImpl cfg) throws IOException {
 +    // Chooses a new port for the proxy as a side effect
 +    proxyPort = PortUtils.getRandomFreePort();
 +
 +    // Proxy configuration
 +    File proxyPropertiesFile = new File(cfg.getConfDir(), "proxy.properties");
 +    if (proxyPropertiesFile.exists()) {
 +      assertTrue("Failed to delete proxy.properties file", proxyPropertiesFile.delete());
 +    }
 +    Properties proxyProperties = new Properties();
 +    proxyProperties.setProperty("useMockInstance", "false");
 +    proxyProperties.setProperty("useMiniAccumulo", "false");
 +    proxyProperties.setProperty("protocolFactory", TCompactProtocol.Factory.class.getName());
 +    proxyProperties.setProperty("tokenClass", KerberosToken.class.getName());
 +    proxyProperties.setProperty("port", Integer.toString(proxyPort));
 +    proxyProperties.setProperty("maxFrameSize", "16M");
 +    proxyProperties.setProperty("instance", mac.getInstanceName());
 +    proxyProperties.setProperty("zookeepers", mac.getZooKeepers());
 +    proxyProperties.setProperty("thriftServerType", "sasl");
 +    proxyProperties.setProperty("kerberosPrincipal", proxyPrincipal);
 +    proxyProperties.setProperty("kerberosKeytab", proxyKeytab.getCanonicalPath());
 +
 +    // Write out the proxy.properties file
 +    FileWriter writer = new FileWriter(proxyPropertiesFile);
 +    proxyProperties.store(writer, "Configuration for Accumulo proxy");
 +    writer.close();
 +
 +    log.info("Created configuration for proxy listening on {}", proxyPort);
 +
 +    return proxyPropertiesFile;
 +  }
 +
 +  /**
 +   * Restarts the thrift proxy if the previous instance is no longer running. If the proxy is still running, this method does nothing.
 +   *
 +   * @param proxy
 +   *          The thrift proxy process
 +   * @param cfg
 +   *          The MAC configuration
 +   * @return The process for the Proxy, either the previous instance or a new instance.
 +   */
 +  private Process checkProxyAndRestart(Process proxy, MiniAccumuloConfigImpl cfg) throws IOException {
 +    try {
 +      // Get the return code
 +      proxy.exitValue();
 +    } catch (IllegalThreadStateException e) {
 +      log.info("Proxy is still running");
 +      // OK, process is still running, don't restart
 +      return proxy;
 +    }
 +
 +    log.info("Restarting proxy because it is no longer alive");
 +
 +    // We got a return code which means the proxy exited. We'll assume this is because it failed
 +    // to bind the port due to the known race condition between choosing a port and having the
 +    // proxy bind it.
 +    return startProxy(cfg);
 +  }
 +
 +  @After
 +  public void stopMac() throws Exception {
 +    if (null != proxyProcess) {
 +      log.info("Destroying proxy process");
 +      proxyProcess.destroy();
 +      log.info("Waiting for proxy termination");
 +      proxyProcess.waitFor();
 +      log.info("Proxy terminated");
 +    }
 +    if (null != mac) {
 +      mac.stop();
 +    }
 +  }
 +
 +  @Test
 +  public void testProxyClient() throws Exception {
 +    ClusterUser rootUser = kdc.getRootUser();
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +
 +    TSocket socket = new TSocket(hostname, proxyPort);
 +    log.info("Connecting to proxy with server primary '" + proxyPrimary + "' running on " + hostname);
 +    TSaslClientTransport transport = new TSaslClientTransport("GSSAPI", null, proxyPrimary, hostname, Collections.singletonMap("javax.security.sasl.qop",
 +        "auth"), null, socket);
 +
 +    final UGIAssumingTransport ugiTransport = new UGIAssumingTransport(transport, ugi);
 +
 +    // UGI transport will perform the doAs for us
 +    ugiTransport.open();
 +
 +    AccumuloProxy.Client.Factory factory = new AccumuloProxy.Client.Factory();
 +    Client client = factory.getClient(new TCompactProtocol(ugiTransport), new TCompactProtocol(ugiTransport));
 +
 +    // Will fail if the proxy cannot impersonate the client
 +    ByteBuffer login = client.login(rootUser.getPrincipal(), Collections.<String,String> emptyMap());
 +
 +    // For all of the below actions, the proxy user doesn't have permission to do any of them, but the client user does.
 +    // The fact that any of them actually run tells us that impersonation is working.
 +
 +    // Create a table
 +    String table = "table";
 +    if (!client.tableExists(login, table)) {
 +      client.createTable(login, table, true, TimeType.MILLIS);
 +    }
 +
 +    // Write two records to the table
 +    String writer = client.createWriter(login, table, new WriterOptions());
 +    Map<ByteBuffer,List<ColumnUpdate>> updates = new HashMap<>();
 +    ColumnUpdate update = new ColumnUpdate(ByteBuffer.wrap("cf1".getBytes(UTF_8)), ByteBuffer.wrap("cq1".getBytes(UTF_8)));
 +    update.setValue(ByteBuffer.wrap("value1".getBytes(UTF_8)));
 +    updates.put(ByteBuffer.wrap("row1".getBytes(UTF_8)), Collections.<ColumnUpdate> singletonList(update));
 +    update = new ColumnUpdate(ByteBuffer.wrap("cf2".getBytes(UTF_8)), ByteBuffer.wrap("cq2".getBytes(UTF_8)));
 +    update.setValue(ByteBuffer.wrap("value2".getBytes(UTF_8)));
 +    updates.put(ByteBuffer.wrap("row2".getBytes(UTF_8)), Collections.<ColumnUpdate> singletonList(update));
 +    client.update(writer, updates);
 +
 +    // Flush and close the writer
 +    client.flush(writer);
 +    client.closeWriter(writer);
 +
 +    // Open a scanner to the table
 +    String scanner = client.createScanner(login, table, new ScanOptions());
 +    ScanResult results = client.nextK(scanner, 10);
 +    assertEquals(2, results.getResults().size());
 +
 +    // Check the first key-value
 +    KeyValue kv = results.getResults().get(0);
 +    Key k = kv.key;
 +    ByteBuffer v = kv.value;
 +    assertEquals(ByteBuffer.wrap("row1".getBytes(UTF_8)), k.row);
 +    assertEquals(ByteBuffer.wrap("cf1".getBytes(UTF_8)), k.colFamily);
 +    assertEquals(ByteBuffer.wrap("cq1".getBytes(UTF_8)), k.colQualifier);
 +    assertEquals(ByteBuffer.wrap(new byte[0]), k.colVisibility);
 +    assertEquals(ByteBuffer.wrap("value1".getBytes(UTF_8)), v);
 +
 +    // And then the second
 +    kv = results.getResults().get(1);
 +    k = kv.key;
 +    v = kv.value;
 +    assertEquals(ByteBuffer.wrap("row2".getBytes(UTF_8)), k.row);
 +    assertEquals(ByteBuffer.wrap("cf2".getBytes(UTF_8)), k.colFamily);
 +    assertEquals(ByteBuffer.wrap("cq2".getBytes(UTF_8)), k.colQualifier);
 +    assertEquals(ByteBuffer.wrap(new byte[0]), k.colVisibility);
 +    assertEquals(ByteBuffer.wrap("value2".getBytes(UTF_8)), v);
 +
 +    // Close the scanner
 +    client.closeScanner(scanner);
 +
 +    ugiTransport.close();
 +  }
 +
 +  @Test
 +  public void testDisallowedClientForImpersonation() throws Exception {
 +    String user = testName.getMethodName();
 +    File keytab = new File(kdc.getKeytabDir(), user + ".keytab");
 +    kdc.createPrincipal(keytab, user);
 +
 +    // Login as the new user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user, keytab.getAbsolutePath());
 +
 +    log.info("Logged in as " + ugi);
 +
 +    // Expect an AccumuloSecurityException
 +    thrown.expect(AccumuloSecurityException.class);
 +    // Error msg would look like:
 +    //
 +    // org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_CREDENTIALS for user Principal in credentials object should match kerberos
 +    // principal.
 +    // Expected 'proxy/hw10447.local@EXAMPLE.COM' but was 'testDisallowedClientForImpersonation@EXAMPLE.COM' - Username or Password is Invalid)
 +    thrown.expect(new ThriftExceptionMatchesPattern(".*Error BAD_CREDENTIALS.*"));
 +    thrown.expect(new ThriftExceptionMatchesPattern(".*Expected '" + proxyPrincipal + "' but was '" + kdc.qualifyUser(user) + "'.*"));
 +
 +    TSocket socket = new TSocket(hostname, proxyPort);
 +    log.info("Connecting to proxy with server primary '" + proxyPrimary + "' running on " + hostname);
 +
 +    // Should fail to open the transport
 +    TSaslClientTransport transport = new TSaslClientTransport("GSSAPI", null, proxyPrimary, hostname, Collections.singletonMap("javax.security.sasl.qop",
 +        "auth"), null, socket);
 +
 +    final UGIAssumingTransport ugiTransport = new UGIAssumingTransport(transport, ugi);
 +
 +    // UGI transport will perform the doAs for us
 +    ugiTransport.open();
 +
 +    AccumuloProxy.Client.Factory factory = new AccumuloProxy.Client.Factory();
 +    Client client = factory.getClient(new TCompactProtocol(ugiTransport), new TCompactProtocol(ugiTransport));
 +
 +    // Will fail because the proxy can't impersonate this user (per the site configuration)
 +    try {
 +      client.login(kdc.qualifyUser(user), Collections.<String,String> emptyMap());
 +    } finally {
 +      if (null != ugiTransport) {
 +        ugiTransport.close();
 +      }
 +    }
 +  }
 +
 +  @Test
 +  public void testMismatchPrincipals() throws Exception {
 +    ClusterUser rootUser = kdc.getRootUser();
 +    // Should get an AccumuloSecurityException and the given message
 +    thrown.expect(AccumuloSecurityException.class);
 +    thrown.expect(new ThriftExceptionMatchesPattern(ProxyServer.RPC_ACCUMULO_PRINCIPAL_MISMATCH_MSG));
 +
 +    // Make a new user
 +    String user = testName.getMethodName();
 +    File keytab = new File(kdc.getKeytabDir(), user + ".keytab");
 +    kdc.createPrincipal(keytab, user);
 +
 +    // Login as the new user
 +    UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(user, keytab.getAbsolutePath());
 +
 +    log.info("Logged in as " + ugi);
 +
 +    TSocket socket = new TSocket(hostname, proxyPort);
 +    log.info("Connecting to proxy with server primary '" + proxyPrimary + "' running on " + hostname);
 +
 +    // Should fail to open the transport
 +    TSaslClientTransport transport = new TSaslClientTransport("GSSAPI", null, proxyPrimary, hostname, Collections.singletonMap("javax.security.sasl.qop",
 +        "auth"), null, socket);
 +
 +    final UGIAssumingTransport ugiTransport = new UGIAssumingTransport(transport, ugi);
 +
 +    // UGI transport will perform the doAs for us
 +    ugiTransport.open();
 +
 +    AccumuloProxy.Client.Factory factory = new AccumuloProxy.Client.Factory();
 +    Client client = factory.getClient(new TCompactProtocol(ugiTransport), new TCompactProtocol(ugiTransport));
 +
 +    // The proxy needs to recognize that the requested principal isn't the same as the SASL principal and fail
 +    // Accumulo itself would let this through -- we need to rely on the proxy to reject the request before talking to accumulo
 +    try {
 +      client.login(rootUser.getPrincipal(), Collections.<String,String> emptyMap());
 +    } finally {
 +      if (null != ugiTransport) {
 +        ugiTransport.close();
 +      }
 +    }
 +  }
 +
 +  private static class ThriftExceptionMatchesPattern extends TypeSafeMatcher<AccumuloSecurityException> {
 +    private String pattern;
 +
 +    public ThriftExceptionMatchesPattern(String pattern) {
 +      this.pattern = pattern;
 +    }
 +
 +    @Override
 +    protected boolean matchesSafely(AccumuloSecurityException item) {
 +      return item.isSetMsg() && item.msg.matches(pattern);
 +    }
 +
 +    @Override
 +    public void describeTo(Description description) {
 +      description.appendText("matches pattern ").appendValue(pattern);
 +    }
 +
 +    @Override
 +    protected void describeMismatchSafely(AccumuloSecurityException item, Description mismatchDescription) {
 +      mismatchDescription.appendText("does not match");
 +    }
 +  }
 +}
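
A standalone sketch (not part of the patch) of the detail `ThriftExceptionMatchesPattern` relies on: `String.matches()` succeeds only when the regex matches the entire message, which is why the expected patterns above are wrapped in `.*`.

```java
// Sketch (hypothetical class, not from the patch) showing the full-string
// semantics of String.matches() that the test's Hamcrest matcher depends on.
public class FullMatchDemo {
    static boolean matchesErrorPattern(String msg, String pattern) {
        // Mirrors ThriftExceptionMatchesPattern.matchesSafely()
        return msg != null && msg.matches(pattern);
    }

    public static void main(String[] args) {
        String msg = "Error BAD_CREDENTIALS for user foo - Username or Password is Invalid";
        // An unanchored pattern only matches a substring, so matches() fails...
        System.out.println(matchesErrorPattern(msg, "Error BAD_CREDENTIALS"));     // prints "false"
        // ...while the ".*"-wrapped pattern covers the whole string.
        System.out.println(matchesErrorPattern(msg, ".*Error BAD_CREDENTIALS.*")); // prints "true"
    }
}
```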

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
index 142a8bb,0000000..0e60501
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
@@@ -1,188 -1,0 +1,191 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertEquals;
 +
 +import java.util.Map;
 +import java.util.Map.Entry;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.TableExistsException;
 +import org.apache.accumulo.core.client.TableNotFoundException;
 +import org.apache.accumulo.core.client.admin.CompactionConfig;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.PartialKey;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 +import org.apache.accumulo.harness.MiniClusterHarness;
 +import org.apache.accumulo.harness.TestingKdc;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.minikdc.MiniKdc;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.google.common.collect.Iterables;
 +
 +/**
 + * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class KerberosRenewalIT extends AccumuloITBase {
 +  private static final Logger log = LoggerFactory.getLogger(KerberosRenewalIT.class);
 +
 +  private static TestingKdc kdc;
 +  private static String krbEnabledForITs = null;
 +  private static ClusterUser rootUser;
 +
 +  private static final long TICKET_LIFETIME = 6 * 60 * 1000; // Anything less seems to fail when generating the ticket
 +  private static final long TICKET_TEST_LIFETIME = 8 * 60 * 1000; // Run a test for 8 mins
 +  private static final long TEST_DURATION = 9 * 60 * 1000; // The test should finish within 9 mins
 +
 +  @BeforeClass
 +  public static void startKdc() throws Exception {
 +    // 30s renewal time window
 +    kdc = new TestingKdc(TestingKdc.computeKdcDir(), TestingKdc.computeKeytabDir(), TICKET_LIFETIME);
 +    kdc.start();
 +    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
 +    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
 +    }
 +    rootUser = kdc.getRootUser();
 +  }
 +
 +  @AfterClass
 +  public static void stopKdc() throws Exception {
 +    if (null != kdc) {
 +      kdc.stop();
 +    }
 +    if (null != krbEnabledForITs) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
 +    }
 +  }
 +
 +  @Override
 +  public int defaultTimeoutSeconds() {
 +    return (int) TEST_DURATION / 1000;
 +  }
 +
 +  private MiniAccumuloClusterImpl mac;
 +
 +  @Before
 +  public void startMac() throws Exception {
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +    mac = harness.create(this, new PasswordToken("unused"), kdc, new MiniClusterConfigurationCallback() {
 +
 +      @Override
 +      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
 +        Map<String,String> site = cfg.getSiteConfig();
 +        site.put(Property.INSTANCE_ZK_TIMEOUT.getKey(), "15s");
 +        // Reduce the period just to make sure we trigger renewal fast
 +        site.put(Property.GENERAL_KERBEROS_RENEWAL_PERIOD.getKey(), "5s");
 +        cfg.setSiteConfig(site);
 +      }
 +
 +    });
 +
 +    mac.getConfig().setNumTservers(1);
 +    mac.start();
 +    // Enabled kerberos auth
 +    Configuration conf = new Configuration(false);
 +    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +    UserGroupInformation.setConfiguration(conf);
 +  }
 +
 +  @After
 +  public void stopMac() throws Exception {
 +    if (null != mac) {
 +      mac.stop();
 +    }
 +  }
 +
 +  // Intentionally setting the Test annotation timeout. We do not want to scale the timeout.
 +  @Test(timeout = TEST_DURATION)
 +  public void testReadAndWriteThroughTicketLifetime() throws Exception {
 +    // Attempt to use Accumulo for a duration of time that exceeds the Kerberos ticket lifetime.
 +    // This is a functional test to verify that Accumulo services renew their ticket.
 +    // If the test doesn't finish on its own, this signifies that Accumulo services failed
 +    // and the test should fail. If Accumulo services renew their ticket, the test case
 +    // should exit gracefully on its own.
 +
 +    // Login as the "root" user
 +    UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +    log.info("Logged in as {}", rootUser.getPrincipal());
 +
 +    Connector conn = mac.getConnector(rootUser.getPrincipal(), new KerberosToken());
 +    log.info("Created connector as {}", rootUser.getPrincipal());
 +    assertEquals(rootUser.getPrincipal(), conn.whoami());
 +
 +    long duration = 0;
 +    long last = System.currentTimeMillis();
 +    // Make sure we have a couple renewals happen
 +    while (duration < TICKET_TEST_LIFETIME) {
 +      // Create a table, write a record, compact, read the record, drop the table.
 +      createReadWriteDrop(conn);
 +      // Wait a bit after
 +      Thread.sleep(5000);
 +
 +      // Update the duration
 +      long now = System.currentTimeMillis();
 +      duration += now - last;
 +      last = now;
 +    }
 +  }
 +
 +  /**
 +   * Creates a table, adds a record to it, and then compacts the table. A simple way to make sure that the system user exists (since the master does an RPC to
 +   * the tserver which will create the system user if it doesn't already exist).
 +   */
 +  private void createReadWriteDrop(Connector conn) throws TableNotFoundException, AccumuloSecurityException, AccumuloException, TableExistsException {
 +    final String table = testName.getMethodName() + "_table";
 +    conn.tableOperations().create(table);
 +    BatchWriter bw = conn.createBatchWriter(table, new BatchWriterConfig());
 +    Mutation m = new Mutation("a");
 +    m.put("b", "c", "d");
 +    bw.addMutation(m);
 +    bw.close();
 +    conn.tableOperations().compact(table, new CompactionConfig().setFlush(true).setWait(true));
 +    Scanner s = conn.createScanner(table, Authorizations.EMPTY);
 +    Entry<Key,Value> entry = Iterables.getOnlyElement(s);
 +    assertEquals("Did not find the expected key", 0, new Key("a", "b", "c").compareTo(entry.getKey(), PartialKey.ROW_COLFAM_COLQUAL));
 +    assertEquals("d", entry.getValue().toString());
 +    conn.tableOperations().delete(table);
 +  }
 +}
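
The loop in `testReadAndWriteThroughTicketLifetime` simply accumulates elapsed wall-clock time between passes of the workload until the target lifetime is exceeded. A minimal sketch of that pattern (hypothetical helper names, not from the patch; the real test's workload is `createReadWriteDrop` and its pause is 5 seconds):

```java
// Sketch of the wall-clock loop used by the renewal test: repeat the workload
// until at least lifetimeMs of real time has elapsed, however long each pass takes.
public class WallClockLoop {
    static int runFor(long lifetimeMs, Runnable workload) {
        int passes = 0;
        long duration = 0;
        long last = System.currentTimeMillis();
        while (duration < lifetimeMs) {
            workload.run();              // e.g. create/write/compact/read/drop a table
            passes++;
            try {
                Thread.sleep(5);         // brief pause between passes
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            long now = System.currentTimeMillis();
            duration += now - last;      // accumulate elapsed wall-clock time
            last = now;
        }
        return passes;
    }

    public static void main(String[] args) {
        // Runs for at least 50ms of wall-clock time; completes at least one pass.
        System.out.println(runFor(50, () -> {}) >= 1); // prints "true"
    }
}
```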


[02/10] accumulo git commit: ACCUMULO-4423 Annotate integration tests with categories

Posted by el...@apache.org.

Branch: refs/heads/1.8
Commit: 661dac33648fb8bb311434720563c322611c1f12
Parents: 2be85ad
Author: Josh Elser <el...@apache.org>
Authored: Tue Aug 30 16:23:48 2016 -0400
Committer: Josh Elser <el...@apache.org>
Committed: Tue Aug 30 23:27:09 2016 -0400

----------------------------------------------------------------------
 TESTING.md                                      | 25 ++++++++++++--------
 pom.xml                                         | 24 ++++++++++++++++++-
 .../accumulo/harness/AccumuloClusterIT.java     |  3 +++
 .../accumulo/harness/SharedMiniClusterIT.java   |  3 +++
 .../org/apache/accumulo/test/NamespacesIT.java  |  3 +++
 .../test/categories/AnyClusterTest.java         | 25 ++++++++++++++++++++
 .../test/categories/MiniClusterOnlyTest.java    | 24 +++++++++++++++++++
 .../accumulo/test/categories/package-info.java  | 21 ++++++++++++++++
 .../accumulo/test/functional/ClassLoaderIT.java |  3 +++
 .../test/functional/ConfigurableMacIT.java      |  3 +++
 .../accumulo/test/functional/KerberosIT.java    |  3 +++
 .../test/functional/KerberosProxyIT.java        |  3 +++
 .../test/functional/KerberosRenewalIT.java      |  3 +++
 .../accumulo/test/functional/PermissionsIT.java |  3 +++
 .../accumulo/test/functional/TableIT.java       |  3 +++
 .../test/replication/KerberosReplicationIT.java |  3 +++
 trace/pom.xml                                   |  6 +++++
 17 files changed, 147 insertions(+), 11 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/TESTING.md
----------------------------------------------------------------------
diff --git a/TESTING.md b/TESTING.md
index de484ee..125110b 100644
--- a/TESTING.md
+++ b/TESTING.md
@@ -47,23 +47,27 @@ but are checking for regressions that were previously seen in the codebase. Thes
 resources, at least another gigabyte of memory over what Maven itself requires. As such, it's recommended to have at
 least 3-4GB of free memory and 10GB of free disk space.
 
-## Accumulo for testing
+## Test Categories
 
-The primary reason these tests take so much longer than the unit tests is that most are using an Accumulo instance to
-perform the test. It's a necessary evil; however, there are things we can do to improve this.
+Accumulo uses JUnit Category annotations to categorize certain integration tests based on their runtime requirements.
+Presently there are three different categories:
 
-## MiniAccumuloCluster
+### MiniAccumuloCluster (`MiniClusterOnlyTest`)
 
-By default, these tests will use a MiniAccumuloCluster which is a multi-process "implementation" of Accumulo, managed
-through Java interfaces. This MiniAccumuloCluster has the ability to use the local filesystem or Apache Hadoop's
+These tests use MiniAccumuloCluster (MAC) which is a multi-process "implementation" of Accumulo, managed
+through Java APIs. This MiniAccumuloCluster has the ability to use the local filesystem or Apache Hadoop's
 MiniDFSCluster, as well as starting one to many tablet servers. MiniAccumuloCluster tends to be a very useful tool in
 that it can automatically provide a workable instance that mimics how an actual deployment functions.
 
 The downside of using MiniAccumuloCluster is that a significant portion of each test is now devoted to starting and
 stopping the MiniAccumuloCluster.  While this is a surefire way to isolate tests from interfering with one another, it
-increases the actual runtime of the test by, on average, 10x.
+increases the actual runtime of the test by, on average, 10x. Sometimes the tests require the use of MAC because the
+test is destructive or requires some special environment setup (e.g. Kerberos).
 
-## Standalone Cluster
+By default, these tests are run during the `integration-test` lifecycle phase using `mvn verify`. These tests can
+also be run at the `test` lifecycle phase using `mvn package -Pminicluster-unit-tests`.
+
+### Standalone Cluster (`AnyClusterTest`)
 
 An alternative to the MiniAccumuloCluster for testing, a standalone Accumulo cluster can also be configured for use by
 most tests. This requires a manual step of building and deploying the Accumulo cluster by hand. The build can then be
@@ -75,7 +79,9 @@ Use of a standalone cluster can be enabled using system properties on the Maven
 providing a Java properties file on the Maven command line. The use of a properties file is recommended since it is
 typically a fixed file per standalone cluster you want to run the tests against.
 
-### Configuration
+These tests will always run during the `integration-test` lifecycle phase using `mvn verify`.
+
+## Configuration for Standalone clusters
 
 The following properties can be used to configure a standalone cluster:
 
@@ -128,4 +134,3 @@ at a time, for example the [Continuous Ingest][1] and [Randomwalk test][2] suite
 [3]: https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html
 [4]: http://maven.apache.org/surefire/maven-surefire-plugin/
 [5]: http://maven.apache.org/surefire/maven-failsafe-plugin/
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 0f57f62..d6393d2 100644
--- a/pom.xml
+++ b/pom.xml
@@ -115,6 +115,10 @@
     <url>https://builds.apache.org/view/A-D/view/Accumulo/</url>
   </ciManagement>
   <properties>
+    <accumulo.anyClusterTests>org.apache.accumulo.test.categories.AnyClusterTest</accumulo.anyClusterTests>
+    <accumulo.it.excludedGroups />
+    <accumulo.it.groups>${accumulo.anyClusterTests},${accumulo.miniclusterTests}</accumulo.it.groups>
+    <accumulo.miniclusterTests>org.apache.accumulo.test.categories.MiniClusterOnlyTest</accumulo.miniclusterTests>
     <!-- used for filtering the java source with the current version -->
     <accumulo.release.version>${project.version}</accumulo.release.version>
     <assembly.tarLongFileMode>posix</assembly.tarLongFileMode>
@@ -240,7 +244,7 @@
       <dependency>
         <groupId>junit</groupId>
         <artifactId>junit</artifactId>
-        <version>4.11</version>
+        <version>4.12</version>
       </dependency>
       <dependency>
         <groupId>log4j</groupId>
@@ -1006,6 +1010,10 @@
               <goal>integration-test</goal>
               <goal>verify</goal>
             </goals>
+            <configuration>
+              <excludeGroups>${accumulo.it.excludedGroups}</excludeGroups>
+              <groups>${accumulo.it.groups}</groups>
+            </configuration>
           </execution>
         </executions>
       </plugin>
@@ -1399,5 +1407,19 @@
         </pluginManagement>
       </build>
     </profile>
+    <profile>
+      <id>only-minicluster-tests</id>
+      <properties>
+        <accumulo.it.excludedGroups>${accumulo.anyClusterTests}</accumulo.it.excludedGroups>
+        <accumulo.it.groups>${accumulo.miniclusterTests}</accumulo.it.groups>
+      </properties>
+    </profile>
+    <profile>
+      <id>standalone-capable-tests</id>
+      <properties>
+        <accumulo.it.excludedGroups>${accumulo.miniclusterTests}</accumulo.it.excludedGroups>
+        <accumulo.it.groups>${accumulo.anyClusterTests}</accumulo.it.groups>
+      </properties>
+    </profile>
   </profiles>
 </project>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java b/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
index e2b35f4..436ceb5 100644
--- a/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
+++ b/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
@@ -43,6 +43,7 @@ import org.apache.accumulo.harness.conf.AccumuloMiniClusterConfiguration;
 import org.apache.accumulo.harness.conf.StandaloneAccumuloClusterConfiguration;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.categories.AnyClusterTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -52,12 +53,14 @@ import org.junit.AfterClass;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.BeforeClass;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 /**
  * General Integration-Test base class that provides access to an Accumulo instance for testing. This instance could be MAC or a standalone instance.
  */
+@Category(AnyClusterTest.class)
 public abstract class AccumuloClusterIT extends AccumuloIT implements MiniClusterConfigurationCallback, ClusterUsers {
   private static final Logger log = LoggerFactory.getLogger(AccumuloClusterIT.class);
   private static final String TRUE = Boolean.toString(true);
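
Annotating only the abstract base class works because JUnit's `@Category` annotation is meta-annotated with `@Inherited` (as of JUnit 4.12 — likely one reason this patch also bumps the JUnit dependency from 4.11 to 4.12 in the pom). A minimal sketch using a stand-in annotation (not JUnit's actual class) to demonstrate the inheritance behavior:

```java
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for JUnit's @Category, showing why annotating the abstract base
// class (as done for AccumuloClusterIT above) is enough: @Inherited makes the
// class-level annotation visible on every concrete subclass via reflection.
public class CategoryInheritanceDemo {
    @Inherited
    @Retention(RetentionPolicy.RUNTIME)
    @interface Category {
        Class<?>[] value();
    }

    interface AnyClusterTest {}

    @Category(AnyClusterTest.class)
    abstract static class BaseIT {}

    // No annotation here, yet the category is still discoverable.
    static class ConcreteIT extends BaseIT {}

    public static void main(String[] args) {
        System.out.println(ConcreteIT.class.isAnnotationPresent(Category.class)); // prints "true"
    }
}
```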

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java b/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
index f66a192..644055f 100644
--- a/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
+++ b/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
@@ -31,9 +31,11 @@ import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -48,6 +50,7 @@ import org.slf4j.LoggerFactory;
  * a method annotated with the {@link org.junit.BeforeClass} JUnit annotation and {@link #stopMiniCluster()} in a method annotated with the
  * {@link org.junit.AfterClass} JUnit annotation.
  */
+@Category(MiniClusterOnlyTest.class)
 public abstract class SharedMiniClusterIT extends AccumuloIT implements ClusterUsers {
   private static final Logger log = LoggerFactory.getLogger(SharedMiniClusterIT.class);
   public static final String TRUE = Boolean.toString(true);

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java b/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
index aaa6a6e..6ec2127 100644
--- a/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
@@ -77,14 +77,17 @@ import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.examples.simple.constraints.NumericValueConstraint;
 import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.io.Text;
 import org.junit.After;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 // Testing default namespace configuration with inheritance requires altering the system state and restoring it back to normal
 // Punt on this for now and just let it use a minicluster.
+@Category(MiniClusterOnlyTest.class)
 public class NamespacesIT extends AccumuloClusterIT {
 
   private Connector c;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java b/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java
new file mode 100644
index 0000000..765057e
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.categories;
+
+/**
+ * Interface to be used with JUnit Category annotation to denote that the IntegrationTest can be used with any kind of cluster (a MiniAccumuloCluster or a
+ * StandaloneAccumuloCluster).
+ */
+public interface AnyClusterTest {
+
+}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java b/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
new file mode 100644
index 0000000..1a972ef
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.categories;
+
+/**
+ * Interface to be used with JUnit Category annotation to denote that the IntegrationTest requires the use of a MiniAccumuloCluster.
+ */
+public interface MiniClusterOnlyTest {
+
+}
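
[Editorial aside: the marker interfaces above are empty; JUnit's Category support matches a test class to a group by assignability of the classes named in its @Category annotation. The self-contained sketch below re-implements that matching with plain reflection to illustrate what the failsafe `groups` setting selects. The stub class names and the Category stand-in are hypothetical, not JUnit or failsafe itself.]

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.ArrayList;
import java.util.List;

public class CategoryFilterDemo {
  // Empty marker interfaces, mirroring the two category types added by this commit.
  interface AnyClusterTest {}
  interface MiniClusterOnlyTest {}

  // Minimal stand-in for org.junit.experimental.categories.Category.
  @Retention(RetentionPolicy.RUNTIME)
  @interface Category {
    Class<?>[] value();
  }

  @Category(MiniClusterOnlyTest.class)
  static class ClassLoaderITStub {}

  @Category(AnyClusterTest.class)
  static class ReadWriteITStub {}

  // Keep only the classes whose @Category lists the requested group --
  // the selection that failsafe's <groups> configuration performs for ITs.
  static List<Class<?>> inGroup(Class<?> group, Class<?>... tests) {
    List<Class<?>> selected = new ArrayList<>();
    for (Class<?> test : tests) {
      Category c = test.getAnnotation(Category.class);
      if (c == null) {
        continue;
      }
      for (Class<?> declared : c.value()) {
        if (group.isAssignableFrom(declared)) {
          selected.add(test);
          break;
        }
      }
    }
    return selected;
  }

  public static void main(String[] args) {
    List<Class<?>> mini = inGroup(MiniClusterOnlyTest.class,
        ClassLoaderITStub.class, ReadWriteITStub.class);
    // Only the mini-only stub matches the MiniClusterOnlyTest group.
    System.out.println(mini.size()); // prints 1
  }
}
```

Because matching uses isAssignableFrom, a category interface that extended another would also match the parent group; the two interfaces in this commit are deliberately unrelated, so the groups stay disjoint.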

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/categories/package-info.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/categories/package-info.java b/test/src/test/java/org/apache/accumulo/test/categories/package-info.java
new file mode 100644
index 0000000..e7071fc
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/categories/package-info.java
@@ -0,0 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * JUnit categories for the various types of Accumulo integration tests.
+ */
+package org.apache.accumulo.test.categories;
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java b/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
index 4b51bd2..d09e2a6 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
@@ -40,13 +40,16 @@ import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.harness.AccumuloClusterIT;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.hamcrest.CoreMatchers;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
+@Category(MiniClusterOnlyTest.class)
 public class ClassLoaderIT extends AccumuloClusterIT {
 
   private static final long ZOOKEEPER_PROPAGATION_TIME = 10 * 1000;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java b/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
index 53eb8e4..6d04610 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
@@ -40,12 +40,14 @@ import org.apache.accumulo.minicluster.MiniAccumuloCluster;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ZooKeeperBindException;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.accumulo.test.util.CertUtils;
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.zookeeper.KeeperException;
 import org.junit.After;
 import org.junit.Before;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -53,6 +55,7 @@ import org.slf4j.LoggerFactory;
  * General Integration-Test base class that provides access to a {@link MiniAccumuloCluster} for testing. Tests using these typically do very disruptive things
  * to the instance, and require specific configuration. Most tests don't need this level of control and should extend {@link AccumuloClusterIT} instead.
  */
+@Category(MiniClusterOnlyTest.class)
 public class ConfigurableMacIT extends AccumuloIT {
   public static final Logger log = LoggerFactory.getLogger(ConfigurableMacIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java b/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
index 612718d..a3da827 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
@@ -68,6 +68,7 @@ import org.apache.accumulo.harness.TestingKdc;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.minikdc.MiniKdc;
@@ -77,6 +78,7 @@ import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -86,6 +88,7 @@ import com.google.common.collect.Sets;
 /**
 * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java b/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
index af6310c..2bef539 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
@@ -56,6 +56,7 @@ import org.apache.accumulo.proxy.thrift.ScanResult;
 import org.apache.accumulo.proxy.thrift.TimeType;
 import org.apache.accumulo.proxy.thrift.WriterOptions;
 import org.apache.accumulo.server.util.PortUtils;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -71,6 +72,7 @@ import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Rule;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.junit.rules.ExpectedException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -78,6 +80,7 @@ import org.slf4j.LoggerFactory;
 /**
  * Tests impersonation of clients by the proxy over SASL
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosProxyIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosProxyIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java b/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
index 28c1dfc..07e0662 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
@@ -45,6 +45,7 @@ import org.apache.accumulo.harness.MiniClusterHarness;
 import org.apache.accumulo.harness.TestingKdc;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.minikdc.MiniKdc;
@@ -54,6 +55,7 @@ import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -62,6 +64,7 @@ import com.google.common.collect.Iterables;
 /**
 * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosRenewalIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosRenewalIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java b/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
index 4aea354..6967a48 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
@@ -53,15 +53,18 @@ import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.io.Text;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 // This test verifies the default permissions so a clean instance must be used. A shared instance might
 // not be representative of a fresh installation.
+@Category(MiniClusterOnlyTest.class)
 public class PermissionsIT extends AccumuloClusterIT {
   private static final Logger log = LoggerFactory.getLogger(PermissionsIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java b/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
index 3061b87..0bfdc00 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
@@ -39,15 +39,18 @@ import org.apache.accumulo.harness.AccumuloClusterIT;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
 import org.hamcrest.CoreMatchers;
 import org.junit.Assume;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 import com.google.common.collect.Iterators;
 
+@Category(MiniClusterOnlyTest.class)
 public class TableIT extends AccumuloClusterIT {
 
   @Override

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java b/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
index be9e320..933dfb8 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
@@ -41,6 +41,7 @@ import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ProcessReference;
 import org.apache.accumulo.server.replication.ReplicaSystemFactory;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.accumulo.test.functional.KerberosIT;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
@@ -54,6 +55,7 @@ import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -62,6 +64,7 @@ import com.google.common.collect.Iterators;
 /**
  * Ensure that replication occurs using keytabs instead of password (not to mention SASL)
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosReplicationIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/trace/pom.xml
----------------------------------------------------------------------
diff --git a/trace/pom.xml b/trace/pom.xml
index 2b79288..2d93a84 100644
--- a/trace/pom.xml
+++ b/trace/pom.xml
@@ -34,5 +34,11 @@
       <groupId>org.apache.htrace</groupId>
       <artifactId>htrace-core</artifactId>
     </dependency>
+    <!-- Otherwise will see complaints from failsafe WRT groups -->
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 </project>


[10/10] accumulo git commit: Merge branch '1.8'

Posted by el...@apache.org.
Merge branch '1.8'


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/15956097
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/15956097
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/15956097

Branch: refs/heads/master
Commit: 15956097990172b449fe8010475a7c27ff42ae7d
Parents: 673fdb9 d28a3ee
Author: Josh Elser <el...@apache.org>
Authored: Wed Aug 31 00:23:54 2016 -0400
Committer: Josh Elser <el...@apache.org>
Committed: Wed Aug 31 00:23:54 2016 -0400

----------------------------------------------------------------------
 TESTING.md                                      | 57 +++++++++++---------
 pom.xml                                         | 21 +++++++-
 .../harness/AccumuloClusterHarness.java         |  3 ++
 .../accumulo/harness/SharedMiniClusterBase.java |  3 ++
 .../org/apache/accumulo/test/NamespacesIT.java  |  3 ++
 .../test/categories/AnyClusterTest.java         | 25 +++++++++
 .../test/categories/MiniClusterOnlyTest.java    | 24 +++++++++
 .../accumulo/test/categories/package-info.java  | 21 ++++++++
 .../accumulo/test/functional/ClassLoaderIT.java |  3 ++
 .../test/functional/ConfigurableMacBase.java    |  3 ++
 .../accumulo/test/functional/KerberosIT.java    |  3 ++
 .../test/functional/KerberosProxyIT.java        |  3 ++
 .../test/functional/KerberosRenewalIT.java      |  3 ++
 .../accumulo/test/functional/PermissionsIT.java |  3 ++
 .../accumulo/test/functional/TableIT.java       |  3 ++
 .../test/replication/KerberosReplicationIT.java |  3 ++
 16 files changed, 155 insertions(+), 26 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/15956097/pom.xml
----------------------------------------------------------------------


[09/10] accumulo git commit: Merge branch '1.7' into 1.8

Posted by el...@apache.org.
Merge branch '1.7' into 1.8


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/d28a3ee3
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/d28a3ee3
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/d28a3ee3

Branch: refs/heads/1.8
Commit: d28a3ee3e154cc21b38523c254398bb01b7dbec7
Parents: 6027997 661dac3
Author: Josh Elser <el...@apache.org>
Authored: Wed Aug 31 00:00:44 2016 -0400
Committer: Josh Elser <el...@apache.org>
Committed: Wed Aug 31 00:00:44 2016 -0400

----------------------------------------------------------------------
 TESTING.md                                      | 57 +++++++++++---------
 pom.xml                                         | 21 +++++++-
 .../harness/AccumuloClusterHarness.java         |  3 ++
 .../accumulo/harness/SharedMiniClusterBase.java |  3 ++
 .../org/apache/accumulo/test/NamespacesIT.java  |  3 ++
 .../test/categories/AnyClusterTest.java         | 25 +++++++++
 .../test/categories/MiniClusterOnlyTest.java    | 24 +++++++++
 .../accumulo/test/categories/package-info.java  | 21 ++++++++
 .../accumulo/test/functional/ClassLoaderIT.java |  3 ++
 .../test/functional/ConfigurableMacBase.java    |  3 ++
 .../accumulo/test/functional/KerberosIT.java    |  3 ++
 .../test/functional/KerberosProxyIT.java        |  3 ++
 .../test/functional/KerberosRenewalIT.java      |  3 ++
 .../accumulo/test/functional/PermissionsIT.java |  3 ++
 .../accumulo/test/functional/TableIT.java       |  3 ++
 .../test/replication/KerberosReplicationIT.java |  3 ++
 trace/pom.xml                                   |  1 +
 17 files changed, 156 insertions(+), 26 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/TESTING.md
----------------------------------------------------------------------
diff --cc TESTING.md
index 2195108,125110b..9799397
--- a/TESTING.md
+++ b/TESTING.md
@@@ -91,7 -79,9 +79,26 @@@ Use of a standalone cluster can be enab
  providing a Java properties file on the Maven command line. The use of a properties file is recommended since it is
  typically a fixed file per standalone cluster you want to run the tests against.
  
- ### Configuration
+ These tests will always run during the `integration-test` lifecycle phase using `mvn verify`.
+ 
++### Performance tests
++
++Performance tests refer to a small subset of integration tests which are not activated by default. These tests allow
++developers to write tests which specifically exercise expected performance which may be dependent on the available
++resources of the host machine. Normal integration tests should be capable of running anywhere with a lower-bound on
++available memory.
++
++These tests are designated using the JUnit Category annotation with the `PerformanceTest` interface in the
++accumulo-test module. See the `PerformanceTest` interface for more information on how to use this to write your
++own performance test.
++
++To invoke the performance tests, activate the `performanceTests` Maven profile in addition to the integration-test
++or verify Maven lifecycle. For example `mvn verify -PperformanceTests` would invoke all of the integration tests:
++both normal integration tests and the performance tests. There is presently no way to invoke only the performance
++tests without the rest of the integration tests.
++
++
+ ## Configuration for Standalone clusters
  
  The following properties can be used to configure a standalone cluster:
  

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/pom.xml
----------------------------------------------------------------------
diff --cc pom.xml
index 77e5597,d6393d2..8106dff
--- a/pom.xml
+++ b/pom.xml
@@@ -116,8 -115,10 +116,12 @@@
      <url>https://builds.apache.org/view/A-D/view/Accumulo/</url>
    </ciManagement>
    <properties>
+     <accumulo.anyClusterTests>org.apache.accumulo.test.categories.AnyClusterTest</accumulo.anyClusterTests>
 -    <accumulo.it.excludedGroups />
++    <accumulo.it.excludedGroups>${accumulo.performanceTests}</accumulo.it.excludedGroups>
+     <accumulo.it.groups>${accumulo.anyClusterTests},${accumulo.miniclusterTests}</accumulo.it.groups>
+     <accumulo.miniclusterTests>org.apache.accumulo.test.categories.MiniClusterOnlyTest</accumulo.miniclusterTests>
 +    <!-- Interface used to separate tests with JUnit category -->
 +    <accumulo.performanceTests>org.apache.accumulo.test.PerformanceTest</accumulo.performanceTests>
      <!-- used for filtering the java source with the current version -->
      <accumulo.release.version>${project.version}</accumulo.release.version>
      <assembly.tarLongFileMode>posix</assembly.tarLongFileMode>
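
[Editorial aside: as a gloss on the `accumulo.it.groups` / `accumulo.it.excludedGroups` properties in the pom.xml hunk above -- failsafe runs a test when its category is listed in `groups` and not in `excludedGroups`, which is how the performance tests stay off by default. The sketch below models that set arithmetic in plain Java; the `SomeStandaloneIT` / `SomePerformanceIT` names are hypothetical, and this is an illustration, not failsafe's implementation.]

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class GroupSelectionDemo {
  // Mirrors accumulo.it.groups and accumulo.it.excludedGroups from the pom,
  // using short category names instead of fully-qualified class names.
  static final Set<String> GROUPS =
      new HashSet<>(Arrays.asList("AnyClusterTest", "MiniClusterOnlyTest"));
  static final Set<String> EXCLUDED =
      new HashSet<>(Arrays.asList("PerformanceTest"));

  // A test runs when its category is included and not excluded.
  static List<String> select(Map<String, String> testToCategory) {
    List<String> run = new ArrayList<>();
    for (Map.Entry<String, String> e : testToCategory.entrySet()) {
      String category = e.getValue();
      if (GROUPS.contains(category) && !EXCLUDED.contains(category)) {
        run.add(e.getKey());
      }
    }
    return run;
  }

  public static void main(String[] args) {
    Map<String, String> tests = new LinkedHashMap<>();
    tests.put("NamespacesIT", "MiniClusterOnlyTest");  // categorized in this commit
    tests.put("SomeStandaloneIT", "AnyClusterTest");   // hypothetical
    tests.put("SomePerformanceIT", "PerformanceTest"); // hypothetical
    System.out.println(select(tests)); // prints [NamespacesIT, SomeStandaloneIT]
  }
}
```

Activating a profile such as `performanceTests` corresponds to clearing EXCLUDED here, which is why that profile runs the performance tests in addition to, not instead of, the normal integration tests.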

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
index 7d7b73a,0000000..70d8dc7
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
+++ b/test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
@@@ -1,338 -1,0 +1,341 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.harness;
 +
 +import static com.google.common.base.Preconditions.checkState;
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +import static org.junit.Assert.fail;
 +
 +import java.io.IOException;
 +
 +import org.apache.accumulo.cluster.AccumuloCluster;
 +import org.apache.accumulo.cluster.ClusterControl;
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.cluster.ClusterUsers;
 +import org.apache.accumulo.cluster.standalone.StandaloneAccumuloCluster;
 +import org.apache.accumulo.core.client.ClientConfiguration;
 +import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.admin.SecurityOperations;
 +import org.apache.accumulo.core.client.admin.TableOperations;
 +import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.harness.conf.AccumuloClusterConfiguration;
 +import org.apache.accumulo.harness.conf.AccumuloClusterPropertyConfiguration;
 +import org.apache.accumulo.harness.conf.AccumuloMiniClusterConfiguration;
 +import org.apache.accumulo.harness.conf.StandaloneAccumuloClusterConfiguration;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
++import org.apache.accumulo.test.categories.AnyClusterTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.FileSystem;
 +import org.apache.hadoop.fs.Path;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Assume;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * General Integration-Test base class that provides access to an Accumulo instance for testing. This instance could be MAC or a standalone instance.
 + */
++@Category(AnyClusterTest.class)
 +public abstract class AccumuloClusterHarness extends AccumuloITBase implements MiniClusterConfigurationCallback, ClusterUsers {
 +  private static final Logger log = LoggerFactory.getLogger(AccumuloClusterHarness.class);
 +  private static final String TRUE = Boolean.toString(true);
 +
 +  public static enum ClusterType {
 +    MINI, STANDALONE;
 +
 +    public boolean isDynamic() {
 +      return this == MINI;
 +    }
 +  }
 +
 +  private static boolean initialized = false;
 +
 +  protected static AccumuloCluster cluster;
 +  protected static ClusterType type;
 +  protected static AccumuloClusterPropertyConfiguration clusterConf;
 +  protected static TestingKdc krb;
 +
 +  @BeforeClass
 +  public static void setUp() throws Exception {
 +    clusterConf = AccumuloClusterPropertyConfiguration.get();
 +    type = clusterConf.getClusterType();
 +
 +    if (ClusterType.MINI == type && TRUE.equals(System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION))) {
 +      krb = new TestingKdc();
 +      krb.start();
 +      log.info("MiniKdc started");
 +    }
 +
 +    initialized = true;
 +  }
 +
 +  @AfterClass
 +  public static void tearDownKdc() throws Exception {
 +    if (null != krb) {
 +      krb.stop();
 +    }
 +  }
 +
 +  /**
 +   * The {@link TestingKdc} used for this {@link AccumuloCluster}. Might be null.
 +   */
 +  public static TestingKdc getKdc() {
 +    return krb;
 +  }
 +
 +  @Before
 +  public void setupCluster() throws Exception {
 +    // Before we try to instantiate the cluster, check to see if the test even wants to run against this type of cluster
 +    Assume.assumeTrue(canRunTest(type));
 +
 +    switch (type) {
 +      case MINI:
 +        MiniClusterHarness miniClusterHarness = new MiniClusterHarness();
 +        // Intrinsically performs the callback to let tests alter MiniAccumuloConfig and core-site.xml
 +        MiniAccumuloClusterImpl impl = miniClusterHarness.create(this, getAdminToken(), krb);
 +        cluster = impl;
 +        // MAC makes a ClientConf for us, just set it
 +        ((AccumuloMiniClusterConfiguration) clusterConf).setClientConf(impl.getClientConfig());
 +        // Login as the "root" user
 +        if (null != krb) {
 +          ClusterUser rootUser = krb.getRootUser();
 +          // Log in the 'client' user
 +          UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +        }
 +        break;
 +      case STANDALONE:
 +        StandaloneAccumuloClusterConfiguration conf = (StandaloneAccumuloClusterConfiguration) clusterConf;
 +        ClientConfiguration clientConf = conf.getClientConf();
 +        StandaloneAccumuloCluster standaloneCluster = new StandaloneAccumuloCluster(conf.getInstance(), clientConf, conf.getTmpDirectory(), conf.getUsers(),
 +            conf.getAccumuloServerUser());
 +        // If these are provided in the configuration, pass them into the cluster
 +        standaloneCluster.setAccumuloHome(conf.getAccumuloHome());
 +        standaloneCluster.setClientAccumuloConfDir(conf.getClientAccumuloConfDir());
 +        standaloneCluster.setServerAccumuloConfDir(conf.getServerAccumuloConfDir());
 +        standaloneCluster.setHadoopConfDir(conf.getHadoopConfDir());
 +
 +        // For SASL, we need to get the Hadoop configuration files as well otherwise UGI will log in as SIMPLE instead of KERBEROS
 +        Configuration hadoopConfiguration = standaloneCluster.getHadoopConfiguration();
 +        if (clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
 +          UserGroupInformation.setConfiguration(hadoopConfiguration);
 +          // Login as the admin user to start the tests
 +          UserGroupInformation.loginUserFromKeytab(conf.getAdminPrincipal(), conf.getAdminKeytab().getAbsolutePath());
 +        }
 +
 +        // Set the implementation
 +        cluster = standaloneCluster;
 +        break;
 +      default:
 +        throw new RuntimeException("Unhandled type");
 +    }
 +
 +    if (type.isDynamic()) {
 +      cluster.start();
 +    } else {
 +      log.info("Removing tables which appear to be from a previous test run");
 +      cleanupTables();
 +      log.info("Removing users which appear to be from a previous test run");
 +      cleanupUsers();
 +    }
 +
 +    switch (type) {
 +      case MINI:
 +        if (null != krb) {
 +          final String traceTable = Property.TRACE_TABLE.getDefaultValue();
 +          final ClusterUser systemUser = krb.getAccumuloServerUser(), rootUser = krb.getRootUser();
 +
 +          // Login as the trace user
 +          // Open a connector as the system user (ensures the user will exist for us to assign permissions to)
 +          UserGroupInformation.loginUserFromKeytab(systemUser.getPrincipal(), systemUser.getKeytab().getAbsolutePath());
 +          Connector conn = cluster.getConnector(systemUser.getPrincipal(), new KerberosToken());
 +
 +          // Then, log back in as the "root" user and do the grant
 +          UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +          conn = getConnector();
 +
 +          // Create the trace table
 +          conn.tableOperations().create(traceTable);
 +
 +          // Trace user (which is the same kerberos principal as the system user, but using a normal KerberosToken) needs
 +          // to have the ability to read, write and alter the trace table
 +          conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.READ);
 +          conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.WRITE);
 +          conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.ALTER_TABLE);
 +        }
 +        break;
 +      default:
 +        // do nothing
 +    }
 +  }
 +
 +  public void cleanupTables() throws Exception {
 +    final String tablePrefix = this.getClass().getSimpleName() + "_";
 +    final TableOperations tops = getConnector().tableOperations();
 +    for (String table : tops.list()) {
 +      if (table.startsWith(tablePrefix)) {
 +        log.debug("Removing table {}", table);
 +        tops.delete(table);
 +      }
 +    }
 +  }
 +
 +  public void cleanupUsers() throws Exception {
 +    final String userPrefix = this.getClass().getSimpleName();
 +    final SecurityOperations secOps = getConnector().securityOperations();
 +    for (String user : secOps.listLocalUsers()) {
 +      if (user.startsWith(userPrefix)) {
 +        log.info("Dropping local user {}", user);
 +        secOps.dropLocalUser(user);
 +      }
 +    }
 +  }
 +
 +  @After
 +  public void teardownCluster() throws Exception {
 +    if (null != cluster) {
 +      if (type.isDynamic()) {
 +        cluster.stop();
 +      } else {
 +        log.info("Removing tables which appear to be from the current test");
 +        cleanupTables();
 +        log.info("Removing users which appear to be from the current test");
 +        cleanupUsers();
 +      }
 +    }
 +  }
 +
 +  public static AccumuloCluster getCluster() {
 +    checkState(initialized);
 +    return cluster;
 +  }
 +
 +  public static ClusterControl getClusterControl() {
 +    checkState(initialized);
 +    return cluster.getClusterControl();
 +  }
 +
 +  public static ClusterType getClusterType() {
 +    checkState(initialized);
 +    return type;
 +  }
 +
 +  public static String getAdminPrincipal() {
 +    checkState(initialized);
 +    return clusterConf.getAdminPrincipal();
 +  }
 +
 +  public static AuthenticationToken getAdminToken() {
 +    checkState(initialized);
 +    return clusterConf.getAdminToken();
 +  }
 +
 +  @Override
 +  public ClusterUser getAdminUser() {
 +    switch (type) {
 +      case MINI:
 +        if (null == krb) {
 +          PasswordToken passwordToken = (PasswordToken) getAdminToken();
 +          return new ClusterUser(getAdminPrincipal(), new String(passwordToken.getPassword(), UTF_8));
 +        }
 +        return krb.getRootUser();
 +      case STANDALONE:
 +        return new ClusterUser(getAdminPrincipal(), ((StandaloneAccumuloClusterConfiguration) clusterConf).getAdminKeytab());
 +      default:
 +        throw new RuntimeException("Unknown cluster type");
 +    }
 +  }
 +
 +  @Override
 +  public ClusterUser getUser(int offset) {
 +    switch (type) {
 +      case MINI:
 +        if (null != krb) {
 +          // Defer to the TestingKdc when kerberos is on so we can get the keytab instead of a password
 +          return krb.getClientPrincipal(offset);
 +        } else {
 +          // Come up with a mostly unique name
 +          String principal = getClass().getSimpleName() + "_" + testName.getMethodName() + "_" + offset;
 +          // Username and password are the same
 +          return new ClusterUser(principal, principal);
 +        }
 +      case STANDALONE:
 +        return ((StandaloneAccumuloCluster) cluster).getUser(offset);
 +      default:
 +        throw new RuntimeException("Unknown cluster type");
 +    }
 +  }
 +
 +  public static FileSystem getFileSystem() throws IOException {
 +    checkState(initialized);
 +    return cluster.getFileSystem();
 +  }
 +
 +  public static AccumuloClusterConfiguration getClusterConfiguration() {
 +    checkState(initialized);
 +    return clusterConf;
 +  }
 +
 +  public Connector getConnector() {
 +    try {
 +      String princ = getAdminPrincipal();
 +      AuthenticationToken token = getAdminToken();
 +      log.debug("Creating connector as {} with {}", princ, token);
 +      return cluster.getConnector(princ, token);
 +    } catch (Exception e) {
 +      log.error("Could not connect to Accumulo", e);
 +      fail("Could not connect to Accumulo: " + e.getMessage());
 +
 +      throw new RuntimeException("Could not connect to Accumulo", e);
 +    }
 +  }
 +
 +  // TODO Really don't want this here. Will ultimately need to abstract configuration method away from MAConfig
 +  // and change over to something more generic
 +  @Override
 +  public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {}
 +
 +  /**
 +   * A test may not be capable of running against a given AccumuloCluster. Implementations can override this method to advertise that they cannot run (or do
 +   * not want to run) against the given cluster type.
 +   */
 +  public boolean canRunTest(ClusterType type) {
 +    return true;
 +  }
 +
 +  /**
 +   * Tries to give a reasonable directory which can be used to create temporary files for the test. Makes a basic attempt to create the directory if it does not
 +   * already exist.
 +   *
 +   * @return A directory which can be expected to exist on the Cluster's FileSystem
 +   */
 +  public Path getUsableDir() throws IllegalArgumentException, IOException {
 +    return cluster.getTemporaryPath();
 +  }
 +}

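As a usage sketch of the harness above (illustrative only, not part of this commit; the test class name and test body are hypothetical, while the harness methods shown are from the code above), a test that can run against either cluster type might opt out of standalone runs like this:

```java
// Hypothetical example, not part of this commit: a test built on
// AccumuloClusterHarness that declines to run on standalone clusters.
import org.apache.accumulo.harness.AccumuloClusterHarness;
import org.junit.Test;

public class ExampleIT extends AccumuloClusterHarness {

  @Override
  public boolean canRunTest(ClusterType type) {
    // Advertise that this test only supports the minicluster; the harness
    // calls Assume.assumeTrue(canRunTest(type)) before starting the cluster,
    // so the test is skipped (not failed) on a standalone cluster.
    return ClusterType.MINI == type;
  }

  @Test
  public void example() throws Exception {
    // getConnector() returns a Connector for the admin user on whichever
    // cluster type the harness started.
    getConnector().tableOperations().create(getUniqueNames(1)[0]);
  }
}
```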
http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
index 544b5de,0000000..0e486da
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
+++ b/test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
@@@ -1,204 -1,0 +1,207 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.harness;
 +
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.File;
 +import java.io.IOException;
 +import java.util.Random;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.cluster.ClusterUsers;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.security.UserGroupInformation;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * Convenience class which starts a single MAC instance for a test to leverage.
 + *
 + * There isn't a good way to build this off of the {@link AccumuloClusterHarness} (as would be the logical place) because we need to start the
 + * MiniAccumuloCluster in a static BeforeClass-annotated method. Because it is static and invoked before any other BeforeClass methods in the implementation,
 + * the actual test classes can't expose any information to tell the base class that it is to perform the one-MAC-per-class semantics.
 + *
 + * Implementations of this class must be sure to invoke {@link #startMiniCluster()} or {@link #startMiniClusterWithConfig(MiniClusterConfigurationCallback)} in
 + * a method annotated with the {@link org.junit.BeforeClass} JUnit annotation and {@link #stopMiniCluster()} in a method annotated with the
 + * {@link org.junit.AfterClass} JUnit annotation.
 + */
++@Category(MiniClusterOnlyTest.class)
 +public abstract class SharedMiniClusterBase extends AccumuloITBase implements ClusterUsers {
 +  private static final Logger log = LoggerFactory.getLogger(SharedMiniClusterBase.class);
 +  public static final String TRUE = Boolean.toString(true);
 +
 +  private static String principal = "root";
 +  private static String rootPassword;
 +  private static AuthenticationToken token;
 +  private static MiniAccumuloClusterImpl cluster;
 +  private static TestingKdc krb;
 +
 +  /**
 +   * Starts a MiniAccumuloCluster instance with the default configuration.
 +   */
 +  public static void startMiniCluster() throws Exception {
 +    startMiniClusterWithConfig(MiniClusterConfigurationCallback.NO_CALLBACK);
 +  }
 +
 +  /**
 +   * Starts a MiniAccumuloCluster instance with the default configuration but also provides the caller the opportunity to update the configuration before the
 +   * MiniAccumuloCluster is started.
 +   *
 +   * @param miniClusterCallback
 +   *          A callback to configure the minicluster before it is started.
 +   */
 +  public static void startMiniClusterWithConfig(MiniClusterConfigurationCallback miniClusterCallback) throws Exception {
 +    File baseDir = new File(System.getProperty("user.dir") + "/target/mini-tests");
 +    assertTrue(baseDir.mkdirs() || baseDir.isDirectory());
 +
 +    // Make a shared MAC instance instead of spinning up one per test method
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +
 +    if (TRUE.equals(System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION))) {
 +      krb = new TestingKdc();
 +      krb.start();
 +      // Enable krb auth
 +      Configuration conf = new Configuration(false);
 +      conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +      UserGroupInformation.setConfiguration(conf);
 +      // Login as the client
 +      ClusterUser rootUser = krb.getRootUser();
 +      // Get the krb token
 +      UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +      token = new KerberosToken();
 +    } else {
 +      rootPassword = "rootPasswordShared1";
 +      token = new PasswordToken(rootPassword);
 +    }
 +
 +    cluster = harness.create(SharedMiniClusterBase.class.getName(), System.currentTimeMillis() + "_" + new Random().nextInt(Short.MAX_VALUE), token,
 +        miniClusterCallback, krb);
 +    cluster.start();
 +
 +    if (null != krb) {
 +      final String traceTable = Property.TRACE_TABLE.getDefaultValue();
 +      final ClusterUser systemUser = krb.getAccumuloServerUser(), rootUser = krb.getRootUser();
 +      // Login as the trace user
 +      // Open a connector as the system user (ensures the user will exist for us to assign permissions to)
 +      UserGroupInformation.loginUserFromKeytab(systemUser.getPrincipal(), systemUser.getKeytab().getAbsolutePath());
 +      Connector conn = cluster.getConnector(systemUser.getPrincipal(), new KerberosToken());
 +
 +      // Then, log back in as the "root" user and do the grant
 +      UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +      conn = cluster.getConnector(principal, token);
 +
 +      // Create the trace table
 +      conn.tableOperations().create(traceTable);
 +
 +      // Trace user (which is the same kerberos principal as the system user, but using a normal KerberosToken) needs
 +      // to have the ability to read, write and alter the trace table
 +      conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.READ);
 +      conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.WRITE);
 +      conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.ALTER_TABLE);
 +    }
 +  }
 +
 +  /**
 +   * Stops the MiniAccumuloCluster and related services if they are running.
 +   */
 +  public static void stopMiniCluster() throws Exception {
 +    if (null != cluster) {
 +      try {
 +        cluster.stop();
 +      } catch (Exception e) {
 +        log.error("Failed to stop minicluster", e);
 +      }
 +    }
 +    if (null != krb) {
 +      try {
 +        krb.stop();
 +      } catch (Exception e) {
 +        log.error("Failed to stop KDC", e);
 +      }
 +    }
 +  }
 +
 +  public static String getRootPassword() {
 +    return rootPassword;
 +  }
 +
 +  public static AuthenticationToken getToken() {
 +    if (token instanceof KerberosToken) {
 +      try {
 +        UserGroupInformation.loginUserFromKeytab(getPrincipal(), krb.getRootUser().getKeytab().getAbsolutePath());
 +      } catch (IOException e) {
 +        throw new RuntimeException("Failed to login", e);
 +      }
 +    }
 +    return token;
 +  }
 +
 +  public static String getPrincipal() {
 +    return principal;
 +  }
 +
 +  public static MiniAccumuloClusterImpl getCluster() {
 +    return cluster;
 +  }
 +
 +  public static File getMiniClusterDir() {
 +    return cluster.getConfig().getDir();
 +  }
 +
 +  public static Connector getConnector() {
 +    try {
 +      return getCluster().getConnector(principal, getToken());
 +    } catch (Exception e) {
 +      throw new RuntimeException(e);
 +    }
 +  }
 +
 +  public static TestingKdc getKdc() {
 +    return krb;
 +  }
 +
 +  @Override
 +  public ClusterUser getAdminUser() {
 +    if (null == krb) {
 +      return new ClusterUser(getPrincipal(), getRootPassword());
 +    } else {
 +      return krb.getRootUser();
 +    }
 +  }
 +
 +  @Override
 +  public ClusterUser getUser(int offset) {
 +    if (null == krb) {
 +      String user = SharedMiniClusterBase.class.getName() + "_" + testName.getMethodName() + "_" + offset;
 +      // Password is the username
 +      return new ClusterUser(user, user);
 +    } else {
 +      return krb.getClientPrincipal(offset);
 +    }
 +  }
 +}

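The lifecycle contract described in the SharedMiniClusterBase javadoc above can be sketched as follows (illustrative only, not part of this commit; the subclass name and test body are hypothetical):

```java
// Hypothetical example, not part of this commit: the BeforeClass/AfterClass
// lifecycle that SharedMiniClusterBase implementations must provide.
import org.apache.accumulo.harness.SharedMiniClusterBase;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;

public class ExampleSharedMiniClusterIT extends SharedMiniClusterBase {

  @BeforeClass
  public static void setup() throws Exception {
    // Starts a single MAC instance that is shared by every test method
    // in this class.
    SharedMiniClusterBase.startMiniCluster();
  }

  @AfterClass
  public static void teardown() throws Exception {
    // Stops the shared MAC (and the KDC, if Kerberos was enabled).
    SharedMiniClusterBase.stopMiniCluster();
  }

  @Test
  public void example() throws Exception {
    getConnector().tableOperations().list();
  }
}
```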
http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
index cdb3d00,0000000..b9f0ae5
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
@@@ -1,1419 -1,0 +1,1422 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test;
 +
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertNotNull;
 +import static org.junit.Assert.assertNull;
 +import static org.junit.Assert.assertTrue;
 +import static org.junit.Assert.fail;
 +
 +import java.io.IOException;
 +import java.util.Arrays;
 +import java.util.Collections;
 +import java.util.EnumSet;
 +import java.util.Iterator;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +import java.util.SortedSet;
 +import java.util.TreeSet;
 +import java.util.concurrent.TimeUnit;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.IteratorSetting;
 +import org.apache.accumulo.core.client.MutationsRejectedException;
 +import org.apache.accumulo.core.client.NamespaceExistsException;
 +import org.apache.accumulo.core.client.NamespaceNotEmptyException;
 +import org.apache.accumulo.core.client.NamespaceNotFoundException;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.TableExistsException;
 +import org.apache.accumulo.core.client.TableNotFoundException;
 +import org.apache.accumulo.core.client.admin.NamespaceOperations;
 +import org.apache.accumulo.core.client.admin.NewTableConfiguration;
 +import org.apache.accumulo.core.client.admin.TableOperations;
 +import org.apache.accumulo.core.client.impl.Namespaces;
 +import org.apache.accumulo.core.client.impl.Tables;
 +import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 +import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
 +import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 +import org.apache.accumulo.core.client.security.SecurityErrorCode;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Range;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.iterators.Filter;
 +import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 +import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 +import org.apache.accumulo.core.iterators.user.VersioningIterator;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.metadata.RootTable;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.security.NamespacePermission;
 +import org.apache.accumulo.core.security.SystemPermission;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.examples.simple.constraints.NumericValueConstraint;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.io.Text;
 +import org.junit.After;
 +import org.junit.Assume;
 +import org.junit.Before;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +
 +import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 +
 +// Testing default namespace configuration with inheritance requires altering the system state and restoring it back to normal
 +// Punt on this for now and just let it use a minicluster.
++@Category(MiniClusterOnlyTest.class)
 +public class NamespacesIT extends AccumuloClusterHarness {
 +
 +  private Connector c;
 +  private String namespace;
 +
 +  @Override
 +  public int defaultTimeoutSeconds() {
 +    return 60;
 +  }
 +
 +  @Before
 +  public void setupConnectorAndNamespace() throws Exception {
 +    Assume.assumeTrue(ClusterType.MINI == getClusterType());
 +
 +    // prepare a unique namespace and get a new root connector for each test
 +    c = getConnector();
 +    namespace = "ns_" + getUniqueNames(1)[0];
 +  }
 +
 +  @After
 +  public void swingMjölnir() throws Exception {
 +    if (null == c) {
 +      return;
 +    }
 +    // clean up any added tables, namespaces, and users, after each test
 +    for (String t : c.tableOperations().list())
 +      if (!Tables.qualify(t).getFirst().equals(Namespaces.ACCUMULO_NAMESPACE))
 +        c.tableOperations().delete(t);
 +    assertEquals(3, c.tableOperations().list().size());
 +    for (String n : c.namespaceOperations().list())
 +      if (!n.equals(Namespaces.ACCUMULO_NAMESPACE) && !n.equals(Namespaces.DEFAULT_NAMESPACE))
 +        c.namespaceOperations().delete(n);
 +    assertEquals(2, c.namespaceOperations().list().size());
 +    for (String u : c.securityOperations().listLocalUsers())
 +      if (!getAdminPrincipal().equals(u))
 +        c.securityOperations().dropLocalUser(u);
 +    assertEquals(1, c.securityOperations().listLocalUsers().size());
 +  }
 +
 +  @Test
 +  public void checkReservedNamespaces() throws Exception {
 +    assertEquals(c.namespaceOperations().defaultNamespace(), Namespaces.DEFAULT_NAMESPACE);
 +    assertEquals(c.namespaceOperations().systemNamespace(), Namespaces.ACCUMULO_NAMESPACE);
 +  }
 +
 +  @Test
 +  public void checkBuiltInNamespaces() throws Exception {
 +    assertTrue(c.namespaceOperations().exists(Namespaces.DEFAULT_NAMESPACE));
 +    assertTrue(c.namespaceOperations().exists(Namespaces.ACCUMULO_NAMESPACE));
 +  }
 +
 +  @Test
 +  public void createTableInDefaultNamespace() throws Exception {
 +    String tableName = "1";
 +    c.tableOperations().create(tableName);
 +    assertTrue(c.tableOperations().exists(tableName));
 +  }
 +
 +  @Test(expected = AccumuloException.class)
 +  public void createTableInAccumuloNamespace() throws Exception {
 +    String tableName = Namespaces.ACCUMULO_NAMESPACE + ".1";
 +    assertFalse(c.tableOperations().exists(tableName));
 +    c.tableOperations().create(tableName); // should fail
 +  }
 +
 +  @Test(expected = AccumuloSecurityException.class)
 +  public void deleteDefaultNamespace() throws Exception {
 +    c.namespaceOperations().delete(Namespaces.DEFAULT_NAMESPACE); // should fail
 +  }
 +
 +  @Test(expected = AccumuloSecurityException.class)
 +  public void deleteAccumuloNamespace() throws Exception {
 +    c.namespaceOperations().delete(Namespaces.ACCUMULO_NAMESPACE); // should fail
 +  }
 +
 +  @Test
 +  public void createTableInMissingNamespace() throws Exception {
 +    String t = namespace + ".1";
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t));
 +    try {
 +      c.tableOperations().create(t);
 +      fail();
 +    } catch (AccumuloException e) {
 +      assertEquals(NamespaceNotFoundException.class.getName(), e.getCause().getClass().getName());
 +      assertFalse(c.namespaceOperations().exists(namespace));
 +      assertFalse(c.tableOperations().exists(t));
 +    }
 +  }
 +
 +  @Test
 +  public void createAndDeleteNamespace() throws Exception {
 +    String t1 = namespace + ".1";
 +    String t2 = namespace + ".2";
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    try {
 +      c.namespaceOperations().delete(namespace);
 +    } catch (NamespaceNotFoundException e) {}
 +    try {
 +      c.tableOperations().delete(t1);
 +    } catch (TableNotFoundException e) {
 +      assertEquals(NamespaceNotFoundException.class.getName(), e.getCause().getClass().getName());
 +    }
 +    c.namespaceOperations().create(namespace);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.tableOperations().create(t1);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.tableOperations().create(t2);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertTrue(c.tableOperations().exists(t2));
 +    c.tableOperations().delete(t1);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertTrue(c.tableOperations().exists(t2));
 +    c.tableOperations().delete(t2);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.namespaceOperations().delete(namespace);
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +  }
 +
 +  @Test(expected = NamespaceNotEmptyException.class)
 +  public void deleteNonEmptyNamespace() throws Exception {
 +    String tableName1 = namespace + ".1";
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(tableName1));
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(tableName1);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(tableName1));
 +    c.namespaceOperations().delete(namespace); // should fail
 +  }
 +
 +  @Test
 +  public void verifyPropertyInheritance() throws Exception {
 +    String t0 = "0";
 +    String t1 = namespace + ".1";
 +    String t2 = namespace + ".2";
 +
 +    String k = Property.TABLE_SCAN_MAXMEM.getKey();
 +    String v = "42K";
 +
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1);
 +    c.tableOperations().create(t0);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertTrue(c.tableOperations().exists(t0));
 +
 +    // verify no property
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(t1, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(t0, k, v));
 +
 +    // set property and verify
 +    c.namespaceOperations().setProperty(namespace, k, v);
 +    assertTrue(checkNamespaceHasProp(namespace, k, v));
 +    assertTrue(checkTableHasProp(t1, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(t0, k, v));
 +
 +    // add a new table to namespace and verify
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.tableOperations().create(t2);
 +    assertTrue(c.tableOperations().exists(t2));
 +    assertTrue(checkNamespaceHasProp(namespace, k, v));
 +    assertTrue(checkTableHasProp(t1, k, v));
 +    assertTrue(checkTableHasProp(t2, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(t0, k, v));
 +
 +    // remove property and verify
 +    c.namespaceOperations().removeProperty(namespace, k);
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(t1, k, v));
 +    assertFalse(checkTableHasProp(t2, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(t0, k, v));
 +
 +    // set property on default namespace and verify
 +    c.namespaceOperations().setProperty(Namespaces.DEFAULT_NAMESPACE, k, v);
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(t1, k, v));
 +    assertFalse(checkTableHasProp(t2, k, v));
 +    assertTrue(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertTrue(checkTableHasProp(t0, k, v));
 +
 +    // test that table properties override namespace properties
 +    String k2 = Property.TABLE_FILE_MAX.getKey();
 +    String v2 = "42";
 +    String table_v2 = "13";
 +
 +    // set new property on some
 +    c.namespaceOperations().setProperty(namespace, k2, v2);
 +    c.tableOperations().setProperty(t2, k2, table_v2);
 +    assertTrue(checkNamespaceHasProp(namespace, k2, v2));
 +    assertTrue(checkTableHasProp(t1, k2, v2));
 +    assertTrue(checkTableHasProp(t2, k2, table_v2));
 +
 +    c.tableOperations().delete(t1);
 +    c.tableOperations().delete(t2);
 +    c.tableOperations().delete(t0);
 +    c.namespaceOperations().delete(namespace);
 +  }
 +
 +  @Test
 +  public void verifyIteratorInheritance() throws Exception {
 +    String t1 = namespace + ".1";
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1);
 +    String iterName = namespace + "_iter";
 +
 +    BatchWriter bw = c.createBatchWriter(t1, new BatchWriterConfig());
 +    Mutation m = new Mutation("r");
 +    m.put("a", "b", new Value("abcde".getBytes()));
 +    bw.addMutation(m);
 +    bw.flush();
 +    bw.close();
 +
 +    IteratorSetting setting = new IteratorSetting(250, iterName, SimpleFilter.class.getName());
 +
 +    // verify can see inserted entry
 +    Scanner s = c.createScanner(t1, Authorizations.EMPTY);
 +    assertTrue(s.iterator().hasNext());
 +    assertFalse(c.namespaceOperations().listIterators(namespace).containsKey(iterName));
 +    assertFalse(c.tableOperations().listIterators(t1).containsKey(iterName));
 +
 +    // verify entry is filtered out (also, verify conflict checking API)
 +    c.namespaceOperations().checkIteratorConflicts(namespace, setting, EnumSet.allOf(IteratorScope.class));
 +    c.namespaceOperations().attachIterator(namespace, setting);
 +    sleepUninterruptibly(2, TimeUnit.SECONDS);
 +    try {
 +      c.namespaceOperations().checkIteratorConflicts(namespace, setting, EnumSet.allOf(IteratorScope.class));
 +      fail();
 +    } catch (AccumuloException e) {
 +      assertEquals(IllegalArgumentException.class.getName(), e.getCause().getClass().getName());
 +    }
 +    IteratorSetting setting2 = c.namespaceOperations().getIteratorSetting(namespace, setting.getName(), IteratorScope.scan);
 +    assertEquals(setting, setting2);
 +    assertTrue(c.namespaceOperations().listIterators(namespace).containsKey(iterName));
 +    assertTrue(c.tableOperations().listIterators(t1).containsKey(iterName));
 +    s = c.createScanner(t1, Authorizations.EMPTY);
 +    assertFalse(s.iterator().hasNext());
 +
 +    // verify can see inserted entry again
 +    c.namespaceOperations().removeIterator(namespace, setting.getName(), EnumSet.allOf(IteratorScope.class));
 +    sleepUninterruptibly(2, TimeUnit.SECONDS);
 +    assertFalse(c.namespaceOperations().listIterators(namespace).containsKey(iterName));
 +    assertFalse(c.tableOperations().listIterators(t1).containsKey(iterName));
 +    s = c.createScanner(t1, Authorizations.EMPTY);
 +    assertTrue(s.iterator().hasNext());
 +  }
 +
 +  @Test
 +  public void cloneTable() throws Exception {
 +    String namespace2 = namespace + "_clone";
 +    String t1 = namespace + ".1";
 +    String t2 = namespace + ".2";
 +    String t3 = namespace2 + ".2";
 +    String k1 = Property.TABLE_FILE_MAX.getKey();
 +    String k2 = Property.TABLE_FILE_REPLICATION.getKey();
 +    String k1v1 = "55";
 +    String k1v2 = "66";
 +    String k2v1 = "5";
 +    String k2v2 = "6";
 +
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1);
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertFalse(c.namespaceOperations().exists(namespace2));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertFalse(c.tableOperations().exists(t3));
 +
 +    try {
 +      // try to clone before namespace exists
 +      c.tableOperations().clone(t1, t3, false, null, null); // should fail
 +      fail();
 +    } catch (AccumuloException e) {
 +      assertEquals(NamespaceNotFoundException.class.getName(), e.getCause().getClass().getName());
 +    }
 +
 +    // try to clone when the target tables already exist
 +    c.namespaceOperations().create(namespace2);
 +    c.tableOperations().create(t2);
 +    c.tableOperations().create(t3);
 +    for (String t : Arrays.asList(t2, t3)) {
 +      try {
 +        c.tableOperations().clone(t1, t, false, null, null); // should fail
 +        fail();
 +      } catch (TableExistsException e) {
 +        c.tableOperations().delete(t);
 +      }
 +    }
 +
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertTrue(c.namespaceOperations().exists(namespace2));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertFalse(c.tableOperations().exists(t3));
 +
 +    // set property with different values in two namespaces and a separate property with different values on the table and both namespaces
 +    assertFalse(checkNamespaceHasProp(namespace, k1, k1v1));
 +    assertFalse(checkNamespaceHasProp(namespace2, k1, k1v2));
 +    assertFalse(checkTableHasProp(t1, k1, k1v1));
 +    assertFalse(checkTableHasProp(t1, k1, k1v2));
 +    assertFalse(checkNamespaceHasProp(namespace, k2, k2v1));
 +    assertFalse(checkNamespaceHasProp(namespace2, k2, k2v1));
 +    assertFalse(checkTableHasProp(t1, k2, k2v1));
 +    assertFalse(checkTableHasProp(t1, k2, k2v2));
 +    c.namespaceOperations().setProperty(namespace, k1, k1v1);
 +    c.namespaceOperations().setProperty(namespace2, k1, k1v2);
 +    c.namespaceOperations().setProperty(namespace, k2, k2v1);
 +    c.namespaceOperations().setProperty(namespace2, k2, k2v1);
 +    c.tableOperations().setProperty(t1, k2, k2v2);
 +    assertTrue(checkNamespaceHasProp(namespace, k1, k1v1));
 +    assertTrue(checkNamespaceHasProp(namespace2, k1, k1v2));
 +    assertTrue(checkTableHasProp(t1, k1, k1v1));
 +    assertFalse(checkTableHasProp(t1, k1, k1v2));
 +    assertTrue(checkNamespaceHasProp(namespace, k2, k2v1));
 +    assertTrue(checkNamespaceHasProp(namespace2, k2, k2v1));
 +    assertFalse(checkTableHasProp(t1, k2, k2v1));
 +    assertTrue(checkTableHasProp(t1, k2, k2v2));
 +
 +    // clone twice, once in same namespace, once in another
 +    for (String t : Arrays.asList(t2, t3))
 +      c.tableOperations().clone(t1, t, false, null, null);
 +
 +    assertTrue(c.namespaceOperations().exists(namespace2));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertTrue(c.tableOperations().exists(t2));
 +    assertTrue(c.tableOperations().exists(t3));
 +
 +    // verify the properties got transferred
 +    assertTrue(checkTableHasProp(t1, k1, k1v1));
 +    assertTrue(checkTableHasProp(t2, k1, k1v1));
 +    assertTrue(checkTableHasProp(t3, k1, k1v2));
 +    assertTrue(checkTableHasProp(t1, k2, k2v2));
 +    assertTrue(checkTableHasProp(t2, k2, k2v2));
 +    assertTrue(checkTableHasProp(t3, k2, k2v2));
 +  }
 +
 +  @Test
 +  public void renameNamespaceWithTable() throws Exception {
 +    String namespace2 = namespace + "_renamed";
 +    String t1 = namespace + ".t";
 +    String t2 = namespace2 + ".t";
 +
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertFalse(c.namespaceOperations().exists(namespace2));
 +    assertFalse(c.tableOperations().exists(t2));
 +
 +    String namespaceId = c.namespaceOperations().namespaceIdMap().get(namespace);
 +    String tableId = c.tableOperations().tableIdMap().get(t1);
 +
 +    c.namespaceOperations().rename(namespace, namespace2);
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertTrue(c.namespaceOperations().exists(namespace2));
 +    assertTrue(c.tableOperations().exists(t2));
 +
 +    // verify IDs didn't change
 +    String namespaceId2 = c.namespaceOperations().namespaceIdMap().get(namespace2);
 +    String tableId2 = c.tableOperations().tableIdMap().get(t2);
 +
 +    assertEquals(namespaceId, namespaceId2);
 +    assertEquals(tableId, tableId2);
 +  }
 +
 +  @Test
 +  public void verifyConstraintInheritance() throws Exception {
 +    String t1 = namespace + ".1";
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1, new NewTableConfiguration().withoutDefaultIterators());
 +    String constraintClassName = NumericValueConstraint.class.getName();
 +
 +    assertFalse(c.namespaceOperations().listConstraints(namespace).containsKey(constraintClassName));
 +    assertFalse(c.tableOperations().listConstraints(t1).containsKey(constraintClassName));
 +
 +    c.namespaceOperations().addConstraint(namespace, constraintClassName);
 +    boolean passed = false;
 +    for (int i = 0; i < 5; i++) {
 +      if (!c.namespaceOperations().listConstraints(namespace).containsKey(constraintClassName)) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      if (!c.tableOperations().listConstraints(t1).containsKey(constraintClassName)) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      passed = true;
 +      break;
 +    }
 +    assertTrue("Failed to observe newly-added constraint", passed);
 +
 +    passed = false;
 +    Integer namespaceNum = null;
 +    for (int i = 0; i < 5; i++) {
 +      namespaceNum = c.namespaceOperations().listConstraints(namespace).get(constraintClassName);
 +      if (null == namespaceNum) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      Integer tableNum = c.tableOperations().listConstraints(t1).get(constraintClassName);
 +      if (null == tableNum) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      assertEquals(namespaceNum, tableNum);
 +      passed = true;
 +      break;
 +    }
 +    assertTrue("Failed to observe constraint in both table and namespace", passed);
 +
 +    Mutation m1 = new Mutation("r1");
 +    Mutation m2 = new Mutation("r2");
 +    Mutation m3 = new Mutation("r3");
 +    m1.put("a", "b", new Value("abcde".getBytes(UTF_8)));
 +    m2.put("e", "f", new Value("123".getBytes(UTF_8)));
 +    m3.put("c", "d", new Value("zyxwv".getBytes(UTF_8)));
 +
 +    passed = false;
 +    for (int i = 0; i < 5; i++) {
 +      BatchWriter bw = c.createBatchWriter(t1, new BatchWriterConfig());
 +      bw.addMutations(Arrays.asList(m1, m2, m3));
 +      try {
 +        bw.close();
 +        Thread.sleep(500);
 +      } catch (MutationsRejectedException e) {
 +        passed = true;
 +        assertEquals(1, e.getConstraintViolationSummaries().size());
 +        assertEquals(2, e.getConstraintViolationSummaries().get(0).getNumberOfViolatingMutations());
 +        break;
 +      }
 +    }
 +
 +    assertTrue("Failed to see mutations rejected after constraint was added", passed);
 +
 +    assertNotNull("Namespace constraint ID should not be null", namespaceNum);
 +    c.namespaceOperations().removeConstraint(namespace, namespaceNum);
 +    passed = false;
 +    for (int i = 0; i < 5; i++) {
 +      if (c.namespaceOperations().listConstraints(namespace).containsKey(constraintClassName)) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      if (c.tableOperations().listConstraints(t1).containsKey(constraintClassName)) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      passed = true;
 +      break;
 +    }
 +    assertTrue("Failed to verify that constraint was removed from namespace and table", passed);
 +
 +    passed = false;
 +    for (int i = 0; i < 5; i++) {
 +      BatchWriter bw = c.createBatchWriter(t1, new BatchWriterConfig());
 +      try {
 +        bw.addMutations(Arrays.asList(m1, m2, m3));
 +        bw.close();
 +      } catch (MutationsRejectedException e) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      passed = true;
 +      break;
 +    }
 +    assertTrue("Failed to add mutations that should be allowed", passed);
 +  }
 +
 +  @Test
 +  public void renameTable() throws Exception {
 +    String namespace2 = namespace + "_renamed";
 +    String t1 = namespace + ".1";
 +    String t2 = namespace2 + ".2";
 +    String t3 = namespace + ".3";
 +    String t4 = namespace + ".4";
 +    String t5 = "5";
 +
 +    c.namespaceOperations().create(namespace);
 +    c.namespaceOperations().create(namespace2);
 +
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.namespaceOperations().exists(namespace2));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertFalse(c.tableOperations().exists(t3));
 +    assertFalse(c.tableOperations().exists(t4));
 +    assertFalse(c.tableOperations().exists(t5));
 +
 +    c.tableOperations().create(t1);
 +
 +    try {
 +      c.tableOperations().rename(t1, t2);
 +      fail();
 +    } catch (AccumuloException e) {
 +      // this is expected, because we don't allow renames across namespaces
 +      assertEquals(ThriftTableOperationException.class.getName(), e.getCause().getClass().getName());
 +      assertEquals(TableOperation.RENAME, ((ThriftTableOperationException) e.getCause()).getOp());
 +      assertEquals(TableOperationExceptionType.INVALID_NAME, ((ThriftTableOperationException) e.getCause()).getType());
 +    }
 +
 +    try {
 +      c.tableOperations().rename(t1, t5);
 +      fail();
 +    } catch (AccumuloException e) {
 +      // also expected; renaming into the default namespace is still a rename across namespaces
 +      assertEquals(ThriftTableOperationException.class.getName(), e.getCause().getClass().getName());
 +      assertEquals(TableOperation.RENAME, ((ThriftTableOperationException) e.getCause()).getOp());
 +      assertEquals(TableOperationExceptionType.INVALID_NAME, ((ThriftTableOperationException) e.getCause()).getType());
 +    }
 +
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertFalse(c.tableOperations().exists(t3));
 +    assertFalse(c.tableOperations().exists(t4));
 +    assertFalse(c.tableOperations().exists(t5));
 +
 +    // fully qualified rename
 +    c.tableOperations().rename(t1, t3);
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertTrue(c.tableOperations().exists(t3));
 +    assertFalse(c.tableOperations().exists(t4));
 +    assertFalse(c.tableOperations().exists(t5));
 +  }
 +
 +  private void loginAs(ClusterUser user) throws IOException {
 +    user.getToken();
 +  }
 +
 +  /**
 +   * Tests the new Namespace permissions as well as the changes to Table permissions introduced by namespaces. For each permission, first verifies that the
 +   * user cannot perform the action, then has root grant the permission and verifies that the action now succeeds.
 +   */
 +  @Test
 +  public void testPermissions() throws Exception {
 +    ClusterUser user1 = getUser(0), user2 = getUser(1), root = getAdminUser();
 +    String u1 = user1.getPrincipal();
 +    String u2 = user2.getPrincipal();
 +    PasswordToken pass = (null != user1.getPassword() ? new PasswordToken(user1.getPassword()) : null);
 +
 +    String n1 = namespace;
 +    String t1 = n1 + ".1";
 +    String t2 = n1 + ".2";
 +    String t3 = n1 + ".3";
 +
 +    String n2 = namespace + "_2";
 +
 +    loginAs(root);
 +    c.namespaceOperations().create(n1);
 +    c.tableOperations().create(t1);
 +
 +    c.securityOperations().createLocalUser(u1, pass);
 +
 +    loginAs(user1);
 +    Connector user1Con = c.getInstance().getConnector(u1, user1.getToken());
 +
 +    try {
 +      user1Con.tableOperations().create(t2);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.CREATE_TABLE);
 +    loginAs(user1);
 +    user1Con.tableOperations().create(t2);
 +    loginAs(root);
 +    assertTrue(c.tableOperations().list().contains(t2));
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.CREATE_TABLE);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.tableOperations().delete(t1);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.DROP_TABLE);
 +    loginAs(user1);
 +    user1Con.tableOperations().delete(t1);
 +    loginAs(root);
 +    assertFalse(c.tableOperations().list().contains(t1));
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.DROP_TABLE);
 +
 +    c.tableOperations().create(t3);
 +    BatchWriter bw = c.createBatchWriter(t3, null);
 +    Mutation m = new Mutation("row");
 +    m.put("cf", "cq", "value");
 +    bw.addMutation(m);
 +    bw.close();
 +
 +    loginAs(user1);
 +    Iterator<Entry<Key,Value>> i = user1Con.createScanner(t3, new Authorizations()).iterator();
 +    try {
 +      i.next();
 +      fail();
 +    } catch (RuntimeException e) {
 +      assertEquals(AccumuloSecurityException.class.getName(), e.getCause().getClass().getName());
 +      expectPermissionDenied((AccumuloSecurityException) e.getCause());
 +    }
 +
 +    loginAs(user1);
 +    m = new Mutation(u1);
 +    m.put("cf", "cq", "turtles");
 +    bw = user1Con.createBatchWriter(t3, null);
 +    try {
 +      bw.addMutation(m);
 +      bw.close();
 +      fail();
 +    } catch (MutationsRejectedException e) {
 +      assertEquals(1, e.getSecurityErrorCodes().size());
 +      assertEquals(1, e.getSecurityErrorCodes().entrySet().iterator().next().getValue().size());
 +      switch (e.getSecurityErrorCodes().entrySet().iterator().next().getValue().iterator().next()) {
 +        case PERMISSION_DENIED:
 +          break;
 +        default:
 +          fail();
 +      }
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.READ);
 +    loginAs(user1);
 +    i = user1Con.createScanner(t3, new Authorizations()).iterator();
 +    assertTrue(i.hasNext());
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.READ);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.WRITE);
 +
 +    loginAs(user1);
 +    m = new Mutation(u1);
 +    m.put("cf", "cq", "turtles");
 +    bw = user1Con.createBatchWriter(t3, null);
 +    bw.addMutation(m);
 +    bw.close();
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.WRITE);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.tableOperations().setProperty(t3, Property.TABLE_FILE_MAX.getKey(), "42");
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.ALTER_TABLE);
 +    loginAs(user1);
 +    user1Con.tableOperations().setProperty(t3, Property.TABLE_FILE_MAX.getKey(), "42");
 +    user1Con.tableOperations().removeProperty(t3, Property.TABLE_FILE_MAX.getKey());
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.ALTER_TABLE);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.namespaceOperations().setProperty(n1, Property.TABLE_FILE_MAX.getKey(), "55");
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.ALTER_NAMESPACE);
 +    loginAs(user1);
 +    user1Con.namespaceOperations().setProperty(n1, Property.TABLE_FILE_MAX.getKey(), "42");
 +    user1Con.namespaceOperations().removeProperty(n1, Property.TABLE_FILE_MAX.getKey());
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.ALTER_NAMESPACE);
 +
 +    loginAs(root);
 +    c.securityOperations().createLocalUser(u2, (user2.getPassword() == null ? null : new PasswordToken(user2.getPassword())));
 +    loginAs(user1);
 +    try {
 +      user1Con.securityOperations().grantNamespacePermission(u2, n1, NamespacePermission.ALTER_NAMESPACE);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.GRANT);
 +    loginAs(user1);
 +    user1Con.securityOperations().grantNamespacePermission(u2, n1, NamespacePermission.ALTER_NAMESPACE);
 +    user1Con.securityOperations().revokeNamespacePermission(u2, n1, NamespacePermission.ALTER_NAMESPACE);
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.GRANT);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.namespaceOperations().create(n2);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantSystemPermission(u1, SystemPermission.CREATE_NAMESPACE);
 +    loginAs(user1);
 +    user1Con.namespaceOperations().create(n2);
 +    loginAs(root);
 +    c.securityOperations().revokeSystemPermission(u1, SystemPermission.CREATE_NAMESPACE);
 +
 +    c.securityOperations().revokeNamespacePermission(u1, n2, NamespacePermission.DROP_NAMESPACE);
 +    loginAs(user1);
 +    try {
 +      user1Con.namespaceOperations().delete(n2);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantSystemPermission(u1, SystemPermission.DROP_NAMESPACE);
 +    loginAs(user1);
 +    user1Con.namespaceOperations().delete(n2);
 +    loginAs(root);
 +    c.securityOperations().revokeSystemPermission(u1, SystemPermission.DROP_NAMESPACE);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.namespaceOperations().setProperty(n1, Property.TABLE_FILE_MAX.getKey(), "33");
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantSystemPermission(u1, SystemPermission.ALTER_NAMESPACE);
 +    loginAs(user1);
 +    user1Con.namespaceOperations().setProperty(n1, Property.TABLE_FILE_MAX.getKey(), "33");
 +    user1Con.namespaceOperations().removeProperty(n1, Property.TABLE_FILE_MAX.getKey());
 +    loginAs(root);
 +    c.securityOperations().revokeSystemPermission(u1, SystemPermission.ALTER_NAMESPACE);
 +  }
 +
 +  @Test
 +  public void verifySystemPropertyInheritance() throws Exception {
 +    String t1 = "1";
 +    String t2 = namespace + "." + t1;
 +    c.tableOperations().create(t1);
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t2);
 +
 +    // verify iterator inheritance
 +    _verifySystemPropertyInheritance(t1, t2, Property.TABLE_ITERATOR_PREFIX.getKey() + "scan.sum", "20," + SimpleFilter.class.getName(), false);
 +
 +    // verify constraint inheritance
 +    _verifySystemPropertyInheritance(t1, t2, Property.TABLE_CONSTRAINT_PREFIX.getKey() + "42", NumericValueConstraint.class.getName(), false);
 +
 +    // verify other inheritance
 +    _verifySystemPropertyInheritance(t1, t2, Property.TABLE_LOCALITY_GROUP_PREFIX.getKey() + "dummy", "dummy", true);
 +  }
 +
 +  private void _verifySystemPropertyInheritance(String defaultNamespaceTable, String namespaceTable, String k, String v, boolean systemNamespaceShouldInherit)
 +      throws Exception {
 +    // nobody should have any of these properties yet
 +    assertFalse(c.instanceOperations().getSystemConfiguration().containsValue(v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.ACCUMULO_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(RootTable.NAME, k, v));
 +    assertFalse(checkTableHasProp(MetadataTable.NAME, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(defaultNamespaceTable, k, v));
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(namespaceTable, k, v));
 +
 +    // set the filter, verify that accumulo namespace is the only one unaffected
 +    c.instanceOperations().setProperty(k, v);
 +    // doesn't take effect immediately, needs time to propagate to tserver's ZooKeeper cache
 +    sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
 +    assertTrue(c.instanceOperations().getSystemConfiguration().containsValue(v));
 +    assertEquals(systemNamespaceShouldInherit, checkNamespaceHasProp(Namespaces.ACCUMULO_NAMESPACE, k, v));
 +    assertEquals(systemNamespaceShouldInherit, checkTableHasProp(RootTable.NAME, k, v));
 +    assertEquals(systemNamespaceShouldInherit, checkTableHasProp(MetadataTable.NAME, k, v));
 +    assertTrue(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertTrue(checkTableHasProp(defaultNamespaceTable, k, v));
 +    assertTrue(checkNamespaceHasProp(namespace, k, v));
 +    assertTrue(checkTableHasProp(namespaceTable, k, v));
 +
 +    // verify it is no longer inherited
 +    c.instanceOperations().removeProperty(k);
 +    // doesn't take effect immediately, needs time to propagate to tserver's ZooKeeper cache
 +    sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
 +    assertFalse(c.instanceOperations().getSystemConfiguration().containsValue(v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.ACCUMULO_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(RootTable.NAME, k, v));
 +    assertFalse(checkTableHasProp(MetadataTable.NAME, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(defaultNamespaceTable, k, v));
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(namespaceTable, k, v));
 +  }
 +
 +  @Test
 +  public void listNamespaces() throws Exception {
 +    SortedSet<String> namespaces = c.namespaceOperations().list();
 +    Map<String,String> map = c.namespaceOperations().namespaceIdMap();
 +    assertEquals(2, namespaces.size());
 +    assertEquals(2, map.size());
 +    assertTrue(namespaces.contains(Namespaces.ACCUMULO_NAMESPACE));
 +    assertTrue(namespaces.contains(Namespaces.DEFAULT_NAMESPACE));
 +    assertFalse(namespaces.contains(namespace));
 +    assertEquals(Namespaces.ACCUMULO_NAMESPACE_ID, map.get(Namespaces.ACCUMULO_NAMESPACE));
 +    assertEquals(Namespaces.DEFAULT_NAMESPACE_ID, map.get(Namespaces.DEFAULT_NAMESPACE));
 +    assertNull(map.get(namespace));
 +
 +    c.namespaceOperations().create(namespace);
 +    namespaces = c.namespaceOperations().list();
 +    map = c.namespaceOperations().namespaceIdMap();
 +    assertEquals(3, namespaces.size());
 +    assertEquals(3, map.size());
 +    assertTrue(namespaces.contains(Namespaces.ACCUMULO_NAMESPACE));
 +    assertTrue(namespaces.contains(Namespaces.DEFAULT_NAMESPACE));
 +    assertTrue(namespaces.contains(namespace));
 +    assertEquals(Namespaces.ACCUMULO_NAMESPACE_ID, map.get(Namespaces.ACCUMULO_NAMESPACE));
 +    assertEquals(Namespaces.DEFAULT_NAMESPACE_ID, map.get(Namespaces.DEFAULT_NAMESPACE));
 +    assertNotNull(map.get(namespace));
 +
 +    c.namespaceOperations().delete(namespace);
 +    namespaces = c.namespaceOperations().list();
 +    map = c.namespaceOperations().namespaceIdMap();
 +    assertEquals(2, namespaces.size());
 +    assertEquals(2, map.size());
 +    assertTrue(namespaces.contains(Namespaces.ACCUMULO_NAMESPACE));
 +    assertTrue(namespaces.contains(Namespaces.DEFAULT_NAMESPACE));
 +    assertFalse(namespaces.contains(namespace));
 +    assertEquals(Namespaces.ACCUMULO_NAMESPACE_ID, map.get(Namespaces.ACCUMULO_NAMESPACE));
 +    assertEquals(Namespaces.DEFAULT_NAMESPACE_ID, map.get(Namespaces.DEFAULT_NAMESPACE));
 +    assertNull(map.get(namespace));
 +  }
 +
 +  @Test
 +  public void loadClass() throws Exception {
 +    assertTrue(c.namespaceOperations().testClassLoad(Namespaces.DEFAULT_NAMESPACE, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName()));
 +    assertFalse(c.namespaceOperations().testClassLoad(Namespaces.DEFAULT_NAMESPACE, "dummy", SortedKeyValueIterator.class.getName()));
 +    try {
 +      c.namespaceOperations().testClassLoad(namespace, "dummy", "dummy");
 +      fail();
 +    } catch (NamespaceNotFoundException e) {
 +      // expected, ignore
 +    }
 +  }
 +
 +  @Test
 +  public void testModifyingPermissions() throws Exception {
 +    String tableName = namespace + ".modify";
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(tableName);
 +    assertTrue(c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ));
 +    c.securityOperations().revokeTablePermission(c.whoami(), tableName, TablePermission.READ);
 +    assertFalse(c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ));
 +    c.securityOperations().grantTablePermission(c.whoami(), tableName, TablePermission.READ);
 +    assertTrue(c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ));
 +    c.tableOperations().delete(tableName);
 +
 +    try {
 +      c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Has permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().grantTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Grant permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().revokeTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Revoke permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    assertTrue(c.securityOperations().hasNamespacePermission(c.whoami(), namespace, NamespacePermission.READ));
 +    c.securityOperations().revokeNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +    assertFalse(c.securityOperations().hasNamespacePermission(c.whoami(), namespace, NamespacePermission.READ));
 +    c.securityOperations().grantNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +    assertTrue(c.securityOperations().hasNamespacePermission(c.whoami(), namespace, NamespacePermission.READ));
 +
 +    c.namespaceOperations().delete(namespace);
 +
 +    try {
 +      c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Has permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().grantTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Grant permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().revokeTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Revoke permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().hasNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.NAMESPACE_DOESNT_EXIST))
 +        throw new Exception("Has permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().grantNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.NAMESPACE_DOESNT_EXIST))
 +        throw new Exception("Grant permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().revokeNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.NAMESPACE_DOESNT_EXIST))
 +        throw new Exception("Revoke permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +  }
 +
 +  @Test
 +  public void verifyTableOperationsExceptions() throws Exception {
 +    String tableName = namespace + ".1";
 +    IteratorSetting setting = new IteratorSetting(200, VersioningIterator.class);
 +    Text a = new Text("a");
 +    Text z = new Text("z");
 +    TableOperations ops = c.tableOperations();
 +
 +    // this one doesn't throw an exception, so don't fail; just check that it works
 +    assertFalse(ops.exists(tableName));
 +
 +    // table operations that should throw an AccumuloException caused by NamespaceNotFoundException
 +    int numRun = 0;
 +    ACCUMULOEXCEPTIONS_NAMESPACENOTFOUND: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.create(tableName);
 +            fail();
 +            break;
 +          case 1:
 +            ops.create("a");
 +            ops.clone("a", tableName, true, Collections.<String,String> emptyMap(), Collections.<String> emptySet());
 +            fail();
 +            break;
 +          case 2:
 +            ops.importTable(tableName, System.getProperty("user.dir") + "/target");
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(3, i); // check test integrity
 +            assertEquals(3, numRun); // check test integrity
 +            break ACCUMULOEXCEPTIONS_NAMESPACENOTFOUND;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof AccumuloException) || !(e.getCause() instanceof NamespaceNotFoundException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +
 +    // table operations that should throw an AccumuloException caused by a TableNotFoundException caused by a NamespaceNotFoundException
 +    // these are here because we didn't declare TableNotFoundException in the API :(
 +    numRun = 0;
 +    ACCUMULOEXCEPTIONS_TABLENOTFOUND: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.removeConstraint(tableName, 0);
 +            fail();
 +            break;
 +          case 1:
 +            ops.removeProperty(tableName, "a");
 +            fail();
 +            break;
 +          case 2:
 +            ops.setProperty(tableName, "a", "b");
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(3, i); // check test integrity
 +            assertEquals(3, numRun); // check test integrity
 +            break ACCUMULOEXCEPTIONS_TABLENOTFOUND;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof AccumuloException) || !(e.getCause() instanceof TableNotFoundException)
 +            || !(e.getCause().getCause() instanceof NamespaceNotFoundException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +
 +    // table operations that should throw a TableNotFoundException caused by NamespaceNotFoundException
 +    numRun = 0;
 +    TABLENOTFOUNDEXCEPTIONS: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.addConstraint(tableName, NumericValueConstraint.class.getName());
 +            fail();
 +            break;
 +          case 1:
 +            ops.addSplits(tableName, new TreeSet<Text>());
 +            fail();
 +            break;
 +          case 2:
 +            ops.attachIterator(tableName, setting);
 +            fail();
 +            break;
 +          case 3:
 +            ops.cancelCompaction(tableName);
 +            fail();
 +            break;
 +          case 4:
 +            ops.checkIteratorConflicts(tableName, setting, EnumSet.allOf(IteratorScope.class));
 +            fail();
 +            break;
 +          case 5:
 +            ops.clearLocatorCache(tableName);
 +            fail();
 +            break;
 +          case 6:
 +            ops.clone(tableName, "2", true, Collections.<String,String> emptyMap(), Collections.<String> emptySet());
 +            fail();
 +            break;
 +          case 7:
 +            ops.compact(tableName, a, z, true, true);
 +            fail();
 +            break;
 +          case 8:
 +            ops.delete(tableName);
 +            fail();
 +            break;
 +          case 9:
 +            ops.deleteRows(tableName, a, z);
 +            fail();
 +            break;
 +          case 10:
 +            ops.splitRangeByTablets(tableName, new Range(), 10);
 +            fail();
 +            break;
 +          case 11:
 +            ops.exportTable(tableName, namespace + "_dir");
 +            fail();
 +            break;
 +          case 12:
 +            ops.flush(tableName, a, z, true);
 +            fail();
 +            break;
 +          case 13:
 +            ops.getDiskUsage(Collections.singleton(tableName));
 +            fail();
 +            break;
 +          case 14:
 +            ops.getIteratorSetting(tableName, "a", IteratorScope.scan);
 +            fail();
 +            break;
 +          case 15:
 +            ops.getLocalityGroups(tableName);
 +            fail();
 +            break;
 +          case 16:
 +            ops.getMaxRow(tableName, Authorizations.EMPTY, a, true, z, true);
 +            fail();
 +            break;
 +          case 17:
 +            ops.getProperties(tableName);
 +            fail();
 +            break;
 +          case 18:
 +            ops.importDirectory(tableName, "", "", false);
 +            fail();
 +            break;
 +          case 19:
 +            ops.testClassLoad(tableName, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName());
 +            fail();
 +            break;
 +          case 20:
 +            ops.listConstraints(tableName);
 +            fail();
 +            break;
 +          case 21:
 +            ops.listIterators(tableName);
 +            fail();
 +            break;
 +          case 22:
 +            ops.listSplits(tableName);
 +            fail();
 +            break;
 +          case 23:
 +            ops.merge(tableName, a, z);
 +            fail();
 +            break;
 +          case 24:
 +            ops.offline(tableName, true);
 +            fail();
 +            break;
 +          case 25:
 +            ops.online(tableName, true);
 +            fail();
 +            break;
 +          case 26:
 +            ops.removeIterator(tableName, "a", EnumSet.of(IteratorScope.scan));
 +            fail();
 +            break;
 +          case 27:
 +            ops.rename(tableName, tableName + "2");
 +            fail();
 +            break;
 +          case 28:
 +            ops.setLocalityGroups(tableName, Collections.<String,Set<Text>> emptyMap());
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(29, i); // check test integrity
 +            assertEquals(29, numRun); // check test integrity
 +            break TABLENOTFOUNDEXCEPTIONS;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof TableNotFoundException) || !(e.getCause() instanceof NamespaceNotFoundException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +  }
 +
 +  @Test
 +  public void verifyNamespaceOperationsExceptions() throws Exception {
 +    IteratorSetting setting = new IteratorSetting(200, VersioningIterator.class);
 +    NamespaceOperations ops = c.namespaceOperations();
 +
 +    // this one doesn't throw an exception, so don't fail; just check that it works
 +    assertFalse(ops.exists(namespace));
 +
 +    // namespace operations that should throw a NamespaceNotFoundException
 +    int numRun = 0;
 +    NAMESPACENOTFOUND: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.addConstraint(namespace, NumericValueConstraint.class.getName());
 +            fail();
 +            break;
 +          case 1:
 +            ops.attachIterator(namespace, setting);
 +            fail();
 +            break;
 +          case 2:
 +            ops.checkIteratorConflicts(namespace, setting, EnumSet.of(IteratorScope.scan));
 +            fail();
 +            break;
 +          case 3:
 +            ops.delete(namespace);
 +            fail();
 +            break;
 +          case 4:
 +            ops.getIteratorSetting(namespace, "thing", IteratorScope.scan);
 +            fail();
 +            break;
 +          case 5:
 +            ops.getProperties(namespace);
 +            fail();
 +            break;
 +          case 6:
 +            ops.listConstraints(namespace);
 +            fail();
 +            break;
 +          case 7:
 +            ops.listIterators(namespace);
 +            fail();
 +            break;
 +          case 8:
 +            ops.removeConstraint(namespace, 1);
 +            fail();
 +            break;
 +          case 9:
 +            ops.removeIterator(namespace, "thing", EnumSet.allOf(IteratorScope.class));
 +            fail();
 +            break;
 +          case 10:
 +            ops.removeProperty(namespace, "a");
 +            fail();
 +            break;
 +          case 11:
 +            ops.rename(namespace, namespace + "2");
 +            fail();
 +            break;
 +          case 12:
 +            ops.setProperty(namespace, "k", "v");
 +            fail();
 +            break;
 +          case 13:
 +            ops.testClassLoad(namespace, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName());
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(14, i); // check test integrity
 +            assertEquals(14, numRun); // check test integrity
 +            break NAMESPACENOTFOUND;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof NamespaceNotFoundException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +
 +    // namespace operations that should throw a NamespaceExistsException
 +    numRun = 0;
 +    NAMESPACEEXISTS: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.create(namespace + "0");
 +            ops.create(namespace + "0"); // should fail here
 +            fail();
 +            break;
 +          case 1:
 +            ops.create(namespace + i + "_1");
 +            ops.create(namespace + i + "_2");
 +            ops.rename(namespace + i + "_1", namespace + i + "_2"); // should fail here
 +            fail();
 +            break;
 +          case 2:
 +            ops.create(Namespaces.DEFAULT_NAMESPACE);
 +            fail();
 +            break;
 +          case 3:
 +            ops.create(Namespaces.ACCUMULO_NAMESPACE);
 +            fail();
 +            break;
 +          case 4:
 +            ops.create(namespace + i + "_1");
 +            ops.rename(namespace + i + "_1", Namespaces.DEFAULT_NAMESPACE); // should fail here
 +            fail();
 +            break;
 +          case 5:
 +            ops.create(namespace + i + "_1");
 +            ops.rename(namespace + i + "_1", Namespaces.ACCUMULO_NAMESPACE); // should fail here
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(6, i); // check test integrity
 +            assertEquals(6, numRun); // check test integrity
 +            break NAMESPACEEXISTS;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof NamespaceExistsException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +  }
 +
 +  private boolean checkTableHasProp(String t, String propKey, String propVal) {
 +    return checkHasProperty(t, propKey, propVal, true);
 +  }
 +
 +  private boolean checkNamespaceHasProp(String n, String propKey, String propVal) {
 +    return checkHasProperty(n, propKey, propVal, false);
 +  }
 +
 +  private boolean checkHasProperty(String name, String propKey, String propVal, boolean nameIsTable) {
 +    try {
 +      Iterable<Entry<String,String>> iterable = nameIsTable ? c.tableOperations().getProperties(name) : c.namespaceOperations().getProperties(name);
 +      for (Entry<String,String> e : iterable)
 +        if (propKey.equals(e.getKey()))
 +          return propVal.equals(e.getValue());
 +      return false;
 +    } catch (Exception e) {
 +      fail();
 +      return false;
 +    }
 +  }
 +
 +  public static class SimpleFilter extends Filter {
 +    @Override
 +    public boolean accept(Key k, Value v) {
 +      if (k.getColumnFamily().toString().equals("a"))
 +        return false;
 +      return true;
 +    }
 +  }
 +
 +  private void expectPermissionDenied(AccumuloSecurityException sec) {
 +    assertEquals(SecurityErrorCode.class, sec.getSecurityErrorCode().getClass());
 +    switch (sec.getSecurityErrorCode()) {
 +      case PERMISSION_DENIED:
 +        break;
 +      default:
 +        fail();
 +    }
 +  }
 +
 +}
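The exception-verification pattern used throughout the test above (an unbounded `for` over numbered cases, a labeled `break` out of the loop, and a `numRun` counter asserted afterward for test integrity) can be distilled into a self-contained sketch. Everything below is an illustrative stand-in, not Accumulo code, and the integrity check is moved to the caller for brevity:

```java
public class ExceptionLoopSketch {
  static class NotFoundException extends Exception {}

  // Stand-in for the Accumulo operations exercised per case; every case
  // is expected to fail the same way
  static void op(int caseNumber) throws NotFoundException {
    throw new NotFoundException();
  }

  static int runCases(int expectedCases) throws Exception {
    int numRun = 0;
    CASES: for (int i = 0;; ++i) {
      try {
        if (i >= expectedCases) {
          break CASES; // break out of the otherwise-infinite loop
        }
        op(i);
        throw new AssertionError("case " + i + " did not throw");
      } catch (NotFoundException e) {
        numRun++; // counted so the caller can assert test integrity
      }
    }
    return numRun;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(runCases(3)); // 3
  }
}
```

The labeled break keeps each case inside one try/catch, so a case that fails to throw is reported with its case number instead of silently falling through.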


[06/10] accumulo git commit: Merge branch '1.7' into 1.8

Posted by el...@apache.org.
Merge branch '1.7' into 1.8


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/d28a3ee3
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/d28a3ee3
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/d28a3ee3

Branch: refs/heads/master
Commit: d28a3ee3e154cc21b38523c254398bb01b7dbec7
Parents: 6027997 661dac3
Author: Josh Elser <el...@apache.org>
Authored: Wed Aug 31 00:00:44 2016 -0400
Committer: Josh Elser <el...@apache.org>
Committed: Wed Aug 31 00:00:44 2016 -0400

----------------------------------------------------------------------
 TESTING.md                                      | 57 +++++++++++---------
 pom.xml                                         | 21 +++++++-
 .../harness/AccumuloClusterHarness.java         |  3 ++
 .../accumulo/harness/SharedMiniClusterBase.java |  3 ++
 .../org/apache/accumulo/test/NamespacesIT.java  |  3 ++
 .../test/categories/AnyClusterTest.java         | 25 +++++++++
 .../test/categories/MiniClusterOnlyTest.java    | 24 +++++++++
 .../accumulo/test/categories/package-info.java  | 21 ++++++++
 .../accumulo/test/functional/ClassLoaderIT.java |  3 ++
 .../test/functional/ConfigurableMacBase.java    |  3 ++
 .../accumulo/test/functional/KerberosIT.java    |  3 ++
 .../test/functional/KerberosProxyIT.java        |  3 ++
 .../test/functional/KerberosRenewalIT.java      |  3 ++
 .../accumulo/test/functional/PermissionsIT.java |  3 ++
 .../accumulo/test/functional/TableIT.java       |  3 ++
 .../test/replication/KerberosReplicationIT.java |  3 ++
 trace/pom.xml                                   |  1 +
 17 files changed, 156 insertions(+), 26 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/TESTING.md
----------------------------------------------------------------------
diff --cc TESTING.md
index 2195108,125110b..9799397
--- a/TESTING.md
+++ b/TESTING.md
@@@ -91,7 -79,9 +79,26 @@@ Use of a standalone cluster can be enab
  providing a Java properties file on the Maven command line. The use of a properties file is recommended since it is
  typically a fixed file per standalone cluster you want to run the tests against.
  
- ### Configuration
+ These tests will always run during the `integration-test` lifecycle phase using `mvn verify`.
+ 
++### Performance tests
++
++Performance tests are a small subset of integration tests which are not activated by default. They allow
++developers to write tests that specifically exercise performance expectations, which may depend on the available
++resources of the host machine. Normal integration tests should be capable of running anywhere, given a lower bound on
++available memory.
++
++These tests are designated using the JUnit Category annotation with the `PerformanceTest` interface in the
++accumulo-test module. See the `PerformanceTest` interface for more information on how to use this to write your
++own performance test.
++
++To invoke the performance tests, activate the `performanceTests` Maven profile in addition to the `integration-test`
++or `verify` Maven lifecycle phase. For example, `mvn verify -PperformanceTests` invokes all of the integration tests:
++both the normal integration tests and the performance tests. There is presently no way to invoke only the performance
++tests without the rest of the integration tests.
++
++
+ ## Configuration for Standalone clusters
  
  The following properties can be used to configure a standalone cluster:
  

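The category mechanism described in the TESTING.md changes above works by pairing a marker interface with an annotation, then having the runner include or exclude test classes by that value. The following is a simplified, self-contained stand-in for JUnit's real `@Category` and the surefire/failsafe `groups` filtering; all class names are illustrative:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.ArrayList;
import java.util.List;

public class CategoryFilterSketch {
  // Simplified stand-in for org.junit.experimental.categories.Category
  @Retention(RetentionPolicy.RUNTIME)
  @interface Category {
    Class<?> value();
  }

  // Marker interfaces mirroring the ones this commit adds under test/categories
  interface MiniClusterOnlyTest {}
  interface AnyClusterTest {}

  @Category(MiniClusterOnlyTest.class)
  static class KerberosIT {}

  @Category(AnyClusterTest.class)
  static class NamespacesIT {}

  // Include only test classes annotated with the requested group, which is
  // conceptually what the "groups"/"excludedGroups" properties drive
  static List<String> select(Class<?> group, Class<?>... tests) {
    List<String> picked = new ArrayList<>();
    for (Class<?> t : tests) {
      Category c = t.getAnnotation(Category.class);
      if (c != null && c.value() == group) {
        picked.add(t.getSimpleName());
      }
    }
    return picked;
  }

  public static void main(String[] args) {
    System.out.println(select(MiniClusterOnlyTest.class, KerberosIT.class, NamespacesIT.class)); // [KerberosIT]
  }
}
```

In the real build, JUnit's `Categories` runner performs this filtering, and Maven passes the fully qualified interface names through the `accumulo.it.groups` and `accumulo.it.excludedGroups` properties.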
http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/pom.xml
----------------------------------------------------------------------
diff --cc pom.xml
index 77e5597,d6393d2..8106dff
--- a/pom.xml
+++ b/pom.xml
@@@ -116,8 -115,10 +116,12 @@@
      <url>https://builds.apache.org/view/A-D/view/Accumulo/</url>
    </ciManagement>
    <properties>
+     <accumulo.anyClusterTests>org.apache.accumulo.test.categories.AnyClusterTest</accumulo.anyClusterTests>
 -    <accumulo.it.excludedGroups />
++    <accumulo.it.excludedGroups>${accumulo.performanceTests}</accumulo.it.excludedGroups>
+     <accumulo.it.groups>${accumulo.anyClusterTests},${accumulo.miniclusterTests}</accumulo.it.groups>
+     <accumulo.miniclusterTests>org.apache.accumulo.test.categories.MiniClusterOnlyTest</accumulo.miniclusterTests>
 +    <!-- Interface used to separate tests with JUnit category -->
 +    <accumulo.performanceTests>org.apache.accumulo.test.PerformanceTest</accumulo.performanceTests>
      <!-- used for filtering the java source with the current version -->
      <accumulo.release.version>${project.version}</accumulo.release.version>
      <assembly.tarLongFileMode>posix</assembly.tarLongFileMode>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
index 7d7b73a,0000000..70d8dc7
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
+++ b/test/src/main/java/org/apache/accumulo/harness/AccumuloClusterHarness.java
@@@ -1,338 -1,0 +1,341 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.harness;
 +
 +import static com.google.common.base.Preconditions.checkState;
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +import static org.junit.Assert.fail;
 +
 +import java.io.IOException;
 +
 +import org.apache.accumulo.cluster.AccumuloCluster;
 +import org.apache.accumulo.cluster.ClusterControl;
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.cluster.ClusterUsers;
 +import org.apache.accumulo.cluster.standalone.StandaloneAccumuloCluster;
 +import org.apache.accumulo.core.client.ClientConfiguration;
 +import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.admin.SecurityOperations;
 +import org.apache.accumulo.core.client.admin.TableOperations;
 +import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.harness.conf.AccumuloClusterConfiguration;
 +import org.apache.accumulo.harness.conf.AccumuloClusterPropertyConfiguration;
 +import org.apache.accumulo.harness.conf.AccumuloMiniClusterConfiguration;
 +import org.apache.accumulo.harness.conf.StandaloneAccumuloClusterConfiguration;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
++import org.apache.accumulo.test.categories.AnyClusterTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.FileSystem;
 +import org.apache.hadoop.fs.Path;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Assume;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * General Integration-Test base class that provides access to an Accumulo instance for testing. This instance could be MAC or a standalone instance.
 + */
++@Category(AnyClusterTest.class)
 +public abstract class AccumuloClusterHarness extends AccumuloITBase implements MiniClusterConfigurationCallback, ClusterUsers {
 +  private static final Logger log = LoggerFactory.getLogger(AccumuloClusterHarness.class);
 +  private static final String TRUE = Boolean.toString(true);
 +
 +  public static enum ClusterType {
 +    MINI, STANDALONE;
 +
 +    public boolean isDynamic() {
 +      return this == MINI;
 +    }
 +  }
 +
 +  private static boolean initialized = false;
 +
 +  protected static AccumuloCluster cluster;
 +  protected static ClusterType type;
 +  protected static AccumuloClusterPropertyConfiguration clusterConf;
 +  protected static TestingKdc krb;
 +
 +  @BeforeClass
 +  public static void setUp() throws Exception {
 +    clusterConf = AccumuloClusterPropertyConfiguration.get();
 +    type = clusterConf.getClusterType();
 +
 +    if (ClusterType.MINI == type && TRUE.equals(System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION))) {
 +      krb = new TestingKdc();
 +      krb.start();
 +      log.info("MiniKdc started");
 +    }
 +
 +    initialized = true;
 +  }
 +
 +  @AfterClass
 +  public static void tearDownKdc() throws Exception {
 +    if (null != krb) {
 +      krb.stop();
 +    }
 +  }
 +
 +  /**
 +   * The {@link TestingKdc} used for this {@link AccumuloCluster}. Might be null.
 +   */
 +  public static TestingKdc getKdc() {
 +    return krb;
 +  }
 +
 +  @Before
 +  public void setupCluster() throws Exception {
 +    // Before we try to instantiate the cluster, check to see if the test even wants to run against this type of cluster
 +    Assume.assumeTrue(canRunTest(type));
 +
 +    switch (type) {
 +      case MINI:
 +        MiniClusterHarness miniClusterHarness = new MiniClusterHarness();
 +        // Intrinsically performs the callback to let tests alter MiniAccumuloConfig and core-site.xml
 +        MiniAccumuloClusterImpl impl = miniClusterHarness.create(this, getAdminToken(), krb);
 +        cluster = impl;
 +        // MAC makes a ClientConf for us, just set it
 +        ((AccumuloMiniClusterConfiguration) clusterConf).setClientConf(impl.getClientConfig());
 +        // Login as the "root" user
 +        if (null != krb) {
 +          ClusterUser rootUser = krb.getRootUser();
 +          // Log in the 'client' user
 +          UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +        }
 +        break;
 +      case STANDALONE:
 +        StandaloneAccumuloClusterConfiguration conf = (StandaloneAccumuloClusterConfiguration) clusterConf;
 +        ClientConfiguration clientConf = conf.getClientConf();
 +        StandaloneAccumuloCluster standaloneCluster = new StandaloneAccumuloCluster(conf.getInstance(), clientConf, conf.getTmpDirectory(), conf.getUsers(),
 +            conf.getAccumuloServerUser());
 +        // If these are provided in the configuration, pass them into the cluster
 +        standaloneCluster.setAccumuloHome(conf.getAccumuloHome());
 +        standaloneCluster.setClientAccumuloConfDir(conf.getClientAccumuloConfDir());
 +        standaloneCluster.setServerAccumuloConfDir(conf.getServerAccumuloConfDir());
 +        standaloneCluster.setHadoopConfDir(conf.getHadoopConfDir());
 +
 +        // For SASL, we need to get the Hadoop configuration files as well otherwise UGI will log in as SIMPLE instead of KERBEROS
 +        Configuration hadoopConfiguration = standaloneCluster.getHadoopConfiguration();
 +        if (clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
 +          UserGroupInformation.setConfiguration(hadoopConfiguration);
 +          // Login as the admin user to start the tests
 +          UserGroupInformation.loginUserFromKeytab(conf.getAdminPrincipal(), conf.getAdminKeytab().getAbsolutePath());
 +        }
 +
 +        // Set the implementation
 +        cluster = standaloneCluster;
 +        break;
 +      default:
 +        throw new RuntimeException("Unhandled type");
 +    }
 +
 +    if (type.isDynamic()) {
 +      cluster.start();
 +    } else {
 +      log.info("Removing tables which appear to be from a previous test run");
 +      cleanupTables();
 +      log.info("Removing users which appear to be from a previous test run");
 +      cleanupUsers();
 +    }
 +
 +    switch (type) {
 +      case MINI:
 +        if (null != krb) {
 +          final String traceTable = Property.TRACE_TABLE.getDefaultValue();
 +          final ClusterUser systemUser = krb.getAccumuloServerUser(), rootUser = krb.getRootUser();
 +
 +          // Login as the trace user
 +          UserGroupInformation.loginUserFromKeytab(systemUser.getPrincipal(), systemUser.getKeytab().getAbsolutePath());
 +
 +          // Open a connector as the system user (ensures the user will exist for us to assign permissions to)
 +          UserGroupInformation.loginUserFromKeytab(systemUser.getPrincipal(), systemUser.getKeytab().getAbsolutePath());
 +          Connector conn = cluster.getConnector(systemUser.getPrincipal(), new KerberosToken());
 +
 +          // Then, log back in as the "root" user and do the grant
 +          UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +          conn = getConnector();
 +
 +          // Create the trace table
 +          conn.tableOperations().create(traceTable);
 +
 +          // Trace user (which is the same kerberos principal as the system user, but using a normal KerberosToken) needs
 +          // to have the ability to read, write and alter the trace table
 +          conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.READ);
 +          conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.WRITE);
 +          conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.ALTER_TABLE);
 +        }
 +        break;
 +      default:
 +        // do nothing
 +    }
 +  }
 +
 +  public void cleanupTables() throws Exception {
 +    final String tablePrefix = this.getClass().getSimpleName() + "_";
 +    final TableOperations tops = getConnector().tableOperations();
 +    for (String table : tops.list()) {
 +      if (table.startsWith(tablePrefix)) {
 +        log.debug("Removing table {}", table);
 +        tops.delete(table);
 +      }
 +    }
 +  }
 +
 +  public void cleanupUsers() throws Exception {
 +    final String userPrefix = this.getClass().getSimpleName();
 +    final SecurityOperations secOps = getConnector().securityOperations();
 +    for (String user : secOps.listLocalUsers()) {
 +      if (user.startsWith(userPrefix)) {
 +        log.info("Dropping local user {}", user);
 +        secOps.dropLocalUser(user);
 +      }
 +    }
 +  }
 +
 +  @After
 +  public void teardownCluster() throws Exception {
 +    if (null != cluster) {
 +      if (type.isDynamic()) {
 +        cluster.stop();
 +      } else {
 +        log.info("Removing tables which appear to be from the current test");
 +        cleanupTables();
 +        log.info("Removing users which appear to be from the current test");
 +        cleanupUsers();
 +      }
 +    }
 +  }
 +
 +  public static AccumuloCluster getCluster() {
 +    checkState(initialized);
 +    return cluster;
 +  }
 +
 +  public static ClusterControl getClusterControl() {
 +    checkState(initialized);
 +    return cluster.getClusterControl();
 +  }
 +
 +  public static ClusterType getClusterType() {
 +    checkState(initialized);
 +    return type;
 +  }
 +
 +  public static String getAdminPrincipal() {
 +    checkState(initialized);
 +    return clusterConf.getAdminPrincipal();
 +  }
 +
 +  public static AuthenticationToken getAdminToken() {
 +    checkState(initialized);
 +    return clusterConf.getAdminToken();
 +  }
 +
 +  @Override
 +  public ClusterUser getAdminUser() {
 +    switch (type) {
 +      case MINI:
 +        if (null == krb) {
 +          PasswordToken passwordToken = (PasswordToken) getAdminToken();
 +          return new ClusterUser(getAdminPrincipal(), new String(passwordToken.getPassword(), UTF_8));
 +        }
 +        return krb.getRootUser();
 +      case STANDALONE:
 +        return new ClusterUser(getAdminPrincipal(), ((StandaloneAccumuloClusterConfiguration) clusterConf).getAdminKeytab());
 +      default:
 +        throw new RuntimeException("Unknown cluster type");
 +    }
 +  }
 +
 +  @Override
 +  public ClusterUser getUser(int offset) {
 +    switch (type) {
 +      case MINI:
 +        if (null != krb) {
 +          // Defer to the TestingKdc when kerberos is on so we can get the keytab instead of a password
 +          return krb.getClientPrincipal(offset);
 +        } else {
 +          // Come up with a mostly unique name
 +          String principal = getClass().getSimpleName() + "_" + testName.getMethodName() + "_" + offset;
 +          // Username and password are the same
 +          return new ClusterUser(principal, principal);
 +        }
 +      case STANDALONE:
 +        return ((StandaloneAccumuloCluster) cluster).getUser(offset);
 +      default:
 +        throw new RuntimeException("Unknown cluster type");
 +    }
 +  }
 +
 +  public static FileSystem getFileSystem() throws IOException {
 +    checkState(initialized);
 +    return cluster.getFileSystem();
 +  }
 +
 +  public static AccumuloClusterConfiguration getClusterConfiguration() {
 +    checkState(initialized);
 +    return clusterConf;
 +  }
 +
 +  public Connector getConnector() {
 +    try {
 +      String princ = getAdminPrincipal();
 +      AuthenticationToken token = getAdminToken();
 +      log.debug("Creating connector as {} with {}", princ, token);
 +      return cluster.getConnector(princ, token);
 +    } catch (Exception e) {
 +      log.error("Could not connect to Accumulo", e);
 +      fail("Could not connect to Accumulo: " + e.getMessage());
 +
 +      throw new RuntimeException("Could not connect to Accumulo", e);
 +    }
 +  }
 +
 +  // TODO Really don't want this here. Will ultimately need to abstract configuration method away from MAConfig
 +  // and change over to something more generic
 +  @Override
 +  public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration hadoopCoreSite) {}
 +
 +  /**
 +   * A test may not be capable of running against a given AccumuloCluster. Implementations can override this method to advertise that they cannot run (or do
 +   * not want to run) the test.
 +   */
 +  public boolean canRunTest(ClusterType type) {
 +    return true;
 +  }
 +
 +  /**
 +   * Tries to give a reasonable directory which can be used to create temporary files for the test. Makes a basic attempt to create the directory if it does not
 +   * already exist.
 +   *
 +   * @return A directory which can be expected to exist on the Cluster's FileSystem
 +   */
 +  public Path getUsableDir() throws IllegalArgumentException, IOException {
 +    return cluster.getTemporaryPath();
 +  }
 +}
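
The category mechanism this commit introduces boils down to JUnit 4 marker interfaces (MiniClusterOnlyTest, AnyClusterTest) referenced from @Category annotations, which surefire/failsafe can then include or exclude by group. Below is a self-contained sketch of how such marker-interface filtering works at the reflection level; the Category annotation and the two test classes are simplified stand-ins for illustration, not JUnit's real implementation.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.ArrayList;
import java.util.List;

public class CategoryFilterSketch {

  // Marker interfaces mirroring the categories added by this commit
  interface MiniClusterOnlyTest {}
  interface AnyClusterTest {}

  // Simplified stand-in for org.junit.experimental.categories.Category
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.TYPE)
  @interface Category {
    Class<?>[] value();
  }

  @Category(MiniClusterOnlyTest.class)
  static class NamespacesIT {}

  @Category(AnyClusterTest.class)
  static class ClassLoaderIT {}

  /** Returns the simple names of candidate classes tagged with the requested category. */
  static List<String> select(Class<?> wanted, Class<?>... candidates) {
    List<String> matches = new ArrayList<>();
    for (Class<?> c : candidates) {
      Category cat = c.getAnnotation(Category.class);
      if (cat == null) {
        continue;
      }
      for (Class<?> tagged : cat.value()) {
        if (wanted.isAssignableFrom(tagged)) {
          matches.add(c.getSimpleName());
        }
      }
    }
    return matches;
  }

  public static void main(String[] args) {
    // prints [NamespacesIT]: only the minicluster-only test matches
    System.out.println(select(MiniClusterOnlyTest.class, NamespacesIT.class, ClassLoaderIT.class));
  }
}
```

In the real build, the same grouping is driven by the surefire/failsafe `groups` property rather than hand-rolled reflection, so the out-of-the-box invocation is unchanged.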

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
index 544b5de,0000000..0e486da
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
+++ b/test/src/main/java/org/apache/accumulo/harness/SharedMiniClusterBase.java
@@@ -1,204 -1,0 +1,207 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.harness;
 +
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.File;
 +import java.io.IOException;
 +import java.util.Random;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.cluster.ClusterUsers;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.security.UserGroupInformation;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +/**
 + * Convenience class which starts a single MAC instance for a test to leverage.
 + *
 + * There isn't a good way to build this off of the {@link AccumuloClusterHarness} (as would be the logical place) because we need to start the
 + * MiniAccumuloCluster in a static BeforeClass-annotated method. Because it is static and invoked before any other BeforeClass methods in the implementation,
 +   * the actual test classes can't expose any information to tell the base class that it should use one-MAC-per-class semantics.
 + *
 + * Implementations of this class must be sure to invoke {@link #startMiniCluster()} or {@link #startMiniClusterWithConfig(MiniClusterConfigurationCallback)} in
 + * a method annotated with the {@link org.junit.BeforeClass} JUnit annotation and {@link #stopMiniCluster()} in a method annotated with the
 + * {@link org.junit.AfterClass} JUnit annotation.
 + */
++@Category(MiniClusterOnlyTest.class)
 +public abstract class SharedMiniClusterBase extends AccumuloITBase implements ClusterUsers {
 +  private static final Logger log = LoggerFactory.getLogger(SharedMiniClusterBase.class);
 +  public static final String TRUE = Boolean.toString(true);
 +
 +  private static String principal = "root";
 +  private static String rootPassword;
 +  private static AuthenticationToken token;
 +  private static MiniAccumuloClusterImpl cluster;
 +  private static TestingKdc krb;
 +
 +  /**
 +   * Starts a MiniAccumuloCluster instance with the default configuration.
 +   */
 +  public static void startMiniCluster() throws Exception {
 +    startMiniClusterWithConfig(MiniClusterConfigurationCallback.NO_CALLBACK);
 +  }
 +
 +  /**
 +   * Starts a MiniAccumuloCluster instance with the default configuration but also provides the caller the opportunity to update the configuration before the
 +   * MiniAccumuloCluster is started.
 +   *
 +   * @param miniClusterCallback
 +   *          A callback to configure the minicluster before it is started.
 +   */
 +  public static void startMiniClusterWithConfig(MiniClusterConfigurationCallback miniClusterCallback) throws Exception {
 +    File baseDir = new File(System.getProperty("user.dir") + "/target/mini-tests");
 +    assertTrue(baseDir.mkdirs() || baseDir.isDirectory());
 +
 +    // Make a shared MAC instance instead of spinning up one per test method
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +
 +    if (TRUE.equals(System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION))) {
 +      krb = new TestingKdc();
 +      krb.start();
 +      // Enable krb auth
 +      Configuration conf = new Configuration(false);
 +      conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +      UserGroupInformation.setConfiguration(conf);
 +      // Login as the client
 +      ClusterUser rootUser = krb.getRootUser();
 +      // Get the krb token
 +      UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +      token = new KerberosToken();
 +    } else {
 +      rootPassword = "rootPasswordShared1";
 +      token = new PasswordToken(rootPassword);
 +    }
 +
 +    cluster = harness.create(SharedMiniClusterBase.class.getName(), System.currentTimeMillis() + "_" + new Random().nextInt(Short.MAX_VALUE), token,
 +        miniClusterCallback, krb);
 +    cluster.start();
 +
 +    if (null != krb) {
 +      final String traceTable = Property.TRACE_TABLE.getDefaultValue();
 +      final ClusterUser systemUser = krb.getAccumuloServerUser(), rootUser = krb.getRootUser();
 +      // Log in and open a connector as the system user (ensures the user will exist for us to assign permissions to)
 +      UserGroupInformation.loginUserFromKeytab(systemUser.getPrincipal(), systemUser.getKeytab().getAbsolutePath());
 +      Connector conn = cluster.getConnector(systemUser.getPrincipal(), new KerberosToken());
 +
 +      // Then, log back in as the "root" user and do the grant
 +      UserGroupInformation.loginUserFromKeytab(rootUser.getPrincipal(), rootUser.getKeytab().getAbsolutePath());
 +      conn = cluster.getConnector(principal, token);
 +
 +      // Create the trace table
 +      conn.tableOperations().create(traceTable);
 +
 +      // Trace user (which is the same kerberos principal as the system user, but using a normal KerberosToken) needs
 +      // to have the ability to read, write and alter the trace table
 +      conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.READ);
 +      conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.WRITE);
 +      conn.securityOperations().grantTablePermission(systemUser.getPrincipal(), traceTable, TablePermission.ALTER_TABLE);
 +    }
 +  }
 +
 +  /**
 +   * Stops the MiniAccumuloCluster and related services if they are running.
 +   */
 +  public static void stopMiniCluster() throws Exception {
 +    if (null != cluster) {
 +      try {
 +        cluster.stop();
 +      } catch (Exception e) {
 +        log.error("Failed to stop minicluster", e);
 +      }
 +    }
 +    if (null != krb) {
 +      try {
 +        krb.stop();
 +      } catch (Exception e) {
 +        log.error("Failed to stop KDC", e);
 +      }
 +    }
 +  }
 +
 +  public static String getRootPassword() {
 +    return rootPassword;
 +  }
 +
 +  public static AuthenticationToken getToken() {
 +    if (token instanceof KerberosToken) {
 +      try {
 +        UserGroupInformation.loginUserFromKeytab(getPrincipal(), krb.getRootUser().getKeytab().getAbsolutePath());
 +      } catch (IOException e) {
 +        throw new RuntimeException("Failed to login", e);
 +      }
 +    }
 +    return token;
 +  }
 +
 +  public static String getPrincipal() {
 +    return principal;
 +  }
 +
 +  public static MiniAccumuloClusterImpl getCluster() {
 +    return cluster;
 +  }
 +
 +  public static File getMiniClusterDir() {
 +    return cluster.getConfig().getDir();
 +  }
 +
 +  public static Connector getConnector() {
 +    try {
 +      return getCluster().getConnector(principal, getToken());
 +    } catch (Exception e) {
 +      throw new RuntimeException(e);
 +    }
 +  }
 +
 +  public static TestingKdc getKdc() {
 +    return krb;
 +  }
 +
 +  @Override
 +  public ClusterUser getAdminUser() {
 +    if (null == krb) {
 +      return new ClusterUser(getPrincipal(), getRootPassword());
 +    } else {
 +      return krb.getRootUser();
 +    }
 +  }
 +
 +  @Override
 +  public ClusterUser getUser(int offset) {
 +    if (null == krb) {
 +      String user = SharedMiniClusterBase.class.getName() + "_" + testName.getMethodName() + "_" + offset;
 +      // Password is the username
 +      return new ClusterUser(user, user);
 +    } else {
 +      return krb.getClientPrincipal(offset);
 +    }
 +  }
 +}
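
SharedMiniClusterBase's contract, per its javadoc above, is one static cluster per test class: start it in a @BeforeClass method, stop it in an @AfterClass method, and let every test method share the instance. The following minimal, self-contained sketch shows that lifecycle using a hypothetical FakeCluster in place of MiniAccumuloClusterImpl; the idempotency guard is for illustration only (the real startMiniCluster is simply expected to be called once per class).

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedClusterSketch {
  // Counts starts so we can verify that all "test methods" share one instance
  static final AtomicInteger starts = new AtomicInteger();
  private static FakeCluster cluster;

  // Hypothetical stand-in for MiniAccumuloClusterImpl
  static class FakeCluster {
    void start() { starts.incrementAndGet(); }
    void stop() {}
  }

  // Would be invoked from a @BeforeClass-annotated method in a real test
  public static void startMiniCluster() {
    if (cluster == null) { // guard for illustration; call this once per class
      cluster = new FakeCluster();
      cluster.start();
    }
  }

  // Would be invoked from an @AfterClass-annotated method in a real test
  public static void stopMiniCluster() {
    if (cluster != null) {
      cluster.stop();
      cluster = null;
    }
  }

  public static void main(String[] args) {
    startMiniCluster();
    startMiniCluster(); // a second "test method" reuses the running cluster
    stopMiniCluster();
    System.out.println(starts.get()); // prints 1: one cluster served both calls
  }
}
```

This is why the class is annotated @Category(MiniClusterOnlyTest.class): a statically shared MiniAccumuloCluster cannot be swapped for a standalone cluster at runtime.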

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
index cdb3d00,0000000..b9f0ae5
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/NamespacesIT.java
@@@ -1,1419 -1,0 +1,1422 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test;
 +
 +import static java.nio.charset.StandardCharsets.UTF_8;
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertNotNull;
 +import static org.junit.Assert.assertNull;
 +import static org.junit.Assert.assertTrue;
 +import static org.junit.Assert.fail;
 +
 +import java.io.IOException;
 +import java.util.Arrays;
 +import java.util.Collections;
 +import java.util.EnumSet;
 +import java.util.Iterator;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +import java.util.SortedSet;
 +import java.util.TreeSet;
 +import java.util.concurrent.TimeUnit;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.IteratorSetting;
 +import org.apache.accumulo.core.client.MutationsRejectedException;
 +import org.apache.accumulo.core.client.NamespaceExistsException;
 +import org.apache.accumulo.core.client.NamespaceNotEmptyException;
 +import org.apache.accumulo.core.client.NamespaceNotFoundException;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.TableExistsException;
 +import org.apache.accumulo.core.client.TableNotFoundException;
 +import org.apache.accumulo.core.client.admin.NamespaceOperations;
 +import org.apache.accumulo.core.client.admin.NewTableConfiguration;
 +import org.apache.accumulo.core.client.admin.TableOperations;
 +import org.apache.accumulo.core.client.impl.Namespaces;
 +import org.apache.accumulo.core.client.impl.Tables;
 +import org.apache.accumulo.core.client.impl.thrift.TableOperation;
 +import org.apache.accumulo.core.client.impl.thrift.TableOperationExceptionType;
 +import org.apache.accumulo.core.client.impl.thrift.ThriftTableOperationException;
 +import org.apache.accumulo.core.client.security.SecurityErrorCode;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Range;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.iterators.Filter;
 +import org.apache.accumulo.core.iterators.IteratorUtil.IteratorScope;
 +import org.apache.accumulo.core.iterators.SortedKeyValueIterator;
 +import org.apache.accumulo.core.iterators.user.VersioningIterator;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.metadata.RootTable;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.security.NamespacePermission;
 +import org.apache.accumulo.core.security.SystemPermission;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.examples.simple.constraints.NumericValueConstraint;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.io.Text;
 +import org.junit.After;
 +import org.junit.Assume;
 +import org.junit.Before;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +
 +import static com.google.common.util.concurrent.Uninterruptibles.sleepUninterruptibly;
 +
 +// Testing default namespace configuration with inheritance requires altering the system state and restoring it back to normal
 +// Punt on this for now and just let it use a minicluster.
++@Category(MiniClusterOnlyTest.class)
 +public class NamespacesIT extends AccumuloClusterHarness {
 +
 +  private Connector c;
 +  private String namespace;
 +
 +  @Override
 +  public int defaultTimeoutSeconds() {
 +    return 60;
 +  }
 +
 +  @Before
 +  public void setupConnectorAndNamespace() throws Exception {
 +    Assume.assumeTrue(ClusterType.MINI == getClusterType());
 +
 +    // prepare a unique namespace and get a new root connector for each test
 +    c = getConnector();
 +    namespace = "ns_" + getUniqueNames(1)[0];
 +  }
 +
 +  @After
 +  public void swingMjölnir() throws Exception {
 +    if (null == c) {
 +      return;
 +    }
 +    // clean up any added tables, namespaces, and users, after each test
 +    for (String t : c.tableOperations().list())
 +      if (!Tables.qualify(t).getFirst().equals(Namespaces.ACCUMULO_NAMESPACE))
 +        c.tableOperations().delete(t);
 +    assertEquals(3, c.tableOperations().list().size());
 +    for (String n : c.namespaceOperations().list())
 +      if (!n.equals(Namespaces.ACCUMULO_NAMESPACE) && !n.equals(Namespaces.DEFAULT_NAMESPACE))
 +        c.namespaceOperations().delete(n);
 +    assertEquals(2, c.namespaceOperations().list().size());
 +    for (String u : c.securityOperations().listLocalUsers())
 +      if (!getAdminPrincipal().equals(u))
 +        c.securityOperations().dropLocalUser(u);
 +    assertEquals(1, c.securityOperations().listLocalUsers().size());
 +  }
 +
 +  @Test
 +  public void checkReservedNamespaces() throws Exception {
 +    assertEquals(c.namespaceOperations().defaultNamespace(), Namespaces.DEFAULT_NAMESPACE);
 +    assertEquals(c.namespaceOperations().systemNamespace(), Namespaces.ACCUMULO_NAMESPACE);
 +  }
 +
 +  @Test
 +  public void checkBuiltInNamespaces() throws Exception {
 +    assertTrue(c.namespaceOperations().exists(Namespaces.DEFAULT_NAMESPACE));
 +    assertTrue(c.namespaceOperations().exists(Namespaces.ACCUMULO_NAMESPACE));
 +  }
 +
 +  @Test
 +  public void createTableInDefaultNamespace() throws Exception {
 +    String tableName = "1";
 +    c.tableOperations().create(tableName);
 +    assertTrue(c.tableOperations().exists(tableName));
 +  }
 +
 +  @Test(expected = AccumuloException.class)
 +  public void createTableInAccumuloNamespace() throws Exception {
 +    String tableName = Namespaces.ACCUMULO_NAMESPACE + ".1";
 +    assertFalse(c.tableOperations().exists(tableName));
 +    c.tableOperations().create(tableName); // should fail
 +  }
 +
 +  @Test(expected = AccumuloSecurityException.class)
 +  public void deleteDefaultNamespace() throws Exception {
 +    c.namespaceOperations().delete(Namespaces.DEFAULT_NAMESPACE); // should fail
 +  }
 +
 +  @Test(expected = AccumuloSecurityException.class)
 +  public void deleteAccumuloNamespace() throws Exception {
 +    c.namespaceOperations().delete(Namespaces.ACCUMULO_NAMESPACE); // should fail
 +  }
 +
 +  @Test
 +  public void createTableInMissingNamespace() throws Exception {
 +    String t = namespace + ".1";
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t));
 +    try {
 +      c.tableOperations().create(t);
 +      fail();
 +    } catch (AccumuloException e) {
 +      assertEquals(NamespaceNotFoundException.class.getName(), e.getCause().getClass().getName());
 +      assertFalse(c.namespaceOperations().exists(namespace));
 +      assertFalse(c.tableOperations().exists(t));
 +    }
 +  }
 +
 +  @Test
 +  public void createAndDeleteNamespace() throws Exception {
 +    String t1 = namespace + ".1";
 +    String t2 = namespace + ".2";
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    try {
 +      c.namespaceOperations().delete(namespace);
 +    } catch (NamespaceNotFoundException e) {}
 +    try {
 +      c.tableOperations().delete(t1);
 +    } catch (TableNotFoundException e) {
 +      assertEquals(NamespaceNotFoundException.class.getName(), e.getCause().getClass().getName());
 +    }
 +    c.namespaceOperations().create(namespace);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.tableOperations().create(t1);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.tableOperations().create(t2);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertTrue(c.tableOperations().exists(t2));
 +    c.tableOperations().delete(t1);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertTrue(c.tableOperations().exists(t2));
 +    c.tableOperations().delete(t2);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.namespaceOperations().delete(namespace);
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +  }
 +
 +  @Test(expected = NamespaceNotEmptyException.class)
 +  public void deleteNonEmptyNamespace() throws Exception {
 +    String tableName1 = namespace + ".1";
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(tableName1));
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(tableName1);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(tableName1));
 +    c.namespaceOperations().delete(namespace); // should fail
 +  }
 +
 +  @Test
 +  public void verifyPropertyInheritance() throws Exception {
 +    String t0 = "0";
 +    String t1 = namespace + ".1";
 +    String t2 = namespace + ".2";
 +
 +    String k = Property.TABLE_SCAN_MAXMEM.getKey();
 +    String v = "42K";
 +
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1);
 +    c.tableOperations().create(t0);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertTrue(c.tableOperations().exists(t0));
 +
 +    // verify no property
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(t1, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(t0, k, v));
 +
 +    // set property and verify
 +    c.namespaceOperations().setProperty(namespace, k, v);
 +    assertTrue(checkNamespaceHasProp(namespace, k, v));
 +    assertTrue(checkTableHasProp(t1, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(t0, k, v));
 +
 +    // add a new table to namespace and verify
 +    assertFalse(c.tableOperations().exists(t2));
 +    c.tableOperations().create(t2);
 +    assertTrue(c.tableOperations().exists(t2));
 +    assertTrue(checkNamespaceHasProp(namespace, k, v));
 +    assertTrue(checkTableHasProp(t1, k, v));
 +    assertTrue(checkTableHasProp(t2, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(t0, k, v));
 +
 +    // remove property and verify
 +    c.namespaceOperations().removeProperty(namespace, k);
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(t1, k, v));
 +    assertFalse(checkTableHasProp(t2, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(t0, k, v));
 +
 +    // set property on default namespace and verify
 +    c.namespaceOperations().setProperty(Namespaces.DEFAULT_NAMESPACE, k, v);
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(t1, k, v));
 +    assertFalse(checkTableHasProp(t2, k, v));
 +    assertTrue(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertTrue(checkTableHasProp(t0, k, v));
 +
 +    // test that table properties override namespace properties
 +    String k2 = Property.TABLE_FILE_MAX.getKey();
 +    String v2 = "42";
 +    String table_v2 = "13";
 +
 +    // set new property on some
 +    c.namespaceOperations().setProperty(namespace, k2, v2);
 +    c.tableOperations().setProperty(t2, k2, table_v2);
 +    assertTrue(checkNamespaceHasProp(namespace, k2, v2));
 +    assertTrue(checkTableHasProp(t1, k2, v2));
 +    assertTrue(checkTableHasProp(t2, k2, table_v2));
 +
 +    c.tableOperations().delete(t1);
 +    c.tableOperations().delete(t2);
 +    c.tableOperations().delete(t0);
 +    c.namespaceOperations().delete(namespace);
 +  }
 +
 +  @Test
 +  public void verifyIteratorInheritance() throws Exception {
 +    String t1 = namespace + ".1";
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1);
 +    String iterName = namespace + "_iter";
 +
 +    BatchWriter bw = c.createBatchWriter(t1, new BatchWriterConfig());
 +    Mutation m = new Mutation("r");
 +    m.put("a", "b", new Value("abcde".getBytes()));
 +    bw.addMutation(m);
 +    bw.flush();
 +    bw.close();
 +
 +    IteratorSetting setting = new IteratorSetting(250, iterName, SimpleFilter.class.getName());
 +
 +    // verify can see inserted entry
 +    Scanner s = c.createScanner(t1, Authorizations.EMPTY);
 +    assertTrue(s.iterator().hasNext());
 +    assertFalse(c.namespaceOperations().listIterators(namespace).containsKey(iterName));
 +    assertFalse(c.tableOperations().listIterators(t1).containsKey(iterName));
 +
 +    // verify entry is filtered out (also, verify conflict checking API)
 +    c.namespaceOperations().checkIteratorConflicts(namespace, setting, EnumSet.allOf(IteratorScope.class));
 +    c.namespaceOperations().attachIterator(namespace, setting);
 +    sleepUninterruptibly(2, TimeUnit.SECONDS);
 +    try {
 +      c.namespaceOperations().checkIteratorConflicts(namespace, setting, EnumSet.allOf(IteratorScope.class));
 +      fail();
 +    } catch (AccumuloException e) {
 +      assertEquals(IllegalArgumentException.class.getName(), e.getCause().getClass().getName());
 +    }
 +    IteratorSetting setting2 = c.namespaceOperations().getIteratorSetting(namespace, setting.getName(), IteratorScope.scan);
 +    assertEquals(setting, setting2);
 +    assertTrue(c.namespaceOperations().listIterators(namespace).containsKey(iterName));
 +    assertTrue(c.tableOperations().listIterators(t1).containsKey(iterName));
 +    s = c.createScanner(t1, Authorizations.EMPTY);
 +    assertFalse(s.iterator().hasNext());
 +
 +    // verify can see inserted entry again
 +    c.namespaceOperations().removeIterator(namespace, setting.getName(), EnumSet.allOf(IteratorScope.class));
 +    sleepUninterruptibly(2, TimeUnit.SECONDS);
 +    assertFalse(c.namespaceOperations().listIterators(namespace).containsKey(iterName));
 +    assertFalse(c.tableOperations().listIterators(t1).containsKey(iterName));
 +    s = c.createScanner(t1, Authorizations.EMPTY);
 +    assertTrue(s.iterator().hasNext());
 +  }
 +
 +  @Test
 +  public void cloneTable() throws Exception {
 +    String namespace2 = namespace + "_clone";
 +    String t1 = namespace + ".1";
 +    String t2 = namespace + ".2";
 +    String t3 = namespace2 + ".2";
 +    String k1 = Property.TABLE_FILE_MAX.getKey();
 +    String k2 = Property.TABLE_FILE_REPLICATION.getKey();
 +    String k1v1 = "55";
 +    String k1v2 = "66";
 +    String k2v1 = "5";
 +    String k2v2 = "6";
 +
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1);
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertFalse(c.namespaceOperations().exists(namespace2));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertFalse(c.tableOperations().exists(t3));
 +
 +    try {
 +      // try to clone before namespace exists
 +      c.tableOperations().clone(t1, t3, false, null, null); // should fail
 +      fail();
 +    } catch (AccumuloException e) {
 +      assertEquals(NamespaceNotFoundException.class.getName(), e.getCause().getClass().getName());
 +    }
 +
 +    // try to clone when target tables exist
 +    c.namespaceOperations().create(namespace2);
 +    c.tableOperations().create(t2);
 +    c.tableOperations().create(t3);
 +    for (String t : Arrays.asList(t2, t3)) {
 +      try {
 +        c.tableOperations().clone(t1, t, false, null, null); // should fail
 +        fail();
 +      } catch (TableExistsException e) {
 +        c.tableOperations().delete(t);
 +      }
 +    }
 +
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertTrue(c.namespaceOperations().exists(namespace2));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertFalse(c.tableOperations().exists(t3));
 +
 +    // set property with different values in two namespaces and a separate property with different values on the table and both namespaces
 +    assertFalse(checkNamespaceHasProp(namespace, k1, k1v1));
 +    assertFalse(checkNamespaceHasProp(namespace2, k1, k1v2));
 +    assertFalse(checkTableHasProp(t1, k1, k1v1));
 +    assertFalse(checkTableHasProp(t1, k1, k1v2));
 +    assertFalse(checkNamespaceHasProp(namespace, k2, k2v1));
 +    assertFalse(checkNamespaceHasProp(namespace2, k2, k2v1));
 +    assertFalse(checkTableHasProp(t1, k2, k2v1));
 +    assertFalse(checkTableHasProp(t1, k2, k2v2));
 +    c.namespaceOperations().setProperty(namespace, k1, k1v1);
 +    c.namespaceOperations().setProperty(namespace2, k1, k1v2);
 +    c.namespaceOperations().setProperty(namespace, k2, k2v1);
 +    c.namespaceOperations().setProperty(namespace2, k2, k2v1);
 +    c.tableOperations().setProperty(t1, k2, k2v2);
 +    assertTrue(checkNamespaceHasProp(namespace, k1, k1v1));
 +    assertTrue(checkNamespaceHasProp(namespace2, k1, k1v2));
 +    assertTrue(checkTableHasProp(t1, k1, k1v1));
 +    assertFalse(checkTableHasProp(t1, k1, k1v2));
 +    assertTrue(checkNamespaceHasProp(namespace, k2, k2v1));
 +    assertTrue(checkNamespaceHasProp(namespace2, k2, k2v1));
 +    assertFalse(checkTableHasProp(t1, k2, k2v1));
 +    assertTrue(checkTableHasProp(t1, k2, k2v2));
 +
 +    // clone twice, once in same namespace, once in another
 +    for (String t : Arrays.asList(t2, t3))
 +      c.tableOperations().clone(t1, t, false, null, null);
 +
 +    assertTrue(c.namespaceOperations().exists(namespace2));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertTrue(c.tableOperations().exists(t2));
 +    assertTrue(c.tableOperations().exists(t3));
 +
 +    // verify the properties got transferred
 +    assertTrue(checkTableHasProp(t1, k1, k1v1));
 +    assertTrue(checkTableHasProp(t2, k1, k1v1));
 +    assertTrue(checkTableHasProp(t3, k1, k1v2));
 +    assertTrue(checkTableHasProp(t1, k2, k2v2));
 +    assertTrue(checkTableHasProp(t2, k2, k2v2));
 +    assertTrue(checkTableHasProp(t3, k2, k2v2));
 +  }
 +
 +  @Test
 +  public void renameNamespaceWithTable() throws Exception {
 +    String namespace2 = namespace + "_renamed";
 +    String t1 = namespace + ".t";
 +    String t2 = namespace2 + ".t";
 +
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1);
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertFalse(c.namespaceOperations().exists(namespace2));
 +    assertFalse(c.tableOperations().exists(t2));
 +
 +    String namespaceId = c.namespaceOperations().namespaceIdMap().get(namespace);
 +    String tableId = c.tableOperations().tableIdMap().get(t1);
 +
 +    c.namespaceOperations().rename(namespace, namespace2);
 +    assertFalse(c.namespaceOperations().exists(namespace));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertTrue(c.namespaceOperations().exists(namespace2));
 +    assertTrue(c.tableOperations().exists(t2));
 +
 +    // verify id's didn't change
 +    String namespaceId2 = c.namespaceOperations().namespaceIdMap().get(namespace2);
 +    String tableId2 = c.tableOperations().tableIdMap().get(t2);
 +
 +    assertEquals(namespaceId, namespaceId2);
 +    assertEquals(tableId, tableId2);
 +  }
 +
 +  @Test
 +  public void verifyConstraintInheritance() throws Exception {
 +    String t1 = namespace + ".1";
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t1, new NewTableConfiguration().withoutDefaultIterators());
 +    String constraintClassName = NumericValueConstraint.class.getName();
 +
 +    assertFalse(c.namespaceOperations().listConstraints(namespace).containsKey(constraintClassName));
 +    assertFalse(c.tableOperations().listConstraints(t1).containsKey(constraintClassName));
 +
 +    c.namespaceOperations().addConstraint(namespace, constraintClassName);
 +    boolean passed = false;
 +    for (int i = 0; i < 5; i++) {
 +      if (!c.namespaceOperations().listConstraints(namespace).containsKey(constraintClassName)) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      if (!c.tableOperations().listConstraints(t1).containsKey(constraintClassName)) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      passed = true;
 +      break;
 +    }
 +    assertTrue("Failed to observe newly-added constraint", passed);
 +
 +    passed = false;
 +    Integer namespaceNum = null;
 +    for (int i = 0; i < 5; i++) {
 +      namespaceNum = c.namespaceOperations().listConstraints(namespace).get(constraintClassName);
 +      if (null == namespaceNum) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      Integer tableNum = c.tableOperations().listConstraints(t1).get(constraintClassName);
 +      if (null == tableNum) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      assertEquals(namespaceNum, tableNum);
 +      passed = true;
 +      break;
 +    }
 +    assertTrue("Failed to observe constraint in both table and namespace", passed);
 +
 +    Mutation m1 = new Mutation("r1");
 +    Mutation m2 = new Mutation("r2");
 +    Mutation m3 = new Mutation("r3");
 +    m1.put("a", "b", new Value("abcde".getBytes(UTF_8)));
 +    m2.put("e", "f", new Value("123".getBytes(UTF_8)));
 +    m3.put("c", "d", new Value("zyxwv".getBytes(UTF_8)));
 +
 +    passed = false;
 +    for (int i = 0; i < 5; i++) {
 +      BatchWriter bw = c.createBatchWriter(t1, new BatchWriterConfig());
 +      bw.addMutations(Arrays.asList(m1, m2, m3));
 +      try {
 +        bw.close();
 +        Thread.sleep(500);
 +      } catch (MutationsRejectedException e) {
 +        passed = true;
 +        assertEquals(1, e.getConstraintViolationSummaries().size());
 +        assertEquals(2, e.getConstraintViolationSummaries().get(0).getNumberOfViolatingMutations());
 +        break;
 +      }
 +    }
 +
 +    assertTrue("Failed to see mutations rejected after constraint was added", passed);
 +
 +    assertNotNull("Namespace constraint ID should not be null", namespaceNum);
 +    c.namespaceOperations().removeConstraint(namespace, namespaceNum);
 +    passed = false;
 +    for (int i = 0; i < 5; i++) {
 +      if (c.namespaceOperations().listConstraints(namespace).containsKey(constraintClassName)) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      if (c.tableOperations().listConstraints(t1).containsKey(constraintClassName)) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      passed = true;
 +      break;
 +    }
 +    assertTrue("Failed to verify that constraint was removed from namespace and table", passed);
 +
 +    passed = false;
 +    for (int i = 0; i < 5; i++) {
 +      BatchWriter bw = c.createBatchWriter(t1, new BatchWriterConfig());
 +      try {
 +        bw.addMutations(Arrays.asList(m1, m2, m3));
 +        bw.close();
 +      } catch (MutationsRejectedException e) {
 +        Thread.sleep(500);
 +        continue;
 +      }
 +      passed = true;
 +      break;
 +    }
 +    assertTrue("Failed to add mutations that should be allowed", passed);
 +  }
 +
 +  @Test
 +  public void renameTable() throws Exception {
 +    String namespace2 = namespace + "_renamed";
 +    String t1 = namespace + ".1";
 +    String t2 = namespace2 + ".2";
 +    String t3 = namespace + ".3";
 +    String t4 = namespace + ".4";
 +    String t5 = "5";
 +
 +    c.namespaceOperations().create(namespace);
 +    c.namespaceOperations().create(namespace2);
 +
 +    assertTrue(c.namespaceOperations().exists(namespace));
 +    assertTrue(c.namespaceOperations().exists(namespace2));
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertFalse(c.tableOperations().exists(t3));
 +    assertFalse(c.tableOperations().exists(t4));
 +    assertFalse(c.tableOperations().exists(t5));
 +
 +    c.tableOperations().create(t1);
 +
 +    try {
 +      c.tableOperations().rename(t1, t2);
 +      fail();
 +    } catch (AccumuloException e) {
 +      // this is expected, because we don't allow renames across namespaces
 +      assertEquals(ThriftTableOperationException.class.getName(), e.getCause().getClass().getName());
 +      assertEquals(TableOperation.RENAME, ((ThriftTableOperationException) e.getCause()).getOp());
 +      assertEquals(TableOperationExceptionType.INVALID_NAME, ((ThriftTableOperationException) e.getCause()).getType());
 +    }
 +
 +    try {
 +      c.tableOperations().rename(t1, t5);
 +      fail();
 +    } catch (AccumuloException e) {
 +      // this is expected, because we don't allow renames across namespaces
 +      assertEquals(ThriftTableOperationException.class.getName(), e.getCause().getClass().getName());
 +      assertEquals(TableOperation.RENAME, ((ThriftTableOperationException) e.getCause()).getOp());
 +      assertEquals(TableOperationExceptionType.INVALID_NAME, ((ThriftTableOperationException) e.getCause()).getType());
 +    }
 +
 +    assertTrue(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertFalse(c.tableOperations().exists(t3));
 +    assertFalse(c.tableOperations().exists(t4));
 +    assertFalse(c.tableOperations().exists(t5));
 +
 +    // fully qualified rename
 +    c.tableOperations().rename(t1, t3);
 +    assertFalse(c.tableOperations().exists(t1));
 +    assertFalse(c.tableOperations().exists(t2));
 +    assertTrue(c.tableOperations().exists(t3));
 +    assertFalse(c.tableOperations().exists(t4));
 +    assertFalse(c.tableOperations().exists(t5));
 +  }
 +
 +  private void loginAs(ClusterUser user) throws IOException {
 +    user.getToken();
 +  }
 +
 +  /**
 +   * Tests the new namespace permissions as well as the namespace-driven changes to table permissions. For each permission, first verify that the user cannot
 +   * perform the action, then have root grant the permission and verify that the action now succeeds.
 +   */
 +  @Test
 +  public void testPermissions() throws Exception {
 +    ClusterUser user1 = getUser(0), user2 = getUser(1), root = getAdminUser();
 +    String u1 = user1.getPrincipal();
 +    String u2 = user2.getPrincipal();
 +    PasswordToken pass = (null != user1.getPassword() ? new PasswordToken(user1.getPassword()) : null);
 +
 +    String n1 = namespace;
 +    String t1 = n1 + ".1";
 +    String t2 = n1 + ".2";
 +    String t3 = n1 + ".3";
 +
 +    String n2 = namespace + "_2";
 +
 +    loginAs(root);
 +    c.namespaceOperations().create(n1);
 +    c.tableOperations().create(t1);
 +
 +    c.securityOperations().createLocalUser(u1, pass);
 +
 +    loginAs(user1);
 +    Connector user1Con = c.getInstance().getConnector(u1, user1.getToken());
 +
 +    try {
 +      user1Con.tableOperations().create(t2);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.CREATE_TABLE);
 +    loginAs(user1);
 +    user1Con.tableOperations().create(t2);
 +    loginAs(root);
 +    assertTrue(c.tableOperations().list().contains(t2));
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.CREATE_TABLE);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.tableOperations().delete(t1);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.DROP_TABLE);
 +    loginAs(user1);
 +    user1Con.tableOperations().delete(t1);
 +    loginAs(root);
 +    assertTrue(!c.tableOperations().list().contains(t1));
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.DROP_TABLE);
 +
 +    c.tableOperations().create(t3);
 +    BatchWriter bw = c.createBatchWriter(t3, null);
 +    Mutation m = new Mutation("row");
 +    m.put("cf", "cq", "value");
 +    bw.addMutation(m);
 +    bw.close();
 +
 +    loginAs(user1);
 +    Iterator<Entry<Key,Value>> i = user1Con.createScanner(t3, new Authorizations()).iterator();
 +    try {
 +      i.next();
 +      fail();
 +    } catch (RuntimeException e) {
 +      assertEquals(AccumuloSecurityException.class.getName(), e.getCause().getClass().getName());
 +      expectPermissionDenied((AccumuloSecurityException) e.getCause());
 +    }
 +
 +    loginAs(user1);
 +    m = new Mutation(u1);
 +    m.put("cf", "cq", "turtles");
 +    bw = user1Con.createBatchWriter(t3, null);
 +    try {
 +      bw.addMutation(m);
 +      bw.close();
 +      fail();
 +    } catch (MutationsRejectedException e) {
 +      assertEquals(1, e.getSecurityErrorCodes().size());
 +      assertEquals(1, e.getSecurityErrorCodes().entrySet().iterator().next().getValue().size());
 +      switch (e.getSecurityErrorCodes().entrySet().iterator().next().getValue().iterator().next()) {
 +        case PERMISSION_DENIED:
 +          break;
 +        default:
 +          fail();
 +      }
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.READ);
 +    loginAs(user1);
 +    i = user1Con.createScanner(t3, new Authorizations()).iterator();
 +    assertTrue(i.hasNext());
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.READ);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.WRITE);
 +
 +    loginAs(user1);
 +    m = new Mutation(u1);
 +    m.put("cf", "cq", "turtles");
 +    bw = user1Con.createBatchWriter(t3, null);
 +    bw.addMutation(m);
 +    bw.close();
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.WRITE);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.tableOperations().setProperty(t3, Property.TABLE_FILE_MAX.getKey(), "42");
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.ALTER_TABLE);
 +    loginAs(user1);
 +    user1Con.tableOperations().setProperty(t3, Property.TABLE_FILE_MAX.getKey(), "42");
 +    user1Con.tableOperations().removeProperty(t3, Property.TABLE_FILE_MAX.getKey());
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.ALTER_TABLE);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.namespaceOperations().setProperty(n1, Property.TABLE_FILE_MAX.getKey(), "55");
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.ALTER_NAMESPACE);
 +    loginAs(user1);
 +    user1Con.namespaceOperations().setProperty(n1, Property.TABLE_FILE_MAX.getKey(), "42");
 +    user1Con.namespaceOperations().removeProperty(n1, Property.TABLE_FILE_MAX.getKey());
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.ALTER_NAMESPACE);
 +
 +    loginAs(root);
 +    c.securityOperations().createLocalUser(u2, (user2.getPassword() == null ? null : new PasswordToken(user2.getPassword())));
 +    loginAs(user1);
 +    try {
 +      user1Con.securityOperations().grantNamespacePermission(u2, n1, NamespacePermission.ALTER_NAMESPACE);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantNamespacePermission(u1, n1, NamespacePermission.GRANT);
 +    loginAs(user1);
 +    user1Con.securityOperations().grantNamespacePermission(u2, n1, NamespacePermission.ALTER_NAMESPACE);
 +    user1Con.securityOperations().revokeNamespacePermission(u2, n1, NamespacePermission.ALTER_NAMESPACE);
 +    loginAs(root);
 +    c.securityOperations().revokeNamespacePermission(u1, n1, NamespacePermission.GRANT);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.namespaceOperations().create(n2);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantSystemPermission(u1, SystemPermission.CREATE_NAMESPACE);
 +    loginAs(user1);
 +    user1Con.namespaceOperations().create(n2);
 +    loginAs(root);
 +    c.securityOperations().revokeSystemPermission(u1, SystemPermission.CREATE_NAMESPACE);
 +
 +    c.securityOperations().revokeNamespacePermission(u1, n2, NamespacePermission.DROP_NAMESPACE);
 +    loginAs(user1);
 +    try {
 +      user1Con.namespaceOperations().delete(n2);
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantSystemPermission(u1, SystemPermission.DROP_NAMESPACE);
 +    loginAs(user1);
 +    user1Con.namespaceOperations().delete(n2);
 +    loginAs(root);
 +    c.securityOperations().revokeSystemPermission(u1, SystemPermission.DROP_NAMESPACE);
 +
 +    loginAs(user1);
 +    try {
 +      user1Con.namespaceOperations().setProperty(n1, Property.TABLE_FILE_MAX.getKey(), "33");
 +      fail();
 +    } catch (AccumuloSecurityException e) {
 +      expectPermissionDenied(e);
 +    }
 +
 +    loginAs(root);
 +    c.securityOperations().grantSystemPermission(u1, SystemPermission.ALTER_NAMESPACE);
 +    loginAs(user1);
 +    user1Con.namespaceOperations().setProperty(n1, Property.TABLE_FILE_MAX.getKey(), "33");
 +    user1Con.namespaceOperations().removeProperty(n1, Property.TABLE_FILE_MAX.getKey());
 +    loginAs(root);
 +    c.securityOperations().revokeSystemPermission(u1, SystemPermission.ALTER_NAMESPACE);
 +  }
 +
 +  @Test
 +  public void verifySystemPropertyInheritance() throws Exception {
 +    String t1 = "1";
 +    String t2 = namespace + "." + t1;
 +    c.tableOperations().create(t1);
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(t2);
 +
 +    // verify iterator inheritance
 +    _verifySystemPropertyInheritance(t1, t2, Property.TABLE_ITERATOR_PREFIX.getKey() + "scan.sum", "20," + SimpleFilter.class.getName(), false);
 +
 +    // verify constraint inheritance
 +    _verifySystemPropertyInheritance(t1, t2, Property.TABLE_CONSTRAINT_PREFIX.getKey() + "42", NumericValueConstraint.class.getName(), false);
 +
 +    // verify other inheritance
 +    _verifySystemPropertyInheritance(t1, t2, Property.TABLE_LOCALITY_GROUP_PREFIX.getKey() + "dummy", "dummy", true);
 +  }
 +
 +  private void _verifySystemPropertyInheritance(String defaultNamespaceTable, String namespaceTable, String k, String v, boolean systemNamespaceShouldInherit)
 +      throws Exception {
 +    // nobody should have any of these properties yet
 +    assertFalse(c.instanceOperations().getSystemConfiguration().containsValue(v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.ACCUMULO_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(RootTable.NAME, k, v));
 +    assertFalse(checkTableHasProp(MetadataTable.NAME, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(defaultNamespaceTable, k, v));
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(namespaceTable, k, v));
 +
 +    // set the property; everything should inherit it except the accumulo namespace (unless systemNamespaceShouldInherit is true)
 +    c.instanceOperations().setProperty(k, v);
 +    // doesn't take effect immediately, needs time to propagate to tserver's ZooKeeper cache
 +    sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
 +    assertTrue(c.instanceOperations().getSystemConfiguration().containsValue(v));
 +    assertEquals(systemNamespaceShouldInherit, checkNamespaceHasProp(Namespaces.ACCUMULO_NAMESPACE, k, v));
 +    assertEquals(systemNamespaceShouldInherit, checkTableHasProp(RootTable.NAME, k, v));
 +    assertEquals(systemNamespaceShouldInherit, checkTableHasProp(MetadataTable.NAME, k, v));
 +    assertTrue(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertTrue(checkTableHasProp(defaultNamespaceTable, k, v));
 +    assertTrue(checkNamespaceHasProp(namespace, k, v));
 +    assertTrue(checkTableHasProp(namespaceTable, k, v));
 +
 +    // verify it is no longer inherited
 +    c.instanceOperations().removeProperty(k);
 +    // doesn't take effect immediately, needs time to propagate to tserver's ZooKeeper cache
 +    sleepUninterruptibly(250, TimeUnit.MILLISECONDS);
 +    assertFalse(c.instanceOperations().getSystemConfiguration().containsValue(v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.ACCUMULO_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(RootTable.NAME, k, v));
 +    assertFalse(checkTableHasProp(MetadataTable.NAME, k, v));
 +    assertFalse(checkNamespaceHasProp(Namespaces.DEFAULT_NAMESPACE, k, v));
 +    assertFalse(checkTableHasProp(defaultNamespaceTable, k, v));
 +    assertFalse(checkNamespaceHasProp(namespace, k, v));
 +    assertFalse(checkTableHasProp(namespaceTable, k, v));
 +  }
 +
 +  @Test
 +  public void listNamespaces() throws Exception {
 +    SortedSet<String> namespaces = c.namespaceOperations().list();
 +    Map<String,String> map = c.namespaceOperations().namespaceIdMap();
 +    assertEquals(2, namespaces.size());
 +    assertEquals(2, map.size());
 +    assertTrue(namespaces.contains(Namespaces.ACCUMULO_NAMESPACE));
 +    assertTrue(namespaces.contains(Namespaces.DEFAULT_NAMESPACE));
 +    assertFalse(namespaces.contains(namespace));
 +    assertEquals(Namespaces.ACCUMULO_NAMESPACE_ID, map.get(Namespaces.ACCUMULO_NAMESPACE));
 +    assertEquals(Namespaces.DEFAULT_NAMESPACE_ID, map.get(Namespaces.DEFAULT_NAMESPACE));
 +    assertNull(map.get(namespace));
 +
 +    c.namespaceOperations().create(namespace);
 +    namespaces = c.namespaceOperations().list();
 +    map = c.namespaceOperations().namespaceIdMap();
 +    assertEquals(3, namespaces.size());
 +    assertEquals(3, map.size());
 +    assertTrue(namespaces.contains(Namespaces.ACCUMULO_NAMESPACE));
 +    assertTrue(namespaces.contains(Namespaces.DEFAULT_NAMESPACE));
 +    assertTrue(namespaces.contains(namespace));
 +    assertEquals(Namespaces.ACCUMULO_NAMESPACE_ID, map.get(Namespaces.ACCUMULO_NAMESPACE));
 +    assertEquals(Namespaces.DEFAULT_NAMESPACE_ID, map.get(Namespaces.DEFAULT_NAMESPACE));
 +    assertNotNull(map.get(namespace));
 +
 +    c.namespaceOperations().delete(namespace);
 +    namespaces = c.namespaceOperations().list();
 +    map = c.namespaceOperations().namespaceIdMap();
 +    assertEquals(2, namespaces.size());
 +    assertEquals(2, map.size());
 +    assertTrue(namespaces.contains(Namespaces.ACCUMULO_NAMESPACE));
 +    assertTrue(namespaces.contains(Namespaces.DEFAULT_NAMESPACE));
 +    assertFalse(namespaces.contains(namespace));
 +    assertEquals(Namespaces.ACCUMULO_NAMESPACE_ID, map.get(Namespaces.ACCUMULO_NAMESPACE));
 +    assertEquals(Namespaces.DEFAULT_NAMESPACE_ID, map.get(Namespaces.DEFAULT_NAMESPACE));
 +    assertNull(map.get(namespace));
 +  }
 +
 +  @Test
 +  public void loadClass() throws Exception {
 +    assertTrue(c.namespaceOperations().testClassLoad(Namespaces.DEFAULT_NAMESPACE, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName()));
 +    assertFalse(c.namespaceOperations().testClassLoad(Namespaces.DEFAULT_NAMESPACE, "dummy", SortedKeyValueIterator.class.getName()));
 +    try {
 +      c.namespaceOperations().testClassLoad(namespace, "dummy", "dummy");
 +      fail();
 +    } catch (NamespaceNotFoundException e) {
 +      // expected, ignore
 +    }
 +  }
 +
 +  @Test
 +  public void testModifyingPermissions() throws Exception {
 +    String tableName = namespace + ".modify";
 +    c.namespaceOperations().create(namespace);
 +    c.tableOperations().create(tableName);
 +    assertTrue(c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ));
 +    c.securityOperations().revokeTablePermission(c.whoami(), tableName, TablePermission.READ);
 +    assertFalse(c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ));
 +    c.securityOperations().grantTablePermission(c.whoami(), tableName, TablePermission.READ);
 +    assertTrue(c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ));
 +    c.tableOperations().delete(tableName);
 +
 +    try {
 +      c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Has permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().grantTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Grant permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().revokeTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Revoke permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    assertTrue(c.securityOperations().hasNamespacePermission(c.whoami(), namespace, NamespacePermission.READ));
 +    c.securityOperations().revokeNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +    assertFalse(c.securityOperations().hasNamespacePermission(c.whoami(), namespace, NamespacePermission.READ));
 +    c.securityOperations().grantNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +    assertTrue(c.securityOperations().hasNamespacePermission(c.whoami(), namespace, NamespacePermission.READ));
 +
 +    c.namespaceOperations().delete(namespace);
 +
 +    try {
 +      c.securityOperations().hasTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Has permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().grantTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Grant permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().revokeTablePermission(c.whoami(), tableName, TablePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.TABLE_DOESNT_EXIST))
 +        throw new Exception("Revoke permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().hasNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.NAMESPACE_DOESNT_EXIST))
 +        throw new Exception("Has permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().grantNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.NAMESPACE_DOESNT_EXIST))
 +        throw new Exception("Grant permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +    try {
 +      c.securityOperations().revokeNamespacePermission(c.whoami(), namespace, NamespacePermission.READ);
 +      fail();
 +    } catch (Exception e) {
 +      if (!(e instanceof AccumuloSecurityException) || !((AccumuloSecurityException) e).getSecurityErrorCode().equals(SecurityErrorCode.NAMESPACE_DOESNT_EXIST))
 +        throw new Exception("Revoke permission resulted in " + e.getClass().getName(), e);
 +    }
 +
 +  }
 +
 +  @Test
 +  public void verifyTableOperationsExceptions() throws Exception {
 +    String tableName = namespace + ".1";
 +    IteratorSetting setting = new IteratorSetting(200, VersioningIterator.class);
 +    Text a = new Text("a");
 +    Text z = new Text("z");
 +    TableOperations ops = c.tableOperations();
 +
 +    // this one doesn't throw an exception, so don't fail; just check that it works
 +    assertFalse(ops.exists(tableName));
 +
 +    // table operations that should throw an AccumuloException caused by NamespaceNotFoundException
 +    int numRun = 0;
 +    ACCUMULOEXCEPTIONS_NAMESPACENOTFOUND: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.create(tableName);
 +            fail();
 +            break;
 +          case 1:
 +            ops.create("a");
 +            ops.clone("a", tableName, true, Collections.<String,String> emptyMap(), Collections.<String> emptySet());
 +            fail();
 +            break;
 +          case 2:
 +            ops.importTable(tableName, System.getProperty("user.dir") + "/target");
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(3, i); // check test integrity
 +            assertEquals(3, numRun); // check test integrity
 +            break ACCUMULOEXCEPTIONS_NAMESPACENOTFOUND;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof AccumuloException) || !(e.getCause() instanceof NamespaceNotFoundException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +
 +    // table operations that should throw an AccumuloException caused by a TableNotFoundException caused by a NamespaceNotFoundException
 +    // these are here because we didn't declare TableNotFoundException in the API :(
 +    numRun = 0;
 +    ACCUMULOEXCEPTIONS_TABLENOTFOUND: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.removeConstraint(tableName, 0);
 +            fail();
 +            break;
 +          case 1:
 +            ops.removeProperty(tableName, "a");
 +            fail();
 +            break;
 +          case 2:
 +            ops.setProperty(tableName, "a", "b");
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(3, i); // check test integrity
 +            assertEquals(3, numRun); // check test integrity
 +            break ACCUMULOEXCEPTIONS_TABLENOTFOUND;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof AccumuloException) || !(e.getCause() instanceof TableNotFoundException)
 +            || !(e.getCause().getCause() instanceof NamespaceNotFoundException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +
 +    // table operations that should throw a TableNotFoundException caused by NamespaceNotFoundException
 +    numRun = 0;
 +    TABLENOTFOUNDEXCEPTIONS: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.addConstraint(tableName, NumericValueConstraint.class.getName());
 +            fail();
 +            break;
 +          case 1:
 +            ops.addSplits(tableName, new TreeSet<Text>());
 +            fail();
 +            break;
 +          case 2:
 +            ops.attachIterator(tableName, setting);
 +            fail();
 +            break;
 +          case 3:
 +            ops.cancelCompaction(tableName);
 +            fail();
 +            break;
 +          case 4:
 +            ops.checkIteratorConflicts(tableName, setting, EnumSet.allOf(IteratorScope.class));
 +            fail();
 +            break;
 +          case 5:
 +            ops.clearLocatorCache(tableName);
 +            fail();
 +            break;
 +          case 6:
 +            ops.clone(tableName, "2", true, Collections.<String,String> emptyMap(), Collections.<String> emptySet());
 +            fail();
 +            break;
 +          case 7:
 +            ops.compact(tableName, a, z, true, true);
 +            fail();
 +            break;
 +          case 8:
 +            ops.delete(tableName);
 +            fail();
 +            break;
 +          case 9:
 +            ops.deleteRows(tableName, a, z);
 +            fail();
 +            break;
 +          case 10:
 +            ops.splitRangeByTablets(tableName, new Range(), 10);
 +            fail();
 +            break;
 +          case 11:
 +            ops.exportTable(tableName, namespace + "_dir");
 +            fail();
 +            break;
 +          case 12:
 +            ops.flush(tableName, a, z, true);
 +            fail();
 +            break;
 +          case 13:
 +            ops.getDiskUsage(Collections.singleton(tableName));
 +            fail();
 +            break;
 +          case 14:
 +            ops.getIteratorSetting(tableName, "a", IteratorScope.scan);
 +            fail();
 +            break;
 +          case 15:
 +            ops.getLocalityGroups(tableName);
 +            fail();
 +            break;
 +          case 16:
 +            ops.getMaxRow(tableName, Authorizations.EMPTY, a, true, z, true);
 +            fail();
 +            break;
 +          case 17:
 +            ops.getProperties(tableName);
 +            fail();
 +            break;
 +          case 18:
 +            ops.importDirectory(tableName, "", "", false);
 +            fail();
 +            break;
 +          case 19:
 +            ops.testClassLoad(tableName, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName());
 +            fail();
 +            break;
 +          case 20:
 +            ops.listConstraints(tableName);
 +            fail();
 +            break;
 +          case 21:
 +            ops.listIterators(tableName);
 +            fail();
 +            break;
 +          case 22:
 +            ops.listSplits(tableName);
 +            fail();
 +            break;
 +          case 23:
 +            ops.merge(tableName, a, z);
 +            fail();
 +            break;
 +          case 24:
 +            ops.offline(tableName, true);
 +            fail();
 +            break;
 +          case 25:
 +            ops.online(tableName, true);
 +            fail();
 +            break;
 +          case 26:
 +            ops.removeIterator(tableName, "a", EnumSet.of(IteratorScope.scan));
 +            fail();
 +            break;
 +          case 27:
 +            ops.rename(tableName, tableName + "2");
 +            fail();
 +            break;
 +          case 28:
 +            ops.setLocalityGroups(tableName, Collections.<String,Set<Text>> emptyMap());
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(29, i); // check test integrity
 +            assertEquals(29, numRun); // check test integrity
 +            break TABLENOTFOUNDEXCEPTIONS;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof TableNotFoundException) || !(e.getCause() instanceof NamespaceNotFoundException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +  }
 +
 +  @Test
 +  public void verifyNamespaceOperationsExceptions() throws Exception {
 +    IteratorSetting setting = new IteratorSetting(200, VersioningIterator.class);
 +    NamespaceOperations ops = c.namespaceOperations();
 +
 +    // this one doesn't throw an exception, so don't fail; just check that it works
 +    assertFalse(ops.exists(namespace));
 +
 +    // namespace operations that should throw a NamespaceNotFoundException
 +    int numRun = 0;
 +    NAMESPACENOTFOUND: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.addConstraint(namespace, NumericValueConstraint.class.getName());
 +            fail();
 +            break;
 +          case 1:
 +            ops.attachIterator(namespace, setting);
 +            fail();
 +            break;
 +          case 2:
 +            ops.checkIteratorConflicts(namespace, setting, EnumSet.of(IteratorScope.scan));
 +            fail();
 +            break;
 +          case 3:
 +            ops.delete(namespace);
 +            fail();
 +            break;
 +          case 4:
 +            ops.getIteratorSetting(namespace, "thing", IteratorScope.scan);
 +            fail();
 +            break;
 +          case 5:
 +            ops.getProperties(namespace);
 +            fail();
 +            break;
 +          case 6:
 +            ops.listConstraints(namespace);
 +            fail();
 +            break;
 +          case 7:
 +            ops.listIterators(namespace);
 +            fail();
 +            break;
 +          case 8:
 +            ops.removeConstraint(namespace, 1);
 +            fail();
 +            break;
 +          case 9:
 +            ops.removeIterator(namespace, "thing", EnumSet.allOf(IteratorScope.class));
 +            fail();
 +            break;
 +          case 10:
 +            ops.removeProperty(namespace, "a");
 +            fail();
 +            break;
 +          case 11:
 +            ops.rename(namespace, namespace + "2");
 +            fail();
 +            break;
 +          case 12:
 +            ops.setProperty(namespace, "k", "v");
 +            fail();
 +            break;
 +          case 13:
 +            ops.testClassLoad(namespace, VersioningIterator.class.getName(), SortedKeyValueIterator.class.getName());
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(14, i); // check test integrity
 +            assertEquals(14, numRun); // check test integrity
 +            break NAMESPACENOTFOUND;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof NamespaceNotFoundException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +
 +    // namespace operations that should throw a NamespaceExistsException
 +    numRun = 0;
 +    NAMESPACEEXISTS: for (int i = 0;; ++i)
 +      try {
 +        switch (i) {
 +          case 0:
 +            ops.create(namespace + "0");
 +            ops.create(namespace + "0"); // should fail here
 +            fail();
 +            break;
 +          case 1:
 +            ops.create(namespace + i + "_1");
 +            ops.create(namespace + i + "_2");
 +            ops.rename(namespace + i + "_1", namespace + i + "_2"); // should fail here
 +            fail();
 +            break;
 +          case 2:
 +            ops.create(Namespaces.DEFAULT_NAMESPACE);
 +            fail();
 +            break;
 +          case 3:
 +            ops.create(Namespaces.ACCUMULO_NAMESPACE);
 +            fail();
 +            break;
 +          case 4:
 +            ops.create(namespace + i + "_1");
 +            ops.rename(namespace + i + "_1", Namespaces.DEFAULT_NAMESPACE); // should fail here
 +            fail();
 +            break;
 +          case 5:
 +            ops.create(namespace + i + "_1");
 +            ops.rename(namespace + i + "_1", Namespaces.ACCUMULO_NAMESPACE); // should fail here
 +            fail();
 +            break;
 +          default:
 +            // break out of infinite loop
 +            assertEquals(6, i); // check test integrity
 +            assertEquals(6, numRun); // check test integrity
 +            break NAMESPACEEXISTS;
 +        }
 +      } catch (Exception e) {
 +        numRun++;
 +        if (!(e instanceof NamespaceExistsException))
 +          throw new Exception("Case " + i + " resulted in " + e.getClass().getName(), e);
 +      }
 +  }
 +
 +  private boolean checkTableHasProp(String t, String propKey, String propVal) {
 +    return checkHasProperty(t, propKey, propVal, true);
 +  }
 +
 +  private boolean checkNamespaceHasProp(String n, String propKey, String propVal) {
 +    return checkHasProperty(n, propKey, propVal, false);
 +  }
 +
 +  private boolean checkHasProperty(String name, String propKey, String propVal, boolean nameIsTable) {
 +    try {
 +      Iterable<Entry<String,String>> iterable = nameIsTable ? c.tableOperations().getProperties(name) : c.namespaceOperations().getProperties(name);
 +      for (Entry<String,String> e : iterable)
 +        if (propKey.equals(e.getKey()))
 +          return propVal.equals(e.getValue());
 +      return false;
 +    } catch (Exception e) {
 +      fail();
 +      return false;
 +    }
 +  }
 +
 +  public static class SimpleFilter extends Filter {
 +    @Override
 +    public boolean accept(Key k, Value v) {
 +      // drop entries whose column family is "a"
 +      return !k.getColumnFamily().toString().equals("a");
 +    }
 +  }
 +
 +  private void expectPermissionDenied(AccumuloSecurityException sec) {
 +    assertEquals(SecurityErrorCode.class, sec.getSecurityErrorCode().getClass());
 +    switch (sec.getSecurityErrorCode()) {
 +      case PERMISSION_DENIED:
 +        break;
 +      default:
 +        fail();
 +    }
 +  }
 +
 +}
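
The `TABLENOTFOUNDEXCEPTIONS`/`NAMESPACENOTFOUND` blocks above rely on an idiom worth calling out: an infinite `for` loop walks every case of a `switch`, each case is expected to throw, the `catch` counts it, and the `default` arm verifies the counter before a labeled `break` exits the loop. A minimal standalone sketch of that idiom (class name and case bodies are illustrative, not part of the patch):

```java
public class ExhaustiveCaseSketch {
  /** Walks every case; returns how many cases threw as expected. */
  public static int runAll() {
    int numRun = 0;
    // Infinite loop plus labeled break: 'default' is only reached once i
    // passes the last real case, and it checks the counter before exiting.
    CASES: for (int i = 0;; ++i)
      try {
        switch (i) {
          case 0:
            throw new IllegalStateException("case 0 must throw");
          case 1:
            throw new IllegalStateException("case 1 must throw");
          default:
            if (numRun != i) // check test integrity, as NamespacesIT does
              throw new AssertionError("a case completed without throwing");
            break CASES;
        }
      } catch (IllegalStateException e) {
        numRun++; // each case is expected to throw exactly once
      }
    return numRun;
  }

  public static void main(String[] args) {
    System.out.println(runAll()); // prints 2
  }
}
```

The `default` arm doubles as both the loop exit and the integrity check, so adding a new case that fails to throw breaks the counter comparison instead of letting the test pass vacuously.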


[04/10] accumulo git commit: Merge branch '1.7' into 1.8

Posted by el...@apache.org.
http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
index 2fc256b,0000000..c7fc709
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/PermissionsIT.java
@@@ -1,707 -1,0 +1,710 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.functional;
 +
 +import static org.junit.Assert.assertFalse;
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.IOException;
 +import java.util.Arrays;
 +import java.util.HashMap;
 +import java.util.HashSet;
 +import java.util.Iterator;
 +import java.util.List;
 +import java.util.Map;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.AccumuloException;
 +import org.apache.accumulo.core.client.AccumuloSecurityException;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.ClientConfiguration;
 +import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.MutationsRejectedException;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.TableExistsException;
 +import org.apache.accumulo.core.client.TableNotFoundException;
 +import org.apache.accumulo.core.client.security.SecurityErrorCode;
 +import org.apache.accumulo.core.client.security.tokens.AuthenticationToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.security.SystemPermission;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.io.Text;
 +import org.junit.Assume;
 +import org.junit.Before;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +// This test verifies the default permissions, so a clean instance must be used. A shared instance might
 +// not be representative of a fresh installation.
++@Category(MiniClusterOnlyTest.class)
 +public class PermissionsIT extends AccumuloClusterHarness {
 +  private static final Logger log = LoggerFactory.getLogger(PermissionsIT.class);
 +
 +  @Override
 +  public int defaultTimeoutSeconds() {
 +    return 60;
 +  }
 +
 +  @Before
 +  public void limitToMini() throws Exception {
 +    Assume.assumeTrue(ClusterType.MINI == getClusterType());
 +    Connector c = getConnector();
 +    Set<String> users = c.securityOperations().listLocalUsers();
 +    ClusterUser user = getUser(0);
 +    if (users.contains(user.getPrincipal())) {
 +      c.securityOperations().dropLocalUser(user.getPrincipal());
 +    }
 +  }
 +
 +  private void loginAs(ClusterUser user) throws IOException {
 +    // Force a re-login as the provided user
 +    user.getToken();
 +  }
 +
 +  @Test
 +  public void systemPermissionsTest() throws Exception {
 +    ClusterUser testUser = getUser(0), rootUser = getAdminUser();
 +
 +    // verify that the test is being run by root
 +    Connector c = getConnector();
 +    verifyHasOnlyTheseSystemPermissions(c, c.whoami(), SystemPermission.values());
 +
 +    // create the test user
 +    String principal = testUser.getPrincipal();
 +    AuthenticationToken token = testUser.getToken();
 +    PasswordToken passwordToken = null;
 +    if (token instanceof PasswordToken) {
 +      passwordToken = (PasswordToken) token;
 +    }
 +    loginAs(rootUser);
 +    c.securityOperations().createLocalUser(principal, passwordToken);
 +    loginAs(testUser);
 +    Connector test_user_conn = c.getInstance().getConnector(principal, token);
 +    loginAs(rootUser);
 +    verifyHasNoSystemPermissions(c, principal, SystemPermission.values());
 +
 +    // test each permission
 +    for (SystemPermission perm : SystemPermission.values()) {
 +      log.debug("Verifying the " + perm + " permission");
 +
 +      // test permission before and after granting it
 +      String tableNamePrefix = getUniqueNames(1)[0];
 +      testMissingSystemPermission(tableNamePrefix, c, rootUser, test_user_conn, testUser, perm);
 +      loginAs(rootUser);
 +      c.securityOperations().grantSystemPermission(principal, perm);
 +      verifyHasOnlyTheseSystemPermissions(c, principal, perm);
 +      testGrantedSystemPermission(tableNamePrefix, c, rootUser, test_user_conn, testUser, perm);
 +      loginAs(rootUser);
 +      c.securityOperations().revokeSystemPermission(principal, perm);
 +      verifyHasNoSystemPermissions(c, principal, perm);
 +    }
 +  }
 +
 +  static Map<String,String> map(Iterable<Entry<String,String>> i) {
 +    Map<String,String> result = new HashMap<>();
 +    for (Entry<String,String> e : i) {
 +      result.put(e.getKey(), e.getValue());
 +    }
 +    return result;
 +  }
 +
 +  private void testMissingSystemPermission(String tableNamePrefix, Connector root_conn, ClusterUser rootUser, Connector test_user_conn, ClusterUser testUser,
 +      SystemPermission perm) throws Exception {
 +    String tableName, user, password = "password", namespace;
 +    boolean passwordBased = testUser.getPassword() != null;
 +    log.debug("Confirming that the lack of the " + perm + " permission properly restricts the user");
 +
 +    // test permission prior to granting it
 +    switch (perm) {
 +      case CREATE_TABLE:
 +        tableName = tableNamePrefix + "__CREATE_TABLE_WITHOUT_PERM_TEST__";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().create(tableName);
 +          throw new IllegalStateException("Should NOT be able to create a table");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || root_conn.tableOperations().list().contains(tableName))
 +            throw e;
 +        }
 +        break;
 +      case DROP_TABLE:
 +        tableName = tableNamePrefix + "__DROP_TABLE_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.tableOperations().create(tableName);
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().delete(tableName);
 +          throw new IllegalStateException("Should NOT be able to delete a table");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.tableOperations().list().contains(tableName))
 +            throw e;
 +        }
 +        break;
 +      case ALTER_TABLE:
 +        tableName = tableNamePrefix + "__ALTER_TABLE_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.tableOperations().create(tableName);
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().setProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +          throw new IllegalStateException("Should NOT be able to set a table property");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || map(root_conn.tableOperations().getProperties(tableName)).get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +            throw e;
 +        }
 +        loginAs(rootUser);
 +        root_conn.tableOperations().setProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().removeProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey());
 +          throw new IllegalStateException("Should NOT be able to remove a table property");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || !map(root_conn.tableOperations().getProperties(tableName)).get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +            throw e;
 +        }
 +        String table2 = tableName + "2";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.tableOperations().rename(tableName, table2);
 +          throw new IllegalStateException("Should NOT be able to rename a table");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.tableOperations().list().contains(tableName)
 +              || root_conn.tableOperations().list().contains(table2))
 +            throw e;
 +        }
 +        break;
 +      case CREATE_USER:
 +        user = "__CREATE_USER_WITHOUT_PERM_TEST__";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +          throw new IllegalStateException("Should NOT be able to create a user");
 +        } catch (AccumuloSecurityException e) {
 +          AuthenticationToken userToken = testUser.getToken();
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || (userToken instanceof PasswordToken && root_conn.securityOperations().authenticateUser(user, userToken)))
 +            throw e;
 +        }
 +        break;
 +      case DROP_USER:
 +        user = "__DROP_USER_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.securityOperations().dropLocalUser(user);
 +          throw new IllegalStateException("Should NOT be able to delete a user");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.securityOperations().listLocalUsers().contains(user)) {
 +            log.info("Unexpected result testing drop of user " + user + " without permission");
 +            throw e;
 +          }
 +        }
 +        break;
 +      case ALTER_USER:
 +        user = "__ALTER_USER_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.securityOperations().changeUserAuthorizations(user, new Authorizations("A", "B"));
 +          throw new IllegalStateException("Should NOT be able to alter a user");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.securityOperations().getUserAuthorizations(user).isEmpty())
 +            throw e;
 +        }
 +        break;
 +      case SYSTEM:
 +        // test for system permission would go here
 +        break;
 +      case CREATE_NAMESPACE:
 +        namespace = "__CREATE_NAMESPACE_WITHOUT_PERM_TEST__";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().create(namespace);
 +          throw new IllegalStateException("Should NOT be able to create a namespace");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || root_conn.namespaceOperations().list().contains(namespace))
 +            throw e;
 +        }
 +        break;
 +      case DROP_NAMESPACE:
 +        namespace = "__DROP_NAMESPACE_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().create(namespace);
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().delete(namespace);
 +          throw new IllegalStateException("Should NOT be able to delete a namespace");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.namespaceOperations().list().contains(namespace))
 +            throw e;
 +        }
 +        break;
 +      case ALTER_NAMESPACE:
 +        namespace = "__ALTER_NAMESPACE_WITHOUT_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().create(namespace);
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().setProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +          throw new IllegalStateException("Should NOT be able to set a namespace property");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || map(root_conn.namespaceOperations().getProperties(namespace)).get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +            throw e;
 +        }
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().setProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().removeProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey());
 +          throw new IllegalStateException("Should NOT be able to remove a namespace property");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED
 +              || !map(root_conn.namespaceOperations().getProperties(namespace)).get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +            throw e;
 +        }
 +        String namespace2 = namespace + "2";
 +        try {
 +          loginAs(testUser);
 +          test_user_conn.namespaceOperations().rename(namespace, namespace2);
 +          throw new IllegalStateException("Should NOT be able to rename a namespace");
 +        } catch (AccumuloSecurityException e) {
 +          loginAs(rootUser);
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED || !root_conn.namespaceOperations().list().contains(namespace)
 +              || root_conn.namespaceOperations().list().contains(namespace2))
 +            throw e;
 +        }
 +        break;
 +      case OBTAIN_DELEGATION_TOKEN:
 +        ClientConfiguration clientConf = cluster.getClientConfig();
 +        if (clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
 +          // TODO Try to obtain a delegation token without the permission
 +        }
 +        break;
 +      case GRANT:
 +        loginAs(testUser);
 +        try {
 +          test_user_conn.securityOperations().grantSystemPermission(testUser.getPrincipal(), SystemPermission.GRANT);
 +          throw new IllegalStateException("Should NOT be able to grant System.GRANT to yourself");
 +        } catch (AccumuloSecurityException e) {
 +          // Expected
 +          loginAs(rootUser);
 +          assertFalse(root_conn.securityOperations().hasSystemPermission(testUser.getPrincipal(), SystemPermission.GRANT));
 +        }
 +        break;
 +      default:
 +        throw new IllegalArgumentException("Unrecognized System Permission: " + perm);
 +    }
 +  }
 +
 +  private void testGrantedSystemPermission(String tableNamePrefix, Connector root_conn, ClusterUser rootUser, Connector test_user_conn, ClusterUser testUser,
 +      SystemPermission perm) throws Exception {
 +    String tableName, user, password = "password", namespace;
 +    boolean passwordBased = testUser.getPassword() != null;
 +    log.debug("Confirming that the presence of the " + perm + " permission properly permits the user");
 +
 +    // test permission after granting it
 +    switch (perm) {
 +      case CREATE_TABLE:
 +        tableName = tableNamePrefix + "__CREATE_TABLE_WITH_PERM_TEST__";
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().create(tableName);
 +        loginAs(rootUser);
 +        if (!root_conn.tableOperations().list().contains(tableName))
 +          throw new IllegalStateException("Should be able to create a table");
 +        break;
 +      case DROP_TABLE:
 +        tableName = tableNamePrefix + "__DROP_TABLE_WITH_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.tableOperations().create(tableName);
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().delete(tableName);
 +        loginAs(rootUser);
 +        if (root_conn.tableOperations().list().contains(tableName))
 +          throw new IllegalStateException("Should be able to delete a table");
 +        break;
 +      case ALTER_TABLE:
 +        tableName = tableNamePrefix + "__ALTER_TABLE_WITH_PERM_TEST__";
 +        String table2 = tableName + "2";
 +        loginAs(rootUser);
 +        root_conn.tableOperations().create(tableName);
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().setProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +        loginAs(rootUser);
 +        Map<String,String> properties = map(root_conn.tableOperations().getProperties(tableName));
 +        if (!properties.get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +          throw new IllegalStateException("Should be able to set a table property");
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().removeProperty(tableName, Property.TABLE_BLOOM_ERRORRATE.getKey());
 +        loginAs(rootUser);
 +        properties = map(root_conn.tableOperations().getProperties(tableName));
 +        if (properties.get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +          throw new IllegalStateException("Should be able to remove a table property");
 +        loginAs(testUser);
 +        test_user_conn.tableOperations().rename(tableName, table2);
 +        loginAs(rootUser);
 +        if (root_conn.tableOperations().list().contains(tableName) || !root_conn.tableOperations().list().contains(table2))
 +          throw new IllegalStateException("Should be able to rename a table");
 +        break;
 +      case CREATE_USER:
 +        user = "__CREATE_USER_WITH_PERM_TEST__";
 +        loginAs(testUser);
 +        test_user_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        loginAs(rootUser);
 +        if (passwordBased && !root_conn.securityOperations().authenticateUser(user, new PasswordToken(password)))
 +          throw new IllegalStateException("Should be able to create a user");
 +        break;
 +      case DROP_USER:
 +        user = "__DROP_USER_WITH_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        loginAs(testUser);
 +        test_user_conn.securityOperations().dropLocalUser(user);
 +        loginAs(rootUser);
 +        if (passwordBased && root_conn.securityOperations().authenticateUser(user, new PasswordToken(password)))
 +          throw new IllegalStateException("Should be able to delete a user");
 +        break;
 +      case ALTER_USER:
 +        user = "__ALTER_USER_WITH_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.securityOperations().createLocalUser(user, (passwordBased ? new PasswordToken(password) : null));
 +        loginAs(testUser);
 +        test_user_conn.securityOperations().changeUserAuthorizations(user, new Authorizations("A", "B"));
 +        loginAs(rootUser);
 +        if (root_conn.securityOperations().getUserAuthorizations(user).isEmpty())
 +          throw new IllegalStateException("Should be able to alter a user");
 +        break;
 +      case SYSTEM:
 +        // test for system permission would go here
 +        break;
 +      case CREATE_NAMESPACE:
 +        namespace = "__CREATE_NAMESPACE_WITH_PERM_TEST__";
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().create(namespace);
 +        loginAs(rootUser);
 +        if (!root_conn.namespaceOperations().list().contains(namespace))
 +          throw new IllegalStateException("Should be able to create a namespace");
 +        break;
 +      case DROP_NAMESPACE:
 +        namespace = "__DROP_NAMESPACE_WITH_PERM_TEST__";
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().create(namespace);
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().delete(namespace);
 +        loginAs(rootUser);
 +        if (root_conn.namespaceOperations().list().contains(namespace))
 +          throw new IllegalStateException("Should be able to delete a namespace");
 +        break;
 +      case ALTER_NAMESPACE:
 +        namespace = "__ALTER_NAMESPACE_WITH_PERM_TEST__";
 +        String namespace2 = namespace + "2";
 +        loginAs(rootUser);
 +        root_conn.namespaceOperations().create(namespace);
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().setProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey(), "003.14159%");
 +        loginAs(rootUser);
 +        Map<String,String> nsProperties = map(root_conn.namespaceOperations().getProperties(namespace));
 +        if (!nsProperties.get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +          throw new IllegalStateException("Should be able to set a namespace property");
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().removeProperty(namespace, Property.TABLE_BLOOM_ERRORRATE.getKey());
 +        loginAs(rootUser);
 +        nsProperties = map(root_conn.namespaceOperations().getProperties(namespace));
 +        if (nsProperties.get(Property.TABLE_BLOOM_ERRORRATE.getKey()).equals("003.14159%"))
 +          throw new IllegalStateException("Should be able to remove a namespace property");
 +        loginAs(testUser);
 +        test_user_conn.namespaceOperations().rename(namespace, namespace2);
 +        loginAs(rootUser);
 +        if (root_conn.namespaceOperations().list().contains(namespace) || !root_conn.namespaceOperations().list().contains(namespace2))
 +          throw new IllegalStateException("Should be able to rename a namespace");
 +        break;
 +      case OBTAIN_DELEGATION_TOKEN:
 +        ClientConfiguration clientConf = cluster.getClientConfig();
 +        if (clientConf.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
 +          // TODO Try to obtain a delegation token with the permission
 +        }
 +        break;
 +      case GRANT:
 +        loginAs(rootUser);
 +        root_conn.securityOperations().grantSystemPermission(testUser.getPrincipal(), SystemPermission.GRANT);
 +        loginAs(testUser);
 +        test_user_conn.securityOperations().grantSystemPermission(testUser.getPrincipal(), SystemPermission.CREATE_TABLE);
 +        loginAs(rootUser);
 +        assertTrue("Test user should have CREATE_TABLE",
 +            root_conn.securityOperations().hasSystemPermission(testUser.getPrincipal(), SystemPermission.CREATE_TABLE));
 +        assertTrue("Test user should have GRANT", root_conn.securityOperations().hasSystemPermission(testUser.getPrincipal(), SystemPermission.GRANT));
 +        root_conn.securityOperations().revokeSystemPermission(testUser.getPrincipal(), SystemPermission.CREATE_TABLE);
 +        break;
 +      default:
 +        throw new IllegalArgumentException("Unrecognized System Permission: " + perm);
 +    }
 +  }
 +
 +  private void verifyHasOnlyTheseSystemPermissions(Connector root_conn, String user, SystemPermission... perms) throws AccumuloException,
 +      AccumuloSecurityException {
 +    List<SystemPermission> permList = Arrays.asList(perms);
 +    for (SystemPermission p : SystemPermission.values()) {
 +      if (permList.contains(p)) {
 +        // should have these
 +        if (!root_conn.securityOperations().hasSystemPermission(user, p))
 +          throw new IllegalStateException(user + " SHOULD have system permission " + p);
 +      } else {
 +        // should not have these
 +        if (root_conn.securityOperations().hasSystemPermission(user, p))
 +          throw new IllegalStateException(user + " SHOULD NOT have system permission " + p);
 +      }
 +    }
 +  }
 +
 +  private void verifyHasNoSystemPermissions(Connector root_conn, String user, SystemPermission... perms) throws AccumuloException, AccumuloSecurityException {
 +    for (SystemPermission p : perms)
 +      if (root_conn.securityOperations().hasSystemPermission(user, p))
 +        throw new IllegalStateException(user + " SHOULD NOT have system permission " + p);
 +  }
 +
 +  @Test
 +  public void tablePermissionTest() throws Exception {
 +    // create the test user
 +    ClusterUser testUser = getUser(0), rootUser = getAdminUser();
 +
 +    String principal = testUser.getPrincipal();
 +    AuthenticationToken token = testUser.getToken();
 +    PasswordToken passwordToken = null;
 +    if (token instanceof PasswordToken) {
 +      passwordToken = (PasswordToken) token;
 +    }
 +    loginAs(rootUser);
 +    Connector c = getConnector();
 +    c.securityOperations().createLocalUser(principal, passwordToken);
 +    loginAs(testUser);
 +    Connector test_user_conn = c.getInstance().getConnector(principal, token);
 +
 +    // check for read-only access to metadata table
 +    loginAs(rootUser);
 +    verifyHasOnlyTheseTablePermissions(c, c.whoami(), MetadataTable.NAME, TablePermission.READ, TablePermission.ALTER_TABLE);
 +    verifyHasOnlyTheseTablePermissions(c, principal, MetadataTable.NAME, TablePermission.READ);
 +    String tableName = getUniqueNames(1)[0] + "__TABLE_PERMISSION_TEST__";
 +
 +    // test each permission
 +    for (TablePermission perm : TablePermission.values()) {
 +      log.debug("Verifying the " + perm + " permission");
 +
 +      // test permission before and after granting it
 +      createTestTable(c, principal, tableName);
 +      loginAs(testUser);
 +      testMissingTablePermission(test_user_conn, testUser, perm, tableName);
 +      loginAs(rootUser);
 +      c.securityOperations().grantTablePermission(principal, tableName, perm);
 +      verifyHasOnlyTheseTablePermissions(c, principal, tableName, perm);
 +      loginAs(testUser);
 +      testGrantedTablePermission(test_user_conn, testUser, perm, tableName);
 +
 +      loginAs(rootUser);
 +      createTestTable(c, principal, tableName);
 +      c.securityOperations().revokeTablePermission(principal, tableName, perm);
 +      verifyHasNoTablePermissions(c, principal, tableName, perm);
 +    }
 +  }
 +
 +  private void createTestTable(Connector c, String testUser, String tableName) throws Exception, MutationsRejectedException {
 +    if (!c.tableOperations().exists(tableName)) {
 +      // create the test table
 +      c.tableOperations().create(tableName);
 +      // put in some initial data
 +      BatchWriter writer = c.createBatchWriter(tableName, new BatchWriterConfig());
 +      Mutation m = new Mutation(new Text("row"));
 +      m.put(new Text("cf"), new Text("cq"), new Value("val".getBytes()));
 +      writer.addMutation(m);
 +      writer.close();
 +
 +      // verify proper permissions for creator and test user
 +      verifyHasOnlyTheseTablePermissions(c, c.whoami(), tableName, TablePermission.values());
 +      verifyHasNoTablePermissions(c, testUser, tableName, TablePermission.values());
 +
 +    }
 +  }
 +
 +  private void testMissingTablePermission(Connector test_user_conn, ClusterUser testUser, TablePermission perm, String tableName) throws Exception {
 +    Scanner scanner;
 +    BatchWriter writer;
 +    Mutation m;
 +    log.debug("Confirming that the lack of the " + perm + " permission properly restricts the user");
 +
 +    // test permission prior to granting it
 +    switch (perm) {
 +      case READ:
 +        try {
 +          scanner = test_user_conn.createScanner(tableName, Authorizations.EMPTY);
 +          int i = 0;
 +          for (Entry<Key,Value> entry : scanner)
 +            i += 1 + entry.getKey().getRowData().length();
 +          if (i != 0)
 +            throw new IllegalStateException("Should NOT be able to read from the table");
 +        } catch (RuntimeException e) {
 +          AccumuloSecurityException se = (AccumuloSecurityException) e.getCause();
 +          if (se.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw se;
 +        }
 +        break;
 +      case WRITE:
 +        try {
 +          writer = test_user_conn.createBatchWriter(tableName, new BatchWriterConfig());
 +          m = new Mutation(new Text("row"));
 +          m.put(new Text("a"), new Text("b"), new Value("c".getBytes()));
 +          writer.addMutation(m);
 +          try {
 +            writer.close();
 +          } catch (MutationsRejectedException e1) {
 +            if (e1.getSecurityErrorCodes().size() > 0)
 +              throw new AccumuloSecurityException(test_user_conn.whoami(), org.apache.accumulo.core.client.impl.thrift.SecurityErrorCode.PERMISSION_DENIED, e1);
 +          }
 +          throw new IllegalStateException("Should NOT be able to write to a table");
 +        } catch (AccumuloSecurityException e) {
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw e;
 +        }
 +        break;
 +      case BULK_IMPORT:
 +        // test for bulk import permission would go here
 +        break;
 +      case ALTER_TABLE:
 +        Map<String,Set<Text>> groups = new HashMap<>();
 +        groups.put("tgroup", new HashSet<>(Arrays.asList(new Text("t1"), new Text("t2"))));
 +        try {
 +          test_user_conn.tableOperations().setLocalityGroups(tableName, groups);
 +          throw new IllegalStateException("User should not be able to set locality groups");
 +        } catch (AccumuloSecurityException e) {
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw e;
 +        }
 +        break;
 +      case DROP_TABLE:
 +        try {
 +          test_user_conn.tableOperations().delete(tableName);
 +          throw new IllegalStateException("User should not be able to delete the table");
 +        } catch (AccumuloSecurityException e) {
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw e;
 +        }
 +        break;
 +      case GRANT:
 +        try {
 +          test_user_conn.securityOperations().grantTablePermission(getAdminPrincipal(), tableName, TablePermission.GRANT);
 +          throw new IllegalStateException("User should not be able to grant permissions");
 +        } catch (AccumuloSecurityException e) {
 +          if (e.getSecurityErrorCode() != SecurityErrorCode.PERMISSION_DENIED)
 +            throw e;
 +        }
 +        break;
 +      default:
 +        throw new IllegalArgumentException("Unrecognized table Permission: " + perm);
 +    }
 +  }
 +
 +  private void testGrantedTablePermission(Connector test_user_conn, ClusterUser normalUser, TablePermission perm, String tableName) throws AccumuloException,
 +      TableExistsException, AccumuloSecurityException, TableNotFoundException, MutationsRejectedException {
 +    Scanner scanner;
 +    BatchWriter writer;
 +    Mutation m;
 +    log.debug("Confirming that the presence of the " + perm + " permission properly permits the user");
 +
 +    // test permission after granting it
 +    switch (perm) {
 +      case READ:
 +        scanner = test_user_conn.createScanner(tableName, Authorizations.EMPTY);
 +        Iterator<Entry<Key,Value>> iter = scanner.iterator();
 +        while (iter.hasNext())
 +          iter.next();
 +        break;
 +      case WRITE:
 +        writer = test_user_conn.createBatchWriter(tableName, new BatchWriterConfig());
 +        m = new Mutation(new Text("row"));
 +        m.put(new Text("a"), new Text("b"), new Value("c".getBytes()));
 +        writer.addMutation(m);
 +        writer.close();
 +        break;
 +      case BULK_IMPORT:
 +        // test for bulk import permission would go here
 +        break;
 +      case ALTER_TABLE:
 +        Map<String,Set<Text>> groups = new HashMap<>();
 +        groups.put("tgroup", new HashSet<>(Arrays.asList(new Text("t1"), new Text("t2"))));
 +        test_user_conn.tableOperations().setLocalityGroups(tableName, groups);
 +        break;
 +      case DROP_TABLE:
 +        test_user_conn.tableOperations().delete(tableName);
 +        break;
 +      case GRANT:
 +        test_user_conn.securityOperations().grantTablePermission(getAdminPrincipal(), tableName, TablePermission.GRANT);
 +        break;
 +      default:
 +        throw new IllegalArgumentException("Unrecognized table Permission: " + perm);
 +    }
 +  }
 +
 +  private void verifyHasOnlyTheseTablePermissions(Connector root_conn, String user, String table, TablePermission... perms) throws AccumuloException,
 +      AccumuloSecurityException {
 +    List<TablePermission> permList = Arrays.asList(perms);
 +    for (TablePermission p : TablePermission.values()) {
 +      if (permList.contains(p)) {
 +        // should have these
 +        if (!root_conn.securityOperations().hasTablePermission(user, table, p))
 +          throw new IllegalStateException(user + " SHOULD have table permission " + p + " for table " + table);
 +      } else {
 +        // should not have these
 +        if (root_conn.securityOperations().hasTablePermission(user, table, p))
 +          throw new IllegalStateException(user + " SHOULD NOT have table permission " + p + " for table " + table);
 +      }
 +    }
 +  }
 +
 +  private void verifyHasNoTablePermissions(Connector root_conn, String user, String table, TablePermission... perms) throws AccumuloException,
 +      AccumuloSecurityException {
 +    for (TablePermission p : perms)
 +      if (root_conn.securityOperations().hasTablePermission(user, table, p))
 +        throw new IllegalStateException(user + " SHOULD NOT have table permission " + p + " for table " + table);
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
index 504a5d9,0000000..22fbf18
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/functional/TableIT.java
@@@ -1,107 -1,0 +1,110 @@@
 +package org.apache.accumulo.test.functional;
 +
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +import static org.junit.Assert.assertEquals;
 +import static org.junit.Assert.assertNull;
 +import static org.junit.Assert.assertTrue;
 +
 +import java.io.FileNotFoundException;
 +
 +import org.apache.accumulo.cluster.AccumuloCluster;
 +import org.apache.accumulo.core.cli.BatchWriterOpts;
 +import org.apache.accumulo.core.cli.ScannerOpts;
 +import org.apache.accumulo.core.client.ClientConfiguration;
 +import org.apache.accumulo.core.client.ClientConfiguration.ClientProperty;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.Scanner;
 +import org.apache.accumulo.core.client.admin.TableOperations;
 +import org.apache.accumulo.core.data.impl.KeyExtent;
 +import org.apache.accumulo.core.metadata.MetadataTable;
 +import org.apache.accumulo.core.metadata.schema.MetadataSchema;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.harness.AccumuloClusterHarness;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.test.TestIngest;
 +import org.apache.accumulo.test.VerifyIngest;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.hadoop.fs.FileSystem;
 +import org.apache.hadoop.fs.Path;
 +import org.hamcrest.CoreMatchers;
 +import org.junit.Assume;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +
 +import com.google.common.collect.Iterators;
 +
++@Category(MiniClusterOnlyTest.class)
 +public class TableIT extends AccumuloClusterHarness {
 +
 +  @Override
 +  protected int defaultTimeoutSeconds() {
 +    return 2 * 60;
 +  }
 +
 +  @Test
 +  public void test() throws Exception {
 +    Assume.assumeThat(getClusterType(), CoreMatchers.is(ClusterType.MINI));
 +
 +    AccumuloCluster cluster = getCluster();
 +    MiniAccumuloClusterImpl mac = (MiniAccumuloClusterImpl) cluster;
 +    String rootPath = mac.getConfig().getDir().getAbsolutePath();
 +
 +    Connector c = getConnector();
 +    TableOperations to = c.tableOperations();
 +    String tableName = getUniqueNames(1)[0];
 +    to.create(tableName);
 +
 +    TestIngest.Opts opts = new TestIngest.Opts();
 +    VerifyIngest.Opts vopts = new VerifyIngest.Opts();
 +    ClientConfiguration clientConfig = getCluster().getClientConfig();
 +    if (clientConfig.getBoolean(ClientProperty.INSTANCE_RPC_SASL_ENABLED.getKey(), false)) {
 +      opts.updateKerberosCredentials(clientConfig);
 +      vopts.updateKerberosCredentials(clientConfig);
 +    } else {
 +      opts.setPrincipal(getAdminPrincipal());
 +      vopts.setPrincipal(getAdminPrincipal());
 +    }
 +
 +    opts.setTableName(tableName);
 +    TestIngest.ingest(c, opts, new BatchWriterOpts());
 +    to.flush(tableName, null, null, true);
 +    vopts.setTableName(tableName);
 +    VerifyIngest.verifyIngest(c, vopts, new ScannerOpts());
 +    String id = to.tableIdMap().get(tableName);
 +    Scanner s = c.createScanner(MetadataTable.NAME, Authorizations.EMPTY);
 +    s.setRange(new KeyExtent(id, null, null).toMetadataRange());
 +    s.fetchColumnFamily(MetadataSchema.TabletsSection.DataFileColumnFamily.NAME);
 +    assertTrue(Iterators.size(s.iterator()) > 0);
 +
 +    FileSystem fs = getCluster().getFileSystem();
 +    assertTrue(fs.listStatus(new Path(rootPath + "/accumulo/tables/" + id)).length > 0);
 +    to.delete(tableName);
 +    assertEquals(0, Iterators.size(s.iterator()));
 +    try {
 +      assertEquals(0, fs.listStatus(new Path(rootPath + "/accumulo/tables/" + id)).length);
 +    } catch (FileNotFoundException ex) {
 +      // that's fine, too
 +    }
 +    assertNull(to.tableIdMap().get(tableName));
 +    to.create(tableName);
 +    TestIngest.ingest(c, opts, new BatchWriterOpts());
 +    VerifyIngest.verifyIngest(c, vopts, new ScannerOpts());
 +    to.delete(tableName);
 +  }
 +
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
----------------------------------------------------------------------
diff --cc test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
index 4559195,0000000..32df894
mode 100644,000000..100644
--- a/test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
+++ b/test/src/main/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
@@@ -1,243 -1,0 +1,246 @@@
 +/*
 + * Licensed to the Apache Software Foundation (ASF) under one or more
 + * contributor license agreements.  See the NOTICE file distributed with
 + * this work for additional information regarding copyright ownership.
 + * The ASF licenses this file to You under the Apache License, Version 2.0
 + * (the "License"); you may not use this file except in compliance with
 + * the License.  You may obtain a copy of the License at
 + *
 + *     http://www.apache.org/licenses/LICENSE-2.0
 + *
 + * Unless required by applicable law or agreed to in writing, software
 + * distributed under the License is distributed on an "AS IS" BASIS,
 + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 + * See the License for the specific language governing permissions and
 + * limitations under the License.
 + */
 +package org.apache.accumulo.test.replication;
 +
 +import java.security.PrivilegedExceptionAction;
 +import java.util.Map.Entry;
 +import java.util.Set;
 +
 +import org.apache.accumulo.cluster.ClusterUser;
 +import org.apache.accumulo.core.client.BatchWriter;
 +import org.apache.accumulo.core.client.BatchWriterConfig;
 +import org.apache.accumulo.core.client.Connector;
 +import org.apache.accumulo.core.client.security.tokens.KerberosToken;
 +import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 +import org.apache.accumulo.core.conf.Property;
 +import org.apache.accumulo.core.data.Key;
 +import org.apache.accumulo.core.data.Mutation;
 +import org.apache.accumulo.core.data.Value;
 +import org.apache.accumulo.core.security.Authorizations;
 +import org.apache.accumulo.core.security.TablePermission;
 +import org.apache.accumulo.harness.AccumuloITBase;
 +import org.apache.accumulo.harness.MiniClusterConfigurationCallback;
 +import org.apache.accumulo.harness.MiniClusterHarness;
 +import org.apache.accumulo.harness.TestingKdc;
 +import org.apache.accumulo.master.replication.SequentialWorkAssigner;
 +import org.apache.accumulo.minicluster.ServerType;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 +import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 +import org.apache.accumulo.minicluster.impl.ProcessReference;
 +import org.apache.accumulo.server.replication.ReplicaSystemFactory;
++import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 +import org.apache.accumulo.test.functional.KerberosIT;
 +import org.apache.accumulo.tserver.TabletServer;
 +import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
 +import org.apache.hadoop.conf.Configuration;
 +import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 +import org.apache.hadoop.fs.RawLocalFileSystem;
 +import org.apache.hadoop.security.UserGroupInformation;
 +import org.junit.After;
 +import org.junit.AfterClass;
 +import org.junit.Assert;
 +import org.junit.Before;
 +import org.junit.BeforeClass;
 +import org.junit.Test;
++import org.junit.experimental.categories.Category;
 +import org.slf4j.Logger;
 +import org.slf4j.LoggerFactory;
 +
 +import com.google.common.collect.Iterators;
 +
 +/**
 + * Ensure that replication occurs using keytabs instead of password (not to mention SASL)
 + */
++@Category(MiniClusterOnlyTest.class)
 +public class KerberosReplicationIT extends AccumuloITBase {
 +  private static final Logger log = LoggerFactory.getLogger(KerberosReplicationIT.class);
 +
 +  private static TestingKdc kdc;
 +  private static String krbEnabledForITs = null;
 +  private static ClusterUser rootUser;
 +
 +  @BeforeClass
 +  public static void startKdc() throws Exception {
 +    kdc = new TestingKdc();
 +    kdc.start();
 +    krbEnabledForITs = System.getProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION);
 +    if (null == krbEnabledForITs || !Boolean.parseBoolean(krbEnabledForITs)) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, "true");
 +    }
 +    rootUser = kdc.getRootUser();
 +  }
 +
 +  @AfterClass
 +  public static void stopKdc() throws Exception {
 +    if (null != kdc) {
 +      kdc.stop();
 +    }
 +    if (null != krbEnabledForITs) {
 +      System.setProperty(MiniClusterHarness.USE_KERBEROS_FOR_IT_OPTION, krbEnabledForITs);
 +    }
 +  }
 +
 +  private MiniAccumuloClusterImpl primary, peer;
 +  private static final String PRIMARY_NAME = "primary", PEER_NAME = "peer";
 +
 +  @Override
 +  protected int defaultTimeoutSeconds() {
 +    return 60 * 3;
 +  }
 +
 +  private MiniClusterConfigurationCallback getConfigCallback(final String name) {
 +    return new MiniClusterConfigurationCallback() {
 +      @Override
 +      public void configureMiniCluster(MiniAccumuloConfigImpl cfg, Configuration coreSite) {
 +        cfg.setNumTservers(1);
 +        cfg.setProperty(Property.INSTANCE_ZK_TIMEOUT, "15s");
 +        cfg.setProperty(Property.TSERV_WALOG_MAX_SIZE, "2M");
 +        cfg.setProperty(Property.GC_CYCLE_START, "1s");
 +        cfg.setProperty(Property.GC_CYCLE_DELAY, "5s");
 +        cfg.setProperty(Property.REPLICATION_WORK_ASSIGNMENT_SLEEP, "1s");
 +        cfg.setProperty(Property.MASTER_REPLICATION_SCAN_INTERVAL, "1s");
 +        cfg.setProperty(Property.REPLICATION_NAME, name);
 +        cfg.setProperty(Property.REPLICATION_MAX_UNIT_SIZE, "8M");
 +        cfg.setProperty(Property.REPLICATION_WORK_ASSIGNER, SequentialWorkAssigner.class.getName());
 +        cfg.setProperty(Property.TSERV_TOTAL_MUTATION_QUEUE_MAX, "1M");
 +        coreSite.set("fs.file.impl", RawLocalFileSystem.class.getName());
 +        coreSite.set("fs.defaultFS", "file:///");
 +      }
 +    };
 +  }
 +
 +  @Before
 +  public void setup() throws Exception {
 +    MiniClusterHarness harness = new MiniClusterHarness();
 +
 +    // Create a primary and a peer instance, both with the same "root" user
 +    primary = harness.create(getClass().getName(), testName.getMethodName(), new PasswordToken("unused"), getConfigCallback(PRIMARY_NAME), kdc);
 +    primary.start();
 +
 +    peer = harness.create(getClass().getName(), testName.getMethodName() + "_peer", new PasswordToken("unused"), getConfigCallback(PEER_NAME), kdc);
 +    peer.start();
 +
 +    // Enable kerberos auth
 +    Configuration conf = new Configuration(false);
 +    conf.set(CommonConfigurationKeysPublic.HADOOP_SECURITY_AUTHENTICATION, "kerberos");
 +    UserGroupInformation.setConfiguration(conf);
 +  }
 +
 +  @After
 +  public void teardown() throws Exception {
 +    if (null != peer) {
 +      peer.stop();
 +    }
 +    if (null != primary) {
 +      primary.stop();
 +    }
 +    UserGroupInformation.setConfiguration(new Configuration(false));
 +  }
 +
 +  @Test
 +  public void dataReplicatedToCorrectTable() throws Exception {
 +    // Login as the root user
 +    final UserGroupInformation ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(rootUser.getPrincipal(), rootUser.getKeytab().toURI().toString());
 +    ugi.doAs(new PrivilegedExceptionAction<Void>() {
 +      @Override
 +      public Void run() throws Exception {
 +        log.info("testing {}", ugi);
 +        final KerberosToken token = new KerberosToken();
 +        final Connector primaryConn = primary.getConnector(rootUser.getPrincipal(), token);
 +        final Connector peerConn = peer.getConnector(rootUser.getPrincipal(), token);
 +
 +        ClusterUser replicationUser = kdc.getClientPrincipal(0);
 +
 +        // Create user for replication to the peer
 +        peerConn.securityOperations().createLocalUser(replicationUser.getPrincipal(), null);
 +
 +        primaryConn.instanceOperations().setProperty(Property.REPLICATION_PEER_USER.getKey() + PEER_NAME, replicationUser.getPrincipal());
 +        primaryConn.instanceOperations().setProperty(Property.REPLICATION_PEER_KEYTAB.getKey() + PEER_NAME, replicationUser.getKeytab().getAbsolutePath());
 +
 +        // ...peer = AccumuloReplicaSystem,instanceName,zookeepers
 +        primaryConn.instanceOperations().setProperty(
 +            Property.REPLICATION_PEERS.getKey() + PEER_NAME,
 +            ReplicaSystemFactory.getPeerConfigurationValue(AccumuloReplicaSystem.class,
 +                AccumuloReplicaSystem.buildConfiguration(peerConn.getInstance().getInstanceName(), peerConn.getInstance().getZooKeepers())));
 +
 +        String primaryTable1 = "primary", peerTable1 = "peer";
 +
 +        // Create tables
 +        primaryConn.tableOperations().create(primaryTable1);
 +        String masterTableId1 = primaryConn.tableOperations().tableIdMap().get(primaryTable1);
 +        Assert.assertNotNull(masterTableId1);
 +
 +        peerConn.tableOperations().create(peerTable1);
 +        String peerTableId1 = peerConn.tableOperations().tableIdMap().get(peerTable1);
 +        Assert.assertNotNull(peerTableId1);
 +
 +        // Grant write permission
 +        peerConn.securityOperations().grantTablePermission(replicationUser.getPrincipal(), peerTable1, TablePermission.WRITE);
 +
 +        // Replicate this table to the peerClusterName in a table with the peerTableId table id
 +        primaryConn.tableOperations().setProperty(primaryTable1, Property.TABLE_REPLICATION.getKey(), "true");
 +        primaryConn.tableOperations().setProperty(primaryTable1, Property.TABLE_REPLICATION_TARGET.getKey() + PEER_NAME, peerTableId1);
 +
 +        // Write some data to table1
 +        BatchWriter bw = primaryConn.createBatchWriter(primaryTable1, new BatchWriterConfig());
 +        long masterTable1Records = 0L;
 +        for (int rows = 0; rows < 2500; rows++) {
 +          Mutation m = new Mutation(primaryTable1 + rows);
 +          for (int cols = 0; cols < 100; cols++) {
 +            String value = Integer.toString(cols);
 +            m.put(value, "", value);
 +            masterTable1Records++;
 +          }
 +          bw.addMutation(m);
 +        }
 +
 +        bw.close();
 +
 +        log.info("Wrote all data to primary cluster");
 +
 +        Set<String> filesFor1 = primaryConn.replicationOperations().referencedFiles(primaryTable1);
 +
 +        // Restart the tserver to force a close on the WAL
 +        for (ProcessReference proc : primary.getProcesses().get(ServerType.TABLET_SERVER)) {
 +          primary.killProcess(ServerType.TABLET_SERVER, proc);
 +        }
 +        primary.exec(TabletServer.class);
 +
 +        log.info("Restarted the tserver");
 +
 +        // Read the data -- the tserver is back up and running and tablets are assigned
 +        Iterators.size(primaryConn.createScanner(primaryTable1, Authorizations.EMPTY).iterator());
 +
 +        // Wait for both tables to be replicated
 +        log.info("Waiting for {} for {}", filesFor1, primaryTable1);
 +        primaryConn.replicationOperations().drain(primaryTable1, filesFor1);
 +
 +        long countTable = 0L;
 +        for (Entry<Key,Value> entry : peerConn.createScanner(peerTable1, Authorizations.EMPTY)) {
 +          countTable++;
 +          Assert.assertTrue("Found unexpected key-value " + entry.getKey().toStringNoTruncate() + " " + entry.getValue(), entry.getKey().getRow().toString()
 +              .startsWith(primaryTable1));
 +        }
 +
 +        log.info("Found {} records in {}", countTable, peerTable1);
 +        Assert.assertEquals(masterTable1Records, countTable);
 +
 +        return null;
 +      }
 +    });
 +  }
 +}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/d28a3ee3/trace/pom.xml
----------------------------------------------------------------------


[03/10] accumulo git commit: ACCUMULO-4423 Annotate integration tests with categories

Posted by el...@apache.org.
ACCUMULO-4423 Annotate integration tests with categories

Differentiates tests which always use a minicluster and those
which can use a minicluster or a standalone cluster. Out-of-the-box
test invocation should not have changed.

Includes updated documentation to TESTING.md as well.

Closes apache/accumulo#144


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/661dac33
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/661dac33
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/661dac33

Branch: refs/heads/master
Commit: 661dac33648fb8bb311434720563c322611c1f12
Parents: 2be85ad
Author: Josh Elser <el...@apache.org>
Authored: Tue Aug 30 16:23:48 2016 -0400
Committer: Josh Elser <el...@apache.org>
Committed: Tue Aug 30 23:27:09 2016 -0400

----------------------------------------------------------------------
 TESTING.md                                      | 25 ++++++++++++--------
 pom.xml                                         | 24 ++++++++++++++++++-
 .../accumulo/harness/AccumuloClusterIT.java     |  3 +++
 .../accumulo/harness/SharedMiniClusterIT.java   |  3 +++
 .../org/apache/accumulo/test/NamespacesIT.java  |  3 +++
 .../test/categories/AnyClusterTest.java         | 25 ++++++++++++++++++++
 .../test/categories/MiniClusterOnlyTest.java    | 24 +++++++++++++++++++
 .../accumulo/test/categories/package-info.java  | 21 ++++++++++++++++
 .../accumulo/test/functional/ClassLoaderIT.java |  3 +++
 .../test/functional/ConfigurableMacIT.java      |  3 +++
 .../accumulo/test/functional/KerberosIT.java    |  3 +++
 .../test/functional/KerberosProxyIT.java        |  3 +++
 .../test/functional/KerberosRenewalIT.java      |  3 +++
 .../accumulo/test/functional/PermissionsIT.java |  3 +++
 .../accumulo/test/functional/TableIT.java       |  3 +++
 .../test/replication/KerberosReplicationIT.java |  3 +++
 trace/pom.xml                                   |  6 +++++
 17 files changed, 147 insertions(+), 11 deletions(-)
----------------------------------------------------------------------
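The commit adds two JUnit category markers under `org.apache.accumulo.test.categories`. Based on the file names and sizes in the diffstat above, they are plain marker interfaces, roughly like the following sketch (the committed files also carry the ASF license header and javadoc; the comment wording here is illustrative, not verbatim):

```java
// Sketch of test/src/main/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
// (package declaration omitted here; the real file lives in org.apache.accumulo.test.categories)

/**
 * Marker interface for integration tests that must run against MiniAccumuloCluster,
 * e.g. because they are destructive or need special setup such as Kerberos.
 */
interface MiniClusterOnlyTest {}
```

A test class opts in with JUnit's `@Category(MiniClusterOnlyTest.class)` annotation, as `TableIT` and `KerberosReplicationIT` do elsewhere in this commit.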


http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/TESTING.md
----------------------------------------------------------------------
diff --git a/TESTING.md b/TESTING.md
index de484ee..125110b 100644
--- a/TESTING.md
+++ b/TESTING.md
@@ -47,23 +47,27 @@ but are checking for regressions that were previously seen in the codebase. Thes
 resources, at least another gigabyte of memory over what Maven itself requires. As such, it's recommended to have at
 least 3-4GB of free memory and 10GB of free disk space.
 
-## Accumulo for testing
+## Test Categories
 
-The primary reason these tests take so much longer than the unit tests is that most are using an Accumulo instance to
-perform the test. It's a necessary evil; however, there are things we can do to improve this.
+Accumulo uses JUnit Category annotations to categorize certain integration tests based on their runtime requirements.
+Presently there are three different categories:
 
-## MiniAccumuloCluster
+### MiniAccumuloCluster (`MiniClusterOnlyTest`)
 
-By default, these tests will use a MiniAccumuloCluster which is a multi-process "implementation" of Accumulo, managed
-through Java interfaces. This MiniAccumuloCluster has the ability to use the local filesystem or Apache Hadoop's
+These tests use MiniAccumuloCluster (MAC), a multi-process "implementation" of Accumulo, managed
+through Java APIs. This MiniAccumuloCluster has the ability to use the local filesystem or Apache Hadoop's
 MiniDFSCluster, as well as starting one to many tablet servers. MiniAccumuloCluster tends to be a very useful tool in
 that it can automatically provide a workable instance that mimics how an actual deployment functions.
 
 The downside of using MiniAccumuloCluster is that a significant portion of each test is now devoted to starting and
 stopping the MiniAccumuloCluster.  While this is a surefire way to isolate tests from interferring with one another, it
-increases the actual runtime of the test by, on average, 10x.
+increases the actual runtime of the test by, on average, 10x. Sometimes a test requires the use of MAC because it
+is destructive or needs a special environment setup (e.g. Kerberos).
 
-## Standalone Cluster
+By default, these tests are run during the `integration-test` lifecycle phase using `mvn verify`. These tests can
+also be run at the `test` lifecycle phase using `mvn package -Pminicluster-unit-tests`.
+
+### Standalone Cluster (`AnyClusterTest`)
 
 An alternative to the MiniAccumuloCluster for testing, a standalone Accumulo cluster can also be configured for use by
 most tests. This requires a manual step of building and deploying the Accumulo cluster by hand. The build can then be
@@ -75,7 +79,9 @@ Use of a standalone cluster can be enabled using system properties on the Maven
 providing a Java properties file on the Maven command line. The use of a properties file is recommended since it is
 typically a fixed file per standalone cluster you want to run the tests against.
 
-### Configuration
+These tests will always run during the `integration-test` lifecycle phase using `mvn verify`.
+
+## Configuration for Standalone clusters
 
 The following properties can be used to configure a standalone cluster:
 
@@ -128,4 +134,3 @@ at a time, for example the [Continuous Ingest][1] and [Randomwalk test][2] suite
 [3]: https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html
 [4]: http://maven.apache.org/surefire/maven-surefire-plugin/
 [5]: http://maven.apache.org/surefire/maven-failsafe-plugin/
-

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/pom.xml
----------------------------------------------------------------------
diff --git a/pom.xml b/pom.xml
index 0f57f62..d6393d2 100644
--- a/pom.xml
+++ b/pom.xml
@@ -115,6 +115,10 @@
     <url>https://builds.apache.org/view/A-D/view/Accumulo/</url>
   </ciManagement>
   <properties>
+    <accumulo.anyClusterTests>org.apache.accumulo.test.categories.AnyClusterTest</accumulo.anyClusterTests>
+    <accumulo.it.excludedGroups />
+    <accumulo.it.groups>${accumulo.anyClusterTests},${accumulo.miniclusterTests}</accumulo.it.groups>
+    <accumulo.miniclusterTests>org.apache.accumulo.test.categories.MiniClusterOnlyTest</accumulo.miniclusterTests>
     <!-- used for filtering the java source with the current version -->
     <accumulo.release.version>${project.version}</accumulo.release.version>
     <assembly.tarLongFileMode>posix</assembly.tarLongFileMode>
@@ -240,7 +244,7 @@
       <dependency>
         <groupId>junit</groupId>
         <artifactId>junit</artifactId>
-        <version>4.11</version>
+        <version>4.12</version>
       </dependency>
       <dependency>
         <groupId>log4j</groupId>
@@ -1006,6 +1010,10 @@
               <goal>integration-test</goal>
               <goal>verify</goal>
             </goals>
+            <configuration>
+              <excludeGroups>${accumulo.it.excludedGroups}</excludeGroups>
+              <groups>${accumulo.it.groups}</groups>
+            </configuration>
           </execution>
         </executions>
       </plugin>
@@ -1399,5 +1407,19 @@
         </pluginManagement>
       </build>
     </profile>
+    <profile>
+      <id>only-minicluster-tests</id>
+      <properties>
+        <accumulo.it.excludedGroups>${accumulo.anyClusterTests}</accumulo.it.excludedGroups>
+        <accumulo.it.groups>${accumulo.miniclusterTests}</accumulo.it.groups>
+      </properties>
+    </profile>
+    <profile>
+      <id>standalone-capable-tests</id>
+      <properties>
+        <accumulo.it.excludedGroups>${accumulo.miniclusterTests}</accumulo.it.excludedGroups>
+        <accumulo.it.groups>${accumulo.anyClusterTests}</accumulo.it.groups>
+      </properties>
+    </profile>
   </profiles>
 </project>
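The `groups`/`excludeGroups` wiring above can be sketched conceptually in plain Java. This is an illustration only: the `Category` annotation and the `SampleMiniOnlyIT`/`SampleAnyClusterIT` classes below are local stand-ins (not the JUnit or Accumulo types), declared here so the sketch compiles without JUnit on the classpath.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Local stand-in for org.junit.experimental.categories.Category.
@Retention(RetentionPolicy.RUNTIME)
@interface Category {
  Class<?>[] value();
}

// Marker interfaces mirroring the two categories added by this commit.
interface AnyClusterTest {}
interface MiniClusterOnlyTest {}

// Hypothetical test classes annotated the way the diff annotates ITs.
@Category(MiniClusterOnlyTest.class)
class SampleMiniOnlyIT {}

@Category(AnyClusterTest.class)
class SampleAnyClusterIT {}

public class CategoryFilterDemo {
  // Keep classes whose @Category contains the included group and not the
  // excluded one -- roughly what failsafe's <groups>/<excludeGroups> do.
  static List<Class<?>> select(List<Class<?>> tests, Class<?> group, Class<?> excluded) {
    return tests.stream().filter(c -> {
      Category cat = c.getAnnotation(Category.class);
      if (cat == null) {
        return false;
      }
      List<Class<?>> cats = Arrays.asList(cat.value());
      return cats.contains(group) && (excluded == null || !cats.contains(excluded));
    }).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Class<?>> all = Arrays.asList(SampleMiniOnlyIT.class, SampleAnyClusterIT.class);
    // Analogous to -Ponly-minicluster-tests:
    //   groups=MiniClusterOnlyTest, excludeGroups=AnyClusterTest
    System.out.println(select(all, MiniClusterOnlyTest.class, AnyClusterTest.class));
  }
}
```

With the default properties both categories are selected; each profile simply swaps which category is included and which is excluded.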

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java b/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
index e2b35f4..436ceb5 100644
--- a/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
+++ b/test/src/test/java/org/apache/accumulo/harness/AccumuloClusterIT.java
@@ -43,6 +43,7 @@ import org.apache.accumulo.harness.conf.AccumuloMiniClusterConfiguration;
 import org.apache.accumulo.harness.conf.StandaloneAccumuloClusterConfiguration;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.categories.AnyClusterTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
@@ -52,12 +53,14 @@ import org.junit.AfterClass;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.BeforeClass;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 /**
  * General Integration-Test base class that provides access to an Accumulo instance for testing. This instance could be MAC or a standalone instance.
  */
+@Category(AnyClusterTest.class)
 public abstract class AccumuloClusterIT extends AccumuloIT implements MiniClusterConfigurationCallback, ClusterUsers {
   private static final Logger log = LoggerFactory.getLogger(AccumuloClusterIT.class);
   private static final String TRUE = Boolean.toString(true);

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java b/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
index f66a192..644055f 100644
--- a/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
+++ b/test/src/test/java/org/apache/accumulo/harness/SharedMiniClusterIT.java
@@ -31,9 +31,11 @@ import org.apache.accumulo.core.client.security.tokens.PasswordToken;
 import org.apache.accumulo.core.conf.Property;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -48,6 +50,7 @@ import org.slf4j.LoggerFactory;
  * a method annotated with the {@link org.junit.BeforeClass} JUnit annotation and {@link #stopMiniCluster()} in a method annotated with the
  * {@link org.junit.AfterClass} JUnit annotation.
  */
+@Category(MiniClusterOnlyTest.class)
 public abstract class SharedMiniClusterIT extends AccumuloIT implements ClusterUsers {
   private static final Logger log = LoggerFactory.getLogger(SharedMiniClusterIT.class);
   public static final String TRUE = Boolean.toString(true);

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java b/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
index aaa6a6e..6ec2127 100644
--- a/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/NamespacesIT.java
@@ -77,14 +77,17 @@ import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.examples.simple.constraints.NumericValueConstraint;
 import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.io.Text;
 import org.junit.After;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 // Testing default namespace configuration with inheritance requires altering the system state and restoring it back to normal
 // Punt on this for now and just let it use a minicluster.
+@Category(MiniClusterOnlyTest.class)
 public class NamespacesIT extends AccumuloClusterIT {
 
   private Connector c;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java b/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java
new file mode 100644
index 0000000..765057e
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/categories/AnyClusterTest.java
@@ -0,0 +1,25 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.categories;
+
+/**
+ * Interface to be used with JUnit Category annotation to denote that the IntegrationTest can be used with any kind of cluster (a MiniAccumuloCluster or a
+ * StandaloneAccumuloCluster).
+ */
+public interface AnyClusterTest {
+
+}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java b/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
new file mode 100644
index 0000000..1a972ef
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/categories/MiniClusterOnlyTest.java
@@ -0,0 +1,24 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.accumulo.test.categories;
+
+/**
+ * Interface to be used with JUnit Category annotation to denote that the IntegrationTest requires the use of a MiniAccumuloCluster.
+ */
+public interface MiniClusterOnlyTest {
+
+}

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/categories/package-info.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/categories/package-info.java b/test/src/test/java/org/apache/accumulo/test/categories/package-info.java
new file mode 100644
index 0000000..e7071fc
--- /dev/null
+++ b/test/src/test/java/org/apache/accumulo/test/categories/package-info.java
@@ -0,0 +1,21 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+/**
+ * JUnit categories for the various types of Accumulo integration tests.
+ */
+package org.apache.accumulo.test.categories;
+

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java b/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
index 4b51bd2..d09e2a6 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/ClassLoaderIT.java
@@ -40,13 +40,16 @@ import org.apache.accumulo.core.util.CachedConfiguration;
 import org.apache.accumulo.core.util.UtilWaitThread;
 import org.apache.accumulo.harness.AccumuloClusterIT;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.hamcrest.CoreMatchers;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
+@Category(MiniClusterOnlyTest.class)
 public class ClassLoaderIT extends AccumuloClusterIT {
 
   private static final long ZOOKEEPER_PROPAGATION_TIME = 10 * 1000;

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java b/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
index 53eb8e4..6d04610 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/ConfigurableMacIT.java
@@ -40,12 +40,14 @@ import org.apache.accumulo.minicluster.MiniAccumuloCluster;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ZooKeeperBindException;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.accumulo.test.util.CertUtils;
 import org.apache.commons.io.FileUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.zookeeper.KeeperException;
 import org.junit.After;
 import org.junit.Before;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -53,6 +55,7 @@ import org.slf4j.LoggerFactory;
  * General Integration-Test base class that provides access to a {@link MiniAccumuloCluster} for testing. Tests using these typically do very disruptive things
  * to the instance, and require specific configuration. Most tests don't need this level of control and should extend {@link AccumuloClusterIT} instead.
  */
+@Category(MiniClusterOnlyTest.class)
 public class ConfigurableMacIT extends AccumuloIT {
   public static final Logger log = LoggerFactory.getLogger(ConfigurableMacIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java b/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
index 612718d..a3da827 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/KerberosIT.java
@@ -68,6 +68,7 @@ import org.apache.accumulo.harness.TestingKdc;
 import org.apache.accumulo.minicluster.ServerType;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.minikdc.MiniKdc;
@@ -77,6 +78,7 @@ import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -86,6 +88,7 @@ import com.google.common.collect.Sets;
 /**
 * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java b/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
index af6310c..2bef539 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/KerberosProxyIT.java
@@ -56,6 +56,7 @@ import org.apache.accumulo.proxy.thrift.ScanResult;
 import org.apache.accumulo.proxy.thrift.TimeType;
 import org.apache.accumulo.proxy.thrift.WriterOptions;
 import org.apache.accumulo.server.util.PortUtils;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.security.UserGroupInformation;
@@ -71,6 +72,7 @@ import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Rule;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.junit.rules.ExpectedException;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
@@ -78,6 +80,7 @@ import org.slf4j.LoggerFactory;
 /**
  * Tests impersonation of clients by the proxy over SASL
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosProxyIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosProxyIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java b/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
index 28c1dfc..07e0662 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/KerberosRenewalIT.java
@@ -45,6 +45,7 @@ import org.apache.accumulo.harness.MiniClusterHarness;
 import org.apache.accumulo.harness.TestingKdc;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
 import org.apache.hadoop.minikdc.MiniKdc;
@@ -54,6 +55,7 @@ import org.junit.AfterClass;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -62,6 +64,7 @@ import com.google.common.collect.Iterables;
 /**
 * MAC test which uses {@link MiniKdc} to simulate a secure environment. Can be used as a sanity check for Kerberos/SASL testing.
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosRenewalIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosRenewalIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java b/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
index 4aea354..6967a48 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/PermissionsIT.java
@@ -53,15 +53,18 @@ import org.apache.accumulo.core.security.Authorizations;
 import org.apache.accumulo.core.security.SystemPermission;
 import org.apache.accumulo.core.security.TablePermission;
 import org.apache.accumulo.harness.AccumuloClusterIT;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.io.Text;
 import org.junit.Assume;
 import org.junit.Before;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 // This test verifies the default permissions so a clean instance must be used. A shared instance might
 // not be representative of a fresh installation.
+@Category(MiniClusterOnlyTest.class)
 public class PermissionsIT extends AccumuloClusterIT {
   private static final Logger log = LoggerFactory.getLogger(PermissionsIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java b/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
index 3061b87..0bfdc00 100644
--- a/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/functional/TableIT.java
@@ -39,15 +39,18 @@ import org.apache.accumulo.harness.AccumuloClusterIT;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.test.TestIngest;
 import org.apache.accumulo.test.VerifyIngest;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.io.Text;
 import org.hamcrest.CoreMatchers;
 import org.junit.Assume;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 import com.google.common.collect.Iterators;
 
+@Category(MiniClusterOnlyTest.class)
 public class TableIT extends AccumuloClusterIT {
 
   @Override

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
----------------------------------------------------------------------
diff --git a/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java b/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
index be9e320..933dfb8 100644
--- a/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
+++ b/test/src/test/java/org/apache/accumulo/test/replication/KerberosReplicationIT.java
@@ -41,6 +41,7 @@ import org.apache.accumulo.minicluster.impl.MiniAccumuloClusterImpl;
 import org.apache.accumulo.minicluster.impl.MiniAccumuloConfigImpl;
 import org.apache.accumulo.minicluster.impl.ProcessReference;
 import org.apache.accumulo.server.replication.ReplicaSystemFactory;
+import org.apache.accumulo.test.categories.MiniClusterOnlyTest;
 import org.apache.accumulo.test.functional.KerberosIT;
 import org.apache.accumulo.tserver.TabletServer;
 import org.apache.accumulo.tserver.replication.AccumuloReplicaSystem;
@@ -54,6 +55,7 @@ import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -62,6 +64,7 @@ import com.google.common.collect.Iterators;
 /**
  * Ensure that replication occurs using keytabs instead of password (not to mention SASL)
  */
+@Category(MiniClusterOnlyTest.class)
 public class KerberosReplicationIT extends AccumuloIT {
   private static final Logger log = LoggerFactory.getLogger(KerberosIT.class);
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/661dac33/trace/pom.xml
----------------------------------------------------------------------
diff --git a/trace/pom.xml b/trace/pom.xml
index 2b79288..2d93a84 100644
--- a/trace/pom.xml
+++ b/trace/pom.xml
@@ -34,5 +34,11 @@
       <groupId>org.apache.htrace</groupId>
       <artifactId>htrace-core</artifactId>
     </dependency>
+    <!-- Otherwise failsafe will complain about the configured test groups -->
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 </project>