Posted to commits@impala.apache.org by ta...@apache.org on 2019/06/01 17:27:52 UTC

[impala] branch master updated (3c68ddf -> cd30949)

This is an automated email from the ASF dual-hosted git repository.

tarmstrong pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git.


    from 3c68ddf  IMPALA-8596: Add debugging output to assertion in test
     new d26aae5  IMPALA-8504: Support CREATE TABLE statement with Kudu/HMS integration
     new d9de31e  [DOCS] Added the section on object ownership
     new cd30949  IMPALA-8502: Bump CDH_BUILD_NUMBER and Kudu version

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 bin/impala-config.sh                               |   6 +-
 bin/run-all-tests.sh                               |   8 +-
 docs/topics/impala_authorization.xml               |  39 ++
 .../apache/impala/analysis/CreateTableStmt.java    |  55 ++-
 .../org/apache/impala/analysis/ToSqlUtils.java     |   9 +-
 .../java/org/apache/impala/catalog/KuduTable.java  |  42 +-
 .../apache/impala/service/CatalogOpExecutor.java   |   6 +-
 .../main/java/org/apache/impala/util/KuduUtil.java |  25 +-
 .../org/apache/impala/analysis/AnalyzeDDLTest.java | 434 +-----------------
 .../apache/impala/analysis/AnalyzeKuduDDLTest.java | 495 +++++++++++++++++++++
 .../apache/impala/analysis/AuditingKuduTest.java   | 131 ++++++
 .../org/apache/impala/analysis/AuditingTest.java   | 112 +----
 .../java/org/apache/impala/analysis/ToSqlTest.java |  33 +-
 .../org/apache/impala/common/FrontendTestBase.java |  17 +
 .../CustomServiceRunner.java}                      |  27 +-
 .../customservice/KuduHMSIntegrationTest.java      |  82 ++++
 .../node_templates/common/etc/init.d/kudu-master   |   3 +
 tests/query_test/test_kudu.py                      |   2 +-
 18 files changed, 932 insertions(+), 594 deletions(-)
 create mode 100644 fe/src/test/java/org/apache/impala/analysis/AnalyzeKuduDDLTest.java
 create mode 100644 fe/src/test/java/org/apache/impala/analysis/AuditingKuduTest.java
 copy fe/src/test/java/org/apache/impala/{customcluster/CustomClusterRunner.java => customservice/CustomServiceRunner.java} (56%)
 create mode 100644 fe/src/test/java/org/apache/impala/customservice/KuduHMSIntegrationTest.java


[impala] 01/03: IMPALA-8504: Support CREATE TABLE statement with Kudu/HMS integration

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tarmstrong pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit d26aae5f2d171cc0ec97ffff9b3e029fe2259eaf
Author: Hao Hao <ha...@cloudera.com>
AuthorDate: Sun May 19 17:42:08 2019 -0700

    IMPALA-8504: Support CREATE TABLE statement with Kudu/HMS integration
    
    This commit adds support for the syntax of CREATE TABLE (and CTAS)
    statements for managed Kudu tables with Kudu/HMS integration. A follow-up
    patch will address the actual handling of the CREATE TABLE statement
    with Kudu/HMS integration.
    
    For a managed table, the syntax remains the same. However, the detailed
    changes include:
    1) The Kudu table will always be created with the new Kudu storage handler
       'org.apache.kudu.hive.KuduStorageHandler', even when Kudu/HMS integration
       is disabled. The legacy storage handler will eventually be deprecated.
    2) When Kudu/HMS integration is enabled, the Kudu table underneath the
       managed HMS table will follow the naming convention 'db_name.table_name'
       instead of 'impala::db_name.table_name'.
    3) Add the 'kudu.table_id' table property to be used with Kudu/HMS integration.
    
    This commit also extracts the Kudu-related DDL parsing and analysis tests
    so that they can be run with or without Kudu/HMS integration enabled.
    
    Change-Id: I465673d749221bd5f3772814b1c22c2673a53f5c
    Reviewed-on: http://gerrit.cloudera.org:8080/13318
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
    Reviewed-by: Thomas Marshall <tm...@cloudera.com>
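
To make point 2) of the commit message concrete, here is a minimal, hypothetical
driver (not part of the patch) that exercises the KuduUtil helper introduced
below; the printed names assume the 'impala::' prefix described above.

    import org.apache.impala.util.KuduUtil;

    public class KuduNamingSketch {
      public static void main(String[] args) {
        // With Kudu/HMS integration enabled, the Kudu table name mirrors the HMS name.
        System.out.println(
            KuduUtil.getDefaultKuduTableName("db_name", "table_name", true));
        // prints: db_name.table_name

        // Without the integration, the legacy 'impala::' prefix is kept.
        System.out.println(
            KuduUtil.getDefaultKuduTableName("db_name", "table_name", false));
        // prints: impala::db_name.table_name
      }
    }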
---
 bin/run-all-tests.sh                               |   8 +-
 .../apache/impala/analysis/CreateTableStmt.java    |  55 ++-
 .../org/apache/impala/analysis/ToSqlUtils.java     |   9 +-
 .../java/org/apache/impala/catalog/KuduTable.java  |  42 +-
 .../apache/impala/service/CatalogOpExecutor.java   |   6 +-
 .../main/java/org/apache/impala/util/KuduUtil.java |  25 +-
 .../org/apache/impala/analysis/AnalyzeDDLTest.java | 434 +-----------------
 .../apache/impala/analysis/AnalyzeKuduDDLTest.java | 495 +++++++++++++++++++++
 .../apache/impala/analysis/AuditingKuduTest.java   | 131 ++++++
 .../org/apache/impala/analysis/AuditingTest.java   | 112 +----
 .../java/org/apache/impala/analysis/ToSqlTest.java |  33 +-
 .../org/apache/impala/common/FrontendTestBase.java |  17 +
 .../impala/customservice/CustomServiceRunner.java  |  43 ++
 .../customservice/KuduHMSIntegrationTest.java      |  82 ++++
 .../node_templates/common/etc/init.d/kudu-master   |   3 +
 tests/query_test/test_kudu.py                      |   2 +-
 16 files changed, 920 insertions(+), 577 deletions(-)

diff --git a/bin/run-all-tests.sh b/bin/run-all-tests.sh
index d124e03..83fb80d 100755
--- a/bin/run-all-tests.sh
+++ b/bin/run-all-tests.sh
@@ -195,9 +195,9 @@ do
     if [[ "$CODE_COVERAGE" == true ]]; then
       MVN_ARGS+="-DcodeCoverage"
     fi
-    # Don't run the FE custom cluster tests here since they restart Impala. We'll run them
-    # with the other custom cluster tests below.
-    MVN_ARGS+=" -Dtest=!org.apache.impala.customcluster.*Test "
+    # Don't run the FE custom cluster/service tests here since they restart Impala. We'll
+    # run them with the other custom cluster/service tests below.
+    MVN_ARGS+=" -Dtest=!org.apache.impala.custom*.*Test"
     if ! "${IMPALA_HOME}/bin/mvn-quiet.sh" -fae test ${MVN_ARGS}; then
       TEST_RET_CODE=1
     fi
@@ -248,7 +248,7 @@ do
 
     # Run the FE custom cluster tests.
     pushd "${IMPALA_FE_DIR}"
-    MVN_ARGS=" -Dtest=org.apache.impala.customcluster.*Test "
+    MVN_ARGS=" -Dtest=org.apache.impala.custom*.*Test "
     if ! "${IMPALA_HOME}/bin/mvn-quiet.sh" -fae test ${MVN_ARGS}; then
       TEST_RET_CODE=1
     fi
diff --git a/fe/src/main/java/org/apache/impala/analysis/CreateTableStmt.java b/fe/src/main/java/org/apache/impala/analysis/CreateTableStmt.java
index b334865..aef5e5c 100644
--- a/fe/src/main/java/org/apache/impala/analysis/CreateTableStmt.java
+++ b/fe/src/main/java/org/apache/impala/analysis/CreateTableStmt.java
@@ -219,8 +219,8 @@ public class CreateTableStmt extends StatementBase {
    */
   private void analyzeKuduFormat(Analyzer analyzer) throws AnalysisException {
     if (getFileFormat() != THdfsFileFormat.KUDU) {
-      if (KuduTable.KUDU_LEGACY_STORAGE_HANDLER.equals(
-          getTblProperties().get(KuduTable.KEY_STORAGE_HANDLER))) {
+      String handler = getTblProperties().get(KuduTable.KEY_STORAGE_HANDLER);
+      if (KuduTable.isKuduStorageHandler(handler)) {
         throw new AnalysisException(KUDU_STORAGE_HANDLER_ERROR_MESSAGE);
       }
       AnalysisUtils.throwIfNotEmpty(getKuduPartitionParams(),
@@ -262,23 +262,20 @@ public class CreateTableStmt extends StatementBase {
 
     // Only the Kudu storage handler may be specified for Kudu tables.
     String handler = getTblProperties().get(KuduTable.KEY_STORAGE_HANDLER);
-    if (handler != null && !handler.equals(KuduTable.KUDU_LEGACY_STORAGE_HANDLER)) {
+    if (handler != null && !KuduTable.isKuduStorageHandler(handler)) {
       throw new AnalysisException("Invalid storage handler specified for Kudu table: " +
           handler);
     }
     putGeneratedKuduProperty(KuduTable.KEY_STORAGE_HANDLER,
-        KuduTable.KUDU_LEGACY_STORAGE_HANDLER);
+        KuduTable.KUDU_STORAGE_HANDLER);
 
-    String masterHosts = getTblProperties().get(KuduTable.KEY_MASTER_HOSTS);
-    if (Strings.isNullOrEmpty(masterHosts)) {
-      masterHosts = analyzer.getCatalog().getDefaultKuduMasterHosts();
-      if (masterHosts.isEmpty()) {
-        throw new AnalysisException(String.format(
-            "Table property '%s' is required when the impalad startup flag " +
-            "-kudu_master_hosts is not used.", KuduTable.KEY_MASTER_HOSTS));
-      }
-      putGeneratedKuduProperty(KuduTable.KEY_MASTER_HOSTS, masterHosts);
+    String kuduMasters = populateKuduMasters(analyzer);
+    if (kuduMasters.isEmpty()) {
+      throw new AnalysisException(String.format(
+          "Table property '%s' is required when the impalad startup flag " +
+          "-kudu_master_hosts is not used.", KuduTable.KEY_MASTER_HOSTS));
     }
+    putGeneratedKuduProperty(KuduTable.KEY_MASTER_HOSTS, kuduMasters);
 
     // TODO: Find out what is creating a directory in HDFS and stop doing that. Kudu
     //       tables shouldn't have HDFS dirs: IMPALA-3570
@@ -288,6 +285,21 @@ public class CreateTableStmt extends StatementBase {
         "Kudu table.");
     AnalysisUtils.throwIfNotEmpty(tableDef_.getPartitionColumnDefs(),
         "PARTITIONED BY cannot be used in Kudu tables.");
+    AnalysisUtils.throwIfNotNull(getTblProperties().get(KuduTable.KEY_TABLE_ID),
+        String.format("Table property %s should not be specified when creating a " +
+            "Kudu table.", KuduTable.KEY_TABLE_ID));
+  }
+
+  /**
+   * Populates Kudu master addresses either from the table property or from
+   * the -kudu_master_hosts flag.
+   */
+  private String populateKuduMasters(Analyzer analyzer) {
+    String kuduMasters = getTblProperties().get(KuduTable.KEY_MASTER_HOSTS);
+    if (Strings.isNullOrEmpty(kuduMasters)) {
+      kuduMasters = analyzer.getCatalog().getDefaultKuduMasterHosts();
+    }
+    return kuduMasters;
   }
 
   /**
@@ -316,7 +328,7 @@ public class CreateTableStmt extends StatementBase {
    * Analyzes and checks parameters specified for managed Kudu tables.
    */
   private void analyzeManagedKuduTableParams(Analyzer analyzer) throws AnalysisException {
-    analyzeManagedKuduTableName();
+    analyzeManagedKuduTableName(analyzer);
 
     // Check column types are valid Kudu types
     for (ColumnDef col: getColumnDefs()) {
@@ -359,12 +371,23 @@ public class CreateTableStmt extends StatementBase {
    * it in TableDef.generatedKuduTableName_. Throws if the Kudu table
    * name was given manually via TBLPROPERTIES.
    */
-  private void analyzeManagedKuduTableName() throws AnalysisException {
+  private void analyzeManagedKuduTableName(Analyzer analyzer) throws AnalysisException {
     AnalysisUtils.throwIfNotNull(getTblProperties().get(KuduTable.KEY_TABLE_NAME),
         String.format("Not allowed to set '%s' manually for managed Kudu tables .",
             KuduTable.KEY_TABLE_NAME));
+    String kuduMasters = populateKuduMasters(analyzer);
+    boolean isHMSIntegrationEnabled;
+    try {
+      // Check if Kudu's integration with the Hive Metastore is enabled. Validation
+      // of whether Kudu is configured to use the same Hive Metstore as Impala is skipped
+      // and is not necessary for syntax parsing.
+      isHMSIntegrationEnabled = KuduTable.isHMSIntegrationEnabled(kuduMasters);
+    } catch (ImpalaRuntimeException e) {
+      throw new AnalysisException(String.format("Cannot analyze Kudu table '%s': %s",
+          getTbl(), e.getMessage()));
+    }
     putGeneratedKuduProperty(KuduTable.KEY_TABLE_NAME,
-        KuduUtil.getDefaultCreateKuduTableName(getDb(), getTbl()));
+        KuduUtil.getDefaultKuduTableName(getDb(), getTbl(), isHMSIntegrationEnabled));
   }
 
   /**
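
As a rough illustration of the new 'kudu.table_id' restriction added in the hunk
above, a hypothetical FrontendTestBase-style test sketch (the class name and
statement are made up; the expected message mirrors the String.format() call in
analyzeKuduTableParams()):

    package org.apache.impala.analysis;

    import org.apache.impala.common.FrontendTestBase;
    import org.junit.Test;

    public class KuduTableIdPropertySketchTest extends FrontendTestBase {
      @Test
      public void testTableIdPropertyIsRejected() {
        // A user-supplied table ID should now fail analysis for a managed Kudu table.
        AnalysisError("create table tab (x int primary key) partition by hash(x) " +
            "partitions 3 stored as kudu tblproperties ('kudu.table_id'='12345')",
            "Table property kudu.table_id should not be specified when creating a " +
            "Kudu table.");
      }
    }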
diff --git a/fe/src/main/java/org/apache/impala/analysis/ToSqlUtils.java b/fe/src/main/java/org/apache/impala/analysis/ToSqlUtils.java
index 0951c40..5ded467 100644
--- a/fe/src/main/java/org/apache/impala/analysis/ToSqlUtils.java
+++ b/fe/src/main/java/org/apache/impala/analysis/ToSqlUtils.java
@@ -326,11 +326,14 @@ public class ToSqlUtils {
       storageHandlerClassName = null;
       properties.remove(KuduTable.KEY_STORAGE_HANDLER);
       String kuduTableName = properties.get(KuduTable.KEY_TABLE_NAME);
-      Preconditions.checkNotNull(kuduTableName);
-      if (kuduTableName.equals(KuduUtil.getDefaultCreateKuduTableName(
-          table.getDb().getName(), table.getName()))) {
+      // Remove the hidden table property 'kudu.table_name' for a managed Kudu table.
+      if (kuduTableName != null &&
+          KuduUtil.isDefaultKuduTableName(kuduTableName,
+              table.getDb().getName(), table.getName())) {
         properties.remove(KuduTable.KEY_TABLE_NAME);
       }
+      // Remove the hidden table property 'kudu.table_id'.
+      properties.remove(KuduTable.KEY_TABLE_ID);
       // Internal property, should not be exposed to the user.
       properties.remove(StatsSetupConst.DO_NOT_UPDATE_STATS);
 
diff --git a/fe/src/main/java/org/apache/impala/catalog/KuduTable.java b/fe/src/main/java/org/apache/impala/catalog/KuduTable.java
index e751419..dfa960e 100644
--- a/fe/src/main/java/org/apache/impala/catalog/KuduTable.java
+++ b/fe/src/main/java/org/apache/impala/catalog/KuduTable.java
@@ -38,6 +38,7 @@ import org.apache.impala.thrift.TTableDescriptor;
 import org.apache.impala.thrift.TTableType;
 import org.apache.impala.util.KuduUtil;
 import org.apache.kudu.ColumnSchema;
+import org.apache.kudu.client.HiveMetastoreConfig;
 import org.apache.kudu.client.KuduClient;
 import org.apache.kudu.client.KuduException;
 import org.apache.thrift.TException;
@@ -60,6 +61,9 @@ public class KuduTable extends Table implements FeKuduTable {
   // Key to access the table name from the table properties.
   public static final String KEY_TABLE_NAME = "kudu.table_name";
 
+  // Key to access the table ID from the table properties.
+  public static final String KEY_TABLE_ID = "kudu.table_id";
+
   // Key to access the columns used to build the (composite) key of the table.
   // Deprecated - Used only for error checking.
   public static final String KEY_KEY_COLUMNS = "kudu.key_columns";
@@ -152,6 +156,34 @@ public class KuduTable extends Table implements FeKuduTable {
   }
 
   /**
+   * Get the Hive Metastore configuration from Kudu masters.
+   */
+  private static HiveMetastoreConfig getHiveMetastoreConfig(String kuduMasters)
+      throws ImpalaRuntimeException {
+    Preconditions.checkNotNull(kuduMasters);
+    Preconditions.checkArgument(!kuduMasters.isEmpty());
+    KuduClient kuduClient = KuduUtil.getKuduClient(kuduMasters);
+    HiveMetastoreConfig hmsConfig;
+    try {
+      hmsConfig = kuduClient.getHiveMetastoreConfig();
+    } catch (KuduException e) {
+      throw new ImpalaRuntimeException(
+          String.format("Error determining if Kudu's integration with " +
+              "the Hive Metastore is enabled: %s", e.getMessage()));
+    }
+    return hmsConfig;
+  }
+
+  /**
+   * Check with the Kudu master to see if Kudu's integration with the Hive Metastore
+   * is enabled.
+   */
+  public static boolean isHMSIntegrationEnabled(String kuduMasters)
+      throws ImpalaRuntimeException {
+    return getHiveMetastoreConfig(kuduMasters) != null;
+  }
+
+  /**
    * Load schema and partitioning schemes directly from Kudu.
    */
   public void loadSchemaFromKudu() throws ImpalaRuntimeException {
@@ -192,9 +224,15 @@ public class KuduTable extends Table implements FeKuduTable {
       // Copy the table to check later if anything has changed.
       msTable_ = msTbl.deepCopy();
       kuduTableName_ = msTable_.getParameters().get(KuduTable.KEY_TABLE_NAME);
-      Preconditions.checkNotNull(kuduTableName_);
+      if (kuduTableName_ == null || kuduTableName_.isEmpty()) {
+        throw new TableLoadingException("No " + KuduTable.KEY_TABLE_NAME +
+            " property found for Kudu table " + kuduTableName_);
+      }
       kuduMasters_ = msTable_.getParameters().get(KuduTable.KEY_MASTER_HOSTS);
-      Preconditions.checkNotNull(kuduMasters_);
+      if (kuduMasters_ == null || kuduMasters_.isEmpty()) {
+        throw new TableLoadingException("No " + KuduTable.KEY_MASTER_HOSTS +
+            " property found for Kudu table " + kuduTableName_);
+      }
       setTableStats(msTable_);
       // Load metadata from Kudu and HMS
       try {
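
For reference, a minimal sketch (hypothetical class; the master address is a
placeholder) of how the new KuduTable.isHMSIntegrationEnabled() helper above is
intended to be called:

    import org.apache.impala.catalog.KuduTable;
    import org.apache.impala.common.ImpalaRuntimeException;

    public class HmsIntegrationCheckSketch {
      public static void main(String[] args) throws ImpalaRuntimeException {
        // Placeholder address; in practice this comes from 'kudu.master_addresses'
        // or the -kudu_master_hosts startup flag.
        String kuduMasters = "127.0.0.1:7051";
        // True when the Kudu cluster reports a Hive Metastore configuration.
        boolean hmsEnabled = KuduTable.isHMSIntegrationEnabled(kuduMasters);
        System.out.println("Kudu/HMS integration enabled: " + hmsEnabled);
      }
    }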
diff --git a/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java b/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
index 80859fa..06d55bc 100644
--- a/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
+++ b/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
@@ -2575,8 +2575,10 @@ public class CatalogOpExecutor {
   private void renameKuduTable(KuduTable oldTbl,
       org.apache.hadoop.hive.metastore.api.Table oldMsTbl, TableName newTableName)
       throws ImpalaRuntimeException {
-    String newKuduTableName = KuduUtil.getDefaultCreateKuduTableName(
-        newTableName.getDb(), newTableName.getTbl());
+    // TODO: update it to properly handle HMS integration.
+    String newKuduTableName = KuduUtil.getDefaultKuduTableName(
+        newTableName.getDb(), newTableName.getTbl(),
+        /* isHMSIntegrationEnabled= */false);
 
     // If the name of the Kudu table has not changed, do nothing
     if (oldTbl.getKuduTableName().equals(newKuduTableName)) return;
diff --git a/fe/src/main/java/org/apache/impala/util/KuduUtil.java b/fe/src/main/java/org/apache/impala/util/KuduUtil.java
index 00e0f7c..2c9c719 100644
--- a/fe/src/main/java/org/apache/impala/util/KuduUtil.java
+++ b/fe/src/main/java/org/apache/impala/util/KuduUtil.java
@@ -382,12 +382,27 @@ public class KuduUtil {
   }
 
   /**
-   * Return the name that should be used in Kudu when creating a table, assuming a custom
-   * name was not provided.
+   * When Kudu's integration with the Hive Metastore is enabled, returns the Kudu
+   * table name in the format 'metastoreDbName.metastoreTableName'. Otherwise,
+   * returns the name in the format 'impala::metastoreDbName.metastoreTableName'.
+   * This should only be used for managed tables.
    */
-  public static String getDefaultCreateKuduTableName(String metastoreDbName,
-      String metastoreTableName) {
-    return KUDU_TABLE_NAME_PREFIX + metastoreDbName + "." + metastoreTableName;
+  public static String getDefaultKuduTableName(String metastoreDbName,
+      String metastoreTableName, boolean isHMSIntegrationEnabled) {
+    return isHMSIntegrationEnabled ? metastoreDbName + "." + metastoreTableName :
+        KUDU_TABLE_NAME_PREFIX + metastoreDbName + "." + metastoreTableName;
+  }
+
+  /**
+   * Check if the given name is the default Kudu table name for a managed table,
+   * whether Kudu's integration with the Hive Metastore is enabled or not.
+   */
+  public static boolean isDefaultKuduTableName(String name,
+      String metastoreDbName, String metastoreTableName) {
+    return getDefaultKuduTableName(metastoreDbName,
+        metastoreTableName, true).equals(name) ||
+           getDefaultKuduTableName(metastoreDbName,
+        metastoreTableName, false).equals(name);
   }
 
   /**
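
And a similarly hypothetical sketch of isDefaultKuduTableName() from the hunk
above, which accepts either naming form; this is what lets ToSqlUtils drop the
generated 'kudu.table_name' property no matter how the table was created:

    import org.apache.impala.util.KuduUtil;

    public class DefaultKuduNameCheckSketch {
      public static void main(String[] args) {
        // True for the HMS-integrated form ...
        System.out.println(KuduUtil.isDefaultKuduTableName(
            "db_name.table_name", "db_name", "table_name"));
        // ... true for the legacy 'impala::' form ...
        System.out.println(KuduUtil.isDefaultKuduTableName(
            "impala::db_name.table_name", "db_name", "table_name"));
        // ... and false for a custom name set via TBLPROPERTIES.
        System.out.println(KuduUtil.isDefaultKuduTableName(
            "some_custom_name", "db_name", "table_name"));
      }
    }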
diff --git a/fe/src/test/java/org/apache/impala/analysis/AnalyzeDDLTest.java b/fe/src/test/java/org/apache/impala/analysis/AnalyzeDDLTest.java
index 02d0b84..6337612 100644
--- a/fe/src/test/java/org/apache/impala/analysis/AnalyzeDDLTest.java
+++ b/fe/src/test/java/org/apache/impala/analysis/AnalyzeDDLTest.java
@@ -40,7 +40,6 @@ import org.apache.impala.catalog.Column;
 import org.apache.impala.catalog.ColumnStats;
 import org.apache.impala.catalog.DataSource;
 import org.apache.impala.catalog.DataSourceTable;
-import org.apache.impala.catalog.KuduTable;
 import org.apache.impala.catalog.PrimitiveType;
 import org.apache.impala.catalog.ScalarType;
 import org.apache.impala.catalog.Type;
@@ -55,8 +54,6 @@ import org.apache.impala.thrift.TBackendGflags;
 import org.apache.impala.thrift.TDescribeTableParams;
 import org.apache.impala.thrift.TQueryOptions;
 import org.apache.impala.util.MetaStoreUtil;
-import org.apache.kudu.ColumnSchema.CompressionAlgorithm;
-import org.apache.kudu.ColumnSchema.Encoding;
 import org.junit.Assert;
 import org.junit.Test;
 
@@ -584,11 +581,11 @@ public class AnalyzeDDLTest extends FrontendTestBase {
         "ALTER TABLE not allowed on a table produced by a data source: " +
             "functional.alltypes_datasource");
 
-    // Cannot ALTER TABLE REPLACE COLUMNS on an HBase table.
+    // Cannot ALTER TABLE REPLACE COLUMNS on a HBase table.
     AnalysisError("alter table functional_hbase.alltypes replace columns (i int)",
         "ALTER TABLE REPLACE COLUMNS not currently supported on HBase tables.");
 
-    // Cannot ALTER TABLE REPLACE COLUMNS on an Kudu table.
+    // Cannot ALTER TABLE REPLACE COLUMNS on a Kudu table.
     AnalysisError("alter table functional_kudu.alltypes replace columns (i int)",
         "ALTER TABLE REPLACE COLUMNS is not supported on Kudu tables.");
   }
@@ -2640,433 +2637,6 @@ public class AnalyzeDDLTest extends FrontendTestBase {
   }
 
   @Test
-  public void TestCreateManagedKuduTable() {
-    TestUtils.assumeKuduIsSupported();
-    // Test primary keys and partition by clauses
-    AnalyzesOk("create table tab (x int primary key) partition by hash(x) " +
-        "partitions 8 stored as kudu");
-    AnalyzesOk("create table tab (x int, primary key(x)) partition by hash(x) " +
-        "partitions 8 stored as kudu");
-    AnalyzesOk("create table tab (x int, y int, primary key (x, y)) " +
-        "partition by hash(x, y) partitions 8 stored as kudu");
-    AnalyzesOk("create table tab (x int, y int, primary key (x)) " +
-        "partition by hash(x) partitions 8 stored as kudu");
-    AnalyzesOk("create table tab (x int, y int, primary key(x, y)) " +
-        "partition by hash(y) partitions 8 stored as kudu");
-    AnalyzesOk("create table tab (x timestamp, y timestamp, primary key(x)) " +
-        "partition by hash(x) partitions 8 stored as kudu");
-    AnalyzesOk("create table tab (x int, y string, primary key (x)) partition by " +
-        "hash (x) partitions 3, range (x) (partition values < 1, partition " +
-        "1 <= values < 10, partition 10 <= values < 20, partition value = 30) " +
-        "stored as kudu");
-    AnalyzesOk("create table tab (x int, y int, primary key (x, y)) partition by " +
-        "range (x, y) (partition value = (2001, 1), partition value = (2002, 1), " +
-        "partition value = (2003, 2)) stored as kudu");
-    // Non-literal boundary values in range partitions
-    AnalyzesOk("create table tab (x int, y int, primary key (x)) partition by " +
-        "range (x) (partition values < 1 + 1, partition (1+3) + 2 < values < 10, " +
-        "partition factorial(4) < values < factorial(5), " +
-        "partition value = factorial(6)) stored as kudu");
-    AnalyzesOk("create table tab (x int, y int, primary key(x, y)) partition by " +
-        "range(x, y) (partition value = (1+1, 2+2), partition value = ((1+1+1)+1, 10), " +
-        "partition value = (cast (30 as int), factorial(5))) stored as kudu");
-    AnalysisError("create table tab (x int primary key) partition by range (x) " +
-        "(partition values < x + 1) stored as kudu", "Only constant values are allowed " +
-        "for range-partition bounds: x + 1");
-    AnalysisError("create table tab (x int primary key) partition by range (x) " +
-        "(partition values <= isnull(null, null)) stored as kudu", "Range partition " +
-        "values cannot be NULL. Range partition: 'PARTITION VALUES <= " +
-        "isnull(NULL, NULL)'");
-    AnalysisError("create table tab (x int primary key) partition by range (x) " +
-        "(partition values <= (select count(*) from functional.alltypestiny)) " +
-        "stored as kudu", "Only constant values are allowed for range-partition " +
-        "bounds: (SELECT count(*) FROM functional.alltypestiny)");
-    // Multilevel partitioning. Data is split into 3 buckets based on 'x' and each
-    // bucket is partitioned into 4 tablets based on the range partitions of 'y'.
-    AnalyzesOk("create table tab (x int, y string, primary key(x, y)) " +
-        "partition by hash(x) partitions 3, range(y) " +
-        "(partition values < 'aa', partition 'aa' <= values < 'bb', " +
-        "partition 'bb' <= values < 'cc', partition 'cc' <= values) " +
-        "stored as kudu");
-    // Key column in upper case
-    AnalyzesOk("create table tab (x int, y int, primary key (X)) " +
-        "partition by hash (x) partitions 8 stored as kudu");
-    // Flexible Partitioning
-    AnalyzesOk("create table tab (a int, b int, c int, d int, primary key (a, b, c))" +
-        "partition by hash (a, b) partitions 8, hash(c) partitions 2 stored as " +
-        "kudu");
-    // No columns specified in the PARTITION BY HASH clause
-    AnalyzesOk("create table tab (a int primary key, b int, c int, d int) " +
-        "partition by hash partitions 8 stored as kudu");
-    // Distribute range data types are picked up during analysis and forwarded to Kudu.
-    // Column names in distribute params should also be case-insensitive.
-    AnalyzesOk("create table tab (a int, b int, c int, d int, primary key(a, b, c, d))" +
-        "partition by hash (a, B, c) partitions 8, " +
-        "range (A) (partition values < 1, partition 1 <= values < 2, " +
-        "partition 2 <= values < 3, partition 3 <= values < 4, partition 4 <= values) " +
-        "stored as kudu");
-    // Allowing range partitioning on a subset of the primary keys
-    AnalyzesOk("create table tab (id int, name string, valf float, vali bigint, " +
-        "primary key (id, name)) partition by range (name) " +
-        "(partition 'aa' < values <= 'bb') stored as kudu");
-    // Null values in range partition values
-    AnalysisError("create table tab (id int, name string, primary key(id, name)) " +
-        "partition by hash (id) partitions 3, range (name) " +
-        "(partition value = null, partition value = 1) stored as kudu",
-        "Range partition values cannot be NULL. Range partition: 'PARTITION " +
-        "VALUE = NULL'");
-    // Primary key specified in tblproperties
-    AnalysisError(String.format("create table tab (x int) partition by hash (x) " +
-        "partitions 8 stored as kudu tblproperties ('%s' = 'x')",
-        KuduTable.KEY_KEY_COLUMNS), "PRIMARY KEY must be used instead of the table " +
-        "property");
-    // Primary key column that doesn't exist
-    AnalysisError("create table tab (x int, y int, primary key (z)) " +
-        "partition by hash (x) partitions 8 stored as kudu",
-        "PRIMARY KEY column 'z' does not exist in the table");
-    // Invalid composite primary key
-    AnalysisError("create table tab (x int primary key, primary key(x)) stored " +
-        "as kudu", "Multiple primary keys specified. Composite primary keys can " +
-        "be specified using the PRIMARY KEY (col1, col2, ...) syntax at the end " +
-        "of the column definition.");
-    AnalysisError("create table tab (x int primary key, y int primary key) stored " +
-        "as kudu", "Multiple primary keys specified. Composite primary keys can " +
-        "be specified using the PRIMARY KEY (col1, col2, ...) syntax at the end " +
-        "of the column definition.");
-    // Specifying the same primary key column multiple times
-    AnalysisError("create table tab (x int, primary key (x, x)) partition by hash (x) " +
-        "partitions 8 stored as kudu",
-        "Column 'x' is listed multiple times as a PRIMARY KEY.");
-    // Number of range partition boundary values should be equal to the number of range
-    // columns.
-    AnalysisError("create table tab (a int, b int, c int, d int, primary key(a, b, c)) " +
-        "partition by range(a) (partition value = (1, 2), " +
-        "partition value = 3, partition value = 4) stored as kudu",
-        "Number of specified range partition values is different than the number of " +
-        "partitioning columns: (2 vs 1). Range partition: 'PARTITION VALUE = (1, 2)'");
-    // Key ranges must match the column types.
-    AnalysisError("create table tab (a int, b int, c int, d int, primary key(a, b, c)) " +
-        "partition by hash (a, b, c) partitions 8, range (a) " +
-        "(partition value = 1, partition value = 'abc', partition 3 <= values) " +
-        "stored as kudu", "Range partition value 'abc' (type: STRING) is not type " +
-        "compatible with partitioning column 'a' (type: INT).");
-    AnalysisError("create table tab (a tinyint primary key) partition by range (a) " +
-        "(partition value = 128) stored as kudu", "Range partition value 128 " +
-        "(type: SMALLINT) is not type compatible with partitioning column 'a' " +
-        "(type: TINYINT)");
-    AnalysisError("create table tab (a smallint primary key) partition by range (a) " +
-        "(partition value = 32768) stored as kudu", "Range partition value 32768 " +
-        "(type: INT) is not type compatible with partitioning column 'a' " +
-        "(type: SMALLINT)");
-    AnalysisError("create table tab (a int primary key) partition by range (a) " +
-        "(partition value = 2147483648) stored as kudu", "Range partition value " +
-        "2147483648 (type: BIGINT) is not type compatible with partitioning column 'a' " +
-        "(type: INT)");
-    AnalysisError("create table tab (a bigint primary key) partition by range (a) " +
-        "(partition value = 9223372036854775808) stored as kudu", "Range partition " +
-        "value 9223372036854775808 (type: DECIMAL(19,0)) is not type compatible with " +
-        "partitioning column 'a' (type: BIGINT)");
-    // Test implicit casting/folding of partition values.
-    AnalyzesOk("create table tab (a int primary key) partition by range (a) " +
-        "(partition value = false, partition value = true) stored as kudu");
-    // Non-key column used in PARTITION BY
-    AnalysisError("create table tab (a int, b string, c bigint, primary key (a)) " +
-        "partition by range (b) (partition value = 'abc') stored as kudu",
-        "Column 'b' in 'RANGE (b) (PARTITION VALUE = 'abc')' is not a key column. " +
-        "Only key columns can be used in PARTITION BY.");
-    // No float range partition values
-    AnalysisError("create table tab (a int, b int, c int, d int, primary key (a, b, c))" +
-        "partition by hash (a, b, c) partitions 8, " +
-        "range (a) (partition value = 1.2, partition value = 2) stored as kudu",
-        "Range partition value 1.2 (type: DECIMAL(2,1)) is not type compatible with " +
-        "partitioning column 'a' (type: INT).");
-    // Non-existing column used in PARTITION BY
-    AnalysisError("create table tab (a int, b int, primary key (a, b)) " +
-        "partition by range(unknown_column) (partition value = 'abc') stored as kudu",
-        "Column 'unknown_column' in 'RANGE (unknown_column) (PARTITION VALUE = 'abc')' " +
-        "is not a key column. Only key columns can be used in PARTITION BY");
-    // Kudu num_tablet_replicas is specified in tblproperties
-    AnalyzesOk("create table tab (x int primary key) partition by hash (x) " +
-        "partitions 8 stored as kudu tblproperties ('kudu.num_tablet_replicas'='1'," +
-        "'kudu.master_addresses' = '127.0.0.1:8080, 127.0.0.1:8081')");
-    // Kudu table name is specified in tblproperties resulting in an error
-    AnalysisError("create table tab (x int primary key) partition by hash (x) " +
-        "partitions 8 stored as kudu tblproperties ('kudu.table_name'='tab')",
-        "Not allowed to set 'kudu.table_name' manually for managed Kudu tables");
-    // No port is specified in kudu master address
-    AnalyzesOk("create table tdata_no_port (id int primary key, name string, " +
-        "valf float, vali bigint) partition by range(id) (partition values <= 10, " +
-        "partition 10 < values <= 30, partition 30 < values) " +
-        "stored as kudu tblproperties('kudu.master_addresses'='127.0.0.1')");
-    // Not using the STORED AS KUDU syntax to specify a Kudu table
-    AnalysisError("create table tab (x int) tblproperties (" +
-        "'storage_handler'='com.cloudera.kudu.hive.KuduStorageHandler')",
-        CreateTableStmt.KUDU_STORAGE_HANDLER_ERROR_MESSAGE);
-    // Creating unpartitioned table results in a warning.
-    AnalyzesOk("create table tab (x int primary key) stored as kudu tblproperties (" +
-        "'storage_handler'='com.cloudera.kudu.hive.KuduStorageHandler')",
-        "Unpartitioned Kudu tables are inefficient for large data sizes.");
-    // Invalid value for number of replicas
-    AnalysisError("create table t (x int primary key) stored as kudu tblproperties (" +
-        "'kudu.num_tablet_replicas'='1.1')",
-        "Table property 'kudu.num_tablet_replicas' must be an integer.");
-    // Don't allow caching
-    AnalysisError("create table tab (x int primary key) stored as kudu cached in " +
-        "'testPool'", "A Kudu table cannot be cached in HDFS.");
-    // LOCATION cannot be used with Kudu tables
-    AnalysisError("create table tab (a int primary key) partition by hash (a) " +
-        "partitions 3 stored as kudu location '/test-warehouse/'",
-        "LOCATION cannot be specified for a Kudu table.");
-    // Creating unpartitioned table results in a warning.
-    AnalyzesOk("create table tab (a int, primary key (a)) stored as kudu",
-        "Unpartitioned Kudu tables are inefficient for large data sizes.");
-    AnalysisError("create table tab (a int) stored as kudu",
-        "A primary key is required for a Kudu table.");
-    // Using ROW FORMAT with a Kudu table
-    AnalysisError("create table tab (x int primary key) " +
-        "row format delimited escaped by 'X' stored as kudu",
-        "ROW FORMAT cannot be specified for file format KUDU.");
-    // Using PARTITIONED BY with a Kudu table
-    AnalysisError("create table tab (x int primary key) " +
-        "partitioned by (y int) stored as kudu", "PARTITIONED BY cannot be used " +
-        "in Kudu tables.");
-    // Multi-column range partitions
-    AnalyzesOk("create table tab (a bigint, b tinyint, c double, primary key(a, b)) " +
-        "partition by range(a, b) (partition (0, 0) < values <= (1, 1)) stored as kudu");
-    AnalysisError("create table tab (a bigint, b tinyint, c double, primary key(a, b)) " +
-        "partition by range(a, b) (partition values <= (1, 'b')) stored as kudu",
-        "Range partition value 'b' (type: STRING) is not type compatible with " +
-        "partitioning column 'b' (type: TINYINT)");
-    AnalysisError("create table tab (a bigint, b tinyint, c double, primary key(a, b)) " +
-        "partition by range(a, b) (partition 0 < values <= 1) stored as kudu",
-        "Number of specified range partition values is different than the number of " +
-        "partitioning columns: (1 vs 2). Range partition: 'PARTITION 0 < VALUES <= 1'");
-
-
-    // Test unsupported Kudu types
-    List<String> unsupportedTypes = Lists.newArrayList("VARCHAR(20)", "CHAR(20)",
-        "STRUCT<f1:INT,f2:STRING>", "ARRAY<INT>", "MAP<STRING,STRING>");
-    for (String t: unsupportedTypes) {
-      String expectedError = String.format(
-          "Cannot create table 'tab': Type %s is not supported in Kudu", t);
-
-      // Unsupported type is PK and partition col
-      String stmt = String.format("create table tab (x %s primary key) " +
-          "partition by hash(x) partitions 3 stored as kudu", t);
-      AnalysisError(stmt, expectedError);
-
-      // Unsupported type is not PK/partition col
-      stmt = String.format("create table tab (x int primary key, y %s) " +
-          "partition by hash(x) partitions 3 stored as kudu", t);
-      AnalysisError(stmt, expectedError);
-    }
-
-    // Test column options
-    String[] nullability = {"not null", "null", ""};
-    String[] defaultVal = {"default 10", ""};
-    String[] blockSize = {"block_size 4096", ""};
-    for (Encoding enc: Encoding.values()) {
-      for (CompressionAlgorithm comp: CompressionAlgorithm.values()) {
-        for (String nul: nullability) {
-          for (String def: defaultVal) {
-            for (String block: blockSize) {
-              // Test analysis for a non-key column
-              AnalyzesOk(String.format("create table tab (x int primary key " +
-                  "not null encoding %s compression %s %s %s, y int encoding %s " +
-                  "compression %s %s %s %s) partition by hash (x) " +
-                  "partitions 3 stored as kudu", enc, comp, def, block, enc,
-                  comp, def, nul, block));
-
-              // For a key column
-              String createTblStr = String.format("create table tab (x int primary key " +
-                  "%s encoding %s compression %s %s %s) partition by hash (x) " +
-                  "partitions 3 stored as kudu", nul, enc, comp, def, block);
-              if (nul.equals("null")) {
-                AnalysisError(createTblStr, "Primary key columns cannot be nullable");
-              } else {
-                AnalyzesOk(createTblStr);
-              }
-            }
-          }
-        }
-      }
-    }
-    // Use NULL as default values
-    AnalyzesOk("create table tab (x int primary key, i1 tinyint default null, " +
-        "i2 smallint default null, i3 int default null, i4 bigint default null, " +
-        "vals string default null, valf float default null, vald double default null, " +
-        "valb boolean default null, valdec decimal(10, 5) default null) " +
-        "partition by hash (x) partitions 3 stored as kudu");
-    // Use NULL as a default value on a non-nullable column
-    AnalysisError("create table tab (x int primary key, y int not null default null) " +
-        "partition by hash (x) partitions 3 stored as kudu", "Default value of NULL " +
-        "not allowed on non-nullable column: 'y'");
-    // Primary key specified using the PRIMARY KEY clause
-    AnalyzesOk("create table tab (x int not null encoding plain_encoding " +
-        "compression snappy block_size 1, y int null encoding rle compression lz4 " +
-        "default 1, primary key(x)) partition by hash (x) partitions 3 " +
-        "stored as kudu");
-    // Primary keys can't be null
-    AnalysisError("create table tab (x int primary key null, y int not null) " +
-        "partition by hash (x) partitions 3 stored as kudu", "Primary key columns " +
-        "cannot be nullable: x INT PRIMARY KEY NULL");
-    AnalysisError("create table tab (x int not null, y int null, primary key (x, y)) " +
-        "partition by hash (x) partitions 3 stored as kudu", "Primary key columns " +
-        "cannot be nullable: y INT NULL");
-    // Unsupported encoding value
-    AnalysisError("create table tab (x int primary key, y int encoding invalid_enc) " +
-        "partition by hash (x) partitions 3 stored as kudu", "Unsupported encoding " +
-        "value 'INVALID_ENC'. Supported encoding values are: " +
-        Joiner.on(", ").join(Encoding.values()));
-    // Unsupported compression algorithm
-    AnalysisError("create table tab (x int primary key, y int compression " +
-        "invalid_comp) partition by hash (x) partitions 3 stored as kudu",
-        "Unsupported compression algorithm 'INVALID_COMP'. Supported compression " +
-        "algorithms are: " + Joiner.on(", ").join(CompressionAlgorithm.values()));
-    // Default values
-    AnalyzesOk("create table tab (i1 tinyint default 1, i2 smallint default 10, " +
-        "i3 int default 100, i4 bigint default 1000, vals string default 'test', " +
-        "valf float default cast(1.2 as float), vald double default " +
-        "cast(3.1452 as double), valb boolean default true, " +
-        "valdec decimal(10, 5) default 3.14159, " +
-        "primary key (i1, i2, i3, i4, vals)) partition by hash (i1) partitions 3 " +
-        "stored as kudu");
-    AnalyzesOk("create table tab (i int primary key default 1+1+1) " +
-        "partition by hash (i) partitions 3 stored as kudu");
-    AnalyzesOk("create table tab (i int primary key default factorial(5)) " +
-        "partition by hash (i) partitions 3 stored as kudu");
-    AnalyzesOk("create table tab (i int primary key, x int null default " +
-        "isnull(null, null)) partition by hash (i) partitions 3 stored as kudu");
-    // Invalid default values
-    AnalysisError("create table tab (i int primary key default 'string_val') " +
-        "partition by hash (i) partitions 3 stored as kudu", "Default value " +
-        "'string_val' (type: STRING) is not compatible with column 'i' (type: INT).");
-    AnalysisError("create table tab (i int primary key, x int default 1.1) " +
-        "partition by hash (i) partitions 3 stored as kudu",
-        "Default value 1.1 (type: DECIMAL(2,1)) is not compatible with column " +
-        "'x' (type: INT).");
-    AnalysisError("create table tab (i tinyint primary key default 128) " +
-        "partition by hash (i) partitions 3 stored as kudu", "Default value " +
-        "128 (type: SMALLINT) is not compatible with column 'i' (type: TINYINT).");
-    AnalysisError("create table tab (i int primary key default isnull(null, null)) " +
-        "partition by hash (i) partitions 3 stored as kudu", "Default value of " +
-        "NULL not allowed on non-nullable column: 'i'");
-    AnalysisError("create table tab (i int primary key, x int not null " +
-        "default isnull(null, null)) partition by hash (i) partitions 3 " +
-        "stored as kudu", "Default value of NULL not allowed on non-nullable column: " +
-        "'x'");
-    // Invalid block_size values
-    AnalysisError("create table tab (i int primary key block_size 1.1) " +
-        "partition by hash (i) partitions 3 stored as kudu", "Invalid value " +
-        "for BLOCK_SIZE: 1.1. A positive INTEGER value is expected.");
-    AnalysisError("create table tab (i int primary key block_size 'val') " +
-        "partition by hash (i) partitions 3 stored as kudu", "Invalid value " +
-        "for BLOCK_SIZE: 'val'. A positive INTEGER value is expected.");
-
-    // Sort columns are not supported for Kudu tables.
-    AnalysisError("create table tab (i int, x int primary key) partition by hash(x) " +
-        "partitions 8 sort by(i) stored as kudu", "SORT BY is not supported for Kudu " +
-        "tables.");
-
-    // Range partitions with TIMESTAMP
-    AnalyzesOk("create table ts_ranges (ts timestamp primary key) " +
-        "partition by range (partition cast('2009-01-01 00:00:00' as timestamp) " +
-        "<= VALUES < '2009-01-02 00:00:00') stored as kudu");
-    AnalyzesOk("create table ts_ranges (ts timestamp primary key) " +
-        "partition by range (partition value = cast('2009-01-01 00:00:00' as timestamp" +
-        ")) stored as kudu");
-    AnalyzesOk("create table ts_ranges (ts timestamp primary key) " +
-        "partition by range (partition value = '2009-01-01 00:00:00') " +
-        "stored as kudu");
-    AnalyzesOk("create table ts_ranges (id int, ts timestamp, primary key(id, ts))" +
-        "partition by range (partition value = (9, cast('2009-01-01 00:00:00' as " +
-        "timestamp))) stored as kudu");
-    AnalyzesOk("create table ts_ranges (id int, ts timestamp, primary key(id, ts))" +
-        "partition by range (partition value = (9, '2009-01-01 00:00:00')) " +
-        "stored as kudu");
-    AnalysisError("create table ts_ranges (ts timestamp primary key, i int)" +
-        "partition by range (partition '2009-01-01 00:00:00' <= VALUES < " +
-        "'NOT A TIMESTAMP') stored as kudu",
-        "Range partition value 'NOT A TIMESTAMP' cannot be cast to target TIMESTAMP " +
-        "partitioning column.");
-    AnalysisError("create table ts_ranges (ts timestamp primary key, i int)" +
-        "partition by range (partition 100 <= VALUES < 200) stored as kudu",
-        "Range partition value 100 (type: TINYINT) is not type " +
-        "compatible with partitioning column 'ts' (type: TIMESTAMP).");
-
-    // TIMESTAMP columns with default values
-    AnalyzesOk("create table tdefault (id int primary key, ts timestamp default now())" +
-        "partition by hash(id) partitions 3 stored as kudu");
-    AnalyzesOk("create table tdefault (id int primary key, ts timestamp default " +
-        "unix_micros_to_utc_timestamp(1230768000000000)) partition by hash(id) " +
-        "partitions 3 stored as kudu");
-    AnalyzesOk("create table tdefault (id int primary key, " +
-        "ts timestamp not null default '2009-01-01 00:00:00') " +
-        "partition by hash(id) partitions 3 stored as kudu");
-    AnalyzesOk("create table tdefault (id int primary key, " +
-        "ts timestamp not null default cast('2009-01-01 00:00:00' as timestamp)) " +
-        "partition by hash(id) partitions 3 stored as kudu");
-    AnalysisError("create table tdefault (id int primary key, ts timestamp " +
-        "default null) partition by hash(id) partitions 3 stored as kudu",
-        "NULL cannot be cast to a TIMESTAMP literal.");
-    AnalysisError("create table tdefault (id int primary key, " +
-        "ts timestamp not null default cast('00:00:00' as timestamp)) " +
-        "partition by hash(id) partitions 3 stored as kudu",
-        "CAST('00:00:00' AS TIMESTAMP) cannot be cast to a TIMESTAMP literal.");
-    AnalysisError("create table tdefault (id int primary key, " +
-        "ts timestamp not null default '2009-1 foo') " +
-        "partition by hash(id) partitions 3 stored as kudu",
-        "String '2009-1 foo' cannot be cast to a TIMESTAMP literal.");
-
-    // Test column comments.
-    AnalyzesOk("create table tab (x int comment 'x', y int comment 'y', " +
-        "primary key (x, y)) stored as kudu");
-  }
-
-  @Test
-  public void TestCreateExternalKuduTable() {
-    AnalyzesOk("create external table t stored as kudu " +
-        "tblproperties('kudu.table_name'='t')");
-    // Use all allowed optional table props.
-    AnalyzesOk("create external table t stored as kudu tblproperties (" +
-        "'kudu.table_name'='tab'," +
-        "'kudu.master_addresses' = '127.0.0.1:8080, 127.0.0.1:8081')");
-    // Kudu table should be specified using the STORED AS KUDU syntax.
-    AnalysisError("create external table t tblproperties (" +
-        "'storage_handler'='com.cloudera.kudu.hive.KuduStorageHandler'," +
-        "'kudu.table_name'='t')",
-        CreateTableStmt.KUDU_STORAGE_HANDLER_ERROR_MESSAGE);
-    // Columns should not be specified in an external Kudu table
-    AnalysisError("create external table t (x int) stored as kudu " +
-        "tblproperties('kudu.table_name'='t')",
-        "Columns cannot be specified with an external Kudu table.");
-    // Primary keys cannot be specified in an external Kudu table
-    AnalysisError("create external table t (x int primary key) stored as kudu " +
-        "tblproperties('kudu.table_name'='t')", "Primary keys cannot be specified " +
-        "for an external Kudu table");
-    // Invalid syntax for specifying a Kudu table
-    AnalysisError("create external table t (x int) stored as parquet tblproperties (" +
-        "'storage_handler'='com.cloudera.kudu.hive.KuduStorageHandler'," +
-        "'kudu.table_name'='t')", CreateTableStmt.KUDU_STORAGE_HANDLER_ERROR_MESSAGE);
-    AnalysisError("create external table t stored as kudu tblproperties (" +
-        "'storage_handler'='foo', 'kudu.table_name'='t')",
-        "Invalid storage handler specified for Kudu table: foo");
-    // Cannot specify the number of replicas for external Kudu tables
-    AnalysisError("create external table tab (x int) stored as kudu " +
-        "tblproperties ('kudu.num_tablet_replicas' = '1', " +
-        "'kudu.table_name'='tab')",
-        "Table property 'kudu.num_tablet_replicas' cannot be used with an external " +
-        "Kudu table.");
-    // Don't allow caching
-    AnalysisError("create external table t stored as kudu cached in 'testPool' " +
-        "tblproperties('kudu.table_name'='t')", "A Kudu table cannot be cached in HDFS.");
-    // LOCATION cannot be used for a Kudu table
-    AnalysisError("create external table t stored as kudu " +
-        "location '/test-warehouse' tblproperties('kudu.table_name'='t')",
-        "LOCATION cannot be specified for a Kudu table.");
-  }
-
-  @Test
   public void TestCreateAvroTest() {
     String alltypesSchemaLoc =
         "hdfs:///test-warehouse/avro_schemas/functional/alltypes.json";
diff --git a/fe/src/test/java/org/apache/impala/analysis/AnalyzeKuduDDLTest.java b/fe/src/test/java/org/apache/impala/analysis/AnalyzeKuduDDLTest.java
new file mode 100644
index 0000000..6ac19b5
--- /dev/null
+++ b/fe/src/test/java/org/apache/impala/analysis/AnalyzeKuduDDLTest.java
@@ -0,0 +1,495 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.impala.analysis;
+
+import org.apache.impala.catalog.KuduTable;
+import org.apache.impala.common.FrontendTestBase;
+import org.apache.impala.testutil.TestUtils;
+import org.apache.kudu.ColumnSchema.CompressionAlgorithm;
+import org.apache.kudu.ColumnSchema.Encoding;
+import org.junit.Test;
+
+import com.google.common.base.Joiner;
+import com.google.common.collect.Lists;
+
+import java.util.List;
+
+/**
+ * Tests on DDL analysis for Kudu tables.
+ */
+public class AnalyzeKuduDDLTest extends FrontendTestBase {
+
+  @Test
+  public void TestCreateManagedKuduTable() {
+    TestUtils.assumeKuduIsSupported();
+    // Test primary keys and partition by clauses
+    AnalyzesOk("create table tab (x int primary key) partition by hash(x) " +
+        "partitions 8 stored as kudu");
+    AnalyzesOk("create table tab (x int, primary key(x)) partition by hash(x) " +
+        "partitions 8 stored as kudu");
+    AnalyzesOk("create table tab (x int, y int, primary key (x, y)) " +
+        "partition by hash(x, y) partitions 8 stored as kudu");
+    AnalyzesOk("create table tab (x int, y int, primary key (x)) " +
+        "partition by hash(x) partitions 8 stored as kudu");
+    AnalyzesOk("create table tab (x int, y int, primary key(x, y)) " +
+        "partition by hash(y) partitions 8 stored as kudu");
+    AnalyzesOk("create table tab (x timestamp, y timestamp, primary key(x)) " +
+        "partition by hash(x) partitions 8 stored as kudu");
+    AnalyzesOk("create table tab (x int, y string, primary key (x)) partition by " +
+        "hash (x) partitions 3, range (x) (partition values < 1, partition " +
+        "1 <= values < 10, partition 10 <= values < 20, partition value = 30) " +
+        "stored as kudu");
+    AnalyzesOk("create table tab (x int, y int, primary key (x, y)) partition by " +
+        "range (x, y) (partition value = (2001, 1), partition value = (2002, 1), " +
+        "partition value = (2003, 2)) stored as kudu");
+    // Non-literal boundary values in range partitions
+    AnalyzesOk("create table tab (x int, y int, primary key (x)) partition by " +
+        "range (x) (partition values < 1 + 1, partition (1+3) + 2 < values < 10, " +
+        "partition factorial(4) < values < factorial(5), " +
+        "partition value = factorial(6)) stored as kudu");
+    AnalyzesOk("create table tab (x int, y int, primary key(x, y)) partition by " +
+        "range(x, y) (partition value = (1+1, 2+2), partition value = ((1+1+1)+1, 10), " +
+        "partition value = (cast (30 as int), factorial(5))) stored as kudu");
+    AnalysisError("create table tab (x int primary key) partition by range (x) " +
+        "(partition values < x + 1) stored as kudu", "Only constant values are allowed " +
+        "for range-partition bounds: x + 1");
+    AnalysisError("create table tab (x int primary key) partition by range (x) " +
+        "(partition values <= isnull(null, null)) stored as kudu", "Range partition " +
+        "values cannot be NULL. Range partition: 'PARTITION VALUES <= " +
+        "isnull(NULL, NULL)'");
+    AnalysisError("create table tab (x int primary key) partition by range (x) " +
+        "(partition values <= (select count(*) from functional.alltypestiny)) " +
+        "stored as kudu", "Only constant values are allowed for range-partition " +
+        "bounds: (SELECT count(*) FROM functional.alltypestiny)");
+    // Multilevel partitioning. Data is split into 3 buckets based on 'x' and each
+    // bucket is partitioned into 4 tablets based on the range partitions of 'y'.
+    AnalyzesOk("create table tab (x int, y string, primary key(x, y)) " +
+        "partition by hash(x) partitions 3, range(y) " +
+        "(partition values < 'aa', partition 'aa' <= values < 'bb', " +
+        "partition 'bb' <= values < 'cc', partition 'cc' <= values) " +
+        "stored as kudu");
+    // Key column in upper case
+    AnalyzesOk("create table tab (x int, y int, primary key (X)) " +
+        "partition by hash (x) partitions 8 stored as kudu");
+    // Flexible Partitioning
+    AnalyzesOk("create table tab (a int, b int, c int, d int, primary key (a, b, c))" +
+        "partition by hash (a, b) partitions 8, hash(c) partitions 2 stored as " +
+        "kudu");
+    // No columns specified in the PARTITION BY HASH clause
+    AnalyzesOk("create table tab (a int primary key, b int, c int, d int) " +
+        "partition by hash partitions 8 stored as kudu");
+    // Distribute range data types are picked up during analysis and forwarded to Kudu.
+    // Column names in distribute params should also be case-insensitive.
+    AnalyzesOk("create table tab (a int, b int, c int, d int, primary key(a, b, c, d))" +
+        "partition by hash (a, B, c) partitions 8, " +
+        "range (A) (partition values < 1, partition 1 <= values < 2, " +
+        "partition 2 <= values < 3, partition 3 <= values < 4, partition 4 <= values) " +
+        "stored as kudu");
+    // Allowing range partitioning on a subset of the primary keys
+    AnalyzesOk("create table tab (id int, name string, valf float, vali bigint, " +
+        "primary key (id, name)) partition by range (name) " +
+        "(partition 'aa' < values <= 'bb') stored as kudu");
+    // Null values in range partition values
+    AnalysisError("create table tab (id int, name string, primary key(id, name)) " +
+        "partition by hash (id) partitions 3, range (name) " +
+        "(partition value = null, partition value = 1) stored as kudu",
+"Range partition values cannot be NULL. Range partition: 'PARTITION " +
+        "VALUE = NULL'");
+    // Primary key specified in tblproperties
+    AnalysisError(String.format("create table tab (x int) partition by hash (x) " +
+        "partitions 8 stored as kudu tblproperties ('%s' = 'x')",
+        KuduTable.KEY_KEY_COLUMNS), "PRIMARY KEY must be used instead of the table " +
+        "property");
+    // Primary key column that doesn't exist
+    AnalysisError("create table tab (x int, y int, primary key (z)) " +
+        "partition by hash (x) partitions 8 stored as kudu",
+        "PRIMARY KEY column 'z' does not exist in the table");
+    // Invalid composite primary key
+    AnalysisError("create table tab (x int primary key, primary key(x)) stored " +
+        "as kudu", "Multiple primary keys specified. Composite primary keys can " +
+        "be specified using the PRIMARY KEY (col1, col2, ...) syntax at the end " +
+        "of the column definition.");
+    AnalysisError("create table tab (x int primary key, y int primary key) stored " +
+        "as kudu", "Multiple primary keys specified. Composite primary keys can " +
+        "be specified using the PRIMARY KEY (col1, col2, ...) syntax at the end " +
+        "of the column definition.");
+    // Specifying the same primary key column multiple times
+    AnalysisError("create table tab (x int, primary key (x, x)) partition by hash (x) " +
+        "partitions 8 stored as kudu",
+        "Column 'x' is listed multiple times as a PRIMARY KEY.");
+    // Number of range partition boundary values should be equal to the number of range
+    // columns.
+    AnalysisError("create table tab (a int, b int, c int, d int, primary key(a, b, c)) " +
+        "partition by range(a) (partition value = (1, 2), " +
+        "partition value = 3, partition value = 4) stored as kudu",
+"Number of specified range partition values is different than the number of " +
+        "partitioning columns: (2 vs 1). Range partition: 'PARTITION VALUE = (1, 2)'");
+    // Key ranges must match the column types.
+    AnalysisError("create table tab (a int, b int, c int, d int, primary key(a, b, c)) " +
+        "partition by hash (a, b, c) partitions 8, range (a) " +
+        "(partition value = 1, partition value = 'abc', partition 3 <= values) " +
+        "stored as kudu", "Range partition value 'abc' (type: STRING) is not type " +
+        "compatible with partitioning column 'a' (type: INT).");
+    AnalysisError("create table tab (a tinyint primary key) partition by range (a) " +
+        "(partition value = 128) stored as kudu", "Range partition value 128 " +
+        "(type: SMALLINT) is not type compatible with partitioning column 'a' " +
+        "(type: TINYINT)");
+    AnalysisError("create table tab (a smallint primary key) partition by range (a) " +
+        "(partition value = 32768) stored as kudu", "Range partition value 32768 " +
+        "(type: INT) is not type compatible with partitioning column 'a' " +
+        "(type: SMALLINT)");
+    AnalysisError("create table tab (a int primary key) partition by range (a) " +
+        "(partition value = 2147483648) stored as kudu", "Range partition value " +
+        "2147483648 (type: BIGINT) is not type compatible with partitioning column 'a' " +
+        "(type: INT)");
+    AnalysisError("create table tab (a bigint primary key) partition by range (a) " +
+        "(partition value = 9223372036854775808) stored as kudu", "Range partition " +
+        "value 9223372036854775808 (type: DECIMAL(19,0)) is not type compatible with " +
+        "partitioning column 'a' (type: BIGINT)");
+    // Test implicit casting/folding of partition values.
+    AnalyzesOk("create table tab (a int primary key) partition by range (a) " +
+        "(partition value = false, partition value = true) stored as kudu");
+    // Non-key column used in PARTITION BY
+    AnalysisError("create table tab (a int, b string, c bigint, primary key (a)) " +
+        "partition by range (b) (partition value = 'abc') stored as kudu",
+  "Column 'b' in 'RANGE (b) (PARTITION VALUE = 'abc')' is not a key column. " +
+        "Only key columns can be used in PARTITION BY.");
+    // No float range partition values
+    AnalysisError("create table tab (a int, b int, c int, d int, primary key (a, b, c))" +
+        "partition by hash (a, b, c) partitions 8, " +
+        "range (a) (partition value = 1.2, partition value = 2) stored as kudu",
+        "Range partition value 1.2 (type: DECIMAL(2,1)) is not type compatible with " +
+        "partitioning column 'a' (type: INT).");
+    // Non-existing column used in PARTITION BY
+    AnalysisError("create table tab (a int, b int, primary key (a, b)) " +
+        "partition by range(unknown_column) (partition value = 'abc') stored as kudu",
+        "Column 'unknown_column' in 'RANGE (unknown_column) " +
+        "(PARTITION VALUE = 'abc')' is not a key column. Only key columns can be used " +
+        "in PARTITION BY");
+    // Kudu num_tablet_replicas is specified in tblproperties
+    String kuduMasters = catalog_.getDefaultKuduMasterHosts();
+    AnalyzesOk(String.format("create table tab (x int primary key) partition by " +
+        "hash (x) partitions 8 stored as kudu tblproperties " +
+        "('kudu.num_tablet_replicas'='1', 'kudu.master_addresses' = '%s')",
+        kuduMasters));
+    // Kudu table name is specified in tblproperties resulting in an error
+    AnalysisError("create table tab (x int primary key) partition by hash (x) " +
+        "partitions 8 stored as kudu tblproperties ('kudu.table_name'='tab')",
+        "Not allowed to set 'kudu.table_name' manually for managed Kudu tables");
+    // No port is specified in kudu master address
+    AnalyzesOk(String.format("create table tdata_no_port (id int primary key, " +
+        "name string, valf float, vali bigint) partition by range(id) " +
+        "(partition values <= 10, partition 10 < values <= 30, " +
+        "partition 30 < values) stored as kudu tblproperties" +
+        "('kudu.master_addresses' = '%s')", kuduMasters));
+    // Not using the STORED AS KUDU syntax to specify a Kudu table
+    AnalysisError("create table tab (x int) tblproperties (" +
+        "'storage_handler'='org.apache.kudu.hive.KuduStorageHandler')",
+        CreateTableStmt.KUDU_STORAGE_HANDLER_ERROR_MESSAGE);
+    // Creating unpartitioned table results in a warning.
+    AnalyzesOk("create table tab (x int primary key) stored as kudu " +
+        "tblproperties ('storage_handler'='org.apache.kudu.hive.KuduStorageHandler')",
+        "Unpartitioned Kudu tables are inefficient for large data sizes.");
+    // Invalid value for number of replicas
+    AnalysisError("create table t (x int primary key) stored as kudu tblproperties (" +
+        "'kudu.num_tablet_replicas'='1.1')",
+        "Table property 'kudu.num_tablet_replicas' must be an integer.");
+    // Don't allow caching
+    AnalysisError("create table tab (x int primary key) stored as kudu cached in " +
+        "'testPool'", "A Kudu table cannot be cached in HDFS.");
+    // LOCATION cannot be used with Kudu tables
+    AnalysisError("create table tab (a int primary key) partition by hash (a) " +
+        "partitions 3 stored as kudu location '/test-warehouse/'",
+        "LOCATION cannot be specified for a Kudu table.");
+    // Creating unpartitioned table results in a warning.
+    AnalyzesOk("create table tab (a int, primary key (a)) stored as kudu",
+        "Unpartitioned Kudu tables are inefficient for large data sizes.");
+    AnalysisError("create table tab (a int) stored as kudu",
+        "A primary key is required for a Kudu table.");
+    // Using ROW FORMAT with a Kudu table
+    AnalysisError("create table tab (x int primary key) " +
+        "row format delimited escaped by 'X' stored as kudu",
+        "ROW FORMAT cannot be specified for file format KUDU.");
+    // Using PARTITIONED BY with a Kudu table
+    AnalysisError("create table tab (x int primary key) " +
+        "partitioned by (y int) stored as kudu", "PARTITIONED BY cannot be used " +
+        "in Kudu tables.");
+    // Multi-column range partitions
+    AnalyzesOk("create table tab (a bigint, b tinyint, c double, primary key(a, b)) " +
+        "partition by range(a, b) (partition (0, 0) < values <= (1, 1)) stored as kudu");
+    AnalysisError("create table tab (a bigint, b tinyint, c double, primary key(a, b)) " +
+        "partition by range(a, b) (partition values <= (1, 'b')) stored as kudu",
+        "Range partition value 'b' (type: STRING) is not type compatible with " +
+        "partitioning column 'b' (type: TINYINT)");
+    AnalysisError("create table tab (a bigint, b tinyint, c double, primary key(a, b)) " +
+        "partition by range(a, b) (partition 0 < values <= 1) stored as kudu",
+        "Number of specified range partition values is different than the number of " +
+        "partitioning columns: (1 vs 2). Range partition: 'PARTITION 0 < VALUES <= 1'");
+
+
+    // Test unsupported Kudu types
+    List<String> unsupportedTypes = Lists.newArrayList("VARCHAR(20)", "CHAR(20)",
+        "STRUCT<f1:INT,f2:STRING>", "ARRAY<INT>", "MAP<STRING,STRING>");
+    for (String t: unsupportedTypes) {
+      String expectedError = String.format(
+          "Cannot create table 'tab': Type %s is not supported in Kudu", t);
+
+      // Unsupported type is PK and partition col
+      String stmt = String.format("create table tab (x %s primary key) " +
+          "partition by hash(x) partitions 3 stored as kudu", t);
+      AnalysisError(stmt, expectedError);
+
+      // Unsupported type is not PK/partition col
+      stmt = String.format("create table tab (x int primary key, y %s) " +
+          "partition by hash(x) partitions 3 stored as kudu", t);
+      AnalysisError(stmt, expectedError);
+    }
+
+    // Test column options
+    String[] nullability = {"not null", "null", ""};
+    String[] defaultVal = {"default 10", ""};
+    String[] blockSize = {"block_size 4096", ""};
+    for (Encoding enc: Encoding.values()) {
+      for (CompressionAlgorithm comp: CompressionAlgorithm.values()) {
+        for (String nul: nullability) {
+          for (String def: defaultVal) {
+            for (String block: blockSize) {
+              // Test analysis for a non-key column
+              AnalyzesOk(String.format("create table tab (x int primary key " +
+                  "not null encoding %s compression %s %s %s, y int encoding %s " +
+                  "compression %s %s %s %s) partition by hash (x) " +
+                  "partitions 3 stored as kudu", enc, comp, def, block, enc,
+                  comp, def, nul, block));
+
+              // For a key column
+              String createTblStr = String.format("create table tab (x int primary key " +
+                  "%s encoding %s compression %s %s %s) partition by hash (x) " +
+                  "partitions 3 stored as kudu", nul, enc, comp, def, block);
+              if (nul.equals("null")) {
+                AnalysisError(createTblStr, "Primary key columns cannot be nullable");
+              } else {
+                AnalyzesOk(createTblStr);
+              }
+            }
+          }
+        }
+      }
+    }
+    // Use NULL as default values
+    AnalyzesOk("create table tab (x int primary key, i1 tinyint default null, " +
+        "i2 smallint default null, i3 int default null, i4 bigint default null, " +
+        "vals string default null, valf float default null, vald double default null, " +
+        "valb boolean default null, valdec decimal(10, 5) default null) " +
+        "partition by hash (x) partitions 3 stored as kudu");
+    // Use NULL as a default value on a non-nullable column
+    AnalysisError("create table tab (x int primary key, y int not null default null) " +
+        "partition by hash (x) partitions 3 stored as kudu", "Default value of NULL " +
+        "not allowed on non-nullable column: 'y'");
+    // Primary key specified using the PRIMARY KEY clause
+    AnalyzesOk("create table tab (x int not null encoding plain_encoding " +
+        "compression snappy block_size 1, y int null encoding rle compression lz4 " +
+        "default 1, primary key(x)) partition by hash (x) partitions 3 " +
+        "stored as kudu");
+    // Primary keys can't be null
+    AnalysisError("create table tab (x int primary key null, y int not null) " +
+        "partition by hash (x) partitions 3 stored as kudu", "Primary key columns " +
+        "cannot be nullable: x INT PRIMARY KEY NULL");
+    AnalysisError("create table tab (x int not null, y int null, primary key (x, y)) " +
+        "partition by hash (x) partitions 3 stored as kudu", "Primary key columns " +
+        "cannot be nullable: y INT NULL");
+    // Unsupported encoding value
+    AnalysisError("create table tab (x int primary key, y int encoding invalid_enc) " +
+        "partition by hash (x) partitions 3 stored as kudu", "Unsupported encoding " +
+        "value 'INVALID_ENC'. Supported encoding values are: " +
+        Joiner.on(", ").join(Encoding.values()));
+    // Unsupported compression algorithm
+    AnalysisError("create table tab (x int primary key, y int compression " +
+        "invalid_comp) partition by hash (x) partitions 3 stored as kudu",
+        "Unsupported compression algorithm 'INVALID_COMP'. Supported compression " +
+        "algorithms are: " + Joiner.on(", ").join(CompressionAlgorithm.values()));
+    // Default values
+    AnalyzesOk("create table tab (i1 tinyint default 1, i2 smallint default 10, " +
+        "i3 int default 100, i4 bigint default 1000, vals string default 'test', " +
+        "valf float default cast(1.2 as float), vald double default " +
+        "cast(3.1452 as double), valb boolean default true, " +
+        "valdec decimal(10, 5) default 3.14159, " +
+        "primary key (i1, i2, i3, i4, vals)) partition by hash (i1) partitions 3 " +
+        "stored as kudu");
+    AnalyzesOk("create table tab (i int primary key default 1+1+1) " +
+        "partition by hash (i) partitions 3 stored as kudu");
+    AnalyzesOk("create table tab (i int primary key default factorial(5)) " +
+        "partition by hash (i) partitions 3 stored as kudu");
+    AnalyzesOk("create table tab (i int primary key, x int null default " +
+        "isnull(null, null)) partition by hash (i) partitions 3 stored as kudu");
+    // Invalid default values
+    AnalysisError("create table tab (i int primary key default 'string_val') " +
+        "partition by hash (i) partitions 3 stored as kudu", "Default value " +
+        "'string_val' (type: STRING) is not compatible with column 'i' (type: INT).");
+    AnalysisError("create table tab (i int primary key, x int default 1.1) " +
+        "partition by hash (i) partitions 3 stored as kudu",
+        "Default value 1.1 (type: DECIMAL(2,1)) is not compatible with column " +
+        "'x' (type: INT).");
+    AnalysisError("create table tab (i tinyint primary key default 128) " +
+        "partition by hash (i) partitions 3 stored as kudu", "Default value " +
+        "128 (type: SMALLINT) is not compatible with column 'i' (type: TINYINT).");
+    AnalysisError("create table tab (i int primary key default isnull(null, null)) " +
+        "partition by hash (i) partitions 3 stored as kudu", "Default value of " +
+        "NULL not allowed on non-nullable column: 'i'");
+    AnalysisError("create table tab (i int primary key, x int not null " +
+        "default isnull(null, null)) partition by hash (i) partitions 3 " +
+        "stored as kudu", "Default value of NULL not allowed on non-nullable column: " +
+        "'x'");
+    // Invalid block_size values
+    AnalysisError("create table tab (i int primary key block_size 1.1) " +
+        "partition by hash (i) partitions 3 stored as kudu", "Invalid value " +
+        "for BLOCK_SIZE: 1.1. A positive INTEGER value is expected.");
+    AnalysisError("create table tab (i int primary key block_size 'val') " +
+        "partition by hash (i) partitions 3 stored as kudu", "Invalid value " +
+        "for BLOCK_SIZE: 'val'. A positive INTEGER value is expected.");
+
+    // Sort columns are not supported for Kudu tables.
+    AnalysisError("create table tab (i int, x int primary key) partition by hash(x) " +
+        "partitions 8 sort by(i) stored as kudu", "SORT BY is not supported for Kudu " +
+        "tables.");
+
+    // Range partitions with TIMESTAMP
+    AnalyzesOk("create table ts_ranges (ts timestamp primary key) " +
+        "partition by range (partition cast('2009-01-01 00:00:00' as timestamp) " +
+        "<= VALUES < '2009-01-02 00:00:00') stored as kudu");
+    AnalyzesOk("create table ts_ranges (ts timestamp primary key) " +
+        "partition by range (partition value = cast('2009-01-01 00:00:00' as timestamp" +
+        ")) stored as kudu");
+    AnalyzesOk("create table ts_ranges (ts timestamp primary key) " +
+        "partition by range (partition value = '2009-01-01 00:00:00') " +
+        "stored as kudu");
+    AnalyzesOk("create table ts_ranges (id int, ts timestamp, primary key(id, ts))" +
+        "partition by range (partition value = (9, cast('2009-01-01 00:00:00' as " +
+        "timestamp))) stored as kudu");
+    AnalyzesOk("create table ts_ranges (id int, ts timestamp, primary key(id, ts))" +
+        "partition by range (partition value = (9, '2009-01-01 00:00:00')) " +
+        "stored as kudu");
+    AnalysisError("create table ts_ranges (ts timestamp primary key, i int)" +
+        "partition by range (partition '2009-01-01 00:00:00' <= VALUES < " +
+        "'NOT A TIMESTAMP') stored as kudu",
+        "Range partition value 'NOT A TIMESTAMP' cannot be cast to target TIMESTAMP " +
+        "partitioning column.");
+    AnalysisError("create table ts_ranges (ts timestamp primary key, i int)" +
+        "partition by range (partition 100 <= VALUES < 200) stored as kudu",
+        "Range partition value 100 (type: TINYINT) is not type " +
+        "compatible with partitioning column 'ts' (type: TIMESTAMP).");
+
+    // TIMESTAMP columns with default values
+    AnalyzesOk("create table tdefault (id int primary key, ts timestamp default now())" +
+        "partition by hash(id) partitions 3 stored as kudu");
+    AnalyzesOk("create table tdefault (id int primary key, ts timestamp default " +
+        "unix_micros_to_utc_timestamp(1230768000000000)) partition by hash(id) " +
+        "partitions 3 stored as kudu");
+    AnalyzesOk("create table tdefault (id int primary key, " +
+        "ts timestamp not null default '2009-01-01 00:00:00') " +
+        "partition by hash(id) partitions 3 stored as kudu");
+    AnalyzesOk("create table tdefault (id int primary key, " +
+        "ts timestamp not null default cast('2009-01-01 00:00:00' as timestamp)) " +
+        "partition by hash(id) partitions 3 stored as kudu");
+    AnalysisError("create table tdefault (id int primary key, ts timestamp " +
+        "default null) partition by hash(id) partitions 3 stored as kudu",
+        "NULL cannot be cast to a TIMESTAMP literal.");
+    AnalysisError("create table tdefault (id int primary key, " +
+        "ts timestamp not null default cast('00:00:00' as timestamp)) " +
+        "partition by hash(id) partitions 3 stored as kudu",
+        "CAST('00:00:00' AS TIMESTAMP) cannot be cast to a TIMESTAMP literal.");
+    AnalysisError("create table tdefault (id int primary key, " +
+        "ts timestamp not null default '2009-1 foo') " +
+        "partition by hash(id) partitions 3 stored as kudu",
+        "String '2009-1 foo' cannot be cast to a TIMESTAMP literal.");
+
+    // Test column comments.
+    AnalyzesOk("create table tab (x int comment 'x', y int comment 'y', " +
+        "primary key (x, y)) stored as kudu");
+
+    // Managed table is not allowed to set table property 'kudu.table_id'.
+    AnalysisError("create table tab (x int primary key) partition by hash(x) " +
+        "partitions 8 stored as kudu tblproperties ('kudu.table_id'='123456')",
+        String.format("Table property %s should not be specified when " +
+            "creating a Kudu table.", KuduTable.KEY_TABLE_ID));
+
+    // Kudu master address needs to be valid.
+    AnalysisError("create table tab (x int primary key) partition by " +
+        "hash (x) partitions 8 stored as kudu tblproperties " +
+        "('kudu.master_addresses' = 'foo')",
+        "Cannot analyze Kudu table 'tab': Error determining if Kudu's integration " +
+        "with the Hive Metastore is enabled");
+  }
+
+  @Test
+  public void TestCreateExternalKuduTable() {
+    TestUtils.assumeKuduIsSupported();
+    final String kuduMasters = catalog_.getDefaultKuduMasterHosts();
+    AnalyzesOk("create external table t stored as kudu " +
+        "tblproperties('kudu.table_name'='t')");
+    // Use all allowed optional table props.
+    AnalyzesOk(String.format("create external table t stored as kudu tblproperties (" +
+        "'kudu.table_name'='tab', 'kudu.master_addresses' = '%s', " +
+        "'storage_handler'='org.apache.kudu.hive.KuduStorageHandler')", kuduMasters));
+    // Kudu table should be specified using the STORED AS KUDU syntax.
+    AnalysisError("create external table t tblproperties (" +
+        "'storage_handler'='com.cloudera.kudu.hive.KuduStorageHandler'," +
+        "'kudu.table_name'='t')",
+        CreateTableStmt.KUDU_STORAGE_HANDLER_ERROR_MESSAGE);
+    AnalysisError("create external table t tblproperties (" +
+        "'storage_handler'='org.apache.kudu.hive.KuduStorageHandler'," +
+        "'kudu.table_name'='t')",
+        CreateTableStmt.KUDU_STORAGE_HANDLER_ERROR_MESSAGE);
+    // Columns should not be specified in an external Kudu table
+    AnalysisError("create external table t (x int) stored as kudu " +
+        "tblproperties('kudu.table_name'='t')",
+        "Columns cannot be specified with an external Kudu table.");
+    // Primary keys cannot be specified in an external Kudu table
+    AnalysisError("create external table t (x int primary key) stored as kudu " +
+        "tblproperties('kudu.table_name'='t')", "Primary keys cannot be specified " +
+        "for an external Kudu table");
+    // Invalid syntax for specifying a Kudu table
+    AnalysisError("create external table t (x int) stored as parquet tblproperties (" +
+        "'storage_handler'='com.cloudera.kudu.hive.KuduStorageHandler'," +
+        "'kudu.table_name'='t')", CreateTableStmt.KUDU_STORAGE_HANDLER_ERROR_MESSAGE);
+    AnalysisError("create external table t (x int) stored as parquet tblproperties (" +
+        "'storage_handler'='org.apache.kudu.hive.KuduStorageHandler'," +
+        "'kudu.table_name'='t')", CreateTableStmt.KUDU_STORAGE_HANDLER_ERROR_MESSAGE);
+    AnalysisError("create external table t stored as kudu tblproperties (" +
+        "'storage_handler'='foo', 'kudu.table_name'='t')",
+        "Invalid storage handler specified for Kudu table: foo");
+    // Cannot specify the number of replicas for external Kudu tables
+    AnalysisError("create external table tab (x int) stored as kudu " +
+        "tblproperties ('kudu.num_tablet_replicas' = '1', " +
+        "'kudu.table_name'='tab')",
+        "Table property 'kudu.num_tablet_replicas' cannot be used with an external " +
+        "Kudu table.");
+    // Don't allow caching
+    AnalysisError("create external table t stored as kudu cached in 'testPool' " +
+        "tblproperties('kudu.table_name'='t')",
+        "A Kudu table cannot be cached in HDFS.");
+    // LOCATION cannot be used for a Kudu table
+    AnalysisError("create external table t stored as kudu " +
+        "location '/test-warehouse' tblproperties('kudu.table_name'='t')",
+        "LOCATION cannot be specified for a Kudu table.");
+    // External table is not allowed to set table property 'kudu.table_id'.
+    AnalysisError("create external table t stored as kudu tblproperties " +
+        "('kudu.table_name'='t', 'kudu.table_id'='123456')",
+        String.format("Table property %s should not be specified when creating " +
+            "a Kudu table.", KuduTable.KEY_TABLE_ID));
+  }
+}
diff --git a/fe/src/test/java/org/apache/impala/analysis/AuditingKuduTest.java b/fe/src/test/java/org/apache/impala/analysis/AuditingKuduTest.java
new file mode 100644
index 0000000..5ac049c
--- /dev/null
+++ b/fe/src/test/java/org/apache/impala/analysis/AuditingKuduTest.java
@@ -0,0 +1,131 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.impala.analysis;
+
+import com.google.common.collect.Sets;
+
+import java.util.Set;
+
+import org.apache.impala.authorization.AuthorizationException;
+import org.apache.impala.common.AnalysisException;
+import org.apache.impala.common.FrontendTestBase;
+import org.apache.impala.testutil.TestUtils;
+import org.apache.impala.thrift.TAccessEvent;
+import org.apache.impala.thrift.TCatalogObjectType;
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Tests that auditing access events are properly captured during analysis for all
+ * statement types on Kudu tables.
+ */
+public class AuditingKuduTest extends FrontendTestBase {
+  @Test
+  public void TestKuduStatements() throws AuthorizationException, AnalysisException {
+    TestUtils.assumeKuduIsSupported();
+    // Select
+    Set<TAccessEvent> accessEvents =
+        AnalyzeAccessEvents("select * from functional_kudu.testtbl");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "SELECT")));
+
+    // Insert
+    accessEvents = AnalyzeAccessEvents(
+        "insert into functional_kudu.testtbl (id) select id from " +
+        "functional_kudu.alltypes");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(
+        new TAccessEvent("functional_kudu.alltypes",
+                         TCatalogObjectType.TABLE, "SELECT"),
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "INSERT")));
+
+    // Upsert
+    accessEvents = AnalyzeAccessEvents(
+        "upsert into functional_kudu.testtbl (id) select id from " +
+        "functional_kudu.alltypes");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(
+        new TAccessEvent("functional_kudu.alltypes",
+                         TCatalogObjectType.TABLE, "SELECT"),
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "ALL")));
+
+    // Delete
+    accessEvents = AnalyzeAccessEvents(
+        "delete from functional_kudu.testtbl where id = 1");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "SELECT"),
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "ALL")));
+
+    // Delete using a complex query
+    accessEvents = AnalyzeAccessEvents(
+        "delete c from functional_kudu.testtbl c, functional_kudu.alltypes s where " +
+        "c.id = s.id and s.int_col < 10");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "SELECT"),
+        new TAccessEvent("functional_kudu.alltypes",
+                         TCatalogObjectType.TABLE, "SELECT"),
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "ALL")));
+
+    // Update
+    accessEvents = AnalyzeAccessEvents(
+        "update functional_kudu.testtbl set name = 'test' where id < 10");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "SELECT"),
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "ALL")));
+
+    // Drop table
+    accessEvents = AnalyzeAccessEvents("drop table functional_kudu.testtbl");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(new TAccessEvent(
+        "functional_kudu.testtbl", TCatalogObjectType.TABLE, "DROP")));
+
+    // Drop table if exists
+    accessEvents = AnalyzeAccessEvents("drop table if exists functional_kudu.testtbl");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(new TAccessEvent(
+            "functional_kudu.testtbl", TCatalogObjectType.TABLE, "DROP")));
+
+    // Show create table
+    accessEvents = AnalyzeAccessEvents("show create table functional_kudu.testtbl");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(new TAccessEvent(
+        "functional_kudu.testtbl", TCatalogObjectType.TABLE, "VIEW_METADATA")));
+
+    // Compute stats
+    accessEvents = AnalyzeAccessEvents("compute stats functional_kudu.testtbl");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "ALTER"),
+        new TAccessEvent("functional_kudu.testtbl",
+                         TCatalogObjectType.TABLE, "SELECT")));
+
+    // Describe
+    accessEvents = AnalyzeAccessEvents("describe functional_kudu.testtbl");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(new TAccessEvent(
+        "functional_kudu.testtbl", TCatalogObjectType.TABLE, "ANY")));
+
+    // Describe formatted
+    accessEvents = AnalyzeAccessEvents("describe formatted functional_kudu.testtbl");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(new TAccessEvent(
+        "functional_kudu.testtbl", TCatalogObjectType.TABLE, "ANY")));
+  }
+}
diff --git a/fe/src/test/java/org/apache/impala/analysis/AuditingTest.java b/fe/src/test/java/org/apache/impala/analysis/AuditingTest.java
index 1107389..c4278ab 100644
--- a/fe/src/test/java/org/apache/impala/analysis/AuditingTest.java
+++ b/fe/src/test/java/org/apache/impala/analysis/AuditingTest.java
@@ -21,14 +21,12 @@ import java.util.Set;
 
 import org.apache.impala.authorization.AuthorizationFactory;
 import org.apache.impala.authorization.AuthorizationException;
-import org.apache.impala.catalog.Catalog;
 import org.apache.impala.catalog.ImpaladCatalog;
 import org.apache.impala.common.AnalysisException;
 import org.apache.impala.common.FrontendTestBase;
 import org.apache.impala.common.ImpalaException;
 import org.apache.impala.service.Frontend;
 import org.apache.impala.testutil.ImpaladTestCatalog;
-import org.apache.impala.testutil.TestUtils;
 import org.apache.impala.thrift.TAccessEvent;
 import org.apache.impala.thrift.TCatalogObjectType;
 import org.junit.Assert;
@@ -62,10 +60,14 @@ public class AuditingTest extends FrontendTestBase {
     // Select from a view that contains a subquery.
     accessEvents = AnalyzeAccessEvents("select * from functional_rc.subquery_view");
     Assert.assertEquals(accessEvents, Sets.newHashSet(
-        new TAccessEvent("functional_rc.alltypessmall", TCatalogObjectType.TABLE, "SELECT"),
-        new TAccessEvent("functional_rc.alltypes", TCatalogObjectType.TABLE, "SELECT"),
-        new TAccessEvent("functional_rc.subquery_view", TCatalogObjectType.VIEW, "SELECT"),
-        new TAccessEvent("_impala_builtins", TCatalogObjectType.DATABASE, "VIEW_METADATA")
+        new TAccessEvent("functional_rc.alltypessmall", TCatalogObjectType.TABLE,
+            "SELECT"),
+        new TAccessEvent("functional_rc.alltypes", TCatalogObjectType.TABLE,
+            "SELECT"),
+        new TAccessEvent("functional_rc.subquery_view", TCatalogObjectType.VIEW,
+            "SELECT"),
+        new TAccessEvent("_impala_builtins", TCatalogObjectType.DATABASE,
+            "VIEW_METADATA")
         ));
 
     // Select from an inline view.
@@ -194,6 +196,13 @@ public class AuditingTest extends FrontendTestBase {
         + "'/test-warehouse/schemas/zipcode_incomes.parquet'");
     Assert.assertEquals(accessEvents, Sets.newHashSet(
         new TAccessEvent("tpch.new_table", TCatalogObjectType.TABLE, "CREATE")));
+
+    accessEvents = AnalyzeAccessEvents(
+        "create table tpch.new_table as select * from functional.alltypesagg");
+    Assert.assertEquals(accessEvents, Sets.newHashSet(
+        new TAccessEvent("tpch", TCatalogObjectType.DATABASE, "ANY"),
+        new TAccessEvent("functional.alltypesagg", TCatalogObjectType.TABLE, "SELECT"),
+        new TAccessEvent("tpch.new_table", TCatalogObjectType.TABLE, "CREATE")));
   }
 
   @Test
@@ -390,95 +399,4 @@ public class AuditingTest extends FrontendTestBase {
         new TAccessEvent("_impala_builtins", TCatalogObjectType.DATABASE, "VIEW_METADATA"),
         new TAccessEvent("functional.alltypesagg", TCatalogObjectType.TABLE, "SELECT")));
   }
-
-  @Test
-  public void TestKuduStatements() throws AuthorizationException, AnalysisException {
-    TestUtils.assumeKuduIsSupported();
-    // Select
-    Set<TAccessEvent> accessEvents =
-        AnalyzeAccessEvents("select * from functional_kudu.testtbl");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "SELECT")));
-
-    // Insert
-    accessEvents = AnalyzeAccessEvents(
-        "insert into functional_kudu.testtbl (id) select id from " +
-        "functional_kudu.alltypes");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(
-        new TAccessEvent("functional_kudu.alltypes", TCatalogObjectType.TABLE, "SELECT"),
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "INSERT")));
-
-    // Upsert
-    accessEvents = AnalyzeAccessEvents(
-        "upsert into functional_kudu.testtbl (id) select id from " +
-        "functional_kudu.alltypes");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(
-        new TAccessEvent("functional_kudu.alltypes", TCatalogObjectType.TABLE, "SELECT"),
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "ALL")));
-
-    // Delete
-    accessEvents = AnalyzeAccessEvents(
-        "delete from functional_kudu.testtbl where id = 1");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "SELECT"),
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "ALL")));
-
-    // Delete using a complex query
-    accessEvents = AnalyzeAccessEvents(
-        "delete c from functional_kudu.testtbl c, functional_kudu.alltypes s where " +
-        "c.id = s.id and s.int_col < 10");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "SELECT"),
-        new TAccessEvent("functional_kudu.alltypes", TCatalogObjectType.TABLE, "SELECT"),
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "ALL")));
-
-    // Update
-    accessEvents = AnalyzeAccessEvents(
-        "update functional_kudu.testtbl set name = 'test' where id < 10");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "SELECT"),
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "ALL")));
-
-    // Drop table
-    accessEvents = AnalyzeAccessEvents("drop table functional_kudu.testtbl");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(new TAccessEvent(
-        "functional_kudu.testtbl", TCatalogObjectType.TABLE, "DROP")));
-
-    // Show create table
-    accessEvents = AnalyzeAccessEvents("show create table functional_kudu.testtbl");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(new TAccessEvent(
-        "functional_kudu.testtbl", TCatalogObjectType.TABLE, "VIEW_METADATA")));
-
-    // Compute stats
-    accessEvents = AnalyzeAccessEvents("compute stats functional_kudu.testtbl");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "ALTER"),
-        new TAccessEvent("functional_kudu.testtbl", TCatalogObjectType.TABLE, "SELECT")));
-
-    // Describe
-    accessEvents = AnalyzeAccessEvents("describe functional_kudu.testtbl");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(new TAccessEvent(
-        "functional_kudu.testtbl", TCatalogObjectType.TABLE, "ANY")));
-
-    // Describe formatted
-    accessEvents = AnalyzeAccessEvents("describe formatted functional_kudu.testtbl");
-    Assert.assertEquals(accessEvents, Sets.newHashSet(new TAccessEvent(
-        "functional_kudu.testtbl", TCatalogObjectType.TABLE, "ANY")));
-  }
-
-  /**
-   * Analyzes the given statement and returns the set of TAccessEvents
-   * that were captured as part of analysis.
-   */
-  private Set<TAccessEvent> AnalyzeAccessEvents(String stmt)
-      throws AuthorizationException, AnalysisException {
-    return AnalyzeAccessEvents(stmt, Catalog.DEFAULT_DB);
-  }
-
-  private Set<TAccessEvent> AnalyzeAccessEvents(String stmt, String db)
-      throws AuthorizationException, AnalysisException {
-    AnalysisContext ctx = createAnalysisCtx(db);
-    AnalyzesOk(stmt, ctx);
-    return ctx.getAnalyzer().getAccessEvents();
-  }
 }
diff --git a/fe/src/test/java/org/apache/impala/analysis/ToSqlTest.java b/fe/src/test/java/org/apache/impala/analysis/ToSqlTest.java
index caf06ee..78eb8e6 100644
--- a/fe/src/test/java/org/apache/impala/analysis/ToSqlTest.java
+++ b/fe/src/test/java/org/apache/impala/analysis/ToSqlTest.java
@@ -330,14 +330,16 @@ public class ToSqlTest extends FrontendTestBase {
         "CREATE TABLE default.p ( a INT, b INT ) PARTITIONED BY ( day STRING ) " +
         "SORT BY ( a, b ) STORED AS TEXTFILE" , true);
     // Kudu table with a TIMESTAMP column default value
-    testToSql("create table p (a bigint primary key, b timestamp default '1987-05-19') " +
-        "partition by hash(a) partitions 3 stored as kudu " +
-        "tblproperties ('kudu.master_addresses'='foo')",
+    String kuduMasters = catalog_.getDefaultKuduMasterHosts();
+    testToSql(String.format("create table p (a bigint primary key, " +
+        "b timestamp default '1987-05-19') partition by hash(a) partitions 3 " +
+        "stored as kudu tblproperties ('kudu.master_addresses'='%s')", kuduMasters),
         "default",
-        "CREATE TABLE default.p ( a BIGINT PRIMARY KEY, b TIMESTAMP " +
+        String.format("CREATE TABLE default.p ( a BIGINT PRIMARY KEY, b TIMESTAMP " +
         "DEFAULT '1987-05-19' ) PARTITION BY HASH (a) PARTITIONS 3 " +
-        "STORED AS KUDU TBLPROPERTIES ('kudu.master_addresses'='foo', " +
-        "'storage_handler'='com.cloudera.kudu.hive.KuduStorageHandler')", true);
+        "STORED AS KUDU TBLPROPERTIES ('kudu.master_addresses'='%s', " +
+        "'storage_handler'='org.apache.kudu.hive.KuduStorageHandler')", kuduMasters),
+        true);
   }
 
   @Test
@@ -361,16 +363,17 @@ public class ToSqlTest extends FrontendTestBase {
         "STORED AS TEXTFILE AS SELECT double_col, string_col, int_col FROM " +
         "functional.alltypes", true);
     // Kudu table with multiple partition params
-    testToSql("create table p primary key (a,b) partition by hash(a) partitions 3, " +
-        "range (b) (partition value = 1) stored as kudu " +
-        "tblproperties ('kudu.master_addresses'='foo') as select int_col a, bigint_col " +
-        "b from functional.alltypes",
+    String kuduMasters = catalog_.getDefaultKuduMasterHosts();
+    testToSql(String.format("create table p primary key (a,b) " +
+        "partition by hash(a) partitions 3, range (b) (partition value = 1) " +
+        "stored as kudu tblproperties ('kudu.master_addresses'='%s') as select " +
+        "int_col a, bigint_col b from functional.alltypes", kuduMasters),
         "default",
-        "CREATE TABLE default.p PRIMARY KEY (a, b) PARTITION BY HASH (a) PARTITIONS 3, " +
-        "RANGE (b) (PARTITION VALUE = 1) STORED AS KUDU TBLPROPERTIES " +
-        "('kudu.master_addresses'='foo', " +
-        "'storage_handler'='com.cloudera.kudu.hive.KuduStorageHandler') AS " +
-        "SELECT int_col a, bigint_col b FROM functional.alltypes", true);
+        String.format("CREATE TABLE default.p PRIMARY KEY (a, b) " +
+        "PARTITION BY HASH (a) PARTITIONS 3, RANGE (b) (PARTITION VALUE = 1) " +
+        "STORED AS KUDU TBLPROPERTIES ('kudu.master_addresses'='%s', " +
+        "'storage_handler'='org.apache.kudu.hive.KuduStorageHandler') AS SELECT " +
+        "int_col a, bigint_col b FROM functional.alltypes", kuduMasters), true);
   }
 
   @Test
diff --git a/fe/src/test/java/org/apache/impala/common/FrontendTestBase.java b/fe/src/test/java/org/apache/impala/common/FrontendTestBase.java
index 66a3393..b08ecf1 100644
--- a/fe/src/test/java/org/apache/impala/common/FrontendTestBase.java
+++ b/fe/src/test/java/org/apache/impala/common/FrontendTestBase.java
@@ -55,6 +55,7 @@ import org.apache.impala.catalog.Type;
 import org.apache.impala.service.FeCatalogManager;
 import org.apache.impala.service.Frontend;
 import org.apache.impala.testutil.ImpaladTestCatalog;
+import org.apache.impala.thrift.TAccessEvent;
 import org.apache.impala.thrift.TQueryOptions;
 import org.junit.Assert;
 
@@ -304,6 +305,22 @@ public class FrontendTestBase extends AbstractFrontendTest {
   }
 
   /**
+   * Analyzes the given statement and returns the set of TAccessEvents
+   * that were captured as part of analysis.
+   */
+  protected Set<TAccessEvent> AnalyzeAccessEvents(String stmt)
+      throws AuthorizationException, AnalysisException {
+    return AnalyzeAccessEvents(stmt, Catalog.DEFAULT_DB);
+  }
+
+  protected Set<TAccessEvent> AnalyzeAccessEvents(String stmt, String db)
+      throws AuthorizationException, AnalysisException {
+    AnalysisContext ctx = createAnalysisCtx(db);
+    AnalyzesOk(stmt, ctx);
+    return ctx.getAnalyzer().getAccessEvents();
+  }
+
+  /**
    * Creates a dummy {@link AuthorizationFactory} with authorization enabled, but does
    * not do the actual authorization.
    */
diff --git a/fe/src/test/java/org/apache/impala/customservice/CustomServiceRunner.java b/fe/src/test/java/org/apache/impala/customservice/CustomServiceRunner.java
new file mode 100644
index 0000000..8d09389
--- /dev/null
+++ b/fe/src/test/java/org/apache/impala/customservice/CustomServiceRunner.java
@@ -0,0 +1,43 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.impala.customservice;
+
+import java.io.IOException;
+
+/**
+ * Runs a minicluster component with custom flags.
+ *
+ * To prevent this from affecting other tests, tests in this package are selected by
+ * package name and run together with the Python custom cluster tests, so this class
+ * should not be used outside of this package.
+ */
+class CustomServiceRunner {
+
+  /**
+   * Restarts the specified minicluster component (e.g., kudu) in a separate process
+   * with the specified environment.
+   */
+  public static int RestartMiniclusterComponent(String component,
+      String[] envp) throws IOException, InterruptedException {
+    String command = System.getenv().get("IMPALA_HOME") +
+        "/testdata/cluster/admin restart " + component;
+    Process p = Runtime.getRuntime().exec(command, envp);
+    p.waitFor();
+    return p.exitValue();
+  }
+}
diff --git a/fe/src/test/java/org/apache/impala/customservice/KuduHMSIntegrationTest.java b/fe/src/test/java/org/apache/impala/customservice/KuduHMSIntegrationTest.java
new file mode 100644
index 0000000..2019c23
--- /dev/null
+++ b/fe/src/test/java/org/apache/impala/customservice/KuduHMSIntegrationTest.java
@@ -0,0 +1,82 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.impala.customservice;
+
+import static org.junit.Assert.assertEquals;
+
+import org.apache.impala.analysis.AnalyzeKuduDDLTest;
+import org.apache.impala.analysis.AuditingKuduTest;
+import org.apache.impala.analysis.ParserTest;
+import org.apache.impala.analysis.ToSqlTest;
+import org.apache.impala.customservice.CustomServiceRunner;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.runner.RunWith;
+import org.junit.runners.Suite;
+import org.junit.runners.Suite.SuiteClasses;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Test suite for Kudu tables when the Hive Metastore (HMS) integration is enabled.
+ */
+@RunWith(Suite.class)
+@SuiteClasses({ AnalyzeKuduDDLTest.class, AuditingKuduTest.class,
+                ParserTest.class, ToSqlTest.class })
+public class KuduHMSIntegrationTest {
+  /**
+   * Restarts Kudu cluster with or without HMS Integration.
+   */
+  private static void restartKudu(boolean enableHMSIntegration)
+      throws Exception {
+    List<String> envp = getSystemEnv(enableHMSIntegration);
+    int exitVal = CustomServiceRunner.RestartMiniclusterComponent(
+        "kudu", envp.toArray(new String[envp.size()]));
+    assertEquals(0, exitVal);
+  }
+
+  /**
+   * Returns the system environment variables, setting IMPALA_KUDU_STARTUP_FLAGS
+   * if HMS integration should be enabled.
+   */
+  private static List<String> getSystemEnv(boolean enableHMSIntegration) {
+    List<String> envp = new ArrayList<>();
+    for (Map.Entry<String,String> entry : System.getenv().entrySet()) {
+      envp.add(entry.getKey() + "=" + entry.getValue());
+    }
+    if (enableHMSIntegration) {
+      final String hmsIntegrationEnv = String.format("IMPALA_KUDU_STARTUP_FLAGS=" +
+          "-hive_metastore_uris=thrift://%s:9083",
+          System.getenv("INTERNAL_LISTEN_HOST"));
+      envp.add(hmsIntegrationEnv);
+    }
+    return envp;
+  }
+
+  @BeforeClass
+  public static void setUp() throws Exception {
+    restartKudu(true);
+  }
+
+  @AfterClass
+  public static void cleanUp() throws Exception {
+    restartKudu(false);
+  }
+}
\ No newline at end of file
diff --git a/testdata/cluster/node_templates/common/etc/init.d/kudu-master b/testdata/cluster/node_templates/common/etc/init.d/kudu-master
index a0b4234..b733662 100755
--- a/testdata/cluster/node_templates/common/etc/init.d/kudu-master
+++ b/testdata/cluster/node_templates/common/etc/init.d/kudu-master
@@ -25,6 +25,9 @@ DIR=$(dirname $0)
 . "$DIR/kudu-common"   # Sets KUDU_COMMON_ARGS
 
 function start {
+  if [[ -n "$IMPALA_KUDU_STARTUP_FLAGS" ]]; then
+    KUDU_COMMON_ARGS+=(${IMPALA_KUDU_STARTUP_FLAGS})
+  fi
   do_start "$KUDU_BIN_DIR"/kudu-master \
       -flagfile "$NODE_DIR"/etc/kudu/master.conf \
       "${KUDU_COMMON_ARGS[@]}"
diff --git a/tests/query_test/test_kudu.py b/tests/query_test/test_kudu.py
index 216a41a..f00a6b5 100644
--- a/tests/query_test/test_kudu.py
+++ b/tests/query_test/test_kudu.py
@@ -556,7 +556,7 @@ class TestCreateExternalTable(KuduTestSuite):
         assert ["", "EXTERNAL", "TRUE"] in table_desc
         assert ["", "kudu.master_addresses", KUDU_MASTER_HOSTS] in table_desc
         assert ["", "kudu.table_name", kudu_table.name] in table_desc
-        assert ["", "storage_handler", "com.cloudera.kudu.hive.KuduStorageHandler"] \
+        assert ["", "storage_handler", "org.apache.kudu.hive.KuduStorageHandler"] \
             in table_desc
 
   def test_col_types(self, cursor, kudu_client):


[impala] 02/03: [DOCS] Added the section on object ownership

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tarmstrong pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit d9de31ea439847a9c011c3b3e14f38a8ee0f606b
Author: Alex Rodoni <ar...@cloudera.com>
AuthorDate: Thu May 30 12:18:43 2019 -0700

    [DOCS] Added the section on object ownership
    
    Change-Id: Iff48684f457ef19a27524adfbcc2ae5e098320a3
    Reviewed-on: http://gerrit.cloudera.org:8080/13478
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
    Reviewed-by: Alex Rodoni <ar...@cloudera.com>
---
 docs/topics/impala_authorization.xml | 39 ++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/docs/topics/impala_authorization.xml b/docs/topics/impala_authorization.xml
index c49fa97..f4be80b 100644
--- a/docs/topics/impala_authorization.xml
+++ b/docs/topics/impala_authorization.xml
@@ -106,6 +106,45 @@ under the License.
 
   </concept>
 
+  <concept id="object_ownership">
+
+    <title>Object Ownership in Sentry</title>
+
+    <conbody>
+
+      <p>
+        Impala supports ownership of databases, tables, and views. The
+        <codeph>CREATE</codeph> statements implicitly make the user running the statement
+        the owner of the object. An owner has the <codeph>OWNER</codeph> privilege on the
+        object if object ownership is enabled in Sentry. For example, if
+        <varname>User A</varname> creates a database, <varname>foo</varname>, via the
+        <codeph>CREATE DATABASE</codeph> statement, <varname>User A</varname> now owns the
+        <varname>foo</varname> database and is authorized to perform any operation on it.
+      </p>
+
+      <p>
+        The <codeph>OWNER</codeph> privilege is not grantable or revocable, whereas the
+        <codeph>ALL</codeph> privilege is explicitly granted via the <codeph>GRANT</codeph>
+        statement.
+      </p>
+
+      <p>
+        The object ownership feature is controlled by a Sentry configuration. The
+        <codeph>OWNER</codeph> privilege is granted only when the feature is enabled in
+        Sentry. When the feature is enabled, the object owner receives the
+        <codeph>OWNER</codeph> privilege, with or without the <codeph>GRANT
+        OPTION</codeph>, which is also controlled by the Sentry configuration.
+      </p>
+
+      <p>
+        Ownership can be transferred to another user or role via the <codeph>ALTER
+        DATABASE</codeph>, <codeph>ALTER TABLE</codeph>, or <codeph>ALTER VIEW</codeph>
+        statement with the <codeph>SET OWNER</codeph> clause.
+      </p>
+
+    </conbody>
+
+  </concept>
+
   <concept id="secure_startup">
 
     <title>Starting Impala with Sentry Authorization Enabled</title>
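
As a hedged illustration of the ownership transfer described in the new section
(the statement form follows the SET OWNER clause named above; the table and user
names are made up for the example):

    -- Illustrative only: transfer ownership of table foo.bar to user alice,
    -- assuming the Sentry object ownership feature is enabled.
    ALTER TABLE foo.bar SET OWNER USER alice;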


[impala] 03/03: IMPALA-8502: Bump CDH_BUILD_NUMBER and Kudu version

Posted by ta...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

tarmstrong pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit cd30949102425e28adadb51232653d910ac8422f
Author: Hao Hao <ha...@cloudera.com>
AuthorDate: Thu May 30 11:04:24 2019 -0700

    IMPALA-8502: Bump CDH_BUILD_NUMBER and Kudu version
    
    This bumps CDH_BUILD_NUMBER to 1137441 to bring in kudu-hive.jar, which
    was recently added to impala-minicluster-tarballs for testing Kudu tables
    with the Hive Metastore integration enabled. It also brings in the
    latest Kudu-side changes, which adjust external Kudu table handling.
    
    Change-Id: I4c9c7d264f08454d9da8ad2d06ead643214ee23e
    Reviewed-on: http://gerrit.cloudera.org:8080/13481
    Reviewed-by: Thomas Marshall <tm...@cloudera.com>
    Reviewed-by: Grant Henke <gr...@apache.org>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 bin/impala-config.sh | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/bin/impala-config.sh b/bin/impala-config.sh
index 5e60f45..f4b739b 100755
--- a/bin/impala-config.sh
+++ b/bin/impala-config.sh
@@ -68,7 +68,7 @@ fi
 # moving to a different build of the toolchain, e.g. when a version is bumped or a
 # compile option is changed. The build id can be found in the output of the toolchain
 # build jobs, it is constructed from the build number and toolchain git hash prefix.
-export IMPALA_TOOLCHAIN_BUILD_ID=40-193a30b3af
+export IMPALA_TOOLCHAIN_BUILD_ID=41-e711325d56
 # Versions of toolchain dependencies.
 # -----------------------------------
 export IMPALA_AVRO_VERSION=1.7.4-p4
@@ -160,7 +160,7 @@ fi
 : ${IMPALA_TOOLCHAIN_HOST:=native-toolchain.s3.amazonaws.com}
 export IMPALA_TOOLCHAIN_HOST
 export CDH_MAJOR_VERSION=6
-export CDH_BUILD_NUMBER=1055188
+export CDH_BUILD_NUMBER=1137441
 export CDP_BUILD_NUMBER=1056671
 export CDH_HADOOP_VERSION=3.0.0-cdh6.x-SNAPSHOT
 export CDP_HADOOP_VERSION=3.1.1.6.0.99.0-147
@@ -664,7 +664,7 @@ if $USE_CDH_KUDU; then
   export IMPALA_KUDU_VERSION=${IMPALA_KUDU_VERSION-"1.10.0-cdh6.x-SNAPSHOT"}
   export IMPALA_KUDU_HOME=${CDH_COMPONENTS_HOME}/kudu-$IMPALA_KUDU_VERSION
 else
-  export IMPALA_KUDU_VERSION=${IMPALA_KUDU_VERSION-"9ba901a"}
+  export IMPALA_KUDU_VERSION=${IMPALA_KUDU_VERSION-"84086fe"}
   export IMPALA_KUDU_HOME=${IMPALA_TOOLCHAIN}/kudu-$IMPALA_KUDU_VERSION
 fi