Posted to commits@impala.apache.org by jo...@apache.org on 2019/07/03 17:42:40 UTC

[impala] branch master updated (875961d -> 5fdef39)

This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git.


    from 875961d  IMPALA-8341: [DOCS] Added a note about the experimental feature
     new 252b117  IMPALA-8734: Reload table schema on TBLPROPERTIES change
     new 111035e  IMPALA-8673: Add query option to force plan hints for insert queries
     new 78c5523  build: use thin static archives
     new 9845924  [DOCS] A format fix in impala_txtfile
     new 5fdef39  IMPALA-8341: [DOCS] Added a note about the requirement for existing dirs

The 5 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CMakeLists.txt                                     |   9 ++
 be/src/service/query-options.cc                    |   4 +
 be/src/service/query-options.h                     |   4 +-
 common/thrift/ImpalaInternalService.thrift         |   3 +
 common/thrift/ImpalaService.thrift                 |   4 +
 docs/topics/impala_data_cache.xml                  |   5 +-
 docs/topics/impala_txtfile.xml                     |   5 +-
 .../apache/impala/analysis/AnalysisContext.java    |   6 +
 .../org/apache/impala/analysis/InsertStmt.java     |  25 ++-
 .../apache/impala/service/CatalogOpExecutor.java   |   1 +
 .../org/apache/impala/analysis/AnalyzeDDLTest.java |  20 +++
 .../apache/impala/analysis/AnalyzeStmtsTest.java   | 100 ++++++++++++
 .../org/apache/impala/planner/PlannerTest.java     |  56 +++++++
 .../insert-default-clustered-noshuffle.test        | 164 ++++++++++++++++++++
 .../insert-default-clustered-shuffle.test          | 111 +++++++++++++
 .../PlannerTest/insert-default-clustered.test      | 163 +++++++++++++++++++
 .../insert-default-noclustered-noshuffle.test      | 172 +++++++++++++++++++++
 .../insert-default-noclustered-shuffle.test        | 119 ++++++++++++++
 .../PlannerTest/insert-default-noclustered.test    | 119 ++++++++++++++
 .../PlannerTest/insert-default-noshuffle.test      | 172 +++++++++++++++++++++
 .../PlannerTest/insert-default-shuffle.test        | 121 +++++++++++++++
 tests/metadata/test_ddl.py                         |  19 +++
 22 files changed, 1393 insertions(+), 9 deletions(-)
 create mode 100644 testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered-noshuffle.test
 create mode 100644 testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered-shuffle.test
 create mode 100644 testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered.test
 create mode 100644 testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered-noshuffle.test
 create mode 100644 testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered-shuffle.test
 create mode 100644 testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered.test
 create mode 100644 testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noshuffle.test
 create mode 100644 testdata/workloads/functional-planner/queries/PlannerTest/insert-default-shuffle.test


[impala] 01/05: IMPALA-8734: Reload table schema on TBLPROPERTIES change

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 252b117954065db68eba746ca5afb0476c94313b
Author: Fredy Wijaya <fw...@cloudera.com>
AuthorDate: Tue Jul 2 11:41:27 2019 -0500

    IMPALA-8734: Reload table schema on TBLPROPERTIES change
    
    Prior to this patch, an INVALIDATE METADATA was required after
    altering TBLPROPERTIES for the changes to take effect. With this
    patch, the table schema is automatically reloaded on a TBLPROPERTIES
    change.
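    
    For example, after this patch a sequence like the following should
    take effect immediately, with no INVALIDATE METADATA in between (an
    illustrative sketch; the table name and property value are
    arbitrary):
    
      alter table t set tblproperties('serialization.null.format'='foo');
      select * from t;  -- reflects the new null format right away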
    
    Testing:
    - Added a new test in test_ddl.py
    - Ran test_ddl.py
    
    Change-Id: I2a43a962c2a456f3ddc078b2924f551fccb5c2ad
    Reviewed-on: http://gerrit.cloudera.org:8080/13785
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 .../org/apache/impala/service/CatalogOpExecutor.java  |  1 +
 tests/metadata/test_ddl.py                            | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java b/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
index fc27cdb..ae968a9 100644
--- a/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
+++ b/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
@@ -654,6 +654,7 @@ public class CatalogOpExecutor {
         case SET_TBL_PROPERTIES:
           alterTableSetTblProperties(tbl, params.getSet_tbl_properties_params(),
               numUpdatedPartitions);
+          reloadTableSchema = true;
           if (params.getSet_tbl_properties_params().isSetPartition_set()) {
             addSummary(response,
                 "Updated " + numUpdatedPartitions.getRef() + " partition(s).");
diff --git a/tests/metadata/test_ddl.py b/tests/metadata/test_ddl.py
index 5266718..dce0b69 100644
--- a/tests/metadata/test_ddl.py
+++ b/tests/metadata/test_ddl.py
@@ -691,6 +691,25 @@ class TestDdlStatements(TestDdlBase):
     assert properties['p2'] == 'val3'
     assert properties[''] == ''
 
+  def test_alter_tbl_properties_reload(self, vector, unique_database):
+    # IMPALA-8734: Force a table schema reload when setting table properties.
+    tbl_name = "test_tbl"
+    self.execute_query_expect_success(self.client, "create table {0}.{1} (c1 string)"
+                                      .format(unique_database, tbl_name))
+    self.filesystem_client.create_file("test-warehouse/{0}.db/{1}/f".
+                                       format(unique_database, tbl_name),
+                                       file_data="\nfoo\n")
+    self.execute_query_expect_success(self.client,
+                                      "alter table {0}.{1} set tblproperties"
+                                      "('serialization.null.format'='foo')"
+                                      .format(unique_database, tbl_name))
+    result = self.execute_query_expect_success(self.client,
+                                               "select * from {0}.{1}"
+                                               .format(unique_database, tbl_name))
+    assert len(result.data) == 2
+    assert result.data[0] == ''
+    assert result.data[1] == 'NULL'
+
   @UniqueDatabase.parametrize(sync_ddl=True)
   def test_partition_ddl_predicates(self, vector, unique_database):
     self.run_test_case('QueryTest/partition-ddl-predicates-all-fs', vector,


[impala] 03/05: build: use thin static archives

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 78c55230288f8874bcd16454eb3c55277211719a
Author: Todd Lipcon <to...@apache.org>
AuthorDate: Mon Jul 1 16:54:37 2019 -0700

    build: use thin static archives
    
    This enables thin static archives for our internal libraries during the
    build. This makes linking much faster since the static archives just
    point to object files instead of copying them.
    
    This reduces the size of intermediate '.a' files for my debug build from
    about 1.4GB to 58MB.
    
    Incremental rebuilds are slightly faster, though the difference may
    be within the noise (the improvement likely depends on how much RAM
    is available for buffer-caching the IO):
    
    Without patch incremental build of catalogd after modifying CatalogObjects.thrift:
    real    0m53.433s
    user    7m6.400s
    sys     1m21.610s
    
    With patch:
    real    0m44.772s
    user    7m6.632s
    sys     1m16.870s
    
    Change-Id: I3b54b5be8658c951914758c406cca55d4cc1756e
    Reviewed-on: http://gerrit.cloudera.org:8080/13775
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 CMakeLists.txt | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 6d72430..e60e811 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -398,6 +398,15 @@ endif()
 find_package(kuduClient REQUIRED NO_DEFAULT_PATH)
 include_directories(SYSTEM ${KUDU_CLIENT_INCLUDE_DIR})
 
+# Use "thin archives" for our static libraries. We only use static libraries
+# internal to our own build, so thin ones are just as good and much smaller.
+if (${CMAKE_SYSTEM_NAME} STREQUAL "Linux")
+  set(CMAKE_CXX_ARCHIVE_CREATE "<CMAKE_AR> qcT <TARGET> <LINK_FLAGS> <OBJECTS>")
+  set(CMAKE_C_ARCHIVE_CREATE "<CMAKE_AR> qcT <TARGET> <LINK_FLAGS> <OBJECTS>")
+  set(CMAKE_CXX_ARCHIVE_APPEND "<CMAKE_AR> qT <TARGET> <LINK_FLAGS> <OBJECTS>")
+  set(CMAKE_C_ARCHIVE_APPEND "<CMAKE_AR> qT <TARGET> <LINK_FLAGS> <OBJECTS>")
+endif()
+
 # compile these subdirs using their own CMakeLists.txt
 add_subdirectory(common/function-registry)
 add_subdirectory(common/thrift)


[impala] 04/05: [DOCS] A format fix in impala_txtfile

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 98459244954e5b5e34427efea10c5770c2bf70e7
Author: Alex Rodoni <ar...@cloudera.com>
AuthorDate: Wed Jul 3 10:06:55 2019 -0700

    [DOCS] A format fix in impala_txtfile
    
    Change-Id: I63c2d4dfb4985bc2560fb559bee3f4987d415405
    Reviewed-on: http://gerrit.cloudera.org:8080/13797
    Reviewed-by: Alex Rodoni <ar...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 docs/topics/impala_txtfile.xml | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/docs/topics/impala_txtfile.xml b/docs/topics/impala_txtfile.xml
index 8be36bf..ecf11bb 100644
--- a/docs/topics/impala_txtfile.xml
+++ b/docs/topics/impala_txtfile.xml
@@ -302,9 +302,8 @@ create table pipe_separated(id int, s string, n int, t timestamp, b boolean)
             options <codeph>--null-non-string</codeph> and
               <codeph>--null-string</codeph> to ensure all <codeph>NULL</codeph>
             values are represented correctly in the Sqoop output files.
-              <codeph>\N</codeph> needs to be escaped as in the below example: </p>
-          <p><codeph>--null-string '\\N' <codeph>--null-non-string
-                '\\N'</codeph></codeph>
+              <codeph>\N</codeph> needs to be escaped as in the below example:
+            <codeblock>--null-string '\\N' --null-non-string '\\N'</codeblock>
           </p>
         </li>
         <li>


[impala] 02/05: IMPALA-8673: Add query option to force plan hints for insert queries

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 111035ef77f0f3331fff6abe3e60185c3d4e9a10
Author: Abhishek Rawat <ar...@cloudera.com>
AuthorDate: Thu Jun 27 11:09:44 2019 -0700

    IMPALA-8673: Add query option to force plan hints for insert queries
    
    IMPALA-5293 enabled pre-insert clustering by default. This could
    cause performance regressions, so this change provides a query
    option for setting default hints for INSERT statements.
    
    A new query option, 'DEFAULT_HINTS_INSERT_STATEMENT', was added.
    Multiple supported hints can be combined by separating them
    with ':':
      set DEFAULT_HINTS_INSERT_STATEMENT=[clustered|noclustered];
      set DEFAULT_HINTS_INSERT_STATEMENT=[shuffle|noshuffle];
      set DEFAULT_HINTS_INSERT_STATEMENT=
              [clustered|noclustered]:[shuffle|noshuffle];
    
    If a given INSERT statement already has plan hints in the query
    text, the default hints, if any, are all ignored, because hints
    explicitly specified by the user should not be overridden. When a
    default hint is set and an INSERT statement has no plan hints in the
    query text, the default hints have the same effect as if they had
    been written as plan hints in the query text. These default hints
    therefore have the same application and restrictions as the existing
    plan hints for INSERT statements. They apply to HDFS and Kudu table
    formats and are ignored for the HBase table format.
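    
    For example (an illustrative sketch; the table names are arbitrary):
    
      set DEFAULT_HINTS_INSERT_STATEMENT=clustered:shuffle;
      -- No hints in the query text, so this is planned as if written
      -- with /* +clustered,shuffle */:
      insert into t partition(year, month) select * from src;
      -- Explicit hint in the query text, so the default hints are
      -- ignored:
      insert into t partition(year, month) /* +noclustered */
      select * from src;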
    
    Testing:
    - Added unit tests in AnalyzeDDLTest for CTAS.
    - Added unit tests in AnalyzeStmtsTest for insert statements.
    - Added unit tests in PlannerTest validating the plan for various
      scenarios involving different combinations of default hints.
    
    Change-Id: I1c3f213402b8e4d1940f96738ad21edf800fa43a
    Reviewed-on: http://gerrit.cloudera.org:8080/13753
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 be/src/service/query-options.cc                    |   4 +
 be/src/service/query-options.h                     |   4 +-
 common/thrift/ImpalaInternalService.thrift         |   3 +
 common/thrift/ImpalaService.thrift                 |   4 +
 .../apache/impala/analysis/AnalysisContext.java    |   6 +
 .../org/apache/impala/analysis/InsertStmt.java     |  25 ++-
 .../org/apache/impala/analysis/AnalyzeDDLTest.java |  20 +++
 .../apache/impala/analysis/AnalyzeStmtsTest.java   | 100 ++++++++++++
 .../org/apache/impala/planner/PlannerTest.java     |  56 +++++++
 .../insert-default-clustered-noshuffle.test        | 164 ++++++++++++++++++++
 .../insert-default-clustered-shuffle.test          | 111 +++++++++++++
 .../PlannerTest/insert-default-clustered.test      | 163 +++++++++++++++++++
 .../insert-default-noclustered-noshuffle.test      | 172 +++++++++++++++++++++
 .../insert-default-noclustered-shuffle.test        | 119 ++++++++++++++
 .../PlannerTest/insert-default-noclustered.test    | 119 ++++++++++++++
 .../PlannerTest/insert-default-noshuffle.test      | 172 +++++++++++++++++++++
 .../PlannerTest/insert-default-shuffle.test        | 121 +++++++++++++++
 17 files changed, 1359 insertions(+), 4 deletions(-)

diff --git a/be/src/service/query-options.cc b/be/src/service/query-options.cc
index d7cd260..2133aaf 100644
--- a/be/src/service/query-options.cc
+++ b/be/src/service/query-options.cc
@@ -800,6 +800,10 @@ Status impala::SetQueryOption(const string& key, const string& value,
         query_options->__set_disable_hdfs_num_rows_estimate(IsTrue(value));
         break;
       }
+      case TImpalaQueryOptions::DEFAULT_HINTS_INSERT_STATEMENT: {
+        query_options->__set_default_hints_insert_statement(value);
+        break;
+      }
       default:
         if (IsRemovedQueryOption(key)) {
           LOG(WARNING) << "Ignoring attempt to set removed query option '" << key << "'";
diff --git a/be/src/service/query-options.h b/be/src/service/query-options.h
index b4ffad2..a9bf704 100644
--- a/be/src/service/query-options.h
+++ b/be/src/service/query-options.h
@@ -47,7 +47,7 @@ typedef std::unordered_map<string, beeswax::TQueryOptionLevel::type>
 // time we add or remove a query option to/from the enum TImpalaQueryOptions.
 #define QUERY_OPTS_TABLE\
   DCHECK_EQ(_TImpalaQueryOptions_VALUES_TO_NAMES.size(),\
-      TImpalaQueryOptions::DISABLE_HDFS_NUM_ROWS_ESTIMATE + 1);\
+      TImpalaQueryOptions::DEFAULT_HINTS_INSERT_STATEMENT + 1);\
   REMOVED_QUERY_OPT_FN(abort_on_default_limit_exceeded, ABORT_ON_DEFAULT_LIMIT_EXCEEDED)\
   QUERY_OPT_FN(abort_on_error, ABORT_ON_ERROR, TQueryOptionLevel::REGULAR)\
   REMOVED_QUERY_OPT_FN(allow_unsupported_formats, ALLOW_UNSUPPORTED_FORMATS)\
@@ -168,6 +168,8 @@ typedef std::unordered_map<string, beeswax::TQueryOptionLevel::type>
   QUERY_OPT_FN(parquet_page_row_count_limit, PARQUET_PAGE_ROW_COUNT_LIMIT,\
       TQueryOptionLevel::ADVANCED)\
   QUERY_OPT_FN(disable_hdfs_num_rows_estimate, DISABLE_HDFS_NUM_ROWS_ESTIMATE,\
+      TQueryOptionLevel::REGULAR)\
+  QUERY_OPT_FN(default_hints_insert_statement, DEFAULT_HINTS_INSERT_STATEMENT,\
       TQueryOptionLevel::REGULAR)
   ;
 
diff --git a/common/thrift/ImpalaInternalService.thrift b/common/thrift/ImpalaInternalService.thrift
index 4f5a070..658735f 100644
--- a/common/thrift/ImpalaInternalService.thrift
+++ b/common/thrift/ImpalaInternalService.thrift
@@ -352,6 +352,9 @@ struct TQueryOptions {
   // Disable the attempt to compute an estimated number of rows in an
   // hdfs table.
   84: optional bool disable_hdfs_num_rows_estimate = false;
+
+  // See comment in ImpalaService.thrift.
+  85: optional string default_hints_insert_statement;
 }
 
 // Impala currently has two types of sessions: Beeswax and HiveServer2
diff --git a/common/thrift/ImpalaService.thrift b/common/thrift/ImpalaService.thrift
index 46ca91c..06c8858 100644
--- a/common/thrift/ImpalaService.thrift
+++ b/common/thrift/ImpalaService.thrift
@@ -400,6 +400,10 @@ enum TImpalaQueryOptions {
   // Disable the attempt to compute an estimated number of rows in an
   // hdfs table.
   DISABLE_HDFS_NUM_ROWS_ESTIMATE = 83
+
+  // Default hints for insert statement. Will be overridden by hints in the INSERT
+  // statement, if any.
+  DEFAULT_HINTS_INSERT_STATEMENT = 84
 }
 
 // The summary of a DML statement.
diff --git a/fe/src/main/java/org/apache/impala/analysis/AnalysisContext.java b/fe/src/main/java/org/apache/impala/analysis/AnalysisContext.java
index 2353bfc..d5312f9 100644
--- a/fe/src/main/java/org/apache/impala/analysis/AnalysisContext.java
+++ b/fe/src/main/java/org/apache/impala/analysis/AnalysisContext.java
@@ -521,4 +521,10 @@ public class AnalysisContext {
 
   public Analyzer getAnalyzer() { return analysisResult_.getAnalyzer(); }
   public EventSequence getTimeline() { return timeline_; }
+  // This should only be called after analyzeAndAuthorize().
+  public AnalysisResult getAnalysisResult() {
+    Preconditions.checkNotNull(analysisResult_);
+    Preconditions.checkNotNull(analysisResult_.stmt_);
+    return analysisResult_;
+  }
 }
diff --git a/fe/src/main/java/org/apache/impala/analysis/InsertStmt.java b/fe/src/main/java/org/apache/impala/analysis/InsertStmt.java
index e0bd6fb..18bd264 100644
--- a/fe/src/main/java/org/apache/impala/analysis/InsertStmt.java
+++ b/fe/src/main/java/org/apache/impala/analysis/InsertStmt.java
@@ -837,11 +837,30 @@ public class InsertStmt extends StatementBase {
   }
 
   private void analyzePlanHints(Analyzer analyzer) throws AnalysisException {
-    if (planHints_.isEmpty()) return;
+    // If there are no hints then early exit.
+    if (planHints_.isEmpty() &&
+        !(analyzer.getQueryOptions().isSetDefault_hints_insert_statement())) return;
+
     if (table_ instanceof FeHBaseTable) {
-      throw new AnalysisException(String.format("INSERT hints are only supported for " +
-          "inserting into Hdfs and Kudu tables: %s", getTargetTableName()));
+      if (!planHints_.isEmpty()) {
+        throw new AnalysisException(String.format("INSERT hints are only supported for " +
+            "inserting into Hdfs and Kudu tables: %s", getTargetTableName()));
+      }
+      // Insert hints are not supported for HBase tables, so ignore any default hints.
+      return;
+    }
+
+    // Set up the plan hints from query option DEFAULT_HINTS_INSERT_STATEMENT.
+    // Default hints are ignored if the original statement already had hints.
+    if (planHints_.isEmpty() &&
+        analyzer.getQueryOptions().isSetDefault_hints_insert_statement()) {
+      String defaultHints =
+        analyzer.getQueryOptions().getDefault_hints_insert_statement();
+      for (String hint: defaultHints.trim().split(":")) {
+        planHints_.add(new PlanHint(hint.trim()));
+      }
     }
+
     for (PlanHint hint: planHints_) {
       if (hint.is("SHUFFLE")) {
         hasShuffleHint_ = true;
diff --git a/fe/src/test/java/org/apache/impala/analysis/AnalyzeDDLTest.java b/fe/src/test/java/org/apache/impala/analysis/AnalyzeDDLTest.java
index c466a1f..aeb71df 100644
--- a/fe/src/test/java/org/apache/impala/analysis/AnalyzeDDLTest.java
+++ b/fe/src/test/java/org/apache/impala/analysis/AnalyzeDDLTest.java
@@ -2187,6 +2187,26 @@ public class AnalyzeDDLTest extends FrontendTestBase {
           "select * from functional.alltypes", prefix, suffix),
           "Conflicting INSERT hints: shuffle and noshuffle");
     }
+
+    // Test default hints using query option.
+    AnalysisContext insertCtx = createAnalysisCtx();
+    // Test default hints for partitioned Hdfs tables.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement("SHUFFLE");
+    AnalyzesOk("create table t partitioned by (year, month) " +
+        "as select * from functional.alltypes",
+        insertCtx);
+    // Warn on unrecognized hints.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement("badhint");
+    AnalyzesOk("create table t partitioned by (year, month) " +
+        "as select * from functional.alltypes",
+        insertCtx,
+        "INSERT hint not recognized: badhint");
+    // Conflicting plan hints.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement("shuffle:noshuFFLe");
+    AnalysisError("create table t partitioned by (year, month) " +
+        "as select * from functional.alltypes",
+        insertCtx,
+        "Conflicting INSERT hints: shuffle and noshuffle");
   }
 
   @Test
diff --git a/fe/src/test/java/org/apache/impala/analysis/AnalyzeStmtsTest.java b/fe/src/test/java/org/apache/impala/analysis/AnalyzeStmtsTest.java
index 9b42e3e..3c4a8c4 100644
--- a/fe/src/test/java/org/apache/impala/analysis/AnalyzeStmtsTest.java
+++ b/fe/src/test/java/org/apache/impala/analysis/AnalyzeStmtsTest.java
@@ -225,6 +225,41 @@ public class AnalyzeStmtsTest extends AnalyzerTest {
     }
   }
 
+  /**
+   * Test default hints applied during analysis.
+   */
+  public void testDefaultHintApplied(AnalysisContext insertCtx) {
+    String defaultHints =
+        insertCtx.getQueryOptions().getDefault_hints_insert_statement();
+    List<PlanHint> planHints =
+        insertCtx.getAnalysisResult().getInsertStmt().getPlanHints();
+    String[] defaultHintsArray = defaultHints.trim().split(":");
+    Assert.assertEquals(defaultHintsArray.length, planHints.size());
+    for (String hint: defaultHintsArray) {
+      Assert.assertTrue(planHints.contains(new PlanHint(hint.trim())));
+    }
+  }
+
+  /**
+   * Test default hints ignored when query has plan hints.
+   */
+  public void testDefaultHintIgnored(String query, String defaultHints) {
+    // Analyze query without default hints.
+    AnalysisContext insertCtx = createAnalysisCtx();
+    AnalyzesOk(query, insertCtx);
+    List<PlanHint> planHints =
+        insertCtx.getAnalysisResult().getInsertStmt().getPlanHints();
+
+    // Analyze query with default hints.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement(defaultHints);
+    AnalyzesOk(query, insertCtx);
+    List<PlanHint> planHintsWithDefaultHints =
+        insertCtx.getAnalysisResult().getInsertStmt().getPlanHints();
+
+    // Default hint should be ignored when plan hints exist.
+    Assert.assertEquals(planHints, planHintsWithDefaultHints);
+  }
+
   @Test
   public void TestCollectionTableRefs() throws AnalysisException {
     // Test ARRAY type referenced as a table.
@@ -1990,6 +2025,18 @@ public class AnalyzeStmtsTest extends AnalyzerTest {
           "functional.alltypes", prefix, suffix),
           "Insert statement has 'noclustered' hint, but table has 'sort.columns' " +
           "property. The 'noclustered' hint will be ignored.");
+
+      // Default hints should be ignored when query has plan hints.
+      testDefaultHintIgnored(String.format(
+          "insert into functional.alltypes partition (year, month) " +
+          "%snoclustered,shuffle%s select * from functional.alltypes",
+          prefix, suffix),
+          "CLUSTERED");
+      testDefaultHintIgnored(String.format(
+          "insert into functional_kudu.alltypes " +
+          "%snoclustered,shuffle%s select * from functional.alltypes",
+          prefix, suffix),
+          "CLUSTERED : NOSHUFFLE ");
     }
 
     // Multiple non-conflicting hints and case insensitivity of hints.
@@ -1999,6 +2046,59 @@ public class AnalyzeStmtsTest extends AnalyzerTest {
     AnalyzesOk("insert into table functional.alltypessmall " +
         "partition (year, month) [shuffle, ShUfFlE] " +
         "select * from functional.alltypes");
+
+    // Test default hints.
+    AnalysisContext insertCtx = createAnalysisCtx();
+    // Bad hint returns a warning.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement("badhint");
+    AnalyzesOk("insert into functional.alltypessmall partition (year, month) " +
+        "select * from functional.alltypes",
+        insertCtx,
+        "INSERT hint not recognized: badhint");
+    // Bad hint returns a warning.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement(
+        "clustered:noshuffle:badhint");
+    AnalyzesOk("insert into functional.alltypessmall partition (year, month) " +
+        "select * from functional.alltypes",
+        insertCtx,
+        "INSERT hint not recognized: badhint");
+    // Conflicting hints return an error.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement(
+        "clustered:noclustered");
+    AnalysisError("insert into functional.alltypessmall partition (year, month) " +
+        "select * from functional.alltypes",
+        insertCtx,
+        "Conflicting INSERT hints: clustered and noclustered");
+    // Conflicting hints return an error.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement("shuffle:noshuffle");
+    AnalysisError("insert into functional.alltypessmall partition (year, month) " +
+        "select * from functional.alltypes",
+        insertCtx,
+        "Conflicting INSERT hints: shuffle and noshuffle");
+    // Default hints ignored for HBase table.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement("noclustered");
+    AnalyzesOk("insert into table functional_hbase.alltypes " +
+        "select * from functional_hbase.alltypes",
+        insertCtx);
+    // Default hints are ok for Kudu table.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement("clustered:noshuffle");
+    AnalyzesOk(String.format("insert into table functional_kudu.alltypes " +
+        "select * from functional_kudu.alltypes"),
+        insertCtx);
+    testDefaultHintApplied(insertCtx);
+    // Default hints are ok for partitioned Hdfs tables.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement(
+        "NOCLUSTERED:noshuffle");
+    AnalyzesOk("insert into functional.alltypessmall partition (year, month) " +
+        "select * from functional.alltypes",
+        insertCtx);
+    testDefaultHintApplied(insertCtx);
+    // Default hints are ok for non partitioned Hdfs tables.
+    insertCtx.getQueryOptions().setDefault_hints_insert_statement("CLUSTERED:SHUFFLE");
+    AnalyzesOk("insert into functional.alltypesnopart " +
+        "select * from functional.alltypesnopart",
+        insertCtx);
+    testDefaultHintApplied(insertCtx);
   }
 
   @Test
diff --git a/fe/src/test/java/org/apache/impala/planner/PlannerTest.java b/fe/src/test/java/org/apache/impala/planner/PlannerTest.java
index 524f896..76016c1 100644
--- a/fe/src/test/java/org/apache/impala/planner/PlannerTest.java
+++ b/fe/src/test/java/org/apache/impala/planner/PlannerTest.java
@@ -188,6 +188,62 @@ public class PlannerTest extends PlannerTestBase {
   }
 
   @Test
+  public void testInsertDefaultClustered() {
+    TQueryOptions options = defaultQueryOptions();
+    options.setDefault_hints_insert_statement("clustered");
+    runPlannerTestFile("insert-default-clustered", options);
+  }
+
+  @Test
+  public void testInsertDefaultNoClustered() {
+    TQueryOptions options = defaultQueryOptions();
+    options.setDefault_hints_insert_statement("noclustered  ");
+    runPlannerTestFile("insert-default-noclustered", options);
+  }
+
+  @Test
+  public void testInsertDefaultShuffle() {
+    TQueryOptions options = defaultQueryOptions();
+    options.setDefault_hints_insert_statement("shuffle");
+    runPlannerTestFile("insert-default-shuffle", options);
+  }
+
+  @Test
+  public void testInsertDefaultNoShuffle() {
+    TQueryOptions options = defaultQueryOptions();
+    options.setDefault_hints_insert_statement("  noshuffle ");
+    runPlannerTestFile("insert-default-noshuffle", options);
+  }
+
+  @Test
+  public void testInsertDefaultClusteredShuffle() {
+    TQueryOptions options = defaultQueryOptions();
+    options.setDefault_hints_insert_statement("clustered:shuffle");
+    runPlannerTestFile("insert-default-clustered-shuffle", options);
+  }
+
+  @Test
+  public void testInsertDefaultClusteredNoShuffle() {
+    TQueryOptions options = defaultQueryOptions();
+    options.setDefault_hints_insert_statement("clustered : noshuffle");
+    runPlannerTestFile("insert-default-clustered-noshuffle", options);
+  }
+
+  @Test
+  public void testInsertDefaultNoClusteredShuffle() {
+    TQueryOptions options = defaultQueryOptions();
+    options.setDefault_hints_insert_statement("  noclustered:  shuffle");
+    runPlannerTestFile("insert-default-noclustered-shuffle", options);
+  }
+
+  @Test
+  public void testInsertDefaultNoClusteredNoShuffle() {
+    TQueryOptions options = defaultQueryOptions();
+    options.setDefault_hints_insert_statement("  noclustered  :  noshuffle  ");
+    runPlannerTestFile("insert-default-noclustered-noshuffle", options);
+  }
+
+  @Test
   public void testInsertSortBy() {
     // Add a test table with a SORT BY clause to test that the corresponding sort nodes
     // are added by the insert statements in insert-sort-by.test.
diff --git a/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered-noshuffle.test b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered-noshuffle.test
new file mode 100644
index 0000000..2817296
--- /dev/null
+++ b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered-noshuffle.test
@@ -0,0 +1,164 @@
+# HBASE; EXPECT: Default INSERT hints should be ignored;
+insert into functional_hbase.alltypes select * from functional_hbase.alltypes
+---- PLAN
+WRITE TO HBASE table=functional_hbase.alltypes
+|
+00:SCAN HBASE [functional_hbase.alltypes]
+   row-size=80B cardinality=14.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HBASE table=functional_hbase.alltypes
+|
+00:SCAN HBASE [functional_hbase.alltypes]
+   row-size=80B cardinality=14.30K
+====
+# KUDU; DEFAULT: CLUSTERED, NOSHUFFLE; EXPECT: PARTIAL SORT, NO EXCHANGE;
+upsert into functional_kudu.alltypes select * from functional.alltypes
+---- PLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+01:PARTIAL SORT
+|  order by: KuduPartition(functional.alltypes.id) ASC NULLS LAST, id ASC NULLS LAST
+|  row-size=93B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+01:PARTIAL SORT
+|  order by: KuduPartition(functional.alltypes.id) ASC NULLS LAST, id ASC NULLS LAST
+|  row-size=93B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: CLUSTERED, NOSHUFFLE; EXPECT: SORT, NO EXCHANGE;
+insert into table functional.alltypes partition(year, month)
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# NON-PARTITIONED; DEFAULT: CLUSTERED, NOSHUFFLE; EXPECT: NO SORT, NO EXCHANGE;
+insert into table functional.alltypesnopart select * from functional.alltypesnopart
+---- PLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+====
+# PARTITIONED; DEFAULT: CLUSTERED, NOSHUFFLE; PLAN HINT: NOCLUSTERED; EXPECT: NO SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +noclustered */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: CLUSTERED, NOSHUFFLE; PLAN HINT: SHUFFLE; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +shuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: CLUSTERED, NOSHUFFLE; PLAN HINT: NOCLUSTERED, SHUFFLE; EXPECT: NO SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +noclustered,shuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# KUDU; DEFAULT: CLUSTERED, NOSHUFFLE; PLAN HINT: NOCLUSTERED, SHUFFLE; EXPECT: NO SORT, EXCHANGE;
+upsert into functional_kudu.alltypes /* +noclustered,shuffle */ select * from functional.alltypes
+---- PLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+01:EXCHANGE [KUDU(KuduPartition(functional.alltypes.id))]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
diff --git a/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered-shuffle.test b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered-shuffle.test
new file mode 100644
index 0000000..fc07ebc
--- /dev/null
+++ b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered-shuffle.test
@@ -0,0 +1,111 @@
+# PARTITIONED; DEFAULT: CLUSTERED, SHUFFLE; EXPECT: SORT, EXCHANGE
+insert into table functional.alltypes partition(year, month)
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# NON-PARTITIONED; DEFAULT: CLUSTERED, SHUFFLE; EXPECT: NO SORT, EXCHANGE
+insert into table functional.alltypesnopart select * from functional.alltypesnopart
+---- PLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+01:EXCHANGE [UNPARTITIONED]
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+====
+# PARTITIONED; DEFAULT: CLUSTERED, SHUFFLE; PLAN HINT: NOCLUSTERED; EXPECT: NO SORT, EXCHANGE
+insert into table functional.alltypes partition(year, month) /* +noclustered */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: CLUSTERED, SHUFFLE; PLAN HINT: NOSHUFFLE; EXPECT: SORT, NO EXCHANGE
+insert into table functional.alltypes partition(year, month) /* +noshuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: CLUSTERED, SHUFFLE; PLAN HINT: NOCLUSTERED, NOSHUFFLE; EXPECT: NO SORT, NO EXCHANGE
+insert into table functional.alltypes partition(year, month) /* +noclustered,noshuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
diff --git a/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered.test b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered.test
new file mode 100644
index 0000000..80be316
--- /dev/null
+++ b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-clustered.test
@@ -0,0 +1,163 @@
+# KUDU; DEFAULT: CLUSTERED; EXPECT: PARTIAL SORT, EXCHANGE;
+upsert into functional_kudu.alltypes select * from functional.alltypes
+---- PLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+01:PARTIAL SORT
+|  order by: KuduPartition(functional.alltypes.id) ASC NULLS LAST, id ASC NULLS LAST
+|  row-size=93B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+02:PARTIAL SORT
+|  order by: KuduPartition(functional.alltypes.id) ASC NULLS LAST, id ASC NULLS LAST
+|  row-size=93B cardinality=7.30K
+|
+01:EXCHANGE [KUDU(KuduPartition(functional.alltypes.id))]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: CLUSTERED; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month)
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# NON-PARTITIONED; DEFAULT: CLUSTERED; EXPECT: NO SORT,NO EXCHANGE;
+insert into table functional.alltypesnopart select * from functional.alltypesnopart
+---- PLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+====
+# PARTITIONED; DEFAULT: CLUSTERED; PLAN HINT: NOCLUSTERED;  EXPECT: NO SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +noclustered */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: CLUSTERED; PLAN HINT: SHUFFLE;  EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +shuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: CLUSTERED; PLAN HINT: NOSHUFFLE;  EXPECT: SORT, NO EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +noshuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# KUDU; DEFAULT: CLUSTERED; PLAN HINT: NOSHUFFLE;  EXPECT: PARTIAL SORT, NO EXCHANGE;
+upsert into functional_kudu.alltypes /* +noshuffle */ select * from functional.alltypes
+---- PLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+01:PARTIAL SORT
+|  order by: KuduPartition(functional.alltypes.id) ASC NULLS LAST, id ASC NULLS LAST
+|  row-size=93B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
diff --git a/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered-noshuffle.test b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered-noshuffle.test
new file mode 100644
index 0000000..921fbdf
--- /dev/null
+++ b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered-noshuffle.test
@@ -0,0 +1,172 @@
+# HBASE; EXPECT: Default INSERT hints should be ignored;
+insert into functional_hbase.alltypes select * from functional_hbase.alltypes
+---- PLAN
+WRITE TO HBASE table=functional_hbase.alltypes
+|
+00:SCAN HBASE [functional_hbase.alltypes]
+   row-size=80B cardinality=14.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HBASE table=functional_hbase.alltypes
+|
+00:SCAN HBASE [functional_hbase.alltypes]
+   row-size=80B cardinality=14.30K
+====
+# KUDU; DEFAULT: NOCLUSTERED, NOSHUFFLE; EXPECT: NO PARTIAL SORT, NO EXCHANGE;
+upsert into functional_kudu.alltypes  select * from functional.alltypes
+---- PLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: NOCLUSTERED, NOSHUFFLE; EXPECT: NO SORT, NO EXCHANGE;
+insert into table functional.alltypes partition(year, month)
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# NON-PARTITIONED; DEFAULT: NOCLUSTERED, NOSHUFFLE; EXPECT: NO SORT, NO EXCHANGE;
+insert into table functional.alltypesnopart select * from functional.alltypesnopart
+---- PLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+====
+# PARTITIONED; DEFAULT: NOCLUSTERED, NOSHUFFLE; PLAN HINT: CLUSTERED; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +clustered */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: NOCLUSTERED, NOSHUFFLE; PLAN HINT: SHUFFLE; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +shuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: NOCLUSTERED, NOSHUFFLE; PLAN HINT: CLUSTERED, SHUFFLE; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +clustered,shuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# KUDU; DEFAULT: NOCLUSTERED, NOSHUFFLE; PLAN HINT: CLUSTERED, SHUFFLE; EXPECT: SORT, EXCHANGE;
+upsert into functional_kudu.alltypes /* +clustered,shuffle */ select * from functional.alltypes
+---- PLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+01:PARTIAL SORT
+|  order by: KuduPartition(functional.alltypes.id) ASC NULLS LAST, id ASC NULLS LAST
+|  row-size=93B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+02:PARTIAL SORT
+|  order by: KuduPartition(functional.alltypes.id) ASC NULLS LAST, id ASC NULLS LAST
+|  row-size=93B cardinality=7.30K
+|
+01:EXCHANGE [KUDU(KuduPartition(functional.alltypes.id))]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
diff --git a/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered-shuffle.test b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered-shuffle.test
new file mode 100644
index 0000000..4601544
--- /dev/null
+++ b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered-shuffle.test
@@ -0,0 +1,119 @@
+# PARTITIONED: DEFAULT: NOCLUSTERED, SHUFFLE; EXPECT: NO SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month)
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# NON_PARTITIONED: DEFAULT: NOCLUSTERED, SHUFFLE; EXPECT: NO SORT, EXCHANGE;
+insert into table functional.alltypesnopart select * from functional.alltypesnopart
+---- PLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+01:EXCHANGE [UNPARTITIONED]
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+====
+# PARTITIONED: DEFAULT: NOCLUSTERED, SHUFFLE; PLAN HINT: CLUSTERED; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +clustered */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED: DEFAULT: NOCLUSTERED, SHUFFLE; PLAN HINT: NOSHUFFLE; EXPECT: SORT, NO EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +noshuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: NOCLUSTERED, SHUFFLE; PLAN HINT: CLUSTERED, NOSHUFFLE; EXPECT: SORT, NO EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +clustered,noshuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
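As the CLUSTERED and NOSHUFFLE cases in the file above illustrate, a hint written on the statement itself takes precedence over the session default. A sketch of the same interaction outside the test harness (again assuming the DEFAULT_HINTS_INSERT_STATEMENT option name):

    -- The explicit hint overrides the NOCLUSTERED:SHUFFLE session default, so
    -- the plan sorts on the partition keys and drops the exchange.
    SET DEFAULT_HINTS_INSERT_STATEMENT=NOCLUSTERED:SHUFFLE;
    INSERT INTO TABLE functional.alltypes PARTITION (year, month) /* +clustered,noshuffle */
    SELECT * FROM functional.alltypes;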
diff --git a/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered.test b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered.test
new file mode 100644
index 0000000..bd437b1
--- /dev/null
+++ b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noclustered.test
@@ -0,0 +1,119 @@
+# PARTITIONED; DEFAULT: NOCLUSTERED; EXPECT: NO SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month)
+select * from functional.alltypes;
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# NON-PARTITIONED; DEFAULT: NOCLUSTERED; EXPECT: NO SORT, NO EXCHANGE;
+insert into table functional.alltypesnopart select * from functional.alltypesnopart;
+---- PLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+====
+# PARTITIONED; DEFAULT: NOCLUSTERED; PLAN HINT: CLUSTERED; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +clustered */
+select * from functional.alltypes;
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: NOCLUSTERED; PLAN HINT: SHUFFLE; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +shuffle */
+select * from functional.alltypes;
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: NOCLUSTERED; PLAN HINT: NOSHUFFLE; EXPECT: SORT, NO EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +noshuffle */
+select * from functional.alltypes;
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
diff --git a/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noshuffle.test b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noshuffle.test
new file mode 100644
index 0000000..12bf707
--- /dev/null
+++ b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-noshuffle.test
@@ -0,0 +1,172 @@
+# HBASE; EXPECT: Default INSERT hints should be ignored;
+insert into functional_hbase.alltypes select * from functional_hbase.alltypes
+---- PLAN
+WRITE TO HBASE table=functional_hbase.alltypes
+|
+00:SCAN HBASE [functional_hbase.alltypes]
+   row-size=80B cardinality=14.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HBASE table=functional_hbase.alltypes
+|
+00:SCAN HBASE [functional_hbase.alltypes]
+   row-size=80B cardinality=14.30K
+====
+# KUDU; DEFAULT: NOSHUFFLE; EXPECT: PARTIAL SORT, NO EXCHANGE;
+upsert into functional_kudu.alltypes select * from functional.alltypes
+---- PLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+01:PARTIAL SORT
+|  order by: KuduPartition(functional.alltypes.id) ASC NULLS LAST, id ASC NULLS LAST
+|  row-size=93B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: NOSHUFFLE; EXPECT: SORT, NO EXCHANGE;
+insert into table functional.alltypes partition(year, month)
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# NON-PARTITIONED; DEFAULT: NOSHUFFLE; EXPECT: NO SORT, NO EXCHANGE;
+insert into table functional.alltypesnopart select * from functional.alltypesnopart
+---- PLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+====
+# PARTITIONED; DEFAULT: NOSHUFFLE; PLAN HINT: CLUSTERED; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +clustered */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: NOSHUFFLE; PLAN HINT: NOCLUSTERED; EXPECT: NO SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +noclustered */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: NOSHUFFLE; PLAN HINT: SHUFFLE; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +shuffle */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# KUDU; DEFAULT: NOSHUFFLE; PLAN HINT: SHUFFLE; EXPECT: PARTIAL SORT, EXCHANGE;
+upsert into functional_kudu.alltypes /* +shuffle */ select * from functional.alltypes
+---- PLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+UPSERT INTO KUDU [functional_kudu.alltypes]
+|
+02:PARTIAL SORT
+|  order by: KuduPartition(functional.alltypes.id) ASC NULLS LAST, id ASC NULLS LAST
+|  row-size=93B cardinality=7.30K
+|
+01:EXCHANGE [KUDU(KuduPartition(functional.alltypes.id))]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
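The first two cases in the file above also show that default insert hints are ignored for HBase writes, and that Kudu upserts get a PARTIAL SORT on the Kudu partition expression rather than a full sort. One way to confirm which nodes a given default actually produces is EXPLAIN; a sketch (option name assumed as above):

    -- Check whether SORT / PARTIAL SORT / EXCHANGE nodes appear in the plan.
    SET DEFAULT_HINTS_INSERT_STATEMENT=NOSHUFFLE;
    EXPLAIN UPSERT INTO functional_kudu.alltypes SELECT * FROM functional.alltypes;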
diff --git a/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-shuffle.test b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-shuffle.test
new file mode 100644
index 0000000..e7b7496
--- /dev/null
+++ b/testdata/workloads/functional-planner/queries/PlannerTest/insert-default-shuffle.test
@@ -0,0 +1,121 @@
+# PARTITIONED; DEFAULT: SHUFFLE; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month)
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# NON-PARTITIONED; DEFAULT: SHUFFLE; EXPECT: NO SORT, EXCHANGE;
+insert into table functional.alltypesnopart select * from functional.alltypesnopart
+---- PLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypesnopart, OVERWRITE=false]
+|  partitions=1
+|
+01:EXCHANGE [UNPARTITIONED]
+|
+00:SCAN HDFS [functional.alltypesnopart]
+   HDFS partitions=1/1 files=0 size=0B
+   row-size=72B cardinality=0
+====
+# PARTITIONED; DEFAULT: SHUFFLE; PLAN HINT: CLUSTERED; EXPECT: SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +clustered */
+select * from functional.alltypes
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+02:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: SHUFFLE; PLAN HINT: NOCLUSTERED; EXPECT: NO SORT, EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +noclustered */
+select * from functional.alltypes;
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(functional.alltypes.year,functional.alltypes.month)]
+|  partitions=24
+|
+01:EXCHANGE [HASH(functional.alltypes.year,functional.alltypes.month)]
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====
+# PARTITIONED; DEFAULT: SHUFFLE; PLAN HINT: NOSHUFFLE; EXPECT: SORT, NO EXCHANGE;
+insert into table functional.alltypes partition(year, month) /* +noshuffle */
+select * from functional.alltypes;
+---- PLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+---- DISTRIBUTEDPLAN
+WRITE TO HDFS [functional.alltypes, OVERWRITE=false, PARTITION-KEYS=(year,month)]
+|  partitions=24
+|
+01:SORT
+|  order by: year ASC NULLS LAST, month ASC NULLS LAST
+|  row-size=89B cardinality=7.30K
+|
+00:SCAN HDFS [functional.alltypes]
+   HDFS partitions=24/24 files=24 size=478.45KB
+   row-size=89B cardinality=7.30K
+====


[impala] 05/05: IMPALA-8341: [DOCS] Added a note about the requirement for existing dirs

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 5fdef39fcf7e1b59bcf8d670d217ca7a88fc2738
Author: Alex Rodoni <ar...@cloudera.com>
AuthorDate: Tue Jul 2 18:35:24 2019 -0700

    IMPALA-8341: [DOCS] Added a note about the requirement for existing dirs
    
    Change-Id: I5feddddff3ab7c09ee681098ec3e630977cb92f1
    Reviewed-on: http://gerrit.cloudera.org:8080/13793
    Reviewed-by: Alex Rodoni <ar...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 docs/topics/impala_data_cache.xml | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/docs/topics/impala_data_cache.xml b/docs/topics/impala_data_cache.xml
index ee753dc..e9cd14a 100644
--- a/docs/topics/impala_data_cache.xml
+++ b/docs/topics/impala_data_cache.xml
@@ -31,7 +31,7 @@ under the License.
     </p>
 
     <note>
-      This is an experimental feature in Impala 3.3 and not generally supported.
+      This is an experimental feature in Impala 3.3 and is not generally supported.
     </note>
 
     <p>
@@ -61,7 +61,8 @@ under the License.
     </p>
 
     <p>
-      The specified directories must exist in the local filesystem of each Impala Daemon.
+      The specified directories must exist in the local filesystem of each Impala Daemon, or
+      Impala will fail to start.
     </p>
 
     <p>
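Since a missing directory now prevents the daemon from starting, the directories would be created on every node before the cache is enabled. A hedged sketch, assuming the --data_cache startup flag takes a comma-separated directory list followed by a capacity quota (the flag format is not part of this diff):

    # On each Impala daemon host: create the cache dirs, then start with the flag.
    mkdir -p /data/0/impala-cache /data/1/impala-cache
    impalad --data_cache=/data/0/impala-cache,/data/1/impala-cache:500GB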