Posted to commits@kyuubi.apache.org by ul...@apache.org on 2021/09/09 08:08:39 UTC

[incubator-kyuubi] branch master updated: [KYUUBI #1037] Add classification of SQL_Type: for dql and auxiliary statement

This is an automated email from the ASF dual-hosted git repository.

ulyssesyou pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-kyuubi.git


The following commit(s) were added to refs/heads/master by this push:
     new 86aa469  [KYUUBI #1037] Add classification of SQL_Type: for dql and auxiliary statement
86aa469 is described below

commit 86aa4691c5d0f3f202597dbb5d30d5d54bbe43a3
Author: 张宇翔 <zh...@126.com>
AuthorDate: Thu Sep 9 16:08:20 2021 +0800

    [KYUUBI #1037] Add classification of SQL_Type: for dql and auxiliary statement
    
    This PR adds two more SQL classifications:
    - other: all auxiliary statements fall into this classification
    - dql

    Users may supply a self-defined matching rule file: sql-classification.json.
    If no file with that name is found on the classpath, the service loads the default matching rule: sql-classification-default.json.
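
    For illustration, a minimal user-supplied sql-classification.json could
    look like the snippet below; the keys are logical-plan class simple names,
    matching the default rule file included in this patch:

        {
          "ExplainCommand": "other",
          "InsertIntoHadoopFsRelationCommand": "dml"
        }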
    
    ### _Why are the changes needed?_
    This lets Kyuubi tag dql and auxiliary statements in addition to the
    existing DDL/DML classifications, and lets users override the matching
    rules with their own sql-classification.json.
    
    ### _How was this patch tested?_
    - [ ] Add some test cases that check the changes thoroughly including negative and positive cases if possible
    
    - [ ] Add screenshots for manual tests if appropriate
    
    - [ ] [Run test](https://kyuubi.readthedocs.io/en/latest/develop_tools/testing.html#running-tests) locally before making a pull request
    
    Closes #1037 from zhang1002/branch-1.3_get-sql-type-for-aux.
    
    Closes #1037
    
    e6e9e906 [张宇翔] merge master
    f971c9ad [张宇翔] merge master
    af28520b [张宇翔] Merge remote-tracking branch 'upstream/master'
    e94ad829 [张宇翔] 1. Remove NullPointException 2. If the url is null, use the default jsonFile
    9d346c61 [张宇翔] 1. Add analyzeColumnCommand and AnalyzePartitionCommand 2. change ExplainCommand to other
    a73941ed [张宇翔] order the matching rule in jsonFile
    72a18ece [张宇翔] change format
    d7f75387 [张宇翔] change format
    e6f9b6b2 [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-aux
    d258e806 [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-aux
    c328c884 [张宇翔] Merge remote-tracking branch 'upstream/master'
    47a6dc5a [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-aux
    abcaae1f [张宇翔] Merge remote-tracking branch 'upstream/master'
    524a3801 [张宇翔] Support user use the self-defined matching rule, named sql-classification.json
    8c132ff6 [张宇翔] Support user use the self-defined matching rule, named sql-classification.json
    aec0d368 [张宇翔] Merge branch 'branch-1.3_get-sql-type-for-dml' into branch-1.3_get-sql-type-for-aux
    eb6a96d8 [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-dml
    f0a7e566 [张宇翔] Merge remote-tracking branch 'upstream/master'
    5e986922 [张宇翔] 1. Add sql classification of auxiliary statement 2. Add sql classification of dql
    8e586df1 [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-dml
    ba0cd0a3 [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-dml
    112aa6b6 [张宇翔] Merge remote-tracking branch 'upstream/master'
    18f35f2b [张宇翔] Add the classification of sql: DML
    025bc3f8 [张宇翔] Add the classification of sql: DML
    55ef6af7 [张宇翔] some modification
    f1f8b355 [张宇翔] change other to undefined
    1052ae45 [张宇翔] Change some code standards
    5e21dc62 [张宇翔] Change some code standards
    f531744d [张宇翔] Add dml test
    3017b96c [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-ddl
    c55652fd [张宇翔] Merge remote-tracking branch 'upstream/master'
    c2572b98 [张宇翔] 1. Use RuleBuilder to develop this function 2. Use analyzed logical plan to judge this sql's classification 3. Change the matching rule: use map, the key is simpleClassName, the value is classification of this sql
    93b5624a [张宇翔] Exclude license check for json
    d8187ced [张宇翔] Exclude license check for json
    e46bc86e [张宇翔] Add exception
    3b358bf0 [张宇翔] Add licence
    1125b600 [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-ddl
    ef5e8c55 [张宇翔] Merge remote-tracking branch 'upstream/master'
    ba8f99eb [张宇翔] Use extension to get simpleName
    c0bdea7b [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-ddl
    5a75384c [张宇翔] Merge remote-tracking branch 'upstream/master'
    55b85849 [张宇翔] Update settings.md
    ecbd8000 [张宇翔] Repair the scalastyle violations.
    76edd20d [张宇翔] Repair the scalastyle violations.
    d8e820ee [张宇翔] Merge branch 'master' into branch-1.3_get-sql-type-for-ddl
    8da4f7ed [张宇翔] Merge remote-tracking branch 'upstream/master'
    65a90958 [张宇翔] Classification for sqlType: DDL
    a7ba1bfc [张宇翔] Merge remote-tracking branch 'upstream/master'
    b662a989 [张宇翔] Merge remote-tracking branch 'upstream/master'
    4c8f3b87 [张宇翔] Merge remote-tracking branch 'upstream/master'
    8b686767 [张宇翔] Merge remote-tracking branch 'upstream/master'
    cf99e309 [张宇翔] Merge remote-tracking branch 'upstream/master'
    0afaa578 [张宇翔] Merge remote-tracking branch 'upstream/master'
    b24fea07 [张宇翔] Merge remote-tracking branch 'upstream/master'
    e517cfc5 [张宇翔] Merge remote-tracking branch 'upstream/master'
    18aebe76 [张宇翔] Merge remote-tracking branch 'upstream/master'
    f248bef7 [张宇翔] Merge remote-tracking branch 'upstream/master'
    5ffb54f3 [张宇翔] Add kyuubi-spark-monitor module for nightly.yml
    
    Authored-by: 张宇翔 <zh...@126.com>
    Signed-off-by: ulysses-you <ul...@apache.org>
---
 .../main/resources/sql-classification-default.json |  59 ++-
 .../KyuubiGetSqlClassification.scala               |  25 +-
 .../test/resources/sql-classification-default.json |  59 ++-
 .../apache/spark/sql/KyuubiExtensionSuite.scala    | 426 ++++++++++++++++++++-
 4 files changed, 528 insertions(+), 41 deletions(-)

diff --git a/dev/kyuubi-extension-spark-3-1/src/main/resources/sql-classification-default.json b/dev/kyuubi-extension-spark-3-1/src/main/resources/sql-classification-default.json
index 1280ad3..23e6bd5 100644
--- a/dev/kyuubi-extension-spark-3-1/src/main/resources/sql-classification-default.json
+++ b/dev/kyuubi-extension-spark-3-1/src/main/resources/sql-classification-default.json
@@ -1,34 +1,63 @@
 {
   "AlterDatabasePropertiesCommand": "ddl",
-  "AlterTableRenameCommand": "ddl",
-  "AlterTableRenamePartitionCommand": "ddl",
   "AlterTableAddColumnsCommand": "ddl",
-  "AlterTableChangeColumnCommand": "ddl",
   "AlterTableAddPartitionCommand": "ddl",
+  "AlterTableChangeColumnCommand": "ddl",
   "AlterTableDropPartitionCommand": "ddl",
-  "AlterTableSetPropertiesCommand": "ddl",
-  "AlterTableSetLocationCommand": "ddl",
-  "AlterTableSerDePropertiesCommand": "ddl",
-  "AlterTableUnsetPropertiesCommand": "ddl",
+  "AlterTableRecoverPartitionsCommand": "ddl",
+  "AlterTableRenameCommand": "ddl",
   "AlterTableRenameCommand": "ddl",
+  "AlterTableRenamePartitionCommand": "ddl",
+  "AlterTableSerDePropertiesCommand": "ddl",
+  "AlterTableSetLocationCommand": "ddl",
   "AlterTableSetPropertiesCommand": "ddl",
+  "AlterTableSetPropertiesCommand": "ddl",
+  "AlterTableUnsetPropertiesCommand": "ddl",
   "AlterTableUnsetPropertiesCommand": "ddl",
   "AlterViewAsCommand": "ddl",
+  "CreateDataSourceTableAsSelectCommand": "ddl",
+  "CreateDataSourceTableCommand": "ddl",
   "CreateDatabaseCommand": "ddl",
   "CreateFunctionCommand": "ddl",
-  "CreateDataSourceTableCommand": "ddl",
-  "CreateDataSourceTableAsSelectCommand": "ddl",
   "CreateTableLikeCommand": "ddl",
   "CreateViewCommand": "ddl",
   "DropDatabaseCommand": "ddl",
   "DropFunctionCommand": "ddl",
-  "NoopDropTable": "ddl",
   "DropTableCommand": "ddl",
-  "TruncateTableCommand": "ddl",
-  "AlterTableRecoverPartitionsCommand": "ddl",
+  "NoopDropTable": "ddl",
   "SetCatalogAndNamespace": "ddl",
+  "TruncateTableCommand": "ddl",
+  "InsertIntoDataSourceDirCommand": "dml",
   "InsertIntoHadoopFsRelationCommand": "dml",
+  "InsertIntoHiveDirCommand": "dml",
   "LoadDataCommand": "dml",
-  "InsertIntoDataSourceDirCommand": "dml",
-  "InsertIntoHiveDirCommand": "dml"
-}
\ No newline at end of file
+  "AddFileCommand": "other",
+  "AddJarCommand": "other",
+  "AnalyzeColumnCommand": "other",
+  "AnalyzePartitionCommand": "other",
+  "AnalyzeTable": "other",
+  "CacheTableCommand": "other",
+  "ClearCacheCommand$": "other",
+  "DescribeColumnCommand": "other",
+  "DescribeDatabaseCommand": "other",
+  "DescribeFunctionCommand": "other",
+  "DescribeQueryCommand": "other",
+  "DescribeTableCommand": "other",
+  "ExplainCommand": "other",
+  "ListFilesCommand": "other",
+  "ListJarsCommand": "other",
+  "RefreshFunctionCommand": "other",
+  "RefreshResource": "other",
+  "RefreshTableCommand": "other",
+  "ResetCommand": "other",
+  "SetCommand": "other",
+  "ShowColumnsCommand": "other",
+  "ShowCreateTableCommand": "other",
+  "ShowFunctionsCommand": "other",
+  "ShowNamespaces": "other",
+  "ShowPartitionsCommand": "other",
+  "ShowTablePropertiesCommand": "other",
+  "ShowTablesCommand": "other",
+  "ShowViewsCommand": "other",
+  "UncacheTableCommand": "other"
+}
diff --git a/dev/kyuubi-extension-spark-3-1/src/main/scala/org/apache/kyuubi/sql/sqlclassification/KyuubiGetSqlClassification.scala b/dev/kyuubi-extension-spark-3-1/src/main/scala/org/apache/kyuubi/sql/sqlclassification/KyuubiGetSqlClassification.scala
index ad3db10..fd17c5a 100644
--- a/dev/kyuubi-extension-spark-3-1/src/main/scala/org/apache/kyuubi/sql/sqlclassification/KyuubiGetSqlClassification.scala
+++ b/dev/kyuubi-extension-spark-3-1/src/main/scala/org/apache/kyuubi/sql/sqlclassification/KyuubiGetSqlClassification.scala
@@ -18,6 +18,7 @@
 package org.apache.kyuubi.sql.sqlclassification
 
 import java.io.File
+import java.net.URL
 
 import com.fasterxml.jackson.databind.{JsonNode, ObjectMapper}
 import org.apache.spark.sql.internal.SQLConf
@@ -28,22 +29,26 @@ import org.apache.kyuubi.sql.KyuubiSQLConf._
  * This object is used for getting sql_classification by the logicalPlan's simpleName.
  * When the configuration item: SQL_CLASSIFICATION_ENABLED is on,
  * we will load the rule from sql-classification-default.json.
+ *
+ * Notice:
+ *  Users may supply a self-defined matching rule file: sql-classification.json.
+ *  If no file with that name is found on the classpath,
+ *  the service will load the default matching rule: sql-classification-default.json.
  */
 object KyuubiGetSqlClassification {
 
   private val jsonNode: Option[JsonNode] = {
     SQLConf.get.getConf(SQL_CLASSIFICATION_ENABLED) match {
       case true =>
-        try {
-          val defaultSqlClassificationFile =
-            Thread.currentThread().getContextClassLoader
-              .getResource("sql-classification-default.json").getPath
-          val objectMapper = new ObjectMapper
-          Some(objectMapper.readTree(new File(defaultSqlClassificationFile)))
-        } catch {
-          case e: Exception =>
-            throw new IllegalArgumentException("sql-classification-default.json is not exist.", e)
+        val objectMapper = new ObjectMapper
+        var url: URL = Thread.currentThread().getContextClassLoader
+          .getResource("sql-classification.json")
+        if (url == null) {
+          url = Thread.currentThread().getContextClassLoader
+            .getResource("sql-classification-default.json")
         }
+        val defaultSqlClassificationFile = url.getPath
+        Some(objectMapper.readTree(new File(defaultSqlClassificationFile)))
       case false =>
         None
     }
@@ -60,7 +65,7 @@ object KyuubiGetSqlClassification {
     jsonNode.map { json =>
       val sqlClassififation = json.get(simpleName)
       if (sqlClassififation == null) {
-        "undefined"
+        "dql"
       } else {
         sqlClassififation.asText()
       }
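
A minimal sketch of the lookup order implemented above, assuming only that the
rule files are resolvable from the context classloader and that Jackson is on
the classpath; `ClassificationSketch` and `classify` are illustrative names,
not part of the patch:

    import java.net.URL
    import com.fasterxml.jackson.databind.ObjectMapper

    object ClassificationSketch {
      // Prefer the user-supplied rule file; fall back to the bundled default.
      private def ruleUrl: URL = {
        val cl = Thread.currentThread().getContextClassLoader
        Option(cl.getResource("sql-classification.json"))
          .getOrElse(cl.getResource("sql-classification-default.json"))
      }

      // Simple names missing from the rule file resolve to "dql"
      // (previously "undefined", per the hunk above).
      def classify(simpleName: String): String = {
        val node = new ObjectMapper().readTree(ruleUrl)
        Option(node.get(simpleName)).map(_.asText()).getOrElse("dql")
      }
    }
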
diff --git a/dev/kyuubi-extension-spark-3-1/src/test/resources/sql-classification-default.json b/dev/kyuubi-extension-spark-3-1/src/test/resources/sql-classification-default.json
index 1280ad3..23e6bd5 100644
--- a/dev/kyuubi-extension-spark-3-1/src/test/resources/sql-classification-default.json
+++ b/dev/kyuubi-extension-spark-3-1/src/test/resources/sql-classification-default.json
@@ -1,34 +1,63 @@
 {
   "AlterDatabasePropertiesCommand": "ddl",
-  "AlterTableRenameCommand": "ddl",
-  "AlterTableRenamePartitionCommand": "ddl",
   "AlterTableAddColumnsCommand": "ddl",
-  "AlterTableChangeColumnCommand": "ddl",
   "AlterTableAddPartitionCommand": "ddl",
+  "AlterTableChangeColumnCommand": "ddl",
   "AlterTableDropPartitionCommand": "ddl",
-  "AlterTableSetPropertiesCommand": "ddl",
-  "AlterTableSetLocationCommand": "ddl",
-  "AlterTableSerDePropertiesCommand": "ddl",
-  "AlterTableUnsetPropertiesCommand": "ddl",
+  "AlterTableRecoverPartitionsCommand": "ddl",
+  "AlterTableRenameCommand": "ddl",
   "AlterTableRenameCommand": "ddl",
+  "AlterTableRenamePartitionCommand": "ddl",
+  "AlterTableSerDePropertiesCommand": "ddl",
+  "AlterTableSetLocationCommand": "ddl",
   "AlterTableSetPropertiesCommand": "ddl",
+  "AlterTableSetPropertiesCommand": "ddl",
+  "AlterTableUnsetPropertiesCommand": "ddl",
   "AlterTableUnsetPropertiesCommand": "ddl",
   "AlterViewAsCommand": "ddl",
+  "CreateDataSourceTableAsSelectCommand": "ddl",
+  "CreateDataSourceTableCommand": "ddl",
   "CreateDatabaseCommand": "ddl",
   "CreateFunctionCommand": "ddl",
-  "CreateDataSourceTableCommand": "ddl",
-  "CreateDataSourceTableAsSelectCommand": "ddl",
   "CreateTableLikeCommand": "ddl",
   "CreateViewCommand": "ddl",
   "DropDatabaseCommand": "ddl",
   "DropFunctionCommand": "ddl",
-  "NoopDropTable": "ddl",
   "DropTableCommand": "ddl",
-  "TruncateTableCommand": "ddl",
-  "AlterTableRecoverPartitionsCommand": "ddl",
+  "NoopDropTable": "ddl",
   "SetCatalogAndNamespace": "ddl",
+  "TruncateTableCommand": "ddl",
+  "InsertIntoDataSourceDirCommand": "dml",
   "InsertIntoHadoopFsRelationCommand": "dml",
+  "InsertIntoHiveDirCommand": "dml",
   "LoadDataCommand": "dml",
-  "InsertIntoDataSourceDirCommand": "dml",
-  "InsertIntoHiveDirCommand": "dml"
-}
\ No newline at end of file
+  "AddFileCommand": "other",
+  "AddJarCommand": "other",
+  "AnalyzeColumnCommand": "other",
+  "AnalyzePartitionCommand": "other",
+  "AnalyzeTable": "other",
+  "CacheTableCommand": "other",
+  "ClearCacheCommand$": "other",
+  "DescribeColumnCommand": "other",
+  "DescribeDatabaseCommand": "other",
+  "DescribeFunctionCommand": "other",
+  "DescribeQueryCommand": "other",
+  "DescribeTableCommand": "other",
+  "ExplainCommand": "other",
+  "ListFilesCommand": "other",
+  "ListJarsCommand": "other",
+  "RefreshFunctionCommand": "other",
+  "RefreshResource": "other",
+  "RefreshTableCommand": "other",
+  "ResetCommand": "other",
+  "SetCommand": "other",
+  "ShowColumnsCommand": "other",
+  "ShowCreateTableCommand": "other",
+  "ShowFunctionsCommand": "other",
+  "ShowNamespaces": "other",
+  "ShowPartitionsCommand": "other",
+  "ShowTablePropertiesCommand": "other",
+  "ShowTablesCommand": "other",
+  "ShowViewsCommand": "other",
+  "UncacheTableCommand": "other"
+}
diff --git a/dev/kyuubi-extension-spark-3-1/src/test/scala/org/apache/spark/sql/KyuubiExtensionSuite.scala b/dev/kyuubi-extension-spark-3-1/src/test/scala/org/apache/spark/sql/KyuubiExtensionSuite.scala
index 2f64587..1261468 100644
--- a/dev/kyuubi-extension-spark-3-1/src/test/scala/org/apache/spark/sql/KyuubiExtensionSuite.scala
+++ b/dev/kyuubi-extension-spark-3-1/src/test/scala/org/apache/spark/sql/KyuubiExtensionSuite.scala
@@ -26,6 +26,7 @@ import org.apache.spark.sql.hive.execution.OptimizedCreateHiveTableAsSelectComma
 import org.apache.spark.sql.internal.{SQLConf, StaticSQLConf}
 import org.apache.spark.sql.test.SQLTestData.TestData
 import org.apache.spark.sql.test.SQLTestUtils
+import scala.collection.mutable.Set
 
 import org.apache.kyuubi.sql.{FinalStageConfigIsolation, KyuubiSQLConf}
 import org.apache.kyuubi.sql.KyuubiSQLExtensionException
@@ -450,6 +451,20 @@ class KyuubiExtensionSuite extends QueryTest with SQLTestUtils with AdaptiveSpar
     }
   }
 
+  test("Sql classification for other and dql") {
+    withSQLConf(KyuubiSQLConf.SQL_CLASSIFICATION_ENABLED.key -> "true") {
+      val df01 = sql("SET spark.sql.variable.substitute=false")
+      assert(df01.sparkSession.conf.get("kyuubi.spark.sql.classification") === "other")
+
+      val sql02 = "select timestamp'2021-06-01'"
+      val df02 = sql(sql02)
+
+      assert(df02.sparkSession.conf.get("kyuubi.spark.sql.classification") === "dql")
+    }
+  }
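+
+  // Illustration: with SQL_CLASSIFICATION_ENABLED, "SET ..." analyzes to
+  // SetCommand, which the rule file maps to "other", while a plain SELECT
+  // has no entry in the rule file and falls back to "dql".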
+
+  // TODO: #1064
+  // TODO: The matching rule for sql classification should be generated automatically not manually
   test("get simple name for DDL") {
 
     import scala.collection.mutable.Set
@@ -665,6 +680,13 @@ class KyuubiExtensionSuite extends QueryTest with SQLTestUtils with AdaptiveSpar
       ).getClass.getSimpleName
     )
 
+    val sql27 = "TRUNCATE TABLE StudentInfo partition(age=10);"
+    ddlSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql27)
+      ).getClass.getSimpleName
+    )
+
     // REPAIR TABLE
     val sql25 = "MSCK REPAIR TABLE StudentInfo;"
     ddlSimpleName.add(
@@ -673,10 +695,12 @@ class KyuubiExtensionSuite extends QueryTest with SQLTestUtils with AdaptiveSpar
       ).getClass.getSimpleName
     )
     // scalastyle:off println
-    println("ddl simple name is :" + ddlSimpleName)
+    println("ddl simple name is :" + ddlSimpleName.toSeq.sorted)
     // scalastyle:on println
   }
 
+  // TODO: #1064
+  // TODO: The matching rule for sql classification should be generated automatically not manually
   test("get simple name for DML") {
     import scala.collection.mutable.Set
     val dmlSimpleName: Set[String] = Set()
@@ -1029,6 +1053,406 @@ class KyuubiExtensionSuite extends QueryTest with SQLTestUtils with AdaptiveSpar
     }
   }
 
+  // TODO: #1064
+  // TODO: The matching rule for sql classification should be generated automatically not manually
+  test("get simple name for auxiliary statement") {
+    val auxiStatementSimpleName: Set[String] = Set()
+
+    var pre_sql = "CREATE TABLE IF NOT EXISTS students_testtest " +
+      "(name STRING, student_id INT) PARTITIONED BY (student_id);"
+    spark.sql(pre_sql)
+    pre_sql = "INSERT INTO students_testtest PARTITION (student_id = 111111) VALUES ('Mark');"
+    spark.sql(pre_sql)
+    pre_sql = "INSERT INTO students_testtest PARTITION (student_id = 222222) VALUES ('John');"
+    spark.sql(pre_sql)
+
+    val sql01 = "ANALYZE TABLE students_testtest COMPUTE STATISTICS NOSCAN;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql01)
+      ).getClass.getSimpleName
+    )
+
+    val sql48 = "ANALYZE TABLE students_testtest PARTITION " +
+      "(student_id = 111111) COMPUTE STATISTICS;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql48)
+      ).getClass.getSimpleName
+    )
+
+    val sql49 = "ANALYZE TABLE students_testtest COMPUTE STATISTICS FOR COLUMNS name;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql49)
+      ).getClass.getSimpleName
+    )
+
+    val sql29 = "SHOW PARTITIONS students_testtest;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql29)
+      ).getClass.getSimpleName
+    )
+
+    val sql30 = "SHOW PARTITIONS students_testtest PARTITION (student_id = 111111);"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql30)
+      ).getClass.getSimpleName
+    )
+
+    val sql31 = "SHOW TABLE EXTENDED LIKE 'students_testtest';"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql31)
+      ).getClass.getSimpleName
+    )
+
+    val sql32 = "SHOW TABLES;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql32)
+      ).getClass.getSimpleName
+    )
+
+    val sql33 = "SHOW TABLES FROM default;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql33)
+      ).getClass.getSimpleName
+    )
+
+    val sql02 = "CACHE TABLE testCache OPTIONS ('storageLevel' 'DISK_ONLY') " +
+      "SELECT * FROM students_testtest;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql02)
+      ).getClass.getSimpleName
+    )
+
+    val sql03 = "CACHE LAZY TABLE testCache OPTIONS ('storageLevel' 'DISK_ONLY') " +
+      "SELECT * FROM students_testtest;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql03)
+      ).getClass.getSimpleName
+    )
+
+    val sql09 = "REFRESH \"hdfs://path/to/table\""
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql09)
+      ).getClass.getSimpleName
+    )
+
+    val sql04 = "UNCACHE TABLE IF EXISTS testCache;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql04)
+      ).getClass.getSimpleName
+    )
+
+    val sql05 = "CLEAR CACHE;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql05)
+      ).getClass.getSimpleName
+    )
+
+    val sql06 = "REFRESH TABLE students_testtest;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql06)
+      ).getClass.getSimpleName
+    )
+
+    pre_sql = "CREATE OR REPLACE VIEW students_testtest_view " +
+      "AS SELECT name, student_id FROM students_testtest;"
+    spark.sql(pre_sql)
+    val sql07 = "REFRESH TABLE default.students_testtest_view;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql07)
+      ).getClass.getSimpleName
+    )
+
+    val sql35 = "SHOW VIEWS;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql35)
+      ).getClass.getSimpleName
+    )
+
+    pre_sql = "DROP VIEW IF EXISTS students_testtest_view;"
+    spark.sql(pre_sql)
+
+    pre_sql = "CREATE FUNCTION IF NOT EXISTS test_avg AS " +
+      "'org.apache.hadoop.hive.ql.udf.generic.GenericUDAFAverage';"
+    spark.sql(pre_sql)
+    val sql08 = "REFRESH FUNCTION test_avg;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql08)
+      ).getClass.getSimpleName
+    )
+
+    val sql15 = "DESC FUNCTION test_avg;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql15)
+      ).getClass.getSimpleName
+    )
+
+    val sql26 = "SHOW FUNCTIONS test_avg;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql26)
+      ).getClass.getSimpleName
+    )
+
+    val sql27 = "SHOW SYSTEM FUNCTIONS concat;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql27)
+      ).getClass.getSimpleName
+    )
+
+    val sql28 = "SHOW FUNCTIONS LIKE 'test*';"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql28)
+      ).getClass.getSimpleName
+    )
+
+    pre_sql = "DROP FUNCTION IF EXISTS test_avg;"
+    spark.sql(pre_sql)
+
+    pre_sql = "CREATE DATABASE IF NOT EXISTS employees COMMENT 'For software companies';"
+    spark.sql(pre_sql)
+    val sql10 = "DESCRIBE DATABASE employees;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql10)
+      ).getClass.getSimpleName
+    )
+
+    val sql23 = "SHOW DATABASES;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql23)
+      ).getClass.getSimpleName
+    )
+
+    val sql24 = "SHOW DATABASES LIKE 'employ*';"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql24)
+      ).getClass.getSimpleName
+    )
+
+    val sql25 = "SHOW SCHEMAS;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql25)
+      ).getClass.getSimpleName
+    )
+
+    pre_sql = "ALTER DATABASE employees SET DBPROPERTIES " +
+      "('Create-by' = 'Kevin', 'Create-date' = '09/01/2019');"
+    val sql11 = "DESCRIBE DATABASE EXTENDED employees;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql11)
+      ).getClass.getSimpleName
+    )
+    pre_sql = "DROP DATABASE IF EXISTS employees;"
+    spark.sql(pre_sql)
+
+    pre_sql = "CREATE TABLE IF NOT EXISTS customer" +
+      "(cust_id INT, state VARCHAR(20), name STRING COMMENT 'Short name') " +
+      "USING parquet PARTITIONED BY (state);"
+    spark.sql(pre_sql)
+    pre_sql = "INSERT INTO customer PARTITION (state = 'AR') VALUES (100, 'Mike');"
+    val sql12 = "DESCRIBE TABLE customer;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql12)
+      ).getClass.getSimpleName
+    )
+
+    val sql21 = "SHOW COLUMNS IN customer;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql21)
+      ).getClass.getSimpleName
+    )
+
+    val sql22 = "SHOW CREATE TABLE customer;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql22)
+      ).getClass.getSimpleName
+    )
+
+    val sql13 = "DESCRIBE TABLE EXTENDED customer PARTITION (state = 'AR');"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql13)
+      ).getClass.getSimpleName
+    )
+
+    val sql14 = "DESCRIBE customer salesdb.customer.name;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql14)
+      ).getClass.getSimpleName
+    )
+    pre_sql = "DROP TABLE IF EXISTS customer;"
+    spark.sql(pre_sql)
+
+    pre_sql = "CREATE TABLE IF NOT EXISTS person " +
+      "(name STRING , age INT COMMENT 'Age column', address STRING);"
+    spark.sql(pre_sql)
+    val sql16 = "DESCRIBE QUERY SELECT age, sum(age) FROM person GROUP BY age;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql16)
+      ).getClass.getSimpleName
+    )
+
+    val sql17 = "DESCRIBE QUERY WITH all_names_cte AS " +
+      "(SELECT name from person) SELECT * FROM all_names_cte;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql17)
+      ).getClass.getSimpleName
+    )
+
+    val sql18 = "DESC QUERY VALUES(100, 'John', 10000.20D) " +
+      "AS employee(id, name, salary);"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql18)
+      ).getClass.getSimpleName
+    )
+
+    val sql19 = "DESC QUERY TABLE person;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql19)
+      ).getClass.getSimpleName
+    )
+
+    val sql20 = "DESCRIBE FROM person SELECT age;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql20)
+      ).getClass.getSimpleName
+    )
+    pre_sql = "DROP TABLE IF EXISTS person;"
+    spark.sql(pre_sql)
+
+    pre_sql = "CREATE TABLE IF NOT EXISTS customer" +
+      "(cust_code INT, name VARCHAR(100), cust_addr STRING) " +
+      "TBLPROPERTIES ('created.by.user' = 'John', 'created.date' = '01-01-2001');"
+    spark.sql(pre_sql)
+    val sql34 = "SHOW TBLPROPERTIES customer;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql34)
+      ).getClass.getSimpleName
+    )
+
+    val sql36 = "SET spark.sql.variable.substitute=false"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql36)
+      ).getClass.getSimpleName
+    )
+
+    val sql37 = "SET"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql37)
+      ).getClass.getSimpleName
+    )
+
+    val sql38 = "SET spark.sql.variable.substitute"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql38)
+      ).getClass.getSimpleName
+    )
+
+    val sql39 = "RESET"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql39)
+      ).getClass.getSimpleName
+    )
+
+    val sql40 = "RESET spark.sql.variable.substitute"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql40)
+      ).getClass.getSimpleName
+    )
+
+    val sql41 = "SET TIME ZONE LOCAL;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql41)
+      ).getClass.getSimpleName
+    )
+
+    val sql42 = "ADD FILE /tmp/test;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql42)
+      ).getClass.getSimpleName
+    )
+
+    val sql43 = "ADD JAR /tmp/test.jar;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql43)
+      ).getClass.getSimpleName
+    )
+
+    val sql44 = "LIST FILE;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql44)
+      ).getClass.getSimpleName
+    )
+
+    val sql45 = "LIST JAR;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql45)
+      ).getClass.getSimpleName
+    )
+
+    val sql46 = "EXPLAIN select k, sum(v) from values (1, 2), (1, 3) t(k, v) group by k;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql46)
+      ).getClass.getSimpleName
+    )
+
+    val sql47 = "EXPLAIN EXTENDED select k, sum(v) from values (1, 2), (1, 3) t(k, v) group by k;"
+    auxiStatementSimpleName.add(
+      spark.sessionState.analyzer.execute(
+        spark.sessionState.sqlParser.parsePlan(sql47)
+      ).getClass.getSimpleName
+    )
+
+    // scalastyle:off println
+    println("auxiliary statement simple name is :" + auxiStatementSimpleName.toSeq.sorted)
+    // scalastyle:on println
+  }
+
   test("optimize zorder with datasource table") {
     // TODO remove this if we support datasource table
     withTable("t") {