Posted to commits@impala.apache.org by jo...@apache.org on 2019/05/01 15:46:34 UTC

[impala] branch master updated (d820952 -> eb97c74)

This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git.


    from d820952  IMPALA-8469: admit_mem_limit for dedicated coordinator
     new b4ff801  Bump Kudu version to 9ba901a
     new a89762b  IMPALA-8369 : Impala should be able to interoperate with Hive 3.1.0
     new 61e7ff1  IMPALA-8475: Fix unbound CMAKE_BUILD_TYPE_LIST in buildall.sh
     new eb97c74  IMPALA-8293 (Part 2): Add support for Ranger cache invalidation

The 4 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 CMakeLists.txt                                     |   1 +
 README.md                                          |   1 -
 be/src/catalog/catalog-util.cc                     |   6 +
 bin/bootstrap_toolchain.py                         |   4 +-
 bin/impala-config.sh                               |  86 +--
 buildall.sh                                        |   2 +-
 common/thrift/.gitignore                           |   1 +
 common/thrift/CMakeLists.txt                       |   9 +-
 common/thrift/CatalogObjects.thrift                |  13 +
 fe/CMakeLists.txt                                  |   2 +-
 fe/pom.xml                                         | 436 +++++++++------
 .../org/apache/impala/compat/MetastoreShim.java    | 102 +++-
 .../impala/compat/HiveMetadataFormatUtils.java     | 612 +++++++++++++++++++++
 .../org/apache/impala/compat/MetastoreShim.java    | 367 ++++++++++++
 .../org/apache/impala/analysis/StringLiteral.java  |   4 +-
 .../impala/authorization/AuthorizationChecker.java |   5 +
 .../authorization/NoopAuthorizationFactory.java    |   3 +
 .../ranger/RangerAuthorizationChecker.java         |  16 +
 .../ranger/RangerAuthorizationFactory.java         |   2 +-
 .../ranger/RangerCatalogdAuthorizationManager.java |  33 +-
 .../sentry/SentryAuthorizationChecker.java         |   5 +
 ...sCachePool.java => AuthzCacheInvalidation.java} |  43 +-
 .../java/org/apache/impala/catalog/Catalog.java    |  27 +
 .../impala/catalog/CatalogServiceCatalog.java      |  68 +++
 .../org/apache/impala/catalog/FeHBaseTable.java    |  29 +-
 .../org/apache/impala/catalog/ImpaladCatalog.java  |  31 +-
 .../org/apache/impala/catalog/TableLoader.java     |   7 +-
 .../impala/catalog/events/MetastoreEvents.java     |  33 +-
 .../catalog/events/MetastoreEventsProcessor.java   |  18 +-
 .../impala/catalog/local/CatalogdMetaProvider.java |  29 +-
 .../impala/catalog/local/DirectMetaProvider.java   |   4 +-
 .../apache/impala/hive/executor/UdfExecutor.java   |   2 +
 .../apache/impala/service/CatalogOpExecutor.java   |   2 +-
 .../impala/service/DescribeResultFactory.java      |   8 +-
 .../apache/impala/service/FeCatalogManager.java    |  12 +-
 .../java/org/apache/impala/service/Frontend.java   |   1 +
 .../java/org/apache/impala/service/MetadataOp.java |  32 +-
 .../impala/analysis/AuthorizationStmtTest.java     |   2 +-
 .../events/MetastoreEventsProcessorTest.java       | 105 ++--
 .../org/apache/impala/common/FrontendTestBase.java |   3 +
 .../impala/hive/executor/UdfExecutorTest.java      |   3 +-
 .../testutil/EmbeddedMetastoreClientPool.java      |  24 +-
 .../apache/impala/testutil/ImpaladTestCatalog.java |   4 +-
 fe/src/test/resources/ranger-hive-security.xml     |   2 +-
 impala-parent/pom.xml                              |   9 +
 shaded-deps/.gitignore                             |   1 +
 {impala-parent => shaded-deps}/CMakeLists.txt      |   2 +-
 shaded-deps/pom.xml                                | 108 ++++
 testdata/bin/run-hive-server.sh                    |   5 +-
 tests/authorization/test_ranger.py                 | 103 ++--
 tests/custom_cluster/test_permanent_udfs.py        |   6 +
 51 files changed, 1990 insertions(+), 443 deletions(-)
 rename fe/src/{main => compat-hive-2}/java/org/apache/impala/compat/MetastoreShim.java (55%)
 create mode 100644 fe/src/compat-hive-3/java/org/apache/impala/compat/HiveMetadataFormatUtils.java
 create mode 100644 fe/src/compat-hive-3/java/org/apache/impala/compat/MetastoreShim.java
 copy fe/src/main/java/org/apache/impala/catalog/{HdfsCachePool.java => AuthzCacheInvalidation.java} (52%)
 create mode 100644 shaded-deps/.gitignore
 copy {impala-parent => shaded-deps}/CMakeLists.txt (93%)
 create mode 100644 shaded-deps/pom.xml


[impala] 02/04: IMPALA-8369 : Impala should be able to interoperate with Hive 3.1.0

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit a89762bc014e97095c5108fd5785f51bf54a7a5d
Author: Vihang Karajgaonkar <vi...@cloudera.com>
AuthorDate: Mon Apr 1 17:47:51 2019 -0700

    IMPALA-8369 : Impala should be able to interoperate with Hive 3.1.0
    
    This change adds a compatibility shim in the fe module so that Impala
    can interoperate with Hive 3.1.0. It moves the existing MetastoreShim
    class to a compat-hive-2 directory and adds a new MetastoreShim class
    under a compat-hive-3 directory. These shim classes implement the
    methods that differ between Hive 2 and Hive 3 and are used by the
    frontend code. At build time, based on the environment variable
    IMPALA_HIVE_MAJOR_VERSION, one of the two shim directories is added as
    a source root by the fe/pom.xml build plugin.
    
    Additionally, in order to reduce Hive's dependency footprint in the
    frontend code, this patch also introduces a new module called
    shaded-deps. This module uses the Maven shade plugin to include only
    the files from hive-exec that are needed by the fe code. For the
    Hive-2 build path, no changes are made to the Hive dependencies, to
    minimize the risk of destabilizing the master branch on the default
    build option of using Hive-2.
    
    The different sets of dependencies are activated using Maven profiles.
    Each profile is activated automatically based on
    IMPALA_HIVE_MAJOR_VERSION.
    
    Testing:
    1. Code compiles and runs against both HMS-3 and HMS-2
    2. Ran the full suite of tests using the private Jenkins job against
    HMS-2.
    3. Running the full tests against HMS-3 will need more work, such as
    supporting Tez in the minicluster (for data loading) and HMS
    transaction support, since HMS-3 creates transactional tables by
    default. This will be an ongoing effort, and test failures on Hive-3
    will be fixed in additional sub-tasks.
    
    Notes:
    1. The patch uses a custom build of Hive to be deployed in the
    minicluster. This build has the fixes for HIVE-21596. This hack will
    be removed once the patches are available in official CDP Hive builds.
    2. Some of the existing tests rely on the fact that certain UDFs
    implement the legacy UDF interface in Hive (UDFLength, UDFHour,
    UDFYear). These built-in Hive functions have been moved to the
    GenericUDF interface in Hive 3. Impala currently only supports
    UDFExecutor, so to have full compatibility with all the functions in
    Hive 2.x we should support GenericUDFs too. That will be taken up in a
    separate patch (a minimal example of the legacy UDF shape follows
    these notes).
    3. The Sentry dependencies bring in a lot of transitive Hive
    dependencies. The patch excludes such dependencies since they create
    problems when building against Hive-3. Since these Hive-2 dependencies
    are already included when building against Hive-2, this should not be
    a problem.
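
    As an illustrative aside (not part of this change): Impala's
    UDFExecutor handles Hive functions written against the legacy UDF
    interface, i.e. a class extending org.apache.hadoop.hive.ql.exec.UDF
    whose evaluate() methods are found by reflection. A minimal example of
    that shape, with a hypothetical function name:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical legacy-style Hive UDF. Built-ins such as UDFLength,
    // UDFHour and UDFYear followed this pattern in Hive 2 but were moved
    // to the GenericUDF interface in Hive 3, which UDFExecutor does not
    // yet support.
    public class MyUpperUdf extends UDF {
      public Text evaluate(Text input) {
        if (input == null) return null;
        return new Text(input.toString().toUpperCase());
      }
    }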
    
    Change-Id: I45a4dadbdfe30a02f722dbd917a49bc182fc6436
    Reviewed-on: http://gerrit.cloudera.org:8080/13005
    Reviewed-by: Joe McDonnell <jo...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 CMakeLists.txt                                     |   1 +
 README.md                                          |   1 -
 bin/bootstrap_toolchain.py                         |   4 +-
 bin/impala-config.sh                               |  82 +--
 common/thrift/.gitignore                           |   1 +
 common/thrift/CMakeLists.txt                       |   9 +-
 fe/CMakeLists.txt                                  |   2 +-
 fe/pom.xml                                         | 436 +++++++++------
 .../org/apache/impala/compat/MetastoreShim.java    | 102 +++-
 .../impala/compat/HiveMetadataFormatUtils.java     | 612 +++++++++++++++++++++
 .../org/apache/impala/compat/MetastoreShim.java    | 367 ++++++++++++
 .../org/apache/impala/analysis/StringLiteral.java  |   4 +-
 .../org/apache/impala/catalog/FeHBaseTable.java    |  29 +-
 .../org/apache/impala/catalog/TableLoader.java     |   7 +-
 .../impala/catalog/events/MetastoreEvents.java     |  33 +-
 .../catalog/events/MetastoreEventsProcessor.java   |  18 +-
 .../impala/catalog/local/DirectMetaProvider.java   |   4 +-
 .../apache/impala/hive/executor/UdfExecutor.java   |   2 +
 .../apache/impala/service/CatalogOpExecutor.java   |   2 +-
 .../impala/service/DescribeResultFactory.java      |   8 +-
 .../java/org/apache/impala/service/MetadataOp.java |  32 +-
 .../events/MetastoreEventsProcessorTest.java       | 105 ++--
 .../impala/hive/executor/UdfExecutorTest.java      |   3 +-
 .../testutil/EmbeddedMetastoreClientPool.java      |  24 +-
 impala-parent/pom.xml                              |   9 +
 shaded-deps/.gitignore                             |   1 +
 shaded-deps/CMakeLists.txt                         |  20 +
 shaded-deps/pom.xml                                | 108 ++++
 testdata/bin/run-hive-server.sh                    |   5 +-
 tests/custom_cluster/test_permanent_udfs.py        |   6 +
 30 files changed, 1694 insertions(+), 343 deletions(-)

diff --git a/CMakeLists.txt b/CMakeLists.txt
index 285e617..ebccb69 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -401,6 +401,7 @@ add_subdirectory(common/yarn-extras)
 add_subdirectory(common/protobuf)
 add_subdirectory(be)
 add_subdirectory(docker)
+add_subdirectory(shaded-deps)
 add_subdirectory(fe)
 add_subdirectory(impala-parent)
 add_subdirectory(ext-data-source)
diff --git a/README.md b/README.md
index 1ea3789..d2df5f1 100644
--- a/README.md
+++ b/README.md
@@ -100,7 +100,6 @@ can do so through the environment variables and scripts listed below.
 | HADOOP_INCLUDE_DIR   | "${HADOOP_HOME}/include" | For 'hdfs.h' |
 | HADOOP_LIB_DIR       | "${HADOOP_HOME}/lib" | For 'libhdfs.a' or 'libhdfs.so' |
 | HIVE_HOME            | "${CDH_COMPONENTS_HOME}/{hive-${IMPALA_HIVE_VERSION}/" | |
-| HIVE_SRC_DIR         | "${HIVE_HOME}/src" | Used to find Hive thrift files. |
 | HBASE_HOME           | "${CDH_COMPONENTS_HOME}/hbase-${IMPALA_HBASE_VERSION}/" | |
 | SENTRY_HOME          | "${CDH_COMPONENTS_HOME}/sentry-${IMPALA_SENTRY_VERSION}/" | Used to setup test data |
 | THRIFT_HOME          | "${IMPALA_TOOLCHAIN}/thrift-${IMPALA_THRIFT_VERSION}" | |
diff --git a/bin/bootstrap_toolchain.py b/bin/bootstrap_toolchain.py
index d51b5cf..9bfd6c8 100755
--- a/bin/bootstrap_toolchain.py
+++ b/bin/bootstrap_toolchain.py
@@ -570,8 +570,10 @@ if __name__ == "__main__":
   ]
   use_cdp_hive = os.getenv("USE_CDP_HIVE") == "true"
   if use_cdp_hive:
+    cdp_components.append(CdpComponent("apache-hive-{0}-src"
+                          .format(os.environ.get("IMPALA_HIVE_VERSION")))),
     cdp_components.append(CdpComponent("apache-hive-{0}-bin"
-                          .format(os.environ.get("IMPALA_HIVE_VERSION"))))
+                          .format(os.environ.get("IMPALA_HIVE_VERSION")))),
     cdp_components.append(CdpComponent(
         "tez-{0}-minimal".format(os.environ.get("IMPALA_TEZ_VERSION")),
         makedir=True))
diff --git a/bin/impala-config.sh b/bin/impala-config.sh
index bcd1a83..ff811df 100755
--- a/bin/impala-config.sh
+++ b/bin/impala-config.sh
@@ -171,22 +171,11 @@ export IMPALA_AVRO_JAVA_VERSION=1.8.2-cdh6.x-SNAPSHOT
 export IMPALA_LLAMA_MINIKDC_VERSION=1.0.0
 export IMPALA_KITE_VERSION=1.0.0-cdh6.x-SNAPSHOT
 export KUDU_JAVA_VERSION=1.10.0-cdh6.x-SNAPSHOT
-export USE_CDP_HIVE=${USE_CDP_HIVE-false}
-if $USE_CDP_HIVE; then
-  export IMPALA_HIVE_VERSION=3.1.0.6.0.99.0-45
-  # Temporary version of Tez, patched with the fix for TEZ-1348:
-  # https://github.com/apache/tez/pull/40
-  # We'll switch to a non-"todd" version of Tez once that fix is integrated.
-  # For now, if you're bumping the CDP build number, you'll need to download
-  # this tarball from an earlier build and re-upload it to the new directory
-  # in the toolchain bucket.
-  #
-  # TODO(todd) switch to an official build.
-  export IMPALA_TEZ_VERSION=0.10.0-todd-6fcc41e5798b.1
-  export TEZ_HOME="$CDP_COMPONENTS_HOME/tez-${IMPALA_TEZ_VERSION}-minimal"
-else
-  export IMPALA_HIVE_VERSION=2.1.1-cdh6.x-SNAPSHOT
-fi
+export CDH_HIVE_VERSION=2.1.1-cdh6.x-SNAPSHOT
+# This is a custom build of Hive which includes patches for HIVE-21586
+# HIVE-21077, HIVE-21526, HIVE-21561
+# TODO: Use an official build once these patches are merged
+export CDP_HIVE_VERSION=3.1.0.6.0.99.0-38-0e7f6337a50
 
 # When IMPALA_(CDH_COMPONENT)_URL are overridden, they may contain '$(platform_label)'
 # which will be substituted for the CDH platform label in bootstrap_toolchain.py
@@ -206,6 +195,33 @@ if [ -f "$IMPALA_HOME/bin/impala-config-local.sh" ]; then
   . "$IMPALA_HOME/bin/impala-config-local.sh"
 fi
 
+export CDH_COMPONENTS_HOME="$IMPALA_TOOLCHAIN/cdh_components-$CDH_BUILD_NUMBER"
+export CDP_COMPONENTS_HOME="$IMPALA_TOOLCHAIN/cdp_components-$CDP_BUILD_NUMBER"
+export USE_CDP_HIVE=${USE_CDP_HIVE-false}
+if $USE_CDP_HIVE; then
+  # When USE_CDP_HIVE is set we use the CDP hive version to build as well as deploy in
+  # the minicluster
+  export IMPALA_HIVE_VERSION=${CDP_HIVE_VERSION}
+  # Temporary version of Tez, patched with the fix for TEZ-1348:
+  # https://github.com/apache/tez/pull/40
+  # We'll switch to a non-"todd" version of Tez once that fix is integrated.
+  # For now, if you're bumping the CDP build number, you'll need to download
+  # this tarball from an earlier build and re-upload it to the new directory
+  # in the toolchain bucket.
+  #
+  # TODO(todd) switch to an official build.
+  export IMPALA_TEZ_VERSION=0.10.0-todd-6fcc41e5798b.1
+else
+  # CDH hive version is used to build and deploy in minicluster when USE_CDP_HIVE is
+  # false
+  export IMPALA_HIVE_VERSION=${CDH_HIVE_VERSION}
+fi
+# Extract the first (major) component of the Hive version. This is used by
+# fe/pom.xml to activate the compatibility shims for Hive-2 or Hive-3 and by
+# common/thrift/CMakeLists.txt to pick the matching hive-<major>-api thrift
+# sources.
+export IMPALA_HIVE_MAJOR_VERSION=$(echo "$IMPALA_HIVE_VERSION" | cut -d . -f 1)
+
 # It is important to have a coherent view of the JAVA_HOME and JAVA executable.
 # The JAVA_HOME should be determined first, then the JAVA executable should be
 # derived from JAVA_HOME. bin/bootstrap_development.sh adds code to
@@ -291,16 +307,29 @@ export EXTERNAL_LISTEN_HOST="${EXTERNAL_LISTEN_HOST-0.0.0.0}"
 export DEFAULT_FS="${DEFAULT_FS-hdfs://${INTERNAL_LISTEN_HOST}:20500}"
 export WAREHOUSE_LOCATION_PREFIX="${WAREHOUSE_LOCATION_PREFIX-}"
 export LOCAL_FS="file:${WAREHOUSE_LOCATION_PREFIX}"
+
 ESCAPED_IMPALA_HOME=$(sed "s/[^0-9a-zA-Z]/_/g" <<< "$IMPALA_HOME")
 if $USE_CDP_HIVE; then
-  # It is likely that devs will want to with both the versions of metastore
+  export HIVE_HOME="$CDP_COMPONENTS_HOME/apache-hive-${IMPALA_HIVE_VERSION}-bin"
+  export HIVE_SRC_DIR=${HIVE_SRC_DIR_OVERRIDE:-"${CDP_COMPONENTS_HOME}/apache-hive-\
+${IMPALA_HIVE_VERSION}-src"}
+  # Set the path to the hive_metastore.thrift which is used to build thrift code
+  export HIVE_METASTORE_THRIFT_DIR=$HIVE_SRC_DIR/standalone-metastore/src/main/thrift
+  # It is likely that devs will want to work with both the versions of metastore
   # if cdp hive is being used change the metastore db name, so we don't have to
   # format the metastore db everytime we switch between hive versions
   export METASTORE_DB=${METASTORE_DB-"$(cut -c-59 <<< HMS$ESCAPED_IMPALA_HOME)_cdp"}
+  export TEZ_HOME="$CDP_COMPONENTS_HOME/tez-${IMPALA_TEZ_VERSION}-minimal"
 else
+  export HIVE_HOME="$CDH_COMPONENTS_HOME/hive-${IMPALA_HIVE_VERSION}"
+  # Allow overriding of Hive source location in case we want to build Impala without
+  # a complete Hive build.
+  export HIVE_SRC_DIR=${HIVE_SRC_DIR_OVERRIDE:-"${HIVE_HOME}/src"}
+  export HIVE_METASTORE_THRIFT_DIR=$HIVE_SRC_DIR/metastore/if
   export METASTORE_DB=${METASTORE_DB-$(cut -c-63 <<< HMS$ESCAPED_IMPALA_HOME)}
 fi
-
+# Set the Hive binaries in the path
+export PATH="$HIVE_HOME/bin:$PATH"
 
 export SENTRY_POLICY_DB=${SENTRY_POLICY_DB-$(cut -c-63 <<< SP$ESCAPED_IMPALA_HOME)}
 if [[ "${TARGET_FILESYSTEM}" == "s3" ]]; then
@@ -492,12 +521,6 @@ export IMPALA_COMMON_DIR="$IMPALA_HOME/common"
 export PATH="$IMPALA_TOOLCHAIN/gdb-$IMPALA_GDB_VERSION/bin:$PATH"
 export PATH="$IMPALA_HOME/bin:$IMPALA_TOOLCHAIN/cmake-$IMPALA_CMAKE_VERSION/bin/:$PATH"
 
-# The directory in which all the thirdparty CDH components live.
-export CDH_COMPONENTS_HOME="$IMPALA_TOOLCHAIN/cdh_components-$CDH_BUILD_NUMBER"
-
-# The directory in which all the thirdparty CDP components live.
-export CDP_COMPONENTS_HOME="$IMPALA_TOOLCHAIN/cdp_components-$CDP_BUILD_NUMBER"
-
 # Typically we build against a snapshot build of Hadoop that includes everything we need
 # for building Impala and running a minicluster.
 export HADOOP_HOME="$CDH_COMPONENTS_HOME/hadoop-${IMPALA_HADOOP_VERSION}/"
@@ -531,17 +554,6 @@ export SENTRY_CONF_DIR="$IMPALA_HOME/fe/src/test/resources"
 export RANGER_HOME="${CDP_COMPONENTS_HOME}/ranger-${IMPALA_RANGER_VERSION}-admin"
 export RANGER_CONF_DIR="$IMPALA_HOME/fe/src/test/resources"
 
-# Extract the first component of the hive version.
-export IMPALA_HIVE_MAJOR_VERSION=$(echo "$IMPALA_HIVE_VERSION" | cut -d . -f 1)
-if $USE_CDP_HIVE; then
-  export HIVE_HOME="$CDP_COMPONENTS_HOME/apache-hive-${IMPALA_HIVE_VERSION}-bin"
-else
-  export HIVE_HOME="$CDH_COMPONENTS_HOME/hive-${IMPALA_HIVE_VERSION}/"
-fi
-export PATH="$HIVE_HOME/bin:$PATH"
-# Allow overriding of Hive source location in case we want to build Impala without
-# a complete Hive build.
-export HIVE_SRC_DIR=${HIVE_SRC_DIR_OVERRIDE:-"${HIVE_HOME}/src"}
 # To configure Hive logging, there's a hive-log4j2.properties[.template]
 # file in fe/src/test/resources. To get it into the classpath earlier
 # than the hive-log4j2.properties file included in some Hive jars,
diff --git a/common/thrift/.gitignore b/common/thrift/.gitignore
index f0ee7f2..0f6c0c5 100644
--- a/common/thrift/.gitignore
+++ b/common/thrift/.gitignore
@@ -2,3 +2,4 @@ Opcodes.thrift
 ErrorCodes.thrift
 MetricDefs.thrift
 hive-2-api/TCLIService.thrift
+hive-3-api/TCLIService.thrift
diff --git a/common/thrift/CMakeLists.txt b/common/thrift/CMakeLists.txt
index dd7e1f3..f297292 100644
--- a/common/thrift/CMakeLists.txt
+++ b/common/thrift/CMakeLists.txt
@@ -172,7 +172,7 @@ set(HIVE_THRIFT_SOURCE_DIR "hive-$ENV{IMPALA_HIVE_MAJOR_VERSION}-api")
 set(TCLI_SERVICE_THRIFT "${HIVE_THRIFT_SOURCE_DIR}/TCLIService.thrift")
 message("Using Thrift compiler: ${THRIFT_COMPILER}")
 message("Using Thrift 11 compiler: ${THRIFT11_COMPILER}")
-set(THRIFT_INCLUDE_DIR_OPTION -I ${THRIFT_CONTRIB_DIR} -I $ENV{HIVE_SRC_DIR}/metastore/if
+set(THRIFT_INCLUDE_DIR_OPTION -I ${THRIFT_CONTRIB_DIR} -I $ENV{HIVE_METASTORE_THRIFT_DIR}
   -I ${HIVE_THRIFT_SOURCE_DIR})
 set(BE_OUTPUT_DIR ${CMAKE_SOURCE_DIR}/be/generated-sources)
 set(FE_OUTPUT_DIR ${CMAKE_SOURCE_DIR}/fe/generated-sources)
@@ -258,6 +258,13 @@ add_custom_command(OUTPUT hive-2-api/TCLIService.thrift
   DEPENDS hive-1-api/TCLIService.thrift
 )
 
+add_custom_command(OUTPUT hive-3-api/TCLIService.thrift
+  COMMAND sed
+      's/namespace java org.apache.hive.service.cli.thrift/namespace java org.apache.hive.service.rpc.thrift/'
+      hive-1-api/TCLIService.thrift > hive-3-api/TCLIService.thrift
+  DEPENDS hive-1-api/TCLIService.thrift
+)
+
 # Create a build command for each of the thrift src files and generate
 # a list of files they produce
 THRIFT_GEN(THRIFT_ALL_FILES ${SRC_FILES})
diff --git a/fe/CMakeLists.txt b/fe/CMakeLists.txt
index c6af7db..5453387 100644
--- a/fe/CMakeLists.txt
+++ b/fe/CMakeLists.txt
@@ -16,7 +16,7 @@
 # under the License.
 
 add_custom_target(fe ALL DEPENDS
-  thrift-deps fb-deps yarn-extras function-registry ext-data-source impala-parent
+  shaded-deps thrift-deps fb-deps yarn-extras function-registry ext-data-source impala-parent
   COMMAND ${CMAKE_SOURCE_DIR}/bin/mvn-quiet.sh -B install -DskipTests
 )
 
diff --git a/fe/pom.xml b/fe/pom.xml
index 920b7e9..2a20e58 100644
--- a/fe/pom.xml
+++ b/fe/pom.xml
@@ -176,6 +176,14 @@ under the License.
           <groupId>net.minidev</groupId>
           <artifactId>json-smart</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>org.apache.hive.hcatalog</groupId>
+          <artifactId>*</artifactId>
+        </exclusion>
+        <exclusion>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>*</artifactId>
+        </exclusion>
       </exclusions>
     </dependency>
 
@@ -214,6 +222,10 @@ under the License.
           <groupId>net.minidev</groupId>
           <artifactId>json-smart</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>org.apache.hive.hcatalog</groupId>
+          <artifactId>*</artifactId>
+        </exclusion>
       </exclusions>
     </dependency>
 
@@ -233,6 +245,10 @@ under the License.
           <groupId>net.minidev</groupId>
           <artifactId>json-smart</artifactId>
         </exclusion>
+        <exclusion>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>*</artifactId>
+        </exclusion>
       </exclusions>
     </dependency>
 
@@ -286,9 +302,6 @@ under the License.
       <version>0.11-a-czt02-cdh</version>
     </dependency>
 
-    <!-- Moved above Hive, because Hive bundles its own Thrift version
-         which supercedes this one if it comes first in the dependency
-         tree -->
     <dependency>
       <groupId>org.apache.thrift</groupId>
       <artifactId>libthrift</artifactId>
@@ -296,165 +309,6 @@ under the License.
     </dependency>
 
     <dependency>
-      <groupId>org.apache.hive</groupId>
-      <artifactId>hive-service</artifactId>
-      <version>${hive.version}</version>
-      <!--
-           hive-service pulls in an incoherent (not ${hadoop.version}) version of
-           hadoop-client via this dependency, which we don't need.
-      -->
-      <exclusions>
-        <exclusion>
-          <groupId>org.apache.hive</groupId>
-          <artifactId>hive-llap-server</artifactId>
-        </exclusion>
-        <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-        <exclusion>
-          <groupId>net.minidev</groupId>
-          <artifactId>json-smart</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>org.apache.calcite.avatica</groupId>
-          <artifactId>avatica</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hive</groupId>
-      <artifactId>hive-serde</artifactId>
-      <version>${hive.version}</version>
-       <exclusions>
-         <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-         <exclusion>
-           <groupId>net.minidev</groupId>
-           <artifactId>json-smart</artifactId>
-         </exclusion>
-      </exclusions>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hive</groupId>
-      <artifactId>hive-exec</artifactId>
-      <version>${hive.version}</version>
-      <exclusions>
-        <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
-        <exclusion>
-          <groupId>org.apache.logging.log4j</groupId>
-          <artifactId>log4j-slf4j-impl</artifactId>
-        </exclusion>
-        <!-- Similarly, avoid pulling in the "Log4j 1.2 Bridge" -->
-        <exclusion>
-          <groupId>org.apache.logging.log4j</groupId>
-          <artifactId>log4j-1.2-api</artifactId>
-        </exclusion>
-        <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-        <exclusion>
-          <groupId>net.minidev</groupId>
-          <artifactId>json-smart</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>org.apache.calcite.avatica</groupId>
-          <artifactId>avatica</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hive</groupId>
-      <artifactId>hive-common</artifactId>
-      <version>${hive.version}</version>
-      <exclusions>
-        <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
-        <exclusion>
-          <groupId>org.apache.logging.log4j</groupId>
-          <artifactId>log4j-slf4j-impl</artifactId>
-        </exclusion>
-        <!-- Similarly, avoid pulling in the "Log4j 1.2 Bridge" -->
-        <exclusion>
-          <groupId>org.apache.logging.log4j</groupId>
-          <artifactId>log4j-1.2-api</artifactId>
-        </exclusion>
-        <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-        <exclusion>
-          <groupId>net.minidev</groupId>
-          <artifactId>json-smart</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hive</groupId>
-      <artifactId>hive-jdbc</artifactId>
-      <version>${hive.version}</version>
-      <scope>test</scope>
-      <exclusions>
-        <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
-        <exclusion>
-          <groupId>org.apache.logging.log4j</groupId>
-          <artifactId>log4j-slf4j-impl</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>net.minidev</groupId>
-          <artifactId>json-smart</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hive</groupId>
-      <artifactId>hive-hbase-handler</artifactId>
-      <version>${hive.version}</version>
-      <exclusions>
-        <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
-        <exclusion>
-          <groupId>org.apache.logging.log4j</groupId>
-          <artifactId>log4j-slf4j-impl</artifactId>
-        </exclusion>
-        <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-        <exclusion>
-          <groupId>net.minidev</groupId>
-          <artifactId>json-smart</artifactId>
-        </exclusion>
-        <exclusion>
-          <groupId>org.apache.calcite.avatica</groupId>
-          <artifactId>avatica</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hive</groupId>
-      <artifactId>hive-metastore</artifactId>
-      <version>${hive.version}</version>
-      <exclusions>
-        <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
-        <exclusion>
-          <groupId>org.apache.logging.log4j</groupId>
-          <artifactId>log4j-slf4j-impl</artifactId>
-        </exclusion>
-        <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
-        <exclusion>
-          <groupId>net.minidev</groupId>
-          <artifactId>json-smart</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.hive.shims</groupId>
-      <artifactId>hive-shims-common</artifactId>
-      <version>${hive.version}</version>
-      <exclusions>
-        <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
-        <exclusion>
-          <groupId>org.apache.logging.log4j</groupId>
-          <artifactId>log4j-slf4j-impl</artifactId>
-        </exclusion>
-      </exclusions>
-    </dependency>
-
-    <dependency>
       <groupId>org.apache.kudu</groupId>
       <artifactId>kudu-client</artifactId>
       <version>${kudu.version}</version>
@@ -550,7 +404,6 @@ under the License.
       <version>2.23.4</version>
       <scope>test</scope>
     </dependency>
-
   </dependencies>
 
   <reporting>
@@ -694,7 +547,8 @@ under the License.
                         -->
                 <source>${project.basedir}/generated-sources/gen-java</source>
                 <source>${project.build.directory}/generated-sources/cup</source>
-              </sources>
+                <source>${project.basedir}/src/compat-hive-${hive.major.version}/java</source>
+               </sources>
             </configuration>
           </execution>
         </executions>
@@ -788,6 +642,7 @@ under the License.
                     <include>org.apache.hadoop:*:${hadoop.version}</include>
                     <include>org.apache.hbase:*:${hbase.version}</include>
                     <include>org.apache.hive:*:${hive.version}</include>
+                    <include>org.apache.hive:hive-storage-api:${hive.storage.api.version}</include>
                     <include>org.apache.kudu:*:${kudu.version}</include>
                     <include>org.apache.sentry:*:${sentry.version}</include>
                     <include>org.apache.parquet:*:${parquet.version}</include>
@@ -799,8 +654,8 @@ under the License.
           </execution>
         </executions>
       </plugin>
+    </plugins>
 
-  </plugins>
     <pluginManagement>
       <plugins>
         <!--This plugin's configuration is used to store Eclipse m2e settings only. It has no influence on the Maven build itself.-->
@@ -898,6 +753,257 @@ under the License.
   </build>
 
   <profiles>
+    <!-- Profile which includes all of the Hive 2.x dependencies and appropriate exclusions. -->
+    <profile>
+      <id>hive-2</id>
+      <activation>
+        <property>
+          <name>env.IMPALA_HIVE_MAJOR_VERSION</name>
+          <value>2</value>
+        </property>
+      </activation>
+      <dependencies>
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-service</artifactId>
+          <version>${hive.version}</version>
+          <!--
+              hive-service pulls in an incoherent (not ${hadoop.version}) version of
+              hadoop-client via this dependency, which we don't need.
+          -->
+          <exclusions>
+            <exclusion>
+              <groupId>org.apache.hive</groupId>
+              <artifactId>hive-llap-server</artifactId>
+            </exclusion>
+            <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
+            <exclusion>
+              <groupId>net.minidev</groupId>
+              <artifactId>json-smart</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.calcite.avatica</groupId>
+              <artifactId>avatica</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-serde</artifactId>
+          <version>${hive.version}</version>
+          <exclusions>
+            <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
+            <exclusion>
+              <groupId>net.minidev</groupId>
+              <artifactId>json-smart</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-exec</artifactId>
+          <version>${hive.version}</version>
+          <exclusions>
+            <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-slf4j-impl</artifactId>
+            </exclusion>
+            <!-- Similarly, avoid pulling in the "Log4j 1.2 Bridge" -->
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-1.2-api</artifactId>
+            </exclusion>
+            <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
+            <exclusion>
+              <groupId>net.minidev</groupId>
+              <artifactId>json-smart</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.calcite.avatica</groupId>
+              <artifactId>avatica</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-common</artifactId>
+          <version>${hive.version}</version>
+          <exclusions>
+            <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-slf4j-impl</artifactId>
+            </exclusion>
+            <!-- Similarly, avoid pulling in the "Log4j 1.2 Bridge" -->
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-1.2-api</artifactId>
+            </exclusion>
+            <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
+            <exclusion>
+              <groupId>net.minidev</groupId>
+              <artifactId>json-smart</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-jdbc</artifactId>
+          <version>${hive.version}</version>
+          <scope>test</scope>
+          <exclusions>
+            <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-slf4j-impl</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>net.minidev</groupId>
+              <artifactId>json-smart</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-hbase-handler</artifactId>
+          <version>${hive.version}</version>
+          <exclusions>
+            <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-slf4j-impl</artifactId>
+            </exclusion>
+            <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
+            <exclusion>
+              <groupId>net.minidev</groupId>
+              <artifactId>json-smart</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.calcite.avatica</groupId>
+              <artifactId>avatica</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-metastore</artifactId>
+          <version>${hive.version}</version>
+          <exclusions>
+            <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-slf4j-impl</artifactId>
+            </exclusion>
+            <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
+            <exclusion>
+              <groupId>net.minidev</groupId>
+              <artifactId>json-smart</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+
+        <dependency>
+          <groupId>org.apache.hive.shims</groupId>
+          <artifactId>hive-shims-common</artifactId>
+          <version>${hive.version}</version>
+          <exclusions>
+            <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-slf4j-impl</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+      </dependencies>
+    </profile>
+
+    <!-- Profile which includes all of the Hive 3.x dependencies and appropriate exclusions. -->
+    <profile>
+      <id>hive-3</id>
+      <activation>
+        <property>
+          <name>env.IMPALA_HIVE_MAJOR_VERSION</name>
+          <value>3</value>
+        </property>
+      </activation>
+      <dependencies>
+        <!-- This reduced dependency is derived from hive-exec and only contains classes needed
+             by Impala. See shaded-deps/pom.xml for more details -->
+        <dependency>
+          <groupId>org.apache.impala</groupId>
+          <artifactId>impala-minimal-hive-exec</artifactId>
+          <version>${project.version}</version>
+        </dependency>
+        <!-- Needed for tests like JdbcTest which instantiates HiveDriver -->
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-jdbc</artifactId>
+          <version>${hive.version}</version>
+          <scope>test</scope>
+          <exclusions>
+            <exclusion>
+              <groupId>org.apache.hbase</groupId>
+              <artifactId>*</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-slf4j-impl</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-1.2-api</artifactId>
+            </exclusion>
+            <!-- We should exclude hive-serde since it brings along a
+               different version of flatbuffers causing problems for loading tables -->
+            <exclusion>
+              <groupId>org.apache.hive</groupId>
+              <artifactId>hive-serde</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.hive</groupId>
+              <artifactId>hive-metastore</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+        <dependency>
+          <groupId>org.apache.hive</groupId>
+          <artifactId>hive-standalone-metastore</artifactId>
+          <version>${hive.version}</version>
+          <exclusions>
+            <!-- Impala uses log4j v1; avoid pulling in slf4j handling for log4j2 -->
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-slf4j-impl</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.logging.log4j</groupId>
+              <artifactId>log4j-1.2-api</artifactId>
+            </exclusion>
+            <!-- https://issues.apache.org/jira/browse/HADOOP-14903 -->
+            <exclusion>
+              <groupId>net.minidev</groupId>
+              <artifactId>json-smart</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.hive</groupId>
+              <artifactId>hive-serde</artifactId>
+            </exclusion>
+            <exclusion>
+              <groupId>org.apache.hive</groupId>
+              <artifactId>hive-shims</artifactId>
+            </exclusion>
+          </exclusions>
+        </dependency>
+      </dependencies>
+    </profile>
+
     <profile>
       <id>thrift-home-defined</id>
       <activation>
diff --git a/fe/src/main/java/org/apache/impala/compat/MetastoreShim.java b/fe/src/compat-hive-2/java/org/apache/impala/compat/MetastoreShim.java
similarity index 55%
rename from fe/src/main/java/org/apache/impala/compat/MetastoreShim.java
rename to fe/src/compat-hive-2/java/org/apache/impala/compat/MetastoreShim.java
index 3d69545..8ceb127 100644
--- a/fe/src/main/java/org/apache/impala/compat/MetastoreShim.java
+++ b/fe/src/compat-hive-2/java/org/apache/impala/compat/MetastoreShim.java
@@ -17,16 +17,33 @@
 
 package org.apache.impala.compat;
 
+import static org.apache.impala.service.MetadataOp.TABLE_TYPE_TABLE;
+import static org.apache.impala.service.MetadataOp.TABLE_TYPE_VIEW;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.base.Preconditions;
+import com.google.common.collect.ImmutableMap;
+import java.util.EnumSet;
 import java.util.List;
 
+import org.apache.hadoop.hive.common.FileUtils;
 import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.hadoop.hive.metastore.IMetaStoreClient;
 import org.apache.hadoop.hive.metastore.MetaStoreUtils;
+import org.apache.hadoop.hive.metastore.TableType;
 import org.apache.hadoop.hive.metastore.Warehouse;
 import org.apache.hadoop.hive.metastore.api.InvalidOperationException;
 import org.apache.hadoop.hive.metastore.api.MetaException;
 import org.apache.hadoop.hive.metastore.api.Partition;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer;
+import org.apache.hadoop.hive.metastore.messaging.AlterTableMessage;
+import org.apache.hadoop.hive.metastore.messaging.InsertMessage;
+import org.apache.hadoop.hive.metastore.messaging.MessageDeserializer;
+import org.apache.hadoop.hive.metastore.messaging.json.ExtendedJSONMessageFactory;
 import org.apache.hive.service.rpc.thrift.TGetColumnsReq;
 import org.apache.hive.service.rpc.thrift.TGetFunctionsReq;
 import org.apache.hive.service.rpc.thrift.TGetSchemasReq;
@@ -76,8 +93,8 @@ public class MetastoreShim {
    * Wrapper around MetaStoreUtils.updatePartitionStatsFast() to deal with added
    * arguments.
    */
-  public static void updatePartitionStatsFast(Partition partition, Warehouse warehouse)
-      throws MetaException {
+  public static void updatePartitionStatsFast(Partition partition, Table tbl,
+      Warehouse warehouse) throws MetaException {
     MetaStoreUtils.updatePartitionStatsFast(partition, warehouse, null);
   }
 
@@ -124,4 +141,85 @@ public class MetastoreShim {
     return MetadataOp.getSchemas(
         frontend, req.getCatalogName(), req.getSchemaName(), user);
   }
+
+  /**
+   * Supported HMS-2 types
+   */
+  public static final EnumSet<TableType> IMPALA_SUPPORTED_TABLE_TYPES = EnumSet
+      .of(TableType.EXTERNAL_TABLE, TableType.MANAGED_TABLE, TableType.VIRTUAL_VIEW);
+
+  /**
+   * Mapping between the HMS-2 table types and the Impala table types.
+   */
+  public static final ImmutableMap<String, String> HMS_TO_IMPALA_TYPE =
+      new ImmutableMap.Builder<String, String>()
+          .put("EXTERNAL_TABLE", TABLE_TYPE_TABLE)
+          .put("MANAGED_TABLE", TABLE_TYPE_TABLE)
+          .put("INDEX_TABLE", TABLE_TYPE_TABLE)
+          .put("VIRTUAL_VIEW", TABLE_TYPE_VIEW).build();
+
+  /**
+   * Wrapper method which returns ExtendedJSONMessageFactory in case Impala is
+   * building against Hive-2 to keep compatibility with Sentry
+   */
+  public static MessageDeserializer getMessageDeserializer() {
+    return ExtendedJSONMessageFactory.getInstance().getDeserializer();
+  }
+
+  /**
+   * Wrapper around FileUtils.makePartName to deal with package relocation in Hive 3
+   * @param partitionColNames
+   * @param values
+   * @return
+   */
+  public static String makePartName(List<String> partitionColNames, List<String> values) {
+    return FileUtils.makePartName(partitionColNames, values);
+  }
+
+  /**
+   * Wrapper method around message factory's build alter table message due to added
+   * arguments in hive 3.
+   */
+  @VisibleForTesting
+  public static AlterTableMessage buildAlterTableMessage(Table before, Table after,
+      boolean isTruncateOp, long writeId) {
+    Preconditions.checkArgument(writeId < 0, "Write ids are not supported in Hive-2 "
+        + "compatible build");
+    Preconditions.checkArgument(!isTruncateOp, "Truncate operation is not supported in "
+        + "alter table messages in Hive-2 compatible build");
+    return ExtendedJSONMessageFactory.getInstance().buildAlterTableMessage(before, after);
+  }
+
+  @VisibleForTesting
+  public static InsertMessage buildInsertMessage(Table msTbl, Partition partition,
+      boolean isInsertOverwrite, List<String> newFiles) {
+    return ExtendedJSONMessageFactory.getInstance().buildInsertMessage(msTbl, partition,
+        isInsertOverwrite, newFiles);
+  }
+
+  public static String getAllColumnsInformation(List<FieldSchema> tabCols,
+      List<FieldSchema> partitionCols, boolean printHeader, boolean isOutputPadded,
+      boolean showPartColsSeparately) {
+    return MetaDataFormatUtils
+        .getAllColumnsInformation(tabCols, partitionCols, printHeader, isOutputPadded,
+            showPartColsSeparately);
+  }
+
+  /**
+   * Wrapper method around Hive's MetadataFormatUtils.getTableInformation which has
+   * changed significantly in Hive-3
+   * @return
+   */
+  public static String getTableInformation(
+      org.apache.hadoop.hive.ql.metadata.Table table) {
+    return MetaDataFormatUtils.getTableInformation(table);
+  }
+
+  /**
+   * Wrapper method around BaseSemanticAnalyzer's unescapeSQLString to maintain
+   * compatibility with Hive. Takes a normalized string value surrounded by single quotes.
+   */
+  public static String unescapeSQLString(String normalizedStringLiteral) {
+    return BaseSemanticAnalyzer.unescapeSQLString(normalizedStringLiteral);
+  }
 }
diff --git a/fe/src/compat-hive-3/java/org/apache/impala/compat/HiveMetadataFormatUtils.java b/fe/src/compat-hive-3/java/org/apache/impala/compat/HiveMetadataFormatUtils.java
new file mode 100644
index 0000000..073031e
--- /dev/null
+++ b/fe/src/compat-hive-3/java/org/apache/impala/compat/HiveMetadataFormatUtils.java
@@ -0,0 +1,612 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+package org.apache.impala.compat;
+
+import java.math.BigInteger;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.Comparator;
+import java.util.Date;
+import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+import java.util.Set;
+import java.util.TreeMap;
+import org.apache.commons.lang.StringEscapeUtils;
+import org.apache.commons.lang3.text.translate.CharSequenceTranslator;
+import org.apache.commons.lang3.text.translate.EntityArrays;
+import org.apache.commons.lang3.text.translate.LookupTranslator;
+import org.apache.hadoop.hive.common.type.HiveDecimal;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.api.BinaryColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.BooleanColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsData;
+import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
+import org.apache.hadoop.hive.metastore.api.DateColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Decimal;
+import org.apache.hadoop.hive.metastore.api.DecimalColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.DoubleColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.LongColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
+import org.apache.hadoop.hive.metastore.api.StringColumnStatsData;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.serde2.io.DateWritable;
+
+/**
+ * Most of the code in this class is copied from Hive 2.1.1. It is used so that
+ * Impala's describe table output matches Hive's as closely as possible. Initially,
+ * Impala had a dependency on hive-exec, which pulled in this class directly from the
+ * hive-exec jar. But in Hive 3 this code has diverged a lot, and getting it from
+ * hive-exec pulls in a lot of unnecessary dependencies. It could be argued that
+ * keeping describe table output similar to Hive's does not make much sense. Since the
+ * code has diverged anyway compared to Hive 3, we should maintain our own copy from
+ * now on and make changes as required.
+ */
+public class HiveMetadataFormatUtils {
+
+  public static final String FIELD_DELIM = "\t";
+  public static final String LINE_DELIM = "\n";
+
+  static final int DEFAULT_STRINGBUILDER_SIZE = 2048;
+  private static final int ALIGNMENT = 20;
+
+  /**
+   * Write formatted information about the given columns, including partition columns to a
+   * string
+   *
+   * @param cols - list of columns
+   * @param partCols - list of partition columns
+   * @param printHeader - if header should be included
+   * @param isOutputPadded - make it more human readable by setting indentation with
+   *     spaces. Turned off for use by HiveServer2
+   * @return string with formatted column information
+   */
+  public static String getAllColumnsInformation(List<FieldSchema> cols,
+      List<FieldSchema> partCols, boolean printHeader, boolean isOutputPadded,
+      boolean showPartColsSep) {
+    StringBuilder columnInformation = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+    if (printHeader) {
+      formatColumnsHeader(columnInformation, null);
+    }
+    formatAllFields(columnInformation, cols, isOutputPadded, null);
+
+    if ((partCols != null) && !partCols.isEmpty() && showPartColsSep) {
+      columnInformation.append(LINE_DELIM).append("# Partition Information")
+          .append(LINE_DELIM);
+      formatColumnsHeader(columnInformation, null);
+      formatAllFields(columnInformation, partCols, isOutputPadded, null);
+    }
+
+    return columnInformation.toString();
+  }
+
+  private static void formatColumnsHeader(StringBuilder columnInformation,
+      List<ColumnStatisticsObj> colStats) {
+    columnInformation.append("# "); // Easy for shell scripts to ignore
+    formatOutput(getColumnsHeader(colStats), columnInformation, false);
+    columnInformation.append(LINE_DELIM);
+  }
+
+  /**
+   * Prints a row with the given fields into the builder. The last field may be a
+   * multiline field, and the extra lines should be padded.
+   *
+   * @param fields The fields to print
+   * @param tableInfo The target builder
+   * @param isLastLinePadded Whether the last field may be printed across multiple
+   *     lines if it contains newlines
+   */
+  private static void formatOutput(String[] fields, StringBuilder tableInfo,
+      boolean isLastLinePadded) {
+    int[] paddings = new int[fields.length - 1];
+    if (fields.length > 1) {
+      for (int i = 0; i < fields.length - 1; i++) {
+        if (fields[i] == null) {
+          tableInfo.append(FIELD_DELIM);
+          continue;
+        }
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", fields[i]))
+            .append(FIELD_DELIM);
+        paddings[i] = ALIGNMENT > fields[i].length() ? ALIGNMENT : fields[i].length();
+      }
+    }
+    if (fields.length > 0) {
+      String value = fields[fields.length - 1];
+      String unescapedValue = (isLastLinePadded && value != null) ? value
+          .replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+      indentMultilineValue(unescapedValue, tableInfo, paddings, false);
+    } else {
+      tableInfo.append(LINE_DELIM);
+    }
+  }
+
+  private static final String schema = "col_name,data_type,comment#string:string:string";
+  private static final String colStatsSchema = "col_name,data_type,min,max,num_nulls,"
+      + "distinct_count,avg_col_len,max_col_len,num_trues,num_falses,comment"
+      + "#string:string:string:string:string:string:string:string:string:string:string";
+
+  public static String[] getColumnsHeader(List<ColumnStatisticsObj> colStats) {
+    String colSchema = schema;
+    if (colStats != null) {
+      colSchema = colStatsSchema;
+    }
+    return colSchema.split("#")[0].split(",");
+  }
+
+  /**
+   * Write formatted column information into given StringBuilder
+   *
+   * @param tableInfo - StringBuilder to append column information into
+   * @param cols - list of columns
+   * @param isOutputPadded - make it more human readable by setting indentation with
+   *     spaces. Turned off for use by HiveServer2
+   */
+  private static void formatAllFields(StringBuilder tableInfo, List<FieldSchema> cols,
+      boolean isOutputPadded, List<ColumnStatisticsObj> colStats) {
+    for (FieldSchema col : cols) {
+      if (isOutputPadded) {
+        formatWithIndentation(col.getName(), col.getType(), getComment(col), tableInfo,
+            colStats);
+      } else {
+        formatWithoutIndentation(col.getName(), col.getType(), col.getComment(),
+            tableInfo, colStats);
+      }
+    }
+  }
+
+  private static void formatWithoutIndentation(String name, String type, String comment,
+      StringBuilder colBuffer, List<ColumnStatisticsObj> colStats) {
+    colBuffer.append(name);
+    colBuffer.append(FIELD_DELIM);
+    colBuffer.append(type);
+    colBuffer.append(FIELD_DELIM);
+    if (colStats != null) {
+      ColumnStatisticsObj cso = getColumnStatisticsObject(name, type, colStats);
+      if (cso != null) {
+        ColumnStatisticsData csd = cso.getStatsData();
+        if (csd.isSetBinaryStats()) {
+          BinaryColumnStatsData bcsd = csd.getBinaryStats();
+          appendColumnStatsNoFormatting(colBuffer, "", "", bcsd.getNumNulls(), "",
+              bcsd.getAvgColLen(), bcsd.getMaxColLen(), "", "");
+        } else if (csd.isSetStringStats()) {
+          StringColumnStatsData scsd = csd.getStringStats();
+          appendColumnStatsNoFormatting(colBuffer, "", "", scsd.getNumNulls(),
+              scsd.getNumDVs(), scsd.getAvgColLen(), scsd.getMaxColLen(), "", "");
+        } else if (csd.isSetBooleanStats()) {
+          BooleanColumnStatsData bcsd = csd.getBooleanStats();
+          appendColumnStatsNoFormatting(colBuffer, "", "", bcsd.getNumNulls(), "", "", "",
+              bcsd.getNumTrues(), bcsd.getNumFalses());
+        } else if (csd.isSetDecimalStats()) {
+          DecimalColumnStatsData dcsd = csd.getDecimalStats();
+          appendColumnStatsNoFormatting(colBuffer, convertToString(dcsd.getLowValue()),
+              convertToString(dcsd.getHighValue()), dcsd.getNumNulls(), dcsd.getNumDVs(),
+              "", "", "", "");
+        } else if (csd.isSetDoubleStats()) {
+          DoubleColumnStatsData dcsd = csd.getDoubleStats();
+          appendColumnStatsNoFormatting(colBuffer, dcsd.getLowValue(),
+              dcsd.getHighValue(), dcsd.getNumNulls(), dcsd.getNumDVs(), "", "", "", "");
+        } else if (csd.isSetLongStats()) {
+          LongColumnStatsData lcsd = csd.getLongStats();
+          appendColumnStatsNoFormatting(colBuffer, lcsd.getLowValue(),
+              lcsd.getHighValue(), lcsd.getNumNulls(), lcsd.getNumDVs(), "", "", "", "");
+        } else if (csd.isSetDateStats()) {
+          DateColumnStatsData dcsd = csd.getDateStats();
+          appendColumnStatsNoFormatting(colBuffer, convertToString(dcsd.getLowValue()),
+              convertToString(dcsd.getHighValue()), dcsd.getNumNulls(), dcsd.getNumDVs(),
+              "", "", "", "");
+        }
+      } else {
+        appendColumnStatsNoFormatting(colBuffer, "", "", "", "", "", "", "", "");
+      }
+    }
+    colBuffer.append(comment == null ? "" : ESCAPE_JAVA.translate(comment));
+    colBuffer.append(LINE_DELIM);
+  }
+
+  private static final CharSequenceTranslator ESCAPE_JAVA =
+      new LookupTranslator(new String[][]{{"\"", "\\\""}, {"\\", "\\\\"},})
+          .with(new LookupTranslator(EntityArrays.JAVA_CTRL_CHARS_ESCAPE()));
+
+  private static void appendColumnStatsNoFormatting(StringBuilder sb, Object min,
+      Object max, Object numNulls, Object ndv, Object avgColLen, Object maxColLen,
+      Object numTrues, Object numFalses) {
+    sb.append(min).append(FIELD_DELIM);
+    sb.append(max).append(FIELD_DELIM);
+    sb.append(numNulls).append(FIELD_DELIM);
+    sb.append(ndv).append(FIELD_DELIM);
+    sb.append(avgColLen).append(FIELD_DELIM);
+    sb.append(maxColLen).append(FIELD_DELIM);
+    sb.append(numTrues).append(FIELD_DELIM);
+    sb.append(numFalses).append(FIELD_DELIM);
+  }
+
+  static String getComment(FieldSchema col) {
+    return col.getComment() != null ? col.getComment() : "";
+  }
+
+  private static void formatWithIndentation(String colName, String colType,
+      String colComment, StringBuilder tableInfo, List<ColumnStatisticsObj> colStats) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", colName)).append(FIELD_DELIM);
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", colType)).append(FIELD_DELIM);
+
+    if (colStats != null) {
+      ColumnStatisticsObj cso = getColumnStatisticsObject(colName, colType, colStats);
+      if (cso != null) {
+        ColumnStatisticsData csd = cso.getStatsData();
+        if (csd.isSetBinaryStats()) {
+          BinaryColumnStatsData bcsd = csd.getBinaryStats();
+          appendColumnStats(tableInfo, "", "", bcsd.getNumNulls(), "",
+              bcsd.getAvgColLen(), bcsd.getMaxColLen(), "", "");
+        } else if (csd.isSetStringStats()) {
+          StringColumnStatsData scsd = csd.getStringStats();
+          appendColumnStats(tableInfo, "", "", scsd.getNumNulls(), scsd.getNumDVs(),
+              scsd.getAvgColLen(), scsd.getMaxColLen(), "", "");
+        } else if (csd.isSetBooleanStats()) {
+          BooleanColumnStatsData bcsd = csd.getBooleanStats();
+          appendColumnStats(tableInfo, "", "", bcsd.getNumNulls(), "", "", "",
+              bcsd.getNumTrues(), bcsd.getNumFalses());
+        } else if (csd.isSetDecimalStats()) {
+          DecimalColumnStatsData dcsd = csd.getDecimalStats();
+          appendColumnStats(tableInfo, convertToString(dcsd.getLowValue()),
+              convertToString(dcsd.getHighValue()), dcsd.getNumNulls(), dcsd.getNumDVs(),
+              "", "", "", "");
+        } else if (csd.isSetDoubleStats()) {
+          DoubleColumnStatsData dcsd = csd.getDoubleStats();
+          appendColumnStats(tableInfo, dcsd.getLowValue(), dcsd.getHighValue(),
+              dcsd.getNumNulls(), dcsd.getNumDVs(), "", "", "", "");
+        } else if (csd.isSetLongStats()) {
+          LongColumnStatsData lcsd = csd.getLongStats();
+          appendColumnStats(tableInfo, lcsd.getLowValue(), lcsd.getHighValue(),
+              lcsd.getNumNulls(), lcsd.getNumDVs(), "", "", "", "");
+        } else if (csd.isSetDateStats()) {
+          DateColumnStatsData dcsd = csd.getDateStats();
+          appendColumnStats(tableInfo, convertToString(dcsd.getLowValue()),
+              convertToString(dcsd.getHighValue()), dcsd.getNumNulls(), dcsd.getNumDVs(),
+              "", "", "", "");
+        }
+      } else {
+        appendColumnStats(tableInfo, "", "", "", "", "", "", "", "");
+      }
+    }
+
+    int colNameLength = ALIGNMENT > colName.length() ? ALIGNMENT : colName.length();
+    int colTypeLength = ALIGNMENT > colType.length() ? ALIGNMENT : colType.length();
+    indentMultilineValue(colComment, tableInfo, new int[]{colNameLength, colTypeLength},
+        false);
+  }
+
+  /**
+   * Comment indent processing for multi-line values. Values should be indented by the
+   * same amount on each line: if the first comment line starts indented by k, the
+   * following comment lines should also be indented by k.
+   *
+   * @param value the value to write
+   * @param tableInfo the buffer to write to
+   * @param columnWidths the widths of the previous columns
+   * @param printNull print null as a string, or do not print anything
+   */
+  private static void indentMultilineValue(String value, StringBuilder tableInfo,
+      int[] columnWidths, boolean printNull) {
+    if (value == null) {
+      if (printNull) {
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", value));
+      }
+      tableInfo.append(LINE_DELIM);
+    } else {
+      String[] valueSegments = value.split("\n|\r|\r\n");
+      tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[0]))
+          .append(LINE_DELIM);
+      for (int i = 1; i < valueSegments.length; i++) {
+        printPadding(tableInfo, columnWidths);
+        tableInfo.append(String.format("%-" + ALIGNMENT + "s", valueSegments[i]))
+            .append(LINE_DELIM);
+      }
+    }
+  }
+
+  /**
+   * Print the right padding, with the given column widths
+   *
+   * @param tableInfo The buffer to write to
+   * @param columnWidths The column widths
+   */
+  private static void printPadding(StringBuilder tableInfo, int[] columnWidths) {
+    for (int columnWidth : columnWidths) {
+      if (columnWidth == 0) {
+        tableInfo.append(FIELD_DELIM);
+      } else {
+        tableInfo.append(String.format("%" + columnWidth + "s" + FIELD_DELIM, ""));
+      }
+    }
+  }
+
+  private static String convertToString(Decimal val) {
+    if (val == null) {
+      return "";
+    }
+
+    HiveDecimal result =
+        HiveDecimal.create(new BigInteger(val.getUnscaled()), val.getScale());
+    if (result != null) {
+      return result.toString();
+    } else {
+      return "";
+    }
+  }
+
+  private static String convertToString(org.apache.hadoop.hive.metastore.api.Date val) {
+    if (val == null) {
+      return "";
+    }
+
+    DateWritable writableValue = new DateWritable((int) val.getDaysSinceEpoch());
+    return writableValue.toString();
+  }
+
+  private static void appendColumnStats(StringBuilder sb, Object min, Object max,
+      Object numNulls, Object ndv, Object avgColLen, Object maxColLen, Object numTrues,
+      Object numFalses) {
+    sb.append(String.format("%-" + ALIGNMENT + "s", min)).append(FIELD_DELIM);
+    sb.append(String.format("%-" + ALIGNMENT + "s", max)).append(FIELD_DELIM);
+    sb.append(String.format("%-" + ALIGNMENT + "s", numNulls)).append(FIELD_DELIM);
+    sb.append(String.format("%-" + ALIGNMENT + "s", ndv)).append(FIELD_DELIM);
+    sb.append(String.format("%-" + ALIGNMENT + "s", avgColLen)).append(FIELD_DELIM);
+    sb.append(String.format("%-" + ALIGNMENT + "s", maxColLen)).append(FIELD_DELIM);
+    sb.append(String.format("%-" + ALIGNMENT + "s", numTrues)).append(FIELD_DELIM);
+    sb.append(String.format("%-" + ALIGNMENT + "s", numFalses)).append(FIELD_DELIM);
+  }
+
+  private static ColumnStatisticsObj getColumnStatisticsObject(String colName,
+      String colType, List<ColumnStatisticsObj> colStats) {
+    if (colStats != null && !colStats.isEmpty()) {
+      for (ColumnStatisticsObj cso : colStats) {
+        if (cso.getColName().equalsIgnoreCase(colName) && cso.getColType()
+            .equalsIgnoreCase(colType)) {
+          return cso;
+        }
+      }
+    }
+    return null;
+  }
+
+  public static String getTableInformation(Table table, boolean isOutputPadded) {
+    StringBuilder tableInfo = new StringBuilder(DEFAULT_STRINGBUILDER_SIZE);
+
+    // Table Metadata
+    tableInfo.append(LINE_DELIM).append("# Detailed Table Information")
+        .append(LINE_DELIM);
+    getTableMetaDataInformation(tableInfo, table, isOutputPadded);
+
+    // Storage information.
+    tableInfo.append(LINE_DELIM).append("# Storage Information").append(LINE_DELIM);
+    getStorageDescriptorInfo(tableInfo, table.getSd());
+
+    if (TableType.VIRTUAL_VIEW.equals(TableType.valueOf(table.getTableType()))) {
+      tableInfo.append(LINE_DELIM).append("# View Information").append(LINE_DELIM);
+      getViewInfo(tableInfo, table);
+    }
+
+    return tableInfo.toString();
+  }
+
+  private static void getViewInfo(StringBuilder tableInfo, Table tbl) {
+    formatOutput("View Original Text:", tbl.getViewOriginalText(), tableInfo);
+    formatOutput("View Expanded Text:", tbl.getViewExpandedText(), tableInfo);
+  }
+
+  private static void getTableMetaDataInformation(StringBuilder tableInfo, Table tbl,
+      boolean isOutputPadded) {
+    formatOutput("Database:", tbl.getDbName(), tableInfo);
+    formatOutput("OwnerType:",
+        (tbl.getOwnerType() != null) ? tbl.getOwnerType().name() : "null", tableInfo);
+    formatOutput("Owner:", tbl.getOwner(), tableInfo);
+    formatOutput("CreateTime:", formatDate(tbl.getCreateTime()), tableInfo);
+    formatOutput("LastAccessTime:", formatDate(tbl.getLastAccessTime()), tableInfo);
+    formatOutput("Retention:", Integer.toString(tbl.getRetention()), tableInfo);
+    if (!TableType.VIRTUAL_VIEW.toString().equals(tbl.getTableType())) {
+      String location = null;
+      if (tbl.getSd() != null) {
+        location = tbl.getSd().getLocation();
+      }
+      formatOutput("Location:", location, tableInfo);
+    }
+    formatOutput("Table Type:", tbl.getTableType(), tableInfo);
+
+    if (tbl.getParameters().size() > 0) {
+      tableInfo.append("Table Parameters:").append(LINE_DELIM);
+      displayAllParameters(tbl.getParameters(), tableInfo, false, isOutputPadded);
+    }
+  }
+
+  /**
+   * The name of the statistic for Number of Erasure Coded Files - to be published or
+   * gathered.
+   */
+  private static final String NUM_ERASURE_CODED_FILES = "numFilesErasureCoded";
+
+  /**
+   * Displays the key/value pairs of the parameters. If escapeUnicode is true, all
+   * characters, including unicode, are escaped; otherwise only the non-unicode
+   * characters are escaped.
+   */
+  private static void displayAllParameters(Map<String, String> params,
+      StringBuilder tableInfo, boolean escapeUnicode, boolean isOutputPadded) {
+    List<String> keys = new ArrayList<String>(params.keySet());
+    Collections.sort(keys);
+    for (String key : keys) {
+      String value = params.get(key);
+      //TODO(Vihang) HIVE-18118 should be ported to Hive-3.1
+      if (key.equals(NUM_ERASURE_CODED_FILES)) {
+        if ("0".equals(value)) {
+          continue;
+        }
+      }
+      tableInfo.append(FIELD_DELIM); // Ensures all params are indented.
+      formatOutput(key, escapeUnicode ? StringEscapeUtils.escapeJava(value)
+          : ESCAPE_JAVA.translate(value), tableInfo, isOutputPadded);
+    }
+  }
+
+  /**
+   * Prints the name/value pair. If the output is padded, the value is unescaped so that
+   * it can be printed on multiple lines. In this case it assumes the pair is already
+   * indented with a field delimiter.
+   *
+   * @param name The field name to print
+   * @param value The value to print
+   * @param tableInfo The target builder
+   * @param isOutputPadded Whether the value should be printed as a padded string
+   */
+  protected static void formatOutput(String name, String value, StringBuilder tableInfo,
+      boolean isOutputPadded) {
+    String unescapedValue = (isOutputPadded && value != null) ? value
+        .replaceAll("\\\\n|\\\\r|\\\\r\\\\n", "\n") : value;
+    formatOutput(name, unescapedValue, tableInfo);
+  }
+
+  /**
+   * Prints the name/value pair, and if the value contains newlines, adds one more empty
+   * field before the two values (assumes the name/value pair is already indented with a
+   * field delimiter).
+   *
+   * @param name The field name to print
+   * @param value The value to print - might contain newlines
+   * @param tableInfo The target builder
+   */
+  private static void formatOutput(String name, String value, StringBuilder tableInfo) {
+    tableInfo.append(String.format("%-" + ALIGNMENT + "s", name)).append(FIELD_DELIM);
+    int colNameLength = ALIGNMENT > name.length() ? ALIGNMENT : name.length();
+    indentMultilineValue(value, tableInfo, new int[]{0, colNameLength}, true);
+  }
+
+  private static String formatDate(long timeInSeconds) {
+    if (timeInSeconds != 0) {
+      Date date = new Date(timeInSeconds * 1000);
+      return date.toString();
+    }
+    return "UNKNOWN";
+  }
+
+  private static void getStorageDescriptorInfo(StringBuilder tableInfo,
+      StorageDescriptor storageDesc) {
+
+    formatOutput("SerDe Library:", storageDesc.getSerdeInfo().getSerializationLib(),
+        tableInfo);
+    formatOutput("InputFormat:", storageDesc.getInputFormat(), tableInfo);
+    formatOutput("OutputFormat:", storageDesc.getOutputFormat(), tableInfo);
+    formatOutput("Compressed:", storageDesc.isCompressed() ? "Yes" : "No", tableInfo);
+    formatOutput("Num Buckets:", String.valueOf(storageDesc.getNumBuckets()), tableInfo);
+    formatOutput("Bucket Columns:", storageDesc.getBucketCols().toString(), tableInfo);
+    formatOutput("Sort Columns:", storageDesc.getSortCols().toString(), tableInfo);
+    if (storageDesc.isStoredAsSubDirectories()) {// optional parameter
+      formatOutput("Stored As SubDirectories:", "Yes", tableInfo);
+    }
+
+    if (null != storageDesc.getSkewedInfo()) {
+      List<String> skewedColNames =
+          sortedList(storageDesc.getSkewedInfo().getSkewedColNames());
+      if ((skewedColNames != null) && (skewedColNames.size() > 0)) {
+        formatOutput("Skewed Columns:", skewedColNames.toString(), tableInfo);
+      }
+
+      List<List<String>> skewedColValues =
+          sortedList(storageDesc.getSkewedInfo().getSkewedColValues(),
+              new VectorComparator<String>());
+      if ((skewedColValues != null) && (skewedColValues.size() > 0)) {
+        formatOutput("Skewed Values:", skewedColValues.toString(), tableInfo);
+      }
+
+      Map<List<String>, String> skewedColMap =
+          new TreeMap<>(new VectorComparator<>());
+      skewedColMap.putAll(storageDesc.getSkewedInfo().getSkewedColValueLocationMaps());
+      if ((skewedColMap != null) && (skewedColMap.size() > 0)) {
+        formatOutput("Skewed Value to Path:", skewedColMap.toString(), tableInfo);
+        Map<List<String>, String> truncatedSkewedColMap =
+            new TreeMap<List<String>, String>(new VectorComparator<String>());
+        // walk through existing map to truncate path so that test won't mask it
+        // then we can verify location is right
+        Set<Entry<List<String>, String>> entries = skewedColMap.entrySet();
+        for (Entry<List<String>, String> entry : entries) {
+          truncatedSkewedColMap.put(entry.getKey(), entry.getValue());
+        }
+        formatOutput("Skewed Value to Truncated Path:", truncatedSkewedColMap.toString(),
+            tableInfo);
+      }
+    }
+
+    if (storageDesc.getSerdeInfo().getParametersSize() > 0) {
+      tableInfo.append("Storage Desc Params:").append(LINE_DELIM);
+      displayAllParameters(storageDesc.getSerdeInfo().getParameters(), tableInfo, true,
+          false);
+    }
+  }
+
+  /**
+   * Returns a sorted version of the given list, using the provided comparator
+   */
+  static <T> List<T> sortedList(List<T> list, Comparator<T> comp) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    ArrayList<T> ret = new ArrayList<>();
+    ret.addAll(list);
+    Collections.sort(ret, comp);
+    return ret;
+  }
+
+  /**
+   * Returns a sorted version of the given list
+   */
+  static <T extends Comparable<T>> List<T> sortedList(List<T> list) {
+    if (list == null || list.size() <= 1) {
+      return list;
+    }
+    ArrayList<T> ret = new ArrayList<>();
+    ret.addAll(list);
+    Collections.sort(ret);
+    return ret;
+  }
+
+  /**
+   * Compares two lists of objects of type T as vectors
+   *
+   * @param <T> the base object type. Must be {@link Comparable}
+   */
+  private static class VectorComparator<T extends Comparable<T>> implements
+      Comparator<List<T>> {
+
+    @Override
+    public int compare(List<T> listA, List<T> listB) {
+      for (int i = 0; i < listA.size() && i < listB.size(); i++) {
+        T valA = listA.get(i);
+        T valB = listB.get(i);
+        if (valA != null) {
+          int ret = valA.compareTo(valB);
+          if (ret != 0) {
+            return ret;
+          }
+        } else {
+          if (valB != null) {
+            return -1;
+          }
+        }
+      }
+      return Integer.compare(listA.size(), listB.size());
+    }
+  }
+}
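
As a point of reference for reviewers, here is a minimal sketch of the unpadded row
layout that formatWithoutIndentation() above is expected to emit when no column stats
are requested. It assumes FIELD_DELIM is a tab and LINE_DELIM is a newline (consistent
with the DescribeResultFactory change below that splits on "\t"); the class is
illustrative only and not part of the patch.

    // Illustrative sketch: mimics the name<TAB>type<TAB>comment<NEWLINE> layout.
    public class ColumnRowSketch {
      private static final String FIELD_DELIM = "\t"; // assumed value of the constant
      private static final String LINE_DELIM = "\n";  // assumed value of the constant

      static String formatRow(String name, String type, String comment) {
        StringBuilder sb = new StringBuilder();
        sb.append(name).append(FIELD_DELIM);
        sb.append(type).append(FIELD_DELIM);
        sb.append(comment == null ? "" : comment).append(LINE_DELIM);
        return sb.toString();
      }

      public static void main(String[] args) {
        // Prints "id<TAB>int<TAB>primary key" followed by a newline.
        System.out.print(formatRow("id", "int", "primary key"));
      }
    }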
diff --git a/fe/src/compat-hive-3/java/org/apache/impala/compat/MetastoreShim.java b/fe/src/compat-hive-3/java/org/apache/impala/compat/MetastoreShim.java
new file mode 100644
index 0000000..ee22c2b
--- /dev/null
+++ b/fe/src/compat-hive-3/java/org/apache/impala/compat/MetastoreShim.java
@@ -0,0 +1,367 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.impala.compat;
+
+import static org.apache.impala.service.MetadataOp.TABLE_TYPE_TABLE;
+import static org.apache.impala.service.MetadataOp.TABLE_TYPE_VIEW;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.ImmutableMap;
+import java.util.EnumSet;
+import java.util.List;
+
+import org.apache.hadoop.hive.common.StatsSetupConst;
+import org.apache.hadoop.hive.metastore.IMetaStoreClient;
+import org.apache.hadoop.hive.metastore.TableType;
+import org.apache.hadoop.hive.metastore.Warehouse;
+import org.apache.hadoop.hive.metastore.api.FieldSchema;
+import org.apache.hadoop.hive.metastore.api.InvalidOperationException;
+import org.apache.hadoop.hive.metastore.api.MetaException;
+import org.apache.hadoop.hive.metastore.api.Partition;
+import org.apache.hadoop.hive.metastore.api.Table;
+import org.apache.hadoop.hive.metastore.conf.MetastoreConf;
+import org.apache.hadoop.hive.metastore.messaging.AlterTableMessage;
+import org.apache.hadoop.hive.metastore.messaging.InsertMessage;
+import org.apache.hadoop.hive.metastore.messaging.MessageBuilder;
+import org.apache.hadoop.hive.metastore.messaging.MessageDeserializer;
+import org.apache.hadoop.hive.metastore.messaging.MessageEncoder;
+import org.apache.hadoop.hive.metastore.messaging.MessageFactory;
+import org.apache.hadoop.hive.metastore.utils.FileUtils;
+import org.apache.hadoop.hive.metastore.utils.MetaStoreUtils;
+import org.apache.hive.service.rpc.thrift.TGetColumnsReq;
+import org.apache.hive.service.rpc.thrift.TGetFunctionsReq;
+import org.apache.hive.service.rpc.thrift.TGetSchemasReq;
+import org.apache.hive.service.rpc.thrift.TGetTablesReq;
+import org.apache.impala.authorization.User;
+import org.apache.impala.common.ImpalaException;
+import org.apache.impala.common.Pair;
+import org.apache.impala.compat.HiveMetadataFormatUtils;
+import org.apache.impala.service.Frontend;
+import org.apache.impala.service.MetadataOp;
+import org.apache.impala.thrift.TMetadataOpRequest;
+import org.apache.impala.thrift.TResultSet;
+import org.apache.thrift.TException;
+
+/**
+ * A wrapper around some of Hive's Metastore APIs to abstract away differences
+ * between major versions of Hive. This implements the shimmed methods for Hive 3.
+ */
+public class MetastoreShim {
+  /**
+   * Wrapper around MetaStoreUtils.validateName() to deal with added arguments.
+   */
+  public static boolean validateName(String name) {
+    return MetaStoreUtils.validateName(name, null);
+  }
+
+  /**
+   * Wrapper around IMetaStoreClient.alter_partition() to deal with added
+   * arguments.
+   */
+  public static void alterPartition(IMetaStoreClient client, Partition partition)
+      throws InvalidOperationException, MetaException, TException {
+    client.alter_partition(
+        partition.getDbName(), partition.getTableName(), partition, null);
+  }
+
+  /**
+   * Wrapper around IMetaStoreClient.alter_partitions() to deal with added
+   * arguments.
+   */
+  public static void alterPartitions(IMetaStoreClient client, String dbName,
+      String tableName, List<Partition> partitions)
+      throws InvalidOperationException, MetaException, TException {
+    client.alter_partitions(dbName, tableName, partitions, null);
+  }
+
+  /**
+   * Wrapper around MetaStoreUtils.updatePartitionStatsFast() to deal with added
+   * arguments.
+   */
+  public static void updatePartitionStatsFast(Partition partition, Table tbl,
+      Warehouse warehouse) throws MetaException {
+    MetaStoreUtils.updatePartitionStatsFast(partition, tbl, warehouse, /*madeDir*/false,
+        /*forceRecompute*/false,
+        /*environmentContext*/null, /*isCreate*/false);
+  }
+
+  /**
+   * Return the maximum number of Metastore objects that should be retrieved in
+   * a batch.
+   */
+  public static String metastoreBatchRetrieveObjectsMaxConfigKey() {
+    return MetastoreConf.ConfVars.BATCH_RETRIEVE_OBJECTS_MAX.toString();
+  }
+
+  /**
+   * Return the key and value that should be set in the partition parameters to
+   * mark that the stats were generated automatically by a stats task.
+   */
+  public static Pair<String, String> statsGeneratedViaStatsTaskParam() {
+    return Pair.create(StatsSetupConst.STATS_GENERATED, StatsSetupConst.TASK);
+  }
+
+  public static TResultSet execGetFunctions(
+      Frontend frontend, TMetadataOpRequest request, User user) throws ImpalaException {
+    TGetFunctionsReq req = request.getGet_functions_req();
+    return MetadataOp.getFunctions(
+        frontend, req.getCatalogName(), req.getSchemaName(), req.getFunctionName(), user);
+  }
+
+  public static TResultSet execGetColumns(
+      Frontend frontend, TMetadataOpRequest request, User user) throws ImpalaException {
+    TGetColumnsReq req = request.getGet_columns_req();
+    return MetadataOp.getColumns(frontend, req.getCatalogName(), req.getSchemaName(),
+        req.getTableName(), req.getColumnName(), user);
+  }
+
+  public static TResultSet execGetTables(
+      Frontend frontend, TMetadataOpRequest request, User user) throws ImpalaException {
+    TGetTablesReq req = request.getGet_tables_req();
+    return MetadataOp.getTables(frontend, req.getCatalogName(), req.getSchemaName(),
+        req.getTableName(), req.getTableTypes(), user);
+  }
+
+  public static TResultSet execGetSchemas(
+      Frontend frontend, TMetadataOpRequest request, User user) throws ImpalaException {
+    TGetSchemasReq req = request.getGet_schemas_req();
+    return MetadataOp.getSchemas(
+        frontend, req.getCatalogName(), req.getSchemaName(), user);
+  }
+
+  /**
+   * Supported HMS-3 types
+   */
+  public static final EnumSet<TableType> IMPALA_SUPPORTED_TABLE_TYPES = EnumSet
+      .of(TableType.EXTERNAL_TABLE, TableType.MANAGED_TABLE, TableType.VIRTUAL_VIEW,
+          TableType.MATERIALIZED_VIEW);
+
+  /**
+   * Mapping between the HMS-3 table types and the Impala table types.
+   */
+  public static final ImmutableMap<String, String> HMS_TO_IMPALA_TYPE =
+      new ImmutableMap.Builder<String, String>()
+          .put("EXTERNAL_TABLE", TABLE_TYPE_TABLE)
+          .put("MANAGED_TABLE", TABLE_TYPE_TABLE)
+          .put("INDEX_TABLE", TABLE_TYPE_TABLE)
+          .put("VIRTUAL_VIEW", TABLE_TYPE_VIEW)
+          .put("MATERIALIZED_VIEW", TABLE_TYPE_VIEW).build();
+  /**
+   * Method which maps the Metastore's TableType to Impala's table type. Unlike
+   * metastore 2, materialized views are supported here and map to views.
+   */
+  public static String mapToInternalTableType(String typeStr) {
+    String defaultTableType = TABLE_TYPE_TABLE;
+    TableType tType;
+
+    if (typeStr == null) return defaultTableType;
+    try {
+      tType = TableType.valueOf(typeStr.toUpperCase());
+    } catch (Exception e) {
+      return defaultTableType;
+    }
+    switch (tType) {
+      case EXTERNAL_TABLE:
+      case MANAGED_TABLE:
+      // Deprecated and removed in Hive-3. TODO: should this throw an exception instead?
+      case INDEX_TABLE:
+        return TABLE_TYPE_TABLE;
+      case VIRTUAL_VIEW:
+      case MATERIALIZED_VIEW:
+        return TABLE_TYPE_VIEW;
+      default:
+        return defaultTableType;
+    }
+  }
+
+  // Hive-3 has a different class to encode and decode event messages.
+  private static final MessageEncoder eventMessageEncoder_ =
+      MessageFactory.getDefaultInstance(MetastoreConf.newMetastoreConf());
+
+  /**
+   * Wrapper method which returns the HMS-3 message deserializer when Impala is
+   * built against Hive-3.
+   */
+  public static MessageDeserializer getMessageDeserializer() {
+    return eventMessageEncoder_.getDeserializer();
+  }
+
+  /**
+   * Wrapper around FileUtils.makePartName to deal with package relocation in Hive 3.
+   * This method uses the metastore's FileUtils method instead of the one from hive-exec.
+   * @param partitionColNames the partition column names
+   * @param values the partition values, in the same order as the column names
+   * @return the constructed partition name
+   */
+  public static String makePartName(List<String> partitionColNames, List<String> values) {
+    return FileUtils.makePartName(partitionColNames, values);
+  }
+
+  /**
+   * Wrapper method around the message builder's buildAlterTableMessage due to added
+   * arguments in Hive 3.
+   */
+  @VisibleForTesting
+  public static AlterTableMessage buildAlterTableMessage(Table before, Table after,
+      boolean isTruncateOp, long writeId) {
+    return MessageBuilder.getInstance().buildAlterTableMessage(before, after,
+        isTruncateOp, writeId);
+  }
+
+  @VisibleForTesting
+  public static InsertMessage buildInsertMessage(Table msTbl, Partition partition,
+      boolean isInsertOverwrite, List<String> newFiles) {
+    return MessageBuilder.getInstance().buildInsertMessage(msTbl, partition,
+        isInsertOverwrite, newFiles.iterator());
+  }
+
+  /**
+   * Wrapper method to get the formatted string representing the column information of
+   * a metastore table. This method changed significantly in Hive-3 when compared to
+   * Hive-2. In order to avoid adding an unnecessary dependency on hive-exec, this
+   * method relies on source code copied from Hive-2's MetaDataFormatUtils class.
+   * TODO: in order to avoid this copy, move this code from Hive's ql module to a util
+   * method in MetastoreUtils in the metastore module.
+   * @return the formatted column information
+   */
+  public static String getAllColumnsInformation(List<FieldSchema> tabCols,
+      List<FieldSchema> partitionCols, boolean printHeader, boolean isOutputPadded,
+      boolean showPartColsSeparately) {
+    return HiveMetadataFormatUtils
+        .getAllColumnsInformation(tabCols, partitionCols, printHeader, isOutputPadded,
+            showPartColsSeparately);
+  }
+
+  /**
+   * Wrapper method around Hive's MetaDataFormatUtils.getTableInformation, which has
+   * changed significantly in Hive-3.
+   * @return the formatted table information
+   */
+  public static String getTableInformation(
+      org.apache.hadoop.hive.ql.metadata.Table table) {
+    return HiveMetadataFormatUtils.getTableInformation(table.getTTable(), false);
+  }
+
+  /**
+   * This method has been copied from Hive's BaseSemanticAnalyzer class and is fairly
+   * stable now (as of April 2019, the last change was in mid 2016). Copying is
+   * preferred over adding a dependency on that class, which pulls in a lot of other
+   * transitive dependencies from hive-exec.
+   */
+  public static String unescapeSQLString(String stringLiteral) {
+    {
+      Character enclosure = null;
+
+      // Some of the strings can be passed in as unicode. For example, the
+      // delimiter can be passed in as \002 - So, we first check if the
+      // string is a unicode number, else go back to the old behavior
+      StringBuilder sb = new StringBuilder(stringLiteral.length());
+      for (int i = 0; i < stringLiteral.length(); i++) {
+
+        char currentChar = stringLiteral.charAt(i);
+        if (enclosure == null) {
+          if (currentChar == '\'' || stringLiteral.charAt(i) == '\"') {
+            enclosure = currentChar;
+          }
+          // ignore all other chars outside the enclosure
+          continue;
+        }
+
+        if (enclosure.equals(currentChar)) {
+          enclosure = null;
+          continue;
+        }
+
+        if (currentChar == '\\' && (i + 6 < stringLiteral.length()) && stringLiteral.charAt(i + 1) == 'u') {
+          int code = 0;
+          int base = i + 2;
+          for (int j = 0; j < 4; j++) {
+            int digit = Character.digit(stringLiteral.charAt(j + base), 16);
+            code = (code << 4) + digit;
+          }
+          sb.append((char)code);
+          i += 5;
+          continue;
+        }
+
+        if (currentChar == '\\' && (i + 4 < stringLiteral.length())) {
+          char i1 = stringLiteral.charAt(i + 1);
+          char i2 = stringLiteral.charAt(i + 2);
+          char i3 = stringLiteral.charAt(i + 3);
+          if ((i1 >= '0' && i1 <= '1') && (i2 >= '0' && i2 <= '7')
+              && (i3 >= '0' && i3 <= '7')) {
+            byte bVal = (byte) ((i3 - '0') + ((i2 - '0') * 8) + ((i1 - '0') * 8 * 8));
+            byte[] bValArr = new byte[1];
+            bValArr[0] = bVal;
+            String tmp = new String(bValArr);
+            sb.append(tmp);
+            i += 3;
+            continue;
+          }
+        }
+
+        if (currentChar == '\\' && (i + 2 < stringLiteral.length())) {
+          char n = stringLiteral.charAt(i + 1);
+          switch (n) {
+            case '0':
+              sb.append("\0");
+              break;
+            case '\'':
+              sb.append("'");
+              break;
+            case '"':
+              sb.append("\"");
+              break;
+            case 'b':
+              sb.append("\b");
+              break;
+            case 'n':
+              sb.append("\n");
+              break;
+            case 'r':
+              sb.append("\r");
+              break;
+            case 't':
+              sb.append("\t");
+              break;
+            case 'Z':
+              sb.append("\u001A");
+              break;
+            case '\\':
+              sb.append("\\");
+              break;
+            // The following 2 lines are exactly what MySQL does. TODO: why do we do this?
+            case '%':
+              sb.append("\\%");
+              break;
+            case '_':
+              sb.append("\\_");
+              break;
+            default:
+              sb.append(n);
+          }
+          i++;
+        } else {
+          sb.append(currentChar);
+        }
+      }
+      return sb.toString();
+    }
+  }
+}
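
A short usage sketch of the copied unescapeSQLString(): it only unescapes characters
inside an enclosing quote pair, which is why StringLiteral.getUnescapedValue() below
wraps the value in quotes before calling it. The expected output noted in the comment
follows from reading the copied code rather than from any particular Hive release.

    // Sketch: exercising MetastoreShim.unescapeSQLString() the way StringLiteral does.
    import org.apache.impala.compat.MetastoreShim;

    public class UnescapeSketch {
      public static void main(String[] args) {
        // The literal is wrapped in single quotes; the "\n" escape becomes a real newline.
        String unescaped = MetastoreShim.unescapeSQLString("'Hello\\nWorld'");
        System.out.println(unescaped);  // Prints "Hello", a newline, then "World".
      }
    }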
diff --git a/fe/src/main/java/org/apache/impala/analysis/StringLiteral.java b/fe/src/main/java/org/apache/impala/analysis/StringLiteral.java
index 797ad5e..09a542f 100644
--- a/fe/src/main/java/org/apache/impala/analysis/StringLiteral.java
+++ b/fe/src/main/java/org/apache/impala/analysis/StringLiteral.java
@@ -21,10 +21,10 @@ import java.io.IOException;
 import java.io.StringReader;
 import java.math.BigDecimal;
 
-import org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer;
 import org.apache.impala.catalog.ScalarType;
 import org.apache.impala.catalog.Type;
 import org.apache.impala.common.AnalysisException;
+import org.apache.impala.compat.MetastoreShim;
 import org.apache.impala.thrift.TExprNode;
 import org.apache.impala.thrift.TExprNodeType;
 import org.apache.impala.thrift.TStringLiteral;
@@ -91,7 +91,7 @@ public class StringLiteral extends LiteralExpr {
   public String getUnescapedValue() {
     // Unescape string exactly like Hive does. Hive's method assumes
     // quotes so we add them here to reuse Hive's code.
-    return BaseSemanticAnalyzer.unescapeSQLString("'" + getNormalizedValue()
+    return MetastoreShim.unescapeSQLString("'" + getNormalizedValue()
         + "'");
   }
 
diff --git a/fe/src/main/java/org/apache/impala/catalog/FeHBaseTable.java b/fe/src/main/java/org/apache/impala/catalog/FeHBaseTable.java
index fbb6d5d..df4f12e 100644
--- a/fe/src/main/java/org/apache/impala/catalog/FeHBaseTable.java
+++ b/fe/src/main/java/org/apache/impala/catalog/FeHBaseTable.java
@@ -44,7 +44,6 @@ import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.io.compress.Compression;
 import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.hive.hbase.HBaseSerDe;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
 import org.apache.hadoop.hive.metastore.api.MetaException;
 import org.apache.hadoop.hive.metastore.api.Table;
@@ -106,6 +105,16 @@ public interface FeHBaseTable extends FeTable {
     // Minimum number of regions that are checked to estimate the row count
     private static final int MIN_NUM_REGIONS_TO_CHECK = 5;
 
+    // Constants from Hive's HBaseSerDe.java copied here to avoid depending on
+    // hive-hbase-handler (and its transitive dependencies). These are user-facing
+    // properties and pretty much guaranteed not to change without breaking backwards
+    // compatibility. Hence it is safe to just copy them here.
+    private static final String HBASE_COLUMNS_MAPPING = "hbase.columns.mapping";
+    private static final String HBASE_TABLE_DEFAULT_STORAGE_TYPE =
+        "hbase.table.default.storage.type";
+    private static final String HBASE_KEY_COL = ":key";
+    private static final String HBASE_TABLE_NAME = "hbase.table.name";
+
     /**
      * Table client objects are thread-unsafe and cheap to create. The HBase docs
      * recommend creating a new one for each task and then closing when done.
@@ -127,13 +136,13 @@ public interface FeHBaseTable extends FeTable {
         org.apache.hadoop.hive.metastore.api.Table msTable)
         throws MetaException, SerDeException {
       Map<String, String> serdeParams = msTable.getSd().getSerdeInfo().getParameters();
-      String hbaseColumnsMapping = serdeParams.get(HBaseSerDe.HBASE_COLUMNS_MAPPING);
+      String hbaseColumnsMapping = serdeParams.get(HBASE_COLUMNS_MAPPING);
       if (hbaseColumnsMapping == null) {
         throw new MetaException("No hbase.columns.mapping defined in Serde.");
       }
 
       String hbaseTableDefaultStorageType =
-          msTable.getParameters().get(HBaseSerDe.HBASE_TABLE_DEFAULT_STORAGE_TYPE);
+          msTable.getParameters().get(HBASE_TABLE_DEFAULT_STORAGE_TYPE);
       boolean tableDefaultStorageIsBinary = false;
       if (hbaseTableDefaultStorageType != null &&
           !hbaseTableDefaultStorageType.isEmpty()) {
@@ -141,7 +150,7 @@ public interface FeHBaseTable extends FeTable {
           tableDefaultStorageIsBinary = true;
         } else if (!hbaseTableDefaultStorageType.equalsIgnoreCase("string")) {
           throw new SerDeException(
-              "Error: " + HBaseSerDe.HBASE_TABLE_DEFAULT_STORAGE_TYPE +
+              "Error: " + HBASE_TABLE_DEFAULT_STORAGE_TYPE +
                   " parameter must be specified as" + " 'string' or 'binary'; '" +
                   hbaseTableDefaultStorageType +
                   "' is not a valid specification for this table/serde property.");
@@ -224,7 +233,7 @@ public interface FeHBaseTable extends FeTable {
       }
 
       if (columnsMappingSpec.equals("") ||
-          columnsMappingSpec.equals(HBaseSerDe.HBASE_KEY_COL)) {
+          columnsMappingSpec.equals(HBASE_KEY_COL)) {
         throw new SerDeException("Error: hbase.columns.mapping specifies only " +
             "the HBase table row key. A valid Hive-HBase table must specify at " +
             "least one additional column.");
@@ -258,7 +267,7 @@ public interface FeHBaseTable extends FeTable {
               "badly formed column family, column qualifier specification.");
         }
 
-        if (colInfo.equals(HBaseSerDe.HBASE_KEY_COL)) {
+        if (colInfo.equals(HBASE_KEY_COL)) {
           Preconditions.checkState(fsStartIdxOffset == 0);
           rowKeyIndex = i;
           columnFamilies.add(colInfo);
@@ -311,13 +320,13 @@ public interface FeHBaseTable extends FeTable {
         } else {
           // error in storage specification
           throw new SerDeException(
-              "Error: " + HBaseSerDe.HBASE_COLUMNS_MAPPING + " storage specification " +
+              "Error: " + HBASE_COLUMNS_MAPPING + " storage specification " +
                   mappingSpec + " is not valid for column: " + fieldSchema.getName());
         }
       }
 
       if (rowKeyIndex == -1) {
-        columnFamilies.add(0, HBaseSerDe.HBASE_KEY_COL);
+        columnFamilies.add(0, HBASE_KEY_COL);
         columnQualifiers.add(0, null);
         colIsBinaryEncoded.add(0, supportsBinaryEncoding(fieldSchemas.get(0), tblName) &&
             tableDefaultStorageIsBinary);
@@ -488,10 +497,10 @@ public interface FeHBaseTable extends FeTable {
       // Give preference to TBLPROPERTIES over SERDEPROPERTIES
       // (really we should only use TBLPROPERTIES, so this is just
       // for backwards compatibility with the original specs).
-      String tableName = tbl.getParameters().get(HBaseSerDe.HBASE_TABLE_NAME);
+      String tableName = tbl.getParameters().get(HBASE_TABLE_NAME);
       if (tableName == null) {
         tableName =
-            tbl.getSd().getSerdeInfo().getParameters().get(HBaseSerDe.HBASE_TABLE_NAME);
+            tbl.getSd().getSerdeInfo().getParameters().get(HBASE_TABLE_NAME);
       }
       if (tableName == null) {
         tableName = tbl.getDbName() + "." + tbl.getTableName();
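
For context on the copied constants, a hedged sketch of the hbase.columns.mapping value
they are used to look up. The property value below is hypothetical and the splitting is
a simplification of parseColumnMapping(); it only illustrates the ":key" plus
family:qualifier format.

    import java.util.Arrays;

    public class HBaseMappingSketch {
      public static void main(String[] args) {
        String hbaseColumnsMapping = ":key,cf:col1,cf:col2";  // hypothetical property value
        for (String entry : Arrays.asList(hbaseColumnsMapping.split(","))) {
          if (entry.equals(":key")) {
            System.out.println("row key column");
          } else {
            System.out.println("column family:qualifier -> " + entry);
          }
        }
      }
    }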
diff --git a/fe/src/main/java/org/apache/impala/catalog/TableLoader.java b/fe/src/main/java/org/apache/impala/catalog/TableLoader.java
index 40638a1..1c6c815 100644
--- a/fe/src/main/java/org/apache/impala/catalog/TableLoader.java
+++ b/fe/src/main/java/org/apache/impala/catalog/TableLoader.java
@@ -22,6 +22,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.hadoop.hive.metastore.TableType;
 import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
+import org.apache.impala.compat.MetastoreShim;
 import org.apache.log4j.Logger;
 
 import com.google.common.base.Stopwatch;
@@ -36,10 +37,6 @@ import org.apache.impala.util.ThreadNameAnnotator;
 public class TableLoader {
   private static final Logger LOG = Logger.getLogger(TableLoader.class);
 
-  // Set of supported table types.
-  private static EnumSet<TableType> SUPPORTED_TABLE_TYPES = EnumSet.of(
-      TableType.EXTERNAL_TABLE, TableType.MANAGED_TABLE, TableType.VIRTUAL_VIEW);
-
   private final CatalogServiceCatalog catalog_;
 
   // Lock used to serialize calls to the Hive MetaStore to work around MetaStore
@@ -73,7 +70,7 @@ public class TableLoader {
       }
       // Check that the Hive TableType is supported
       TableType tableType = TableType.valueOf(msTbl.getTableType());
-      if (!SUPPORTED_TABLE_TYPES.contains(tableType)) {
+      if (!MetastoreShim.IMPALA_SUPPORTED_TABLE_TYPES.contains(tableType)) {
         throw new TableLoadingException(String.format(
             "Unsupported table type '%s' for: %s", tableType, fullTblName));
       }
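
A minimal sketch of the new table-type check, assuming the Hive-3 shim is on the
classpath; MATERIALIZED_VIEW is not supported when building against the bundled Hive-2
(per the shim comment above), which is why the supported set now lives in MetastoreShim
rather than in TableLoader.

    import org.apache.hadoop.hive.metastore.TableType;
    import org.apache.impala.compat.MetastoreShim;

    public class TableTypeCheckSketch {
      public static void main(String[] args) {
        TableType type = TableType.valueOf("MATERIALIZED_VIEW");
        // Prints "true" with the Hive-3 shim, since materialized views are supported.
        System.out.println(MetastoreShim.IMPALA_SUPPORTED_TABLE_TYPES.contains(type));
      }
    }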
diff --git a/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEvents.java b/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEvents.java
index 493b637..513390a 100644
--- a/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEvents.java
+++ b/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEvents.java
@@ -640,7 +640,7 @@ public class MetastoreEvents {
       Preconditions
           .checkNotNull(event.getMessage(), debugString("Event message is null"));
       CreateTableMessage createTableMessage =
-          MetastoreEventsProcessor.getMessageFactory().getDeserializer()
+          MetastoreEventsProcessor.getMessageDeserializer()
               .getCreateTableMessage(event.getMessage());
       try {
         msTbl_ = createTableMessage.getTableObj();
@@ -735,8 +735,8 @@ public class MetastoreEvents {
       super(catalog, metrics, event);
       Preconditions.checkArgument(MetastoreEventType.INSERT.equals(eventType_));
       InsertMessage insertMessage =
-          MetastoreEventsProcessor.getMessageFactory()
-              .getDeserializer().getInsertMessage(event.getMessage());
+          MetastoreEventsProcessor.getMessageDeserializer()
+              .getInsertMessage(event.getMessage());
       try {
         msTbl_ = Preconditions.checkNotNull(insertMessage.getTableObj());
         insertPartition_ = insertMessage.getPtnObj();
@@ -848,8 +848,8 @@ public class MetastoreEvents {
       super(catalog, metrics, event);
       Preconditions.checkArgument(MetastoreEventType.ALTER_TABLE.equals(eventType_));
       JSONAlterTableMessage alterTableMessage =
-          (JSONAlterTableMessage) MetastoreEventsProcessor.getMessageFactory()
-              .getDeserializer().getAlterTableMessage(event.getMessage());
+          (JSONAlterTableMessage) MetastoreEventsProcessor.getMessageDeserializer()
+              .getAlterTableMessage(event.getMessage());
       try {
         msTbl_ = Preconditions.checkNotNull(alterTableMessage.getTableObjBefore());
         tableAfter_ = Preconditions.checkNotNull(alterTableMessage.getTableObjAfter());
@@ -1021,8 +1021,8 @@ public class MetastoreEvents {
       super(catalog, metrics, event);
       Preconditions.checkArgument(MetastoreEventType.DROP_TABLE.equals(eventType_));
       JSONDropTableMessage dropTableMessage =
-          (JSONDropTableMessage) MetastoreEventsProcessor.getMessageFactory()
-              .getDeserializer().getDropTableMessage(event.getMessage());
+          (JSONDropTableMessage) MetastoreEventsProcessor.getMessageDeserializer()
+              .getDropTableMessage(event.getMessage());
       try {
         msTbl_ = Preconditions.checkNotNull(dropTableMessage.getTableObj());
       } catch (Exception e) {
@@ -1083,8 +1083,8 @@ public class MetastoreEvents {
       super(catalog, metrics, event);
       Preconditions.checkArgument(MetastoreEventType.CREATE_DATABASE.equals(eventType_));
       JSONCreateDatabaseMessage createDatabaseMessage =
-          (JSONCreateDatabaseMessage) MetastoreEventsProcessor.getMessageFactory()
-              .getDeserializer().getCreateDatabaseMessage(event.getMessage());
+          (JSONCreateDatabaseMessage) MetastoreEventsProcessor.getMessageDeserializer()
+              .getCreateDatabaseMessage(event.getMessage());
       try {
         createdDatabase_ =
             Preconditions.checkNotNull(createDatabaseMessage.getDatabaseObject());
@@ -1147,8 +1147,7 @@ public class MetastoreEvents {
       super(catalog, metrics, event);
       Preconditions.checkArgument(MetastoreEventType.ALTER_DATABASE.equals(eventType_));
       JSONAlterDatabaseMessage alterDatabaseMessage =
-          (JSONAlterDatabaseMessage) MetastoreEventsProcessor.getMessageFactory()
-              .getDeserializer()
+          (JSONAlterDatabaseMessage) MetastoreEventsProcessor.getMessageDeserializer()
               .getAlterDatabaseMessage(event.getMessage());
       try {
         alteredDatabase_ =
@@ -1209,8 +1208,8 @@ public class MetastoreEvents {
       super(catalog, metrics, event);
       Preconditions.checkArgument(MetastoreEventType.DROP_DATABASE.equals(eventType_));
       JSONDropDatabaseMessage dropDatabaseMessage =
-          (JSONDropDatabaseMessage) MetastoreEventsProcessor.getMessageFactory()
-              .getDeserializer().getDropDatabaseMessage(event.getMessage());
+          (JSONDropDatabaseMessage) MetastoreEventsProcessor.getMessageDeserializer()
+              .getDropDatabaseMessage(event.getMessage());
       try {
         droppedDatabase_ =
             Preconditions.checkNotNull(dropDatabaseMessage.getDatabaseObject());
@@ -1318,8 +1317,7 @@ public class MetastoreEvents {
       }
       try {
         AddPartitionMessage addPartitionMessage_ =
-            MetastoreEventsProcessor.getMessageFactory()
-                .getDeserializer()
+            MetastoreEventsProcessor.getMessageDeserializer()
                 .getAddPartitionMessage(event.getMessage());
         addedPartitions_ =
             Lists.newArrayList(addPartitionMessage_.getPartitionObjs());
@@ -1413,7 +1411,7 @@ public class MetastoreEvents {
       Preconditions.checkState(eventType_.equals(MetastoreEventType.ALTER_PARTITION));
       Preconditions.checkNotNull(event.getMessage());
       AlterPartitionMessage alterPartitionMessage =
-          MetastoreEventsProcessor.getMessageFactory().getDeserializer()
+          MetastoreEventsProcessor.getMessageDeserializer()
               .getAlterPartitionMessage(event.getMessage());
 
       try {
@@ -1496,8 +1494,7 @@ public class MetastoreEvents {
       Preconditions.checkState(eventType_.equals(MetastoreEventType.DROP_PARTITION));
       Preconditions.checkNotNull(event.getMessage());
       DropPartitionMessage dropPartitionMessage =
-          MetastoreEventsProcessor.getMessageFactory()
-              .getDeserializer()
+          MetastoreEventsProcessor.getMessageDeserializer()
               .getDropPartitionMessage(event.getMessage());
       try {
         msTbl_ = Preconditions.checkNotNull(dropPartitionMessage.getTableObj());
diff --git a/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEventsProcessor.java b/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEventsProcessor.java
index 5723961..8a6841e 100644
--- a/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEventsProcessor.java
+++ b/fe/src/main/java/org/apache/impala/catalog/events/MetastoreEventsProcessor.java
@@ -35,8 +35,7 @@ import org.apache.hadoop.hive.metastore.IMetaStoreClient;
 import org.apache.hadoop.hive.metastore.api.CurrentNotificationEventId;
 import org.apache.hadoop.hive.metastore.api.NotificationEvent;
 import org.apache.hadoop.hive.metastore.api.NotificationEventResponse;
-import org.apache.hadoop.hive.metastore.messaging.MessageFactory;
-import org.apache.hadoop.hive.metastore.messaging.json.ExtendedJSONMessageFactory;
+import org.apache.hadoop.hive.metastore.messaging.MessageDeserializer;
 import org.apache.impala.catalog.CatalogException;
 import org.apache.impala.catalog.CatalogServiceCatalog;
 import org.apache.impala.catalog.MetaStoreClientPool.MetaStoreClient;
@@ -44,6 +43,7 @@ import org.apache.impala.catalog.events.EventProcessorConfigValidator.Validation
 import org.apache.impala.catalog.events.MetastoreEvents.MetastoreEvent;
 import org.apache.impala.catalog.events.MetastoreEvents.MetastoreEventFactory;
 import org.apache.impala.common.Metrics;
+import org.apache.impala.compat.MetastoreShim;
 import org.apache.impala.thrift.TEventProcessorMetrics;
 import org.apache.impala.thrift.TEventProcessorMetricsSummaryResponse;
 import org.apache.impala.util.MetaStoreUtil;
@@ -175,13 +175,9 @@ public class MetastoreEventsProcessor implements ExternalEventsProcessor {
 
   private static final Logger LOG =
       LoggerFactory.getLogger(MetastoreEventsProcessor.class);
-  // Use ExtendedJSONMessageFactory to deserialize the event messages.
-  // ExtendedJSONMessageFactory adds additional information over JSONMessageFactory so
-  // that events are compatible with Sentry
-  // TODO this should be moved to JSONMessageFactory when Sentry switches to
-  // JSONMessageFactory
-  private static final MessageFactory messageFactory =
-      ExtendedJSONMessageFactory.getInstance();
+
+  private static final MessageDeserializer MESSAGE_DESERIALIZER =
+      MetastoreShim.getMessageDeserializer();
 
   private static MetastoreEventsProcessor instance;
 
@@ -632,7 +628,7 @@ public class MetastoreEventsProcessor implements ExternalEventsProcessor {
     return metastoreEventFactory_;
   }
 
-  public static MessageFactory getMessageFactory() {
-    return messageFactory;
+  public static MessageDeserializer getMessageDeserializer() {
+    return MESSAGE_DESERIALIZER;
   }
 }
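
A brief sketch of the resulting call pattern in the event-processing code: a single
shared MessageDeserializer is obtained from the shim and its typed accessors parse the
notification payloads. The event object would come from the HMS notification log in
practice; none is constructed here.

    import org.apache.hadoop.hive.metastore.api.NotificationEvent;
    import org.apache.hadoop.hive.metastore.messaging.MessageDeserializer;
    import org.apache.impala.catalog.events.MetastoreEventsProcessor;

    public class DeserializerSketch {
      static void handleAlterTable(NotificationEvent event) {
        MessageDeserializer deserializer =
            MetastoreEventsProcessor.getMessageDeserializer();
        // The same instance serves all event types; other accessors such as
        // getInsertMessage() and getDropTableMessage() follow the same pattern.
        deserializer.getAlterTableMessage(event.getMessage());
      }
    }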
diff --git a/fe/src/main/java/org/apache/impala/catalog/local/DirectMetaProvider.java b/fe/src/main/java/org/apache/impala/catalog/local/DirectMetaProvider.java
index b222ba6..ce75f1c 100644
--- a/fe/src/main/java/org/apache/impala/catalog/local/DirectMetaProvider.java
+++ b/fe/src/main/java/org/apache/impala/catalog/local/DirectMetaProvider.java
@@ -25,7 +25,6 @@ import java.util.Map;
 import java.util.Set;
 
 import org.apache.hadoop.fs.Path;
-import org.apache.hadoop.hive.common.FileUtils;
 import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
 import org.apache.hadoop.hive.metastore.api.Database;
 import org.apache.hadoop.hive.metastore.api.MetaException;
@@ -40,6 +39,7 @@ import org.apache.impala.catalog.HdfsPartition.FileDescriptor;
 import org.apache.impala.catalog.MetaStoreClientPool;
 import org.apache.impala.catalog.MetaStoreClientPool.MetaStoreClient;
 import org.apache.impala.common.Pair;
+import org.apache.impala.compat.MetastoreShim;
 import org.apache.impala.service.BackendConfig;
 import org.apache.impala.thrift.TBackendGflags;
 import org.apache.impala.thrift.TNetworkAddress;
@@ -201,7 +201,7 @@ class DirectMetaProvider implements MetaProvider {
         throw new MetaException("Unexpected number of partition values for " +
           "partition " + vals + " (expected " + partitionColumnNames.size() + ")");
       }
-      String partName = FileUtils.makePartName(partitionColumnNames, p.getValues());
+      String partName = MetastoreShim.makePartName(partitionColumnNames, p.getValues());
       if (!namesSet.contains(partName)) {
         throw new MetaException("HMS returned unexpected partition " + partName +
             " which was not requested. Requested: " + namesSet);
diff --git a/fe/src/main/java/org/apache/impala/hive/executor/UdfExecutor.java b/fe/src/main/java/org/apache/impala/hive/executor/UdfExecutor.java
index 50f907a..0c9560c 100644
--- a/fe/src/main/java/org/apache/impala/hive/executor/UdfExecutor.java
+++ b/fe/src/main/java/org/apache/impala/hive/executor/UdfExecutor.java
@@ -67,6 +67,8 @@ public class UdfExecutor {
   private final static TBinaryProtocol.Factory PROTOCOL_FACTORY =
     new TBinaryProtocol.Factory();
 
+  // TODO: UDF is deprecated in Hive and newer built-in functions are implemented using
+  // the GenericUDF interface; we should consider supporting GenericUDFs in the future.
   private UDF udf_;
   // setup by init() and cleared by close()
   private Method method_;
diff --git a/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java b/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
index a356749..3bf2434 100644
--- a/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
+++ b/fe/src/main/java/org/apache/impala/service/CatalogOpExecutor.java
@@ -3640,7 +3640,7 @@ public class CatalogOpExecutor {
               partition.getSd().setSerdeInfo(msTbl.getSd().getSerdeInfo().deepCopy());
               partition.getSd().setLocation(msTbl.getSd().getLocation() + "/" +
                   partName.substring(0, partName.length() - 1));
-              MetastoreShim.updatePartitionStatsFast(partition, warehouse);
+              MetastoreShim.updatePartitionStatsFast(partition, msTbl, warehouse);
             }
 
             // First add_partitions and then alter_partitions the successful ones with
diff --git a/fe/src/main/java/org/apache/impala/service/DescribeResultFactory.java b/fe/src/main/java/org/apache/impala/service/DescribeResultFactory.java
index d1081c6..e530bf5 100644
--- a/fe/src/main/java/org/apache/impala/service/DescribeResultFactory.java
+++ b/fe/src/main/java/org/apache/impala/service/DescribeResultFactory.java
@@ -25,13 +25,13 @@ import java.util.Objects;
 import org.apache.hadoop.hive.metastore.api.PrincipalPrivilegeSet;
 import org.apache.hadoop.hive.metastore.api.PrincipalType;
 import org.apache.hadoop.hive.metastore.api.PrivilegeGrantInfo;
-import org.apache.hadoop.hive.ql.metadata.formatting.MetaDataFormatUtils;
 import org.apache.impala.catalog.Column;
 import org.apache.impala.catalog.FeDb;
 import org.apache.impala.catalog.FeTable;
 import org.apache.impala.catalog.KuduColumn;
 import org.apache.impala.catalog.StructField;
 import org.apache.impala.catalog.StructType;
+import org.apache.impala.compat.MetastoreShim;
 import org.apache.impala.thrift.TColumnValue;
 import org.apache.impala.thrift.TDescribeOutputStyle;
 import org.apache.impala.thrift.TDescribeResult;
@@ -216,15 +216,15 @@ public class DescribeResultFactory {
     hiveTable.setTTable(msTable);
     StringBuilder sb = new StringBuilder();
     // First add all the columns (includes partition columns).
-    sb.append(MetaDataFormatUtils.getAllColumnsInformation(msTable.getSd().getCols(),
+    sb.append(MetastoreShim.getAllColumnsInformation(msTable.getSd().getCols(),
         msTable.getPartitionKeys(), true, false, true));
     // Add the extended table metadata information.
-    sb.append(MetaDataFormatUtils.getTableInformation(hiveTable));
+    sb.append(MetastoreShim.getTableInformation(hiveTable));
 
     for (String line: sb.toString().split("\n")) {
       // To match Hive's HiveServer2 output, split each line into multiple column
       // values based on the field delimiter.
-      String[] columns = line.split(MetaDataFormatUtils.FIELD_DELIM);
+      String[] columns = line.split("\t");
       TResultRow resultRow = new TResultRow();
       for (int i = 0; i < NUM_DESC_FORMATTED_RESULT_COLS; ++i) {
         TColumnValue colVal = new TColumnValue();
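
A small sketch of the line handling above, assuming DESCRIBE FORMATTED output has three
result columns (name, value, comment); the "\t" literal stands in for the
MetaDataFormatUtils.FIELD_DELIM constant that is no longer referenced.

    public class DescribeSplitSketch {
      // Assumed to mirror NUM_DESC_FORMATTED_RESULT_COLS in DescribeResultFactory.
      private static final int NUM_RESULT_COLS = 3;

      public static void main(String[] args) {
        String line = "Owner:             \timpala             \t";  // sample formatted line
        String[] columns = line.split("\t");
        for (int i = 0; i < NUM_RESULT_COLS; ++i) {
          // Missing trailing fields become empty strings, mirroring the padding applied
          // when the TResultRow is built.
          String value = i < columns.length ? columns[i] : "";
          System.out.println("col" + i + " = [" + value.trim() + "]");
        }
      }
    }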
diff --git a/fe/src/main/java/org/apache/impala/service/MetadataOp.java b/fe/src/main/java/org/apache/impala/service/MetadataOp.java
index 3d85aa9..0686ad99 100644
--- a/fe/src/main/java/org/apache/impala/service/MetadataOp.java
+++ b/fe/src/main/java/org/apache/impala/service/MetadataOp.java
@@ -40,6 +40,7 @@ import org.apache.impala.catalog.StructType;
 import org.apache.impala.catalog.Type;
 import org.apache.impala.catalog.local.InconsistentMetadataFetchException;
 import org.apache.impala.common.ImpalaException;
+import org.apache.impala.compat.MetastoreShim;
 import org.apache.impala.thrift.TColumn;
 import org.apache.impala.thrift.TColumnValue;
 import org.apache.impala.thrift.TResultRow;
@@ -64,8 +65,8 @@ public class MetadataOp {
   // Static column values
   private static final TColumnValue NULL_COL_VAL = new TColumnValue();
   private static final TColumnValue EMPTY_COL_VAL = createTColumnValue("");
-  private static final String TABLE_TYPE_TABLE = "TABLE";
-  private static final String TABLE_TYPE_VIEW = "VIEW";
+  public static final String TABLE_TYPE_TABLE = "TABLE";
+  public static final String TABLE_TYPE_VIEW = "VIEW";
 
   // Result set schema for each of the metadata operations.
   private final static TResultSetMetadata GET_CATALOGS_MD = new TResultSetMetadata();
@@ -317,7 +318,10 @@ public class MetadataOp {
           } else {
             if (table.getMetaStoreTable() != null) {
               comment = table.getMetaStoreTable().getParameters().get("comment");
-              tableType = mapToInternalTableType(table.getMetaStoreTable().getTableType());
+              String tableTypeStr = table.getMetaStoreTable().getTableType() == null ?
+                  null : table.getMetaStoreTable().getTableType().toUpperCase();
+              tableType = MetastoreShim.HMS_TO_IMPALA_TYPE
+                  .getOrDefault(tableTypeStr, TABLE_TYPE_TABLE);
             }
             columns.addAll(fe.getColumns(table, columnPatternMatcher, user));
           }
@@ -336,28 +340,6 @@ public class MetadataOp {
     return result;
   }
 
-  private static String mapToInternalTableType(String typeStr) {
-    String defaultTableType = TABLE_TYPE_TABLE;
-    TableType tType;
-
-    if (typeStr == null) return defaultTableType;
-    try {
-      tType = TableType.valueOf(typeStr.toUpperCase());
-    } catch (Exception e) {
-      return defaultTableType;
-    }
-    switch (tType) {
-      case EXTERNAL_TABLE:
-      case MANAGED_TABLE:
-      case INDEX_TABLE:
-        return TABLE_TYPE_TABLE;
-      case VIRTUAL_VIEW:
-        return TABLE_TYPE_VIEW;
-      default:
-        return defaultTableType;
-    }
-  }
-
   /**
    * Executes the GetCatalogs HiveServer2 operation and returns TResultSet.
    * Hive does not have a catalog concept. It always returns an empty result set.
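
A minimal sketch of the new mapping path, assuming the Hive-3 shim; unknown HMS table
types fall back to TABLE, matching the behavior of the removed mapToInternalTableType()
helper (the patched code above likewise passes a possibly-null key to getOrDefault()).

    import org.apache.impala.compat.MetastoreShim;
    import org.apache.impala.service.MetadataOp;

    public class TableTypeMappingSketch {
      static String toImpalaType(String hmsTableType) {
        String key = hmsTableType == null ? null : hmsTableType.toUpperCase();
        return MetastoreShim.HMS_TO_IMPALA_TYPE.getOrDefault(key, MetadataOp.TABLE_TYPE_TABLE);
      }

      public static void main(String[] args) {
        System.out.println(toImpalaType("MATERIALIZED_VIEW"));  // VIEW
        System.out.println(toImpalaType("EXTERNAL_TABLE"));     // TABLE
        System.out.println(toImpalaType("SOME_FUTURE_TYPE"));   // TABLE (fallback)
      }
    }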
diff --git a/fe/src/test/java/org/apache/impala/catalog/events/MetastoreEventsProcessorTest.java b/fe/src/test/java/org/apache/impala/catalog/events/MetastoreEventsProcessorTest.java
index bb2caed..c07d63c 100644
--- a/fe/src/test/java/org/apache/impala/catalog/events/MetastoreEventsProcessorTest.java
+++ b/fe/src/test/java/org/apache/impala/catalog/events/MetastoreEventsProcessorTest.java
@@ -56,19 +56,18 @@ import org.apache.hadoop.hive.metastore.IMetaStoreClient;
 import org.apache.hadoop.hive.metastore.api.CurrentNotificationEventId;
 import org.apache.hadoop.hive.metastore.api.Database;
 import org.apache.hadoop.hive.metastore.api.FieldSchema;
-import org.apache.hadoop.hive.metastore.api.GetPartitionsRequest;
 import org.apache.hadoop.hive.metastore.api.MetaException;
 import org.apache.hadoop.hive.metastore.api.NotificationEvent;
 import org.apache.hadoop.hive.metastore.api.Partition;
+import org.apache.hadoop.hive.metastore.api.SerDeInfo;
+import org.apache.hadoop.hive.metastore.api.StorageDescriptor;
 import org.apache.hadoop.hive.metastore.api.PrincipalType;
-import org.apache.hadoop.hive.metastore.client.builder.DatabaseBuilder;
-import org.apache.hadoop.hive.metastore.client.builder.PartitionBuilder;
-import org.apache.hadoop.hive.metastore.client.builder.TableBuilder;
 import org.apache.impala.authorization.NoopAuthorizationFactory;
 import org.apache.impala.authorization.NoopAuthorizationFactory.NoopAuthorizationManager;
 import org.apache.impala.catalog.CatalogException;
 import org.apache.impala.catalog.CatalogServiceCatalog;
 import org.apache.impala.catalog.DatabaseNotFoundException;
+import org.apache.impala.catalog.Db;
 import org.apache.impala.catalog.FeCatalogUtils;
 import org.apache.impala.catalog.FeFsPartition;
 import org.apache.impala.catalog.HdfsFileFormat;
@@ -89,6 +88,7 @@ import org.apache.impala.catalog.events.MetastoreEventsProcessor.EventProcessorS
 import org.apache.impala.common.FileSystemUtil;
 import org.apache.impala.common.ImpalaException;
 import org.apache.impala.common.Pair;
+import org.apache.impala.compat.MetastoreShim;
 import org.apache.impala.service.CatalogOpExecutor;
 import org.apache.impala.service.FeSupport;
 import org.apache.impala.testutil.CatalogServiceTestCatalog;
@@ -508,11 +508,12 @@ public class MetastoreEventsProcessorTest {
     String testDbParamKey = "testKey";
     String testDbParamVal = "testVal";
     eventsProcessor_.processEvents();
-    assertFalse("Newly created test database has db should not have parameter with key "
+    Db db = catalog_.getDb(TEST_DB_NAME);
+    assertNotNull(db);
+    // db parameters should not have the test key
+    assertTrue("Newly created test database should not have parameter with key"
             + testDbParamKey,
-        catalog_.getDb(TEST_DB_NAME)
-            .getMetaStoreDb()
-            .getParameters()
+        !db.getMetaStoreDb().isSetParameters() || !db.getMetaStoreDb().getParameters()
             .containsKey(testDbParamKey));
     // test change of parameters to the Database
     addDatabaseParameters(testDbParamKey, testDbParamVal);
@@ -730,11 +731,9 @@ public class MetastoreEventsProcessorTest {
       org.apache.hadoop.hive.metastore.api.Partition partition = null ;
       if (isPartitionInsert) {
         // Get the partition from metastore. This should now contain the new file.
-        GetPartitionsRequest request = new GetPartitionsRequest();
-        request.setDbName(dbName);
         try (MetaStoreClient metaStoreClient = catalog_.getMetaStoreClient()) {
-          partition = metaStoreClient.getHiveClient().getPartition(dbName,
-              tblName, "p1=testPartVal");
+          partition = metaStoreClient.getHiveClient().getPartition(dbName, tblName, "p1"
+              + "=testPartVal");
         }
       }
       // Simulate a load table
@@ -809,8 +808,8 @@ public class MetastoreEventsProcessorTest {
     fakeEvent.setTableName(msTbl.getTableName());
     fakeEvent.setDbName(msTbl.getDbName());
     fakeEvent.setEventId(eventIdGenerator.incrementAndGet());
-    fakeEvent.setMessage(MetastoreEventsProcessor.getMessageFactory()
-        .buildInsertMessage(msTbl, partition, isInsertOverwrite, newFiles).toString());
+    fakeEvent.setMessage(MetastoreShim.buildInsertMessage(msTbl, partition,
+       isInsertOverwrite, newFiles).toString());
     fakeEvent.setEventType("INSERT");
     return fakeEvent;
 
@@ -1052,7 +1051,7 @@ public class MetastoreEventsProcessorTest {
     // limitation : the DROP_TABLE event filtering expects that while processing events,
     // the CREATION_TIME of two tables with same name won't have the same
     // creation timestamp.
-    sleep(2000);
+    Thread.sleep(2000);
     dropTableFromImpala(TEST_DB_NAME, testTblName);
     // now catalogD does not have the table entry, create the table again
     createTableFromImpala(TEST_DB_NAME, testTblName, false);
@@ -1210,7 +1209,7 @@ public class MetastoreEventsProcessorTest {
     // limitation : the DROP_DB event filtering expects that while processing events,
     // the CREATION_TIME of two Databases with same name won't have the same
     // creation timestamp.
-    sleep(2000);
+    Thread.sleep(2000);
     dropDatabaseCascadeFromImpala(TEST_DB_NAME);
     assertNull(catalog_.getDb(TEST_DB_NAME));
     createDatabaseFromImpala(TEST_DB_NAME, "second");
@@ -1475,8 +1474,9 @@ public class MetastoreEventsProcessorTest {
     fakeEvent.setTableName(tblName);
     fakeEvent.setDbName(dbName);
     fakeEvent.setEventId(eventIdGenerator.incrementAndGet());
-    fakeEvent.setMessage(MetastoreEventsProcessor.getMessageFactory()
-        .buildAlterTableMessage(tableBefore, tableAfter).toString());
+    fakeEvent.setMessage(
+        MetastoreShim.buildAlterTableMessage(tableBefore, tableAfter, false, -1L)
+            .toString());
     fakeEvent.setEventType("ALTER_TABLE");
     return fakeEvent;
   }
@@ -2110,17 +2110,16 @@ public class MetastoreEventsProcessorTest {
 
   private void createDatabase(String dbName, Map<String, String> params)
       throws TException {
-    DatabaseBuilder databaseBuilder =
-        new DatabaseBuilder()
-            .setName(dbName)
-            .setDescription("Notification test database")
-            .setOwnerName("NotificationTestOwner")
-            .setOwnerType(PrincipalType.USER);
+    Database database = new Database();
+    database.setName(dbName);
+    database.setDescription("Notification test database");
+    database.setOwnerName("NotificationOwner");
+    database.setOwnerType(PrincipalType.USER);
     if (params != null && !params.isEmpty()) {
-      databaseBuilder.setParams(params);
+      database.setParameters(params);
     }
     try (MetaStoreClient msClient = catalog_.getMetaStoreClient()) {
-      msClient.getHiveClient().createDatabase(databaseBuilder.build());
+      msClient.getHiveClient().createDatabase(database);
     }
   }
 
@@ -2154,24 +2153,34 @@ public class MetastoreEventsProcessorTest {
   private org.apache.hadoop.hive.metastore.api.Table getTestTable(String dbName,
       String tblName, Map<String, String> params, boolean isPartitioned)
       throws MetaException {
-    TableBuilder tblBuilder =
-        new TableBuilder()
-            .setTableName(tblName)
-            .setDbName(dbName)
-            .addTableParam("tblParamKey", "tblParamValue")
-            .addCol("c1", "string", "c1 description")
-            .addCol("c2", "string", "c2 description")
-            .setSerdeLib(HdfsFileFormat.PARQUET.serializationLib())
-            .setInputFormat(HdfsFileFormat.PARQUET.inputFormat())
-            .setOutputFormat(HdfsFileFormat.PARQUET.outputFormat());
-    // if params are provided use them
+    org.apache.hadoop.hive.metastore.api.Table tbl =
+        new org.apache.hadoop.hive.metastore.api.Table();
+    tbl.setDbName(dbName);
+    tbl.setTableName(tblName);
+    tbl.putToParameters("tblParamKey", "tblParamValue");
+    List<FieldSchema> cols = Lists.newArrayList(
+        new FieldSchema("c1","string","c1 description"),
+        new FieldSchema("c2", "string","c2 description"));
+
+    StorageDescriptor sd = new StorageDescriptor();
+    sd.setCols(cols);
+    sd.setInputFormat(HdfsFileFormat.PARQUET.inputFormat());
+    sd.setOutputFormat(HdfsFileFormat.PARQUET.outputFormat());
+
+    SerDeInfo serDeInfo = new SerDeInfo();
+    serDeInfo.setSerializationLib(HdfsFileFormat.PARQUET.serializationLib());
+    sd.setSerdeInfo(serDeInfo);
+    tbl.setSd(sd);
+
     if (params != null && !params.isEmpty()) {
-      tblBuilder.setTableParams(params);
+      tbl.setParameters(params);
     }
     if (isPartitioned) {
-      tblBuilder.addPartCol("p1", "string", "partition p1 description");
+      List<FieldSchema> pcols = Lists.newArrayList(
+          new FieldSchema("p1","string","partition p1 description"));
+      tbl.setPartitionKeys(pcols);
     }
-    return tblBuilder.build();
+    return tbl;
   }
 
   /**
@@ -2643,8 +2652,6 @@ public class MetastoreEventsProcessorTest {
   private void alterPartitions(String tblName, List<List<String>> partValsList,
       String location)
       throws TException {
-    GetPartitionsRequest request = new GetPartitionsRequest();
-    request.setDbName(TEST_DB_NAME);
     List<Partition> partitions = new ArrayList<>();
     try (MetaStoreClient metaStoreClient = catalog_.getMetaStoreClient()) {
       for (List<String> partVal : partValsList) {
@@ -2667,14 +2674,12 @@ public class MetastoreEventsProcessorTest {
       org.apache.hadoop.hive.metastore.api.Table msTable =
           msClient.getHiveClient().getTable(dbName, tblName);
       for (List<String> partVals : partitionValues) {
-        partitions.add(
-            new PartitionBuilder()
-                .fromTable(msTable)
-                .setInputFormat(msTable.getSd().getInputFormat())
-                .setSerdeLib(msTable.getSd().getSerdeInfo().getSerializationLib())
-                .setOutputFormat(msTable.getSd().getOutputFormat())
-                .setValues(partVals)
-                .build());
+        Partition partition = new Partition();
+        partition.setDbName(msTable.getDbName());
+        partition.setTableName(msTable.getTableName());
+        partition.setSd(msTable.getSd().deepCopy());
+        partition.setValues(partVals);
+        partitions.add(partition);
       }
     }
     try (MetaStoreClient metaStoreClient = catalog_.getMetaStoreClient()) {
diff --git a/fe/src/test/java/org/apache/impala/hive/executor/UdfExecutorTest.java b/fe/src/test/java/org/apache/impala/hive/executor/UdfExecutorTest.java
index 7ca7f4b..fe4a1a4 100644
--- a/fe/src/test/java/org/apache/impala/hive/executor/UdfExecutorTest.java
+++ b/fe/src/test/java/org/apache/impala/hive/executor/UdfExecutorTest.java
@@ -37,7 +37,6 @@ import org.apache.hadoop.hive.ql.udf.UDFE;
 import org.apache.hadoop.hive.ql.udf.UDFExp;
 import org.apache.hadoop.hive.ql.udf.UDFFindInSet;
 import org.apache.hadoop.hive.ql.udf.UDFHex;
-import org.apache.hadoop.hive.ql.udf.UDFLength;
 import org.apache.hadoop.hive.ql.udf.UDFLn;
 import org.apache.hadoop.hive.ql.udf.UDFLog;
 import org.apache.hadoop.hive.ql.udf.UDFLog10;
@@ -450,7 +449,7 @@ public class UdfExecutorTest {
       throws ImpalaException, MalformedURLException, TException {
     TestHiveUdf(UDFAscii.class, createInt('1'), "123");
     TestHiveUdf(UDFFindInSet.class, createInt(2), "31", "12,31,23");
-    TestHiveUdf(UDFLength.class, createInt(5), "Hello");
+    // UDFLength was moved to GenericUDFLength in Hive 2.3 (HIVE-15979)
     TestHiveUdf(UDFRepeat.class, createText("abcabc"), "abc", createInt(2));
     TestHiveUdf(UDFReverse.class, createText("cba"), "abc");
     TestHiveUdf(UDFSpace.class, createText("    "), createInt(4));
diff --git a/fe/src/test/java/org/apache/impala/testutil/EmbeddedMetastoreClientPool.java b/fe/src/test/java/org/apache/impala/testutil/EmbeddedMetastoreClientPool.java
index 7e68d8e..a606d92 100644
--- a/fe/src/test/java/org/apache/impala/testutil/EmbeddedMetastoreClientPool.java
+++ b/fe/src/test/java/org/apache/impala/testutil/EmbeddedMetastoreClientPool.java
@@ -18,6 +18,7 @@
 package org.apache.impala.testutil;
 
 import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.hive.conf.HiveConf.ConfVars;
 import org.apache.impala.catalog.MetaStoreClientPool;
 import org.apache.hadoop.hive.conf.HiveConf;
 import org.apache.log4j.Logger;
@@ -46,6 +47,15 @@ public class EmbeddedMetastoreClientPool extends  MetaStoreClientPool {
     derbyDataStorePath_ = dbStorePath;
   }
 
+  // The embedded HMS instantiates a partition expression proxy which, by default,
+  // pulls in a lot of runtime dependencies from hive-exec. Since we don't depend on
+  // that functionality, we use DefaultPartitionExpressionProxy, a no-op implementation
+  // of the PartitionExpressionProxy interface. It throws UnsupportedOperationException
+  // from its APIs, so tests will flag it if we ever start relying on these features
+  // with the embedded HMS.
+  private static final String DEFAULT_PARTITION_EXPRESSION_PROXY_CLASS = "org.apache"
+      + ".hadoop.hive.metastore.DefaultPartitionExpressionProxy";
+
   /**
    * Generates the HiveConf required to connect to an embedded metastore backed by
    * derby DB.
@@ -57,13 +67,17 @@ public class EmbeddedMetastoreClientPool extends  MetaStoreClientPool {
     // hive.metastore.uris - empty
     // javax.jdo.option.ConnectionDriverName - org.apache.derby.jdbc.EmbeddedDriver
     // javax.jdo.option.ConnectionURL - jdbc:derby:;databaseName=<path>;create=true"
-    conf.set(HiveConf.ConfVars.METASTOREURIS.toString(), "");
-    conf.set(HiveConf.ConfVars.METASTORE_CONNECTION_DRIVER.toString(),
+    conf.set(ConfVars.METASTOREURIS.varname, "");
+    conf.set(ConfVars.METASTORE_CONNECTION_DRIVER.varname,
         "org.apache.derby.jdbc.EmbeddedDriver");
-    conf.setBoolean(HiveConf.ConfVars.METASTORE_SCHEMA_VERIFICATION.toString(), false);
-    conf.setBoolean(HiveConf.ConfVars.METASTORE_AUTO_CREATE_ALL.toString(), true);
-    conf.set(HiveConf.ConfVars.METASTORECONNECTURLKEY.toString(),
+    conf.setBoolean(ConfVars.METASTORE_SCHEMA_VERIFICATION.varname, false);
+    conf.setBoolean(ConfVars.METASTORE_AUTO_CREATE_ALL.varname, true);
+    conf.set(ConfVars.METASTORECONNECTURLKEY.varname,
         String.format(CONNECTION_URL_TEMPLATE, dbStorePath.toString()));
+    conf.set(ConfVars.METASTORE_EXPRESSION_PROXY_CLASS.varname,
+        DEFAULT_PARTITION_EXPRESSION_PROXY_CLASS);
+    // Disabling notification event listeners
+    conf.set(ConfVars.METASTORE_TRANSACTIONAL_EVENT_LISTENERS.varname, "");
     return conf;
   }
 
diff --git a/impala-parent/pom.xml b/impala-parent/pom.xml
index 1f8d878..a6bbbd9 100644
--- a/impala-parent/pom.xml
+++ b/impala-parent/pom.xml
@@ -37,6 +37,7 @@ under the License.
     <testExecutionMode>reduced</testExecutionMode>
     <hadoop.version>${env.IMPALA_HADOOP_VERSION}</hadoop.version>
     <hive.version>${env.IMPALA_HIVE_VERSION}</hive.version>
+    <hive.storage.api.version>2.3.0.${env.IMPALA_HIVE_VERSION}</hive.storage.api.version>
     <hive.major.version>${env.IMPALA_HIVE_MAJOR_VERSION}</hive.major.version>
     <ranger.version>${env.IMPALA_RANGER_VERSION}</ranger.version>
     <postgres.jdbc.version>${env.IMPALA_POSTGRES_JDBC_DRIVER_VERSION}</postgres.jdbc.version>
@@ -134,6 +135,14 @@ under the License.
         <enabled>false</enabled>
       </snapshots>
     </repository>
+    <repository>
+      <id>hwx.public.repo</id>
+      <url>http://nexus-private.hortonworks.com/nexus/content/groups/public</url>
+      <name>Hortonworks public repository</name>
+      <snapshots>
+        <enabled>true</enabled>
+      </snapshots>
+    </repository>
   </repositories>
 
   <pluginRepositories>
diff --git a/shaded-deps/.gitignore b/shaded-deps/.gitignore
new file mode 100644
index 0000000..916e17c
--- /dev/null
+++ b/shaded-deps/.gitignore
@@ -0,0 +1 @@
+dependency-reduced-pom.xml
diff --git a/shaded-deps/CMakeLists.txt b/shaded-deps/CMakeLists.txt
new file mode 100644
index 0000000..73d353c
--- /dev/null
+++ b/shaded-deps/CMakeLists.txt
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+add_custom_target(shaded-deps ALL DEPENDS impala-parent
+  COMMAND $ENV{IMPALA_HOME}/bin/mvn-quiet.sh -B install -DskipTests
+)
diff --git a/shaded-deps/pom.xml b/shaded-deps/pom.xml
new file mode 100644
index 0000000..6aad3c5
--- /dev/null
+++ b/shaded-deps/pom.xml
@@ -0,0 +1,108 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
+                      http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+  <!-- This pom creates a jar which excludes most of the hive-exec classes to
+reduce the dependency footprint of hive in Impala. It uses maven-shade-plugin
+to include only those classes which are needed by fe to compile and run in a
+hive-3 environment. Additionally, it relocates some of the transitive dependencies
+coming from hive-exec so that it does not conflict with Impala's version of
+the same dependencies
+  -->
+  <parent>
+    <groupId>org.apache.impala</groupId>
+    <artifactId>impala-parent</artifactId>
+    <version>0.1-SNAPSHOT</version>
+    <relativePath>../impala-parent/pom.xml</relativePath>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+  <groupId>org.apache.impala</groupId>
+  <artifactId>impala-minimal-hive-exec</artifactId>
+  <packaging>jar</packaging>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.hive</groupId>
+      <artifactId>hive-exec</artifactId>
+      <version>${hive.version}</version>
+    </dependency>
+  </dependencies>
+  <build>
+    <plugins>
+      <plugin>
+        <artifactId>maven-shade-plugin</artifactId>
+        <version>3.2.1</version>
+        <configuration>
+          <artifactSet>
+            <includes>
+              <include>org.apache.hive:hive-exec</include>
+              <include>org.apache.hadoop:hadoop-mapreduce-client</include>
+            </includes>
+          </artifactSet>
+          <relocations>
+            <relocation>
+              <pattern>com.google</pattern>
+              <shadedPattern>hiveexec.com.google</shadedPattern>
+            </relocation>
+            <relocation>
+              <pattern>org.joda.time</pattern>
+              <shadedPattern>hiveexec.org.joda.time</shadedPattern>
+            </relocation>
+          </relocations>
+          <filters>
+            <filter>
+              <artifact>org.apache.hive:hive-exec</artifact>
+              <includes>
+                <include>org/apache/hadoop/hive/conf/**/*</include>
+                <include>org/apache/hadoop/hive/common/FileUtils*</include>
+                <!-- Needed to support describe formatted command compat with Hive --> 
+                <include>org/apache/hadoop/hive/ql/metadata/**/*</include>
+                <include>org/apache/hadoop/hive/ql/parse/SemanticException.class</include>
+                <!-- Needed to support Hive udfs -->
+                <include>org/apache/hadoop/hive/ql/exec/*UDF*</include>
+                <include>org/apache/hadoop/hive/ql/exec/FunctionUtils*</include>
+                <include>org/apache/hadoop/hive/ql/parse/HiveLexer*</include>
+                <include>org/apache/hadoop/hive/ql/udf/**/*</include>
+                <!-- Many of the UDFs are annotated with their vectorized counterparts.
+                 Including those classes makes sure that loading the UDFs doesn't break -->
+                <include>org/apache/hadoop/hive/ql/exec/vector/expressions/**/*</include>
+                <include>org/apache/hive/common/HiveVersionAnnotation.class</include>
+                <include>org/apache/hive/common/HiveCompat*</include>
+                <include>org/apache/hive/common/util/**</include>
+                <include>org/apache/hive/service/rpc/thrift/**</include>
+                <include>org/apache/hadoop/hive/serde/**</include>
+                <include>org/apache/hadoop/hive/serde2/**</include>
+                <include>org/apache/hive/service/rpc/thrift/**</include>
+                <include>org/apache/hive/common/HiveVersionAnnotation.class</include>
+                <include>com/google/**</include>
+              </includes>
+            </filter>
+          </filters>
+        </configuration>
+        <executions>
+          <execution>
+            <phase>package</phase>
+            <goals>
+              <goal>shade</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+    </plugins>
+  </build>
+</project>
diff --git a/testdata/bin/run-hive-server.sh b/testdata/bin/run-hive-server.sh
index 6bdaaee..473a654 100755
--- a/testdata/bin/run-hive-server.sh
+++ b/testdata/bin/run-hive-server.sh
@@ -63,7 +63,8 @@ done
 # Kill for a clean start.
 ${CLUSTER_BIN}/kill-hive-server.sh &> /dev/null
 
-export HIVE_METASTORE_HADOOP_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=30010"
+export HIVE_METASTORE_HADOOP_OPTS="-Xdebug -Xrunjdwp:transport=dt_socket,server=y,\
+suspend=n,address=30010"
 
 # If this is CDP Hive we need to manually add the sentry jars in the classpath since
 # CDH Hive metastore scripts do not do so. This is currently to make sure that we can run
@@ -91,7 +92,7 @@ ${CLUSTER_BIN}/wait-for-metastore.py --transport=${METASTORE_TRANSPORT}
 
 if [ ${ONLY_METASTORE} -eq 0 ]; then
   # For Hive 3, we use Tez for execution. We have to add it to the HS2 classpath.
-  if $USE_CDP_HIVE; then
+  if ${USE_CDP_HIVE} ; then
     export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:${TEZ_HOME}/*
     # This is a little hacky, but Tez bundles a bunch of junk into lib/, such
     # as extra copies of the hadoop libraries, etc, and we want to avoid conflicts.
diff --git a/tests/custom_cluster/test_permanent_udfs.py b/tests/custom_cluster/test_permanent_udfs.py
index 5d28a2c..8e8819a 100644
--- a/tests/custom_cluster/test_permanent_udfs.py
+++ b/tests/custom_cluster/test_permanent_udfs.py
@@ -505,9 +505,15 @@ class TestUdfPersistence(CustomClusterTestSuite):
       ('udfbin', 'org.apache.hadoop.hive.ql.udf.UDFBin'),
       ('udfhex', 'org.apache.hadoop.hive.ql.udf.UDFHex'),
       ('udfconv', 'org.apache.hadoop.hive.ql.udf.UDFConv'),
+      # TODO UDFHour was moved from UDF to GenericUDF in Hive 3
+      # This test will fail when running against HMS-3 unless we add
+      # support for GenericUDFs to handle such cases
       ('udfhour', 'org.apache.hadoop.hive.ql.udf.UDFHour'),
       ('udflike', 'org.apache.hadoop.hive.ql.udf.UDFLike'),
       ('udfsign', 'org.apache.hadoop.hive.ql.udf.UDFSign'),
+      # TODO UDFYear moved to GenericUDF in Hive 3
+      # This test will fail when running against HMS-3 unless we add
+      # support for GenericUDFs
       ('udfyear', 'org.apache.hadoop.hive.ql.udf.UDFYear'),
       ('udfascii','org.apache.hadoop.hive.ql.udf.UDFAscii')
   ]


[impala] 03/04: IMPALA-8475: Fix unbound CMAKE_BUILD_TYPE_LIST in buildall.sh

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit 61e7ff19f6afd080420af0af36591f6174906410
Author: Joe McDonnell <jo...@cloudera.com>
AuthorDate: Tue Apr 30 16:31:04 2019 -0700

    IMPALA-8475: Fix unbound CMAKE_BUILD_TYPE_LIST in buildall.sh
    
    A recent change introduced a shell array CMAKE_BUILD_TYPE_LIST.
    For debug builds, it is empty, because no build types are passed
    into buildall.sh. This is a problem on CentOS, where the
    condition [[ -v CMAKE_BUILD_TYPE_LIST ]] evaluates to true even
    for an empty array. This causes us to execute code meant for
    non-empty arrays and trigger an unbound variable error.
    
    This changes the condition to [[ -n "${CMAKE_BUILD_TYPE_LIST:+1}" ]],
    which returns true only if the array is not empty.
    
    Testing:
     - Ran buildall.sh on CentOS 7 and Ubuntu 16.04.
    
    Change-Id: Ifd3b1af05af780d1a91cc781afff84b56f5aeb59
    Reviewed-on: http://gerrit.cloudera.org:8080/13204
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 buildall.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/buildall.sh b/buildall.sh
index 125fbcb..8eab21d 100755
--- a/buildall.sh
+++ b/buildall.sh
@@ -308,7 +308,7 @@ fi
 if [[ ${BUILD_TSAN} -eq 1 ]]; then
   CMAKE_BUILD_TYPE_LIST+=(TSAN)
 fi
-if [[ -v CMAKE_BUILD_TYPE_LIST ]]; then
+if [[ -n "${CMAKE_BUILD_TYPE_LIST:+1}" ]]; then
   if [[ ${#CMAKE_BUILD_TYPE_LIST[@]} -gt 1 ]]; then
     echo "ERROR: more than one CMake build type defined: ${CMAKE_BUILD_TYPE_LIST[@]}"
     exit 1


[impala] 04/04: IMPALA-8293 (Part 2): Add support for Ranger cache invalidation

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit eb97c746d2309fcf78ff3b50751cd5e27101539a
Author: Fredy Wijaya <fw...@cloudera.com>
AuthorDate: Fri Apr 26 15:39:16 2019 -0500

    IMPALA-8293 (Part 2): Add support for Ranger cache invalidation
    
    This patch adds support for Ranger cache invalidation via INVALIDATE
    METADATA and REFRESH AUTHORIZATION. This patch introduces a new catalog
    object type called AUTHZ_CACHE_INVALIDATION to allow broadcasting
    messages from Catalogd to Impalads to update their local Ranger caches.
    For better user experience, every GRANT/REVOKE statement will invalidate
    the Ranger authorization cache.
    
    Testing:
    - Replaced the sleep in test_ranger.py with INVALIDATE METADATA or
      REFRESH AUTHORIZATION
    - Ran all FE tests
    - Ran all E2E authorization tests
    
    Change-Id: Ia7160c082298e0b8cc2742dd3facbd4978581288
    Reviewed-on: http://gerrit.cloudera.org:8080/13134
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 be/src/catalog/catalog-util.cc                     |   6 ++
 common/thrift/CatalogObjects.thrift                |  13 +++
 .../impala/authorization/AuthorizationChecker.java |   5 +
 .../authorization/NoopAuthorizationFactory.java    |   3 +
 .../ranger/RangerAuthorizationChecker.java         |  16 ++++
 .../ranger/RangerAuthorizationFactory.java         |   2 +-
 .../ranger/RangerCatalogdAuthorizationManager.java |  33 ++++++-
 .../sentry/SentryAuthorizationChecker.java         |   5 +
 .../impala/catalog/AuthzCacheInvalidation.java     |  53 +++++++++++
 .../java/org/apache/impala/catalog/Catalog.java    |  27 ++++++
 .../impala/catalog/CatalogServiceCatalog.java      |  68 ++++++++++++++
 .../org/apache/impala/catalog/ImpaladCatalog.java  |  31 ++++++-
 .../impala/catalog/local/CatalogdMetaProvider.java |  29 +++++-
 .../apache/impala/service/FeCatalogManager.java    |  12 ++-
 .../java/org/apache/impala/service/Frontend.java   |   1 +
 .../impala/analysis/AuthorizationStmtTest.java     |   2 +-
 .../org/apache/impala/common/FrontendTestBase.java |   3 +
 .../apache/impala/testutil/ImpaladTestCatalog.java |   4 +-
 fe/src/test/resources/ranger-hive-security.xml     |   2 +-
 tests/authorization/test_ranger.py                 | 103 +++++++++------------
 20 files changed, 349 insertions(+), 69 deletions(-)
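
The mechanics behind this patch, as visible in the diffs below: catalogd keeps a single
AUTHZ_CACHE_INVALIDATION marker object (named "ranger"), bumps its catalog version on
every GRANT/REVOKE or REFRESH AUTHORIZATION, and ships it through the regular catalog
topic; when an impalad applies the updated marker, it asks its local Ranger plugin to
refresh its policies. A compact, self-contained sketch of that pattern follows; the
names are illustrative and are not the actual Impala classes.

    // Sketch of the version-bumped-marker invalidation pattern; illustrative names only.
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    class AuthzInvalidationSketch {
      interface AuthzCache { void refreshPoliciesAndTags(); }

      // Catalogd side: one named marker whose version is bumped per invalidation.
      static class CatalogSide {
        private final AtomicLong catalogVersion = new AtomicLong();
        final ConcurrentHashMap<String, Long> markers = new ConcurrentHashMap<>();

        long invalidate(String markerName) {
          long version = catalogVersion.incrementAndGet();
          markers.put(markerName, version);  // broadcast via the catalog topic in Impala
          return version;
        }
      }

      // Impalad side: applying a newer marker version triggers a local cache refresh.
      static class ImpaladSide {
        private final AuthzCache cache;
        private long lastSeenVersion = 0;

        ImpaladSide(AuthzCache cache) { this.cache = cache; }

        void applyMarkerUpdate(long markerVersion) {
          if (markerVersion > lastSeenVersion) {
            lastSeenVersion = markerVersion;
            cache.refreshPoliciesAndTags();  // Ranger plugin refresh in the real code
          }
        }
      }
    }

In the patch itself, the catalogd side corresponds to
CatalogServiceCatalog.incrementAuthzCacheInvalidationVersion() and the impalad side to
ImpaladCatalog/CatalogdMetaProvider invoking
AuthorizationChecker.invalidateAuthorizationCache().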

diff --git a/be/src/catalog/catalog-util.cc b/be/src/catalog/catalog-util.cc
index d2b027b..4e628d7 100644
--- a/be/src/catalog/catalog-util.cc
+++ b/be/src/catalog/catalog-util.cc
@@ -266,6 +266,12 @@ Status TCatalogObjectFromObjectName(const TCatalogObjectType::type& object_type,
       }
       break;
     }
+    case TCatalogObjectType::AUTHZ_CACHE_INVALIDATION: {
+      catalog_object->__set_type(object_type);
+      catalog_object->__set_authz_cache_invalidation(TAuthzCacheInvalidation());
+      catalog_object->authz_cache_invalidation.__set_marker_name(object_name);
+      break;
+    }
     case TCatalogObjectType::CATALOG:
     case TCatalogObjectType::UNKNOWN:
     default:
diff --git a/common/thrift/CatalogObjects.thrift b/common/thrift/CatalogObjects.thrift
index f8cb8d5..4682fb7 100644
--- a/common/thrift/CatalogObjects.thrift
+++ b/common/thrift/CatalogObjects.thrift
@@ -39,6 +39,8 @@ enum TCatalogObjectType {
   PRINCIPAL = 7
   PRIVILEGE = 8
   HDFS_CACHE_POOL = 9
+  // A catalog object type as a marker for authorization cache invalidation.
+  AUTHZ_CACHE_INVALIDATION = 10
 }
 
 enum TTableType {
@@ -586,6 +588,14 @@ struct THdfsCachePool {
   // the pool limits, pool owner, etc.
 }
 
+// Thrift representation of a TAuthzCacheInvalidation. This catalog object does not
+// contain any authorization data; it is used as a marker to perform an authorization
+// cache invalidation.
+struct TAuthzCacheInvalidation {
+  // Name of the authorization cache marker.
+  1: required string marker_name
+}
+
 // Represents state associated with the overall catalog.
 struct TCatalog {
   // The CatalogService service ID.
@@ -623,4 +633,7 @@ struct TCatalogObject {
 
   // Set iff object type is HDFS_CACHE_POOL
   10: optional THdfsCachePool cache_pool
+
+  // Set iff object type is AUTHZ_CACHE_INVALIDATION
+  11: optional TAuthzCacheInvalidation authz_cache_invalidation
 }
diff --git a/fe/src/main/java/org/apache/impala/authorization/AuthorizationChecker.java b/fe/src/main/java/org/apache/impala/authorization/AuthorizationChecker.java
index d4b205a..7d643cb 100644
--- a/fe/src/main/java/org/apache/impala/authorization/AuthorizationChecker.java
+++ b/fe/src/main/java/org/apache/impala/authorization/AuthorizationChecker.java
@@ -127,4 +127,9 @@ public abstract class AuthorizationChecker {
   public abstract void authorizeRowFilterAndColumnMask(User user,
       List<PrivilegeRequest> privilegeRequests)
       throws AuthorizationException, InternalException;
+
+  /**
+   * Invalidates an authorization cache.
+   */
+  public abstract void invalidateAuthorizationCache();
 }
diff --git a/fe/src/main/java/org/apache/impala/authorization/NoopAuthorizationFactory.java b/fe/src/main/java/org/apache/impala/authorization/NoopAuthorizationFactory.java
index 7284947..ed77fe1 100644
--- a/fe/src/main/java/org/apache/impala/authorization/NoopAuthorizationFactory.java
+++ b/fe/src/main/java/org/apache/impala/authorization/NoopAuthorizationFactory.java
@@ -199,6 +199,9 @@ public class NoopAuthorizationFactory implements AuthorizationFactory {
           List<PrivilegeRequest> privilegeRequests)
           throws AuthorizationException, InternalException {
       }
+
+      @Override
+      public void invalidateAuthorizationCache() {}
     };
   }
 
diff --git a/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java b/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java
index 561e6c2..1a0f2df 100644
--- a/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java
+++ b/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java
@@ -36,6 +36,8 @@ import org.apache.ranger.plugin.policyengine.RangerAccessRequestImpl;
 import org.apache.ranger.plugin.policyengine.RangerAccessResourceImpl;
 import org.apache.ranger.plugin.policyengine.RangerAccessResult;
 import org.apache.ranger.plugin.policyengine.RangerPolicyEngine;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import java.util.ArrayList;
 import java.util.EnumSet;
@@ -50,6 +52,9 @@ import java.util.Set;
  * Ranger plugin uses its own cache.
  */
 public class RangerAuthorizationChecker extends AuthorizationChecker {
+  private static final Logger LOG = LoggerFactory.getLogger(
+      RangerAuthorizationChecker.class);
+
   // These are Ranger access types (privileges).
   public static final String UPDATE_ACCESS_TYPE = "update";
   public static final String REFRESH_ACCESS_TYPE = "read";
@@ -174,6 +179,17 @@ public class RangerAuthorizationChecker extends AuthorizationChecker {
     }
   }
 
+  @Override
+  public void invalidateAuthorizationCache() {
+    long startTime = System.currentTimeMillis();
+    try {
+      plugin_.refreshPoliciesAndTags();
+    } finally {
+      LOG.debug("Refreshing Ranger policies took {} ms",
+          (System.currentTimeMillis() - startTime));
+    }
+  }
+
   /**
    * This method checks if column mask is enabled on the given columns and deny access
    * when column mask is enabled by throwing an {@link AuthorizationException}. This is
diff --git a/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationFactory.java b/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationFactory.java
index f7248cc..13eac99 100644
--- a/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationFactory.java
+++ b/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationFactory.java
@@ -90,6 +90,6 @@ public class RangerAuthorizationFactory implements AuthorizationFactory {
     RangerImpalaPlugin plugin = new RangerImpalaPlugin(config.getServiceType(),
         config.getAppId());
     plugin.init();
-    return new RangerCatalogdAuthorizationManager(() -> plugin);
+    return new RangerCatalogdAuthorizationManager(() -> plugin, catalog);
   }
 }
diff --git a/fe/src/main/java/org/apache/impala/authorization/ranger/RangerCatalogdAuthorizationManager.java b/fe/src/main/java/org/apache/impala/authorization/ranger/RangerCatalogdAuthorizationManager.java
index f772c00..87ca927 100644
--- a/fe/src/main/java/org/apache/impala/authorization/ranger/RangerCatalogdAuthorizationManager.java
+++ b/fe/src/main/java/org/apache/impala/authorization/ranger/RangerCatalogdAuthorizationManager.java
@@ -22,6 +22,8 @@ import org.apache.hadoop.hive.metastore.api.PrincipalType;
 import org.apache.impala.authorization.AuthorizationDelta;
 import org.apache.impala.authorization.AuthorizationManager;
 import org.apache.impala.authorization.User;
+import org.apache.impala.catalog.AuthzCacheInvalidation;
+import org.apache.impala.catalog.CatalogServiceCatalog;
 import org.apache.impala.common.ImpalaException;
 import org.apache.impala.common.InternalException;
 import org.apache.impala.thrift.TCreateDropRoleParams;
@@ -55,12 +57,17 @@ import java.util.function.Supplier;
  * Operations not supported by Ranger will throw an {@link UnsupportedOperationException}.
  */
 public class RangerCatalogdAuthorizationManager implements AuthorizationManager {
+  private static final String AUTHZ_CACHE_INVALIDATION_MARKER = "ranger";
+
   private final RangerDefaultAuditHandler auditHandler_;
   private final Supplier<RangerImpalaPlugin> plugin_;
+  private final CatalogServiceCatalog catalog_;
 
-  public RangerCatalogdAuthorizationManager(Supplier<RangerImpalaPlugin> pluginSupplier) {
+  public RangerCatalogdAuthorizationManager(Supplier<RangerImpalaPlugin> pluginSupplier,
+      CatalogServiceCatalog catalog) {
     auditHandler_ = new RangerDefaultAuditHandler();
     plugin_ = pluginSupplier;
+    catalog_ = catalog;
   }
 
   @Override
@@ -119,6 +126,7 @@ public class RangerCatalogdAuthorizationManager implements AuthorizationManager
         plugin_.get().getClusterName(), params.getPrivileges());
 
     grantPrivilege(requests);
+    refreshAuthorization(response);
   }
 
   @Override
@@ -129,6 +137,7 @@ public class RangerCatalogdAuthorizationManager implements AuthorizationManager
         plugin_.get().getClusterName(), params.getPrivileges());
 
     revokePrivilege(requests);
+    refreshAuthorization(response);
   }
 
   @Override
@@ -140,6 +149,7 @@ public class RangerCatalogdAuthorizationManager implements AuthorizationManager
         plugin_.get().getClusterName(), params.getPrivileges());
 
     grantPrivilege(requests);
+    refreshAuthorization(response);
   }
 
   @Override
@@ -151,6 +161,9 @@ public class RangerCatalogdAuthorizationManager implements AuthorizationManager
         plugin_.get().getClusterName(), params.getPrivileges());
 
     revokePrivilege(requests);
+    // Update the authorization refresh marker so that the Impalads can refresh their
+    // Ranger caches.
+    refreshAuthorization(response);
   }
 
   @VisibleForTesting
@@ -196,8 +209,22 @@ public class RangerCatalogdAuthorizationManager implements AuthorizationManager
 
   @Override
   public AuthorizationDelta refreshAuthorization(boolean resetVersions) {
-    // TODO: IMPALA-8293 (part 2)
-    return new AuthorizationDelta();
+    // Add a single AUTHZ_CACHE_INVALIDATION catalog object called "ranger" and increment
+    // its version to indicate a new cache invalidation request.
+    AuthorizationDelta authzDelta = new AuthorizationDelta();
+    AuthzCacheInvalidation authzCacheInvalidation =
+        catalog_.incrementAuthzCacheInvalidationVersion(AUTHZ_CACHE_INVALIDATION_MARKER);
+    authzDelta.addCatalogObjectAdded(authzCacheInvalidation.toTCatalogObject());
+    return authzDelta;
+  }
+
+  private void refreshAuthorization(TDdlExecResponse response) {
+    // Update the authorization cache invalidation marker so that the Impalads can
+    // invalidate their Ranger caches. This is needed for usability reasons, to make
+    // sure that what is updated in Ranger via grant/revoke is automatically reflected
+    // in the Impalad Ranger plugins.
+    AuthorizationDelta authzDelta = refreshAuthorization(false);
+    response.result.setUpdated_catalog_objects(authzDelta.getCatalogObjectsAdded());
   }
 
   public static List<GrantRevokeRequest> createGrantRevokeRequests(String grantor,
diff --git a/fe/src/main/java/org/apache/impala/authorization/sentry/SentryAuthorizationChecker.java b/fe/src/main/java/org/apache/impala/authorization/sentry/SentryAuthorizationChecker.java
index 209b7ca..c570600 100644
--- a/fe/src/main/java/org/apache/impala/authorization/sentry/SentryAuthorizationChecker.java
+++ b/fe/src/main/java/org/apache/impala/authorization/sentry/SentryAuthorizationChecker.java
@@ -80,6 +80,11 @@ public class SentryAuthorizationChecker extends AuthorizationChecker {
       throws AuthorizationException, InternalException {
   }
 
+  @Override
+  public void invalidateAuthorizationCache() {
+    // Authorization refresh in Sentry is done by updating {@link AuthorizationPolicy}.
+  }
+
   /*
    * Creates a new ResourceAuthorizationProvider based on the given configuration.
    */
diff --git a/fe/src/main/java/org/apache/impala/catalog/AuthzCacheInvalidation.java b/fe/src/main/java/org/apache/impala/catalog/AuthzCacheInvalidation.java
new file mode 100644
index 0000000..62b35a0
--- /dev/null
+++ b/fe/src/main/java/org/apache/impala/catalog/AuthzCacheInvalidation.java
@@ -0,0 +1,53 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+package org.apache.impala.catalog;
+
+import com.google.common.base.Preconditions;
+import org.apache.impala.thrift.TAuthzCacheInvalidation;
+import org.apache.impala.thrift.TCatalogObject;
+import org.apache.impala.thrift.TCatalogObjectType;
+
+/**
+ * This catalog object is used for authorization cache invalidation notification.
+ */
+public class AuthzCacheInvalidation extends CatalogObjectImpl {
+  private final TAuthzCacheInvalidation authzCacheInvalidation_;
+
+  public AuthzCacheInvalidation(String markerName) {
+    this(new TAuthzCacheInvalidation(markerName));
+  }
+
+  public AuthzCacheInvalidation(TAuthzCacheInvalidation authzCacheInvalidation) {
+    authzCacheInvalidation_ = Preconditions.checkNotNull(authzCacheInvalidation);
+  }
+
+  @Override
+  protected void setTCatalogObject(TCatalogObject catalogObject) {
+    catalogObject.setAuthz_cache_invalidation(authzCacheInvalidation_);
+  }
+
+  @Override
+  public TCatalogObjectType getCatalogObjectType() {
+    return TCatalogObjectType.AUTHZ_CACHE_INVALIDATION;
+  }
+
+  @Override
+  public String getName() { return authzCacheInvalidation_.getMarker_name(); }
+
+  public TAuthzCacheInvalidation toThrift() { return authzCacheInvalidation_; }
+}
diff --git a/fe/src/main/java/org/apache/impala/catalog/Catalog.java b/fe/src/main/java/org/apache/impala/catalog/Catalog.java
index c61e20a..2313d65 100644
--- a/fe/src/main/java/org/apache/impala/catalog/Catalog.java
+++ b/fe/src/main/java/org/apache/impala/catalog/Catalog.java
@@ -84,6 +84,10 @@ public abstract class Catalog implements AutoCloseable {
   protected final CatalogObjectCache<HdfsCachePool> hdfsCachePools_ =
       new CatalogObjectCache<HdfsCachePool>(false);
 
+  // Cache of authorization cache invalidation markers.
+  protected final CatalogObjectCache<AuthzCacheInvalidation> authzCacheInvalidation_ =
+      new CatalogObjectCache<>();
+
   /**
    * Creates a new instance of Catalog backed by a given MetaStoreClientPool.
    */
@@ -320,6 +324,13 @@ public abstract class Catalog implements AutoCloseable {
   }
 
   /**
+   * Gets the {@link AuthzCacheInvalidation} for a given marker name.
+   */
+  public AuthzCacheInvalidation getAuthzCacheInvalidation(String markerName) {
+    return authzCacheInvalidation_.get(Preconditions.checkNotNull(markerName));
+  }
+
+  /**
    * Release the Hive Meta Store Client resources. Can be called multiple times
    * (additional calls will be no-ops).
    */
@@ -533,6 +544,19 @@ public abstract class Catalog implements AutoCloseable {
         throw new CatalogException(String.format("%s '%s' does not contain " +
             "privilege: '%s'", Principal.toString(tmpPrincipal.getPrincipalType()),
             tmpPrincipal.getName(), privilegeName));
+      case AUTHZ_CACHE_INVALIDATION:
+        AuthzCacheInvalidation authzCacheInvalidation = getAuthzCacheInvalidation(
+            objectDesc.getAuthz_cache_invalidation().getMarker_name());
+        if (authzCacheInvalidation == null) {
+          // Authorization cache invalidation requires a single catalog object and it
+          // needs to exist.
+          throw new CatalogException("Authz cache invalidation not found: " +
+              objectDesc.getAuthz_cache_invalidation().getMarker_name());
+        }
+        result.setType(authzCacheInvalidation.getCatalogObjectType());
+        result.setCatalog_version(authzCacheInvalidation.getCatalogVersion());
+        result.setAuthz_cache_invalidation(authzCacheInvalidation.toThrift());
+        break;
       default: throw new IllegalStateException(
           "Unexpected TCatalogObject type: " + objectDesc.getType());
     }
@@ -587,6 +611,9 @@ public abstract class Catalog implements AutoCloseable {
             catalogObject.getCache_pool().getPool_name().toLowerCase();
       case DATA_SOURCE:
         return "DATA_SOURCE:" + catalogObject.getData_source().getName().toLowerCase();
+      case AUTHZ_CACHE_INVALIDATION:
+        return "AUTHZ_CACHE_INVALIDATION:" + catalogObject.getAuthz_cache_invalidation()
+            .getMarker_name().toLowerCase();
       case CATALOG:
         return "CATALOG_SERVICE_ID";
       default:
diff --git a/fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java b/fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java
index 62f4b50..5f74c45 100644
--- a/fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java
+++ b/fe/src/main/java/org/apache/impala/catalog/CatalogServiceCatalog.java
@@ -638,6 +638,7 @@ public class CatalogServiceCatalog extends Catalog {
         return obj;
       case PRIVILEGE:
       case PRINCIPAL:
+      case AUTHZ_CACHE_INVALIDATION:
         // The caching of this data on the impalad side is somewhat complex
         // and this code is also under some churn at the moment. So, we'll just publish
         // the full information rather than doing fetch-on-demand.
@@ -687,6 +688,9 @@ public class CatalogServiceCatalog extends Catalog {
     for (User user: getAllUsers()) {
       addPrincipalToCatalogDelta(user, ctx);
     }
+    for (AuthzCacheInvalidation authzCacheInvalidation: getAllAuthzCacheInvalidation()) {
+      addAuthzCacheInvalidationToCatalogDelta(authzCacheInvalidation, ctx);
+    }
     // Identify the catalog objects that were removed from the catalog for which their
     // versions are in range ('ctx.fromVersion', 'ctx.toVersion']. We need to make sure
     // that we don't include "deleted" objects that were re-added to the catalog.
@@ -895,6 +899,18 @@ public class CatalogServiceCatalog extends Catalog {
   }
 
   /**
+   * Get a snapshot view of all authz cache invalidation markers in the catalog.
+   */
+  private List<AuthzCacheInvalidation> getAllAuthzCacheInvalidation() {
+    versionLock_.readLock().lock();
+    try {
+      return ImmutableList.copyOf(authzCacheInvalidation_);
+    } finally {
+      versionLock_.readLock().unlock();
+    }
+  }
+
+  /**
    * Adds a database in the topic update if its version is in the range
    * ('ctx.fromVersion', 'ctx.toVersion']. It iterates through all the tables and
    * functions of this database to determine if they can be included in the topic update.
@@ -1181,6 +1197,22 @@ public class CatalogServiceCatalog extends Catalog {
   }
 
   /**
+   * Adds an authz cache invalidation to the topic update if its version is in the range
+   * ('ctx.fromVersion', 'ctx.toVersion'].
+   */
+  private void addAuthzCacheInvalidationToCatalogDelta(
+      AuthzCacheInvalidation authzCacheInvalidation, GetCatalogDeltaContext ctx)
+      throws TException  {
+    long authzCacheInvalidationVersion = authzCacheInvalidation.getCatalogVersion();
+    if (authzCacheInvalidationVersion <= ctx.fromVersion ||
+        authzCacheInvalidationVersion > ctx.toVersion) return;
+    TCatalogObject catalogObj = new TCatalogObject(
+        TCatalogObjectType.AUTHZ_CACHE_INVALIDATION, authzCacheInvalidationVersion);
+    catalogObj.setAuthz_cache_invalidation(authzCacheInvalidation.toThrift());
+    ctx.addCatalogObject(catalogObj, false);
+  }
+
+  /**
    * Returns all user defined functions (aggregate and scalar) in the specified database.
    * Functions are not returned in a defined order.
    */
@@ -2368,6 +2400,42 @@ public class CatalogServiceCatalog extends Catalog {
     }
   }
 
+  @Override
+  public AuthzCacheInvalidation getAuthzCacheInvalidation(String markerName) {
+    versionLock_.readLock().lock();
+    try {
+      return authzCacheInvalidation_.get(markerName);
+    } finally {
+      versionLock_.readLock().unlock();
+    }
+  }
+
+  /**
+   * Gets the {@link AuthzCacheInvalidation} for a given marker name or creates a new
+   * {@link AuthzCacheInvalidation} if it does not exist, and increments the catalog
+   * version of the {@link AuthzCacheInvalidation}. A catalog version update indicates
+   * an authorization cache invalidation notification.
+   *
+   * @param markerName the authorization cache invalidation marker name
+   * @return the updated {@link AuthzCacheInvalidation} instance
+   */
+  public AuthzCacheInvalidation incrementAuthzCacheInvalidationVersion(
+      String markerName) {
+    versionLock_.writeLock().lock();
+    try {
+      AuthzCacheInvalidation authzCacheInvalidation = getAuthzCacheInvalidation(
+          markerName);
+      if (authzCacheInvalidation == null) {
+        authzCacheInvalidation = new AuthzCacheInvalidation(markerName);
+        authzCacheInvalidation_.add(authzCacheInvalidation);
+      }
+      authzCacheInvalidation.setCatalogVersion(incrementAndGetCatalogVersion());
+      return authzCacheInvalidation;
+    } finally {
+      versionLock_.writeLock().unlock();
+    }
+  }
+
   /**
    * Increments the current Catalog version and returns the new value.
    */
diff --git a/fe/src/main/java/org/apache/impala/catalog/ImpaladCatalog.java b/fe/src/main/java/org/apache/impala/catalog/ImpaladCatalog.java
index 4c1f975..13cb620 100644
--- a/fe/src/main/java/org/apache/impala/catalog/ImpaladCatalog.java
+++ b/fe/src/main/java/org/apache/impala/catalog/ImpaladCatalog.java
@@ -21,12 +21,15 @@ import java.nio.ByteBuffer;
 import java.util.ArrayDeque;
 import java.util.Set;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.impala.analysis.TableName;
+import org.apache.impala.authorization.AuthorizationChecker;
 import org.apache.impala.authorization.AuthorizationPolicy;
 import org.apache.impala.common.InternalException;
 import org.apache.impala.common.Pair;
 import org.apache.impala.service.FeSupport;
+import org.apache.impala.thrift.TAuthzCacheInvalidation;
 import org.apache.impala.thrift.TCatalogObject;
 import org.apache.impala.thrift.TCatalogObjectType;
 import org.apache.impala.thrift.TDataSource;
@@ -92,9 +95,12 @@ public class ImpaladCatalog extends Catalog implements FeCatalog {
   // The addresses of the Kudu masters to use if no Kudu masters were explicitly provided.
   // Used during table creation.
   private final String defaultKuduMasterHosts_;
+  private final AtomicReference<? extends AuthorizationChecker> authzChecker_;
 
-  public ImpaladCatalog(String defaultKuduMasterHosts) {
+  public ImpaladCatalog(String defaultKuduMasterHosts,
+      AtomicReference<? extends AuthorizationChecker> authzChecker) {
     super();
+    authzChecker_ = authzChecker;
     addDb(BuiltinsDb.getInstance());
     defaultKuduMasterHosts_ = defaultKuduMasterHosts;
     // Ensure the contents of the CatalogObjectVersionSet instance are cleared when a
@@ -142,7 +148,8 @@ public class ImpaladCatalog extends Catalog implements FeCatalog {
       return catalogObject.getType() == TCatalogObjectType.DATABASE ||
           catalogObject.getType() == TCatalogObjectType.DATA_SOURCE ||
           catalogObject.getType() == TCatalogObjectType.HDFS_CACHE_POOL ||
-          catalogObject.getType() == TCatalogObjectType.PRINCIPAL;
+          catalogObject.getType() == TCatalogObjectType.PRINCIPAL ||
+          catalogObject.getType() == TCatalogObjectType.AUTHZ_CACHE_INVALIDATION;
     }
   }
 
@@ -312,6 +319,13 @@ public class ImpaladCatalog extends Catalog implements FeCatalog {
         cachePool.setCatalogVersion(catalogObject.getCatalog_version());
         hdfsCachePools_.add(cachePool);
         break;
+      case AUTHZ_CACHE_INVALIDATION:
+        AuthzCacheInvalidation authzCacheInvalidation = new AuthzCacheInvalidation(
+            catalogObject.getAuthz_cache_invalidation());
+        authzCacheInvalidation.setCatalogVersion(catalogObject.getCatalog_version());
+        authzCacheInvalidation_.add(authzCacheInvalidation);
+        authzChecker_.get().invalidateAuthorizationCache();
+        break;
       default:
         throw new IllegalStateException(
             "Unexpected TCatalogObjectType: " + catalogObject.getType());
@@ -354,6 +368,10 @@ public class ImpaladCatalog extends Catalog implements FeCatalog {
           hdfsCachePools_.remove(catalogObject.getCache_pool().getPool_name());
         }
         break;
+      case AUTHZ_CACHE_INVALIDATION:
+        removeAuthzCacheInvalidation(catalogObject.getAuthz_cache_invalidation(),
+            dropCatalogVersion);
+        break;
       default:
         throw new IllegalStateException(
             "Unexpected TCatalogObjectType: " + catalogObject.getType());
@@ -477,6 +495,15 @@ public class ImpaladCatalog extends Catalog implements FeCatalog {
     }
   }
 
+  private void removeAuthzCacheInvalidation(
+      TAuthzCacheInvalidation authzCacheInvalidation, long dropCatalogVersion) {
+    AuthzCacheInvalidation existingItem = authzCacheInvalidation_.get(
+        authzCacheInvalidation.getMarker_name());
+    if (existingItem != null && existingItem.getCatalogVersion() < dropCatalogVersion) {
+      authzCacheInvalidation_.remove(authzCacheInvalidation.getMarker_name());
+    }
+  }
+
   @Override // FeCatalog
   public boolean isReady() {
     return lastSyncedCatalogVersion_.get() > INITIAL_CATALOG_VERSION;
diff --git a/fe/src/main/java/org/apache/impala/catalog/local/CatalogdMetaProvider.java b/fe/src/main/java/org/apache/impala/catalog/local/CatalogdMetaProvider.java
index 368729d..a15d889 100644
--- a/fe/src/main/java/org/apache/impala/catalog/local/CatalogdMetaProvider.java
+++ b/fe/src/main/java/org/apache/impala/catalog/local/CatalogdMetaProvider.java
@@ -29,6 +29,7 @@ import java.util.concurrent.Callable;
 import java.util.concurrent.ExecutionException;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
 
 import org.apache.hadoop.hive.metastore.api.ColumnStatisticsObj;
 import org.apache.hadoop.hive.metastore.api.Database;
@@ -37,10 +38,13 @@ import org.apache.hadoop.hive.metastore.api.NoSuchObjectException;
 import org.apache.hadoop.hive.metastore.api.Partition;
 import org.apache.hadoop.hive.metastore.api.Table;
 import org.apache.hadoop.hive.metastore.api.UnknownDBException;
+import org.apache.impala.authorization.AuthorizationChecker;
 import org.apache.impala.authorization.AuthorizationPolicy;
+import org.apache.impala.catalog.AuthzCacheInvalidation;
 import org.apache.impala.catalog.Catalog;
 import org.apache.impala.catalog.CatalogDeltaLog;
 import org.apache.impala.catalog.CatalogException;
+import org.apache.impala.catalog.CatalogObjectCache;
 import org.apache.impala.catalog.Function;
 import org.apache.impala.catalog.HdfsPartition.FileDescriptor;
 import org.apache.impala.catalog.ImpaladCatalog.ObjectUpdateSequencer;
@@ -264,6 +268,10 @@ public class CatalogdMetaProvider implements MetaProvider {
    * StateStore. Currently this is _not_ "fetch-on-demand".
    */
   private final AuthorizationPolicy authPolicy_ = new AuthorizationPolicy();
+  // Cache of authorization refresh markers.
+  private final CatalogObjectCache<AuthzCacheInvalidation> authzCacheInvalidation_ =
+      new CatalogObjectCache<>();
+  private AtomicReference<? extends AuthorizationChecker> authzChecker_;
 
   public CatalogdMetaProvider(TBackendGflags flags) {
     Preconditions.checkArgument(flags.isSetLocal_catalog_cache_expiration_s());
@@ -305,6 +313,11 @@ public class CatalogdMetaProvider implements MetaProvider {
     return lastSeenCatalogVersion_.get() > Catalog.INITIAL_CATALOG_VERSION;
   }
 
+  public void setAuthzChecker(
+      AtomicReference<? extends AuthorizationChecker> authzChecker) {
+    authzChecker_ = authzChecker;
+  }
+
   /**
    * Send a GetPartialCatalogObject request to catalogd. This handles converting
    * non-OK status responses back to exceptions, performing various generic sanity
@@ -909,7 +922,8 @@ public class CatalogdMetaProvider implements MetaProvider {
       // may be cross-referential. So, just add them to the sequencer which ensures
       // we handle them in the right order later.
       if (obj.type == TCatalogObjectType.PRINCIPAL ||
-          obj.type == TCatalogObjectType.PRIVILEGE) {
+          obj.type == TCatalogObjectType.PRIVILEGE ||
+          obj.type == TCatalogObjectType.AUTHZ_CACHE_INVALIDATION) {
         authObjectSequencer.add(obj, isDelete);
       }
 
@@ -1001,6 +1015,19 @@ public class CatalogdMetaProvider implements MetaProvider {
             obj.getCatalog_version());
       }
       break;
+      case AUTHZ_CACHE_INVALIDATION:
+      if (!isDelete) {
+        AuthzCacheInvalidation authzCacheInvalidation = new AuthzCacheInvalidation(
+            obj.getAuthz_cache_invalidation());
+        authzCacheInvalidation.setCatalogVersion(obj.getCatalog_version());
+        authzCacheInvalidation_.add(authzCacheInvalidation);
+        Preconditions.checkState(authzChecker_ != null);
+        authzChecker_.get().invalidateAuthorizationCache();
+      } else {
+        authzCacheInvalidation_.remove(obj.getAuthz_cache_invalidation()
+            .getMarker_name());
+      }
+      break;
     default:
         throw new IllegalArgumentException("invalid type: " + obj.type);
     }
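
The new AUTHZ_CACHE_INVALIDATION case above is the coordinator-side half of the
Ranger cache invalidation added in this change: when catalogd propagates a
non-deleted invalidation marker, CatalogdMetaProvider records it and asks the
plugged-in AuthorizationChecker to drop its cached decisions; a deleted marker is
simply removed from the local cache. The snippet below is a minimal, hypothetical
sketch of that pattern only; the Checker and CachingChecker names are invented,
and just the invalidateAuthorizationCache() call mirrors the interface method
this change introduces.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicReference;

    public class AuthzInvalidationSketch {
      // Hypothetical stand-in for an authorization checker; only
      // invalidateAuthorizationCache() mirrors the method used in the diff.
      interface Checker {
        boolean hasAccess(String resource);
        void invalidateAuthorizationCache();
      }

      // A checker that memoizes decisions until told to flush them.
      static class CachingChecker implements Checker {
        private final ConcurrentHashMap<String, Boolean> cache = new ConcurrentHashMap<>();

        @Override
        public boolean hasAccess(String resource) {
          // Placeholder policy: grant and remember the decision.
          return cache.computeIfAbsent(resource, r -> true);
        }

        @Override
        public void invalidateAuthorizationCache() {
          cache.clear();  // next hasAccess() re-evaluates against fresh policies
        }
      }

      public static void main(String[] args) {
        AtomicReference<Checker> authzChecker =
            new AtomicReference<>(new CachingChecker());

        authzChecker.get().hasAccess("db1.tbl1");  // decision now cached

        // Simulate a non-delete AUTHZ_CACHE_INVALIDATION update from catalogd.
        boolean isDelete = false;
        if (!isDelete) {
          authzChecker.get().invalidateAuthorizationCache();  // flush cached decisions
        }
      }
    }

Routing the marker through the same sequencer as PRINCIPAL and PRIVILEGE objects
matters for ordering: as the existing comment in the hunk notes, those updates can
be cross-referential, so the flush is applied only after the privilege updates it
accompanies have been handled.
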
diff --git a/fe/src/main/java/org/apache/impala/service/FeCatalogManager.java b/fe/src/main/java/org/apache/impala/service/FeCatalogManager.java
index 75e7e16..ae7c471 100644
--- a/fe/src/main/java/org/apache/impala/service/FeCatalogManager.java
+++ b/fe/src/main/java/org/apache/impala/service/FeCatalogManager.java
@@ -18,6 +18,8 @@ package org.apache.impala.service;
 
 import java.util.concurrent.atomic.AtomicReference;
 
+import com.google.common.base.Preconditions;
+import org.apache.impala.authorization.AuthorizationChecker;
 import org.apache.impala.catalog.CatalogException;
 import org.apache.impala.catalog.FeCatalog;
 import org.apache.impala.catalog.ImpaladCatalog;
@@ -39,6 +41,8 @@ public abstract class FeCatalogManager {
   private static String DEFAULT_KUDU_MASTER_HOSTS =
       BackendConfig.INSTANCE.getBackendCfg().kudu_master_hosts;
 
+  protected AtomicReference<? extends AuthorizationChecker> authzChecker_;
+
   /**
    * @return the appropriate implementation based on the current backend
    * configuration.
@@ -59,6 +63,11 @@ public abstract class FeCatalogManager {
     return new TestImpl(testCatalog);
   }
 
+  public void setAuthzChecker(
+      AtomicReference<? extends AuthorizationChecker> authzChecker) {
+    authzChecker_ = Preconditions.checkNotNull(authzChecker);
+  }
+
   /**
    * @return a Catalog instance to be used for a request or query. Depending
    * on the catalog implementation this may either be a reused instance or a
@@ -119,7 +128,7 @@ public abstract class FeCatalogManager {
     }
 
     private ImpaladCatalog createNewCatalog() {
-      return new ImpaladCatalog(DEFAULT_KUDU_MASTER_HOSTS);
+      return new ImpaladCatalog(DEFAULT_KUDU_MASTER_HOSTS, authzChecker_);
     }
   }
 
@@ -133,6 +142,7 @@ public abstract class FeCatalogManager {
 
     @Override
     public FeCatalog getOrCreateCatalog() {
+      PROVIDER.setAuthzChecker(authzChecker_);
       return new LocalCatalog(PROVIDER, DEFAULT_KUDU_MASTER_HOSTS);
     }
 
diff --git a/fe/src/main/java/org/apache/impala/service/Frontend.java b/fe/src/main/java/org/apache/impala/service/Frontend.java
index 16f1580..cb9f9ab 100644
--- a/fe/src/main/java/org/apache/impala/service/Frontend.java
+++ b/fe/src/main/java/org/apache/impala/service/Frontend.java
@@ -263,6 +263,7 @@ public class Frontend {
     } else {
       authzChecker_.set(authzFactory.newAuthorizationChecker());
     }
+    catalogManager_.setAuthzChecker(authzChecker_);
     authzManager_ = authzFactory.newAuthorizationManager(catalogManager_,
         authzChecker_::get);
     impaladTableUsageTracker_ = ImpaladTableUsageTracker.createFromConfig(
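
Note that Frontend hands the catalog manager the AtomicReference holding the
checker rather than the checker instance itself. The following is a minimal
sketch, with entirely hypothetical names, of why that indirection is useful: a
later set() on the shared reference is immediately visible to anything reading
it through get(), so the catalog side keeps flushing whichever checker is
currently installed without being re-wired.

    import java.util.concurrent.atomic.AtomicReference;

    public class SharedCheckerRefSketch {
      // Hypothetical checker type; name() only exists to make the swap visible.
      interface Checker {
        String name();
      }

      // Stands in for a catalog manager: it stores the reference, not the checker.
      static class CatalogManagerLike {
        private AtomicReference<? extends Checker> authzChecker;

        void setAuthzChecker(AtomicReference<? extends Checker> checker) {
          authzChecker = checker;
        }

        String checkerInUse() {
          return authzChecker.get().name();  // always the currently installed checker
        }
      }

      public static void main(String[] args) {
        AtomicReference<Checker> ref = new AtomicReference<>(() -> "checker-v1");
        CatalogManagerLike manager = new CatalogManagerLike();
        manager.setAuthzChecker(ref);

        System.out.println(manager.checkerInUse());  // checker-v1
        ref.set(() -> "checker-v2");                 // checker swapped out later
        System.out.println(manager.checkerInUse());  // checker-v2, no re-wiring needed
      }
    }
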
diff --git a/fe/src/test/java/org/apache/impala/analysis/AuthorizationStmtTest.java b/fe/src/test/java/org/apache/impala/analysis/AuthorizationStmtTest.java
index 0a9e64a..df543cf 100644
--- a/fe/src/test/java/org/apache/impala/analysis/AuthorizationStmtTest.java
+++ b/fe/src/test/java/org/apache/impala/analysis/AuthorizationStmtTest.java
@@ -3276,7 +3276,7 @@ public class AuthorizationStmtTest extends FrontendTestBase {
   private abstract class WithRanger implements WithPrincipal {
     private final List<GrantRevokeRequest> requests = new ArrayList<>();
     private final RangerCatalogdAuthorizationManager authzManager =
-        new RangerCatalogdAuthorizationManager(() -> rangerImpalaPlugin_);
+        new RangerCatalogdAuthorizationManager(() -> rangerImpalaPlugin_, null);
 
     @Override
     public void init(TPrivilege[]... privileges) throws ImpalaException {
diff --git a/fe/src/test/java/org/apache/impala/common/FrontendTestBase.java b/fe/src/test/java/org/apache/impala/common/FrontendTestBase.java
index 59cd604..8e95f3c 100644
--- a/fe/src/test/java/org/apache/impala/common/FrontendTestBase.java
+++ b/fe/src/test/java/org/apache/impala/common/FrontendTestBase.java
@@ -350,6 +350,9 @@ public class FrontendTestBase extends AbstractFrontendTest {
               List<PrivilegeRequest> privilegeRequests)
               throws AuthorizationException, InternalException {
           }
+
+          @Override
+          public void invalidateAuthorizationCache() {}
         };
       }
 
diff --git a/fe/src/test/java/org/apache/impala/testutil/ImpaladTestCatalog.java b/fe/src/test/java/org/apache/impala/testutil/ImpaladTestCatalog.java
index 45cbed6..81bb9bd 100644
--- a/fe/src/test/java/org/apache/impala/testutil/ImpaladTestCatalog.java
+++ b/fe/src/test/java/org/apache/impala/testutil/ImpaladTestCatalog.java
@@ -55,7 +55,7 @@ public class ImpaladTestCatalog extends ImpaladCatalog {
    * Takes an {@link AuthorizationFactory} to bootstrap the backing CatalogServiceCatalog.
    */
   public ImpaladTestCatalog(AuthorizationFactory authzFactory) {
-    super("127.0.0.1");
+    super("127.0.0.1", null);
     CatalogServiceCatalog catalogServerCatalog =
         CatalogServiceTestCatalog.createWithAuth(authzFactory);
     authPolicy_ = catalogServerCatalog.getAuthPolicy();
@@ -68,7 +68,7 @@ public class ImpaladTestCatalog extends ImpaladCatalog {
    * Creates ImpaladTestCatalog backed by a given catalog instance.
    */
   public ImpaladTestCatalog(CatalogServiceCatalog catalog) {
-    super("127.0.0.1");
+    super("127.0.0.1", null);
     srcCatalog_ = Preconditions.checkNotNull(catalog);
     authPolicy_ = srcCatalog_.getAuthPolicy();
     setIsReady(true);
diff --git a/fe/src/test/resources/ranger-hive-security.xml b/fe/src/test/resources/ranger-hive-security.xml
index 4cc3160..4fbda3e 100644
--- a/fe/src/test/resources/ranger-hive-security.xml
+++ b/fe/src/test/resources/ranger-hive-security.xml
@@ -44,7 +44,7 @@
 
   <property>
     <name>ranger.plugin.hive.policy.pollIntervalMs</name>
-    <value>5000</value>
+    <value>30000</value>
     <description>
       Polling interval in milliseconds to poll for changes in policies.
     </description>
diff --git a/tests/authorization/test_ranger.py b/tests/authorization/test_ranger.py
index f2900b0..b1e62c1 100644
--- a/tests/authorization/test_ranger.py
+++ b/tests/authorization/test_ranger.py
@@ -21,7 +21,6 @@ import grp
 import json
 import pytest
 import requests
-import time
 
 from getpass import getuser
 from tests.common.custom_cluster_test_suite import CustomClusterTestSuite
@@ -45,7 +44,8 @@ class TestRanger(CustomClusterTestSuite):
     impalad_args=IMPALAD_ARGS, catalogd_args=CATALOGD_ARGS)
   def test_grant_revoke_with_catalog_v1(self, unique_name):
     """Tests grant/revoke with catalog v1."""
-    self._test_grant_revoke(unique_name)
+    self._test_grant_revoke(unique_name, [None, "invalidate metadata",
+                                          "refresh authorization"])
 
   @pytest.mark.execute_serially
   @CustomClusterTestSuite.with_args(
@@ -55,9 +55,10 @@ class TestRanger(CustomClusterTestSuite):
                                    "--catalog_topic_mode=minimal"))
   def test_grant_revoke_with_local_catalog(self, unique_name):
     """Tests grant/revoke with catalog v2 (local catalog)."""
-    self._test_grant_revoke(unique_name)
+    # Catalog v2 does not support global invalidate metadata.
+    self._test_grant_revoke(unique_name, [None, "refresh authorization"])
 
-  def _test_grant_revoke(self, unique_name):
+  def _test_grant_revoke(self, unique_name, refresh_statements):
     user = getuser()
     admin_client = self.create_impala_client()
     unique_database = unique_name + "_db"
@@ -65,42 +66,41 @@ class TestRanger(CustomClusterTestSuite):
     group = grp.getgrnam(getuser()).gr_name
     test_data = [(user, "USER"), (group, "GROUP")]
 
-    for data in test_data:
-      ident = data[0]
-      kw = data[1]
-
-      try:
-        # Set-up temp database/table
-        admin_client.execute("drop database if exists {0} cascade"
-                             .format(unique_database), user=ADMIN)
-        admin_client.execute("create database {0}".format(unique_database), user=ADMIN)
-        admin_client.execute("create table {0}.{1} (x int)"
-                             .format(unique_database, unique_table), user=ADMIN)
-
-        self.execute_query_expect_success(admin_client,
-                                          "grant select on database {0} to {1} {2}"
-                                          .format(unique_database, kw, ident), user=ADMIN)
-        # TODO: IMPALA-8293 use refresh authorization
-        time.sleep(10)
-        result = self.execute_query("show grant {0} {1} on database {2}"
-                                    .format(kw, ident, unique_database))
-        TestRanger._check_privileges(result, [
-          [kw, ident, unique_database, "", "", "", "*", "select", "false"],
-          [kw, ident, unique_database, "*", "*", "", "", "select", "false"]])
-        self.execute_query_expect_success(admin_client,
-                                          "revoke select on database {0} from {1} "
-                                          "{2}".format(unique_database, kw, ident),
-                                          user=ADMIN)
-        # TODO: IMPALA-8293 use refresh authorization
-        time.sleep(10)
-        result = self.execute_query("show grant {0} {1} on database {2}"
-                                    .format(kw, ident, unique_database))
-        TestRanger._check_privileges(result, [])
-      finally:
-        admin_client.execute("revoke select on database {0} from {1} {2}"
-                             .format(unique_database, kw, ident), user=ADMIN)
-        admin_client.execute("drop database if exists {0} cascade"
-                             .format(unique_database), user=ADMIN)
+    for refresh_stmt in refresh_statements:
+      for data in test_data:
+        ident = data[0]
+        kw = data[1]
+        try:
+          # Set-up temp database/table
+          admin_client.execute("drop database if exists {0} cascade"
+                               .format(unique_database), user=ADMIN)
+          admin_client.execute("create database {0}".format(unique_database), user=ADMIN)
+          admin_client.execute("create table {0}.{1} (x int)"
+                               .format(unique_database, unique_table), user=ADMIN)
+
+          self.execute_query_expect_success(admin_client,
+                                            "grant select on database {0} to {1} {2}"
+                                            .format(unique_database, kw, ident),
+                                            user=ADMIN)
+          self._refresh_authorization(admin_client, refresh_stmt)
+          result = self.execute_query("show grant {0} {1} on database {2}"
+                                      .format(kw, ident, unique_database))
+          TestRanger._check_privileges(result, [
+            [kw, ident, unique_database, "", "", "", "*", "select", "false"],
+            [kw, ident, unique_database, "*", "*", "", "", "select", "false"]])
+          self.execute_query_expect_success(admin_client,
+                                            "revoke select on database {0} from {1} "
+                                            "{2}".format(unique_database, kw, ident),
+                                            user=ADMIN)
+          self._refresh_authorization(admin_client, refresh_stmt)
+          result = self.execute_query("show grant {0} {1} on database {2}"
+                                      .format(kw, ident, unique_database))
+          TestRanger._check_privileges(result, [])
+        finally:
+          admin_client.execute("revoke select on database {0} from {1} {2}"
+                               .format(unique_database, kw, ident), user=ADMIN)
+          admin_client.execute("drop database if exists {0} cascade"
+                               .format(unique_database), user=ADMIN)
 
   @CustomClusterTestSuite.with_args(
     impalad_args=IMPALAD_ARGS, catalogd_args=CATALOGD_ARGS)
@@ -123,8 +123,7 @@ class TestRanger(CustomClusterTestSuite):
                                         "grant select on database {0} to user {1} with "
                                         "grant option".format(unique_database, user1),
                                         user=ADMIN)
-      # TODO: IMPALA-8293 use refresh authorization
-      time.sleep(10)
+
       # Verify user 1 has with_grant privilege on unique_database
       result = self.execute_query("show grant user {0} on database {1}"
                                   .format(user1, unique_database))
@@ -136,8 +135,7 @@ class TestRanger(CustomClusterTestSuite):
       self.execute_query_expect_success(admin_client, "revoke grant option for select "
                                                       "on database {0} from user {1}"
                                         .format(unique_database, user1), user=ADMIN)
-      # TODO: IMPALA-8293 use refresh authorization
-      time.sleep(10)
+
       # User 1 can no longer grant privileges on unique_database
       result = self.execute_query("show grant user {0} on database {1}"
                                   .format(user1, unique_database))
@@ -196,7 +194,6 @@ class TestRanger(CustomClusterTestSuite):
 
       admin_client.execute("grant select on database {0} to group {1}"
                            .format(unique_db, group))
-      time.sleep(10)
       result = self.client.execute("show grant user {0} on database {1}"
                                    .format(user, unique_db))
       TestRanger._check_privileges(result, [
@@ -211,7 +208,6 @@ class TestRanger(CustomClusterTestSuite):
     try:
       for privilege in privileges:
         admin_client.execute("grant {0} on server to user {1}".format(privilege, user))
-      time.sleep(10)
       result = self.client.execute("show grant user {0} on server".format(user))
       TestRanger._check_privileges(result, [
         ["USER", user, "", "", "", "*", "", "alter", "false"],
@@ -231,7 +227,6 @@ class TestRanger(CustomClusterTestSuite):
         ["USER", user, "*", "*", "*", "", "", "select", "false"]])
 
       admin_client.execute("grant all on server to user {0}".format(user))
-      time.sleep(10)
       result = self.client.execute("show grant user {0} on server".format(user))
       TestRanger._check_privileges(result, [
         ["USER", user, "", "", "", "*", "", "all", "false"],
@@ -246,7 +241,6 @@ class TestRanger(CustomClusterTestSuite):
     try:
       # Grant server privileges and verify
       admin_client.execute("grant all on server to {0} {1}".format(kw, id), user=ADMIN)
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on server".format(kw, id))
       TestRanger._check_privileges(result, [
         [kw, id, "", "", "", "*", "", "all", "false"],
@@ -255,14 +249,12 @@ class TestRanger(CustomClusterTestSuite):
 
       # Revoke server privileges and verify
       admin_client.execute("revoke all on server from {0} {1}".format(kw, id))
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on server".format(kw, id))
       TestRanger._check_privileges(result, [])
 
       # Grant uri privileges and verify
       admin_client.execute("grant all on uri '{0}' to {1} {2}"
                            .format(uri, kw, id))
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on uri '{2}'"
                                    .format(kw, id, uri))
       TestRanger._check_privileges(result, [
@@ -271,7 +263,6 @@ class TestRanger(CustomClusterTestSuite):
       # Revoke uri privileges and verify
       admin_client.execute("revoke all on uri '{0}' from {1} {2}"
                            .format(uri, kw, id))
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on uri '{2}'"
                                    .format(kw, id, uri))
       TestRanger._check_privileges(result, [])
@@ -279,7 +270,6 @@ class TestRanger(CustomClusterTestSuite):
       # Grant database privileges and verify
       admin_client.execute("grant select on database {0} to {1} {2}"
                            .format(unique_database, kw, id))
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on database {2}"
                                    .format(kw, id, unique_database))
       TestRanger._check_privileges(result, [
@@ -289,7 +279,6 @@ class TestRanger(CustomClusterTestSuite):
       # Revoke database privileges and verify
       admin_client.execute("revoke select on database {0} from {1} {2}"
                            .format(unique_database, kw, id))
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on database {2}"
                                    .format(kw, id, unique_database))
       TestRanger._check_privileges(result, [])
@@ -297,7 +286,6 @@ class TestRanger(CustomClusterTestSuite):
       # Grant table privileges and verify
       admin_client.execute("grant select on table {0}.{1} to {2} {3}"
                            .format(unique_database, unique_table, kw, id))
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on table {2}.{3}"
                                    .format(kw, id, unique_database, unique_table))
       TestRanger._check_privileges(result, [
@@ -306,7 +294,6 @@ class TestRanger(CustomClusterTestSuite):
       # Revoke table privileges and verify
       admin_client.execute("revoke select on table {0}.{1} from {2} {3}"
                            .format(unique_database, unique_table, kw, id))
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on table {2}.{3}"
                                    .format(kw, id, unique_database, unique_table))
       TestRanger._check_privileges(result, [])
@@ -314,7 +301,6 @@ class TestRanger(CustomClusterTestSuite):
       # Grant column privileges and verify
       admin_client.execute("grant select(x) on table {0}.{1} to {2} {3}"
                            .format(unique_database, unique_table, kw, id))
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on column {2}.{3}.x"
                                    .format(kw, id, unique_database, unique_table))
       TestRanger._check_privileges(result, [
@@ -323,7 +309,6 @@ class TestRanger(CustomClusterTestSuite):
       # Revoke column privileges and verify
       admin_client.execute("revoke select(x) on table {0}.{1} from {2} {3}"
                            .format(unique_database, unique_table, kw, id))
-      time.sleep(10)
       result = self.client.execute("show grant {0} {1} on column {2}.{3}.x"
                                    .format(kw, id, unique_database, unique_table))
       TestRanger._check_privileges(result, [])
@@ -359,3 +344,7 @@ class TestRanger(CustomClusterTestSuite):
       return cols[0:len(cols) - 1]
 
     assert map(columns, result.data) == expected
+
+  def _refresh_authorization(self, client, statement):
+    if statement is not None:
+      self.execute_query_expect_success(client, statement)


[impala] 01/04: Bump Kudu version to 9ba901a

Posted by jo...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git

commit b4ff8018d33c71a502b3184d2f8ffb6c5e6c0207
Author: Thomas Tauber-Marshall <tm...@cloudera.com>
AuthorDate: Tue Apr 30 13:00:16 2019 -0700

    Bump Kudu version to 9ba901a
    
    This pulls in the new API for column comments needed for IMPALA-5351.
    
    Change-Id: I2b6d70908e133eacb01ec3b9616d8361db3417a2
    Reviewed-on: http://gerrit.cloudera.org:8080/13197
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 bin/impala-config.sh | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/bin/impala-config.sh b/bin/impala-config.sh
index a48ad2d..bcd1a83 100755
--- a/bin/impala-config.sh
+++ b/bin/impala-config.sh
@@ -68,7 +68,7 @@ fi
 # moving to a different build of the toolchain, e.g. when a version is bumped or a
 # compile option is changed. The build id can be found in the output of the toolchain
 # build jobs, it is constructed from the build number and toolchain git hash prefix.
-export IMPALA_TOOLCHAIN_BUILD_ID=16-2402d830d5
+export IMPALA_TOOLCHAIN_BUILD_ID=24-3b615798c1
 # Versions of toolchain dependencies.
 # -----------------------------------
 export IMPALA_AVRO_VERSION=1.7.4-p4
@@ -648,7 +648,7 @@ if $USE_CDH_KUDU; then
   export IMPALA_KUDU_VERSION=${IMPALA_KUDU_VERSION-"1.10.0-cdh6.x-SNAPSHOT"}
   export IMPALA_KUDU_HOME=${CDH_COMPONENTS_HOME}/kudu-$IMPALA_KUDU_VERSION
 else
-  export IMPALA_KUDU_VERSION=${IMPALA_KUDU_VERSION-"1.9.0"}
+  export IMPALA_KUDU_VERSION=${IMPALA_KUDU_VERSION-"9ba901a"}
   export IMPALA_KUDU_HOME=${IMPALA_TOOLCHAIN}/kudu-$IMPALA_KUDU_VERSION
 fi