Posted to commits@impala.apache.org by jo...@apache.org on 2020/01/30 23:25:19 UTC

[impala] branch master updated: IMPALA-9149: part 2: Re-enable Ranger-related EE tests

This is an automated email from the ASF dual-hosted git repository.

joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git


The following commit(s) were added to refs/heads/master by this push:
     new 135fa16  IMPALA-9149: part 2: Re-enable Ranger-related EE tests
135fa16 is described below

commit 135fa1613dac4999fec666d428e8fbe9ad1dfc60
Author: Fang-Yu Rao <fa...@cloudera.com>
AuthorDate: Thu Jan 16 11:45:29 2020 -0800

    IMPALA-9149: part 2: Re-enable Ranger-related EE tests
    
    In IMPALA-9047, we disabled some Ranger-related FE and BE tests due to
    changes in Ranger's behavior after upgrading Ranger from 1.2 to 2.0.
    This patch aims to re-enable those disabled EE tests in
    tests/authorization/test_authorized_proxy.py and
    tests/authorization/test_ranger.py to increase Impala's test coverage of
    authorization via Ranger.
    
    The Ranger-related tests in test_authorized_proxy.py exercise Impala's
    delegation for clients. Impala supports two types of delegation: a
    user can delegate the execution of a query to either 1) another user,
    or 2) a group of users. In the former case, Ranger checks whether the
    delegated user specified in the option
    'authorized_proxy_user_config' possesses sufficient privileges to
    access the resources. In the latter case, before checking whether the
    delegated group is granted sufficient privileges, Ranger checks, with
    the help of Impala, whether the delegated user specified in
    'authorized_proxy_user_config' belongs to the delegated group
    specified in 'authorized_proxy_group_config' in the underlying OS.
    This type of delegation requires Impala to retrieve the delegated
    user's groups from the underlying OS. As a result, if the delegated
    user does not exist in the underlying OS, Impala reports to Ranger
    that the delegated user does not belong to any group, which in turn
    fails the authorization even if, according to the policies on the
    Ranger server, the delegated user belongs to the delegated group and
    the delegated group is granted sufficient privileges.
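
    A minimal sketch (not part of this patch) of the group-delegation
    check described above, written in Python. get_os_groups() and
    may_delegate_to_group() are illustrative helpers, not Impala or
    Ranger APIs; they only mirror the OS lookup that makes delegation
    fail when the delegated user does not exist in the underlying OS.

        import grp
        import pwd

        def get_os_groups(username):
            """Groups the user belongs to in the OS, or [] if the user
            does not exist (which is what fails the delegation)."""
            try:
                user = pwd.getpwnam(username)
            except KeyError:
                return []
            groups = {g.gr_name for g in grp.getgrall()
                      if username in g.gr_mem}
            groups.add(grp.getgrgid(user.pw_gid).gr_name)
            return sorted(groups)

        def may_delegate_to_group(delegated_user, delegated_group):
            # Mirrors the check Ranger performs with Impala's help for
            # delegation via 'authorized_proxy_group_config'.
            return delegated_group in get_os_groups(delegated_user)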
    
    The re-enabled Ranger tests in test_authorized_proxy.py involve
    queries in which the delegated user, 'non_owner', does not exist in
    the underlying OS. We use 'non_owner' as the delegated user instead
    of getuser() so that 'non_owner' has to be explicitly granted
    sufficient privileges to access the resources. To avoid having to
    create an actual delegated user and its corresponding delegated
    groups in the underlying OS when running the EE tests, we added to
    'impalad_args' an additional option,
    'use_customized_user_groups_mapper_for_ranger', which, when set to
    true, makes Impala use a customized user-to-groups mapping when
    performing authorization via Ranger. In contrast, the corresponding
    Sentry-related tests keep getuser() as the delegated user to avoid
    having to provide Sentry with a customized user-to-groups mapping.
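
    For reference, a sketch of how an EE test opts into the new option;
    it mirrors the impalad_args change to
    test_authorized_proxy_group_with_ranger() in the diff below, and
    assumes the existing RANGER_IMPALAD_ARGS and RANGER_CATALOGD_ARGS
    constants from that test file.

        @pytest.mark.execute_serially
        @CustomClusterTestSuite.with_args(
          impalad_args="{0} --authorized_proxy_user_config=hue=non_owner "
                       "--authorized_proxy_group_config=foo=bar;hue=non_owner "
                       "--use_customized_user_groups_mapper_for_ranger"
                       .format(RANGER_IMPALAD_ARGS),
          catalogd_args=RANGER_CATALOGD_ARGS)
        def test_authorized_proxy_group_with_ranger(self):
          """Tests authorized proxy group with Ranger using HS2."""
          self._test_authorized_proxy_with_ranger(self._test_authorized_proxy,
                                                  "non_owner", True)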
    
    To re-enable test_legacy_catalog_ownership() in test_ranger.py, we
    removed from _test_ownership() a test query that was expected to
    fail authorization in Ranger 1.2 but passes in Ranger 2.0. This is
    because in Ranger 2.0 a user does not have to be explicitly granted
    the privileges to access a resource as long as the user owns that
    resource.
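
    An excerpt-style sketch of the behavior change that motivated
    removing the query, assuming it runs inside
    TestRanger._test_ownership() with test_db, test_user and ADMIN as in
    that test:

        # Change the owner of the database to the current user.
        self._run_query_as_user(
            "alter database {0} set owner user {1}".format(test_db, test_user),
            ADMIN, True)
        # Removed Ranger 1.2 expectation: the create still fails because
        # test_user has no explicit privileges and no ownership policy.
        # self._run_query_as_user(
        #     "create table {0}.foo(a int)".format(test_db), test_user, False)
        # Ranger 2.0 behavior: the owner may create the table without an
        # explicit grant, so the test now expects success instead.
        self._run_query_as_user(
            "create table {0}.foo(a int)".format(test_db), test_user, True)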
    
    Testing:
    - Passed FE tests.
    - Passed the tests in test_authorized_proxy.py.
    - Passed the tests in test_ranger.py.
    
    Change-Id: I17420d7ff9beacd1b4d2ad72b68b8b54983e60cb
    Reviewed-on: http://gerrit.cloudera.org:8080/15088
    Reviewed-by: Impala Public Jenkins <im...@cloudera.com>
    Tested-by: Impala Public Jenkins <im...@cloudera.com>
---
 be/src/common/global-flags.cc                      |   4 +
 be/src/util/backend-gflag-util.cc                  |   3 +
 common/thrift/BackendGflags.thrift                 |   2 +
 .../ranger/RangerAuthorizationChecker.java         |   4 +-
 .../org/apache/impala/service/BackendConfig.java   |   4 +
 tests/authorization/test_authorized_proxy.py       |  63 ++++++------
 tests/authorization/test_ranger.py                 | 113 ++++++++-------------
 7 files changed, 90 insertions(+), 103 deletions(-)

diff --git a/be/src/common/global-flags.cc b/be/src/common/global-flags.cc
index 40841b5..74e5969 100644
--- a/be/src/common/global-flags.cc
+++ b/be/src/common/global-flags.cc
@@ -308,6 +308,10 @@ DEFINE_int32(num_check_authorization_threads, 1,
     "checking authorization with a high concurrency. The value must be in the range of "
     "1 to 128.");
 
+DEFINE_bool_hidden(use_customized_user_groups_mapper_for_ranger, false,
+    "If true, use the customized user-to-groups mapper when performing authorization via"
+    " Ranger.");
+
 // ++========================++
 // || Startup flag graveyard ||
 // ++========================++
diff --git a/be/src/util/backend-gflag-util.cc b/be/src/util/backend-gflag-util.cc
index 467873b..8882a11 100644
--- a/be/src/util/backend-gflag-util.cc
+++ b/be/src/util/backend-gflag-util.cc
@@ -84,6 +84,7 @@ DECLARE_string(blacklisted_tables);
 DECLARE_string(min_privilege_set_for_show_stmts);
 DECLARE_int32(num_expected_executors);
 DECLARE_int32(num_check_authorization_threads);
+DECLARE_bool(use_customized_user_groups_mapper_for_ranger);
 
 namespace impala {
 
@@ -171,6 +172,8 @@ Status GetThriftBackendGflags(JNIEnv* jni_env, jbyteArray* cfg_bytes) {
   cfg.__set_min_privilege_set_for_show_stmts(FLAGS_min_privilege_set_for_show_stmts);
   cfg.__set_num_expected_executors(FLAGS_num_expected_executors);
   cfg.__set_num_check_authorization_threads(FLAGS_num_check_authorization_threads);
+  cfg.__set_use_customized_user_groups_mapper_for_ranger(
+      FLAGS_use_customized_user_groups_mapper_for_ranger);
   RETURN_IF_ERROR(SerializeThriftMsg(jni_env, &cfg, cfg_bytes));
   return Status::OK();
 }
diff --git a/common/thrift/BackendGflags.thrift b/common/thrift/BackendGflags.thrift
index 4762654..2df20e6 100644
--- a/common/thrift/BackendGflags.thrift
+++ b/common/thrift/BackendGflags.thrift
@@ -149,4 +149,6 @@ struct TBackendGflags {
   62: required i32 num_expected_executors
 
   63: required i32 num_check_authorization_threads
+
+  64: required bool use_customized_user_groups_mapper_for_ranger
 }
diff --git a/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java b/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java
index f77ab9d..b4ad05a 100644
--- a/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java
+++ b/fe/src/main/java/org/apache/impala/authorization/ranger/RangerAuthorizationChecker.java
@@ -36,6 +36,7 @@ import org.apache.impala.authorization.User;
 import org.apache.impala.catalog.FeCatalog;
 import org.apache.impala.common.InternalException;
 import org.apache.impala.common.RuntimeEnv;
+import org.apache.impala.service.BackendConfig;
 import org.apache.impala.thrift.TSessionState;
 import org.apache.impala.util.EventSequence;
 import org.apache.ranger.audit.model.AuthzAuditEvent;
@@ -353,7 +354,8 @@ public class RangerAuthorizationChecker extends BaseAuthorizationChecker {
   @Override
   public Set<String> getUserGroups(User user) throws InternalException {
     UserGroupInformation ugi;
-    if (RuntimeEnv.INSTANCE.isTestEnv()) {
+    if (RuntimeEnv.INSTANCE.isTestEnv() ||
+        BackendConfig.INSTANCE.useCustomizedUserGroupsMapperForRanger()) {
       ugi = UserGroupInformation.createUserForTesting(user.getShortName(),
           new String[]{user.getShortName()});
     } else {
diff --git a/fe/src/main/java/org/apache/impala/service/BackendConfig.java b/fe/src/main/java/org/apache/impala/service/BackendConfig.java
index d3abf78..b40d972 100644
--- a/fe/src/main/java/org/apache/impala/service/BackendConfig.java
+++ b/fe/src/main/java/org/apache/impala/service/BackendConfig.java
@@ -201,6 +201,10 @@ public class BackendConfig {
     return backendCfg_.num_check_authorization_threads;
   }
 
+  public boolean useCustomizedUserGroupsMapperForRanger() {
+    return backendCfg_.use_customized_user_groups_mapper_for_ranger;
+  }
+
   // Inits the auth_to_local configuration in the static KerberosName class.
   private static void initAuthToLocal() {
     // If auth_to_local is enabled, we read the configuration hadoop.security.auth_to_local
diff --git a/tests/authorization/test_authorized_proxy.py b/tests/authorization/test_authorized_proxy.py
index 71c3d3b..1d1b95e 100644
--- a/tests/authorization/test_authorized_proxy.py
+++ b/tests/authorization/test_authorized_proxy.py
@@ -111,19 +111,18 @@ class TestAuthorizedProxy(CustomClusterTestSuite):
     catalogd_args=SENTRY_CATALOGD_ARGS)
   def test_authorized_proxy_user_with_sentry(self, unique_role):
     """Tests authorized proxy user with Sentry using HS2."""
-    self._test_authorized_proxy_with_sentry(unique_role, self._test_authorized_proxy)
+    self._test_authorized_proxy_with_sentry(unique_role, self._test_authorized_proxy,
+                                            getuser())
 
   @pytest.mark.execute_serially
   @CustomClusterTestSuite.with_args(
-    impalad_args="{0} --authorized_proxy_user_config=foo=bar;hue={1} "
-                 .format(RANGER_IMPALAD_ARGS, getuser()),
+    impalad_args="{0} --authorized_proxy_user_config=foo=bar;hue=non_owner "
+                 .format(RANGER_IMPALAD_ARGS),
     catalogd_args=RANGER_CATALOGD_ARGS)
   def test_authorized_proxy_user_with_ranger(self):
-    # This test fails due to bumping up the Ranger to a newer version.
-    # TODO(fangyu.rao): Fix in a follow up commit.
-    pytest.xfail("failed due to bumping up the Ranger to a newer version")
     """Tests authorized proxy user with Ranger using HS2."""
-    self._test_authorized_proxy_with_ranger(self._test_authorized_proxy)
+    self._test_authorized_proxy_with_ranger(self._test_authorized_proxy, "non_owner",
+                                            False)
 
   @pytest.mark.execute_serially
   @CustomClusterTestSuite.with_args(
@@ -133,20 +132,20 @@ class TestAuthorizedProxy(CustomClusterTestSuite):
     catalogd_args=SENTRY_CATALOGD_ARGS)
   def test_authorized_proxy_group_with_sentry(self, unique_role):
     """Tests authorized proxy group with Sentry using HS2."""
-    self._test_authorized_proxy_with_sentry(unique_role, self._test_authorized_proxy)
+    self._test_authorized_proxy_with_sentry(unique_role, self._test_authorized_proxy,
+                                            getuser())
 
   @pytest.mark.execute_serially
   @CustomClusterTestSuite.with_args(
-    impalad_args="{0} --authorized_proxy_user_config=hue=bar "
-                 "--authorized_proxy_group_config=foo=bar;hue={1}"
-                 .format(RANGER_IMPALAD_ARGS, grp.getgrgid(os.getgid()).gr_name),
+    impalad_args="{0} --authorized_proxy_user_config=hue=non_owner "
+                 "--authorized_proxy_group_config=foo=bar;hue=non_owner "
+                 "--use_customized_user_groups_mapper_for_ranger"
+                 .format(RANGER_IMPALAD_ARGS),
     catalogd_args=RANGER_CATALOGD_ARGS)
   def test_authorized_proxy_group_with_ranger(self):
-    # This test fails due to bumping up the Ranger to a newer version.
-    # TODO(fangyu.rao): Fix in a follow up commit.
-    pytest.xfail("failed due to bumping up the Ranger to a newer version")
     """Tests authorized proxy group with Ranger using HS2."""
-    self._test_authorized_proxy_with_ranger(self._test_authorized_proxy)
+    self._test_authorized_proxy_with_ranger(self._test_authorized_proxy, "non_owner",
+                                            True)
 
   @pytest.mark.execute_serially
   @CustomClusterTestSuite.with_args(
@@ -172,7 +171,7 @@ class TestAuthorizedProxy(CustomClusterTestSuite):
     resp = self.hs2_client.OpenSession(open_session_req)
     assert "User 'hue' is not authorized to delegate to 'abc'" in str(resp)
 
-  def _test_authorized_proxy_with_sentry(self, role, test_func):
+  def _test_authorized_proxy_with_sentry(self, role, test_func, delegated_user):
     try:
       self.session_handle = self._open_hs2(getuser(), dict()).sessionHandle
       self._execute_hs2_stmt("create role {0}".format(role))
@@ -182,7 +181,7 @@ class TestAuthorizedProxy(CustomClusterTestSuite):
                              .format(role, grp.getgrnam(getuser()).gr_name))
       self._execute_hs2_stmt("grant role {0} to group {1}"
                              .format(role, grp.getgrgid(os.getgid()).gr_name))
-      test_func()
+      test_func(delegated_user)
     finally:
       self.session_handle = self._open_hs2(getuser(), dict()).sessionHandle
       self._execute_hs2_stmt("grant all on server to role {0}".format(role))
@@ -190,18 +189,23 @@ class TestAuthorizedProxy(CustomClusterTestSuite):
                              .format(role, grp.getgrnam(getuser()).gr_name))
       self._execute_hs2_stmt("drop role {0}".format(role))
 
-  def _test_authorized_proxy_with_ranger(self, test_func):
+  def _test_authorized_proxy_with_ranger(self, test_func, delegated_user,
+                                         delegated_to_group):
     try:
       self.session_handle = self._open_hs2(RANGER_ADMIN_USER, dict()).sessionHandle
-      self._execute_hs2_stmt("grant all on table tpch.lineitem to user {0}"
-                             .format(getuser()))
-      test_func()
+      if not delegated_to_group:
+        self._execute_hs2_stmt("grant all on table tpch.lineitem to user non_owner")
+      else:
+        self._execute_hs2_stmt("grant all on table tpch.lineitem to group non_owner")
+      test_func(delegated_user)
     finally:
       self.session_handle = self._open_hs2(RANGER_ADMIN_USER, dict()).sessionHandle
-      self._execute_hs2_stmt("revoke all on table tpch.lineitem from user {0}"
-                             .format(getuser()))
+      if not delegated_to_group:
+        self._execute_hs2_stmt("revoke all on table tpch.lineitem from user non_owner")
+      else:
+        self._execute_hs2_stmt("revoke all on table tpch.lineitem from group non_owner")
 
-  def _test_authorized_proxy(self):
+  def _test_authorized_proxy(self, delegated_user):
     """End-to-end impersonation + authorization test. Expects authorization to be
        configured before running this test"""
     # TODO: To reuse the HS2 utility code from the TestHS2 test suite we need to import
@@ -213,12 +217,13 @@ class TestAuthorizedProxy(CustomClusterTestSuite):
 
     # Try to query a table we are not authorized to access.
     self.session_handle = self._open_hs2("hue",
-                                         {"impala.doas.user": getuser()}).sessionHandle
+                                         {"impala.doas.user": delegated_user})\
+        .sessionHandle
     bad_resp = self._execute_hs2_stmt("describe tpch_seq.lineitem", False)
-    assert "User '%s' does not have privileges to access" % getuser() in \
+    assert "User '%s' does not have privileges to access" % delegated_user in \
            str(bad_resp)
 
-    assert self._wait_for_audit_record(user=getuser(), impersonator="hue"), \
+    assert self._wait_for_audit_record(user=delegated_user, impersonator="hue"), \
            "No matching audit event recorded in time window"
 
     # Now try the same operation on a table we are authorized to access.
@@ -228,8 +233,8 @@ class TestAuthorizedProxy(CustomClusterTestSuite):
     # Verify the correct user information is in the runtime profile.
     query_id = operation_id_to_query_id(good_resp.operationHandle.operationId)
     profile_page = self.cluster.impalads[0].service.read_query_profile_page(query_id)
-    self._verify_profile_user_fields(profile_page, effective_user=getuser(),
-                                     delegated_user=getuser(), connected_user="hue")
+    self._verify_profile_user_fields(profile_page, effective_user=delegated_user,
+                                     delegated_user=delegated_user, connected_user="hue")
 
     # Try to delegate a user we are not authorized to delegate to.
     resp = self._open_hs2("hue", {"impala.doas.user": "some_user"}, False)
diff --git a/tests/authorization/test_ranger.py b/tests/authorization/test_ranger.py
index b5402ce..b2db392 100644
--- a/tests/authorization/test_ranger.py
+++ b/tests/authorization/test_ranger.py
@@ -54,9 +54,6 @@ class TestRanger(CustomClusterTestSuite):
   @CustomClusterTestSuite.with_args(
     impalad_args=IMPALAD_ARGS, catalogd_args=CATALOGD_ARGS)
   def test_grant_revoke_with_catalog_v1(self, unique_name):
-    # This test fails due to bumping up the Ranger to a newer version.
-    # TODO(fangyu.rao): Fix in a follow up commit.
-    pytest.xfail("failed due to bumping up the Ranger to a newer version")
     """Tests grant/revoke with catalog v1."""
     self._test_grant_revoke(unique_name, [None, "invalidate metadata",
                                           "refresh authorization"])
@@ -66,9 +63,6 @@ class TestRanger(CustomClusterTestSuite):
     impalad_args="{0} {1}".format(IMPALAD_ARGS, "--use_local_catalog=true"),
     catalogd_args="{0} {1}".format(CATALOGD_ARGS, "--catalog_topic_mode=minimal"))
   def test_grant_revoke_with_local_catalog(self, unique_name):
-    # This test fails due to bumping up the Ranger to a newer version.
-    # TODO(fangyu.rao): Fix in a follow up commit.
-    pytest.xfail("failed due to bumping up the Ranger to a newer version")
     """Tests grant/revoke with catalog v2 (local catalog)."""
     self._test_grant_revoke(unique_name, [None, "invalidate metadata",
                                           "refresh authorization"])
@@ -120,9 +114,6 @@ class TestRanger(CustomClusterTestSuite):
   @CustomClusterTestSuite.with_args(
     impalad_args=IMPALAD_ARGS, catalogd_args=CATALOGD_ARGS)
   def test_grant_option(self, unique_name):
-    # This test fails due to bumping up the Ranger to a newer version.
-    # TODO(fangyu.rao): Fix in a follow up commit.
-    pytest.xfail("failed due to bumping up the Ranger to a newer version")
     user1 = getuser()
     admin_client = self.create_impala_client()
     unique_database = unique_name + "_db"
@@ -186,9 +177,6 @@ class TestRanger(CustomClusterTestSuite):
   @CustomClusterTestSuite.with_args(
     impalad_args=IMPALAD_ARGS, catalogd_args=CATALOGD_ARGS)
   def test_show_grant(self, unique_name):
-    # This test fails due to bumping up the Ranger to a newer version.
-    # TODO(fangyu.rao): Fix in a follow up commit.
-    pytest.xfail("failed due to bumping up the Ranger to a newer version")
     user = getuser()
     group = grp.getgrnam(getuser()).gr_name
     test_data = [(user, "USER"), (group, "GROUP")]
@@ -367,9 +355,6 @@ class TestRanger(CustomClusterTestSuite):
   @CustomClusterTestSuite.with_args(
     impalad_args=IMPALAD_ARGS, catalogd_args=CATALOGD_ARGS)
   def test_grant_revoke_ranger_api(self, unique_name):
-    # This test fails due to bumping up the Ranger to a newer version.
-    # TODO(fangyu.rao): Fix in a follow up commit.
-    pytest.xfail("failed due to bumping up the Ranger to a newer version")
     user = getuser()
     admin_client = self.create_impala_client()
     unique_db = unique_name + "_db"
@@ -430,9 +415,6 @@ class TestRanger(CustomClusterTestSuite):
   @CustomClusterTestSuite.with_args(
     impalad_args=IMPALAD_ARGS, catalogd_args=CATALOGD_ARGS)
   def test_show_grant_hive_privilege(self, unique_name):
-    # This test fails due to bumping up the Ranger to a newer version.
-    # TODO(fangyu.rao): Fix in a follow up commit.
-    pytest.xfail("failed due to bumping up the Ranger to a newer version")
     user = getuser()
     admin_client = self.create_impala_client()
     unique_db = unique_name + "_db"
@@ -705,9 +687,6 @@ class TestRanger(CustomClusterTestSuite):
   @CustomClusterTestSuite.with_args(
     impalad_args=IMPALAD_ARGS, catalogd_args=CATALOGD_ARGS)
   def test_legacy_catalog_ownership(self):
-      # This test fails due to bumping up the Ranger to a newer version.
-      # TODO(fangyu.rao): Fix in a follow up commit.
-      pytest.xfail("failed due to bumping up the Ranger to a newer version")
       self._test_ownership()
 
   @CustomClusterTestSuite.with_args(impalad_args=LOCAL_CATALOG_IMPALAD_ARGS,
@@ -730,62 +709,50 @@ class TestRanger(CustomClusterTestSuite):
       # Try to create a table under test_db as current user. It should fail.
       self._run_query_as_user(
           "create table {0}.foo(a int)".format(test_db), test_user, False)
+
       # Change the owner of the database to the current user.
       self._run_query_as_user(
           "alter database {0} set owner user {1}".format(test_db, test_user), ADMIN, True)
-      # Try creating a table under it again. It should still fail due to lack of ownership
-      # privileges
-      self._run_query_as_user(
-          "create table {0}.foo(a int)".format(test_db), test_user, False)
-      # Create ranger ownership poicy for the current user on test_db.
-      resource = {
-        "database": test_db,
-        "column": "*",
-        "table": "*"
-      }
-      access = ["create", "select"]
-      TestRanger._grant_ranger_privilege("{OWNER}", resource, access)
+
       self._run_query_as_user("refresh authorization", ADMIN, True)
-      try:
-        # Create should succeed now.
-        self._run_query_as_user(
-            "create table {0}.foo(a int)".format(test_db), test_user, True)
-        # Run show tables on the db. The resulting list should be empty. This happens
-        # because the created table's ownership information is not aggresively cached
-        # by the current Catalog implementations. Hence the analysis pass does not
-        # have access to the ownership information to verify if the current session
-        # user is actually the owner. We need to fix this by caching the HMS metadata
-        # more aggressively when the table loads. TODO(IMPALA-8937).
-        result =\
-            self._run_query_as_user("show tables in {0}".format(test_db), test_user, True)
-        assert len(result.data) == 0
-        # Run a simple query that warms up the table metadata and repeat SHOW TABLES.
-        self._run_query_as_user("select * from {0}.foo".format(test_db), test_user, True)
-        result =\
-            self._run_query_as_user("show tables in {0}".format(test_db), test_user, True)
-        assert len(result.data) == 1
-        assert "foo" in result.data
-        # Change the owner of the db back to the admin user
-        self._run_query_as_user(
-            "alter database {0} set owner user {1}".format(test_db, ADMIN), ADMIN, True)
-        result = self._run_query_as_user(
-            "show tables in {0}".format(test_db), test_user, False)
-        err = "User '{0}' does not have privileges to access: {1}.*.*".\
-            format(test_user, test_db)
-        assert err in str(result)
-        # test_user is still the owner of the table, so select should work fine.
-        self._run_query_as_user("select * from {0}.foo".format(test_db), test_user, True)
-        # Change the table owner back to admin.
-        self._run_query_as_user(
-            "alter table {0}.foo set owner user {1}".format(test_db, ADMIN), ADMIN, True)
-        # test_user should not be authorized to run the queries anymore.
-        result = self._run_query_as_user(
-            "select * from {0}.foo".format(test_db), test_user, False)
-        err = ("AuthorizationException: User '{0}' does not have privileges to execute" +
-            " 'SELECT' on: {1}.foo").format(test_user, test_db)
-        assert err in str(result)
-      finally:
-        TestRanger._revoke_ranger_privilege("{OWNER}", resource, access)
+
+      # Create should succeed now.
+      self._run_query_as_user(
+          "create table {0}.foo(a int)".format(test_db), test_user, True)
+      # Run show tables on the db. The resulting list should be empty. This happens
+      # because the created table's ownership information is not aggressively cached
+      # by the current Catalog implementations. Hence the analysis pass does not
+      # have access to the ownership information to verify if the current session
+      # user is actually the owner. We need to fix this by caching the HMS metadata
+      # more aggressively when the table loads. TODO(IMPALA-8937).
+      result = \
+          self._run_query_as_user("show tables in {0}".format(test_db), test_user, True)
+      assert len(result.data) == 0
+      # Run a simple query that warms up the table metadata and repeat SHOW TABLES.
+      self._run_query_as_user("select * from {0}.foo".format(test_db), test_user, True)
+      result = \
+          self._run_query_as_user("show tables in {0}".format(test_db), test_user, True)
+      assert len(result.data) == 1
+      assert "foo" in result.data
+      # Change the owner of the db back to the admin user
+      self._run_query_as_user(
+          "alter database {0} set owner user {1}".format(test_db, ADMIN), ADMIN, True)
+      result = self._run_query_as_user(
+          "show tables in {0}".format(test_db), test_user, False)
+      err = "User '{0}' does not have privileges to access: {1}.*.*". \
+          format(test_user, test_db)
+      assert err in str(result)
+      # test_user is still the owner of the table, so select should work fine.
+      self._run_query_as_user("select * from {0}.foo".format(test_db), test_user, True)
+      # Change the table owner back to admin.
+      self._run_query_as_user(
+          "alter table {0}.foo set owner user {1}".format(test_db, ADMIN), ADMIN, True)
+      # test_user should not be authorized to run the queries anymore.
+      result = self._run_query_as_user(
+          "select * from {0}.foo".format(test_db), test_user, False)
+      err = ("AuthorizationException: User '{0}' does not have privileges to execute" +
+             " 'SELECT' on: {1}.foo").format(test_user, test_db)
+      assert err in str(result)
     finally:
       self._run_query_as_user("drop database {0} cascade".format(test_db), ADMIN, True)