Posted to commits@impala.apache.org by jo...@apache.org on 2020/04/24 16:45:12 UTC
[impala] 02/02: IMPALA-9677: Fix frontend tests using a non-existent S3 bucket
This is an automated email from the ASF dual-hosted git repository.
joemcdonnell pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/impala.git
commit 4386c1b44a83f7f7b5000f0a8c0dda217651c16a
Author: Joe McDonnell <jo...@cloudera.com>
AuthorDate: Thu Apr 23 22:16:36 2020 -0700
IMPALA-9677: Fix frontend tests using a non-existent S3 bucket
With HADOOP-16711, Hadoop added extra validation to the
initialization of S3AFileSystem that verifies the caller has
permissions on the specified S3 bucket. Some frontend tests use
non-existent S3 buckets in URIs to check analysis behavior, and these
started to fail with the new validation.
This changes the core-site.xml configuration to disable the new
validation by setting fs.s3a.bucket.probe=1. This is equivalent
to the old behavior, and the frontend tests can now run without
AWS credentials.
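For reference, the setting corresponds to an entry like the following in
the generated core-site.xml (a sketch of the output; the actual file is
produced by the core-site.xml.py generator shown in the diff below):

```xml
<!-- Revert S3A bucket probing to the pre-HADOOP-16711 behavior so that
     tests referencing non-existent buckets do not need AWS credentials. -->
<property>
  <name>fs.s3a.bucket.probe</name>
  <value>1</value>
</property>
```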
Testing:
- Hand tested the failing tests (AnalyzeDDLTest, ExplainTest, PlannerTest)
- Ran core job on USE_CDP_HIVE=true and USE_CDP_HIVE=false
Change-Id: Id61ffbf686f8b7827e7fbf13167cfc1dfc06a325
Reviewed-on: http://gerrit.cloudera.org:8080/15799
Tested-by: Impala Public Jenkins <im...@cloudera.com>
Reviewed-by: Anurag Mantripragada <an...@cloudera.com>
Reviewed-by: Tim Armstrong <ta...@cloudera.com>
---
.../node_templates/common/etc/hadoop/conf/core-site.xml.py | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/testdata/cluster/node_templates/common/etc/hadoop/conf/core-site.xml.py b/testdata/cluster/node_templates/common/etc/hadoop/conf/core-site.xml.py
index 3a5faa3..29ae1c6 100644
--- a/testdata/cluster/node_templates/common/etc/hadoop/conf/core-site.xml.py
+++ b/testdata/cluster/node_templates/common/etc/hadoop/conf/core-site.xml.py
@@ -22,6 +22,7 @@ import sys
kerberize = os.environ.get('IMPALA_KERBERIZE') == 'true'
target_filesystem = os.environ.get('TARGET_FILESYSTEM')
+use_cdp_components = os.environ.get('USE_CDP_HIVE') == 'true'
compression_codecs = [
'org.apache.hadoop.io.compress.GzipCodec',
@@ -109,3 +110,10 @@ if kerberize:
'hadoop.proxyuser.hive.hosts': '*',
'hadoop.proxyuser.hive.groups': '*',
})
+
+if use_cdp_components:
+ # Hadoop changed the behavior of S3AFileSystem to check permissions for the bucket
+ # on initialization (see HADOOP-16711). Some frontend tests access non-existent
+ # buckets and rely on the old behavior. This also means that the tests do not
+ # require AWS credentials.
+ CONFIG.update({'fs.s3a.bucket.probe': '1'})