Posted to common-issues@hadoop.apache.org by "Gabor Bota (JIRA)" <ji...@apache.org> on 2018/11/26 14:36:00 UTC
[jira] [Comment Edited] (HADOOP-14927) ITestS3GuardTool failures in testDestroyNoBucket()
[ https://issues.apache.org/jira/browse/HADOOP-14927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16698953#comment-16698953 ]
Gabor Bota edited comment on HADOOP-14927 at 11/26/18 2:35 PM:
---------------------------------------------------------------
The issue is that org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.Destroy#run does not check whether an s3a:// bucket is passed; it simply tries to initialize the metastore and destroy it.
Based on the usage help, we want to support the following:
{noformat}
destroy [OPTIONS] [s3a://BUCKET]
destroy Metadata Store data (all data in S3 is preserved)
Common options:
-meta URL - Metadata repository details (implementation-specific)
Amazon DynamoDB-specific options:
-region REGION - Service region for connections
URLs for Amazon DynamoDB are of the form dynamodb://TABLE_NAME.
Specifying both the -region option and an S3A path
is not supported.
{noformat}
So the implementation should check that an s3a:// bucket is supplied before instantiating and destroying the metadata store, since the configured table name could differ from what is supplied on the CLI.
I'll provide a patch with this soon.
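A minimal sketch of the kind of guard described above, assuming a hypothetical run() shape (the names isS3aBucket and run here are illustrative, not the actual S3GuardTool API):

{code:java}
// Illustrative sketch: validate the CLI argument before touching the metastore.
public class Main {

    // Returns true only when the argument looks like an s3a:// bucket URI.
    static boolean isS3aBucket(String arg) {
        return arg != null && arg.startsWith("s3a://")
                && arg.length() > "s3a://".length();
    }

    // Hypothetical stand-in for Destroy#run: fail fast on a missing bucket
    // instead of destroying whatever table the configuration points at.
    static int run(String[] args) {
        if (args.length == 0 || !isS3aBucket(args[0])) {
            System.err.println("destroy: an s3a://BUCKET argument is required");
            return 1; // non-zero exit is what testDestroyNoBucket expects
        }
        // ... initialize the metastore for the named bucket and destroy it ...
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(run(new String[]{}));                // prints 1
        System.out.println(run(new String[]{"s3a://mybucket"})); // prints 0
    }
}
{code}

With a check like this, running destroy with no argument (or a non-s3a URI) returns a failure code up front rather than succeeding against the configured table.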
> ITestS3GuardTool failures in testDestroyNoBucket()
> --------------------------------------------------
>
> Key: HADOOP-14927
> URL: https://issues.apache.org/jira/browse/HADOOP-14927
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: 3.0.0-beta1, 3.0.0-alpha3, 3.1.0
> Reporter: Aaron Fabbri
> Assignee: Gabor Bota
> Priority: Minor
> Attachments: HADOOP-14927.001.patch
>
>
> Hit this when testing for the Hadoop 3.0.0-beta1 RC0.
> {noformat}
> hadoop-3.0.0-beta1-src/hadoop-tools/hadoop-aws$ mvn clean verify -Dit.test="ITestS3GuardTool*" -Dtest=none -Ds3guard -Ddynamo
> ...
> Failed tests:
> ITestS3GuardToolDynamoDB>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 Expected an exception, got 0
> ITestS3GuardToolLocal>AbstractS3GuardToolTestBase.testDestroyNoBucket:228 Expected an exception, got 0
> {noformat}
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org