Posted to common-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2020/04/27 10:00:12 UTC
[jira] [Created] (HADOOP-17017) S3A client retries on SSL Auth
exceptions triggered by "." bucket names
Steve Loughran created HADOOP-17017:
---------------------------------------
Summary: S3A client retries on SSL Auth exceptions triggered by "." bucket names
Key: HADOOP-17017
URL: https://issues.apache.org/jira/browse/HADOOP-17017
Project: Hadoop Common
Issue Type: Sub-task
Components: fs/s3
Affects Versions: 3.2.1
Reporter: Steve Loughran
If you have a "." in a bucket name (it's allowed!), then virtual-host-style HTTPS connections fail with a javax.net.ssl SSL exception, because the dotted bucket hostname no longer matches the endpoint's wildcard certificate. Yet we retry, and the inner cause gets wrapped in a generic "client exception".
I'm not going to try to be clever about fixing this, but we should:
* make sure the inner exception is raised up
* avoid retries
* document it in the troubleshooting page
* if there is a well-known public "." bucket (cloudera has some :)), use it to test
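The first two points could look something like this; a minimal sketch, not the actual S3A retry/translation code, assuming the SDK surfaces the SSL failure somewhere in the cause chain of its client exception:

```java
import javax.net.ssl.SSLException;

// Hedged sketch: when the AWS SDK wraps an SSL failure inside a generic
// client exception, walk the cause chain to find the SSLException so it
// can be raised up directly and treated as unrecoverable (no retries).
public class SslCauseFinder {
    public static SSLException findSslCause(Throwable t) {
        while (t != null) {
            if (t instanceof SSLException) {
                return (SSLException) t;
            }
            t = t.getCause();
        }
        return null; // no SSL failure anywhere in the chain
    }
}
```

If this returns non-null, the retry policy should fail fast instead of looping.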
I have a vague suspicion the AWS SDK is retrying too. Not much we can do there.
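For anyone hitting this today, one possible workaround (assuming the endpoint still supports it) is to switch the S3A connector to path-style access, which keeps the bucket name out of the TLS hostname entirely; in core-site.xml:

```xml
<!-- Hedged workaround sketch: path-style requests avoid the
     virtual-host certificate mismatch for dotted bucket names. -->
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>
```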
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org