Posted to common-issues@hadoop.apache.org by "angerszhu (Jira)" <ji...@apache.org> on 2020/12/16 02:49:00 UTC

[jira] [Comment Edited] (HADOOP-17017) S3A client retries on SSL Auth exceptions triggered by "." bucket names

    [ https://issues.apache.org/jira/browse/HADOOP-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17250064#comment-17250064 ] 

angerszhu edited comment on HADOOP-17017 at 12/16/20, 2:48 AM:
---------------------------------------------------------------

 

Setting the config below makes it run well:

<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
  <description>Enable S3 path style access ie disabling the default virtual hosting behaviour.
    Useful for S3A-compliant storage providers as it removes the need to set up DNS for virtual hosting.
  </description>
</property>
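To illustrate why this config sidesteps the problem, here is a minimal sketch (not the AWS SDK's actual URL-building logic) of the two addressing styles. With path-style access the bucket name moves into the URL path, so the TLS hostname is always the plain endpoint and never contains the bucket's dots:

```python
def s3_endpoint(bucket, key, path_style=False):
    """Illustrative sketch of S3 request-URL construction
    (not the real AWS SDK implementation)."""
    if path_style:
        # Bucket goes in the path; the TLS hostname stays s3.amazonaws.com,
        # which the endpoint certificate covers directly.
        return f"https://s3.amazonaws.com/{bucket}/{key}"
    # Virtual-host style: the bucket name becomes part of the TLS hostname.
    return f"https://{bucket}.s3.amazonaws.com/{key}"

# A "." in the bucket name ends up as an extra DNS label in virtual-host style:
print(s3_endpoint("my.bucket", "data.csv"))
print(s3_endpoint("my.bucket", "data.csv", path_style=True))
```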


was (Author: angerszhuuu):
[~stevel@apache.org]

Setting the config below can also run well:
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
  <description>Enable S3 path style access ie disabling the default virtual hosting behaviour.
    Useful for S3A-compliant storage providers as it removes the need to set up DNS for virtual hosting.
  </description>
</property>

> S3A client retries on SSL Auth exceptions triggered by "." bucket names
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-17017
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17017
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.1
>            Reporter: Steve Loughran
>            Priority: Minor
>
> If you have a "." in bucket names (it's allowed!) then virtual-host HTTPS connections fail with a javax.net.ssl exception. Worse, we retry, and the inner cause is wrapped in generic "client exceptions".
> I'm not going to try and be clever about fixing this, but we should
> * make sure that the inner exception is raised up
> * avoid retries
> * document it in the troubleshooting page. 
> * if there is a well known public "." bucket (cloudera has some:)) we can test
> I get a vague suspicion the AWS SDK is retrying too. Not much we can do there.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org