Posted to common-issues@hadoop.apache.org by "Steve Loughran (Jira)" <ji...@apache.org> on 2020/04/27 10:01:00 UTC

[jira] [Commented] (HADOOP-17017) S3A client retries on SSL Auth exceptions triggered by "." bucket names

    [ https://issues.apache.org/jira/browse/HADOOP-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17093293#comment-17093293 ] 

Steve Loughran commented on HADOOP-17017:
-----------------------------------------

Full stack trace:
{code}
2020-04-27 10:54:34,175 [main] INFO  diag.StoreDiag (DurationInfo.java:close(100)) - GetFileStatus s3a://dev.hortonworks.com/: duration 13:14:431
org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on s3a://dev.hortonworks.com/: com.amazonaws.SdkClientException: Unable to execute HTTP request: Certificate for <dev.hortonworks.com.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]: Unable to execute HTTP request: Certificate for <dev.hortonworks.com.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:206)
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:168)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3039)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:2883)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:2767)
	at org.apache.hadoop.fs.store.diag.StoreDiag.executeFileSystemOperations(StoreDiag.java:885)
	at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:412)
	at org.apache.hadoop.fs.store.diag.StoreDiag.run(StoreDiag.java:356)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
	at org.apache.hadoop.fs.store.diag.StoreDiag.exec(StoreDiag.java:1178)
	at org.apache.hadoop.fs.store.diag.StoreDiag.main(StoreDiag.java:1187)
	at storediag.main(storediag.java:25)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: com.amazonaws.SdkClientException: Unable to execute HTTP request: Certificate for <dev.hortonworks.com.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1175)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1121)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4920)
	at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:5700)
	at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:5673)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4904)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4866)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4860)
	at com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:923)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listObjects$7(S3AFileSystem.java:1878)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:407)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:370)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.listObjects(S3AFileSystem.java:1871)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3011)
	... 16 more
Caused by: javax.net.ssl.SSLPeerUnverifiedException: Certificate for <dev.hortonworks.com.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
	at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:467)
	at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:397)
	at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:355)
	at com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
	at com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
	at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
	at com.amazonaws.http.conn.$Proxy9.connect(Unknown Source)
	at com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
	at com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
	at com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
	at com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
	at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
	at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
	at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1297)
	at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
	... 35 more
2020-04-27 10:54:34,181 [main] INFO  util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status -1: org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on s3a://dev.hortonworks.com/: com.amazonaws.SdkClientException: Unable to execute HTTP request: Certificate for <dev.hortonworks.com.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]: Unable to execute HTTP request: Certificate for <dev.hortonworks.com.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
2020-04-27 10:54:34,183 [shutdown-hook-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:close(3269)) - Filesystem s3a://dev.hortonworks.com is closed
2020-04-27 10:54:34,184 [shutdown-hook-0] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor service. Waiting max 30 SECONDS
{code}
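The underlying cause: with virtual-host addressing the bucket name becomes part of the TLS hostname (dev.hortonworks.com.s3.amazonaws.com), and the wildcard certificate *.s3.amazonaws.com only matches a single DNS label, so hostname verification fails. A minimal client-side workaround sketch, assuming a standard Hadoop Configuration and a hypothetical dotted bucket name (dev.example.com), is to force path-style access so the bucket name stays out of the hostname:
{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DottedBucketWorkaround {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Path-style requests (https://s3.amazonaws.com/bucket/key) keep the bucket
    // name out of the TLS hostname, so the "." in the bucket name can no longer
    // break verification against *.s3.amazonaws.com.
    conf.setBoolean("fs.s3a.path.style.access", true);

    // Hypothetical bucket with a "." in its name.
    FileSystem fs = FileSystem.get(new URI("s3a://dev.example.com/"), conf);
    System.out.println(fs.getFileStatus(new Path("s3a://dev.example.com/")));
  }
}
{code}
Path-style access is only a stop-gap (AWS has been steering new buckets away from it), but it sidesteps the retries entirely because the request never trips the hostname check.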

> S3A client retries on SSL Auth exceptions triggered by "." bucket names
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-17017
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17017
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.2.1
>            Reporter: Steve Loughran
>            Priority: Minor
>
> If you have a "." in a bucket name (it's allowed!) then virtual-host HTTPS connections fail with a javax.net.ssl exception. On top of that, we retry, and the inner cause ends up wrapped in a generic client exception.
> I'm not going to try to be clever about fixing this, but we should
> * make sure that the inner exception is raised up
> * avoid retries (see the sketch below)
> * document it in the troubleshooting page
> * if there is a well-known public "." bucket (Cloudera has some :)) we can test against it
> I have a vague suspicion the AWS SDK is retrying too. Not much we can do there.
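On "make sure that the inner exception is raised up" and "avoid retries": a minimal sketch of the intended behaviour (not the actual S3A change; the helper class and method names below are hypothetical) is to unwrap the SdkClientException and treat an SSLPeerUnverifiedException as a failure that retrying cannot fix:
{code}
import javax.net.ssl.SSLPeerUnverifiedException;

import com.amazonaws.SdkClientException;

// Hypothetical helper: surface the SSL cause directly and mark it non-retryable.
final class SslFailureClassifier {

  /** Walk the cause chain looking for an SSL hostname-verification failure. */
  static SSLPeerUnverifiedException findSslCause(SdkClientException e) {
    for (Throwable t = e; t != null; t = t.getCause()) {
      if (t instanceof SSLPeerUnverifiedException) {
        return (SSLPeerUnverifiedException) t;
      }
    }
    return null;
  }

  /** A certificate/hostname mismatch will not go away on retry: fail fast. */
  static boolean isNonRetryable(SdkClientException e) {
    return findSslCause(e) != null;
  }
}
{code}
In S3A terms that would mean translating the wrapped cause into a meaningful IOException before the retry policy gets a chance to spin on it.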



