Posted to common-issues@hadoop.apache.org by "André F. (Jira)" <ji...@apache.org> on 2022/06/15 11:31:00 UTC

[jira] [Commented] (HADOOP-18159) Certificate doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]

    [ https://issues.apache.org/jira/browse/HADOOP-18159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17554528#comment-17554528 ] 

André F. commented on HADOOP-18159:
-----------------------------------

Hello again. After coming back to this problem, I think I found the root cause of the issue.

Indeed, `hadoop-cos` is related to the problem: on Spark 3.2 it brings `cos_api-bundle-5.6.19.jar` onto the classpath, and that jar contains an outdated `public-suffix-list.txt` file with the following contents:

 
{code:java}
// Amazon S3 : https://aws.amazon.com/s3/
// Submitted by Luke Wells <ps...@amazon.com>
*.s3.amazonaws.com
s3-ap-northeast-1.amazonaws.com
s3-ap-northeast-2.amazonaws.com
s3-ap-south-1.amazonaws.com {code}
The default hostname verifier should instead be using the file from `hadoop-client-runtime-3.3.1.jar`, which has the following contents:
{code:java}
// Amazon S3 : https://aws.amazon.com/s3/
// Submitted by Luke Wells <ps...@amazon.com>
s3.amazonaws.com
s3-ap-northeast-1.amazonaws.com
s3-ap-northeast-2.amazonaws.com
s3-ap-south-1.amazonaws.com
s3-ap-southeast-1.amazonaws.com
s3-ap-southeast-2.amazonaws.com
s3-ca-central-1.amazonaws.com
s3-eu-central-1.amazonaws.com
... (all regional endpoints) {code}
For some reason, `public-suffix-list.txt` was being resolved from `cos_api-bundle-5.6.19.jar` on my classpath instead of from `hadoop-client-runtime-3.3.1.jar`.
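To confirm which jar wins, you can ask the classloader for every copy of the resource on the classpath. A minimal sketch (assuming the resource path `mozilla/public-suffix-list.txt` that unshaded httpclient 4.5 uses; the shaded copies inside these bundles may relocate it):
{code:java}
import java.io.IOException;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FindSuffixList {

    // Returns every classpath location providing `resource`, in resolution
    // order: the first URL is the copy the classloader will actually use.
    static List<URL> locate(String resource) throws IOException {
        return new ArrayList<>(Collections.list(
                Thread.currentThread().getContextClassLoader().getResources(resource)));
    }

    public static void main(String[] args) throws IOException {
        // Assumed resource path (from unshaded httpclient 4.5).
        for (URL url : locate("mozilla/public-suffix-list.txt")) {
            System.out.println(url);
        }
    }
}
{code}
In my case the first URL printed pointed into the cos_api bundle rather than the hadoop-client-runtime jar.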

Replacing `cos_api-bundle-5.6.19.jar` with a more recent version (`cos_api-bundle-5.6.69.jar`) fixed the issue, since that version ships the same `public-suffix-list.txt` as `hadoop-client-runtime-3.3.1.jar`. Can I create a PR to bump the cos_api-bundle version in hadoop-cos?
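For context on why the stale entry breaks verification: the `*.s3.amazonaws.com` entry in the old list marks every direct child of `s3.amazonaws.com` as a public suffix, and httpclient's `DefaultHostnameVerifier` refuses to verify a hostname that is itself a public suffix (it has no registrable domain root). A simplified, self-contained model of that rule (the real logic lives in `PublicSuffixMatcher` and `DefaultHostnameVerifier`; the names below are illustrative, not the actual API):
{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class SuffixRule {

    // True when the suffix list makes `host` itself a public suffix,
    // i.e. there is no registrable domain root below it.
    static boolean hostIsPublicSuffix(String host, Set<String> entries) {
        if (entries.contains(host)) {
            return true;
        }
        int dot = host.indexOf('.');
        // A "*.parent" entry turns every direct child of parent into a suffix.
        return dot > 0 && entries.contains("*." + host.substring(dot + 1));
    }

    // Simplified model of the verifier's pre-check: hostname verification
    // is refused outright for hosts that are public suffixes, regardless
    // of what the certificate's subject alternative names say.
    static boolean verificationPossible(String host, Set<String> entries) {
        return !hostIsPublicSuffix(host, entries);
    }

    public static void main(String[] args) {
        Set<String> oldList = new HashSet<>(Arrays.asList("*.s3.amazonaws.com"));
        Set<String> newList = new HashSet<>(Arrays.asList("s3.amazonaws.com"));
        String host = "bucket.s3.amazonaws.com";
        System.out.println(verificationPossible(host, oldList)); // false
        System.out.println(verificationPossible(host, newList)); // true
    }
}
{code}
Under the old list the bucket host has no registrable domain root, so even an exact SAN match is rejected, which is consistent with the `SSLPeerUnverifiedException` in the report below.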

 

> Certificate doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-18159
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18159
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 3.3.1
>         Environment: hadoop 3.3.1
> httpclient 4.5.13
> JDK8
>            Reporter: André F.
>            Priority: Major
>
> Trying to run any job after bumping our Spark version (which now uses Hadoop 3.3.1) leads us to the following exception while reading files on S3:
> {code:java}
> org.apache.hadoop.fs.s3a.AWSClientIOException: getFileStatus on s3a://<bucket>/<path>.parquet: com.amazonaws.SdkClientException: Unable to execute HTTP request: Certificate for <bucket.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]: Unable to execute HTTP request: Certificate for <bucket> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com] at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:208) at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:170) at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3351) at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3185) at org.apache.hadoop.fs.s3a.S3AFileSystem.isDirectory(S3AFileSystem.java:4277) at {code}
>  
> {code:java}
> Caused by: javax.net.ssl.SSLPeerUnverifiedException: Certificate for <bucket.s3.amazonaws.com> doesn't match any of the subject alternative names: [*.s3.amazonaws.com, s3.amazonaws.com]
> 		at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.verifyHostname(SSLConnectionSocketFactory.java:507)
> 		at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:437)
> 		at com.amazonaws.thirdparty.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:384)
> 		at com.amazonaws.thirdparty.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:142)
> 		at com.amazonaws.thirdparty.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:376)
> 		at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
> 		at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 		at java.lang.reflect.Method.invoke(Method.java:498)
> 		at com.amazonaws.http.conn.ClientConnectionManagerFactory$Handler.invoke(ClientConnectionManagerFactory.java:76)
> 		at com.amazonaws.http.conn.$Proxy16.connect(Unknown Source)
> 		at com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:393)
> 		at com.amazonaws.thirdparty.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:236)
> 		at com.amazonaws.thirdparty.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:186)
> 		at com.amazonaws.thirdparty.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
> 		at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
> 		at com.amazonaws.thirdparty.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
> 		at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
> 		at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1333)
> 		at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)  {code}
> We found similar problems in the following tickets, but:
>  - https://issues.apache.org/jira/browse/HADOOP-17017 (we don't use `.` in our bucket names)
>  - [https://github.com/aws/aws-sdk-java-v2/issues/1786] (we tried overriding it with `httpclient:4.5.10` or `httpclient:4.5.8`, with no effect)
> We couldn't test the native `openssl` configuration due to our setup, so we would like to stick with the Java SSL implementation, if possible.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org