Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/10/27 17:48:27 UTC

[jira] [Commented] (SPARK-11353) Writing to S3 buckets, which only support AWS4-HMAC-SHA256 fails

    [ https://issues.apache.org/jira/browse/SPARK-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14976704#comment-14976704 ] 

Apache Spark commented on SPARK-11353:
--------------------------------------

User 'lpiepiora' has created a pull request for this issue:
https://github.com/apache/spark/pull/9306

> Writing to S3 buckets, which only support AWS4-HMAC-SHA256 fails
> ----------------------------------------------------------------
>
>                 Key: SPARK-11353
>                 URL: https://issues.apache.org/jira/browse/SPARK-11353
>             Project: Spark
>          Issue Type: Bug
>          Components: Input/Output
>    Affects Versions: 1.3.1, 1.5.1
>            Reporter: Ɓukasz Piepiora
>
> For certain regions, for example Frankfurt (eu-central-1), AWS supports only [AWS Signature Version 4|http://docs.aws.amazon.com/general/latest/gr/rande.html#d0e3788].
> Spark currently depends on the jets3t library in version 0.9.3, which throws an exception when a job tries to save files to S3 in eu-central-1 (stack trace below, followed by a minimal reproduction sketch).
> {code}
> Caused by: java.lang.RuntimeException: Failed to automatically set required header "x-amz-content-sha256" for request with entity org.jets3t.service.impl.rest.httpclient.RepeatableRequestEntity@1e4bc601
> 	at org.jets3t.service.utils.SignatureUtils.awsV4GetOrCalculatePayloadHash(SignatureUtils.java:238)
> 	at org.jets3t.service.impl.rest.httpclient.RestStorageService.authorizeHttpRequest(RestStorageService.java:762)
> 	at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:324)
> 	at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRequest(RestStorageService.java:277)
> 	at org.jets3t.service.impl.rest.httpclient.RestStorageService.performRestPut(RestStorageService.java:1143)
> 	at org.jets3t.service.impl.rest.httpclient.RestStorageService.createObjectImpl(RestStorageService.java:1954)
> 	at org.jets3t.service.impl.rest.httpclient.RestStorageService.putObjectWithRequestEntityImpl(RestStorageService.java:1875)
> 	at org.jets3t.service.impl.rest.httpclient.RestStorageService.putObjectImpl(RestStorageService.java:1867)
> 	at org.jets3t.service.StorageService.putObject(StorageService.java:840)
> 	at org.jets3t.service.S3Service.putObject(S3Service.java:2212)
> 	at org.jets3t.service.S3Service.putObject(S3Service.java:2356)
> 	... 23 more
> Caused by: java.io.IOException: Stream closed
> 	at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:170)
> 	at java.io.BufferedInputStream.reset(BufferedInputStream.java:446)
> 	at org.jets3t.service.utils.SignatureUtils.awsV4GetOrCalculatePayloadHash(SignatureUtils.java:236)
> 	... 33 more
> {code}
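> For context, a minimal reproduction sketch (the bucket name, credentials, and output path are placeholders; it assumes the default jets3t-backed s3n:// connector):
> {code}
> // Sketch only: any write through the s3n:// connector to a bucket hosted in a
> // Signature Version 4-only region such as eu-central-1 fails with the error above.
> import org.apache.spark.{SparkConf, SparkContext}
>
> object S3SigV4WriteRepro {
>   def main(args: Array[String]): Unit = {
>     val sc = new SparkContext(new SparkConf().setAppName("s3-sigv4-write-repro"))
>     // Placeholder credentials, read by the s3n connector via the Hadoop configuration.
>     sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "<access-key>")
>     sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "<secret-key>")
>     // Writing to a bucket created in eu-central-1 triggers the jets3t 0.9.3 failure.
>     sc.parallelize(1 to 100).saveAsTextFile("s3n://<bucket-in-eu-central-1>/repro-output")
>     sc.stop()
>   }
> }
> {code}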
> A newer release, jets3t 0.9.4, appears to fix this issue (http://www.jets3t.org/RELEASE_NOTES.html).
> I therefore suggest upgrading the jets3t dependency from 0.9.3 to 0.9.4 for the Hadoop profiles.
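> For Spark's own build that means bumping the version pinned per Hadoop profile; applications that embed Spark could also force the newer client themselves with an override along these lines (a sketch, assuming the usual jets3t Maven coordinates):
> {code}
> // build.sbt fragment (sketch): force jets3t 0.9.4, which adds working
> // AWS Signature Version 4 support, over the 0.9.3 pulled in transitively by Spark.
> dependencyOverrides += "net.java.dev.jets3t" % "jets3t" % "0.9.4"
> {code}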


