Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2022/04/20 02:46:00 UTC

[jira] [Updated] (SPARK-38958) Override S3 Client in Spark Write/Read calls

     [ https://issues.apache.org/jira/browse/SPARK-38958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon updated SPARK-38958:
---------------------------------
    Priority: Major  (was: Blocker)

> Override S3 Client in Spark Write/Read calls
> --------------------------------------------
>
>                 Key: SPARK-38958
>                 URL: https://issues.apache.org/jira/browse/SPARK-38958
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>    Affects Versions: 3.2.1
>            Reporter: Hershal
>            Priority: Major
>
> Hello,
> I have been working on using Spark to read and write data to S3. Unfortunately, there are a few S3 headers that I need to add to my Spark read/write calls. After much searching, I have not found a way to replace the S3 client that Spark uses to make the read/write calls, nor a configuration that would let me pass in S3 headers. Here is an example of some common S3 request headers ([https://docs.aws.amazon.com/AmazonS3/latest/API/RESTCommonRequestHeaders.html]). Does functionality already exist to add S3 headers to Spark read/write calls, or to pass in a custom client that would send these headers on every read/write request? I appreciate the help and feedback.
>  
> Thanks,
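For readers landing on this thread with the same question, here is a hedged sketch of one plausible route, not confirmed by this issue: Spark does not expose its own S3 client, but the Hadoop S3A connector (hadoop-aws) it typically delegates to has two relevant knobs. `fs.s3a.custom.signers` (added in Hadoop 3.3.0 via HADOOP-16445) registers a custom AWS request signer, which can rewrite each outgoing request, headers included, when selected through `fs.s3a.signing-algorithm`; and the internal/unstable option `fs.s3a.s3.client.factory.impl` swaps in a custom `S3ClientFactory`. Spark forwards any `spark.hadoop.`-prefixed configuration to Hadoop, so these can be set at submit time. The `com.example.*` class names below are hypothetical placeholders you would implement yourself.

```python
# Sketch only: assumes Spark backed by the hadoop-aws (S3A) connector, Hadoop 3.3+.
# The fs.s3a.* option names exist in hadoop-aws; the com.example.* classes and
# the "header-signer" name are hypothetical and would need to be written/chosen.
def s3a_header_overrides():
    """Build spark-submit --conf entries that reach the S3A-built S3 client."""
    return {
        # Internal/unstable S3A option: replace the factory that builds the S3 client.
        "spark.hadoop.fs.s3a.s3.client.factory.impl":
            "com.example.HeaderInjectingS3ClientFactory",
        # HADOOP-16445: register a custom signer (format: name:SignerClassName)
        # whose sign() hook can add headers to every S3 request.
        "spark.hadoop.fs.s3a.custom.signers":
            "header-signer:com.example.HeaderAddingSigner",
        # Tell S3A to sign requests with the custom signer registered above.
        "spark.hadoop.fs.s3a.signing-algorithm": "header-signer",
    }

if __name__ == "__main__":
    # Emit the overrides in spark-submit form.
    for key, value in s3a_header_overrides().items():
        print(f"--conf {key}={value}")
```

The same keys could instead be set on a `SparkConf`/`SparkSession.builder.config(...)` before the first S3A filesystem is created; whether this fully covers the reporter's header requirements depends on which headers are needed, since the signer hook only sees requests S3A itself issues.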



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org