Posted to issues@spark.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2017/08/15 10:33:00 UTC

[jira] [Commented] (SPARK-21702) Structured Streaming S3A SSE Encryption Not Applied when PartitionBy Used

    [ https://issues.apache.org/jira/browse/SPARK-21702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127061#comment-16127061 ] 

Steve Loughran commented on SPARK-21702:
----------------------------------------

This is interesting. What may be happening is that whatever S3A filesystem instance is being created, it's not picking up the options you are setting in the Spark conf.
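
One quick way to check is to ask the live filesystem instance what it actually resolved. A minimal sketch, assuming a SparkSession named "spark" and a placeholder bucket name; note that FileSystem.get returns a cached instance, so an instance created before the options were applied won't see them:

    import java.net.URI
    import org.apache.hadoop.fs.FileSystem

    // "my-bucket" is a placeholder; substitute the real target bucket.
    // FileSystem.get may hand back a cached instance created earlier with
    // different options, which is one way settings can go missing.
    val fs = FileSystem.get(new URI("s3a://my-bucket/"),
      spark.sparkContext.hadoopConfiguration)
    println(fs.getConf.get("fs.s3a.server-side-encryption-algorithm"))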

Set the values in core-site.xml and/or spark-defaults.conf and see if they are picked up there. Otherwise, if you can attach a (short) example with this problem, I'll see if I can replicate it in an integration test.
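
For reference, the equivalent settings, using the same key as above (the "spark.hadoop." prefix applies only when setting it through Spark; the Hadoop config file takes the bare key):

In spark-defaults.conf:

    spark.hadoop.fs.s3a.server-side-encryption-algorithm AES256

In core-site.xml:

    <property>
      <name>fs.s3a.server-side-encryption-algorithm</name>
      <value>AES256</value>
    </property>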

> Structured Streaming S3A SSE Encryption Not Applied when PartitionBy Used
> -------------------------------------------------------------------------
>
>                 Key: SPARK-21702
>                 URL: https://issues.apache.org/jira/browse/SPARK-21702
>             Project: Spark
>          Issue Type: Bug
>          Components: Structured Streaming
>    Affects Versions: 2.2.0
>         Environment: Hadoop 2.7.3: AWS SDK 1.7.4
> Hadoop 2.8.1: AWS SDK 1.10.6
>            Reporter: George Pongracz
>            Priority: Minor
>              Labels: security
>
> Settings:
>       .config("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
>       .config("spark.hadoop.fs.s3a.server-side-encryption-algorithm", "AES256")
> When writing to an S3 sink from structured streaming, the files are encrypted using AES-256.
> When introducing a "partitionBy", the output data files are unencrypted.
> All other supporting files and metadata are encrypted.
> Suspect the write to the temporary location is encrypted and the move/rename is not applying the SSE.


