Posted to issues@spark.apache.org by "Andre Schumacher (JIRA)" <ji...@apache.org> on 2014/06/20 12:36:24 UTC

[jira] [Commented] (SPARK-2112) ParquetTypesConverter should not create its own conf

    [ https://issues.apache.org/jira/browse/SPARK-2112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14038666#comment-14038666 ] 

Andre Schumacher commented on SPARK-2112:
-----------------------------------------

Since commit
https://github.com/apache/spark/commit/f479cf3743e416ee08e62806e1b34aff5998ac22
the SparkContext's Hadoop configuration should be used when reading metadata from the file source. I have not yet been able to test this with, say, S3 bucket names.

Are the S3 credentials copied from the SparkConf to its Hadoop configuration? If someone could confirm that this works, we could close this issue.
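For reference, Spark copies properties prefixed with "spark.hadoop." from the SparkConf into the Hadoop configuration it hands out. A minimal sketch of that propagation, using plain Maps as stand-ins for SparkConf and Hadoop's Configuration (the key names below are placeholders, not verified against this fix):

```scala
// Sketch: copy "spark.hadoop."-prefixed entries into a Hadoop-style config,
// roughly mirroring what SparkHadoopUtil does when building the
// SparkContext's Hadoop configuration.
def toHadoopConf(sparkConf: Map[String, String]): Map[String, String] =
  sparkConf.collect {
    case (k, v) if k.startsWith("spark.hadoop.") =>
      k.stripPrefix("spark.hadoop.") -> v
  }

val sparkConf = Map(
  "spark.app.name"                          -> "parquet-s3-check",
  "spark.hadoop.fs.s3n.awsAccessKeyId"      -> "MY_ACCESS_KEY",  // placeholder
  "spark.hadoop.fs.s3n.awsSecretAccessKey"  -> "MY_SECRET_KEY"   // placeholder
)

val hadoopConf = toHadoopConf(sparkConf)
// The credential keys survive the copy; non-Hadoop Spark settings do not.
println(hadoopConf.get("fs.s3n.awsAccessKeyId"))
```

If the commit above works as intended, credentials set this way should now be visible to ParquetTypesConverter, since it reads the SparkContext's Hadoop configuration instead of creating a fresh one.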

> ParquetTypesConverter should not create its own conf
> ----------------------------------------------------
>
>                 Key: SPARK-2112
>                 URL: https://issues.apache.org/jira/browse/SPARK-2112
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.0.0
>            Reporter: Michael Armbrust
>
> [~adav]: "this actually makes it so that we can't use S3 credentials set in the SparkContext, or add new FileSystems at runtime, for instance."



--
This message was sent by Atlassian JIRA
(v6.2#6252)