Posted to issues@spark.apache.org by "Josh Rosen (JIRA)" <ji...@apache.org> on 2014/09/04 01:06:51 UTC

[jira] [Updated] (SPARK-3389) Add converter class to make reading Parquet files easy with PySpark

     [ https://issues.apache.org/jira/browse/SPARK-3389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Rosen updated SPARK-3389:
------------------------------
    Component/s: PySpark

> Add converter class to make reading Parquet files easy with PySpark
> -------------------------------------------------------------------
>
>                 Key: SPARK-3389
>                 URL: https://issues.apache.org/jira/browse/SPARK-3389
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark
>            Reporter: Uri Laserson
>
> If a user wants to read Parquet data from PySpark, they currently must use SparkContext.newAPIHadoopFile.  If they do not provide a valueConverter, they get back a JSON string that must be parsed by hand.  Here I add a Converter implementation based on the one in AvroConverters.scala.
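
For illustration, a rough sketch of what reading Parquet through newAPIHadoopFile could look like once such a converter exists. The converter class name, the input path, and the Parquet input-format class below are assumptions for the example, not names fixed by this issue:

    # Hedged sketch: reading Avro-backed Parquet data from PySpark.
    # The converter class name and the file path are illustrative
    # assumptions, not the final names added by this issue.
    from pyspark import SparkContext

    sc = SparkContext(appName="ParquetRead")

    parquet_rdd = sc.newAPIHadoopFile(
        "hdfs:///path/to/data.parquet",           # assumed example path
        "parquet.avro.AvroParquetInputFormat",    # Parquet's Avro-based input format (package name depends on Parquet version)
        "java.lang.Void",                         # AvroParquetInputFormat emits Void keys
        "org.apache.avro.generic.IndexedRecord",
        valueConverter="org.apache.spark.examples.pythonconverters.IndexedRecordToJavaConverter")  # assumed converter name

    # Without a valueConverter the values come back as JSON strings that must
    # be parsed by hand; with a converter they arrive as Python dicts.
    print(parquet_rdd.values().first())
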


