Posted to issues@spark.apache.org by "Uri Laserson (JIRA)" <ji...@apache.org> on 2014/09/04 01:04:52 UTC

[jira] [Commented] (SPARK-3389) Add converter class to make reading Parquet files easy with PySpark

    [ https://issues.apache.org/jira/browse/SPARK-3389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14120638#comment-14120638 ] 

Uri Laserson commented on SPARK-3389:
-------------------------------------

https://github.com/apache/spark/pull/2256

> Add converter class to make reading Parquet files easy with PySpark
> -------------------------------------------------------------------
>
>                 Key: SPARK-3389
>                 URL: https://issues.apache.org/jira/browse/SPARK-3389
>             Project: Spark
>          Issue Type: Improvement
>            Reporter: Uri Laserson
>
> If a user wants to read Parquet data from PySpark, they currently must use SparkContext.newAPIHadoopFile.  If they do not provide a valueConverter, they get back a JSON string that must be parsed.  Here I add a Converter implementation based on the one in the AvroConverters.scala file.
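
For context, a minimal PySpark sketch of the read path described above. The input format (parquet.avro.AvroParquetInputFormat), the converter class name (org.apache.spark.examples.pythonconverters.IndexedRecordToJavaConverter), and the file path are illustrative assumptions for this sketch, not names taken from the ticket or the pull request.

    from pyspark import SparkContext

    sc = SparkContext(appName="ParquetViaConverter")

    # Read Parquet records through the new Hadoop API. Without a valueConverter
    # each record comes back as a JSON string that must be parsed; with a
    # converter, records arrive as plain Python objects.
    parquet_rdd = sc.newAPIHadoopFile(
        "hdfs:///path/to/data.parquet",  # placeholder path
        "parquet.avro.AvroParquetInputFormat",
        "java.lang.Void",
        "org.apache.avro.generic.IndexedRecord",
        valueConverter="org.apache.spark.examples.pythonconverters.IndexedRecordToJavaConverter")  # assumed class name

    # Keys are Void (None in Python); the values carry the converted records.
    print(parquet_rdd.values().first())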



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org