Posted to issues@spark.apache.org by "Reynold Xin (JIRA)" <ji...@apache.org> on 2014/06/14 08:28:01 UTC

[jira] [Reopened] (SPARK-1487) Support record filtering via predicate pushdown in Parquet

     [ https://issues.apache.org/jira/browse/SPARK-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Reynold Xin reopened SPARK-1487:
--------------------------------


> Support record filtering via predicate pushdown in Parquet
> ----------------------------------------------------------
>
>                 Key: SPARK-1487
>                 URL: https://issues.apache.org/jira/browse/SPARK-1487
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Andre Schumacher
>            Assignee: Andre Schumacher
>             Fix For: 1.0.1, 1.1.0
>
>
> Parquet supports column filters, which can be used to avoid reading and deserializing records that fail the filter condition. This can lead to potentially large savings, depending on how many columns are filtered on and how many records actually pass the filter.
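
As a rough illustration of the query shape that benefits, below is a minimal Scala sketch against the Spark SQL API of the 1.x line. The file path, table name, and column names are hypothetical, and whether the predicate is actually evaluated inside the Parquet reader (rather than after rows are materialized) depends on the predicate type and the Spark/Parquet versions in use, which is exactly what this ticket addresses.

    import org.apache.spark.SparkContext
    import org.apache.spark.sql.SQLContext

    object ParquetPushdownSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext("local[*]", "parquet-pushdown-sketch")
        val sqlContext = new SQLContext(sc)

        // Hypothetical Parquet data set; the path and schema are illustrative only.
        val events = sqlContext.parquetFile("events.parquet")
        events.registerTempTable("events")

        // With predicate pushdown, the equality test on `status` can be handed
        // to the Parquet reader as a column filter, so records that cannot
        // match are skipped before they are fully deserialized.
        val errors = sqlContext.sql(
          "SELECT id, message FROM events WHERE status = 'ERROR'")

        errors.collect().foreach(println)
        sc.stop()
      }
    }
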



--
This message was sent by Atlassian JIRA
(v6.2#6252)