Posted to issues@spark.apache.org by "Yijie Shen (JIRA)" <ji...@apache.org> on 2015/04/14 17:00:14 UTC

[jira] [Created] (SPARK-6903) Eliminate partition filters from execution

Yijie Shen created SPARK-6903:
---------------------------------

             Summary: Eliminate partition filters from execution
                 Key: SPARK-6903
                 URL: https://issues.apache.org/jira/browse/SPARK-6903
             Project: Spark
          Issue Type: Improvement
          Components: SQL
    Affects Versions: 1.3.0
            Reporter: Yijie Shen
            Priority: Minor


Suppose I have a table t(id: String, event: String) saved as a Parquet file, with the directory hierarchy: hdfs://path/to/data/root/dt=2015-01-01/hr=00

After partition discovery, the result schema should be (id: String, event: String, dt: String, hr: Int)
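A minimal sketch of observing this, assuming a Spark 1.3 SQLContext and the example path above:

import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)  // sc: an existing SparkContext
import sqlContext.implicits._        // enables the $"col" syntax used below

// Partition discovery walks the dt=.../hr=... directories and appends
// them to the schema as columns.
val df = sqlContext.parquetFile("hdfs://path/to/data/root")
df.printSchema()
// expected:
// root
//  |-- id: string (nullable = true)
//  |-- event: string (nullable = true)
//  |-- dt: string (nullable = true)
//  |-- hr: integer (nullable = true)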

If I have a query like:

df.select($"id").filter(<predicate on event>).filter($"dt" > "2015-01-01").filter($"hr" > 13)

In the current implementation, after (dt > "2015-01-01" && hr > 13) is used to prune partitions, these two filters remain in the execution plan. As a result, every row returned from Parquet has the two partition fields dt and hr appended and then re-checked against predicates that partition pruning has already proven true, which is wasted work. We could rewrite execution.Filter's predicate to eliminate them.
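A hedged sketch of one way to do the rewrite, using Catalyst's PredicateHelper (the object name, method, and its wiring into the planner are my assumptions, not an actual patch):

import org.apache.spark.sql.catalyst.expressions.{And, Expression, PredicateHelper}

// Hypothetical helper: drop conjuncts that reference only partition
// columns, since partition pruning has already proven them true.
object PartitionFilterPruning extends PredicateHelper {
  def residualPredicate(
      condition: Expression,
      partitionCols: Set[String]): Option[Expression] = {
    val conjuncts = splitConjunctivePredicates(condition)
    val kept = conjuncts.filterNot { e =>
      e.references.nonEmpty &&
        e.references.forall(a => partitionCols.contains(a.name))
    }
    // None means the whole Filter node can be dropped from the plan.
    kept.reduceOption(And)
  }
}

In the query above, dt > "2015-01-01" and hr > 13 reference only partition columns and would be dropped, leaving only the event predicate for row-by-row evaluation.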



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org