Posted to issues@spark.apache.org by "Julien Buret (JIRA)" <ji...@apache.org> on 2015/10/16 11:58:05 UTC

[jira] [Created] (SPARK-11145) Cannot filter using a partition key and another column

Julien Buret created SPARK-11145:
------------------------------------

             Summary: Cannot filter using a partition key and another column
                 Key: SPARK-11145
                 URL: https://issues.apache.org/jira/browse/SPARK-11145
             Project: Spark
          Issue Type: Bug
          Components: PySpark, SQL
    Affects Versions: 1.5.1
            Reporter: Julien Buret


A DataFrame loaded from partitioned Parquet files cannot be filtered by a predicate that compares a partition key with another column.
In this case, all records are returned instead of only the matching rows.

Example

{code}
from pyspark.sql import SQLContext

# Assumes an existing SparkContext `sc` (e.g. from the PySpark shell).
sqlContext = SQLContext(sc)

d = [
    {'name': 'a', 'YEAR': 2015, 'year_2': 2014, 'statut': 'a'},
    {'name': 'b', 'YEAR': 2014, 'year_2': 2014, 'statut': 'a'},
    {'name': 'c', 'YEAR': 2013, 'year_2': 2011, 'statut': 'a'},
    {'name': 'd', 'YEAR': 2014, 'year_2': 2013, 'statut': 'a'},
    {'name': 'e', 'YEAR': 2016, 'year_2': 2017, 'statut': 'p'}
]

rdd = sc.parallelize(d)
df = sqlContext.createDataFrame(rdd)

# Write the data partitioned by YEAR, then read it back.
df.write.partitionBy('YEAR').mode('overwrite').parquet('data')
df2 = sqlContext.read.parquet('data')

# Compare the partition key with a regular column; only the row
# where YEAR == year_2 (name 'b') should match.
df2.filter(df2.YEAR == df2.year_2).show()
{code}

This returns:

{code}
+----+------+------+----+
|name|statut|year_2|YEAR|
+----+------+------+----+
|   d|     a|  2013|2014|
|   b|     a|  2014|2014|
|   c|     a|  2011|2013|
|   e|     p|  2017|2016|
|   a|     a|  2014|2015|
+----+------+------+----+
{code}
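
For reference, here is a minimal sketch of the expected behavior: running the same predicate on the original in-memory DataFrame, before the partitioned Parquet round-trip. From the sample data above, only the row with name 'b' has YEAR == year_2 (the expected output is reconstructed from that data; column order may differ).

{code}
# Same filter on the DataFrame before it is written out partitioned;
# this is the result one would expect from df2 as well.
df.filter(df.YEAR == df.year_2).show()

# +----+----+------+------+
# |YEAR|name|statut|year_2|
# +----+----+------+------+
# |2014|   b|     a|  2014|
# +----+----+------+------+
{code}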



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
