Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2016/10/08 12:44:20 UTC

[jira] [Resolved] (SPARK-11145) Cannot filter using a partition key and another column

     [ https://issues.apache.org/jira/browse/SPARK-11145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-11145.
----------------------------------
    Resolution: Cannot Reproduce

I can't reproduce this against the current master.

{code}
>>> from pyspark.sql import SQLContext
>>>
>>> sqlContext = SQLContext(sc)
>>> d = [
...     {'name': 'a', 'YEAR': 2015, 'year_2': 2014, 'statut': 'a'},
...     {'name': 'b', 'YEAR': 2014, 'year_2': 2014, 'statut': 'a'},
...     {'name': 'c', 'YEAR': 2013, 'year_2': 2011, 'statut': 'a'},
...     {'name': 'd', 'YEAR': 2014, 'year_2': 2013, 'statut': 'a'},
...     {'name': 'e', 'YEAR': 2016, 'year_2': 2017, 'statut': 'p'}
... ]
>>>
>>> rdd = sc.parallelize(d)
>>> df = sqlContext.createDataFrame(rdd)
/Users/hyukjinkwon/Desktop/workspace/local/forked/spark/python/pyspark/sql/session.py:336: UserWarning: Using RDD of dict to inferSchema is deprecated. Use pyspark.sql.Row instead
  warnings.warn("Using RDD of dict to inferSchema is deprecated. "
>>> df.write.partitionBy('YEAR').mode('overwrite').parquet('data')
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
>>> df2 = sqlContext.read.parquet('data')
>>> df2.filter(df2.YEAR == df2.year_2).show()
+----+------+------+----+
|name|statut|year_2|YEAR|
+----+------+------+----+
|   b|     a|  2014|2014|
+----+------+------+----+
{code}
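
As an aside, the UserWarning in the transcript comes from inferring the schema from an RDD of dicts, which is deprecated. A minimal sketch of the Row-based equivalent (same data as above, nothing else changed):

{code}
>>> from pyspark.sql import Row
>>> rows = [Row(name='a', YEAR=2015, year_2=2014, statut='a'),
...         Row(name='b', YEAR=2014, year_2=2014, statut='a'),
...         Row(name='c', YEAR=2013, year_2=2011, statut='a'),
...         Row(name='d', YEAR=2014, year_2=2013, statut='a'),
...         Row(name='e', YEAR=2016, year_2=2017, statut='p')]
>>> df = sqlContext.createDataFrame(rows)  # Rows carry the schema, so no deprecation warning
{code}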

> Cannot filter using a partition key and another column
> ------------------------------------------------------
>
>                 Key: SPARK-11145
>                 URL: https://issues.apache.org/jira/browse/SPARK-11145
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark, SQL
>    Affects Versions: 1.5.1
>            Reporter: Julien Buret
>
> A DataFrame loaded from partitioned Parquet files cannot be filtered by a predicate that compares a partition key with another column.
> In this case, all records are returned.
> Example
> {code}
> from pyspark.sql import SQLContext
> sqlContext = SQLContext(sc)
> d = [
>     {'name': 'a', 'YEAR': 2015, 'year_2': 2014, 'statut': 'a'},
>     {'name': 'b', 'YEAR': 2014, 'year_2': 2014, 'statut': 'a'},
>     {'name': 'c', 'YEAR': 2013, 'year_2': 2011, 'statut': 'a'},
>     {'name': 'd', 'YEAR': 2014, 'year_2': 2013, 'statut': 'a'},
>     {'name': 'e', 'YEAR': 2016, 'year_2': 2017, 'statut': 'p'}
> ]
> rdd = sc.parallelize(d)
> df = sqlContext.createDataFrame(rdd)
> df.write.partitionBy('YEAR').mode('overwrite').parquet('data')
> df2 = sqlContext.read.parquet('data')
> df2.filter(df2.YEAR == df2.year_2).show()
> {code}
> returns
> {code}
> +----+------+------+----+
> |name|statut|year_2|YEAR|
> +----+------+------+----+
> |   d|     a|  2013|2014|
> |   b|     a|  2014|2014|
> |   c|     a|  2011|2013|
> |   e|     p|  2017|2016|
> |   a|     a|  2014|2015|
> +----+------+------+----+
> {code}
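
For anyone still on an affected 1.5.x build: note that partition discovery infers the type of YEAR from the directory names (typically int here), while year_2 was inferred from Python ints as bigint. Assuming that type mismatch is what defeats the predicate, one possible workaround, not verified against 1.5.1, is to cast both sides to the same type:

{code}
>>> # Hypothetical workaround: force the partition column to long so the
>>> # comparison is between identically typed columns.
>>> df2.filter(df2.YEAR.cast('long') == df2.year_2).show()
{code}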



