Posted to issues@spark.apache.org by "Yin Huai (JIRA)" <ji...@apache.org> on 2015/08/28 19:37:46 UTC
[jira] [Updated] (SPARK-10334) Partitioned table scan's query plan
does not show Filter and Project on top of the table scan
[ https://issues.apache.org/jira/browse/SPARK-10334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yin Huai updated SPARK-10334:
-----------------------------
Target Version/s: 1.6.0, 1.5.1
Priority: Critical (was: Major)
> Partitioned table scan's query plan does not show Filter and Project on top of the table scan
> ---------------------------------------------------------------------------------------------
>
> Key: SPARK-10334
> URL: https://issues.apache.org/jira/browse/SPARK-10334
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 1.5.0
> Reporter: Yin Huai
> Priority: Critical
>
> {code}
> Seq(Tuple2(1, 1), Tuple2(2, 2)).toDF("i", "j").write.format("parquet").partitionBy("i").save("/tmp/testFilter_partitioned")
> val df1 = sqlContext.read.format("parquet").load("/tmp/testFilter_partitioned")
> df1.selectExpr("hash(i)", "hash(j)").show
> df1.filter("hash(j) = 1").explain
> == Physical Plan ==
> Scan ParquetRelation[file:/tmp/testFilter_partitioned][j#20,i#21]
> {code}
> Looks like the reason is that we do correctly apply the project and filter, but we then create an RDD for the result and manually wrap it in a PhysicalRDD. As a result, the Project and Filter on top of the original table scan disappear from the physical plan.
> See https://github.com/apache/spark/blob/master/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala#L138-L175
> We will not generate wrong results, but the query plan is confusing.
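The collapse described above can be modeled with a toy plan tree in plain Scala (no Spark dependency; the class names here are illustrative stand-ins, not Spark's actual operators). The point is that once the filter/project work has been folded into the computed RDD and the plan is replaced by a single leaf node, the intermediate operators no longer appear when the plan is printed:

```scala
// Toy physical-plan nodes: a leaf scan, plus Filter/Project wrappers.
// These are hypothetical stand-ins for Spark's real operators.
sealed trait Plan { def describe: String }

case class Scan(table: String) extends Plan {
  def describe: String = s"Scan $table"
}

case class Filter(cond: String, child: Plan) extends Plan {
  def describe: String = s"Filter $cond\n  ${child.describe}"
}

case class Project(cols: Seq[String], child: Plan) extends Plan {
  def describe: String = s"Project ${cols.mkString(",")}\n  ${child.describe}"
}

// Mimics what the issue describes: the filter and project have already been
// applied while producing the RDD, and the whole subtree is replaced by a
// single leaf node, so Filter/Project vanish from the explained plan.
case class PhysicalRDD(output: Seq[String]) extends Plan {
  def describe: String = s"Scan ParquetRelation[${output.mkString(",")}]"
}

object PlanDemo {
  def main(args: Array[String]): Unit = {
    // What the plan logically contains:
    val logical = Filter("hash(j) = 1", Project(Seq("j", "i"), Scan("parquet")))
    println(logical.describe)   // shows Filter and Project above the scan

    // What gets shown after the subtree is collapsed into one leaf:
    val collapsed = PhysicalRDD(Seq("j", "i"))
    println(collapsed.describe) // only the scan line, as in the bug report
  }
}
```

Running this prints the Filter/Project tree for the logical form, but only a single `Scan ParquetRelation[j,i]` line for the collapsed form, matching the confusing `explain` output quoted above.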
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)