Posted to issues@spark.apache.org by "Yuming Wang (Jira)" <ji...@apache.org> on 2020/05/25 08:05:00 UTC
[jira] [Updated] (SPARK-31811) Pushdown IsNotNull to file scan if possible
[ https://issues.apache.org/jira/browse/SPARK-31811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Yuming Wang updated SPARK-31811:
--------------------------------
Description:
We should push down {{IsNotNull}} to the file scan where possible. For example:
{code:sql}
CREATE TABLE t1(c1 string, c2 string) USING parquet;
EXPLAIN SELECT t1.* FROM t1 WHERE coalesce(t1.c1, t1.c2) IS NOT NULL;
{code}
{noformat}
== Physical Plan ==
*(1) Filter isnotnull(coalesce(c1#43, c2#44))
+- *(1) ColumnarToRow
+- FileScan parquet default.t1[c1#43,c2#44] Batched: true, DataFilters: [isnotnull(coalesce(c1#43, c2#44))], Format: Parquet, Location: InMemoryFileIndex[file:/root/spark-3.0.0-bin-hadoop2.7/spark-warehouse/t1], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<c1:string,c2:string>
{noformat}
{code:sql}
EXPLAIN SELECT t1.* FROM t1 WHERE t1.c1 IS NOT NULL OR t1.c2 IS NOT NULL;
{code}
{noformat}
== Physical Plan ==
*(1) Filter (isnotnull(c1#43) OR isnotnull(c2#44))
+- *(1) ColumnarToRow
+- FileScan parquet default.t1[c1#43,c2#44] Batched: true, DataFilters: [(isnotnull(c1#43) OR isnotnull(c2#44))], Format: Parquet, Location: InMemoryFileIndex[file:/root/spark-3.0.0-bin-hadoop2.7/spark-warehouse/t1], PartitionFilters: [], PushedFilters: [Or(IsNotNull(c1),IsNotNull(c2))], ReadSchema: struct<c1:string,c2:string>
{noformat}
Real-world performance comparison:
!default.png! !pushdown.png!
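The rewrite the two plans above illustrate can be sketched outside Spark: {{coalesce}} is null-intolerant in the sense that it returns null only when *every* argument is null, so {{IsNotNull(coalesce(a, b))}} is equivalent to {{IsNotNull(a) OR IsNotNull(b)}}, and the latter is expressible as a data source filter. A minimal Python sketch of that inference (the AST classes and the {{pushable_not_null}} helper are illustrative only, not Spark's Catalyst API):

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class Attr:
    """A base-column reference, e.g. t1.c1."""
    name: str


@dataclass(frozen=True)
class Coalesce:
    """coalesce(children...): null iff ALL children are null."""
    children: Tuple


def pushable_not_null(expr):
    """Rewrite IsNotNull(expr) into a filter over base columns.

    IsNotNull(coalesce(a, b, ...)) <=> IsNotNull(a) OR IsNotNull(b) OR ...
    Output uses the same notation as the PushedFilters field above.
    """
    if isinstance(expr, Attr):
        return f"IsNotNull({expr.name})"
    if isinstance(expr, Coalesce):
        parts = [pushable_not_null(c) for c in expr.children]
        out = parts[0]
        for p in parts[1:]:          # fold into binary Or(...), left-assoc
            out = f"Or({out},{p})"
        return out
    raise ValueError("expression is not null-intolerant; cannot push")


print(pushable_not_null(Coalesce((Attr("c1"), Attr("c2")))))
```

Running this prints {{Or(IsNotNull(c1),IsNotNull(c2))}}, i.e. exactly the source filter the second plan shows under PushedFilters while the first plan shows none.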
was:
We should push down {{IsNotNull}} to the file scan where possible. For example:
{code:sql}
CREATE TABLE t1(c1 string, c2 string) USING parquet;
EXPLAIN SELECT t1.* FROM t1 WHERE coalesce(t1.c1, t1.c2) IS NOT NULL;
{code}
{noformat}
== Physical Plan ==
*(1) Filter isnotnull(coalesce(c1#43, c2#44))
+- *(1) ColumnarToRow
+- FileScan parquet default.t1[c1#43,c2#44] Batched: true, DataFilters: [isnotnull(coalesce(c1#43, c2#44))], Format: Parquet, Location: InMemoryFileIndex[file:/root/spark-3.0.0-bin-hadoop2.7/spark-warehouse/t1], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<c1:string,c2:string>
{noformat}
{code:sql}
EXPLAIN SELECT t1.* FROM t1 WHERE t1.c1 IS NOT NULL OR t1.c2 IS NOT NULL;
{code}
{noformat}
== Physical Plan ==
*(1) Filter (isnotnull(c1#43) OR isnotnull(c2#44))
+- *(1) ColumnarToRow
+- FileScan parquet default.t1[c1#43,c2#44] Batched: true, DataFilters: [(isnotnull(c1#43) OR isnotnull(c2#44))], Format: Parquet, Location: InMemoryFileIndex[file:/root/spark-3.0.0-bin-hadoop2.7/spark-warehouse/t1], PartitionFilters: [], PushedFilters: [Or(IsNotNull(c1),IsNotNull(c2))], ReadSchema: struct<c1:string,c2:string>
{noformat}
Real-world performance comparison:
> Pushdown IsNotNull to file scan if possible
> -------------------------------------------
>
> Key: SPARK-31811
> URL: https://issues.apache.org/jira/browse/SPARK-31811
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 3.1.0
> Reporter: Yuming Wang
> Assignee: Yuming Wang
> Priority: Major
> Attachments: default.png, pushdown.png
>
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)