Posted to issues@spark.apache.org by "Willi Raschkowski (Jira)" <ji...@apache.org> on 2021/07/07 18:55:00 UTC

[jira] [Comment Edited] (SPARK-36034) Incorrect datetime filter when reading Parquet files written in legacy mode

    [ https://issues.apache.org/jira/browse/SPARK-36034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376795#comment-17376795 ] 

Willi Raschkowski edited comment on SPARK-36034 at 7/7/21, 6:54 PM:
--------------------------------------------------------------------

To show you the metadata of the Parquet files:

{code:title=Corrected}
$ parquet-tools meta /Volumes/git/pds/190025/out/date_written_by_spark3_corrected 
file:        file:/Volumes/git/pds/190025/out/date_written_by_spark3_corrected/part-00000-77ac86ed-1488-4b1d-882b-6cef698174ac-c000.snappy.parquet 
creator:     parquet-mr version 1.10.1 (build a89df8f9932b6ef6633d06069e50c9b7970bebd1) 
extra:       org.apache.spark.version = 3.1.2 
extra:       org.apache.spark.sql.parquet.row.metadata = {"type":"struct","fields":[{"name":"date","type":"date","nullable":false,"metadata":{}}]} 

file schema: spark_schema 
--------------------------------------------------------------------------------
date:        REQUIRED INT32 L:DATE R:0 D:0

row group 1: RC:1 TS:49 OFFSET:4 
--------------------------------------------------------------------------------
date:         INT32 SNAPPY DO:0 FPO:4 SZ:51/49/0.96 VC:1 ENC:BIT_PACKED,PLAIN ST:[min: 0001-01-01, max: 0001-01-01, num_nulls: 0]
{code}

{code:title=Legacy}
$ parquet-tools meta /Volumes/git/pds/190025/out/date_written_by_spark3_legacy 
file:        file:/Volumes/git/pds/190025/out/date_written_by_spark3_legacy/part-00000-2c5b4961-6908-4f6c-b2d8-e706d793aae5-c000.snappy.parquet 
creator:     parquet-mr version 1.10.1 (build a89df8f9932b6ef6633d06069e50c9b7970bebd1) 
extra:       org.apache.spark.version = 3.1.2 
extra:       org.apache.spark.sql.parquet.row.metadata = {"type":"struct","fields":[{"name":"date","type":"date","nullable":false,"metadata":{}}]} 
extra:       org.apache.spark.legacyDateTime = 

file schema: spark_schema 
--------------------------------------------------------------------------------
date:        REQUIRED INT32 L:DATE R:0 D:0

row group 1: RC:1 TS:49 OFFSET:4 
--------------------------------------------------------------------------------
date:         INT32 SNAPPY DO:0 FPO:4 SZ:51/49/0.96 VC:1 ENC:BIT_PACKED,PLAIN ST:[min: 0001-12-30, max: 0001-12-30, num_nulls: 0]
{code}

Note how _corrected_ stores {{0001-01-01}} as the value while _legacy_ stores {{0001-12-30}}: the statistics show the raw, rebased day number as it sits in the file, not the date Spark returns after the read-side rebase. This suggests that pushing down the {{0001-01-01}} filter doesn't work against legacy files, because the pushed value is compared with those raw statistics. Maybe we need an "inverse rebase" of the filter value before pushing it down?
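
To make the mismatch concrete, here is a small plain-Python sketch (my own illustration, not Spark internals). Catalyst encodes a {{DATE}} as the number of days since {{1970-01-01}} in the proleptic Gregorian calendar, which is where the {{-719162}} literal in the physical plan quoted in the issue description below comes from:

{code:title=Day-number arithmetic (illustration)}
from datetime import date  # Python's date uses the proleptic Gregorian calendar

EPOCH = date(1970, 1, 1).toordinal()

# DATE '0001-01-01' as days since the epoch; this matches the -719162
# literal in the pushed-down filter shown in the physical plan.
print(date(1, 1, 1).toordinal() - EPOCH)  # -719162

# The legacy writer rebases this day number to the hybrid Julian
# calendar before writing, so the raw INT32 stored in the file is a
# different number. parquet-tools formats that raw number back into a
# date label, which is presumably why the statistics above show
# 0001-12-30 rather than 0001-01-01.
{code}

An "inverse rebase" of the filter value before pushdown would make the pushed comparison see the same raw number that is actually stored in the file.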


> Incorrect datetime filter when reading Parquet files written in legacy mode
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-36034
>                 URL: https://issues.apache.org/jira/browse/SPARK-36034
>             Project: Spark
>          Issue Type: Task
>          Components: SQL
>    Affects Versions: 3.1.2
>            Reporter: Willi Raschkowski
>            Priority: Major
>
> We're seeing incorrect date filters on Parquet files written by Spark 2 or by Spark 3 with legacy rebase mode.
> This is the expected behavior that we see in _corrected_ mode (Spark 3.1.2):
> {code:title=Good (Corrected Mode)}
> >>> spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "CORRECTED")
> >>> spark.sql("SELECT DATE '0001-01-01' AS date").write.mode("overwrite").parquet("/Volumes/git/pds/190025/out/date_written_by_spark3")
> >>> spark.read.parquet("/Volumes/git/pds/190025/out/date_written_by_spark3").selectExpr("date", "date = '0001-01-01'").show()
> +----------+-------------------+
> |      date|(date = 0001-01-01)|
> +----------+-------------------+
> |0001-01-01|               true|
> +----------+-------------------+
> >>> spark.read.parquet("/Volumes/git/pds/190025/out/date_written_by_spark3").where("date = '0001-01-01'").show()
> +----------+
> |      date|
> +----------+
> |0001-01-01|
> +----------+
> {code}
> This is how we get incorrect results in _legacy_ mode; here the filter drops rows it shouldn't:
> {code:title=Bad (Legacy Mode)}
> >>> spark.conf.set("spark.sql.legacy.parquet.datetimeRebaseModeInWrite", "LEGACY")
> >>> spark.sql("SELECT DATE '0001-01-01' AS date").write.mode("overwrite").parquet("/Volumes/git/pds/190025/out/date_written_by_spark3")
> >>> spark.read.parquet("/Volumes/git/pds/190025/out/date_written_by_spark3").selectExpr("date", "date = '0001-01-01'").show()
> +----------+-------------------+
> |      date|(date = 0001-01-01)|
> +----------+-------------------+
> |0001-01-01|               true|
> +----------+-------------------+
> >>> spark.read.parquet("/Volumes/git/pds/190025/out/date_written_by_spark3").where("date = '0001-01-01'").show()
> +----+
> |date|
> +----+
> +----+
> >>> spark.read.parquet("/Volumes/git/pds/190025/out/date_written_by_spark3").where("date = '0001-01-01'").explain()
> == Physical Plan ==
> *(1) Filter (isnotnull(date#122) AND (date#122 = -719162))
> +- *(1) ColumnarToRow
>    +- FileScan parquet [date#122] Batched: true, DataFilters: [isnotnull(date#122), (date#122 = -719162)], Format: Parquet, Location: InMemoryFileIndex[file:/Volumes/git/pds/190025/out/date_written_by_spark3], PartitionFilters: [], PushedFilters: [IsNotNull(date), EqualTo(date,0001-01-01)], ReadSchema: struct<date:date>
> {code}
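
A possible workaround while this is open, assuming the mismatch really is in the pushed-down comparison (an untested sketch on my side): disable Parquet filter pushdown via {{spark.sql.parquet.filterPushdown}} so that the predicate is evaluated by Spark only after the read-side rebase has been applied.

{code:title=Possible workaround (untested sketch)}
# With pushdown disabled, the EqualTo filter is no longer evaluated
# against the raw (rebased) values and row-group statistics inside the
# file; Spark filters the rows itself, after rebasing them back to the
# proleptic Gregorian calendar.
spark.conf.set("spark.sql.parquet.filterPushdown", "false")
spark.read.parquet("/Volumes/git/pds/190025/out/date_written_by_spark3") \
    .where("date = '0001-01-01'") \
    .show()
{code}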


