Posted to issues@spark.apache.org by "Maxim Gekk (Jira)" <ji...@apache.org> on 2020/05/08 12:44:00 UTC
[jira] [Updated] (SPARK-31662) Reading wrong dates from dictionary encoded columns in Parquet files
[ https://issues.apache.org/jira/browse/SPARK-31662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Maxim Gekk updated SPARK-31662:
-------------------------------
Description:
Write dates to Parquet files with dictionary encoding enabled:
{code:scala}
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.1.0-SNAPSHOT
      /_/
Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_242)
Type in expressions to have them evaluated.
Type :help for more information.
scala> spark.conf.set("spark.sql.legacy.parquet.rebaseDateTimeInWrite.enabled", true)
scala> :paste
// Entering paste mode (ctrl-D to finish)
Seq.tabulate(8)(_ => "1001-01-01").toDF("dateS")
.select($"dateS".cast("date").as("date"))
.repartition(1)
.write
.option("parquet.enable.dictionary", true)
.mode("overwrite")
.parquet("/Users/maximgekk/tmp/parquet-date-dict")
// Exiting paste mode, now interpreting.
{code}
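For reference, the write above relies on the pre-built spark-shell environment (the {{spark}} session and {{spark.implicits._}} are provided automatically). Below is a minimal standalone sketch of the same reproduction, assuming a local SparkSession; the object name and output path are placeholders:
{code:scala}
import org.apache.spark.sql.SparkSession

object ParquetDateDictRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("parquet-date-dict-repro")
      // Rebase dates from the Proleptic Gregorian to the legacy hybrid calendar on write
      .config("spark.sql.legacy.parquet.rebaseDateTimeInWrite.enabled", "true")
      .getOrCreate()
    import spark.implicits._

    Seq.tabulate(8)(_ => "1001-01-01").toDF("dateS")
      .select($"dateS".cast("date").as("date"))
      .repartition(1)
      .write
      .option("parquet.enable.dictionary", true)
      .mode("overwrite")
      .parquet("/tmp/parquet-date-dict") // placeholder path

    spark.stop()
  }
}
{code}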
Read them back:
{code:scala}
scala> spark.read.parquet("/Users/maximgekk/tmp/parquet-date-dict").show(false)
+----------+
|date      |
+----------+
|1001-01-07|
|1001-01-07|
|1001-01-07|
|1001-01-07|
|1001-01-07|
|1001-01-07|
|1001-01-07|
|1001-01-07|
+----------+
{code}
*Expected values must be 1001-01-01.*
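The mismatch can also be checked programmatically in the same shell session. A small sketch that rebuilds the expected dates and diffs them against what is read back:
{code:scala}
// Expected dates, built the same way as the data that was written
val expected = Seq.tabulate(8)(_ => "1001-01-01").toDF("dateS")
  .select($"dateS".cast("date").as("date"))

// Dates actually read back from the Parquet files
val actual = spark.read.parquet("/Users/maximgekk/tmp/parquet-date-dict")

// Rows that come back but were never written; any output here demonstrates the bug
actual.except(expected).show(false)
{code}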
I verified that the date column is dictionary encoded by dumping the file with parquet-tools:
{code}
➜ parquet-date-dict java -jar ~/Downloads/parquet-tools-1.12.0.jar dump ./part-00000-84a77214-0c8c-45e9-ac41-5ca863b9dd94-c000.snappy.parquet
row group 0
--------------------------------------------------------------------------------
date: INT32 SNAPPY DO:0 FPO:4 SZ:74/70/0.95 VC:8 ENC:BIT_PACKED,RLE,P [more]...
date TV=8 RL=0 DL=1 DS: 1 DE:PLAIN_DICTIONARY
----------------------------------------------------------------------------
page 0: DLE:RLE RLE:BIT_PACKED VLE:PLAIN_DICTIONARY [more]... VC:8
INT32 date
--------------------------------------------------------------------------------
*** row group 1 of 1, values 1 to 8 ***
value 1: R:0 D:1 V:1001-01-07
value 2: R:0 D:1 V:1001-01-07
value 3: R:0 D:1 V:1001-01-07
value 4: R:0 D:1 V:1001-01-07
value 5: R:0 D:1 V:1001-01-07
value 6: R:0 D:1 V:1001-01-07
value 7: R:0 D:1 V:1001-01-07
value 8: R:0 D:1 V:1001-01-07
{code}
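The read-back values are shifted by exactly 6 days, which is the difference between the hybrid Julian and the proleptic Gregorian calendar around year 1001, i.e. what one would expect if the calendar rebase is skipped for dictionary-encoded values. A small sketch of that arithmetic with plain java.time:
{code:scala}
import java.time.LocalDate
import java.time.temporal.ChronoUnit

// Gap between the written date and the date that is read back
val shift = ChronoUnit.DAYS.between(LocalDate.of(1001, 1, 1), LocalDate.of(1001, 1, 7))
// shift == 6: in that era, a day labelled 1001-01-01 in the Julian calendar
// is labelled 1001-01-07 in the proleptic Gregorian calendar
println(shift)
{code}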
> Reading wrong dates from dictionary encoded columns in Parquet files
> --------------------------------------------------------------------
>
> Key: SPARK-31662
> URL: https://issues.apache.org/jira/browse/SPARK-31662
> Project: Spark
> Issue Type: Sub-task
> Components: SQL
> Affects Versions: 3.0.0, 3.1.0
> Reporter: Maxim Gekk
> Priority: Major
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org