Posted to issues@flink.apache.org by "Dawid Wysakowicz (Jira)" <ji...@apache.org> on 2020/01/16 09:12:00 UTC
[jira] [Comment Edited] (FLINK-15602) Blink planner does not respect the precision when casting timestamp to varchar
[ https://issues.apache.org/jira/browse/FLINK-15602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17016723#comment-17016723 ]
Dawid Wysakowicz edited comment on FLINK-15602 at 1/16/20 9:11 AM:
-------------------------------------------------------------------
[~docete] Thank you for checking it.
I additionally checked the INTERVAL and DECIMAL types: SQL Server pads both, Oracle does not pad DECIMAL, and PostgreSQL pads DECIMAL but does not pad INTERVAL.
Personally, I am in favor of padding. [~twalthr] also made a nice argument that padded values look better when multiple values are printed next to each other:
{code}
// Padded version
2014-07-02 06:14:00.010000000
2014-07-02 06:14:00.001000000
2014-07-02 06:14:00.001000001
// Not padded
2014-07-02 06:14:00.01
2014-07-02 06:14:00.001
2014-07-02 06:14:00.001000001
{code}
MySQL and Hive/Spark are not known to follow the SQL standard too closely, so I would not take them as good examples.
If we decide not to pad the results, we should fix the {{sql client kafka tests}}.
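The padded vs. trimmed behavior discussed above can be sketched as follows. This is a minimal, hypothetical Java illustration (the {{render}} helper is not Flink code): the nanosecond field is padded to nine digits and truncated to the requested precision, and in the unpadded variant trailing zeros are stripped.

```java
import java.time.LocalDateTime;

public class PaddingDemo {

    // Hypothetical helper, not Flink code: renders a timestamp with the given
    // fractional-second precision, either zero-padded or with trailing zeros trimmed.
    static String render(LocalDateTime ts, int precision, boolean pad) {
        String base = String.format("%04d-%02d-%02d %02d:%02d:%02d",
                ts.getYear(), ts.getMonthValue(), ts.getDayOfMonth(),
                ts.getHour(), ts.getMinute(), ts.getSecond());
        if (precision <= 0) {
            return base;
        }
        // Nine-digit nanosecond field, truncated to the requested precision.
        String frac = String.format("%09d", ts.getNano()).substring(0, precision);
        if (!pad) {
            frac = frac.replaceAll("0+$", ""); // drop trailing zeros
            if (frac.isEmpty()) {
                return base;
            }
        }
        return base + "." + frac;
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2014, 7, 2, 6, 14, 0, 10_000_000);
        System.out.println(render(ts, 9, true));  // padded:  2014-07-02 06:14:00.010000000
        System.out.println(render(ts, 9, false)); // trimmed: 2014-07-02 06:14:00.01
    }
}
```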
was (Author: dawidwys):
[~docete] Thank you for checking it.
I additionally checked the INTERVAL and DECIMAL types: SQL Server pads both, and Oracle does not pad DECIMAL.
Personally, I am in favor of padding. [~twalthr] also made a nice argument that padded values look better when multiple values are printed next to each other:
{code}
// Padded version
2014-07-02 06:14:00.010000000
2014-07-02 06:14:00.001000000
2014-07-02 06:14:00.001000001
// Not padded
2014-07-02 06:14:00.01
2014-07-02 06:14:00.001
2014-07-02 06:14:00.001000001
{code}
MySQL and Hive/Spark are not known to follow the SQL standard too closely, so I would not take them as good examples.
If we decide not to pad the results, we should fix the {{sql client kafka tests}}.
> Blink planner does not respect the precision when casting timestamp to varchar
> ------------------------------------------------------------------------------
>
> Key: FLINK-15602
> URL: https://issues.apache.org/jira/browse/FLINK-15602
> Project: Flink
> Issue Type: Bug
> Components: Table SQL / Planner
> Affects Versions: 1.10.0
> Reporter: Dawid Wysakowicz
> Priority: Blocker
> Fix For: 1.10.0
>
>
> According to SQL 2011 Part 2 Section 6.13 General Rules 11) d)
> {quote}
> If SD is a datetime data type or an interval data type then let Y be the shortest character string that
> conforms to the definition of <literal> in Subclause 5.3, “<literal>”, and such that the interpreted value
> of Y is SV and the interpreted precision of Y is the precision of SD.
> {quote}
> That means:
> {code}
> select cast(cast(TO_TIMESTAMP('2014-07-02 06:14:00', 'YYYY-MM-DD HH24:mm:SS') as TIMESTAMP(0)) as VARCHAR(256)) from ...;
> // should produce
> // 2014-07-02 06:14:00
> select cast(cast(TO_TIMESTAMP('2014-07-02 06:14:00', 'YYYY-MM-DD HH24:mm:SS') as TIMESTAMP(3)) as VARCHAR(256)) from ...;
> // should produce
> // 2014-07-02 06:14:00.000
> select cast(cast(TO_TIMESTAMP('2014-07-02 06:14:00', 'YYYY-MM-DD HH24:mm:SS') as TIMESTAMP(9)) as VARCHAR(256)) from ...;
> // should produce
> // 2014-07-02 06:14:00.000000000
> {code}
> One possible solution would be to propagate the precision in {{org.apache.flink.table.planner.codegen.calls.ScalarOperatorGens#localTimeToStringCode}}. If I am not mistaken, this problem was introduced in [FLINK-14599].
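As a sketch of the expected behavior described above (a hypothetical Java illustration, not the actual {{ScalarOperatorGens}} codegen), a precision-aware cast would always emit exactly {{precision}} fractional digits, zero-padding the nanosecond field:

```java
import java.time.LocalDateTime;

public class TimestampToVarchar {

    // Hypothetical sketch, not Flink's actual codegen: emits exactly
    // `precision` fractional digits, as the standard's rule requires.
    static String castToVarchar(LocalDateTime ts, int precision) {
        StringBuilder sb = new StringBuilder(String.format(
                "%04d-%02d-%02d %02d:%02d:%02d",
                ts.getYear(), ts.getMonthValue(), ts.getDayOfMonth(),
                ts.getHour(), ts.getMinute(), ts.getSecond()));
        if (precision > 0) {
            // Pad the nanosecond field to nine digits, keep `precision` of them.
            sb.append('.').append(String.format("%09d", ts.getNano()), 0, precision);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2014, 7, 2, 6, 14, 0);
        System.out.println(castToVarchar(ts, 0)); // 2014-07-02 06:14:00
        System.out.println(castToVarchar(ts, 3)); // 2014-07-02 06:14:00.000
        System.out.println(castToVarchar(ts, 9)); // 2014-07-02 06:14:00.000000000
    }
}
```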
--
This message was sent by Atlassian Jira
(v8.3.4#803005)