Posted to issues@flink.apache.org by "simenliuxing (Jira)" <ji...@apache.org> on 2022/07/21 03:31:00 UTC

[jira] [Commented] (FLINK-28619) flink sql window aggregation using early fire will produce empty data

    [ https://issues.apache.org/jira/browse/FLINK-28619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17569256#comment-17569256 ] 

simenliuxing commented on FLINK-28619:
--------------------------------------

[~Zsigner] [~Leo Zhou] [~jark] could you take a look at this?

> flink sql window aggregation using early fire will produce empty data
> ---------------------------------------------------------------------
>
>                 Key: FLINK-28619
>                 URL: https://issues.apache.org/jira/browse/FLINK-28619
>             Project: Flink
>          Issue Type: Bug
>          Components: Table SQL / Planner, Table SQL / Runtime
>    Affects Versions: 1.15.0
>            Reporter: simenliuxing
>            Priority: Major
>             Fix For: 1.15.2
>
>
> The SQL is as follows:
>  
> {code:java}
> set table.exec.emit.early-fire.enabled=true;
> set table.exec.emit.early-fire.delay=1s;
> set table.exec.resource.default-parallelism = 1;
> CREATE TABLE source_table
> (
>     id   int,
>     name VARCHAR,
>     age  int,
>     proc_time AS PROCTIME()
> ) WITH (
>       'connector' = 'datagen'
>       ,'rows-per-second' = '3'
>       ,'number-of-rows' = '1000'
>       ,'fields.id.min' = '1'
>       ,'fields.id.max' = '1'
>       ,'fields.age.min' = '1'
>       ,'fields.age.max' = '150'
>       ,'fields.name.length' = '3'
>       );
> CREATE TABLE sink_table
> (
>     id     int,
>     name   VARCHAR,
>     ageAgg int,
>     PRIMARY KEY (id) NOT ENFORCED
> ) WITH (
>       'connector' = 'upsert-kafka'
>       ,'properties.bootstrap.servers' = 'localhost:9092'
>       ,'topic' = 'aaa'
>       ,'key.format' = 'json'
>       ,'value.format' = 'json'
>       ,'value.fields-include' = 'ALL'
>       );
> INSERT INTO sink_table
> SELECT id,
>        LAST_VALUE(name) AS name,
>        SUM(age)         AS ageAgg
> FROM source_table
> GROUP BY TUMBLE(proc_time, INTERVAL '1' DAY), id;
> {code}
> The result received in Kafka:
>  
> {code:java}
> {"id":1,"name":"efe","ageAgg":455}
> null
> {"id":1,"name":"96a","ageAgg":701}
> null
> {"id":1,"name":"d71","ageAgg":1289}
> null
> {"id":1,"name":"89c","ageAgg":1515}
> {code}
>
> Are the extra null records expected behavior?
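For context on the question above: the interleaved nulls are consistent with Kafka tombstone records. With early fire enabled, the window aggregate emits a retracting changelog (UPDATE_BEFORE/UPDATE_AFTER pairs for each incremental result), and the upsert-kafka sink encodes UPDATE_BEFORE and DELETE rows as records whose value is null. A minimal Python sketch of that encoding (`RowKind` and `encode_upsert` are illustrative names for this sketch, not Flink's API):

```python
# Sketch (not Flink code): why an upsert sink interleaves nulls.
# Assumption, per Flink's upsert-kafka connector behavior: UPDATE_BEFORE
# and DELETE changelog rows are written as Kafka tombstones (value = null).
from enum import Enum


class RowKind(Enum):
    INSERT = "+I"
    UPDATE_BEFORE = "-U"
    UPDATE_AFTER = "+U"
    DELETE = "-D"


def encode_upsert(changelog):
    """Map changelog rows to (key, value) Kafka records.

    UPDATE_BEFORE/DELETE keep the key but carry a None value
    (a tombstone), which a plain JSON consumer prints as 'null'.
    """
    records = []
    for kind, row in changelog:
        key = {"id": row["id"]}
        if kind in (RowKind.UPDATE_BEFORE, RowKind.DELETE):
            records.append((key, None))   # tombstone record
        else:
            records.append((key, row))    # full upsert value
    return records


# Early fire emits a -U/+U pair each time the running aggregate updates:
changelog = [
    (RowKind.INSERT,        {"id": 1, "name": "efe", "ageAgg": 455}),
    (RowKind.UPDATE_BEFORE, {"id": 1, "name": "efe", "ageAgg": 455}),
    (RowKind.UPDATE_AFTER,  {"id": 1, "name": "96a", "ageAgg": 701}),
]
values = [value for _, value in encode_upsert(changelog)]
print(values)  # middle entry is None, i.e. the 'null' lines in the topic
```

A consumer reading the topic raw therefore sees a null value between consecutive aggregates; a consumer that understands upsert semantics instead treats it as a deletion of the previous value for that key.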



--
This message was sent by Atlassian Jira
(v8.20.10#820010)