Posted to dev@flink.apache.org by "Leonard Xu (Jira)" <ji...@apache.org> on 2020/02/15 12:52:00 UTC

[jira] [Created] (FLINK-16070) Blink planner can not extract correct unique key for UpsertStreamTableSink

Leonard Xu created FLINK-16070:
----------------------------------

             Summary: Blink planner can not extract correct unique key for UpsertStreamTableSink 
                 Key: FLINK-16070
                 URL: https://issues.apache.org/jira/browse/FLINK-16070
             Project: Flink
          Issue Type: Bug
          Components: Table SQL / Planner
    Affects Versions: 1.11.0
            Reporter: Leonard Xu
             Fix For: 1.11.0


I reproduced an Elasticsearch6UpsertTableSink issue that a user reported on the mailing list [1]: the Blink planner cannot extract the correct unique key for the following query, while the legacy planner works well.
{code:sql}
-- user query
INSERT INTO ES6_ZHANGLE_OUTPUT
SELECT
  aggId,
  pageId,
  ts_min as ts,
  count(case when eventId = 'exposure' then 1 else null end) as expoCnt,
  count(case when eventId = 'click' then 1 else null end) as clkCnt
FROM (
  SELECT
    'ZL_001' as aggId,
    pageId,
    eventId,
    recvTime,
    ts2Date(recvTime) as ts_min
  from kafka_zl_etrack_event_stream
  where eventId in ('exposure', 'click')
) as t1
group by aggId, pageId, ts_min
{code}
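For context, a self-contained job along these lines is enough to hit the problem. The following is only a rough sketch against the Flink 1.10 APIs: the schemas, connector properties, and the placeholder ts2Date UDF are illustrative assumptions and are not copied from the actual reproduction job.
{code:java}
// Rough reproduction sketch against Flink 1.10. Schemas, connector properties and the
// ts2Date placeholder UDF are illustrative assumptions, not copied from the real job.
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.functions.ScalarFunction;

public class Kafka2UpsertEsRepro {

    // Placeholder standing in for the user's ts2Date UDF: truncates an epoch-millis
    // timestamp to a minute-level string.
    public static class Ts2Date extends ScalarFunction {
        public String eval(long recvTime) {
            return new java.text.SimpleDateFormat("yyyy-MM-dd HH:mm")
                    .format(new java.util.Date(recvTime));
        }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()   // the legacy planner (useOldPlanner()) handles the same job
                .inStreamingMode()
                .build();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);
        tEnv.registerFunction("ts2Date", new Ts2Date());

        // Kafka source table (connector properties are illustrative).
        tEnv.sqlUpdate("CREATE TABLE kafka_zl_etrack_event_stream (\n"
                + "  pageId STRING,\n"
                + "  eventId STRING,\n"
                + "  recvTime BIGINT\n"
                + ") WITH (\n"
                + "  'connector.type' = 'kafka',\n"
                + "  'connector.version' = 'universal',\n"
                + "  'connector.topic' = 'etrack_event',\n"
                + "  'connector.properties.bootstrap.servers' = 'localhost:9092',\n"
                + "  'connector.properties.group.id' = 'repro',\n"
                + "  'format.type' = 'json',\n"
                + "  'format.derive-schema' = 'true'\n"
                + ")");

        // Elasticsearch 6 upsert sink table (connector properties are illustrative).
        tEnv.sqlUpdate("CREATE TABLE ES6_ZHANGLE_OUTPUT (\n"
                + "  aggId STRING,\n"
                + "  pageId STRING,\n"
                + "  ts STRING,\n"
                + "  expoCnt BIGINT,\n"
                + "  clkCnt BIGINT\n"
                + ") WITH (\n"
                + "  'connector.type' = 'elasticsearch',\n"
                + "  'connector.version' = '6',\n"
                + "  'connector.hosts' = 'http://localhost:9200',\n"
                + "  'connector.index' = 'zhangle_output',\n"
                + "  'connector.document-type' = '_doc',\n"
                + "  'update-mode' = 'upsert',\n"
                + "  'format.type' = 'json',\n"
                + "  'format.derive-schema' = 'true'\n"
                + ")");

        // The grouping query from the description; with the Blink planner the upsert sink
        // does not get the expected key {aggId, pageId, ts}.
        tEnv.sqlUpdate("INSERT INTO ES6_ZHANGLE_OUTPUT\n"
                + "SELECT aggId, pageId, ts_min AS ts,\n"
                + "  COUNT(CASE WHEN eventId = 'exposure' THEN 1 ELSE NULL END) AS expoCnt,\n"
                + "  COUNT(CASE WHEN eventId = 'click' THEN 1 ELSE NULL END) AS clkCnt\n"
                + "FROM (\n"
                + "  SELECT 'ZL_001' AS aggId, pageId, eventId, recvTime, ts2Date(recvTime) AS ts_min\n"
                + "  FROM kafka_zl_etrack_event_stream\n"
                + "  WHERE eventId IN ('exposure', 'click')\n"
                + ") t1\n"
                + "GROUP BY aggId, pageId, ts_min");

        tEnv.execute("kafka2es-upsert-repro");
    }
}
{code}
With the legacy planner selected instead, the same statements are expected to run, matching the behaviour described above.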
I found that the Blink planner cannot extract the unique key in *FlinkRelMetadataQuery.getUniqueKeys(relNode)*, while the legacy planner works well in *org.apache.flink.table.plan.util.UpdatingPlanChecker.getUniqueKeyFields(...)*. A simple ETL job that reproduces this issue can be found at [2].
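To check where the derivation goes wrong, the unique keys can be inspected directly on the optimized RelNode. A minimal sketch, assuming a relNode obtained at a breakpoint inside the planner (the helper class below is hypothetical and only meant for debugging):
{code:java}
// Hypothetical debugging helper, not part of Flink: prints whatever unique keys the
// metadata handlers derive for a given (already optimized) RelNode.
import java.util.Set;

import org.apache.calcite.rel.RelNode;
import org.apache.calcite.rel.metadata.RelMetadataQuery;
import org.apache.calcite.util.ImmutableBitSet;

public final class UniqueKeyDebugger {

    public static void printUniqueKeys(RelNode relNode) {
        // The Blink planner's FlinkRelMetadataQuery extends Calcite's RelMetadataQuery,
        // so the generic Calcite call is enough for a quick check.
        RelMetadataQuery mq = relNode.getCluster().getMetadataQuery();
        // For the query above the expected result is one key made of the GROUP BY
        // columns {aggId, pageId, ts_min}; the symptom here is that nothing (or null)
        // comes back, so the UpsertStreamTableSink cannot be configured with key fields.
        Set<ImmutableBitSet> uniqueKeys = mq.getUniqueKeys(relNode);
        System.out.println("derived unique keys: " + uniqueKeys);
    }
}
{code}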

[1][http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-10-es-sink-exception-td32773.html]

[2][https://github.com/leonardBang/flink-sql-etl/blob/master/etl-job/src/main/java/kafka2es/Kafka2UpsertEs.java]

--
This message was sent by Atlassian Jira
(v8.3.4#803005)