Posted to issues@flink.apache.org by "Yuan Mei (Jira)" <ji...@apache.org> on 2020/05/25 06:29:00 UTC

[jira] [Updated] (FLINK-17916) Provide API to separate KafkaShuffle's Producer and Consumer to different jobs

     [ https://issues.apache.org/jira/browse/FLINK-17916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yuan Mei updated FLINK-17916:
-----------------------------
    Summary: Provide API to separate KafkaShuffle's Producer and Consumer to different jobs  (was: Separate KafkaShuffle's Producer and Consumer to different jobs)

> Provide API to separate KafkaShuffle's Producer and Consumer to different jobs
> ------------------------------------------------------------------------------
>
>                 Key: FLINK-17916
>                 URL: https://issues.apache.org/jira/browse/FLINK-17916
>             Project: Flink
>          Issue Type: Improvement
>          Components: API / DataStream, Connectors / Kafka
>    Affects Versions: 1.11.0
>            Reporter: Yuan Mei
>            Priority: Major
>             Fix For: 1.11.0
>
>
> Follow-up to FLINK-15670
> *Separate sink (producer) and source (consumer) to different jobs*
>  * In the same job, a sink and a source are recovered independently under regional failover. However, they share the same checkpoint coordinator and, correspondingly, the same global checkpoint snapshot.
>  * This means that if the consumer fails, the producer cannot commit written data, because of the two-phase-commit setup: the producer needs a checkpoint-complete signal to finish the second phase, and a consumer failure delays that signal.
>  * The same coupling applies in the other direction when the producer fails. Running the two sides as separate jobs gives each its own checkpoint coordinator and removes this coupling; a sketch follows below.
>  
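> As an illustration only, here is a minimal sketch of what the split could look like, built on the FlinkKafkaShuffle writeKeyBy/readKeyBy entry points from FLINK-15670. The topic name, property values, and exact method signatures below are assumptions for illustration, not the proposed API:
>
> import java.util.Properties;
>
> import org.apache.flink.api.common.typeinfo.TypeHint;
> import org.apache.flink.api.common.typeinfo.TypeInformation;
> import org.apache.flink.api.java.tuple.Tuple2;
> import org.apache.flink.streaming.api.datastream.DataStream;
> import org.apache.flink.streaming.api.datastream.KeyedStream;
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
> import org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle;
>
> public final class KafkaShuffleSplitExample {
>
>     // Shared Kafka/shuffle settings; both jobs must agree on these.
>     // (Assumption: the reader side needs the producer parallelism and
>     // partition number to compute the key-group mapping.)
>     private static Properties shuffleProps() {
>         Properties props = new Properties();
>         props.setProperty("bootstrap.servers", "localhost:9092");
>         props.setProperty(FlinkKafkaShuffle.PRODUCER_PARALLELISM, "1");
>         props.setProperty(FlinkKafkaShuffle.PARTITION_NUMBER, "1");
>         return props;
>     }
>
>     // Job 1: keys the stream and writes it to the shuffle topic.
>     // Submitted on its own, so it gets its own checkpoint coordinator.
>     public static void runProducerJob() throws Exception {
>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>         env.enableCheckpointing(1000);
>         DataStream<Tuple2<Integer, Long>> records =
>                 env.fromElements(Tuple2.of(1, 10L), Tuple2.of(2, 20L));
>         FlinkKafkaShuffle.writeKeyBy(records, "shuffle-topic", shuffleProps(), t -> t.f0);
>         env.execute("kafka-shuffle-producer");
>     }
>
>     // Job 2: submitted separately; reads the topic back as a KeyedStream,
>     // with failover decoupled from the producer job.
>     public static void runConsumerJob() throws Exception {
>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>         env.enableCheckpointing(1000);
>         KeyedStream<Tuple2<Integer, Long>, Integer> keyed = FlinkKafkaShuffle.readKeyBy(
>                 "shuffle-topic", env,
>                 TypeInformation.of(new TypeHint<Tuple2<Integer, Long>>() {}),
>                 shuffleProps(), t -> t.f0);
>         keyed.sum(1).print();
>         env.execute("kafka-shuffle-consumer");
>     }
> }
>
> Because each job checkpoints independently in this set-up, a consumer restart no longer holds back the producer's transaction commits; the shuffle topic itself carries the data across the job boundary.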



--
This message was sent by Atlassian Jira
(v8.3.4#803005)