Posted to issues@flink.apache.org by "Seweryn Habdank-Wojewodzki (JIRA)" <ji...@apache.org> on 2019/07/12 09:02:00 UTC

[jira] [Closed] (FLINK-9308) The method enableCheckpointing with low values like 10 are forming DoS on Kafka Clusters

     [ https://issues.apache.org/jira/browse/FLINK-9308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Seweryn Habdank-Wojewodzki closed FLINK-9308.
---------------------------------------------
      Resolution: Won't Fix
    Release Note: It seems nothing will be done about this, so I am closing it. Perhaps someone will reopen it later :-).

> The method enableCheckpointing with low values like 10 are forming DoS on Kafka Clusters
> ----------------------------------------------------------------------------------------
>
>                 Key: FLINK-9308
>                 URL: https://issues.apache.org/jira/browse/FLINK-9308
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Kafka
>            Reporter: Seweryn Habdank-Wojewodzki
>            Priority: Major
>
> Hi,
> The docs about checkpoints in Flink contain an example like this:
> {code}
> StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
> // start a checkpoint every 1000 ms
> env.enableCheckpointing(1000);
> {code}
> Nice. There is one catch, though. enableCheckpointing(interval /* in ms */), when called with very low values such as 1 or 10, will overwhelm the Kafka brokers with continuous offset commits.
> A well-meaning developer who wants to protect the application from message duplication after a crash will lower this parameter as far as possible. That protects the application, but on the Kafka broker/server side it effectively causes a DoS.
> Could you have a look at enforcing a minimum value when the Kafka connectors are used?
> I am not sure whether 100 ms would be a high enough minimum, but 1000 ms as the floor would be nice.
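> For reference, a minimal job-side sketch of such a guard (assuming the 1000 ms floor suggested above; the class SafeCheckpointing, its helper method, and the constant are made up for illustration, while setMinPauseBetweenCheckpoints is an existing CheckpointConfig method):
> {code}
> import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
>
> public class SafeCheckpointing {
>
>     // Hypothetical floor, following the 1000 ms value suggested above.
>     private static final long MIN_CHECKPOINT_INTERVAL_MS = 1000L;
>
>     public static StreamExecutionEnvironment createEnvironment(long requestedIntervalMs) {
>         StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
>
>         // Clamp overly aggressive intervals so frequent checkpoints (and the
>         // resulting offset commits) cannot flood the Kafka brokers.
>         long intervalMs = Math.max(requestedIntervalMs, MIN_CHECKPOINT_INTERVAL_MS);
>         env.enableCheckpointing(intervalMs);
>
>         // Additionally enforce a pause between consecutive checkpoints, which
>         // limits checkpoint frequency independently of how long each one takes.
>         env.getCheckpointConfig().setMinPauseBetweenCheckpoints(MIN_CHECKPOINT_INTERVAL_MS);
>         return env;
>     }
> }
> {code}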



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)