Posted to issues@flink.apache.org by "Lakshmi Rao (JIRA)" <ji...@apache.org> on 2019/02/07 18:47:00 UTC

[jira] [Updated] (FLINK-11501) Add a ratelimiting feature to the FlinkKafkaConsumer

     [ https://issues.apache.org/jira/browse/FLINK-11501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lakshmi Rao updated FLINK-11501:
--------------------------------
    Attachment: RateLimiting-1.png

> Add a ratelimiting feature to the FlinkKafkaConsumer
> ----------------------------------------------------
>
>                 Key: FLINK-11501
>                 URL: https://issues.apache.org/jira/browse/FLINK-11501
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kafka Connector
>            Reporter: Lakshmi Rao
>            Assignee: Lakshmi Rao
>            Priority: Major
>         Attachments: RateLimiting-1.png
>
>
> There are instances when a Flink job that reads from Kafka can read at significantly high throughput (particularly while processing a backlog) and degrade the underlying Kafka cluster.
> While Kafka quotas are perhaps the best way to enforce such limits, there are cases where that setup is not available or not easily enabled. In such a scenario, rate-limiting in the FlinkKafkaConsumer is a useful feature. The approach essentially involves using Guava's [RateLimiter|https://google.github.io/guava/releases/19.0/api/docs/index.html?com/google/common/util/concurrent/RateLimiter.html] to rate-limit the bytes read from Kafka (in the [KafkaConsumerThread|https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/KafkaConsumerThread.java]).
> More discussion here: [https://lists.apache.org/thread.html/8140b759ba83f33a22d809887fd2d711f5ffe7069c888eb9b1142272@%3Cdev.flink.apache.org%3E] 
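>
> A minimal sketch of the idea, assuming a standalone poll loop around a plain KafkaConsumer (the class name, rate value, and loop are illustrative, not the actual KafkaConsumerThread change):
> {code:java}
> import java.util.Collections;
> import java.util.Properties;
>
> import org.apache.kafka.clients.consumer.ConsumerRecord;
> import org.apache.kafka.clients.consumer.ConsumerRecords;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
>
> import com.google.common.util.concurrent.RateLimiter;
>
> public class RateLimitedKafkaPoller {
>
>     // Illustrative cap: ~1 MB/s of record bytes, one permit per byte.
>     private final RateLimiter rateLimiter = RateLimiter.create(1024 * 1024);
>
>     private final KafkaConsumer<byte[], byte[]> consumer;
>
>     public RateLimitedKafkaPoller(Properties props, String topic) {
>         this.consumer = new KafkaConsumer<>(props);
>         this.consumer.subscribe(Collections.singletonList(topic));
>     }
>
>     public void runPollLoop() {
>         while (true) {
>             ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
>             int bytesRead = 0;
>             for (ConsumerRecord<byte[], byte[]> record : records) {
>                 // serialized*Size() returns -1 for null keys/values.
>                 bytesRead += Math.max(0, record.serializedKeySize())
>                         + Math.max(0, record.serializedValueSize());
>             }
>             if (bytesRead > 0) {
>                 // Blocks until enough byte-permits accumulate, throttling
>                 // the effective read rate to the configured bytes/sec.
>                 rateLimiter.acquire(bytesRead);
>             }
>             // ... hand records off to the fetcher / emit downstream ...
>         }
>     }
> }
> {code}
> Acquiring permits after the poll (rather than before) reflects that the byte count is only known once the records arrive; the limiter then delays subsequent polls until the consumed bytes are paid for.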



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)