Posted to issues@spark.apache.org by "Cody Koeninger (JIRA)" <ji...@apache.org> on 2016/10/12 23:28:20 UTC
[jira] [Commented] (SPARK-11698) Add option to ignore kafka messages that are out of limit rate
[ https://issues.apache.org/jira/browse/SPARK-11698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15570208#comment-15570208 ]
Cody Koeninger commented on SPARK-11698:
----------------------------------------
Would a custom ConsumerStrategy for the new consumer added in SPARK-12177 allow you to address this issue? You could supply a Consumer implementation that overrides poll.
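To make the suggestion concrete, here is a minimal sketch of what such a custom strategy might look like, assuming the ConsumerStrategy trait from the spark-streaming-kafka-0-10 module (executorKafkaParams / onStart) and the Kafka 0.10 KafkaConsumer API. The class name LatestOnlyStrategy and the seek-before-poll behavior are illustrative assumptions, not code from the ticket:

```scala
import java.{ lang => jl, util => ju }

import org.apache.kafka.clients.consumer.{ Consumer, ConsumerRecords, KafkaConsumer }
import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.ConsumerStrategy

// Hypothetical sketch: a ConsumerStrategy whose consumer seeks to the
// latest offsets before each poll, so messages held back by the rate
// limit are skipped rather than replayed in the next batch.
class LatestOnlyStrategy[K, V](
    kafkaParams: ju.Map[String, Object],
    topics: ju.Collection[String]) extends ConsumerStrategy[K, V] {

  override def executorKafkaParams: ju.Map[String, Object] = kafkaParams

  override def onStart(
      currentOffsets: ju.Map[TopicPartition, jl.Long]): Consumer[K, V] = {
    val consumer = new KafkaConsumer[K, V](kafkaParams) {
      override def poll(timeout: Long): ConsumerRecords[K, V] = {
        // Passing an empty collection seeks to the end of every
        // currently assigned partition (Kafka 0.10 semantics),
        // discarding whatever the previous batch left unconsumed.
        seekToEnd(ju.Collections.emptyList[TopicPartition])
        super.poll(timeout)
      }
    }
    consumer.subscribe(topics)
    consumer
  }
}
```

This would then be passed to KafkaUtils.createDirectStream in place of ConsumerStrategies.Subscribe. Whether seeking inside poll interacts cleanly with the driver's offset bookkeeping would need to be verified against the actual DirectKafkaInputDStream implementation.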
> Add option to ignore kafka messages that are out of limit rate
> --------------------------------------------------------------
>
> Key: SPARK-11698
> URL: https://issues.apache.org/jira/browse/SPARK-11698
> Project: Spark
> Issue Type: Improvement
> Components: Streaming
> Reporter: Liang-Chi Hsieh
>
> With spark.streaming.kafka.maxRatePerPartition, we can control the max rate limit. However, we cannot ignore the messages that exceed the limit; they will be consumed in the next iteration. We have a use case where we need to ignore these messages and instead process the latest messages in the next iteration.
> In other words, we simply want to consume part of the messages in each iteration and ignore the remaining messages that were not consumed.
> We add an option for this purpose.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org