Posted to jira@kafka.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2017/11/06 18:53:00 UTC
[jira] [Commented] (KAFKA-6120) RecordCollectorImpl should not retry sending
[ https://issues.apache.org/jira/browse/KAFKA-6120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16240697#comment-16240697 ]
ASF GitHub Bot commented on KAFKA-6120:
---------------------------------------
Github user asfgit closed the pull request at:
https://github.com/apache/kafka/pull/4148
> RecordCollectorImpl should not retry sending
> --------------------------------------------
>
> Key: KAFKA-6120
> URL: https://issues.apache.org/jira/browse/KAFKA-6120
> Project: Kafka
> Issue Type: Bug
> Components: streams
> Affects Versions: 1.0.0
> Reporter: Matthias J. Sax
> Assignee: Matthias J. Sax
> Labels: streams-exception-handling, streams-resilience
> Fix For: 1.1.0
>
>
> Currently, RecordCollectorImpl implements an internal retry loop for sending data with a hard-coded retry maximum. This raises the problem that data might be sent out-of-order while, at the same time, overall resilience does not improve much, as the number of retries is hardcoded.
> Thus, we should remove this loop and rely only on the producer configuration parameter {{retries}} that users can configure accordingly.
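[Editor's note: a minimal sketch of what the proposed approach implies for users. Kafka Streams forwards any configuration key carrying the "producer." prefix to its internally created producer, so once the internal retry loop is removed, retry behavior is governed by the producer-level {{retries}} setting. The application id and broker address below are placeholder values.]

```java
import java.util.Properties;

public class RetriesConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("application.id", "my-streams-app");    // placeholder app id
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
        // The "producer." prefix routes this setting to the internal producer;
        // with the internal loop removed, this is the only retry knob that applies.
        props.setProperty("producer.retries", "10");
        System.out.println(props.getProperty("producer.retries"));
    }
}
```

These Properties would then be passed to the KafkaStreams constructor as usual; the point of the change is that retries become a single, user-controlled producer setting rather than a hidden, hard-coded loop.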
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)