Posted to jira@kafka.apache.org by "Eno Thereska (JIRA)" <ji...@apache.org> on 2017/07/18 13:39:00 UTC
[jira] [Updated] (KAFKA-5158) Options for handling exceptions during processing
[ https://issues.apache.org/jira/browse/KAFKA-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Eno Thereska updated KAFKA-5158:
--------------------------------
Issue Type: New Feature (was: Sub-task)
Parent: (was: KAFKA-5156)
> Options for handling exceptions during processing
> -------------------------------------------------
>
> Key: KAFKA-5158
> URL: https://issues.apache.org/jira/browse/KAFKA-5158
> Project: Kafka
> Issue Type: New Feature
> Components: streams
> Reporter: Eno Thereska
> Fix For: 0.11.1.0
>
>
> Imagine the app-level processing of a (non-corrupted) record fails (e.g., the user attempted an RPC to an external system, and that call failed). How can you process such failed records in a scalable way? For example, imagine you need to implement a retry policy such as "retry with exponential backoff". Here you face two problems: 1. you can't really pause processing of a single record, because doing so pauses processing of the full stream (a bottleneck!), and 2. there is no straightforward way to "sort" failed records based on their "next retry time".
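
One common workaround for problem 2, sketched below, is to keep failed records out of the main stream entirely and order them in a priority queue keyed by their next retry time, with the backoff growing exponentially per attempt. This is only an illustrative sketch, not part of the Kafka Streams API; all class and method names here (RetryScheduler, recordFailure, pollDue) are hypothetical.

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Hypothetical sketch: failed records are re-queued with an exponentially
// increasing "next retry time", and a priority queue keeps them sorted so
// the earliest-due record is always retried first, without pausing the
// main stream. Names are illustrative, not a Kafka Streams API.
public class RetryScheduler {

    static final long BASE_BACKOFF_MS = 100;
    static final int MAX_ATTEMPTS = 5;

    static class FailedRecord {
        final String key;
        final int attempt;        // how many times processing has failed so far
        final long nextRetryAtMs; // wall-clock time when the next retry is due

        FailedRecord(String key, int attempt, long nowMs) {
            this.key = key;
            this.attempt = attempt;
            // exponential backoff: base * 2^(attempt - 1)
            this.nextRetryAtMs = nowMs + BASE_BACKOFF_MS * (1L << (attempt - 1));
        }
    }

    // Records ordered by their next retry time, earliest first.
    private final PriorityQueue<FailedRecord> queue =
            new PriorityQueue<>(Comparator.comparingLong(r -> r.nextRetryAtMs));

    public void recordFailure(String key, int attempt, long nowMs) {
        if (attempt <= MAX_ATTEMPTS) {
            queue.add(new FailedRecord(key, attempt, nowMs));
        }
        // else: give up, e.g. forward the record to a dead-letter topic
    }

    // Returns the next record whose retry is due, or null if none is due yet.
    public FailedRecord pollDue(long nowMs) {
        FailedRecord head = queue.peek();
        if (head != null && head.nextRetryAtMs <= nowMs) {
            return queue.poll();
        }
        return null;
    }
}
```

In a real deployment the same ordering is often achieved with dedicated retry topics (one per backoff tier), so that the sorted state survives restarts, which the in-memory queue above does not.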
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)