Posted to jira@kafka.apache.org by "A. Sophie Blee-Goldman (Jira)" <ji...@apache.org> on 2021/05/01 01:22:00 UTC
[jira] [Created] (KAFKA-12739) 2. Commit any cleanly-processed records within a corrupted task
A. Sophie Blee-Goldman created KAFKA-12739:
----------------------------------------------
Summary: 2. Commit any cleanly-processed records within a corrupted task
Key: KAFKA-12739
URL: https://issues.apache.org/jira/browse/KAFKA-12739
Project: Kafka
Issue Type: Sub-task
Components: streams
Reporter: A. Sophie Blee-Goldman
Within a task, there will typically be a number of records that have been successfully processed through the subtopology but not yet committed. If the next record to be picked up hits an unexpected exception, we'll dirty-close the entire task and essentially throw away all the work we did on those previous records. Instead, we should be able to drop only the corrupted record and commit the offsets up to that point. For some exceptions, such as de/serialization or user-code errors, this can be straightforward, since the thread/task is otherwise in a healthy state. Other cases, such as an error in the Producer, will need to be tackled separately, since a Producer error cannot be isolated to a single task.
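To illustrate the idea (not the actual Streams internals), here is a minimal, self-contained Java sketch of the commit logic described above: records are processed in offset order, and when one hits an exception, we stop and commit only up to (exclusive of) the corrupted record. The Rec type, the "corrupt" payload marker, and processUntilCorrupt are hypothetical names for this sketch, not Kafka APIs.

```java
import java.util.List;

public class CommitCleanRecordsSketch {
    // Hypothetical stand-in for a consumed record: offset plus payload.
    record Rec(long offset, String payload) {}

    // Processes records in order; returns the offset to commit, i.e. one past
    // the last cleanly processed record. A "corrupt" payload simulates a
    // deserialization/user-code error partway through the batch.
    static long processUntilCorrupt(List<Rec> records) {
        long commitOffset = records.isEmpty() ? 0L : records.get(0).offset();
        for (Rec r : records) {
            try {
                if (r.payload().equals("corrupt")) {
                    throw new RuntimeException("error at offset " + r.offset());
                }
                // ... record flows through the subtopology here ...
                commitOffset = r.offset() + 1; // mark as cleanly processed
            } catch (RuntimeException e) {
                // Drop only the corrupted record; keep the work already done.
                break;
            }
        }
        return commitOffset;
    }

    public static void main(String[] args) {
        List<Rec> recs = List.of(
                new Rec(10, "a"), new Rec(11, "b"),
                new Rec(12, "corrupt"), new Rec(13, "c"));
        // Offsets 10 and 11 processed cleanly, so we commit offset 12 and
        // avoid reprocessing them after the corrupted record is handled.
        System.out.println("commit offset = " + processUntilCorrupt(recs)); // prints 12
    }
}
```

The key point the sketch captures is that the commit position advances only past records that completed the subtopology, so the dirty-close-and-rewind cost is limited to the corrupted record itself.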
--
This message was sent by Atlassian Jira
(v8.3.4#803005)