Posted to users@kafka.apache.org by "1095193290@qq.com" <10...@qq.com> on 2019/06/03 03:22:21 UTC

How to prevent data loss in "read-process-write" application?

Hi
   I have an application that consumes from Kafka, processes, and produces back to Kafka. To prevent data loss, I need to commit consumer offsets only after a batch of messages has been written to Kafka successfully. I have investigated the Transactions feature, whose atomic writes to multiple partitions could solve my problem. Is there any other recommended solution besides enabling transactions (I don't need exactly-once processing)?
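For context, the non-transactional pattern being asked about is usually sketched like this: disable auto-commit on the consumer, produce the batch, wait for every send to be acknowledged, and only then call commitSync(). This is at-least-once (a crash between the sends and the commit replays the batch), which matches the "don't need exactly once" requirement. The broker address, topic names, and group id below are placeholders, not taken from the original message.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.Future;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class ReadProcessWrite {
    public static void main(String[] args) throws Exception {
        Properties cprops = new Properties();
        cprops.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        cprops.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        // Manual offset management: commit only after the output is durable.
        cprops.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        cprops.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        cprops.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pprops = new Properties();
        pprops.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Require acknowledgement from all in-sync replicas before a send succeeds.
        pprops.put(ProducerConfig.ACKS_CONFIG, "all");
        pprops.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        pprops.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cprops);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pprops)) {
            consumer.subscribe(List.of("input-topic"));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) continue;

                List<Future<RecordMetadata>> results = new ArrayList<>();
                for (ConsumerRecord<String, String> rec : records) {
                    String processed = rec.value(); // processing step goes here
                    results.add(producer.send(
                            new ProducerRecord<>("output-topic", rec.key(), processed)));
                }
                try {
                    // Block until every send in the batch is acknowledged;
                    // get() rethrows any per-record send failure.
                    for (Future<RecordMetadata> f : results) f.get();
                    // Only now commit the consumed offsets.
                    consumer.commitSync();
                } catch (Exception e) {
                    // Offsets were not committed: the batch will be re-polled
                    // and re-processed after restart/rebalance (at-least-once).
                }
            }
        }
    }
}
```

The transactions feature mentioned in the question additionally ties the offset commit (via producer.sendOffsetsToTransaction) and the output writes into one atomic unit, which removes the duplicate-on-retry window above at the cost of transactional overhead.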



1095193290@qq.com