Posted to dev@camel.apache.org by Claus Ibsen <cl...@gmail.com> on 2017/05/16 14:19:13 UTC

camel-kafka - Improve consumer to detect first exception and not continue

Hi

See ticket
https://issues.apache.org/jira/browse/CAMEL-11215

I pushed a potential fix/improvement to the branch
https://github.com/apache/camel/tree/CAMEL-11215

A new option, breakOnFirstError (yeah naming is hard), has been
introduced. When set to true, the consumer stops processing further
Kafka records if Camel failed to process the exchange.
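
For illustration, a route using the new option might look like the
following sketch (the topic name, broker address, and group id are
made-up placeholders):

```java
// A minimal sketch; topic, brokers, and groupId are placeholder values.
public class KafkaRoute extends org.apache.camel.builder.RouteBuilder {
    @Override
    public void configure() {
        // breakOnFirstError=true: stop consuming further records when
        // Camel fails to process an exchange.
        from("kafka:orders?brokers=localhost:9092&groupId=order-group&breakOnFirstError=true")
            .to("log:orders");
    }
}
```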

If an exception is thrown, the consumer detects this, breaks out of
its poll loop, and, where possible, syncs the Kafka offset. The
consumer then forces a re-connect, with a single poll timeout as the
delay in between (so it won't re-connect super fast).

By re-connecting we ensure the consumer re-starts from the correct
offset that was synced just before. It also allows the Kafka broker to
re-balance to another consumer, if one is available. This avoids
sticking to the same consumer/JVM in case there is some problem there.
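
The control flow described above can be sketched in plain Java
(this is a simplified illustration, not the actual camel-kafka source):

```java
import java.util.List;

// Simplified sketch of the breakOnFirstError control flow: process
// records in order, and on the first failure keep the offset of the
// last good record so a re-connected consumer resumes from the one
// that failed.
public class BreakOnFirstErrorSketch {

    // Returns the offset to re-start from after the first failure,
    // or records.size() if every record was processed.
    static int processUntilFirstError(List<String> records) {
        int committedOffset = 0;
        for (int i = 0; i < records.size(); i++) {
            try {
                process(records.get(i));
                committedOffset = i + 1; // record handled, advance the offset
            } catch (RuntimeException e) {
                // First error: stop looping and keep the synced offset, so
                // the re-connect (after one poll-timeout delay) retries
                // this record.
                break;
            }
        }
        return committedOffset;
    }

    // Stand-in for Camel processing the exchange; fails on "poison" records.
    static void process(String record) {
        if (record.startsWith("poison")) {
            throw new RuntimeException("failed to process " + record);
        }
    }

    public static void main(String[] args) {
        // Records a and b succeed, so the consumer re-starts at offset 2
        // and retries the poison record.
        System.out.println(processUntilFirstError(List.of("a", "b", "poison-1", "c")));
    }
}
```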

Feedback is welcome on this solution.

Also, we could consider turning this new option on by default, as it
is likely a better default than just letting the camel-kafka consumer
keep on processing the next message. But on the flip side, if that
message is poison and causes the same exception to happen over and
over again, then camel-kafka won't be able to move on to new messages.
What are your thoughts on this?

You can use Camel's error handler to try to mitigate this, but there
is no notion of redelivery or a redelivery counter on the Kafka broker
side AFAIR. So you can't detect that this message has already been
tried 5 times and is really poison, so that you could just move it to
a DLQ or ignore it.
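
As a sketch, redelivery and dead lettering can still be done on the
Camel side (the endpoints below are placeholders; note the counter
lives in the Camel consumer, not on the broker, so it does not survive
a re-connect):

```java
// A sketch of Camel-side redelivery; endpoints are placeholders, and
// the redelivery counter is in-memory in this consumer, not on the
// Kafka broker, so it resets on re-connect.
public class KafkaErrorHandlingRoute extends org.apache.camel.builder.RouteBuilder {
    @Override
    public void configure() {
        // After 5 failed attempts, move the message to a dead letter topic.
        errorHandler(deadLetterChannel("kafka:orders-dlq?brokers=localhost:9092")
            .maximumRedeliveries(5)
            .redeliveryDelay(1000));

        from("kafka:orders?brokers=localhost:9092&groupId=order-group")
            .to("bean:orderService");
    }
}
```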

I do think it's worth making camel-kafka better and easier to use.
After all, Kafka is a popular streaming platform.


And btw, today Bilgin posted a good blog post about short vs long
retries with Apache Camel, which is related to this camel-kafka
problem:
https://www.javacodegeeks.com/2017/05/short-retry-vs-long-retry-apache-camel.html

-- 
Claus Ibsen
-----------------
http://davsclaus.com @davsclaus
Camel in Action 2: https://www.manning.com/ibsen2

Re: camel-kafka - Improve consumer to detect first exception and not continue

Posted by Claus Ibsen <cl...@gmail.com>.
Hi

Okay, I did a bit more testing and made it possible to turn this
on/off at the component level, keeping the existing behavior as-is.
People can then turn it on globally, e.g. in a Spring Boot application
properties file.
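
For example, in a Spring Boot application.properties (the exact
property key is an assumption here and depends on your Camel version;
check the camel-kafka starter docs):

```properties
# Assumed Spring Boot property key for the component-level option.
camel.component.kafka.configuration.break-on-first-error=true
```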

This has been merged to master and the 2.19.x branch, and the
CAMEL-11215 branch has been deleted.




-- 
Claus Ibsen
-----------------
http://davsclaus.com @davsclaus
Camel in Action 2: https://www.manning.com/ibsen2