Posted to users@kafka.apache.org by Xiyuan Hu <xi...@gmail.com> on 2019/10/20 02:28:58 UTC

Keep getting expiring X record(s) for changelog topic:326554 ms has passed since batch creation

Hi,

I'm running Kafka Streams v2.3.0. During peak hours, I noticed some
nodes hit a TimeoutException, which marks the node status as DEAD.
Even though I implemented a custom ProductionExceptionHandler that
returns ProductionExceptionHandlerResponse.CONTINUE, it doesn't solve
the problem; it just keeps the nodes running.
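[Editor's note: for readers unfamiliar with this extension point, a minimal handler of the shape described above might look like the sketch below. The class name and logging are illustrative, not the poster's actual code.]

```java
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.streams.errors.ProductionExceptionHandler;

// Minimal sketch: swallow producer send failures so the StreamThread
// keeps running instead of transitioning to DEAD.
public class CustomProductionExceptionHandler implements ProductionExceptionHandler {

    @Override
    public ProductionExceptionHandlerResponse handle(
            final ProducerRecord<byte[], byte[]> record,
            final Exception exception) {
        // Log and continue; the failed record is dropped.
        System.err.println("Failed to produce to topic " + record.topic()
                + ": " + exception.getMessage());
        return ProductionExceptionHandlerResponse.CONTINUE;
    }

    @Override
    public void configure(final Map<String, ?> configs) {
        // No configuration needed for this sketch.
    }
}
```

Such a handler is registered via the `default.production.exception.handler` setting in StreamsConfig.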

Could anyone help me understand why it throws the above timeout
exception? I searched a lot, and people mention it's due to producer
slowness, but GC and CPU are both low.

I have set request.timeout.ms to 5 minutes, default.api.timeout.ms to
3 minutes, session.timeout.ms to 2 minutes, retries to 1000, and
retry.backoff.ms to 100 ms.
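[Editor's note: in StreamsConfig terms, the settings above would look roughly as follows; the use of the "producer."/"consumer." prefixes to target the embedded clients is an assumption about how they were applied.]

```java
import java.util.Properties;

// Sketch of the timeout settings described above, routed to the
// embedded producer and consumer via Streams config prefixes.
Properties props = new Properties();
props.put("producer.request.timeout.ms", "300000");     // 5 min
props.put("producer.retries", "1000");
props.put("producer.retry.backoff.ms", "100");
props.put("consumer.default.api.timeout.ms", "180000"); // 3 min
props.put("consumer.session.timeout.ms", "120000");     // 2 min
```

Note that since Kafka 2.1 (KIP-91), the "… ms has passed since batch creation" expiry in the subject line is governed by the producer's `delivery.timeout.ms`, not `request.timeout.ms`, which may explain why raising the latter alone did not help.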

Really appreciate any help!!

Thanks.

Re: Keep getting expiring X record(s) for changelog topic:326554 ms has passed since batch creation

Posted by "Matthias J. Sax" <ma...@confluent.io>.
I would recommend investigating broker and network health.

-Matthias
