Posted to jira@kafka.apache.org by "Michael Hornung (Jira)" <ji...@apache.org> on 2022/02/22 12:15:00 UTC
[jira] [Updated] (KAFKA-13683) Streams - Transactional Producer - Transaction with key xyz went wrong with exception: Timeout expired after 60000milliseconds while awaiting InitProducerId
[ https://issues.apache.org/jira/browse/KAFKA-13683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Michael Hornung updated KAFKA-13683:
------------------------------------
Description:
We have an urgent issue with our customer, who is using a Kafka transactional producer with a Kafka cluster of 3 or more nodes. Our customer is using Confluent Cloud on Azure.
We see this exception regularly: "Transaction with key XYZ went wrong with exception: Timeout expired after 60000milliseconds while awaiting InitProducerId" (see attachment).
We assume that the cause is a node which is down while the producer still sends messages to the "down" node.
We are using Kafka Streams 3.0.
*We expect that if a node is down, the Kafka producer is intelligent enough to stop sending messages to that node.*
*What is the solution to this issue? Is there any config we have to set?*
*This request is urgent because our customer will soon have production issues.*
*Additional information*
 * send record --> see attachment "AkkaHttpRestServer.scala" – line 100
 * producer config --> see attachment "AkkaHttpRestServer.scala" – line 126
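For context on the reported timeout: the "awaiting InitProducerId" wait happens while the transactional producer initializes (e.g. during initTransactions()), and the 60000 ms bound matches the default of the producer's max.block.ms setting. The fragment below is a minimal sketch of the client settings commonly involved; the values are illustrative assumptions, not values taken from the attached AkkaHttpRestServer.scala.

```properties
# Illustrative sketch only – not the reporter's actual configuration.
# max.block.ms bounds how long blocking producer calls such as
# initTransactions() may wait (default 60000 ms, matching the exception).
max.block.ms=60000
# transaction.timeout.ms bounds how long the transaction coordinator
# keeps an open transaction before aborting it (default 60000 ms).
transaction.timeout.ms=60000
# transactional.id must be set for a transactional producer at all.
transactional.id=my-transactional-id
```

When the client is Kafka Streams rather than a hand-built producer, the producer-level settings above would be passed through with the `producer.` prefix (e.g. `producer.max.block.ms`).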
was:
We have an urgent issue with our customer, who is using a Kafka transactional producer with a Kafka cluster of 3 or more nodes. We are using Confluent on Azure.
We see this exception regularly: "Transaction with key XYZ went wrong with exception: Timeout expired after 60000milliseconds while awaiting InitProducerId" (see attachment).
We assume that the cause is a node which is down while the producer still sends messages to the "down" node.
We are using Kafka Streams 3.0.
*We expect that if a node is down, the Kafka producer is intelligent enough to stop sending messages to that node.*
*What is the solution to this issue? Is there any config we have to set?*
*This request is urgent because our customer will soon have production issues.*
*Additional information*
 * send record --> see attachment "AkkaHttpRestServer.scala" – line 100
 * producer config --> see attachment "AkkaHttpRestServer.scala" – line 126
> Streams - Transactional Producer - Transaction with key xyz went wrong with exception: Timeout expired after 60000milliseconds while awaiting InitProducerId
> ------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: KAFKA-13683
> URL: https://issues.apache.org/jira/browse/KAFKA-13683
> Project: Kafka
> Issue Type: Bug
> Components: streams
> Affects Versions: 2.6.0, 2.7.0, 3.0.0
> Reporter: Michael Hornung
> Priority: Critical
> Fix For: 2.6.0, 2.7.0, 3.0.0
>
> Attachments: AkkaHttpRestServer.scala, timeoutException.png
>
>
--
This message was sent by Atlassian Jira
(v8.20.1#820001)