Posted to dev@kafka.apache.org by "Luis Araujo (Jira)" <ji...@apache.org> on 2020/07/31 11:30:00 UTC

[jira] [Created] (KAFKA-10334) Transactions not working properly

Luis Araujo created KAFKA-10334:
-----------------------------------

             Summary: Transactions not working properly
                 Key: KAFKA-10334
                 URL: https://issues.apache.org/jira/browse/KAFKA-10334
             Project: Kafka
          Issue Type: Bug
          Components: producer 
    Affects Versions: 2.3.0, 2.1.0
            Reporter: Luis Araujo


I'm using the transactions provided by the Kafka Producer API in a Scala project built with SBT. The dependency used in the project is:

"org.apache.kafka" % "kafka-clients" % "2.1.0"

I followed the documentation and expected a transaction to fail on commitTransaction() if a problem is raised while sending a message, as described in the documentation: [https://kafka.apache.org/10/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html]
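
For reference, a minimal Scala sketch of the transactional pattern that Javadoc describes (not the exact code from the project; the bootstrap address, transactional id, and topic name are placeholder values):

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
import org.apache.kafka.common.KafkaException
import org.apache.kafka.common.errors.{AuthorizationException, OutOfOrderSequenceException, ProducerFencedException}
import org.apache.kafka.common.serialization.StringSerializer

val props = new Properties()
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")      // placeholder
props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id")  // placeholder
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

val producer = new KafkaProducer[String, String](props)
producer.initTransactions()

try {
  producer.beginTransaction()
  producer.send(new ProducerRecord("my-topic", "key", "value"))
  // Per the Javadoc, commitTransaction() flushes any unsent records and
  // should fail if any of them hit an unrecoverable error.
  producer.commitTransaction()
} catch {
  case _: ProducerFencedException | _: OutOfOrderSequenceException | _: AuthorizationException =>
    // Fatal errors: the producer must be closed.
    producer.close()
  case _: KafkaException =>
    // Any other error: abort the transaction and optionally retry.
    producer.abortTransaction()
}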

Unfortunately, when testing this behaviour with a message larger than the maximum size accepted by the Kafka broker/cluster, transactions do not work properly.

I tested with a 3-broker Kafka cluster and a 1MB maximum message size (the default value):
- when the message is 1MB, the transaction is aborted and an exception is raised when commitTransaction() is called
- when the message is larger than 1MB, the transaction completes successfully without the message being written, and no exception is thrown

As an example, this means that when I produce nine 1KB messages and one 1.1MB message in the same transaction, the transaction completes but only the nine small messages are written to the Kafka cluster.
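
A minimal sketch that reproduces this scenario (reusing the producer from the sketch above; the topic name and exact sizes are illustrative):

producer.beginTransaction()

val smallValue = "x" * 1024           // ~1KB, well under the broker limit
val largeValue = "x" * (1100 * 1024)  // ~1.1MB, over the 1MB broker default

for (i <- 1 to 9)
  producer.send(new ProducerRecord("my-topic", s"key-$i", smallValue))
producer.send(new ProducerRecord("my-topic", "key-10", largeValue))
// Note: this sketch deliberately does not check the Futures returned by send().

// Expected: commitTransaction() throws because the 1.1MB record cannot
// be written. Observed: it completes normally and only the nine small
// records appear in the topic.
producer.commitTransaction()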

I tested this behaviour with Kafka versions 2.1.0 and 2.3.0 for both the Kafka cluster and the Kafka Producer API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)