Posted to users@kafka.apache.org by Tomas Alabes <to...@oracle.com> on 2021/01/31 18:03:48 UTC

ReplicaManager fetch fails on leader due to long/integer overflow

Hi, I’m having what seems to be this issue: https://issues.apache.org/jira/browse/KAFKA-7656

It happens when a transactional producer sends an event for the first time.
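For context, the producer side is just the standard transactional flow; a minimal sketch of it is below (the topic name, transactional id and bootstrap address are made up for illustration, our actual clients are configured through Spring-Kafka):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FirstTransactionalSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");          // assumed address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "example-tx-id");           // assumed id

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();   // registers the transactional id with the coordinator
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("example-topic", "key", "value"));     // assumed topic
            producer.commitTransaction();  // roughly when the broker error below appears
        }
    }
}
```

The broker then logs: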

```
ERROR [ReplicaManager broker=1] Error processing fetch with max size -2147483648 from consumer on partition __consumer_offsets-19: (fetchOffset=4, logStartOffset=-1, maxBytes=-2147483648, currentLeaderEpoch=Optional.empty) (kafka.server.ReplicaManager)
java.lang.IllegalArgumentException: Invalid max size -2147483648 for log read from segment FileRecords(size=5150, file=/var/lib/kafka/__consumer_offsets-19/00000000000000000000.log, start=0, end=2147483647)
```
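For what it’s worth, -2147483648 is exactly Integer.MIN_VALUE, which is why this looks like the integer overflow from the JIRA ticket rather than a misconfigured fetch size. A trivial illustration (plain Java, not the broker’s actual code path):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // -2147483648 is the smallest value an int can hold:
        System.out.println(Integer.MIN_VALUE);             // -2147483648

        // Any int arithmetic that wraps past Integer.MAX_VALUE lands there:
        System.out.println(Integer.MAX_VALUE + 1);          // -2147483648

        // Math.abs cannot recover it, because +2147483648 does not fit in an int:
        System.out.println(Math.abs(Integer.MIN_VALUE));    // still -2147483648
    }
}
```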

I checked the logs at TRACE level, but they don’t show anything new.
I tried both the default “max.partition.fetch.bytes” and setting it manually to double the default. We use stock consumers/producers with Spring-Kafka, nothing custom.
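Concretely, what we tried looks roughly like this (a sketch; the group id and bootstrap address are placeholders, and in practice these properties come in through Spring-Kafka’s consumer configuration):

```java
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;

public class FetchSizeOverride {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");          // placeholder

        // Default max.partition.fetch.bytes is 1048576 (1 MiB); doubling it made no difference:
        props.put(ConsumerConfig.MAX_PARTITION_FETCH_BYTES_CONFIG, 2_097_152);

        System.out.println(props);
    }
}
```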

I’m using a vanilla Kafka 2.6.1 broker (the Scala 2.13 build, kafka_2.13-2.6.1) and kafka-clients 2.6.0.
The broker settings are here: https://gist.github.com/tomasAlabes/0bd3e03546e399db6c6e6b8d4a78686b

I can’t reproduce this with Bitnami’s or Confluent Platform’s Kafka. Maybe it’s something in our broker properties, but I can’t figure out what. All our producers/consumers are transactional, and we run a 3-broker / 1-ZooKeeper topology (2 in-sync replicas for transactions).

Even when I delete all services (hence stopping all consuming/producing), I keep seeing the exceptions.

Do you know what could be causing this?

Thank you,
Tomas
