Posted to commits@kafka.apache.org by gu...@apache.org on 2020/03/18 21:23:28 UTC
[kafka] branch trunk updated: KAFKA-5604: Remove the redundant TODO marker on the Streams side (#8313)
This is an automated email from the ASF dual-hosted git repository.
guozhang pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/kafka.git
The following commit(s) were added to refs/heads/trunk by this push:
new b1999ba KAFKA-5604: Remove the redundant TODO marker on the Streams side (#8313)
b1999ba is described below
commit b1999ba22dbf83b662591baf64e3d68e0f69e818
Author: Guozhang Wang <wa...@gmail.com>
AuthorDate: Wed Mar 18 14:22:53 2020 -0700
KAFKA-5604: Remove the redundant TODO marker on the Streams side (#8313)
The issue itself was fixed a while ago on the producer side, so we can just remove this TODO marker now (we've already removed the isZombie flag anyways).
Reviewers: John Roesler <vv...@apache.org>
---
.../streams/processor/internals/StreamsProducer.java | 19 ++++++++++---------
1 file changed, 10 insertions(+), 9 deletions(-)
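The diff below swallows a ProducerFencedException raised while aborting a transaction, on the grounds that a fenced producer's transaction has already been aborted by the transaction coordinator. A minimal, self-contained sketch of that catch-and-swallow pattern (not the actual Kafka classes; the exception type and producer interface here are stand-ins defined locally so the example compiles on its own):

```java
// Sketch of the catch-and-swallow pattern used in StreamsProducer#abortTransaction.
// ProducerFencedException and TxnProducer are simplified stand-ins, not Kafka's types.
public class AbortSketch {
    static class ProducerFencedException extends RuntimeException {
        ProducerFencedException(final String msg) { super(msg); }
    }

    interface TxnProducer {
        void abortTransaction();
    }

    static String abortIgnoringFencing(final TxnProducer producer) {
        try {
            producer.abortTransaction();
            return "aborted";
        } catch (final ProducerFencedException error) {
            // A fenced producer's transaction was already aborted by the
            // transaction coordinator, so the exception is safe to ignore.
            return "swallowed: " + error.getMessage();
        }
    }

    public static void main(final String[] args) {
        // Simulate a producer that was fenced by a newer instance.
        final TxnProducer fenced = () -> {
            throw new ProducerFencedException("fenced");
        };
        System.out.println(abortIgnoringFencing(fenced));
    }
}
```

Note that, as in the commit, only the fencing exception is swallowed; any other KafkaException during abort would still be rethrown as a fatal error.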
diff --git a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java
index 0324bf2..0aa4de6 100644
--- a/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java
+++ b/streams/src/main/java/org/apache/kafka/streams/processor/internals/StreamsProducer.java
@@ -193,15 +193,16 @@ public class StreamsProducer {
if (transactionInFlight) {
try {
producer.abortTransaction();
- } catch (final ProducerFencedException ignore) {
- /* TODO
- * this should actually never happen atm as we guard the call to #abortTransaction
- * -> the reason for the guard is a "bug" in the Producer -- it throws IllegalStateException
- * instead of ProducerFencedException atm. We can remove the isZombie flag after KAFKA-5604 got
- * fixed and fall-back to this catch-and-swallow code
- */
-
- // can be ignored: transaction got already aborted by brokers/transactional-coordinator if this happens
+ } catch (final ProducerFencedException error) {
+ // The producer only aborts a transaction here if one is still in
+ // flight, which means we did not commit the task while closing it,
+ // i.e. this is a dirty close. The dirty close may itself have been
+ // triggered by a fencing exception thrown previously, in which case
+ // calling abortTransaction here throws the same exception again.
+ // Even if the dirty close was caused by something else (e.g. a
+ // corrupted task), we can still ignore the exception here, since the
+ // transaction was already aborted by the brokers/transactional-coordinator
+ log.debug("Encountered {} while aborting the transaction; this is expected and hence swallowed", error.getMessage());
} catch (final KafkaException error) {
throw new StreamsException(
formatException("Producer encounter unexpected error trying to abort a transaction"),