Posted to jira@kafka.apache.org by GitBox <gi...@apache.org> on 2020/05/07 23:51:25 UTC

[GitHub] [kafka] guozhangwang commented on a change in pull request #8239: KAFKA-9666: Don't increase transactional epoch when trying to fence if the log append fails

guozhangwang commented on a change in pull request #8239:
URL: https://github.com/apache/kafka/pull/8239#discussion_r421858128



##########
File path: core/src/main/scala/kafka/coordinator/transaction/TransactionCoordinator.scala
##########
@@ -487,6 +487,33 @@ class TransactionCoordinator(brokerId: Int,
               info(s"Aborting sending of transaction markers and returning $error error to client for $transactionalId's EndTransaction request of $txnMarkerResult, " +
                 s"since appending $newMetadata to transaction log with coordinator epoch $coordinatorEpoch failed")
 
+              txnManager.getTransactionState(transactionalId).right.foreach {

Review comment:
       That's a good point. The append returning "fail" does not necessarily mean the append did not go through. I think the alternative idea would work better: whenever the coordinator decides to abort a txn, it can mark the current epoch in memory as aborting, and whenever it (re-)tries to write the prepare-abort entry it always uses epoch + 1 until the write goes through, at which point we can reset the aborting marker.
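       To make the suggestion concrete, here is a minimal standalone sketch (not actual Kafka code; all names like `TxnState` and `prepareAbort` are hypothetical) of the retry loop described above: remember the epoch being aborted in memory, write every prepare-abort attempt with that epoch + 1, and only clear the marker once an append is known to have succeeded.

```scala
// Hypothetical sketch of the proposed epoch-bump retry; not Kafka's real API.
object EpochBumpSketch {
  // abortingEpoch = Some(e) means the coordinator has decided to abort at
  // epoch e but the prepare-abort entry is not yet known to be in the log.
  final case class TxnState(epoch: Int, abortingEpoch: Option[Int])

  // appendOutcomes simulates successive log appends (false = append "failed",
  // which, per the review comment, may or may not have actually gone through).
  def prepareAbort(initial: TxnState, appendOutcomes: Iterator[Boolean]): TxnState = {
    // Mark the current epoch as aborting the first time we decide to abort.
    val aborting = initial.abortingEpoch.getOrElse(initial.epoch)
    // Bump once relative to the remembered aborting epoch; every retry reuses
    // this same write epoch, so an uncertain earlier write can never end up
    // ahead of what the coordinator will eventually commit to memory.
    val writeEpoch = aborting + 1
    var state = initial.copy(abortingEpoch = Some(aborting))
    while (appendOutcomes.hasNext) {
      if (appendOutcomes.next()) {
        // Write went through: adopt the bumped epoch and reset the marker.
        return TxnState(epoch = writeEpoch, abortingEpoch = None)
      }
      // Otherwise keep the aborting marker and retry with the same writeEpoch.
    }
    state
  }
}
```

       The point of keeping the marker across retries is that a "failed" append might have landed in the log anyway; by always rewriting with the same bumped epoch, a retry that succeeds is idempotent with respect to any earlier write that silently went through.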




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org