Posted to commits@kafka.apache.org by gu...@apache.org on 2016/01/29 17:45:50 UTC
kafka git commit: MINOR: Fix spelling and grammar issues in ReplicaFetcherThread detailed comment
Repository: kafka
Updated Branches:
refs/heads/trunk 8f302c83c -> 3cfa6da6f
MINOR: Fix spelling and grammar issues in ReplicaFetcherThread detailed comment
I noticed them while looking at the recent commit:
https://github.com/apache/kafka/commit/87eccb9a3bea56e5d7d5696aaddef1421f038903
Author: Ismael Juma <is...@juma.me.uk>
Reviewers: Grant Henke, Guozhang Wang
Closes #829 from ijuma/fix-comments-in-replica-fetcher-thread
Project: http://git-wip-us.apache.org/repos/asf/kafka/repo
Commit: http://git-wip-us.apache.org/repos/asf/kafka/commit/3cfa6da6
Tree: http://git-wip-us.apache.org/repos/asf/kafka/tree/3cfa6da6
Diff: http://git-wip-us.apache.org/repos/asf/kafka/diff/3cfa6da6
Branch: refs/heads/trunk
Commit: 3cfa6da6f1141c7dc70459aa0a73c75340a65cef
Parents: 8f302c8
Author: Ismael Juma <is...@juma.me.uk>
Authored: Fri Jan 29 08:45:47 2016 -0800
Committer: Guozhang Wang <wa...@gmail.com>
Committed: Fri Jan 29 08:45:47 2016 -0800
----------------------------------------------------------------------
.../kafka/server/ReplicaFetcherThread.scala | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/kafka/blob/3cfa6da6/core/src/main/scala/kafka/server/ReplicaFetcherThread.scala
----------------------------------------------------------------------
diff --git a/core/src/main/scala/kafka/server/ReplicaFetcherThread.scala b/core/src/main/scala/kafka/server/ReplicaFetcherThread.scala
index a672917..c5f3360 100644
--- a/core/src/main/scala/kafka/server/ReplicaFetcherThread.scala
+++ b/core/src/main/scala/kafka/server/ReplicaFetcherThread.scala
@@ -176,24 +176,24 @@ class ReplicaFetcherThread(name: String,
leaderEndOffset
} else {
/**
- * If the leader's log end offset is greater than follower's log end offset, there are two possibilities:
+ * If the leader's log end offset is greater than the follower's log end offset, there are two possibilities:
* 1. The follower could have been down for a long time and when it starts up, its end offset could be smaller than the leader's
* start offset because the leader has deleted old logs (log.logEndOffset < leaderStartOffset).
* 2. When unclean leader election occurs, it is possible that the old leader's high watermark is greater than
* the new leader's log end offset. So when the old leader truncates its offset to its high watermark and starts
- * to fetch from new leader, an OffsetOutOfRangeException will be thrown. After that some more messages are
- * produced to the new leader. When the old leader was trying to handle the OffsetOutOfRangeException and query
- * the log end offset of new leader, new leader's log end offset become higher than follower's log end offset.
+ * to fetch from the new leader, an OffsetOutOfRangeException will be thrown. After that some more messages are
+ * produced to the new leader. While the old leader is trying to handle the OffsetOutOfRangeException and query
+ * the log end offset of the new leader, the new leader's log end offset becomes higher than the follower's log end offset.
*
- * In the first case, the follower's current log end offset is smaller than leader's log start offset. So the
- * follower should truncate all its log, roll out a new segment and start to fetch from current leader's log
+ * In the first case, the follower's current log end offset is smaller than the leader's log start offset. So the
+ * follower should truncate all its logs, roll out a new segment and start to fetch from the current leader's log
* start offset.
- * In the second case, the follower should just keep the current log segments and retry fetch. In the second
- * case, their will be some inconsistency of data between old leader and new leader. Weare not solving it here.
- * If user want to have strong consistency guarantee, appropriate configurations needs to be set for both
+ * In the second case, the follower should just keep the current log segments and retry the fetch. In the second
+ * case, there will be some inconsistency of data between old and new leader. We are not solving it here.
+ * If users want to have strong consistency guarantees, appropriate configurations need to be set for both
* brokers and producers.
*
- * Putting the two case together, the follower should fetch from the higher one of its replica log end offset
+ * Putting the two cases together, the follower should fetch from the higher one of its replica log end offset
* and the current leader's log start offset.
*
*/
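For readers following the comment above, here is a minimal, self-contained sketch of the resolution it describes: the follower fetches from the higher of its own log end offset and the leader's log start offset. The names below (OffsetOutOfRangeSketch, nextFetchOffset, replicaLogEndOffset, leaderStartOffset) are illustrative only and are not the actual ReplicaFetcherThread API; the real handling also truncates or rolls segments as the comment explains.

// Sketch of the two cases described in the comment (hypothetical names,
// not the actual ReplicaFetcherThread identifiers).
object OffsetOutOfRangeSketch {

  // Case 1: the follower was down while the leader deleted old logs, so
  //   replicaLogEndOffset < leaderStartOffset -> fetch from leaderStartOffset
  //   (after truncating the whole local log and rolling a new segment).
  // Case 2: after an unclean leader election the old leader truncated to its
  //   high watermark, so replicaLogEndOffset >= leaderStartOffset -> keep the
  //   current segments and retry the fetch from replicaLogEndOffset.
  def nextFetchOffset(replicaLogEndOffset: Long, leaderStartOffset: Long): Long =
    math.max(replicaLogEndOffset, leaderStartOffset)

  def main(args: Array[String]): Unit = {
    // Case 1: follower far behind a leader that has deleted old segments.
    assert(nextFetchOffset(replicaLogEndOffset = 100L, leaderStartOffset = 500L) == 500L)
    // Case 2: old leader rejoining after an unclean leader election.
    assert(nextFetchOffset(replicaLogEndOffset = 800L, leaderStartOffset = 0L) == 800L)
    println("both cases resolve as the comment describes")
  }
}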