Posted to dev@kafka.apache.org by "Guozhang Wang (JIRA)" <ji...@apache.org> on 2013/09/19 00:30:53 UTC

[jira] [Commented] (KAFKA-1001) Handle follower transition in batch

    [ https://issues.apache.org/jira/browse/KAFKA-1001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13771323#comment-13771323 ] 

Guozhang Wang commented on KAFKA-1001:
--------------------------------------

One thing to note is that makeFollower is currently covered by the leaderIsrUpdateLock. If we split the removeFetcher part and the truncateTo part out of makeFollower, we will have to release leaderIsrUpdateLock and re-acquire it, which could result in race conditions with calls to leaderReplicaIfLocal, etc.
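
To illustrate the concern, a rough sketch follows (this is not the actual Kafka code; only the names leaderIsrUpdateLock and leaderReplicaIfLocal come from the discussion above, and the lock type and helper methods are stand-ins):

    // Illustrative sketch only -- not Kafka code.
    object LockWindowSketch {
      private val leaderIsrUpdateLock = new Object

      // Today makeFollower does roughly this, all under one lock acquisition,
      // so request threads never observe a half-finished transition.
      def makeFollowerToday(): Unit = leaderIsrUpdateLock.synchronized {
        removeFetcher()       // stop fetching from the old leader
        truncateLog()         // truncate the local log
        updateLeaderAndIsr()  // record the new leader/ISR state
      }

      // If the removeFetcher/truncateTo steps are pulled out of makeFollower
      // to run in batch, the lock is released and re-acquired, opening a window:
      def makeFollowerSplit(): Unit = {
        leaderIsrUpdateLock.synchronized { removeFetcher() }
        // <-- a request thread calling something like leaderReplicaIfLocal here
        //     can see a partition that stopped fetching but is not yet truncated
        leaderIsrUpdateLock.synchronized { truncateLog(); updateLeaderAndIsr() }
      }

      // Stand-ins for the real work.
      private def removeFetcher(): Unit = ()
      private def truncateLog(): Unit = ()
      private def updateLeaderAndIsr(): Unit = ()
    }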
                
> Handle follower transition in batch
> -----------------------------------
>
>                 Key: KAFKA-1001
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1001
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Jay Kreps
>             Fix For: 0.8.1
>
>
> In KAFKA-615 we made changes to avoid fsync'ing the active segment of the log on log roll while maintaining recovery semantics.
> One downside of that fix was that it required checkpointing the recovery point for the log many times, once for each partition that transitioned to follower state.
> In this ticket I aim to fix that issue by making the following changes:
> 1. Add a new API LogManager.truncateTo(m: Map[TopicAndPartition, Long]). This method will first checkpoint the recovery point, then truncate each of the given logs to the given offset. This method will have to ensure these two things happen atomically.
> 2. Change ReplicaManager to first stop fetching for all partitions transitioning to follower state, then call LogManager.truncateTo, and then complete the existing logic.
> We think this will, overall, be a good thing. The reason is that the fetching thread currently does something like (a) acquire lock, (b) fetch partitions, (c) write data to logs, (d) release the lock. Since we currently remove fetchers one at a time, each removal requires acquiring the fetcher lock, and hence generally blocks for half of the read/write cycle for each partition. By doing this in bulk we will avoid re-acquiring the lock over and over for each change.
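
For reference, a minimal sketch of what the LogManager.truncateTo API in point 1 above could look like; only the method signature comes from the ticket, while the types and helper names below (LogLike, logs, checkpointRecoveryPointOffsets) and the use of a single lock to keep the two steps atomic are assumptions:

    // Sketch only -- apart from truncateTo's signature from the ticket, the
    // names below are illustrative assumptions, not the real LogManager code.
    case class TopicAndPartition(topic: String, partition: Int)

    trait LogLike {
      def truncateTo(targetOffset: Long): Unit
    }

    class LogManagerSketch(logs: Map[TopicAndPartition, LogLike]) {
      private val lock = new Object

      def truncateTo(partitionOffsets: Map[TopicAndPartition, Long]): Unit =
        lock synchronized {
          // Checkpoint the recovery points once for the whole batch,
          // instead of once per transitioning partition as today.
          checkpointRecoveryPointOffsets()
          // Then truncate each affected log to its target offset. Holding the
          // lock across both steps is one way to keep them atomic with respect
          // to other checkpoint/truncate callers.
          for ((tp, offset) <- partitionOffsets)
            logs.get(tp).foreach(_.truncateTo(offset))
        }

      private def checkpointRecoveryPointOffsets(): Unit = {
        // write the recovery-point checkpoint file(s); omitted in this sketch
      }
    }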

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira