Posted to jira@kafka.apache.org by "simonxia (JIRA)" <ji...@apache.org> on 2018/01/26 18:11:00 UTC

[jira] [Comment Edited] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

    [ https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16341374#comment-16341374 ] 

simonxia edited comment on KAFKA-2729 at 1/26/18 6:10 PM:
----------------------------------------------------------

This happened to me several times on version 0.10.0.0.

The timeline goes as follows:
 # [2018-01-24 13:07:40] broker 18, the old controller, expired
 # [2018-01-24 13:07:41,176] broker 26 then took over as controller
 # [2018-01-24 13:07:40,293] broker 18 resigned as controller
 # [2018-01-24 13:07:41,176] broker 16 was successfully elected as leader
 # [2018-01-24 13:08:17,928] broker 26 resigned as controller

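For context on the election steps above: the Kafka controller is elected by racing to create an ephemeral /controller znode in ZooKeeper, so when the old controller's session expires the znode disappears and another broker wins the race. A rough sketch of that election step, assuming a plain ZooKeeper client (the class name and payload handling are illustrative, not Kafka's actual code):

{code:java}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ControllerElectionSketch {
    // The first broker to create the ephemeral /controller znode becomes controller;
    // when its ZooKeeper session expires (step 1 above), the znode is deleted and
    // the remaining brokers race again (step 2).
    static boolean tryBecomeController(ZooKeeper zk, int brokerId) throws Exception {
        String payload = "{\"version\":1,\"brokerid\":" + brokerId + "}";
        try {
            zk.create("/controller", payload.getBytes("UTF-8"),
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            return true;  // won the election
        } catch (KeeperException.NodeExistsException e) {
            return false; // another broker is already the controller
        }
    }
}
{code}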
and then this log line appeared repeatedly on broker 18:

 
{code:java}
 [2018-01-24 13:07:59,138] INFO Partition [fusion-rtlog-std-prod,21] on broker 18: Cached zkVersion [422946] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)  
{code}
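This log comes from the broker's conditional ISR write: the partition-state znode is updated with a version-checked setData, and ZooKeeper rejects the write when the broker's cached zkVersion is stale. A minimal sketch of that mechanism (simplified; not the actual kafka.cluster.Partition code):

{code:java}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class IsrUpdateSketch {
    // Returns the new znode version on success, or -1 when the cached version is stale.
    static int tryUpdateIsr(ZooKeeper zk, String statePath, byte[] newState, int cachedZkVersion)
            throws KeeperException, InterruptedException {
        try {
            // Conditional write: succeeds only if the znode version still equals our cache.
            Stat stat = zk.setData(statePath, newState, cachedZkVersion);
            return stat.getVersion();
        } catch (KeeperException.BadVersionException e) {
            // Another broker (e.g. the new controller) already bumped the version.
            // The broker logs "Cached zkVersion [...] not equal to that in zookeeper,
            // skip updating ISR" and keeps its stale cache, so the message repeats.
            return -1;
        }
    }
}
{code}

Nothing on this path refreshes the stale cache, which is presumably why the message repeats until the broker receives fresh state from the controller or is restarted.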
 

All related logs can be found at [https://drive.google.com/file/d/1g7tf2YYP9AuwBYe4yMLVCgxDmmc2d_dc/view?usp=sharing]

PS: I recovered from this by restarting the current controller, that is, killing the current controller process so that a new election is triggered; after that, everything was OK.
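For anyone hitting the same state: the active controller can be identified by reading the /controller znode before restarting it. A small sketch, assuming direct ZooKeeper access (the connection string is illustrative):

{code:java}
import org.apache.zookeeper.ZooKeeper;

public class FindController {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("zk1:2181", 30000, event -> { });
        // /controller holds JSON like {"version":1,"brokerid":26,"timestamp":"..."},
        // so "brokerid" identifies the broker to restart.
        byte[] data = zk.getData("/controller", false, null);
        System.out.println(new String(data, "UTF-8"));
        zk.close();
    }
}
{code}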



> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> -----------------------------------------------------------------------
>
>                 Key: KAFKA-2729
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2729
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.10.1.0, 0.11.0.0
>            Reporter: Danil Serdyuchenko
>            Assignee: Onur Karaman
>            Priority: Major
>             Fix For: 1.1.0
>
>
> After a small network wobble where ZooKeeper nodes couldn't reach each other, we started seeing a large number of under-replicated partitions. The ZooKeeper cluster recovered, however we continued to see a large number of under-replicated partitions. Two brokers in the Kafka cluster were showing this in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> This appeared for all of the topics on the affected brokers. Both brokers only recovered after a restart. Our own investigation yielded nothing; I was hoping you could shed some light on this issue. Possibly it's related to https://issues.apache.org/jira/browse/KAFKA-1382, however we're using 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)