Posted to users@kafka.apache.org by "Dudenhoefer, Ed" <ed...@ebay.com> on 2016/08/10 20:37:06 UTC

kafka-list-topic showing leader: -1 when brokers 300 or 301 would be the leader

Our main question: how do we nuke or repair things so that brokers 300 and 301 can rejoin the leader and ISR lists properly?
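
(For what it's worth, the obvious thing to try, assuming the standard 0.8 admin tooling, would be a preferred-replica election, sketched below; but we're not sure it can do anything for a partition whose leader is -1 and whose preferred replica has fallen out of the ISR. Without a --path-to-json-file argument it should attempt the election for every partition.)

# /usr/local/kafka-0.8/bin/kafka-run-class.sh kafka.admin.PreferredReplicaLeaderElectionCommand --zookeeper localhost:2181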

We have a 5-host cluster; each host runs both zookeeper-3.3.6 and kafka-0.8.

I sniffed around in zkCli.sh and didn't notice anything particularly wrong or corrupt.
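
For reference, this is the sort of thing I checked (assuming the standard 0.8 znode layout): broker registrations, the controller znode, and the per-partition state for one of the leader: -1 partitions:

[zk: localhost:2181(CONNECTED) 0] ls /brokers/ids
[zk: localhost:2181(CONNECTED) 1] get /brokers/ids/300
[zk: localhost:2181(CONNECTED) 2] get /controller
[zk: localhost:2181(CONNECTED) 3] get /brokers/topics/optimizer-xl-topic/partitions/1376/state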


kafka-list-topic shows leader: -1 whenever broker 300 or 301 would be the leader:

# /usr/local/kafka-0.8/bin/kafka-list-topic.sh --zookeeper localhost:2181
...
topic: optimizer-xl-topic       partition: 1374 leader: 304     replicas: 304,302,303   isr: 304,303
topic: optimizer-xl-topic       partition: 1375 leader: 304     replicas: 300,304,301   isr: 304,300
topic: optimizer-xl-topic       partition: 1376 leader: -1      replicas: 301,300,302   isr: 302
topic: optimizer-xl-topic       partition: 1377 leader: 303     replicas: 302,301,303   isr: 303,302
topic: optimizer-xl-topic       partition: 1378 leader: 304     replicas: 303,302,304   isr: 304,303
topic: optimizer-xl-topic       partition: 1379 leader: 304     replicas: 304,303,300   isr: 304,303,300
topic: optimizer-xl-topic       partition: 1380 leader: -1      replicas: 300,301,302   isr: 302
topic: optimizer-xl-topic       partition: 1381 leader: 303     replicas: 301,302,303   isr: 303,302
topic: optimizer-xl-topic       partition: 1382 leader: 304     replicas: 302,303,304   isr: 304,303
topic: optimizer-xl-topic       partition: 1383 leader: 304     replicas: 303,304,300   isr: 304,303,300
topic: optimizer-xl-topic       partition: 1384 leader: 304     replicas: 304,300,301   isr: 304,300
topic: optimizer-xl-topic       partition: 1385 leader: 303     replicas: 300,302,303   isr: 303,302,300
topic: optimizer-xl-topic       partition: 1386 leader: 304     replicas: 301,303,304   isr: 304,303
topic: optimizer-xl-topic       partition: 1387 leader: 304     replicas: 302,304,300   isr: 304,302,300
topic: optimizer-xl-topic       partition: 1388 leader: 303     replicas: 303,300,301   isr: 303,300
topic: optimizer-xl-topic       partition: 1389 leader: 304     replicas: 304,301,302   isr: 304,302
topic: optimizer-xl-topic       partition: 1390 leader: 304     replicas: 300,303,304   isr: 304,303,300
topic: optimizer-xl-topic       partition: 1391 leader: 304     replicas: 301,304,300   isr: 304,300
topic: optimizer-xl-topic       partition: 1392 leader: 302     replicas: 302,300,301   isr: 302,300
topic: optimizer-xl-topic       partition: 1393 leader: 303     replicas: 303,301,302   isr: 303,302
topic: optimizer-xl-topic       partition: 1394 leader: 304     replicas: 304,302,303   isr: 304,303
topic: optimizer-xl-topic       partition: 1395 leader: 304     replicas: 300,304,301   isr: 304,300
topic: optimizer-xl-topic       partition: 1396 leader: -1      replicas: 301,300,302   isr: 302
topic: optimizer-xl-topic       partition: 1397 leader: 303     replicas: 302,301,303   isr: 303,302
topic: optimizer-xl-topic       partition: 1398 leader: 304     replicas: 303,302,304   isr: 304,303
topic: optimizer-xl-topic       partition: 1399 leader: 304     replicas: 304,303,300   isr: 304,303,300

All 5 brokers report the same info for the above; leader: -1 appears only on partitions where 300 or 301 would be the leader.
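
(A quick way to count the affected partitions from any of the hosts:)

# /usr/local/kafka-0.8/bin/kafka-list-topic.sh --zookeeper localhost:2181 | grep -c 'leader: -1'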



Also, I tried running the offset checker; it works in our other clusters, but fails in this one (on all 5 brokers, not just 300 and 301):

# /usr/local/kafka-0.8/bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zkconnect localhost:2181 --group optimizer-group --topic optimizer-default-topic

Group           Topic                          Pid Offset          logSize         Lag             Owner
Exception in thread "main" java.lang.UnsupportedOperationException: empty.head
        at scala.collection.immutable.Vector.head(Vector.scala:162)
        at kafka.tools.ConsumerOffsetChecker$.kafka$tools$ConsumerOffsetChecker$$processPartition(ConsumerOffsetChecker.scala:72)
        at kafka.tools.ConsumerOffsetChecker$$anonfun$kafka$tools$ConsumerOffsetChecker$$processTopic$1.apply$mcVI$sp(ConsumerOffsetChecker.scala:89)
        at kafka.tools.ConsumerOffsetChecker$$anonfun$kafka$tools$ConsumerOffsetChecker$$processTopic$1.apply(ConsumerOffsetChecker.scala:89)
        at kafka.tools.ConsumerOffsetChecker$$anonfun$kafka$tools$ConsumerOffsetChecker$$processTopic$1.apply(ConsumerOffsetChecker.scala:89)
        at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:61)
        at scala.collection.immutable.List.foreach(List.scala:45)
        at kafka.tools.ConsumerOffsetChecker$.kafka$tools$ConsumerOffsetChecker$$processTopic(ConsumerOffsetChecker.scala:88)
        at kafka.tools.ConsumerOffsetChecker$$anonfun$main$3.apply(ConsumerOffsetChecker.scala:153)
        at kafka.tools.ConsumerOffsetChecker$$anonfun$main$3.apply(ConsumerOffsetChecker.scala:153)
        at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:61)
        at scala.collection.immutable.List.foreach(List.scala:45)
        at kafka.tools.ConsumerOffsetChecker$.main(ConsumerOffsetChecker.scala:152)
        at kafka.tools.ConsumerOffsetChecker.main(ConsumerOffsetChecker.scala)
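
Our best guess at the connection, hedged since we haven't dug through the 0.8 source: processPartition (ConsumerOffsetChecker.scala:72 in the trace) calls .head on some per-partition Vector that comes up empty, which would fit the leader: -1 partitions above, since a partition with no leader has nothing to put in it. Scala's immutable Vector throws exactly this exception on an empty head:

// Minimal repro of the exception in the trace (illustrative only; the
// real Vector is whatever processPartition builds for a partition).
object EmptyHeadRepro {
  def main(args: Array[String]): Unit = {
    val v = Vector.empty[Int] // stands in for a partition with no leader/offset data
    println(v.head)           // throws java.lang.UnsupportedOperationException: empty.head
  }
}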