Posted to users@kafka.apache.org by Ian Kallen <ia...@klout.com> on 2015/10/03 19:59:14 UTC

replication errors on broker restart

When one of the brokers in a 3-node cluster (running v0.8.2.2 on Java 7 update 65) restarts, we see tons of warnings like this:

[2015-10-03 17:43:15,513] WARN [ReplicaFetcherThread-0-0], Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 10; ClientId: ReplicaFetcherThread-0-0; ReplicaId: 2; MaxWait: 10000 ms; MinBytes: 1024 bytes; RequestInfo: [lia.stage.activities,15] -> PartitionFetchInfo(348403,52428800),[lia.stage.bundles,7] -> PartitionFetchInfo(13640118,52428800),[lia.stage.raw_events,2] -> PartitionFetchInfo(185988291,52428800),[lia.stage.bundles,19] -> PartitionFetchInfo(15640380,52428800),[lia.stage.activities.anonymized.json,3] -> PartitionFetchInfo(347631,52428800),[lia.stage.activities.anonymized.json,15] -> PartitionFetchInfo(348344,52428800),[lia.stage.raw_events,14] -> PartitionFetchInfo(322471814,52428800),[lia.stage.activities,3] -> PartitionFetchInfo(347671,52428800). Possible cause: java.net.SocketTimeoutException (kafka.server.ReplicaFetcherThread)

Clients are unable to produce to or consume from it, and the ReplicaManager emits warnings like this:

[2015-10-03 17:48:38,518] WARN [Replica Manager on Broker 2]: Fetch request with correlation id 306 from client ReplicaFetcherThread-2-2 on partition [lia.stage.bundles,9] failed due to Leader not local for partition [lia.stage.bundles,9] on broker 2 (kafka.server.ReplicaManager)

And, less frequently but still recurrently, this error:

[2015-10-03 17:48:39,152] ERROR Closing socket for /10.21.220.32 because of error (kafka.network.Processor)
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:487)
at kafka.api.PartitionDataSend.writeTo(FetchResponse.scala:68)
at kafka.network.MultiSend.writeTo(Transmission.scala:101)
at kafka.api.TopicDataSend.writeTo(FetchResponse.scala:125)
at kafka.network.MultiSend.writeTo(Transmission.scala:101)
at kafka.api.FetchResponseSend.writeTo(FetchResponse.scala:231)
at kafka.network.Processor.write(SocketServer.scala:472)
at kafka.network.Processor.run(SocketServer.scala:342)
at java.lang.Thread.run(Thread.java:745)

The other brokers report SocketTimeoutExceptions like this:

[2015-10-03 17:55:02,146] WARN [ReplicaFetcherThread-2-2], Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 384; ClientId: ReplicaFetcherThread-2-2; ReplicaId: 0; MaxWait: 10000 ms; MinBytes: 1024 bytes; RequestInfo: [lia.stage.activities.anonymized.json,1] -> PartitionFetchInfo(348260,52428800),[lia.stage.bundles,5] -> PartitionFetchInfo(14246268,52428800),[lia.stage.bundles,9] -> PartitionFetchInfo(10617882,52428800),[lia.stage.raw_events,16] -> PartitionFetchInfo(288401851,52428800),[lia.stage.activities.anonymized.json,5] -> PartitionFetchInfo(346952,52428800),[lia.stage.activities,9] -> PartitionFetchInfo(347576,52428800),[lia.stage.activities,17] -> PartitionFetchInfo(347429,52428800),[ian.test,0] -> PartitionFetchInfo(7,52428800),[lia.stage.raw_events,0] -> PartitionFetchInfo(244080587,52428800),[lia.stage.raw_events,8] -> PartitionFetchInfo(221196268,52428800),[lia.stage.activities.anonymized.json,13] -> PartitionFetchInfo(346737,52428800),[lia.stage.activities,5] -> PartitionFetchInfo(346978,52428800),[lia.stage.raw_events,12] -> PartitionFetchInfo(189360261,52428800),[lia.stage.activities,1] -> PartitionFetchInfo(348288,52428800),[lia.stage.activities.anonymized.json,9] -> PartitionFetchInfo(347548,52428800),[lia.stage.activities,13] -> PartitionFetchInfo(346766,52428800),[lia.stage.activities.anonymized.json,17] -> PartitionFetchInfo(347383,52428800),[lia.stage.raw_events,4] -> PartitionFetchInfo(266606082,52428800),[lia.stage.bundles,13] -> PartitionFetchInfo(16804809,52428800),[lia.stage.bundles,1] -> PartitionFetchInfo(19461991,52428800),[lia.stage.bundles,17] -> PartitionFetchInfo(17467493,52428800). Possible cause: java.net.SocketTimeoutException (kafka.server.ReplicaFetcherThread)

And the cluster never really recovers.
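While the cluster is in this state we've been poking at replication status with the stock tools (zookeeper host below is a stand-in for our ensemble):

```shell
# Show partitions whose in-sync replica set has shrunk below the
# replication factor -- these linger indefinitely for us.
bin/kafka-topics.sh --zookeeper zk1:2181 --describe --under-replicated-partitions

# Since we run with auto.leader.rebalance.enable=false, leadership has to
# be moved back to the preferred replicas by hand once the restarted
# broker catches up.
bin/kafka-preferred-replica-election.sh --zookeeper zk1:2181
```

Even after the preferred replica election the fetcher warnings above keep recurring.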

The brokers are configured with the params below; are we doing something egregiously wrong?

zookeeper.session.timeout.ms=10000
zookeeper.connection.timeout.ms=8000
zookeeper.sync.time.ms=3000

# Partitions
num.partitions=4

# Log Settings
log.dirs=/data1/kafka,/data2/kafka
log.retention.hours=96
log.retention.bytes=-1
log.retention.check.interval.ms=300000
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
#log.flush.interval.ms=10000
#log.flush.interval.messages=20000
#log.flush.scheduler.interval.ms=2000
log.roll.hours=96
log.segment.bytes=1073741824

# Topic settings
auto.create.topics.enable=false
delete.topic.enable=true
auto.leader.rebalance.enable=false
controlled.shutdown.enable=true

# Threading settings
num.network.threads=3
num.io.threads=4
background.threads=10

# Socket server configuration
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=102400
socket.send.buffer.bytes=102400
queued.max.requests=500
fetch.purgatory.purge.interval.requests=1000
producer.purgatory.purge.interval.requests=1000


# Message
message.max.bytes=52428800

# Replication
default.replication.factor=2
num.replica.fetchers=4
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.fetch.max.bytes=52428800
replica.fetch.wait.max.ms=10000
replica.fetch.min.bytes=1024
replica.high.watermark.checkpoint.interval.ms=5000

# misc
num.recovery.threads.per.data.dir=1
controller.socket.timeout.ms=30000
connections.max.idle.ms=600000

Thanks!
-Ian