Posted to users@kafka.apache.org by Zakee <kz...@netzero.net> on 2015/03/03 02:02:49 UTC

Fetch Purgatory Request Size

Looking for ideas from those who have been running Kafka for some time.

Should I be concerned about the fetch purgatory size climbing to high values and consistently staying there, while produce rates vary between 210k and 270k messages per second (roughly 60 MB to 82 MB per second)?


There are 5 brokers with the following set of properties, currently hosting 35 topics (400 partitions in total), some of them high-volume and others quite low-throughput. No consumers are running yet; I expect that once consumers start, the purgatory size will increase further.

auto.leader.rebalance.enable=true
leader.imbalance.check.interval.seconds=600
leader.imbalance.per.broker.percentage=10
default.replication.factor=3
log.cleaner.enable=true
log.cleaner.threads=5
log.cleanup.policy=delete
log.flush.scheduler.interval.ms=3000
log.retention.minutes=1440
log.segment.bytes=1073741824
message.max.bytes=100000000
num.io.threads=14
num.network.threads=14
num.replica.fetchers=4
queued.max.requests=500
replica.fetch.max.bytes=200000000
replica.fetch.min.bytes=51200
replica.lag.max.messages=5000
replica.lag.time.max.ms=30000
replica.fetch.wait.max.ms=1000
fetch.purgatory.purge.interval.requests=5000
producer.purgatory.purge.interval.requests=5000
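For context on why the purgatory fills up with these settings: as I understand it, a replica fetch request parks in the fetch purgatory until either replica.fetch.min.bytes accumulates or replica.fetch.wait.max.ms elapses, and satisfied entries may also linger until the purge interval (5000 requests here) kicks in. A rough back-of-envelope sketch, using the numbers above and treating min.bytes as if it applied per partition (a simplification; it actually applies to the whole fetch response, and the rates are illustrative):

```python
MIN_BYTES = 51200    # replica.fetch.min.bytes from the config above
WAIT_MAX_S = 1.0     # replica.fetch.wait.max.ms = 1000

def time_to_min_bytes(bytes_per_sec, min_bytes=MIN_BYTES):
    """Seconds for min_bytes to accumulate at a steady write rate."""
    return min_bytes / bytes_per_sec

# Hypothetical average partition: ~60 MB/s cluster-wide over 400 partitions
avg_rate = 60_000_000 / 400                        # 150,000 bytes/s
print(time_to_min_bytes(avg_rate) < WAIT_MAX_S)    # satisfied before timeout

# Hypothetical low-volume partition: fetches wait out the full 1 s
slow_rate = 1_000                                  # 1,000 bytes/s
print(time_to_min_bytes(slow_rate) < WAIT_MAX_S)   # parks until wait.max.ms
```

If this is right, the low-throughput topics alone would keep a steady population of delayed fetches parked for the full second, which would explain a persistently large (but not necessarily unhealthy) purgatory size.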


Thanks
Zakee


