Posted to dev@kafka.apache.org by Json Tu <ka...@126.com> on 2016/12/16 10:17:46 UTC

log.flush.interval.messages setting of Kafka 0.9.0.0

Hi all,
	we run a 3-node 0.9.0.0 cluster with a topic that has 3 replicas, and we produce to it with acks=-1; our average send latency is about 7 ms. I would like to improve the cluster's performance by tuning some parameters.
We found that our brokers have the following config item set:
	log.flush.interval.messages=10000
All other related parameters are left at their defaults. I see that the default value of log.flush.interval.messages is Long.MAX_VALUE, and setting it explicitly forces proactive flushes to disk, which may hurt performance. I wonder whether I can remove this setting and go back to the default.
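	For clarity, the change I have in mind on the broker side would look roughly like this in server.properties (just a sketch; everything except the flush setting itself is a placeholder, and as far as I know a static broker config like this on 0.9 only takes effect after a rolling restart):

		# current setting: force an fsync of each partition log every 10000 messages
		log.flush.interval.messages=10000

		# proposed: comment the line out so the broker falls back to the default
		# (Long.MAX_VALUE, i.e. leave flushing to the OS page cache / background writeback)
		#log.flush.interval.messages=10000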

	I think using the default value may have two drawbacks:
		1. The recovery checkpoint cannot be advanced, so when the broker loads segments on startup it has to scan them from beginning to end.
		2. Data may be lost if the VM hosting the leader partition's broker restarts, but I think 3 replicas can compensate for this as long as the network between them is good (see the sketch below).
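	To make the assumption behind point 2 explicit, these are roughly the settings that argument relies on (a sketch; the replica count is from our setup, while the topic name, partition count and ZooKeeper host are placeholders):

		# producer side: acks=-1 (equivalent to acks=all) means the leader only acknowledges
		# a write after all in-sync replicas have it, so an acknowledged message already sits
		# in the page cache of all 3 brokers even if none of them has fsynced it yet
		acks=-1

		# topic side: created with 3 replicas, e.g.
		# bin/kafka-topics.sh --create --zookeeper zk:2181 --topic our-topic \
		#   --partitions 3 --replication-factor 3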

	Any suggestions? Thank you.

Re: log.flush.interval.messages setting of Kafka 0.9.0.0

Posted by Json Tu <ka...@126.com>.
I would be grateful to hear opinions from experts out there. Thanks in advance.


> On Dec 16, 2016, at 6:17 PM, Json Tu <ka...@126.com> wrote:
> 
> Hi all,
> 	we run a 3-node 0.9.0.0 cluster with a topic that has 3 replicas, and we produce to it with acks=-1; our average send latency is about 7 ms. I would like to improve the cluster's performance by tuning some parameters.
> We found that our brokers have the following config item set:
> 	log.flush.interval.messages=10000
> All other related parameters are left at their defaults. I see that the default value of log.flush.interval.messages is Long.MAX_VALUE, and setting it explicitly forces proactive flushes to disk, which may hurt performance. I wonder whether I can remove this setting and go back to the default.
> 
> 	I think using the default value may have two drawbacks:
> 		1. The recovery checkpoint cannot be advanced, so when the broker loads segments on startup it has to scan them from beginning to end.
> 		2. Data may be lost if the VM hosting the leader partition's broker restarts, but I think 3 replicas can compensate for this as long as the network between them is good.
> 
> 	Any suggestions? Thank you.


