Posted to jira@kafka.apache.org by "Evan Williams (Jira)" <ji...@apache.org> on 2020/05/05 06:09:00 UTC

[jira] [Comment Edited] (KAFKA-4084) automated leader rebalance causes replication downtime for clusters with too many partitions

    [ https://issues.apache.org/jira/browse/KAFKA-4084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099550#comment-17099550 ] 

Evan Williams edited comment on KAFKA-4084 at 5/5/20, 6:08 AM:
---------------------------------------------------------------

Apologies for the delayed response [~sql_consulting]. Have been a bit unwell (not corona).

Thanks for doing that. Hopefully I'll have some time asap to do more testing. It's a bit chaotic at the moment.

On another small note, have you ever experienced auto.leader.rebalance.enable=false not taking effect at all? I have it set on all nodes in the cluster (after restarting them all); however, when bringing a node back online (after a service restart, for example), leaders are still moved back to that broker automatically for the partitions it is the preferred leader for.

Is there a znode in ZK that might be stale that I could check?
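
As a purely illustrative aside, one way to confirm whether the setting actually took effect on a given broker is to read back its effective configuration with the Java AdminClient's describeConfigs call. This is only a sketch; the bootstrap address and broker id below are placeholders, not values from this cluster.

{code:java}
// Minimal sketch: read back a broker's effective auto.leader.rebalance.enable
// value via the Java AdminClient. Bootstrap address and broker id are placeholders.
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class CheckAutoRebalanceConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // "1" is a placeholder broker id; use the id of the broker that was restarted.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                                 .all().get().get(broker);
            // Prints something like: ConfigEntry(name=auto.leader.rebalance.enable, value=false, ...)
            System.out.println(config.get("auto.leader.rebalance.enable"));
        }
    }
}
{code}

As for ZK, the {{/admin/preferred_replica_election}} znode is the one the manual election tool writes, so that is one place a stale entry could in principle sit; this is a guess, not a confirmed cause.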


was (Author: blodsbror):
Apologies for the delayed response [~sql_consulting]. Have been a bit unwell (not corona).

Thanks for doing that. Hopefully I'll have some time asap to do more testing. It's a bit chaotic at the moment.

On another small note, have you ever experienced auto.leader.rebalance.enable=false not taking effect at all? I have it set on all nodes in the cluster (after restarting them all); however, when bringing a node back online (after a service restart, for example), leaders are still moved back to the broker automatically.

Is there a znode in ZK that might be stale that I could check?

> automated leader rebalance causes replication downtime for clusters with too many partitions
> --------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-4084
>                 URL: https://issues.apache.org/jira/browse/KAFKA-4084
>             Project: Kafka
>          Issue Type: Bug
>          Components: controller
>    Affects Versions: 0.8.2.2, 0.9.0.0, 0.9.0.1, 0.10.0.0, 0.10.0.1
>            Reporter: Tom Crayford
>            Priority: Major
>              Labels: reliability
>             Fix For: 1.1.0
>
>
> If you enable {{auto.leader.rebalance.enable}} (which is on by default), and you have a cluster with many partitions, there is a severe amount of replication downtime following a restart. This causes {{UnderReplicatedPartitions}} to fire, and replication is paused.
> This is because the current automated leader rebalance mechanism changes leaders for *all* imbalanced partitions at once, instead of doing it gradually. This effectively stops all replica fetchers in the cluster (assuming there are enough imbalanced partitions), and restarts them. This can take minutes on busy clusters, during which no replication is happening and user data is at risk. Clients with {{acks=-1}} also see issues at this time, because replication is effectively stalled.
> To quote Todd Palino from the mailing list:
> bq. There is an admin CLI command to trigger the preferred replica election manually. There is also a broker configuration “auto.leader.rebalance.enable” which you can set to have the broker automatically perform the PLE when needed. DO NOT USE THIS OPTION. There are serious performance issues when doing so, especially on larger clusters. It needs some development work that has not been fully identified yet.
> This setting is extremely useful for smaller clusters, but with high partition counts it causes the huge issues stated above.
> One potential fix could be adding a new configuration for the number of partitions to perform automated leader rebalancing on at once, and *stopping* once that number of leader rebalances is in flight, until they're done. There may be better mechanisms, and I'd love to hear if anybody has any ideas.
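
For illustration of the batching idea proposed above, here is a minimal client-side sketch, not the ticket's actual proposal or Kafka's controller implementation, that triggers preferred leader election for a bounded number of partitions at a time using the Java AdminClient's electLeaders call (available only in much newer clients than the versions this ticket was filed against). The topic names, batch size, and bootstrap address are made up.

{code:java}
// Sketch only: elect preferred leaders in small batches, waiting for each batch
// to complete before starting the next, so only `batchSize` elections are in flight.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

public class BatchedPreferredElection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        // Hypothetical partitions whose leadership is currently not on the preferred replica.
        List<TopicPartition> imbalanced = Arrays.asList(
                new TopicPartition("orders", 0),
                new TopicPartition("orders", 1),
                new TopicPartition("payments", 0));

        int batchSize = 2; // stand-in for the proposed "max in-flight rebalances" knob

        try (AdminClient admin = AdminClient.create(props)) {
            for (int i = 0; i < imbalanced.size(); i += batchSize) {
                Set<TopicPartition> batch = new HashSet<>(
                        imbalanced.subList(i, Math.min(i + batchSize, imbalanced.size())));
                // Trigger preferred leader election for this batch and block until it finishes.
                admin.electLeaders(ElectionType.PREFERRED, batch).partitions().get();
            }
        }
    }
}
{code}

A real fix would of course have to live in the controller rather than in a client, but the idea of bounding how many leadership moves are in flight at once is the same.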



--
This message was sent by Atlassian Jira
(v8.3.4#803005)