Posted to dev@lucene.apache.org by "Cao Manh Dat (JIRA)" <ji...@apache.org> on 2017/10/17 04:15:00 UTC

[jira] [Comment Edited] (SOLR-11423) Overseer queue needs a hard cap (maximum size) that clients respect

    [ https://issues.apache.org/jira/browse/SOLR-11423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16206976#comment-16206976 ] 

Cao Manh Dat edited comment on SOLR-11423 at 10/17/17 4:14 AM:
---------------------------------------------------------------

Should we modify the behavior here a little bit? Instead of throwing an IllegalStateException() (which can lead to many errors, since this is an unchecked exception), we should try first and, if the queue is full, retry until timeout.
[~dragonsinth] I really want to hear about your cluster status after SOLR-11443 gets applied (maybe we do not need this hard cap at all if the Overseer can process messages fast enough).
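
To make the retry-until-timeout idea a bit more concrete, here is a rough sketch in plain Java (the CappedQueue interface, method names, timeout, and backoff values below are hypothetical, not the actual Solr/ZkDistributedQueue API): the client polls the capped queue and backs off until a deadline instead of getting an unchecked IllegalStateException right away.

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class CappedEnqueueSketch {

  /** Hypothetical view of a capped Overseer queue; not the real Solr API. */
  interface CappedQueue {
    /** Returns false instead of throwing when the queue is at its hard cap. */
    boolean tryOffer(byte[] data) throws Exception;
  }

  /**
   * Retry until the item is accepted or the timeout expires, instead of
   * surfacing an unchecked IllegalStateException to every caller.
   */
  static void offerWithTimeout(CappedQueue queue, byte[] data, long timeoutMs)
      throws Exception {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    long backoffMs = 50;
    while (!queue.tryOffer(data)) {
      if (System.nanoTime() - deadline >= 0) {
        throw new TimeoutException("Overseer queue still full after " + timeoutMs + " ms");
      }
      Thread.sleep(backoffMs);
      backoffMs = Math.min(backoffMs * 2, 1000); // simple exponential backoff
    }
  }
}
{code}

The timeout still surfaces an error eventually, but callers get a checked exception and a bounded wait instead of an immediate failure while the Overseer catches up.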


was (Author: caomanhdat):
Should we modify the behavior here a little bit? Instead of throwing an IllegalStateException() (which can lead to many errors, since this is an unchecked exception), we should try first and, if the queue is full, retry until timeout.
[~dragonsinth] I really want to hear about your cluster status after SOLR-11443 gets applied.

> Overseer queue needs a hard cap (maximum size) that clients respect
> -------------------------------------------------------------------
>
>                 Key: SOLR-11423
>                 URL: https://issues.apache.org/jira/browse/SOLR-11423
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: SolrCloud
>            Reporter: Scott Blum
>            Assignee: Scott Blum
>
> When Solr gets into pathological GC thrashing states, it can fill the overseer queue with literally thousands and thousands of queued state changes.  Many of these end up being duplicated up/down state updates.  Our production cluster has gotten to the 100k queued items level many times, and there's nothing useful you can do at this point except manually purge the queue in ZK.  Recently, it hit 3 million queued items, at which point our entire ZK cluster exploded.
> I propose a hard cap.  Any client trying to enqueue an item when a queue is full would throw an exception.  I was thinking maybe 10,000 items would be a reasonable limit.  Thoughts?


