Posted to dev@lucene.apache.org by Shawn Heisey <ap...@elyograg.org> on 2015/10/02 07:57:59 UTC

SolrCloud overseer queue, zookeeper, and jute.maxbuffer

In a message I just sent to the solr-user list, I mentioned problems
with the overseer queue znode growing significantly larger than
jute.maxbuffer.
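
For context, the only remedy I know of once the queue has blown past
the limit is raising jute.maxbuffer on every JVM that talks to
zookeeper, something like this (the value is just an example, roughly
30MB):

    -Djute.maxbuffer=31457280

It has to be set on both the zookeeper servers and the Solr instances,
because the limit is enforced on both sides of the connection.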

How much pain would it cause if zookeeper were to refuse to accept
new entries in the overseer queue when the new entry would push the
znode size above jute.maxbuffer?  Presumably such a refusal would raise
a specific exception that we could catch.  If that feature existed,
could SolrCloud handle the situation gracefully, possibly by waiting
until the overseer has processed some of the existing entries and made
some room?
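
To make that concrete, here is roughly what I imagine the Solr side
could look like.  None of this exists today -- the "queue would be too
large" rejection is the hypothetical zookeeper feature, and I'm using
ConnectionLossException purely as a stand-in for whatever exception it
would actually throw:

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class OverseerQueueOffer {
      /**
       * Hypothetical: offer an entry to the overseer queue, backing off
       * when zookeeper refuses because the parent znode would exceed
       * jute.maxbuffer.
       */
      public static String offer(ZooKeeper zk, byte[] data)
          throws KeeperException, InterruptedException {
        while (true) {
          try {
            // Normal sequential queue entry, exactly as the queue works now.
            return zk.create("/overseer/queue/qn-", data,
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
          } catch (KeeperException.ConnectionLossException e) {
            // Stand-in for the specific "parent too large" exception the
            // proposed feature would throw.  Wait for the overseer to
            // drain some of the queue, then retry.
            Thread.sleep(1000);
          }
        }
      }
    }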

I sent a query about such a feature to the zookeeper mailing list:

http://mail-archives.apache.org/mod_mbox/zookeeper-user/201510.mbox/%3C560DCB9E.4090307%40elyograg.org%3E

There are more messages in the thread, which you can follow with the
navigation links.  ZOOKEEPER-2260 was mentioned, which looks
interesting, but ultimately the root of the huge queue problem is that
entries are generated VERY fast, and nothing keeps the size of the
queue znode within zookeeper's own built-in limits.

A few months ago, on SOLR-7191, I mentioned an idea for queue entries
that each carry a large-scale state update instead of updating one
little piece per entry.  It's the second-to-last paragraph in this
comment:

https://issues.apache.org/jira/browse/SOLR-7191?focusedCommentId=14348836
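
For illustration, the kind of thing I mean is a single queue message
that carries state changes for many replicas at once, rather than one
message per little change.  The field names and serialization here are
made up for the example; Solr's real message format would obviously
differ:

    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class BatchedStateMessage {
      /**
       * Illustrative only: fold every pending per-replica state change
       * into one payload, so the overseer queue gets one entry instead
       * of hundreds.
       */
      public static byte[] build(List<Map<String, String>> replicaStates) {
        Map<String, Object> message = new HashMap<>();
        message.put("operation", "bulkstate");       // made-up operation name
        message.put("states", new ArrayList<>(replicaStates));
        // Serialization is hand-waved; the real thing would use Solr's
        // JSON utilities to produce the queue entry's bytes.
        return message.toString().getBytes(StandardCharsets.UTF_8);
      }
    }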

Any thoughts?

Thanks,
Shawn

