Posted to users@solr.apache.org by Shawn Heisey <ap...@elyograg.org> on 2021/04/03 21:35:06 UTC
Should it be acceptable to have a "rows" parameter larger than Integer.MAX_VALUE?
I've come across something and I wonder if it should be considered a bug.
If a value larger than Integer.MAX_VALUE is sent with the "rows"
parameter, Solr will immediately throw an exception:
org.apache.solr.common.SolrException: For input string: "3000000000"
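For illustration, the failure can be reproduced with plain Java outside Solr (a minimal sketch, not Solr's actual code path; the class name RowsParseDemo is made up). Solr stores "rows" in a Java int, and Integer.parseInt rejects anything above Integer.MAX_VALUE (2147483647) with exactly this message:

```java
// Hypothetical demo class, not Solr code: "rows" ends up in a Java
// int, and Integer.parseInt refuses any value that cannot fit in one.
public class RowsParseDemo {
    public static void main(String[] args) {
        try {
            Integer.parseInt("3000000000"); // > 2147483647, does not fit in an int
        } catch (NumberFormatException e) {
            // Solr wraps this in a SolrException, keeping the same message
            System.out.println(e.getMessage()); // For input string: "3000000000"
        }
    }
}
```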
It would be perfectly valid (though it would probably have terrible performance)
to expect a value like that with distributed indexes. The individual
shard subqueries, of course, could never go that high.
This error also occurs in the cloud example with a distributed index.
Should this be considered a bug, or are we OK with current behavior?
Thanks,
Shawn
Re: Should it be acceptable to have a "rows" parameter larger than Integer.MAX_VALUE?
Posted by Jan Høydahl <ja...@cominvent.com>.
On 3 Apr 2021, at 23:35, Shawn Heisey <ap...@elyograg.org> wrote:
>
> It would be perfectly valid (though it would probably have terrible performance) to expect a value like that with distributed indexes
Hmm, given the way Lucene handles large rows params, we should rather err way earlier than MAX_VALUE, since such high rows requests cause TERRIBLE performance and GC hell.
https://issues.apache.org/jira/browse/SOLR-15252
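A sketch of what "erring earlier" could look like (hypothetical helper, not Solr code; RowsGuard, parseRows, and DEFAULT_MAX_ROWS are made-up names, and the cap value is arbitrary, not anything SOLR-15252 has settled on): parse the raw value as a long so oversized input doesn't blow up in the parser itself, then reject anything outside a configurable range with a clear message.

```java
// Hypothetical guard illustrating an early, configurable cap on rows.
public class RowsGuard {
    public static final int DEFAULT_MAX_ROWS = 100_000; // arbitrary example cap

    public static int parseRows(String raw, int maxRows) {
        final long value;
        try {
            // Parse wide so values above Integer.MAX_VALUE reach the range check
            value = Long.parseLong(raw);
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("rows is not a number: " + raw);
        }
        if (value < 0 || value > maxRows) {
            throw new IllegalArgumentException(
                "rows=" + raw + " is outside the allowed range 0.." + maxRows);
        }
        return (int) value;
    }
}
```

With a cap like this, rows=3000000000 fails fast with a range error instead of a bare parse failure, and the limit can be tuned per deployment.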
Jan
Re: Should it be acceptable to have a "rows" parameter larger than Integer.MAX_VALUE?
Posted by Eric Pugh <ep...@opensourceconnections.com>.
This doesn’t answer your question, but it would be nice if the exception pointed you to the specific cause of the error, e.g. “Input string ‘3000000000’ exceeds Integer.MAX_VALUE for rows parameter”, so that even if you aren’t a Java-savvy person, you would better understand the issue.
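A sketch of that suggestion (hypothetical helper, not Solr's actual error handling; FriendlyParse and parseIntParam are made-up names): catch the bare NumberFormatException and re-throw one that names the parameter and spells out the limit.

```java
// Hypothetical wrapper that turns "For input string: ..." into a
// message naming the parameter and the int limit.
public class FriendlyParse {
    public static int parseIntParam(String name, String raw) {
        try {
            return Integer.parseInt(raw);
        } catch (NumberFormatException e) {
            throw new NumberFormatException(
                "Input string '" + raw + "' for parameter '" + name
                + "' is not a valid int (must be between "
                + Integer.MIN_VALUE + " and " + Integer.MAX_VALUE + ")");
        }
    }
}
```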
> On Apr 3, 2021, at 5:35 PM, Shawn Heisey <ap...@elyograg.org> wrote:
>
> I've come across something and I wonder if it should be considered a bug.
>
> If a value larger than Integer.MAX_VALUE is sent with the "rows" parameter, Solr will immediately throw an exception:
>
> org.apache.solr.common.SolrException: For input string: "3000000000"
>
> It would be perfectly valid (though it would probably have terrible performance) to expect a value like that with distributed indexes. The individual shard subqueries, of course, could never go that high.
>
> This error also occurs in the cloud example with a distributed index.
>
> Should this be considered a bug, or are we OK with current behavior?
>
> Thanks,
> Shawn
_______________________
Eric Pugh | Founder & CEO | OpenSource Connections, LLC | 434.466.1467 | http://www.opensourceconnections.com | My Free/Busy <http://tinyurl.com/eric-cal>
Co-Author: Apache Solr Enterprise Search Server, 3rd Ed <https://www.packtpub.com/big-data-and-business-intelligence/apache-solr-enterprise-search-server-third-edition-raw>