Posted to dev@lucene.apache.org by "jess canabou (JIRA)" <ji...@apache.org> on 2011/06/14 09:04:47 UTC

[jira] [Commented] (SOLR-2218) Performance of start= and rows= parameters are exponentially slow with large data sets

    [ https://issues.apache.org/jira/browse/SOLR-2218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13049020#comment-13049020 ] 

jess canabou commented on SOLR-2218:
------------------------------------

Hi all

I'm a bit confused by this thread, but I think I have the same, or almost the same, issue. I'm searching an index with over 7,000,000 entries. I'm paging with the start and rows parameters (querying 30,000 records at a time), and I notice the query times getting increasingly long the further into the result set I get. Unlike Bill, I do not care about scores or relevancy, and I'm having difficulty understanding whether the docid is a suitable solution to my problem. Is there something I can simply tack onto the end of my query to help speed up these query times? From what I understand, it should not be necessary to sort all the rows that come before the chunk of data I'm querying for.
My query looks like this:
http://hostname/solr/select/?q=blablabla&version=2.2&start=4000000&rows=30000&indent=on&fl=<bunch of fields>

Any help would be greatly appreciated :)
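
A common workaround for deep paging on this version of Solr is keyset paging: sort on a unique, sortable field and filter past the last value already retrieved, so that every request can use start=0. A minimal sketch, assuming the schema's uniqueKey is a sortable field named id (hypothetical; substitute your own key field):

    First page:
    http://hostname/solr/select/?q=blablabla&sort=id%20asc&start=0&rows=30000&fl=id,<bunch of fields>

    Subsequent pages, where LAST_ID is the id of the final document on the previous page:
    http://hostname/solr/select/?q=blablabla&sort=id%20asc&start=0&rows=30000&fq=id:{LAST_ID%20TO%20*}&fl=id,<bunch of fields>

Because the fq range restricts the result set itself, Solr never has to collect and discard the 4,000,000 documents that precede the requested page, so each page costs roughly the same.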

> Performance of start= and rows= parameters are exponentially slow with large data sets
> --------------------------------------------------------------------------------------
>
>                 Key: SOLR-2218
>                 URL: https://issues.apache.org/jira/browse/SOLR-2218
>             Project: Solr
>          Issue Type: Improvement
>          Components: Build
>    Affects Versions: 1.4.1
>            Reporter: Bill Bell
>
> With large data sets (> 10M rows), setting start=<large number> and rows=<large number> is slow, and it gets slower the farther you move from start=0 with a complex query. Sorting randomly makes it slower still.
> I would like to make looping through large data sets faster. It would be nice if we could pass a pointer to the result set to loop over, or support very large rows=<number>.
> Something like:
> rows=1000
> start=0
> spointer=string_my_query_1
> Then, within some interval (say, 5 minutes), I could reference the same loop:
> Something like:
> rows=1000
> start=1000
> spointer=string_my_query_1
> What do you think? Since the data set is so large, the cache does not help.
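
The looping Bill describes can be approximated client-side today with keyset paging, without a server-side pointer. Below is a minimal sketch in Python, assuming a uniqueKey field named id and the JSON response writer (wt=json); the host and query string are hypothetical placeholders:

    import json
    import urllib.parse
    import urllib.request

    SOLR_URL = "http://hostname/solr/select/"  # hypothetical host
    PAGE_SIZE = 1000

    def fetch_page(last_id=None):
        params = {
            "q": "blablabla",   # hypothetical query
            "sort": "id asc",   # id is assumed to be a unique, sortable field
            "start": 0,         # always 0: the fq below does the paging
            "rows": PAGE_SIZE,
            "fl": "id",
            "wt": "json",
        }
        if last_id is not None:
            # Exclusive lower bound: only documents after the last id seen.
            params["fq"] = "id:{%s TO *}" % last_id
        url = SOLR_URL + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["response"]["docs"]

    last_id = None
    while True:
        docs = fetch_page(last_id)
        if not docs:
            break
        # ... process the page of documents here ...
        last_id = docs[-1]["id"]

Each iteration does the same amount of work no matter how deep into the result set the loop is, which is the behaviour the proposed spointer would also provide.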

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org