Posted to dev@lucene.apache.org by "Michael McCandless (JIRA)" <ji...@apache.org> on 2009/10/20 18:41:59 UTC
[jira] Commented: (LUCENE-1997) Explore performance of multi-PQ vs single-PQ sorting API
[ https://issues.apache.org/jira/browse/LUCENE-1997?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12767870#action_12767870 ]
Michael McCandless commented on LUCENE-1997:
--------------------------------------------
OK I ran sortBench.py on opensolaris 2009.06 box, Java 1.6.0_13.
It'd be great if others with more mainstream platforms (Linux,
Windows) could run this and post back.
Raw results (only ran on the log-sized segments):
||Seg size||Query||Tot hits||Sort||Top N||QPS old||QPS new||Pct change||
|log|1|318481|title|10|114.26|112.40|{color:red}-1.6%{color}|
|log|1|318481|title|25|117.59|110.08|{color:red}-6.4%{color}|
|log|1|318481|title|50|116.22|106.96|{color:red}-8.0%{color}|
|log|1|318481|title|100|114.48|100.07|{color:red}-12.6%{color}|
|log|1|318481|title|500|103.16|73.98|{color:red}-28.3%{color}|
|log|1|318481|title|1000|95.60|57.85|{color:red}-39.5%{color}|
|log|<all>|1000000|title|10|95.71|109.41|{color:green}14.3%{color}|
|log|<all>|1000000|title|25|111.56|101.73|{color:red}-8.8%{color}|
|log|<all>|1000000|title|50|110.56|98.84|{color:red}-10.6%{color}|
|log|<all>|1000000|title|100|104.09|93.02|{color:red}-10.6%{color}|
|log|<all>|1000000|title|500|93.36|66.67|{color:red}-28.6%{color}|
|log|<all>|1000000|title|1000|97.07|50.03|{color:red}-48.5%{color}|
|log|<all>|1000000|rand string|10|118.10|109.63|{color:red}-7.2%{color}|
|log|<all>|1000000|rand string|25|107.68|102.33|{color:red}-5.0%{color}|
|log|<all>|1000000|rand string|50|107.12|100.37|{color:red}-6.3%{color}|
|log|<all>|1000000|rand string|100|110.63|95.17|{color:red}-14.0%{color}|
|log|<all>|1000000|rand string|500|79.97|72.09|{color:red}-9.9%{color}|
|log|<all>|1000000|rand string|1000|76.82|54.67|{color:red}-28.8%{color}|
|log|<all>|1000000|country|10|129.49|103.63|{color:red}-20.0%{color}|
|log|<all>|1000000|country|25|111.74|102.60|{color:red}-8.2%{color}|
|log|<all>|1000000|country|50|108.82|100.90|{color:red}-7.3%{color}|
|log|<all>|1000000|country|100|108.01|96.84|{color:red}-10.3%{color}|
|log|<all>|1000000|country|500|97.60|72.02|{color:red}-26.2%{color}|
|log|<all>|1000000|country|1000|85.19|54.56|{color:red}-36.0%{color}|
|log|<all>|1000000|rand int|10|151.75|110.37|{color:red}-27.3%{color}|
|log|<all>|1000000|rand int|25|138.06|109.15|{color:red}-20.9%{color}|
|log|<all>|1000000|rand int|50|135.40|106.49|{color:red}-21.4%{color}|
|log|<all>|1000000|rand int|100|108.30|101.86|{color:red}-5.9%{color}|
|log|<all>|1000000|rand int|500|94.45|73.42|{color:red}-22.3%{color}|
|log|<all>|1000000|rand int|1000|88.30|54.71|{color:red}-38.0%{color}|
Some observations:
* MultiPQ seems like it's generally slower, though it is faster in
one case: topN = 10, sorting by title. Oddly, it's only faster with
the *:* (MatchAllDocsQuery) query, not with the TermQuery for
term=1.
* MultiPQ slows down, relatively, as topN increases.
* Sorting by int acts differently: MultiPQ is quite a bit slower
across the board, except for topN=100.
> Explore performance of multi-PQ vs single-PQ sorting API
> --------------------------------------------------------
>
> Key: LUCENE-1997
> URL: https://issues.apache.org/jira/browse/LUCENE-1997
> Project: Lucene - Java
> Issue Type: Improvement
> Components: Search
> Affects Versions: 2.9
> Reporter: Michael McCandless
> Assignee: Michael McCandless
> Attachments: LUCENE-1997.patch, LUCENE-1997.patch
>
>
> Spinoff from recent "lucene 2.9 sorting algorithm" thread on java-dev,
> where a simpler (non-segment-based) comparator API is proposed that
> gathers results into multiple PQs (one per segment) and then merges
> them in the end.
> I started from John's multi-PQ code and worked it into
> contrib/benchmark so that we could run perf tests. Then I generified
> the Python script I use for running search benchmarks (in
> contrib/benchmark/sortBench.py).
> The script first creates indexes with 1M docs (based on
> SortableSingleDocSource, and based on wikipedia, if available). Then
> it runs various combinations:
> * Index with 20 balanced segments vs index with the "normal" log
> segment size
> * Queries with different numbers of hits (only for wikipedia index)
> * Different top N
> * Different sorts (by title, for wikipedia, and by random string,
> random int, and country for the random index)
> For each test, 7 search rounds are run and the best QPS is kept. The
> script runs singlePQ then multiPQ, and records the resulting best QPS
> for each and produces table (in Jira format) as output.
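The "7 rounds, keep the best QPS" protocol described above can be sketched as follows (a hypothetical harness, not the actual sortBench.py; `searchRound` is a placeholder for one batch of queries):

```java
public class BestQps {

    // Run the same query batch for several rounds and keep the fastest
    // round's queries-per-second, discarding warm-up/noise rounds.
    static double bestQps(Runnable searchRound, int queriesPerRound, int rounds) {
        double best = 0.0;
        for (int i = 0; i < rounds; i++) {
            long start = System.nanoTime();
            searchRound.run();                             // one round of queries
            double sec = (System.nanoTime() - start) / 1e9;
            best = Math.max(best, queriesPerRound / sec);  // keep the best round
        }
        return best;
    }

    public static void main(String[] args) {
        // Placeholder batch; a real run would execute the benchmark queries here.
        double qps = bestQps(() -> { /* run query batch */ }, 100, 7);
        System.out.println("best QPS = " + qps);
    }
}
```

Taking the best of several rounds (rather than the mean) is a common way to filter out JIT warm-up and OS scheduling noise when comparing two implementations.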