Posted to dev@lucene.apache.org by "Robert Muir (JIRA)" <ji...@apache.org> on 2009/11/25 06:58:39 UTC

[jira] Updated: (LUCENE-2090) convert automaton to char[] based processing and TermRef / TermsEnum api

     [ https://issues.apache.org/jira/browse/LUCENE-2090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Robert Muir updated LUCENE-2090:
--------------------------------

    Attachment: LUCENE-2090_TermRef_flex.patch

Attached is a patch that implements endsWith() on TermRef.

This is a huge win on flex, even though the constant-suffix gain is very minor on trunk, because it avoids the Unicode conversion to char[] for the worst cases, which must do lots of comparisons.

*N       1705.7 ms avg -> 1195.4 ms avg
*NNNNNN  1844.9 ms avg -> 1192.3 ms avg
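For illustration, a minimal sketch of what a byte-level endsWith() can look like. The class and field names here are hypothetical stand-ins for TermRef's internals (bytes/offset/length); the point is that the comparison walks raw UTF-8 bytes and never decodes to char[]:

```java
// Hypothetical sketch of a byte-level suffix check, in the spirit of a
// TermRef.endsWith(): compare raw UTF-8 bytes from the tail, with no
// char[] conversion. Names are illustrative, not the actual patch.
final class TermBytes {
    final byte[] bytes;
    final int offset;
    final int length;

    TermBytes(byte[] bytes, int offset, int length) {
        this.bytes = bytes;
        this.offset = offset;
        this.length = length;
    }

    boolean endsWith(TermBytes suffix) {
        if (suffix.length > length) {
            return false;
        }
        // Walk both byte ranges back-to-front; bail out on first mismatch.
        int a = offset + length - 1;
        int b = suffix.offset + suffix.length - 1;
        for (int i = 0; i < suffix.length; i++) {
            if (bytes[a - i] != suffix.bytes[b - i]) {
                return false;
            }
        }
        return true;
    }
}
```

Because UTF-8 compares byte-wise in the same order as code points, a byte-level suffix test gives the same answer as a String-level one.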

It doesn't really matter that the suffix is short: if FilteredTermsEnum.accept() gives a MultiTermQuery any way to accept or reject a term without Unicode conversion, it helps a lot.
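The pattern being described can be sketched as a cheap byte-level pre-filter in front of the full match. This is not the actual flex API, just an illustration of paying for decoding only when a term survives the suffix check:

```java
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;

class AcceptSketch {
    // Illustrative accept(): fast byte-wise suffix rejection first,
    // UTF-8 -> String decoding and the full match only for survivors.
    static boolean accept(byte[] term, byte[] requiredSuffix, Pattern fullPattern) {
        if (term.length < requiredSuffix.length) {
            return false;
        }
        // Fast path: raw byte comparison, no decoding.
        int base = term.length - requiredSuffix.length;
        for (int i = 0; i < requiredSuffix.length; i++) {
            if (term[base + i] != requiredSuffix[i]) {
                return false;
            }
        }
        // Slow path: decode and run the full match only for candidates.
        String s = new String(term, StandardCharsets.UTF_8);
        return fullPattern.matcher(s).matches();
    }
}
```

For an enum visiting millions of terms, most of which fail the suffix test, the slow path runs only on the small surviving fraction.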

In my opinion, this is the cleanest way to improve these cases. The other crazy ideas I have tossed around here, like the iterative "reader-like" conversion or even TermRef substring matching, will probably not gain much more over this, will be a lot more complex, and only apply to AutomatonQuery.

Mike, if you get a chance to review this, I'll commit it to the flex branch (the tests pass).


> convert automaton to char[] based processing and TermRef / TermsEnum api
> ------------------------------------------------------------------------
>
>                 Key: LUCENE-2090
>                 URL: https://issues.apache.org/jira/browse/LUCENE-2090
>             Project: Lucene - Java
>          Issue Type: Improvement
>          Components: Search
>            Reporter: Robert Muir
>            Priority: Minor
>             Fix For: 3.1
>
>         Attachments: LUCENE-2090_TermRef_flex.patch
>
>
> The automaton processing is currently done with String, mostly because TermEnum is based on String.
> It is easy to change the processing to work with char[], since behind the scenes this is used anyway.
> In general I think we should make sure char[] based processing is exposed in the automaton pkg anyway, for things like pattern-based tokenizers and such.
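The quoted description is about running the automaton over char[] directly. A minimal illustration of char[]-based DFA matching, using a hypothetical dense transition-table layout (not the automaton package's actual representation): a tokenizer can feed its internal buffer without allocating a String per candidate.

```java
class DfaSketch {
    // Run a DFA over a slice of a char[] buffer. transitions[s][c] is the
    // next state from state s on char c, or -1 for no transition; accept[s]
    // marks accepting states. State 0 is assumed to be the start state.
    static boolean run(int[][] transitions, boolean[] accept,
                       char[] buf, int off, int len) {
        int state = 0;
        for (int i = off; i < off + len; i++) {
            char c = buf[i];
            state = (c < transitions[state].length) ? transitions[state][c] : -1;
            if (state == -1) {
                return false; // dead end: no transition on this char
            }
        }
        return accept[state];
    }
}
```

A real implementation would need a sparse encoding for the full Unicode range, but the core loop (one table lookup per char, no String allocation) is the same.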

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org