Posted to dev@lucene.apache.org by "Robert Muir (JIRA)" <ji...@apache.org> on 2018/06/05 02:17:00 UTC

[jira] [Created] (LUCENE-8348) Remove [Edge]NgramTokenizer min/max defaults consistent with Filter

Robert Muir created LUCENE-8348:
-----------------------------------

             Summary: Remove [Edge]NgramTokenizer min/max defaults consistent with Filter
                 Key: LUCENE-8348
                 URL: https://issues.apache.org/jira/browse/LUCENE-8348
             Project: Lucene - Core
          Issue Type: Task
          Components: modules/analysis
         Environment: LUCENE-7960 fixed a good deal of trappiness here for the token filters: there are no longer ridiculous default min/max values such as 1,2.

Also, the javadocs were enhanced to present a clear warning about using large ranges: it seems to surprise people that min=small, max=huge eats up a ton of resources, but it's really like creating (huge-small) separate n-gram indexes, so of course it's expensive.

Finally, it keeps the typical, more efficient fixed-size n-gram case easy, instead of forcing someone to use an unintuitive min=X,max=X range.

We should improve the tokenizers in the same way.
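The resource argument above can be sketched with a little arithmetic. This is a hypothetical helper, not Lucene code: a token of length L yields max(0, L - n + 1) grams for each gram size n, so a wide [min,max] range multiplies the output roughly by the number of gram sizes in the range.

```java
// Hedged sketch (not part of Lucene): counts how many n-grams a single
// token of length tokenLen produces for a [minGram, maxGram] range,
// illustrating why min=small, max=huge is expensive -- each gram size
// behaves like a separate n-gram index.
public class NGramCost {
    static long gramCount(int tokenLen, int minGram, int maxGram) {
        long total = 0;
        for (int n = minGram; n <= maxGram; n++) {
            // A token of length L has max(0, L - n + 1) grams of size n.
            total += Math.max(0, tokenLen - n + 1);
        }
        return total;
    }

    public static void main(String[] args) {
        // A 10-char token with a narrow range min=1, max=2:
        System.out.println(gramCount(10, 1, 2));   // 19 grams
        // The same token with min=1, max=10:
        System.out.println(gramCount(10, 1, 10));  // 55 grams
        // Fixed-size trigrams, the typical efficient case:
        System.out.println(gramCount(10, 3, 3));   // 8 grams
    }
}
```

For a 10-character token, widening the range from [1,2] to [1,10] nearly triples the gram count, while the fixed trigram case stays cheap, which is the efficiency point the description makes.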
            Reporter: Robert Muir






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org