Posted to dev@lucene.apache.org by "Erick Erickson (JIRA)" <ji...@apache.org> on 2017/05/08 17:09:04 UTC

[jira] [Updated] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

     [ https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erick Erickson updated LUCENE-7705:
-----------------------------------
    Attachment: LUCENE-7705.patch

Fixed a couple of precommit issues; otherwise the patch is the same.

Amrit:

This test always fails for me: TestMaxTokenLenTokenizer

assertQ("Check the total number of docs", req("q", "letter:lett"), "//result[@numFound=0]");

Looking at the code, I believe numFound should be 1. The problem is that _both_ the index-time and the query-time analysis trim the term to 3 characters, so finding a document when searching for "lett" here is perfectly legitimate. In fact, any query term starting with "let", no matter how long or what follows it, will match. I think the rest of the tests for the fields in this set will fail for the same reason whenever the search term is longer than the max token length. Do you agree?
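
To make the mechanics concrete, here is a minimal standalone sketch (not part of the patch, and assuming the (AttributeFactory, maxTokenLen) constructor this patch adds to WhitespaceTokenizer; the exact signature may differ). With the limit set to 3, "letter", "lett" and "letXyz" all emit "let" as their first token, which is why the query above is a legitimate hit:

import java.io.StringReader;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.util.AttributeFactory;

public class MaxTokenLenDemo {
  public static void main(String[] args) throws Exception {
    for (String input : new String[] {"letter", "lett", "letXyz"}) {
      // maxTokenLen=3: the constructor under review in this patch (signature assumed)
      WhitespaceTokenizer tok =
          new WhitespaceTokenizer(AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY, 3);
      tok.setReader(new StringReader(input));
      CharTermAttribute term = tok.addAttribute(CharTermAttribute.class);
      tok.reset();
      // The first token printed for every input is "let", so index-time and
      // query-time analysis meet on the same term.
      while (tok.incrementToken()) {
        System.out.println(input + " -> " + term.toString());
      }
      tok.end();
      tok.close();
    }
  }
}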

If you agree, let's add a few tests that show this explicitly, so that people looking at the code later will know it's intended behavior, e.g. lines like:

// Anything that matches the first three letters should be found when maxLen=3
assertQ("Check the total number of docs", req("q", "letter:letXyz"), "//result[@numFound=1]");
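
A negative case would document the flip side as well (same field and test style as above; this assumes, as in the example above, that the indexed value for this field starts with "let"):

// A term that differs within the first three characters should NOT match when maxLen=3
assertQ("Check the total number of docs", req("q", "letter:lexYz"), "//result[@numFound=0]");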


Or I somehow messed up the patch.

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length
> ---------------------------------------------------------------------------------------------
>
>                 Key: LUCENE-7705
>                 URL: https://issues.apache.org/jira/browse/LUCENE-7705
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Amrit Sarkar
>            Assignee: Erick Erickson
>            Priority: Minor
>         Attachments: LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256-character limit for CharTokenizer? Changing this limit currently requires copying/pasting incrementToken into a new class, since incrementToken is final.
> KeywordTokenizer can easily change the default (which is also 256), but doing so requires code rather than a schema configuration.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer-derived classes (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) and their factories, it would take adding a constructor to the base class in Lucene and using it in the factories.
> Any objections?
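
For reference, the only way to adjust that KeywordTokenizer default today is in code rather than in the schema. A minimal Lucene-side sketch of the current workaround (class name and buffer value are illustrative only; in Solr this would more likely take the form of a custom TokenizerFactory):

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.KeywordTokenizer;

public class LongKeywordAnalyzer extends Analyzer {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    // KeywordTokenizer(int bufferSize) already exists (the default is 256);
    // what is missing, and what this issue proposes, is a schema-level setting.
    return new TokenStreamComponents(new KeywordTokenizer(1024));
  }
}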



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org