Posted to dev@lucene.apache.org by "Erick Erickson (JIRA)" <ji...@apache.org> on 2017/03/06 18:12:32 UTC
[jira] [Resolved] (SOLR-10186) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length
[ https://issues.apache.org/jira/browse/SOLR-10186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Erick Erickson resolved SOLR-10186.
-----------------------------------
Resolution: Duplicate
This is really LUCENE-7705; see that JIRA for status.
> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length
> ---------------------------------------------------------------------------------------------
>
> Key: SOLR-10186
> URL: https://issues.apache.org/jira/browse/SOLR-10186
> Project: Solr
> Issue Type: Improvement
> Security Level: Public(Default Security Level. Issues are Public)
> Reporter: Erick Erickson
> Assignee: Erick Erickson
> Priority: Minor
> Attachments: SOLR-10186.patch, SOLR-10186.patch, SOLR-10186.patch
>
>
> Is there a good reason that we hard-code a 256-character limit in CharTokenizer? Changing this limit currently requires copy/pasting incrementToken into a new class, since incrementToken is final.
> KeywordTokenizer's default (also 256) can easily be changed, but doing so requires code rather than being configurable in the schema.
> For KeywordTokenizer, this change is Solr-only. For the CharTokenizer-derived classes (WhitespaceTokenizer, UnicodeWhitespaceTokenizer, and LetterTokenizer) and their factories, it would take adding a constructor to the base class in Lucene and using it in the factories.
> Any objections?
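The effect of the hard-coded limit described above can be illustrated with a small self-contained sketch (this is not Lucene code; the class and method names are hypothetical). When CharTokenizer's fixed buffer fills, a run of non-whitespace longer than the limit is emitted as multiple tokens, split at the boundary, which is the behavior a configurable maxTokenLen would let users tune:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of CharTokenizer-style whitespace tokenization
// with a maximum token length: any token reaching maxTokenLen is
// emitted immediately and the remainder becomes a new token.
public class MaxTokenLenSketch {
    static List<String> tokenize(String input, int maxTokenLen) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (Character.isWhitespace(c)) {
                // End of a token: emit whatever has accumulated.
                if (current.length() > 0) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else {
                current.append(c);
                // Buffer full: emit the token and keep scanning,
                // so the overflow starts a fresh token.
                if (current.length() == maxTokenLen) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            }
        }
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }

    public static void main(String[] args) {
        // A 8-char run with a limit of 5 splits into "abcde" + "fgh".
        System.out.println(tokenize("abcdefgh ij", 5));
    }
}
```

With Lucene's actual 256-character limit, the same splitting happens to any token longer than 256 characters, which is why the issue proposes making the limit configurable instead of requiring a subclass.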
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org