Posted to dev@lucene.apache.org by "David Smiley (JIRA)" <ji...@apache.org> on 2014/03/16 05:50:18 UTC
[jira] [Updated] (LUCENE-2407) make CharTokenizer.MAX_WORD_LEN parametrizable
[ https://issues.apache.org/jira/browse/LUCENE-2407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
David Smiley updated LUCENE-2407:
---------------------------------
    Fix Version/s: (was: 4.7)
                   4.8
> make CharTokenizer.MAX_WORD_LEN parametrizable
> ----------------------------------------------
>
> Key: LUCENE-2407
> URL: https://issues.apache.org/jira/browse/LUCENE-2407
> Project: Lucene - Core
> Issue Type: Improvement
> Components: modules/analysis
> Affects Versions: 3.0.1
> Reporter: javi
> Priority: Minor
> Labels: dead
> Fix For: 4.8
>
>
> As discussed here http://n3.nabble.com/are-long-words-split-into-up-to-256-long-tokens-tp739914p739914.html, it would be nice to be able to parametrize that value.
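The behavior the issue describes can be illustrated with a standalone sketch (this is a hypothetical illustration, not Lucene's actual CharTokenizer API): a letter-based tokenizer whose maximum token length is a constructor parameter rather than the hard-coded CharTokenizer.MAX_WORD_LEN of 255, so that words longer than the limit are emitted in limit-sized chunks.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, independent of Lucene: a character tokenizer whose
// maximum token length is configurable instead of fixed at 255.
public class ConfigurableCharTokenizer {
    private final int maxTokenLen;

    public ConfigurableCharTokenizer(int maxTokenLen) {
        if (maxTokenLen <= 0) {
            throw new IllegalArgumentException("maxTokenLen must be positive");
        }
        this.maxTokenLen = maxTokenLen;
    }

    // Splits on non-letter characters; a run of letters longer than
    // maxTokenLen is emitted in maxTokenLen-sized chunks, mirroring how
    // CharTokenizer breaks overlong words at MAX_WORD_LEN.
    public List<String> tokenize(String input) {
        List<String> tokens = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (Character.isLetter(c)) {
                current.append(c);
                if (current.length() == maxTokenLen) {
                    tokens.add(current.toString());
                    current.setLength(0);
                }
            } else if (current.length() > 0) {
                tokens.add(current.toString());
                current.setLength(0);
            }
        }
        if (current.length() > 0) {
            tokens.add(current.toString());
        }
        return tokens;
    }
}
```

With a limit of 4, "hello world" would tokenize as "hell", "o", "worl", "d", which is the chunking behavior the linked thread describes for 256-character limits.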
--
This message was sent by Atlassian JIRA
(v6.2#6252)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org