Posted to dev@lucene.apache.org by "Michael McCandless (JIRA)" <ji...@apache.org> on 2008/01/03 16:16:33 UTC

[jira] Created: (LUCENE-1118) core analyzers should not produce tokens > N (100?) characters in length

core analyzers should not produce tokens > N (100?) characters in length
------------------------------------------------------------------------

                 Key: LUCENE-1118
                 URL: https://issues.apache.org/jira/browse/LUCENE-1118
             Project: Lucene - Java
          Issue Type: Improvement
            Reporter: Michael McCandless
            Assignee: Michael McCandless
            Priority: Minor


Discussion that led to this:

  http://www.gossamer-threads.com/lists/lucene/java-dev/56103

I believe nearly any time a token > 100 characters in length is
produced, it's a bug in the analysis that the user is unaware of.

These long tokens cause all sorts of problems downstream, so it's
best to catch them early, at the source.

We can accomplish this by tacking a LengthFilter onto the chains
for StandardAnalyzer, SimpleAnalyzer, WhitespaceAnalyzer, etc.
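
For concreteness, here's a rough sketch of what such a chain could
look like, assuming the Lucene 2.x TokenStream API (the analyzer name
and the 100-char cutoff are placeholders for discussion, not a
committed design):

    import java.io.Reader;

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.LengthFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.WhitespaceTokenizer;

    public class LengthLimitedWhitespaceAnalyzer extends Analyzer {

      private static final int MAX_TOKEN_LENGTH = 100;  // the "N" in question

      public TokenStream tokenStream(String fieldName, Reader reader) {
        // Drop tokens shorter than 1 char or longer than MAX_TOKEN_LENGTH.
        return new LengthFilter(new WhitespaceTokenizer(reader),
                                1, MAX_TOKEN_LENGTH);
      }
    }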

Should we do this in 2.3?  I realize this is technically a break in
backwards compatibility; however, I think it must be incredibly rare
for this change to break anything real in an application.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org


[jira] Resolved: (LUCENE-1118) core analyzers should not produce tokens > N (100?) characters in length

Posted by "Michael McCandless (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/LUCENE-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless resolved LUCENE-1118.
----------------------------------------

    Resolution: Fixed



[jira] Updated: (LUCENE-1118) core analyzers should not produce tokens > N (100?) characters in length

Posted by "Michael McCandless (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/LUCENE-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael McCandless updated LUCENE-1118:
---------------------------------------

    Attachment: LUCENE-1118.patch

I fixed only StandardAnalyzer to skip terms longer than 255 chars by
default (it turns out SimpleAnalyzer, WhitespaceAnalyzer, and
StopAnalyzer already prune tokens at 255 chars).

You can change the max allowed token length by calling
StandardAnalyzer.setMaxTokenLength.
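
For example (the 1024 value below is arbitrary, just to illustrate
the new setter):

    StandardAnalyzer analyzer = new StandardAnalyzer();
    analyzer.setMaxTokenLength(1024);  // raise the 255-char default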

I didn't use LengthFilter, because 1) I wanted to avoid copying the
massive term only to then filter it out (skipping it up front is
faster), and 2) I wanted to increase the position increment of the
next valid token after a series of too-long tokens.
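
Purely to illustrate that position-increment bookkeeping (this is
not the attached patch), here's a sketch of the same idea as a
standalone filter against the classic Lucene 2.x TokenFilter API;
MaxLengthFilter is a hypothetical name:

    import java.io.IOException;

    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;

    public class MaxLengthFilter extends TokenFilter {

      private final int maxLength;

      public MaxLengthFilter(TokenStream in, int maxLength) {
        super(in);
        this.maxLength = maxLength;
      }

      public Token next() throws IOException {
        int skippedPositions = 0;
        for (Token t = input.next(); t != null; t = input.next()) {
          if (t.termText().length() <= maxLength) {
            // Fold the positions of any dropped too-long tokens into
            // this token, so phrase queries still see the gap.
            t.setPositionIncrement(t.getPositionIncrement() + skippedPositions);
            return t;
          }
          skippedPositions += t.getPositionIncrement();
        }
        return null;  // end of stream
      }
    }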

All tests pass.  I plan to commit in a day or two.

