Posted to dev@lucene.apache.org by "Steve Rowe (JIRA)" <ji...@apache.org> on 2014/08/22 11:10:12 UTC

[jira] [Updated] (LUCENE-5897) performance bug ("adversary") in StandardTokenizer

     [ https://issues.apache.org/jira/browse/LUCENE-5897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Rowe updated LUCENE-5897:
-------------------------------

    Attachment: LUCENE-5897.patch

Trunk patch, fixes both this issue and LUCENE-5400:

* modifies jflex generation to disable scanner buffer expansion
* when StandardTokenizerInterface.setMaxTokenLength() is called, the scanner's buffer is resized to match, capped at a maximum of 1M chars
* added randomized tests for StandardTokenizer and UAX29URLEmailTokenizer.
* I tried to find problematic text sequences for the other JFlex grammars (HTMLStripCharFilter, ClassicTokenizer, and WikipediaTokenizer), but nothing I tried triggered the problem, so I left those as-is.
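To illustrate the idea behind disabling buffer expansion, here is a minimal, hedged sketch (not the actual JFlex-generated code; the class and method names are hypothetical): a scanner whose refill shifts already-consumed chars to the front of a fixed-size buffer instead of growing it, so a single long non-token run can never force unbounded buffer growth.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Arrays;

/**
 * Hypothetical sketch of a fixed-size scanner buffer. A JFlex-style scanner
 * normally doubles its buffer when a token outgrows it; this version instead
 * reclaims consumed space and truncates, mirroring the patch's cap on
 * scanner buffer size.
 */
public class FixedBufferScanner {
  private final char[] buffer;
  private final Reader reader;
  private int start; // first unconsumed char
  private int end;   // one past the last valid char

  public FixedBufferScanner(Reader reader, int bufferSize) {
    this.reader = reader;
    this.buffer = new char[bufferSize];
  }

  /** Refill without expansion: shift pending chars to the front, then read. */
  private boolean refill() throws IOException {
    if (start > 0) {
      // Reclaim consumed space instead of allocating a larger buffer.
      System.arraycopy(buffer, start, buffer, 0, end - start);
      end -= start;
      start = 0;
    }
    if (end == buffer.length) {
      // Buffer is full of one unfinished token: truncate rather than expand.
      start = end = 0;
    }
    int n = reader.read(buffer, end, buffer.length - end);
    if (n <= 0) {
      return false;
    }
    end += n;
    return true;
  }

  /** Consume all input, touching each char a bounded number of times. */
  public long consumeAll() throws IOException {
    long count = 0;
    while (end > start || refill()) {
      count += end - start;
      start = end;
    }
    return count;
  }

  public static void main(String[] args) throws IOException {
    char[] input = new char[1024 * 1024];
    Arrays.fill(input, '_');
    FixedBufferScanner s =
        new FixedBufferScanner(new StringReader(new String(input)), 4096);
    // Completes in linear time; the 4K buffer never grows.
    System.out.println(s.consumeAll());
  }
}
```

With the real patch, the buffer cap follows setMaxTokenLength(), so callers who raise the token length still get a matching (bounded) buffer.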

All analysis-common tests pass, as does precommit (after locally patching some javadoc problems unrelated to this issue).  I'll commit to trunk and branch_4x after I've run the whole test suite.

I'd like to include this fix in 4.10.

> performance bug ("adversary") in StandardTokenizer
> --------------------------------------------------
>
>                 Key: LUCENE-5897
>                 URL: https://issues.apache.org/jira/browse/LUCENE-5897
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Robert Muir
>         Attachments: LUCENE-5897.patch
>
>
> There seem to be some conditions (I don't know how rare, or what conditions exactly) that cause StandardTokenizer to essentially hang on input: I haven't looked hard yet, but since it's essentially a DFA I think something weird might be going on.
> An easy way to reproduce is with 1MB of underscores, it will just hang forever.
> {code}
>   public void testWorthyAdversary() throws Exception {
>     char[] buffer = new char[1024 * 1024];
>     Arrays.fill(buffer, '_');
>     int tokenCount = 0;
>     Tokenizer ts = new StandardTokenizer();
>     ts.setReader(new StringReader(new String(buffer)));
>     ts.reset();
>     while (ts.incrementToken()) {
>       tokenCount++;
>     }
>     ts.end();
>     ts.close();
>     assertEquals(0, tokenCount);
>   }
> {code} 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org