Posted to dev@lucene.apache.org by "Michael McCandless (JIRA)" <ji...@apache.org> on 2007/07/31 19:22:53 UTC
[jira] Commented: (LUCENE-967) Add "tokenize documents only" task to contrib/benchmark
[ https://issues.apache.org/jira/browse/LUCENE-967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12516747 ]
Michael McCandless commented on LUCENE-967:
-------------------------------------------
I plan to commit this soon...
> Add "tokenize documents only" task to contrib/benchmark
> -------------------------------------------------------
>
> Key: LUCENE-967
> URL: https://issues.apache.org/jira/browse/LUCENE-967
> Project: Lucene - Java
> Issue Type: Improvement
> Components: contrib/benchmark
> Affects Versions: 2.3
> Reporter: Michael McCandless
> Assignee: Michael McCandless
> Priority: Minor
> Fix For: 2.3
>
> Attachments: LUCENE-967.patch, LUCENE-967.take2.patch
>
>
> I've been looking at performance improvements to tokenization by
> re-using Tokens, and to help benchmark my changes I've added a new
> task called ReadTokens that just steps through all fields in a
> document, gets a TokenStream, and reads all the tokens out of it.
> E.g., this alg just reads all Tokens for all docs in the Reuters collection:
> doc.maker=org.apache.lucene.benchmark.byTask.feeds.ReutersDocMaker
> doc.maker.forever=false
> {ReadTokens > : *
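
The pattern the ReadTokens task describes (step through every field of a document and consume all tokens from its stream) can be sketched in plain Java. This is only an illustrative sketch: it uses a stand-in whitespace tokenizer and hypothetical names (ReadTokensSketch, readTokens, tokenize), not the actual Lucene TokenStream API or the patch's implementation.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the "read all tokens from all fields" pattern.
// The class and method names here are hypothetical, not Lucene APIs.
public class ReadTokensSketch {

    // Stand-in for obtaining a token stream for a field's text:
    // a real task would ask the analyzer for a TokenStream instead.
    static List<String> tokenize(String fieldText) {
        return Arrays.asList(fieldText.trim().split("\\s+"));
    }

    // Step through all fields of one "document" (modeled as a
    // field-name -> field-text map) and count every token read.
    static int readTokens(Map<String, String> doc) {
        int count = 0;
        for (String fieldText : doc.values()) {
            for (String token : tokenize(fieldText)) {
                count++; // a reuse-oriented task would recycle one Token here
            }
        }
        return count;
    }

    public static void main(String[] args) {
        Map<String, String> doc = new LinkedHashMap<>();
        doc.put("title", "Reuters sample headline");
        doc.put("body", "some body text here");
        System.out.println(readTokens(doc)); // prints 7
    }
}
```

The benchmark task does the same walk per document but pulls tokens from each field's analyzer-produced stream, which is what makes it a useful baseline for measuring Token re-use.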
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org