Posted to dev@lucene.apache.org by "David Bowen (JIRA)" <ji...@apache.org> on 2009/10/02 03:03:23 UTC
[jira] Commented: (LUCENE-1489) highlighter problem with n-gram tokens
[ https://issues.apache.org/jira/browse/LUCENE-1489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12761439#action_12761439 ]
David Bowen commented on LUCENE-1489:
-------------------------------------
Mark, I tried the approach you suggested of using the Formatter interface. I found it didn't work because the Formatter did not have a way to map the tokens in the token group into the text. This could be fixed by providing a public accessor function for TokenGroup's matchStartOffset field. However, it seems convoluted to go to the trouble of constructing a TokenGroup only to have every Formatter have to take it all apart again to find the places within it that need highlighting. It seems to me that the purpose of a TokenGroup is to identify (up to) one span of characters that needs to be highlighted.
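To make the accessor idea concrete, here is a minimal, self-contained sketch using stand-in classes (hypothetical names, not the real Lucene API) of what a public match-offset accessor on TokenGroup would let a Formatter do: map the match span back into the group's slice of the original text and wrap only that span.

```java
// Simplified stand-ins for Lucene's TokenGroup/Formatter (hypothetical,
// for illustration only) showing why a Formatter needs the match offsets.
class TokenGroupSketch {
    private final int startOffset;       // where the group starts in the text
    private final int matchStartOffset;  // package-private in Lucene at the time
    private final int matchEndOffset;

    TokenGroupSketch(int start, int matchStart, int matchEnd) {
        this.startOffset = start;
        this.matchStartOffset = matchStart;
        this.matchEndOffset = matchEnd;
    }

    // The public accessors the comment above suggests adding:
    int getStartOffset()      { return startOffset; }
    int getMatchStartOffset() { return matchStartOffset; }
    int getMatchEndOffset()   { return matchEndOffset; }
}

public class FormatterSketch {
    // Mirrors the shape of Formatter.highlightTerm(String, TokenGroup):
    // given only the group's slice of the original text, highlight just
    // the matching span by translating absolute offsets to relative ones.
    static String highlightTerm(String groupText, TokenGroupSketch g) {
        int relStart = g.getMatchStartOffset() - g.getStartOffset();
        int relEnd   = g.getMatchEndOffset()   - g.getStartOffset();
        return groupText.substring(0, relStart)
             + "<B>" + groupText.substring(relStart, relEnd) + "</B>"
             + groupText.substring(relEnd);
    }

    public static void main(String[] args) {
        // Group covering "Lucene can" (offsets 0..10), match on "can" (7..10).
        TokenGroupSketch g = new TokenGroupSketch(0, 7, 10);
        System.out.println(highlightTerm("Lucene can", g));
    }
}
```

Without the accessor, the relative offsets above cannot be computed, which is the mapping problem described.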
> highlighter problem with n-gram tokens
> --------------------------------------
>
> Key: LUCENE-1489
> URL: https://issues.apache.org/jira/browse/LUCENE-1489
> Project: Lucene - Java
> Issue Type: Bug
> Components: contrib/highlighter
> Reporter: Koji Sekiguchi
> Priority: Minor
> Attachments: lucene1489.patch
>
>
> I have a problem when using n-gram tokens with the highlighter. I thought it had been solved in LUCENE-627...
> Actually, I found this problem when using CJKTokenizer on Solr; here is a Lucene program that reproduces it using NGramTokenizer(min=2,max=2) instead of CJKTokenizer:
> {code:java}
> import java.io.Reader;
>
> import org.apache.lucene.analysis.Analyzer;
> import org.apache.lucene.analysis.TokenStream;
> import org.apache.lucene.analysis.ngram.NGramTokenizer;
> import org.apache.lucene.queryParser.QueryParser;
> import org.apache.lucene.search.Query;
> import org.apache.lucene.search.highlight.Highlighter;
> import org.apache.lucene.search.highlight.QueryScorer;
>
> public class TestNGramHighlighter {
>   public static void main(String[] args) throws Exception {
>     Analyzer analyzer = new NGramAnalyzer();
>     final String TEXT = "Lucene can make index. Then Lucene can search.";
>     final String QUERY = "can";
>     QueryParser parser = new QueryParser("f", analyzer);
>     Query query = parser.parse(QUERY);
>     QueryScorer scorer = new QueryScorer(query, "f");
>     Highlighter h = new Highlighter(scorer);
>     System.out.println(h.getBestFragment(analyzer, "f", TEXT));
>   }
>
>   static class NGramAnalyzer extends Analyzer {
>     public TokenStream tokenStream(String field, Reader input) {
>       return new NGramTokenizer(input, 2, 2);
>     }
>   }
> }
> {code}
> expected output is:
> Lucene <B>can</B> make index. Then Lucene <B>can</B> search.
> but the actual output is:
> Lucene <B>can make index. Then Lucene can</B> search.
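For context on why the highlight merges like this, here is a minimal, self-contained sketch (not using Lucene) of how NGramTokenizer(min=2,max=2) tokenizes text: every adjacent pair of characters becomes a token, so each token overlaps its neighbour by one character, and a chain of overlapping token spans can form between the two occurrences of "can".

```java
import java.util.ArrayList;
import java.util.List;

public class BigramSketch {
    // Emit every adjacent 2-character token with its start/end offsets,
    // formatted as "token@start-end", the way a min=2,max=2 n-gram
    // tokenizer slides over the input one character at a time.
    static List<String> bigrams(String text) {
        List<String> tokens = new ArrayList<>();
        for (int i = 0; i + 2 <= text.length(); i++) {
            tokens.add(text.substring(i, i + 2) + "@" + i + "-" + (i + 2));
        }
        return tokens;
    }

    public static void main(String[] args) {
        // "can make" produces ca@0-2, an@1-3, "n @2-4", " m@3-5", ...:
        // every token overlaps the next by one character, so overlapping
        // spans chain together when tokens are grouped for highlighting.
        System.out.println(bigrams("can make"));
    }
}
```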
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org