Posted to dev@lucene.apache.org by "Tim Allison (JIRA)" <ji...@apache.org> on 2018/03/05 13:06:00 UTC
[jira] [Comment Edited] (LUCENE-8186) CustomAnalyzer with a LowerCaseTokenizerFactory fails to normalize multiterms
[ https://issues.apache.org/jira/browse/LUCENE-8186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16386042#comment-16386042 ]
Tim Allison edited comment on LUCENE-8186 at 3/5/18 1:05 PM:
-------------------------------------------------------------
[~thetaphi], it works because multiterms are normalized in {{TextField}}'s {{analyzeMultiTerm}} (https://github.com/tballison/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/schema/TextField.java#L168), which runs the full analyzer, including the tokenizer.
AFAICT, {{TokenizerChain}}'s {{normalize()}} is never actually called at the moment, which, I'm guessing, is why no one found SOLR-11976 until I did in my custom code. :)
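To make the distinction concrete, here is a deliberately simplified, self-contained model of the two code paths (all class and method names here are hypothetical stand-ins, not Lucene's actual API): when the tokenizer itself does the lowercasing, running the full chain normalizes the term, while running only the filter chain does not.

```java
import java.util.List;
import java.util.function.Function;

// Simplified model: a "tokenizer" that lowercases as it tokenizes
// (like LowerCaseTokenizer) followed by a chain of token filters.
public class MultiTermNormalizationModel {
    // Hypothetical stand-ins for a Tokenizer and TokenFilters.
    static Function<String, String> lowerCaseTokenizer = s -> s.toLowerCase();
    static List<Function<String, String>> tokenFilters = List.of(); // no filters configured

    // Models TextField.analyzeMultiTerm: run the FULL chain, tokenizer included.
    static String analyzeMultiTerm(String term) {
        String out = lowerCaseTokenizer.apply(term);
        for (Function<String, String> f : tokenFilters) out = f.apply(out);
        return out;
    }

    // Models the problematic normalize(): filters only, tokenizer skipped.
    static String normalizeFiltersOnly(String term) {
        String out = term;
        for (Function<String, String> f : tokenFilters) out = f.apply(out);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(analyzeMultiTerm("Hello"));      // hello
        System.out.println(normalizeFiltersOnly("Hello"));  // Hello -- lowercasing lost
    }
}
```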
> CustomAnalyzer with a LowerCaseTokenizerFactory fails to normalize multiterms
> ------------------------------------------------------------------------------
>
> Key: LUCENE-8186
> URL: https://issues.apache.org/jira/browse/LUCENE-8186
> Project: Lucene - Core
> Issue Type: Bug
> Reporter: Tim Allison
> Priority: Minor
> Attachments: LUCENE-8186.patch
>
>
> While working on SOLR-12034, a unit test that relied on the LowerCaseTokenizerFactory failed.
> After some digging, I was able to replicate this at the Lucene level.
> Unit test:
> {noformat}
> @Test
> public void testLCTokenizerFactoryNormalize() throws Exception {
>   Analyzer analyzer = CustomAnalyzer.builder()
>       .withTokenizer(LowerCaseTokenizerFactory.class)
>       .build();
>   // fails
>   assertEquals(new BytesRef("hello"), analyzer.normalize("f", "Hello"));
>
>   // now try an integration test with the classic query parser
>   QueryParser p = new QueryParser("f", analyzer);
>   Query q = p.parse("Hello");
>   // passes
>   assertEquals(new TermQuery(new Term("f", "hello")), q);
>
>   q = p.parse("Hello*");
>   // fails
>   assertEquals(new PrefixQuery(new Term("f", "hello")), q);
>
>   q = p.parse("Hel*o");
>   // fails
>   assertEquals(new WildcardQuery(new Term("f", "hel*o")), q);
> }
> {noformat}
> The problem is that the CustomAnalyzer's {{normalize()}} iterates through the token filters but never calls the tokenizer, which, in the case of the LowerCaseTokenizer, is what does the lowercasing.
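One workaround, sketched below with a simplified self-contained model (the class and method names are illustrative, not Lucene's API), is to keep character mangling out of the tokenizer and do the lowercasing in a token filter instead, analogous to pairing a letter tokenizer with a lowercase filter. Then a filters-only {{normalize()}} and the full analysis chain agree.

```java
import java.util.List;
import java.util.function.Function;

// Illustrative model: lowercasing moved out of the tokenizer and into a
// token filter, so a normalize() that only runs the filters still lowercases.
public class FilterBasedNormalizationModel {
    // Tokenizer splits tokens but leaves characters untouched.
    static Function<String, String> letterTokenizer = s -> s;
    // Lowercasing now lives in the filter chain.
    static List<Function<String, String>> tokenFilters =
        List.of(s -> s.toLowerCase());

    // Full chain: tokenizer, then filters.
    static String analyzeFullChain(String term) {
        String out = letterTokenizer.apply(term);
        for (Function<String, String> f : tokenFilters) out = f.apply(out);
        return out;
    }

    // Filters only, tokenizer skipped -- now gives the same result.
    static String normalizeFiltersOnly(String term) {
        String out = term;
        for (Function<String, String> f : tokenFilters) out = f.apply(out);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(analyzeFullChain("Hello"));      // hello
        System.out.println(normalizeFiltersOnly("Hello"));  // hello -- chains agree
    }
}
```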
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org