Posted to solr-dev@lucene.apache.org by "Mark Bennett (JIRA)" <ji...@apache.org> on 2009/05/21 19:39:45 UTC
[jira] Updated: (SOLR-822) CharFilter - normalize characters before tokenizer
[ https://issues.apache.org/jira/browse/SOLR-822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mark Bennett updated SOLR-822:
------------------------------
Attachment: japanese-h-to-k-mapping.txt
In SOLR-814 it was suggested that some systems might want to normalize all Hiragana characters to their Katakana counterparts.
Although this is not universally agreed to, *if* somebody wanted to do it, I believe this mapping file would perform that task when used with this 822 patch. I don't speak Japanese and don't have test content yet, so I'm not 100% sure it works, but wanted to upload it as a start.
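As a rough illustration of what such a mapping accomplishes (this sketch is not part of the patch or the attached file): the Hiragana (U+3041-U+3096) and Katakana (U+30A1-U+30F6) Unicode blocks are parallel, so the conversion is a fixed code-point offset of 0x60.

```python
# Sketch: Hiragana -> Katakana normalization via the fixed 0x60
# code-point offset between the two parallel Unicode blocks.
HIRAGANA_START, HIRAGANA_END = 0x3041, 0x3096
OFFSET = 0x60  # distance from a Hiragana code point to its Katakana counterpart

# Translation table: Hiragana ordinal -> Katakana ordinal
h_to_k = {cp: cp + OFFSET for cp in range(HIRAGANA_START, HIRAGANA_END + 1)}

def normalize(text: str) -> str:
    """Map every Hiragana character to its Katakana counterpart;
    all other characters pass through unchanged."""
    return text.translate(h_to_k)

print(normalize("すし"))  # -> "スシ"
```

A mapping file enumerating each pair produces the same result, but lets non-contiguous or exceptional mappings be expressed too.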
> CharFilter - normalize characters before tokenizer
> --------------------------------------------------
>
> Key: SOLR-822
> URL: https://issues.apache.org/jira/browse/SOLR-822
> Project: Solr
> Issue Type: New Feature
> Components: Analysis
> Affects Versions: 1.3
> Reporter: Koji Sekiguchi
> Assignee: Koji Sekiguchi
> Priority: Minor
> Fix For: 1.4
>
> Attachments: character-normalization.JPG, japanese-h-to-k-mapping.txt, sample_mapping_ja.txt, sample_mapping_ja.txt, SOLR-822-for-1.3.patch, SOLR-822-renameMethod.patch, SOLR-822.patch, SOLR-822.patch, SOLR-822.patch, SOLR-822.patch, SOLR-822.patch
>
>
> A new plugin which can be placed in front of <tokenizer/>.
> {code:xml}
> <fieldType name="textCharNorm" class="solr.TextField" positionIncrementGap="100">
>   <analyzer>
>     <charFilter class="solr.MappingCharFilterFactory" mapping="mapping_ja.txt"/>
>     <tokenizer class="solr.MappingCJKTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> {code}
> Multiple <charFilter/>s can be chained. I'll post a JPEG file soon to show a character normalization sample.
> MOTIVATION:
> In Japan, two types of tokenizers are commonly used -- N-gram (CJKTokenizer) and morphological analyzers.
> When we use a morphological analyzer, we need to normalize characters before tokenization,
> because the analyzer relies on a Japanese dictionary to detect terms.
> I'll post a patch soon, too.
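For reference, mapping files for MappingCharFilterFactory use one rule per line in the form "source" => "target" (characters may be written literally or as \uXXXX escapes). A Hiragana-to-Katakana file would contain entries along these lines; this is only a sketch, and the attached japanese-h-to-k-mapping.txt is the authoritative version:

{code}
"\u3041" => "\u30A1"
"\u3042" => "\u30A2"
"\u3043" => "\u30A3"
...
{code}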
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.