Posted to dev@lucene.apache.org by "Victor Yap (JIRA)" <ji...@apache.org> on 2014/01/10 15:27:50 UTC

[jira] [Commented] (SOLR-822) CharFilter - normalize characters before tokenizer

    [ https://issues.apache.org/jira/browse/SOLR-822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13867832#comment-13867832 ] 

Victor Yap commented on SOLR-822:
---------------------------------

A link referenced in an old comment has moved.

Originally: http://webui.sourcelabs.com/lucene/issues/1343
Moved to: https://issues.apache.org/jira/browse/LUCENE-1343


> CharFilter - normalize characters before tokenizer
> --------------------------------------------------
>
>                 Key: SOLR-822
>                 URL: https://issues.apache.org/jira/browse/SOLR-822
>             Project: Solr
>          Issue Type: New Feature
>          Components: Schema and Analysis
>    Affects Versions: 1.3
>            Reporter: Koji Sekiguchi
>            Assignee: Koji Sekiguchi
>            Priority: Minor
>             Fix For: 1.4
>
>         Attachments: SOLR-822-for-1.3.patch, SOLR-822-renameMethod.patch, SOLR-822.patch, SOLR-822.patch, SOLR-822.patch, SOLR-822.patch, SOLR-822.patch, character-normalization.JPG, japanese-h-to-k-mapping.txt, sample_mapping_ja.txt, sample_mapping_ja.txt
>
>
> A new plugin that can be placed in front of <tokenizer/>.
> {code:xml}
> <fieldType name="textCharNorm" class="solr.TextField" positionIncrementGap="100" >
>   <analyzer>
>     <charFilter class="solr.MappingCharFilterFactory" mapping="mapping_ja.txt" />
>     <tokenizer class="solr.MappingCJKTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>
> {code}
> Multiple <charFilter/>s can be chained. I'll post a JPEG file soon showing a sample of character normalization.
> MOTIVATION:
> In Japanese text processing, there are two types of tokenizers: N-gram (CJKTokenizer) and morphological analyzers.
> When we use a morphological analyzer, we need to normalize characters before tokenization,
> because the analyzer relies on a Japanese dictionary to detect terms.
> I'll post a patch soon, too.
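The kind of normalization the description calls for can be illustrated with plain Java (an illustrative sketch, not the attached patch): the JDK's built-in NFKC normalization folds half-width katakana into the full-width forms that a dictionary-based morphological analyzer expects. MappingCharFilterFactory generalizes this idea with user-defined mapping rules read from a file such as mapping_ja.txt.

```java
import java.text.Normalizer;

public class NormalizeBeforeTokenize {
    public static void main(String[] args) {
        // Half-width katakana, common in legacy Japanese data
        String halfWidth = "\uFF83\uFF7D\uFF84"; // ﾃｽﾄ
        // NFKC folds half-width katakana into their full-width forms,
        // which is what a dictionary-based tokenizer expects to see
        String normalized = Normalizer.normalize(halfWidth, Normalizer.Form.NFKC);
        System.out.println(normalized); // テスト
    }
}
```

Running such a step as a CharFilter (rather than a TokenFilter) is the point of this issue: the mapping must happen before the tokenizer sees the text, since the tokenizer's dictionary lookup would otherwise fail on the unnormalized characters.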



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
