Posted to issues@lucene.apache.org by "Tomoko Uchida (Jira)" <ji...@apache.org> on 2020/06/20 05:50:00 UTC

[jira] [Created] (LUCENE-9413) Add a char filter corresponding to CJKWidthFilter

Tomoko Uchida created LUCENE-9413:
-------------------------------------

             Summary: Add a char filter corresponding to CJKWidthFilter
                 Key: LUCENE-9413
                 URL: https://issues.apache.org/jira/browse/LUCENE-9413
             Project: Lucene - Core
          Issue Type: New Feature
            Reporter: Tomoko Uchida


In association with issues reported in Elasticsearch ([https://github.com/elastic/elasticsearch/issues/58384] and [https://github.com/elastic/elasticsearch/issues/58385]), such a char filter might be useful for the default Japanese analyzer.

Although I don't think it is a bug that FULL- and HALF-width characters are not normalized before tokenization, the behaviour sometimes confuses beginners or users who have limited knowledge of Japanese analysis (and Unicode).

If we had a FULL- and HALF-width character normalization char filter in {{analyzers-common}}, we could include it in JapaneseAnalyzer. (Currently, JapaneseAnalyzer contains CJKWidthFilter, but it is applied after tokenization, so all FULL-width numbers and alphabet runs have already been split apart by the tokenizer.)
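To illustrate the kind of normalization being discussed (this is a hypothetical sketch, not Lucene's actual CJKWidthFilter implementation, and it covers only the full-width-ASCII-to-half-width direction, not the half-width-katakana-to-full-width direction that CJKWidthFilter also handles): full-width ASCII characters (U+FF01..U+FF5E) map to their half-width counterparts by a fixed code-point offset of 0xFEE0, which is why a char-level pass before tokenization is straightforward.

```java
// Hypothetical width-folding sketch; class name and scope are illustrative.
public class WidthFold {
    static String fold(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c >= 0xFF01 && c <= 0xFF5E) {
                // Full-width ASCII -> half-width, e.g. 'Ａ' -> 'A', '１' -> '1'
                c = (char) (c - 0xFEE0);
            } else if (c == 0x3000) {
                // Ideographic space -> ordinary space
                c = ' ';
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(fold("Ｌｕｃｅｎｅ１２３")); // prints Lucene123
    }
}
```

Applied as a char filter, this folding would run before the tokenizer sees the text, so a full-width run like "Ｌｕｃｅｎｅ１２３" would be tokenized as a single term rather than being split character by character.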



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@lucene.apache.org
For additional commands, e-mail: issues-help@lucene.apache.org