Posted to dev@lucene.apache.org by "Namgyu Kim (JIRA)" <ji...@apache.org> on 2019/05/22 14:39:00 UTC

[jira] [Comment Edited] (LUCENE-8784) Nori(Korean) tokenizer removes the decimal point.

    [ https://issues.apache.org/jira/browse/LUCENE-8784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16845933#comment-16845933 ] 

Namgyu Kim edited comment on LUCENE-8784 at 5/22/19 2:38 PM:
-------------------------------------------------------------

Thank you for your reply, [~jim.ferenczi] :D

 

I tried to process only the "." character in the Tokenizer,
 because Korean is a language that uses whitespace inside a sentence, while Japanese does not.
 (Character.OTHER_PUNCTUATION would match more than just the full stop character. => Right, that's a problem. I have to change that part...)
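
For reference, a quick standalone check (mine, not from the patch) showing how broad that category is:
{code:java}
public class PunctuationTypeCheck {
  public static void main(String[] args) {
    // '.' is far from the only character in the OTHER_PUNCTUATION (Po) category:
    for (char c : new char[] {'.', ',', '!', '?', '#', '%'}) {
      System.out.println(c + " -> "
          + (Character.getType(c) == Character.OTHER_PUNCTUATION));
    }
    // All six print "true", so matching on the category alone would also
    // treat commas, exclamation marks, etc. as decimal-point candidates.
  }
}
{code}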

 

JapaneseTokenizer keeps whitespace when using the discardPunctuation option.
 (Example: "十万二千五 百 二千五" means "102005 100 2005". If we run JapaneseTokenizer with discardPunctuation=false and then JapaneseNumberFilter, we get:
{"102005", " ", "100", " ", "2005"}.)

 

Of course we could do it with a StopFilter or with internal processing in another Filter, but is that okay?
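
For example, something like this ("stream" being the JapaneseNumberFilter output from the sketch above; this is a rough sketch, not a real proposal):
{code:java}
import java.util.Collections;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.core.StopFilter;

// Treat the single-space term as a "stop word" and drop it
// after the number filter has run.
CharArraySet spaceOnly = new CharArraySet(Collections.singleton(" "), false);
TokenStream noSpaces = new StopFilter(stream, spaceOnly);
{code}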

 

Developing a NumberFilter looks much more flexible and structurally cleaner than internal processing in the Tokenizer.
 But I developed it this way because of the problems above; how can we handle those spaces?

 

I think there are several ways to handle this problem:
 1) Remove whitespace from the punctuation list in the Tokenizer.
 2) Use a TokenFilter to remove whitespace (see the sketch after this list).
 3) Remove whitespace in KoreanNumberFilter. (looks structurally strange...)
 4) Just leave the whitespace.
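
For option 2, a sketch of what I mean (the filter name and the whitespace check are mine, not from the patch):
{code:java}
import org.apache.lucene.analysis.FilteringTokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

/** Sketch for option 2: drop tokens that consist only of whitespace. */
public final class DropWhitespaceFilter extends FilteringTokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public DropWhitespaceFilter(TokenStream in) {
    super(in);
  }

  @Override
  protected boolean accept() {
    // Keep the token unless every character is whitespace.
    for (int i = 0; i < termAtt.length(); i++) {
      if (!Character.isWhitespace(termAtt.charAt(i))) {
        return true;
      }
    }
    return false;
  }
}
{code}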



>  Nori(Korean) tokenizer removes the decimal point. 
> ---------------------------------------------------
>
>                 Key: LUCENE-8784
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8784
>             Project: Lucene - Core
>          Issue Type: Improvement
>            Reporter: Munkyu Im
>            Priority: Major
>         Attachments: LUCENE-8784.patch
>
>
> This is the same issue that I mentioned in [https://github.com/elastic/elasticsearch/issues/41401#event-2293189367]
> Unlike the standard analyzer, the Nori analyzer removes the decimal point.
> The Nori tokenizer removes the "." character by default.
>  In this case, it is difficult to index keywords that include a decimal point.
> It would be nice if there were an option to keep the decimal point or not.
> Like the Japanese tokenizer does, Nori needs an option to preserve the decimal point.
>  


