Posted to issues@lucene.apache.org by "Tomoko Uchida (Jira)" <ji...@apache.org> on 2022/04/06 11:43:00 UTC

[jira] [Comment Edited] (LUCENE-10493) Can we unify the viterbi search logic in the tokenizers of kuromoji and nori?

    [ https://issues.apache.org/jira/browse/LUCENE-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17517988#comment-17517988 ] 

Tomoko Uchida edited comment on LUCENE-10493 at 4/6/22 11:42 AM:
-----------------------------------------------------------------

I'm starting on this in small steps. I'll try to keep the commits self-contained, and as small as possible, for safety.
https://github.com/apache/lucene/pull/793
https://github.com/apache/lucene/pull/795

Let me know if there is any feedback, thanks! 


was (Author: tomoko uchida):
I'm starting on this in small steps. I'll try to keep the commits self-contained, and as small as possible, for safety.
https://github.com/apache/lucene/pull/793

Let me know if there is any feedback, thanks! 

> Can we unify the viterbi search logic in the tokenizers of kuromoji and nori?
> -----------------------------------------------------------------------------
>
>                 Key: LUCENE-10493
>                 URL: https://issues.apache.org/jira/browse/LUCENE-10493
>             Project: Lucene - Core
>          Issue Type: Improvement
>          Components: modules/analysis
>            Reporter: Tomoko Uchida
>            Priority: Major
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> We now have common dictionary interfaces for kuromoji and nori ([LUCENE-10393]). A natural question would be: is it possible to unify the Japanese/Korean tokenizers? 
> The core methods of the two tokenizers are `parse()` and `backtrace()`, which compute the minimum-cost path by Viterbi search. I'd set the goal of this issue to factoring them out into a separate class (in analysis-common) that is shared between JapaneseTokenizer and KoreanTokenizer. 
> The algorithm for finding the minimum-cost path is of course language-agnostic, so I think the unification should be theoretically possible; the most difficult part here might be the N-best path calculation, which is supported by JapaneseTokenizer but not by KoreanTokenizer.
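To illustrate the idea, here is a rough, hypothetical sketch of a language-agnostic minimum-cost Viterbi segmentation, in the spirit of the parse()/backtrace() split described above. The class and method names are illustrative only (not Lucene's actual API), and it simplifies heavily: connection costs between adjacent entries and N-best search are omitted.

```java
import java.util.*;

// Hypothetical sketch only: a dictionary-driven Viterbi minimum-cost
// segmentation, loosely analogous to the parse()/backtrace() logic in
// JapaneseTokenizer and KoreanTokenizer. Connection costs between
// adjacent entries and N-best paths are deliberately left out.
public class ViterbiSketch {

  /** A dictionary entry: a surface form plus its word cost. */
  public record Entry(String surface, int wordCost) {}

  /**
   * Forward pass (parse): relax every dictionary entry ending at each
   * position. Backward pass (backtrace): recover the cheapest token
   * sequence. Returns null if no path of entries covers the input.
   */
  public static List<String> segment(String input, List<Entry> dict) {
    int n = input.length();
    int[] bestCost = new int[n + 1];
    int[] backtrace = new int[n + 1];      // start offset of best last token
    String[] bestToken = new String[n + 1];
    Arrays.fill(bestCost, Integer.MAX_VALUE);
    bestCost[0] = 0;
    for (int end = 1; end <= n; end++) {
      for (Entry e : dict) {
        int start = end - e.surface().length();
        if (start < 0 || bestCost[start] == Integer.MAX_VALUE) continue;
        if (!input.startsWith(e.surface(), start)) continue;
        int cost = bestCost[start] + e.wordCost();
        if (cost < bestCost[end]) {
          bestCost[end] = cost;
          backtrace[end] = start;
          bestToken[end] = e.surface();
        }
      }
    }
    if (bestCost[n] == Integer.MAX_VALUE) return null;
    LinkedList<String> tokens = new LinkedList<>();
    for (int pos = n; pos > 0; pos = backtrace[pos]) {
      tokens.addFirst(bestToken[pos]);
    }
    return tokens;
  }

  public static void main(String[] args) {
    List<Entry> dict = List.of(
        new Entry("ab", 10), new Entry("a", 3),
        new Entry("b", 3), new Entry("bc", 2), new Entry("c", 5));
    // "a"+"bc" costs 3+2=5, cheaper than "ab"+"c" (10+5) or "a"+"b"+"c" (3+3+5).
    System.out.println(segment("abc", dict)); // [a, bc]
  }
}
```

The real tokenizers additionally score transitions between adjacent lattice nodes via a connection-cost matrix, which is what makes a shared lattice/backtrace class the interesting refactoring target here.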



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@lucene.apache.org
For additional commands, e-mail: issues-help@lucene.apache.org