Posted to dev@lucene.apache.org by "Dai Deqi (JIRA)" <ji...@apache.org> on 2014/02/08 20:29:23 UTC

[jira] [Commented] (LUCENE-4956) the korean analyzer that has a korean morphological analyzer and dictionaries

    [ https://issues.apache.org/jira/browse/LUCENE-4956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13895699#comment-13895699 ] 

Dai Deqi commented on LUCENE-4956:
----------------------------------

Dear Lucene Korean Team,

I posted the following at SourceForge too. Thank you for your time. I would appreciate any input or assistance you can provide.

Respectfully,
Deqi

Dear Lucene Korean Team,
Hi, I'm a translator working with OmegaT and the OmegaT developers (see Yahoo! OmegaT group). Thank you all very much for the hard work you've put into this analyzer. I was so excited when I came across it!
As a result, I asked the OmegaT developers if they could include your Korean analyzer in OmegaT, and they did. The unfortunate part is that the analyzer does not appear to be working. See the e-mails pasted below for more information.
I would also respectfully like to ask a few questions. Would you happen to know why this is happening? If there is a problem, do you know whether it will be fixed in future releases? Finally, may I ask how this analyzer and the one here are related: https://issues.apache.org/jira/browse/LUCENE-4956
Thank you all in advance for your time.
Respectfully,
Deqi
Dear Colleagues,
RE: http://groups.yahoo.com/neo/groups/OmegaT/conversations/messages/20023
I'm interested in adding a Korean-specific analyzer/tokenizer to OmT 3.0.8 because of the simplicity of the CJK tokenizer described in the referenced thread. To that end, I downloaded KoreanAnalyzer-20100302.jar and, since I'm using a Mac, put it in the .app lib folder and updated the Info.plist file to point to the new jar file.
Does anyone know what else needs to be done? How do I make OmT aware of the new analyzer and use it by default? I'd be very grateful for any assistance, and I apologize in advance if I don't know the difference between an analyzer and a tokenizer.
For those working in Korean, there's another apparently related analyzer, but I have no idea of how to work with it:
https://issues.apache.org/jira/browse/LUCENE-4956
V/R,
Dai Deqi
Hi Aaron,
Good news and bad news. I built OmT with the new Korean analyzer that you so graciously added, with no problems at all. However, the new Korean-only analyzer doesn't appear to be working as well as the CJK analyzer. I'm assuming analyzer/tokenizer differences will show up most noticeably in the Glossary pane, and that's where I'm seeing big differences.
For example, the simple sentence below
그 전문은 다음과 같다. ("The full text is as follows.")
produces Transtips and Glossary hits using the CJK analyzer, but nothing with the new Korean-only analyzer. That was quite disappointing.
If there are any other tests you or anyone else can suggest or would like me to try, please let me know. I've never done this kind of testing before.
All the Best,
Dai Deqi
Hello.
I just did a quick test of the KoreanAnalyzer lib and found that while the tokenizer seems to work fine, the analyzer part (which is used for glossary and Transtips, etc.) doesn't seem to work at all.
Input: "그 전문은 다음과 같다."
Tokenizer output: [ "그", "전문은", "다음과", "같다" ]
Analyzer output: [ ]
In other words, the analyzer simply does not output anything, which means that no matches will be found.
I'm not sure what to make of this, as we are using the library in the same way as any other Lucene analyzer. This suggests to me that the code is broken; if there's some workaround then perhaps the author of the library can help us, but otherwise we will just have to wait until the standalone library is fixed or a final version is integrated into Lucene.
-Aaron
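
(For anyone who wants to reproduce this comparison: the standard way to inspect what a Lucene 4.x analyzer emits is to drain its TokenStream, roughly as in the sketch below. The field name is arbitrary, and a KoreanAnalyzer class from the jar mentioned above is assumed to be on the classpath.)

    import java.io.IOException;
    import java.io.StringReader;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class AnalyzerDump {
        // Drains the analyzer's TokenStream for the given text and
        // collects every term it emits.
        static List<String> tokens(Analyzer analyzer, String text) throws IOException {
            List<String> terms = new ArrayList<String>();
            TokenStream ts = analyzer.tokenStream("field", new StringReader(text));
            CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
            try {
                ts.reset();
                while (ts.incrementToken()) {
                    terms.add(term.toString());
                }
                ts.end();
            } finally {
                ts.close();
            }
            return terms;
        }
    }

An empty list from the Korean analyzer for input that clearly contains tokens would reproduce the result described above.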
Actually, sorry, I was wrong; the analyzer's output is empty for the example sentence you supplied, but that is not true for the general case.
For a sentence I took from Wikipedia:
Input: "위키백과는 전 세계 여러 언어로 만들어 나가는 자유 백과사전으로, 누구나 참여하실 수 있습니다."
Tokenization: [ "위키백과는", "전", "세계", "여러", "언어로", "만들어", "나가는", "자유", "백과사전으로", "누구나", "참여하실", "수", "있습니다" ]
Analysis: [ "위키백과는", "위키백", "위키", "키백" ]
I thought at first this was the result of a very aggressive stopwords filter or something, but the result is the same even when supplying an empty stopwords set. Plus, Google Translate tells me that the analysis result is basically:
[ "Wikipedia", "Wikipedia", "Wiki", "pedia" ] (all substrings of the first token)
So it seems the conclusion is the same: The analysis is broken, or at least behaves completely differently from all standard Lucene analyzers.
-Aaron
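
(A quick way to rule out a bug in the calling code is to run the same sentence through a known-good analyzer such as Lucene's CJKAnalyzer and compare. A minimal sketch, assuming Lucene 4.2 and the tokens() helper sketched earlier:)

    import org.apache.lucene.analysis.cjk.CJKAnalyzer;
    import org.apache.lucene.util.Version;

    String text = "그 전문은 다음과 같다.";
    // CJKAnalyzer emits overlapping Hangul bigrams, so a non-empty result
    // here alongside an empty result from the Korean analyzer points at
    // the library rather than the calling code.
    List<String> reference = AnalyzerDump.tokens(new CJKAnalyzer(Version.LUCENE_42), text);
    System.out.println(reference); // e.g. [그, 전문, 문은, 다음, 음과, 같다]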

> the korean analyzer that has a korean morphological analyzer and dictionaries
> -----------------------------------------------------------------------------
>
>                 Key: LUCENE-4956
>                 URL: https://issues.apache.org/jira/browse/LUCENE-4956
>             Project: Lucene - Core
>          Issue Type: New Feature
>          Components: modules/analysis
>    Affects Versions: 4.2
>            Reporter: SooMyung Lee
>            Assignee: Christian Moen
>              Labels: newbie
>         Attachments: LUCENE-4956.patch, eval.patch, kr.analyzer.4x.tar, lucene-4956.patch, lucene4956.patch
>
>
> The Korean language has specific characteristics, and when developing a search service in Korean with Lucene & Solr there are some problems with searching and indexing. The Korean analyzer solves these problems with a Korean morphological analyzer. It consists of a Korean morphological analyzer, dictionaries, a Korean tokenizer, and a Korean filter. The Korean analyzer is made for Lucene and Solr; if you are developing a search service in Korean with Lucene, the Korean analyzer is the best choice.
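
(For readers of the description above: in Lucene 4.x an analyzer of this shape, a tokenizer followed by a filter, is typically assembled by overriding Analyzer.createComponents. A minimal sketch follows; KoreanTokenizer and KoreanFilter are hypothetical names implied by the description, and the real classes in the patch may have different constructors.)

    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.Tokenizer;

    public final class SketchKoreanAnalyzer extends Analyzer {
        @Override
        protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
            // Hypothetical constructors; the actual classes in the patch may differ.
            Tokenizer source = new KoreanTokenizer(reader);
            TokenStream result = new KoreanFilter(source);
            return new TokenStreamComponents(source, result);
        }
    }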


