Posted to dev@lucene.apache.org by "Grant Ingersoll (JIRA)" <ji...@apache.org> on 2008/09/26 14:35:44 UTC

[jira] Commented: (LUCENE-1406) new Arabic Analyzer (Apache license)

    [ https://issues.apache.org/jira/browse/LUCENE-1406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12634838#action_12634838 ] 

Grant Ingersoll commented on LUCENE-1406:
-----------------------------------------

Very cool.  I've used a modified version of the light-8 stemmer before, and it is indeed pretty good.

Can you provide it as a patch?  Also, unit tests would be good.  See the How To Contribute section on the Lucene wiki: http://wiki.apache.org/lucene-java/HowToContribute



> new Arabic Analyzer (Apache license)
> ------------------------------------
>
>                 Key: LUCENE-1406
>                 URL: https://issues.apache.org/jira/browse/LUCENE-1406
>             Project: Lucene - Java
>          Issue Type: New Feature
>          Components: Analysis
>            Reporter: Robert Muir
>            Priority: Minor
>         Attachments: arabic.zip
>
>
> I've noticed there is no Arabic analyzer for Lucene, most likely because Tim Buckwalter's morphological dictionary is GPL.
> However, a full morphological analysis engine is not necessary for quality Arabic search.
> This implementation provides the light-8 stemming algorithm described in the following paper: http://ciir.cs.umass.edu/pubfiles/ir-249.pdf
> As you can see from the paper, the improvement of this method over searching surface forms (as Lucene currently does) is significant, with almost 100% improvement in average precision.
> While I personally don't think all the choices were the best, and some easy improvements are still possible, the major motivation for implementing it exactly as presented in the paper is that the algorithm is TREC-tested, so the precision/recall improvements to Lucene are already documented.
> For a stopword list, I used the list available at http://members.unine.ch/jacques.savoy/clef/index.html simply because its creator documents the data as BSD-licensed.
> This implementation (Analyzer) consists of the above-mentioned stopword list plus two filters (rough sketches of both pieces follow this description):
>  ArabicNormalizationFilter: performs orthographic normalization (e.g. normalizing hamza seated on alif, alif maksura, and teh marbuta, and removing harakat and tatweel)
>  ArabicStemFilter: performs Arabic light stemming
> Both filters operate directly on the term buffer for maximum performance; there is no object creation in this Analyzer.
> There are no external dependencies. I've indexed about half a billion words of Arabic text and tested against that.
> If there are any issues with this implementation I am willing to fix them. I use Lucene on a daily basis and would like to give something back. Thanks.
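
A minimal, Lucene-independent sketch of the kind of in-place orthographic normalization described above. The class and method names here are illustrative, not taken from the attached arabic.zip; the character folds shown are the standard ones from the light-8 paper (hamza/madda forms of alif to bare alif, alif maksura to yeh, teh marbuta to heh, with harakat and tatweel removed).

// Illustrative only: a self-contained, in-place normalization routine in the
// spirit of ArabicNormalizationFilter. Class and method names are hypothetical.
public final class ArabicNormalizationSketch {

  // Standard Unicode Arabic code points.
  private static final char ALEF         = '\u0627'; // bare alif
  private static final char ALEF_MADDA   = '\u0622'; // alif with madda above
  private static final char ALEF_HAMZA_A = '\u0623'; // alif with hamza above
  private static final char ALEF_HAMZA_B = '\u0625'; // alif with hamza below
  private static final char ALEF_MAKSURA = '\u0649';
  private static final char YEH          = '\u064A';
  private static final char TEH_MARBUTA  = '\u0629';
  private static final char HEH          = '\u0647';
  private static final char TATWEEL      = '\u0640';
  private static final char FATHATAN     = '\u064B'; // first harakah
  private static final char SUKUN        = '\u0652'; // last harakah

  /**
   * Normalizes buffer[0..len) in place and returns the new length.
   * Nothing is allocated, mirroring the "no object creation" goal above.
   */
  public static int normalize(char[] buffer, int len) {
    int out = 0;
    for (int i = 0; i < len; i++) {
      char c = buffer[i];
      if (c == TATWEEL || (c >= FATHATAN && c <= SUKUN)) {
        continue; // drop tatweel and harakat (diacritics)
      }
      if (c == ALEF_MADDA || c == ALEF_HAMZA_A || c == ALEF_HAMZA_B) {
        c = ALEF;            // seated hamza / madda -> bare alif
      } else if (c == ALEF_MAKSURA) {
        c = YEH;             // alif maksura -> yeh
      } else if (c == TEH_MARBUTA) {
        c = HEH;             // teh marbuta -> heh
      }
      buffer[out++] = c;
    }
    return out;
  }

  public static void main(String[] args) {
    // A fully vowelled term; normalization drops the harakat and folds letters.
    char[] term = "\u0623\u064E\u0643\u064E\u0644\u064E\u0629".toCharArray();
    int len = normalize(term, term.length);
    System.out.println(new String(term, 0, len)); // prints the folded form
  }
}

And a rough sketch of how such an Analyzer might wire the pieces together with the Lucene 2.x API current at the time (WhitespaceTokenizer, StopFilter). The constructors of ArabicNormalizationFilter and ArabicStemFilter are assumed here to take a TokenStream, the usual TokenFilter convention; the attached classes may differ.

// Illustrative wiring only; the analyzer class name is hypothetical and the
// two Arabic filters are the ones from the attachment, not shown here.
import java.io.Reader;
import java.util.Set;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceTokenizer;

public class ArabicLightAnalyzer extends Analyzer {
  private final Set stopSet; // loaded from the BSD-licensed stopword list

  public ArabicLightAnalyzer(Set stopSet) {
    this.stopSet = stopSet;
  }

  public TokenStream tokenStream(String fieldName, Reader reader) {
    TokenStream result = new WhitespaceTokenizer(reader);
    result = new StopFilter(result, stopSet);        // drop stopwords
    result = new ArabicNormalizationFilter(result);  // orthographic folding
    result = new ArabicStemFilter(result);           // light stemming
    return result;
  }
}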

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: java-dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: java-dev-help@lucene.apache.org