Posted to issues@opennlp.apache.org by "Tim Allison (JIRA)" <ji...@apache.org> on 2019/06/10 16:18:00 UTC

[jira] [Comment Edited] (OPENNLP-1261) Language Detector fails to predict language on long input texts

    [ https://issues.apache.org/jira/browse/OPENNLP-1261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16859207#comment-16859207 ] 

Tim Allison edited comment on OPENNLP-1261 at 6/10/19 4:17 PM:
---------------------------------------------------------------

Sorry, on second thought... better than returning a {{Map<String, Integer>}} would be an IterableLangDetectorContextGenerator that implements {{Iterator<String>}}.  Example coming shortly.
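
A minimal sketch of what such a generator might look like (the class shape, constructor, and fixed ngram length below are assumptions for illustration, not the actual OpenNLP API):

{code:java}
import java.util.Iterator;
import java.util.NoSuchElementException;

/**
 * Hypothetical sketch: streams character ngrams one at a time instead of
 * materializing them all up front, so the consumer can tally how often each
 * ngram occurs, even in very long documents.
 */
public class IterableLangDetectorContextGenerator implements Iterator<String> {

    private final CharSequence text;
    private final int ngramLength;
    private int pos = 0;

    public IterableLangDetectorContextGenerator(CharSequence text, int ngramLength) {
        this.text = text;
        this.ngramLength = ngramLength;
    }

    @Override
    public boolean hasNext() {
        return pos + ngramLength <= text.length();
    }

    @Override
    public String next() {
        if (!hasNext()) {
            throw new NoSuchElementException();
        }
        String ngram = text.subSequence(pos, pos + ngramLength).toString();
        pos++;
        return ngram;
    }
}
{code}

A caller could then walk the iterator and keep its own per-ngram counts, rather than receiving a pre-built {{Map<String, Integer>}}.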


was (Author: tallison@mitre.org):
Sorry, on second thought... better than returning a {{Map<String, Integer>}} would be an IterableLangDetectorContextGenerator that implements {{Iterable<String>}}.

> Language Detector fails to predict language on long input texts
> ---------------------------------------------------------------
>
>                 Key: OPENNLP-1261
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-1261
>             Project: OpenNLP
>          Issue Type: Improvement
>          Components: Language Detector
>            Reporter: Joern Kottmann
>            Assignee: Joern Kottmann
>            Priority: Major
>         Attachments: langid_plus_minus_rollups.zip
>
>
> If the input text is very long, e.g. 100k chars, then the lang detect component fails to detect the language correctly, even though the text is only written in one language.
> This issue was tracked down to the context generator, where the counts of the ngrams are ignored.
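
For illustration, a sketch of the failure mode described above (hypothetical code, not the actual OpenNLP context generator): if ngrams are collected into a set, their per-ngram counts are discarded, so a 100k-char single-language text carries no more evidence per ngram than a short snippet containing the same distinct ngrams.

{code:java}
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical illustration only: deduplicating ngrams throws away their
 * frequencies, which is the kind of information loss described in this issue.
 */
public class DedupedNgramExample {

    static Set<String> dedupedNgrams(CharSequence text, int n) {
        Set<String> ngrams = new HashSet<>();
        for (int i = 0; i + n <= text.length(); i++) {
            ngrams.add(text.subSequence(i, i + n).toString()); // repeat counts are lost here
        }
        return ngrams;
    }
}
{code}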



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)