Posted to issues@opennlp.apache.org by "Vinh Khuc (JIRA)" <ji...@apache.org> on 2014/08/04 06:29:11 UTC

[jira] [Commented] (OPENNLP-703) Parallel computing the objective function and its gradient for MAXENT_QN

    [ https://issues.apache.org/jira/browse/OPENNLP-703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14084288#comment-14084288 ] 

Vinh Khuc commented on OPENNLP-703:
-----------------------------------

QNTrainer now accepts the parameter "Threads" for computing the objective function (the negative log-likelihood) and its gradient in parallel. In my small experiment with 4 threads, the training time for CoNLL 2000 was reduced from 1,788 seconds to 952 seconds. The speed-up factor is less than 4 because the for-loops in QNMinimizer still run sequentially.
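
For reference, enabling the new parameter looks roughly like this (a minimal sketch against the OpenNLP TrainingParameters API; the trainer wiring for the chunking task is omitted and only indicated in comments):

    import opennlp.tools.util.TrainingParameters;

    public class QNThreadsExample {
        public static void main(String[] args) {
            TrainingParameters params = new TrainingParameters();
            // Select the quasi-Newton (L-BFGS) maxent trainer.
            params.put(TrainingParameters.ALGORITHM_PARAM, "MAXENT_QN");
            // New: compute the negative log-likelihood and its gradient
            // with 4 worker threads; omitting "Threads" should keep the
            // old single-threaded behavior.
            params.put("Threads", "4");
            // params is then passed unchanged to a trainer, e.g.
            // ChunkerME.train(...) for the CoNLL 2000 chunking task.
        }
    }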

> Parallel computing the objective function and its gradient for MAXENT_QN
> ------------------------------------------------------------------------
>
>                 Key: OPENNLP-703
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-703
>             Project: OpenNLP
>          Issue Type: Improvement
>          Components: Machine Learning
>    Affects Versions: tools-1.5.3, maxent-3.0.3
>            Reporter: Vinh Khuc
>            Assignee: Vinh Khuc
>             Fix For: 1.6.0
>
>
> Although the current L-BFGS trainer itself runs sequentially, Maxent's objective function (i.e., the negative log-likelihood) and its gradient can be computed in parallel. This JIRA focuses on improving the training time of MAXENT_QN.
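
The decomposition the description relies on can be sketched as follows. This is an illustrative Java sketch, not the actual OpenNLP code; the class and the helper accumulateExample are hypothetical. Because the negative log-likelihood is a sum over training examples, each thread accumulates the objective and gradient over its own slice of the data, and the partial results are then added together:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class ParallelObjectiveSketch {

        // Partial objective value and gradient computed over one data slice.
        static final class Partial {
            double value;            // partial negative log-likelihood
            final double[] gradient; // one entry per model parameter
            Partial(int dim) { gradient = new double[dim]; }
        }

        // Placeholder for the per-example term; a real maxent trainer would
        // evaluate -log p(y_i | x_i; w) and the expected-minus-observed
        // feature counts here.
        static void accumulateExample(int i, double[] w, Partial out) {
            out.value += 0.0;
        }

        static Partial negLogLikelihood(final int numExamples, final double[] w,
                int threads) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            List<Future<Partial>> parts = new ArrayList<Future<Partial>>();
            int chunk = (numExamples + threads - 1) / threads;
            for (int t = 0; t < threads; t++) {
                final int begin = t * chunk;
                final int end = Math.min(begin + chunk, numExamples);
                // Each worker sums the per-example terms over [begin, end).
                Callable<Partial> task = new Callable<Partial>() {
                    public Partial call() {
                        Partial p = new Partial(w.length);
                        for (int i = begin; i < end; i++) {
                            accumulateExample(i, w, p);
                        }
                        return p;
                    }
                };
                parts.add(pool.submit(task));
            }
            // Reduce step: the objective and its gradient are both additive.
            Partial total = new Partial(w.length);
            for (Future<Partial> f : parts) {
                Partial p = f.get();
                total.value += p.value;
                for (int j = 0; j < w.length; j++) {
                    total.gradient[j] += p.gradient[j];
                }
            }
            pool.shutdown();
            return total;
        }

        public static void main(String[] args) throws Exception {
            Partial r = negLogLikelihood(10000, new double[50], 4);
            System.out.println("NLL = " + r.value);
        }
    }

Since L-BFGS only consumes the total objective value and gradient vector, this data-parallel evaluation leaves the optimizer's own update loop untouched, which is consistent with the comment above about the remaining sequential for-loops in QNMinimizer.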



--
This message was sent by Atlassian JIRA
(v6.2#6252)