Posted to users@opennlp.apache.org by Markus Jelsma <ma...@openindex.io> on 2020/02/21 15:06:58 UTC

LemmatizerTrainerME gets stuck with GaussianSmoothing enabled

Hello,

When GaussianSmoothing is enabled, the LemmatizerTrainerME gets stuck. The log-likelihood turns to NaN right after the first iteration, like so:
Computing model parameters in 16 threads...
Performing 10000 iterations.
  1:  ... loglikelihood=-1238383.3965208859     0.8032247485201534
  2:  ... loglikelihood=NaN     0.8178054720724305
  3:  ... loglikelihood=NaN     0.8032247485201534
  4:  ... loglikelihood=NaN     0.8032247485201534
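(Editorial note: the "stuck" behaviour after that point is consistent with plain NaN propagation in Java floating-point arithmetic; once one update makes the log-likelihood NaN, every later iteration that adds to it stays NaN. A minimal generic sketch of that effect, not the actual GISTrainer code:)

```java
public class NanPropagationDemo {
    public static void main(String[] args) {
        // A Gaussian prior adds a penalty term to the log-likelihood; if any
        // intermediate value overflows or divides 0 by 0, the result is NaN.
        double logLikelihood = -1238383.3965208859;
        double badPenalty = 0.0 / 0.0;      // NaN (hypothetical bad update)
        logLikelihood += badPenalty;        // NaN absorbs the whole sum
        System.out.println(logLikelihood);  // NaN

        // Subsequent iterations cannot recover: NaN + anything is still NaN.
        logLikelihood += -5.0;
        System.out.println(Double.isNaN(logLikelihood)); // true
    }
}
```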

Is this known to happen? Did I stumble upon some bug?

Many thanks,
Markus

Re: LemmatizerTrainerME gets stuck with GaussianSmoothing enabled

Posted by Rodrigo Agerri <ra...@apache.org>.
Hello,

I do not know why this is happening, but while testing the lemmatizer,
the best performance (word and sentence accuracy) was always obtained
with the Perceptron trainer (across a number of languages), so I would
recommend training perceptron models (perhaps you have already tried
this, but just in case).
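(Editorial note: switching trainers is done through the training parameters file; something like the sketch below, where the parameter keys follow the OpenNLP documentation but the file names and values are hypothetical:)

```
# perceptron.params -- training parameters file (keys per OpenNLP docs)
Algorithm=PERCEPTRON
Iterations=500
Cutoff=5
```

which would then be passed to the CLI trainer, e.g.
`opennlp LemmatizerTrainerME -params perceptron.params -lang en -data en-lemmatizer.train -model en-lemmatizer.bin`.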

Best,

R

On Fri, 21 Feb 2020 at 16:07, Markus Jelsma <ma...@openindex.io> wrote:
>
> Hello,
>
> When GaussianSmoothing is enabled, the LemmatizerTrainerME gets stuck. The loglikelihood gets warped to NaN right after the first iteration, like so:
> Computing model parameters in 16 threads...
> Performing 10000 iterations.
>   1:  ... loglikelihood=-1238383.3965208859     0.8032247485201534
>   2:  ... loglikelihood=NaN     0.8178054720724305
>   3:  ... loglikelihood=NaN     0.8032247485201534
>   4:  ... loglikelihood=NaN     0.8032247485201534
>
> Is this known to happen? Did i stumble upon some bug?
>
> Many thanks,
> Markus