Posted to issues@opennlp.apache.org by "Jörn Kottmann (JIRA)" <ji...@apache.org> on 2011/06/08 09:57:58 UTC

[jira] [Created] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Refactor the PerceptronTrainer class to address a couple of problems
--------------------------------------------------------------------

                 Key: OPENNLP-199
                 URL: https://issues.apache.org/jira/browse/OPENNLP-199
             Project: OpenNLP
          Issue Type: Improvement
          Components: Maxent
    Affects Versions: maxent-3.0.1-incubating
            Reporter: Jörn Kottmann
            Assignee: Jason Baldridge
             Fix For: tools-1.5.2-incubating, maxent-3.0.2-incubating


- Changed the update to be the actual perceptron update: when a label
  that is not the gold label is chosen for an event, the parameters
  associated with that label are decremented, and the parameters
  associated with the gold label are incremented. I checked this
  empirically on several datasets, and it works better than the
  previous update (and it involves fewer updates).
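The update described above can be sketched as follows (hypothetical names and layout; not the actual PerceptronTrainer code): when the predicted label differs from the gold label, the gold label's parameters for the active features go up and the predicted label's go down.

```java
// Sketch of the standard perceptron update, assuming a dense
// params[label][feature] layout and binary feature indices.
public class PerceptronUpdateSketch {

    static void update(double[][] params, int[] features,
                       int goldLabel, int predictedLabel, double stepsize) {
        if (predictedLabel == goldLabel) {
            return; // correct prediction: no update at all
        }
        for (int f : features) {
            params[goldLabel][f] += stepsize;      // pull the gold label up
            params[predictedLabel][f] -= stepsize; // push the wrong label down
        }
    }

    public static void main(String[] args) {
        double[][] params = new double[2][3];
        // Gold label 1, predicted label 0, features 0 and 2 active.
        update(params, new int[] {0, 2}, 1, 0, 0.5);
        System.out.println(params[1][0] + " " + params[0][0]); // 0.5 -0.5
    }
}
```

Note that a correct prediction triggers no update, which is why this variant involves fewer updates than one that always adjusts parameters.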

- The step size is reduced to stepsize/1.05 on every iteration, ensuring
  better stability toward the end of training. This is actually the
  main reason that the training set accuracy obtained during parameter
  update continued to be different from that computed when parameters
  aren't updated. Now, the parameters don't jump as much in later
  iterations, so things settle down and those two accuracies converge
  if enough iterations are allowed.
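Reading the schedule above as dividing the step size by 1.05 once per iteration (an interpretation of the description, not the verbatim trainer code), the decay looks like this:

```java
public class StepsizeDecaySketch {

    // Divide the step size by 1.05 once per iteration.
    static double decay(double stepsize, int iterations) {
        for (int i = 0; i < iterations; i++) {
            stepsize /= 1.05;
        }
        return stepsize;
    }

    public static void main(String[] args) {
        // Later iterations take much smaller steps, so the parameters
        // stop jumping and the two accuracies can converge.
        System.out.println(decay(1.0, 10));  // roughly 0.61
        System.out.println(decay(1.0, 100)); // roughly 0.0076
    }
}
```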

- Training set accuracy is computed once per iteration.

- Training stops if the current training set accuracy changes less
  than a given tolerance from the accuracies obtained in each of the
  previous three iterations.
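A minimal sketch of that stopping rule (hypothetical names; the real trainer may track the window differently): stop once the current accuracy is within the tolerance of each of the previous three accuracies.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class StoppingRuleSketch {

    // True when the current accuracy differs by less than `tolerance`
    // from each of the three most recent accuracies.
    static boolean shouldStop(Deque<Double> lastThree, double current,
                              double tolerance) {
        if (lastThree.size() < 3) {
            return false; // not enough history yet
        }
        for (double prev : lastThree) {
            if (Math.abs(current - prev) >= tolerance) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Deque<Double> window = new ArrayDeque<>();
        window.add(0.912);
        window.add(0.9125);
        window.add(0.9127);
        System.out.println(shouldStop(window, 0.9128, 0.001)); // true
        System.out.println(shouldStop(window, 0.93, 0.001));   // false
    }
}
```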

- Averaging is done differently than before. Rather than doing an
  immediate update, parameters are simply accumulated after iterations
  (this makes the code much easier to understand/maintain). Also, not
  every iteration is used, as this tends to give too much weight to the
  final iterations, which don't actually differ that much from one
  another. I tried a few things and found a simple method that works
  well: sum the parameters from the first 20 iterations and then sum
  parameters from any further iterations that are perfect squares (25,
  36, 49, etc). This gets a good (diverse) sample of parameters for
  averaging since the distance between subsequent parameter sets gets
  larger as the number of iterations gets bigger.
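The selection rule above can be sketched as a simple predicate (hypothetical name, assuming 1-based iteration counting): an iteration contributes to the average if it is among the first 20, or if its number is a perfect square.

```java
public class AveragingScheduleSketch {

    // True if iteration i (1-based) contributes parameters to the average:
    // the first 20 iterations, plus any later perfect-square iteration.
    static boolean useIteration(int i) {
        if (i <= 20) {
            return true;
        }
        int root = (int) Math.round(Math.sqrt(i));
        return root * root == i;
    }

    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();
        for (int i = 21; i <= 50; i++) {
            if (useIteration(i)) {
                sb.append(i).append(' ');
            }
        }
        System.out.println(sb.toString().trim()); // 25 36 49
    }
}
```

The gaps between selected iterations widen (25, 36, 49, 64, ...), which is exactly what yields the diverse parameter sample the description mentions.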

- Added prepositional phrase attachment dataset to
  src/test/resources/data/ppa. This is done with permission from
  Adwait Ratnaparkhi -- see the README for details.

- Created unit test to check perceptron training consistency, using
  the prepositional phrase attachment data. It would be good to do the
  same for maxent.

- Added ListEventStream to make a stream out of a List<Event>.

- Added some helper methods, e.g. maxIndex, to simplify the code in
  the main algorithm.
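A helper like maxIndex presumably returns the index of the largest value in an array, i.e. the highest-scoring label; a plausible sketch (not necessarily the committed signature):

```java
public class MaxIndexSketch {

    // Index of the largest value; on ties, the first occurrence wins.
    static int maxIndex(double[] values) {
        int best = 0;
        for (int i = 1; i < values.length; i++) {
            if (values[i] > values[best]) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(maxIndex(new double[] {0.1, 0.7, 0.2})); // 1
    }
}
```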

- Training stats are no longer shown for every iteration; now they are
  shown for the first 10 iterations and every 10th iteration after that.

- modelDistribution, params, evalParams and others are no longer class
  variables. They have been pushed into the findParameters
  method. Other variables could/should be made non-global too, but
  leaving as is for now.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13060419#comment-13060419 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

Jason, I would like to get this issue closed soon.

I suggest the following changes:
- Make the step size decay configurable and disable it by default; users who want this feature can enable it and provide a step size decrement
- Make the special averaging configurable and also disabled by default.

It looks to me like these settings should be fine-tuned per data set rather than hard-coded. When fine-tuning, it is always good to start with the simplest configuration and then test changes against it.

Please let me know what you think.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046855#comment-13046855 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

I was thinking about something like assertTrue(0.75 < accuracy); then 0.75 is the minimum threshold, and greater values never fail the test.
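The same check, sketched with a plain boolean rather than JUnit's assertTrue (hypothetical class name; the accuracy value is illustrative):

```java
public class AccuracyThresholdSketch {

    // True when accuracy clears the minimum threshold of 0.75;
    // higher accuracies can never fail this check.
    static boolean passes(double accuracy) {
        return 0.75 < accuracy;
    }

    public static void main(String[] args) {
        System.out.println(passes(0.82)); // true
        System.out.println(passes(0.70)); // false
    }
}
```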


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045983#comment-13045983 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

Maybe my answer wasn't that clear. In the beginning I compared the versions before and after your commit. Once I was done with that, I removed the step size reduction, and in the next step additionally removed the averaging handling. It should be easy to follow if you read the test result posts from the top down.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045838#comment-13045838 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

> Other variables could/should be made non-global too, but leaving as is for now.

+1, let's make them local too. I will test to ensure we do not get a performance regression from this.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045940#comment-13045940 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

Thanks for doing the new JIRA. Will get this right eventually...

2011/6/8 Jörn Kottmann (JIRA) <ji...@apache.org>




-- 
Jason Baldridge
Assistant Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge



[jira] [Issue Comment Edited] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045840#comment-13045840 ] 

Jörn Kottmann edited comment on OPENNLP-199 at 6/8/11 1:18 PM:
---------------------------------------------------------------

I trained the POS Tagger with our training data and tested on section 00 of the WSJ.
Training was done without a tag dictionary and with exactly 100 iterations in both cases.

Result before this refactoring:
Accuracy: 0.9303351919226712

Result after refactoring:
Accuracy: 0.9635960474478483


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045865#comment-13045865 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

The new code is also faster: the POS Tagger can now be trained on the English data in around 10 minutes, where it took more than 15 minutes before.

> Refactor the PerceptronTrainer class to address a couple of problems
> --------------------------------------------------------------------
>
>                 Key: OPENNLP-199
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-199
>             Project: OpenNLP
>          Issue Type: Improvement
>          Components: Maxent
>    Affects Versions: maxent-3.0.1-incubating
>            Reporter: Jörn Kottmann
>            Assignee: Jason Baldridge
>             Fix For: tools-1.5.2-incubating, maxent-3.0.2-incubating
>
>
> - Changed the update to be the actual perceptron update: when a label
>   that is not the gold label is chosen for an event, the parameters
>   associated with that label are decremented, and the parameters
>   associated with the gold label are incremented. I checked this
>   empirically on several datasets, and it works better than the
>   previous update (and it involves fewer updates).
> - stepsize is decreased by stepsize/1.05 on every iteration, ensuring
>   better stability toward the end of training. This is actually the
>   main reason that the training set accuracy obtained during parameter
>   update continued to be different from that computed when parameters
>   aren't updated. Now, the parameters don't jump as much in later
>   iterations, so things settle down and those two accuracies converge
>   if enough iterations are allowed.
> - Training set accuracy is computed once per iteration.
> - Training stops if the current training set accuracy changes less
>   than a given tolerance from the accuracies obtained in each of the
>   previous three iterations.
> - Averaging is done differently than before. Rather than doing an
>   immediate update, parameters are simply accumulated after iterations
>   (this makes the code much easier to understand/maintain). Also, not
>   every iteration is used, as this tends to give to much weight to the
>   final iterations, which don't actually differ that much from one
>   another. I tried a few things and found a simple method that works
>   well: sum the parameters from the first 20 iterations and then sum
>   parameters from any further iterations that are perfect squares (25,
>   36, 49, etc). This gets a good (diverse) sample of parameters for
>   averaging since the distance between subsequent parameter sets gets
>   larger as the number of iterations gets bigger.
> - Added prepositional phrase attachment dataset to
>   src/test/resources/data/ppa. This is done with permission from
>   Adwait Ratnarparkhi -- see the README for details. 
> - Created unit test to check perceptron training consistency, using
>   the prepositional phrase attachment data. It would be good to do the
>   same for maxent.
> - Added ListEventStream to make a stream out of List<Event>
> - Added some helper methods, e.g. maxIndex, to simplify the code in
>   the main algorithm.
> - The training stats aren't shown for every iteration. Now it is just
>   the first 10 and then every 10th iteration after that.
> - modelDistribution, params, evalParams and others are no longer class
>   variables. They have been pushed into the findParameters
>   method. Other variables could/should be made non-global too, but
>   leaving as is for now.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13060544#comment-13060544 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

+1 to all this. Did you do it already?

2011/7/6 Jörn Kottmann (JIRA) <ji...@apache.org>



-- 
Jason Baldridge
Assistant Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046590#comment-13046590 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

Good point. I've added a delta of .00001.

I tested it on my linux box and my mac laptop, and both report the same
result. Perhaps you made some changes while testing out the taggers that
led to the different result?
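The delta of .00001 mentioned above amounts to replacing a strict floating-point equality assertion with a tolerance-based one, roughly as below. This is a sketch of the comparison semantics; the actual test presumably uses JUnit's three-argument assertEquals(expected, actual, delta):

```java
public class DeltaCompare {

    // Tolerance-based comparison of two accuracies, mirroring the
    // behavior of JUnit's assertEquals(double, double, double).
    public static boolean withinDelta(double expected, double actual, double delta) {
        return Math.abs(expected - actual) < delta;
    }
}
```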

Jason

2011/6/9 Jörn Kottmann (JIRA) <ji...@apache.org>







[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045970#comment-13045970 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

Sorry, can you summarize what the results were and whether you used averaging and stepsize reduction? 

I've been using 5000 max iterations, though models tend to stop anywhere from 50-1000 iterations.

For NER, do you have a breakdown of P/R/F for each type of entity?

Jason


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046446#comment-13046446 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

The bug is not caused by the strict compare, the delta is quite large.

Here is the stack trace:
Error Message

expected:<0.7833622183708839> but was:<0.7813815300817034>
Stacktrace

junit.framework.AssertionFailedError: expected:<0.7833622183708839> but was:<0.7813815300817034>
	at junit.framework.Assert.fail(Assert.java:47)
	at junit.framework.Assert.failNotEquals(Assert.java:283)
	at junit.framework.Assert.assertEquals(Assert.java:64)
	at junit.framework.Assert.assertEquals(Assert.java:71)
	at opennlp.perceptron.PerceptronPrepAttachTest.testPerceptronOnPrepAttachData(PerceptronPrepAttachTest.java:60)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:592)
	at junit.framework.TestCase.runTest(TestCase.java:168)
	at junit.framework.TestCase.runBare(TestCase.java:134)
	at junit.framework.TestResult$1.protect(TestResult.java:110)
	at junit.framework.TestResult.runProtected(TestResult.java:128)
	at junit.framework.TestResult.run(TestResult.java:113)
	at junit.framework.TestCase.run(TestCase.java:124)
	at junit.framework.TestSuite.runTest(TestSuite.java:232)
	at junit.framework.TestSuite.run(TestSuite.java:227)
	at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
	at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:35)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:115)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:97)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:592)
	at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103)
	at $Proxy0.invoke(Unknown Source)
	at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150)
	at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91)
	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69)


[jira] [Issue Comment Edited] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045869#comment-13045869 ] 

Jörn Kottmann edited comment on OPENNLP-199 at 6/8/11 1:42 PM:
---------------------------------------------------------------

Test results for the name finder. Training was done with 30 iterations in the pre-refactoring version
and was terminated after more than 40 iterations by the new stopping criterion in the refactored version.

Before refactoring:
Precision: 0.8425312729948492
Recall: 0.8363769174579986
F-Measure: 0.8394428152492669

After refactoring:
Precision: 0.7919782460910945
Recall: 0.8509861212563915
F-Measure: 0.8204225352112676

      was (Author: joern):
    Test results for the name finder. Training was done with 30 iterations in the pre-refactoring version
and was terminated after more than 40 iterations by the new stopping criterion.

Before refactoring:
Precision: 0.8425312729948492
Recall: 0.8363769174579986
F-Measure: 0.8394428152492669

After refactoring:
Must be re-tested!

The results after refactoring are now better than first reported.
  

[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045942#comment-13045942 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

Based on my testing, I believe only the changed update behavior improves the performance for the POS tagger, and it slightly reduces it for the name finder. The step size reduction reduces the name finder performance and yields only a very small increase (0.4%) in POS tagger accuracy.

Are there data sets where the new averaging handling and step size reduction change the accuracy of a trained model? Otherwise we might want to disable them by default or remove them, to keep the code as simple as possible.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13047852#comment-13047852 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

OK, so what you say is that a bug could also cause an improvement.

+1 for using an equality check with a larger tolerance.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063288#comment-13063288 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

What should we call the new way of averaging?

Basically we now support three settings:
- none
- averaging (as done before)
- new averaging (which skips iterations)


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046802#comment-13046802 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

The error message is also a little confusing, because the expected and actual values are switched in the assertEquals call in this test. The accuracy seems to be higher on the build server than on your machine.

The test should ensure that the system is not broken, so the question becomes: when do we consider the system broken?

If a very minor code change alters the accuracy a little (maybe even improves it), we should not consider the system broken. So I believe we should use an accuracy threshold here which must be exceeded. This way the test fails if the accuracy goes down too much, but tolerates any improvement.
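Such a thresholded check might look like the following sketch; the class name, reference accuracy, and tolerance are placeholders, not the actual OpenNLP unit test:

```java
// Illustrative only: placeholder names and values, not the actual OpenNLP
// test. The idea is to fail only when accuracy drops more than a tolerance
// below a reference value, while tolerating any improvement.
public class AccuracyThresholdCheck {

    static final double REFERENCE_ACCURACY = 0.7997; // made-up reference
    static final double TOLERANCE = 0.005;           // e.g. half a percent

    // Pass as long as accuracy is at least reference - tolerance;
    // improvements over the reference always pass.
    static boolean accuracyOk(double actual) {
        return actual >= REFERENCE_ACCURACY - TOLERANCE;
    }

    public static void main(String[] args) {
        System.out.println(accuracyOk(0.80)); // true: improvement is fine
        System.out.println(accuracyOk(0.79)); // false: dropped too far
    }
}
```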

What do you think?


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046850#comment-13046850 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

+1 to switching the expected and actual

+1 to using a looser threshold -- say .5 %

-- 
Jason Baldridge
Assistant Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge



[jira] [Issue Comment Edited] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045869#comment-13045869 ] 

Jörn Kottmann edited comment on OPENNLP-199 at 6/8/11 1:30 PM:
---------------------------------------------------------------

Test results for the name finder. Training was done with 30 iterations for the pre-refactoring version; with the refactored version, training was terminated after more than 40 iterations by the new stop criterion.

Before refactoring:
Precision: 0.8425312729948492
Recall: 0.8363769174579986
F-Measure: 0.8394428152492669

After refactoring:
Precision: 0.8305209097578871
Recall: 0.8268809349890431
F-Measure: 0.828696925329429

The results after refactoring are now better than first reported.

      was (Author: joern):
    Test results for the name finder. Training was done with 30 iterations for the pre-refactoring version; with the refactored version, training was terminated after more than 40 iterations by the new stop criterion.

Before refactoring:
Precision: 0.8425312729948492
Recall: 0.8363769174579986
F-Measure: 0.8394428152492669

After refactoring:
Precision: 0.8305209097578871
Recall: 0.8268809349890431
F-Measure: 0.828696925329429

The results after refactoring are now better than first reported.

The precision is going down here.

Do you think we need a bigger stepsize decrease for the name finder,
or maybe a smaller stepsize?
Otherwise the change might be caused by the different averaging.

I will do more tests to figure that out.
  

[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045946#comment-13045946 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

The name finder training always stops by itself after around 40 iterations; I am not sure what the exact number is, since you changed the reporting. The POS Tagger was trained with 100 iterations; should that be bigger?

In the first name finder test it was restricted to 30 iterations. I guess I should re-test and edit my comment.

I think the new parameter update is great, because the perceptron POS tagger now has performance similar to the maxent one.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063278#comment-13063278 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

How should we specify the step size decrease? Should we keep the current approach? Otherwise it might be more intuitive to specify a percentage instead, e.g. a 5% step size decrease per iteration.
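For illustration, here is a sketch of both options. It assumes "stepsize is decreased by stepsize/1.05" means stepsize /= 1.05 each iteration (an assumption about the actual code), which is roughly a 4.76% decrease, slightly less than 5%:

```java
// Sketch comparing the two ways of specifying the per-iteration step size
// decrease; assumes the current code does stepsize /= 1.05, which is not
// confirmed here.
public class StepsizeDecay {

    // Current approach: divide by a fixed ratio each iteration.
    static double decayByRatio(double stepsize, double ratio) {
        return stepsize / ratio;
    }

    // Alternative: specify a percentage decrease per iteration.
    static double decayByPercent(double stepsize, double percent) {
        return stepsize * (1.0 - percent / 100.0);
    }

    public static void main(String[] args) {
        // Dividing by 1.05 is a ~4.76% decrease, not exactly 5%:
        System.out.println(decayByRatio(1.0, 1.05));  // ~0.95238
        System.out.println(decayByPercent(1.0, 5.0)); // 0.95
    }
}
```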


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045955#comment-13045955 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

I clarified the test result for the name finder, see edited comment above.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063708#comment-13063708 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

How about "skipped averaging"?



> Refactor the PerceptronTrainer class to address a couple of problems
> --------------------------------------------------------------------
>
>                 Key: OPENNLP-199
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-199
>             Project: OpenNLP
>          Issue Type: Improvement
>          Components: Maxent
>    Affects Versions: maxent-3.0.1-incubating
>            Reporter: Jörn Kottmann
>            Assignee: Jason Baldridge
>             Fix For: tools-1.5.2-incubating, maxent-3.0.2-incubating
>
>
> - Changed the update to be the actual perceptron update: when a label
>   that is not the gold label is chosen for an event, the parameters
>   associated with that label are decremented, and the parameters
>   associated with the gold label are incremented. I checked this
>   empirically on several datasets, and it works better than the
>   previous update (and it involves fewer updates).
> - stepsize is decreased by stepsize/1.05 on every iteration, ensuring
>   better stability toward the end of training. This is actually the
>   main reason that the training set accuracy obtained during parameter
>   update continued to be different from that computed when parameters
>   aren't updated. Now, the parameters don't jump as much in later
>   iterations, so things settle down and those two accuracies converge
>   if enough iterations are allowed.
> - Training set accuracy is computed once per iteration.
> - Training stops if the current training set accuracy changes less
>   than a given tolerance from the accuracies obtained in each of the
>   previous three iterations.
> - Averaging is done differently than before. Rather than doing an
>   immediate update, parameters are simply accumulated after iterations
>   (this makes the code much easier to understand/maintain). Also, not
>   every iteration is used, as this tends to give to much weight to the
>   final iterations, which don't actually differ that much from one
>   another. I tried a few things and found a simple method that works
>   well: sum the parameters from the first 20 iterations and then sum
>   parameters from any further iterations that are perfect squares (25,
>   36, 49, etc). This gets a good (diverse) sample of parameters for
>   averaging since the distance between subsequent parameter sets gets
>   larger as the number of iterations gets bigger.
> - Added ListEventStream to make a stream out of List<Event>
> - Added some helper methods, e.g. maxIndex, to simplify the code in
>   the main algorithm.
> - The training stats aren't shown for every iteration. Now it is just
>   the first 10 and then every 10th iteration after that.
> - modelDistribution, params, evalParams and others are no longer class
>   variables. They have been pushed into the findParameters
>   method. Other variables could/should be made non-global too, but
>   leaving as is for now.
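The update rule, stepsize decay, and stopping check described above can be sketched roughly as follows. This is an illustrative reconstruction, not the actual PerceptronTrainer code; the class and method names (`PerceptronSketch`, `update`, `converged`, `decay`, `maxIndex`) are invented for the example.

```java
// Illustrative sketch of the update/decay/stopping scheme described
// in the change list above. NOT the shipped OpenNLP code; all names
// here are hypothetical.
public class PerceptronSketch {

    // One perceptron update: when the predicted label differs from the
    // gold label, decrement the predicted label's parameters and
    // increment the gold label's, scaled by the current stepsize.
    static void update(double[][] params, int[] features,
                       int predicted, int gold, double stepsize) {
        if (predicted == gold) return; // correct guesses need no update
        for (int f : features) {
            params[predicted][f] -= stepsize;
            params[gold][f] += stepsize;
        }
    }

    // Stepsize decay: reduced to stepsize/1.05 after every iteration,
    // so parameters jump less in later iterations.
    static double decay(double stepsize) {
        return stepsize / 1.05;
    }

    // Stop when the current training accuracy is within `tolerance` of
    // each of the accuracies from the previous three iterations.
    static boolean converged(double[] accuracies, int iter, double tolerance) {
        if (iter < 3) return false;
        for (int k = 1; k <= 3; k++) {
            if (Math.abs(accuracies[iter] - accuracies[iter - k]) > tolerance)
                return false;
        }
        return true;
    }

    // The kind of small helper the change list mentions (e.g. maxIndex):
    // index of the highest-scoring label.
    static int maxIndex(double[] scores) {
        int best = 0;
        for (int i = 1; i < scores.length; i++)
            if (scores[i] > scores[best]) best = i;
        return best;
    }
}
```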
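The perfect-square averaging schedule is equally easy to sketch (again a hypothetical illustration, not the committed implementation):

```java
// Hypothetical sketch of the averaging schedule described above:
// accumulate parameters from each of the first 20 iterations, then
// only from iterations whose number is a perfect square (25, 36, 49, ...).
public class AveragingSchedule {
    static boolean useForAveraging(int iteration) {
        if (iteration <= 20) return true;
        int root = (int) Math.round(Math.sqrt(iteration));
        return root * root == iteration;
    }
}
```

Later samples are spaced further apart, which matches the rationale above: parameter sets drift less per iteration late in training, so sparser sampling still yields a diverse average.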

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

       

[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045932#comment-13045932 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

I tested again, but this time I removed the step size reduction:

POS Tagger:
Accuracy: 0.962971733654819

Name Finder:
Precision: 0.827536231884058
Recall: 0.8341855368882396
F-Measure: 0.8308475809385231


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045941#comment-13045941 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

How many iterations are you allowing? The way I set it up, you should be
able to put a very high number, but if you are stopping earlier, it might
not work as well.

-- 
Jason Baldridge
Assistant Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge



[jira] [Issue Comment Edited] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045869#comment-13045869 ] 

Jörn Kottmann edited comment on OPENNLP-199 at 6/8/11 1:30 PM:
---------------------------------------------------------------

Test results for the name finder. Training ran for 30 iterations with the pre-refactoring version;
the refactored version was terminated after more than 40 iterations by the new stopping criterion.

Before refactoring:
Precision: 0.8425312729948492
Recall: 0.8363769174579986
F-Measure: 0.8394428152492669

After refactoring:
Precision: 0.8305209097578871
Recall: 0.8268809349890431
F-Measure: 0.828696925329429

The results after refactoring are now better than first reported.

The precision is going down here.

Do you think we need a bigger stepsize decrease for the name finder,
or maybe a smaller stepsize?
Otherwise the change might be caused by the different averaging.

I will do more tests to figure that out.

      was (Author: joern):
    Test results for the name finder. Training was done on 30 iterations.

Before refactoring:
Precision: 0.8425312729948492
Recall: 0.8363769174579986
F-Measure: 0.8394428152492669

After refactoring:
Precision: 0.7919782460910945
Recall: 0.8509861212563915
F-Measure: 0.8204225352112676

The precision is going down here.

Do you think we need a bigger stepsize decrease for the name finder,
or maybe a smaller stepsize?
Otherwise the change might be caused by the different averaging.

I will do more tests to figure that out.
  

[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045973#comment-13045973 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

I edited the original posts and extended them, so just scroll up and look at my first posts where I reported the results.
NER was only tested on person entities, and I posted Precision, Recall and F-Measure. What do you mean by breakdown?


[jira] [Issue Comment Edited] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045869#comment-13045869 ] 

Jörn Kottmann edited comment on OPENNLP-199 at 6/8/11 1:32 PM:
---------------------------------------------------------------

Test results for the name finder. Training ran for 30 iterations with the pre-refactoring version;
the refactored version was terminated after more than 40 iterations by the new stopping criterion.

Before refactoring:
Precision: 0.8425312729948492
Recall: 0.8363769174579986
F-Measure: 0.8394428152492669

After refactoring:
Must be re-tested!

The results after refactoring are now better than first reported.

      was (Author: joern):
    Test results for the name finder. Training ran for 30 iterations with the pre-refactoring version;
the refactored version was terminated after more than 40 iterations by the new stopping criterion.

Before refactoring:
Precision: 0.8425312729948492
Recall: 0.8363769174579986
F-Measure: 0.8394428152492669

After refactoring:
Precision: 0.8305209097578871
Recall: 0.8268809349890431
F-Measure: 0.828696925329429

The results after refactoring are now better than first reported.
  

[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13047224#comment-13047224 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

To me, the point of the test is to check that the implementation hasn't
changed from its expected behavior. If changes are made that lead to
improvements, that is fine, but it should come from some non-trivial
improvement done with the intent to improve results. Also, improvement for
one metric on one dataset doesn't necessarily indicate an overall
improvement. So, I say we stick with a smallish margin on either side.




[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045869#comment-13045869 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

Test results for the name finder. Training was done on 30 iterations.

Before refactoring:
Precision: 0.8425312729948492
Recall: 0.8363769174579986
F-Measure: 0.8394428152492669

After refactoring:
Precision: 0.7919782460910945
Recall: 0.8509861212563915
F-Measure: 0.8204225352112676

The precision is going down here.

Do you think we need a bigger stepsize decrease for the name finder,
or maybe a smaller stepsize?
Otherwise the change might be caused by the different averaging.

I will do more tests to figure that out.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063929#comment-13063929 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

Changes are made now; I will start testing soon.

[jira] [Issue Comment Edited] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045840#comment-13045840 ] 

Jörn Kottmann edited comment on OPENNLP-199 at 6/8/11 8:46 AM:
---------------------------------------------------------------

I trained the POS Tagger with our training data and tested on section 00 of the WSJ.
Training was done without tag dict.

Result before this refactoring:
Accuracy: 0.9303351919226712

Result after refactoring:
Accuracy: 0.9635960474478483

      was (Author: joern):
    I trained the POS Tagger with our training data and tested on section 00 of the WSJ.

Result before this refactoring:
Accuracy: 0.9303351919226712

Result after refactoring:
Accuracy: 0.9635960474478483
  
[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045938#comment-13045938 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

And more tests, this time without the step size reduction and the special averaging handling:

POS Tagger:
Accuracy: 0.962971733654819

Name Finder:
Precision: 0.8305209097578871
Recall: 0.8268809349890431
F-Measure: 0.828696925329429


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13047886#comment-13047886 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

A bug, or minor change to parameters (e.g. early stopping), could indeed
improve performance for a single dataset.

2011/6/11 Jörn Kottmann (JIRA) <ji...@apache.org>




-- 
Jason Baldridge
Assistant Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge



[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13060421#comment-13060421 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

Additionally, I propose making the "tolerance" that stops training configurable, with the current value as the default.
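A minimal sketch of this proposal, assuming a simple accuracy-history check. The class name, the field names, and the default value shown are illustrative placeholders, not the actual trainer's API: training stops when the current training set accuracy is within the tolerance of each of the previous three iterations' accuracies.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a configurable stopping tolerance: training stops when the
// current training set accuracy differs by less than `tolerance` from
// each of the previous three iterations' accuracies. Names and the
// default value are illustrative placeholders.
public class ToleranceStop {

    static final double DEFAULT_TOLERANCE = 0.00001; // placeholder default

    private final double tolerance;
    private final Deque<Double> recent = new ArrayDeque<>();

    ToleranceStop(double tolerance) {
        this.tolerance = tolerance;
    }

    // Record this iteration's training accuracy and report whether
    // training should stop now.
    boolean shouldStop(double accuracy) {
        boolean stop = recent.size() == 3;
        for (double prev : recent) {
            if (Math.abs(accuracy - prev) >= tolerance) {
                stop = false;
            }
        }
        recent.addLast(accuracy);
        if (recent.size() > 3) {
            recent.removeFirst();
        }
        return stop;
    }
}
```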


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046592#comment-13046592 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

I didn't check in any of my changes. For some reason the code seems to run differently on our build server than on your local Mac and Linux boxes.


[jira] [Issue Comment Edited] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063278#comment-13063278 ] 

Jörn Kottmann edited comment on OPENNLP-199 at 7/11/11 10:57 AM:
-----------------------------------------------------------------

How should we specify the step size decrease? Should we keep the current approach? Otherwise it might be more intuitive to specify a percentage instead, e.g. a 5% step size decrease per iteration.

That could be implemented like this:
stepsize *= 1 - (stepSizeDecrease / 100.0);
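As a hedged sketch, the two formulations compare like this. The method and parameter names are illustrative, not the real trainer's fields:

```java
// Illustrative comparison of the two step size decay formulations
// discussed here; names are not the real trainer's fields.
public class StepSizeDecay {

    // Current scheme from the issue description: the step size is
    // divided by 1.05 each iteration (roughly a 4.76% reduction).
    static double decayByDivisor(double stepsize) {
        return stepsize / 1.05;
    }

    // Proposed scheme: an explicit percentage per iteration. The 100.0
    // literal matters: with integer operands, 5 / 100 would be 0 and
    // the step size would never shrink.
    static double decayByPercent(double stepsize, double percent) {
        return stepsize * (1 - percent / 100.0);
    }

    public static void main(String[] args) {
        double a = 1.0;
        double b = 1.0;
        for (int iter = 0; iter < 10; iter++) {
            a = decayByDivisor(a);
            b = decayByPercent(b, 5.0);
        }
        // After 10 iterations both have decayed to roughly 0.6 of the
        // initial value; the 5% percentage decay is slightly faster.
        System.out.println(a + " " + b);
    }
}
```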


      was (Author: joern):
    How should we specify the step size decrease. Should we use the current approach? Otherwise it might be more intuitive to specify a percentage instead, e.g. 5% step size decrease per iteration. 
  

[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063898#comment-13063898 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

+1 for the suggested skipped averaging.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045840#comment-13045840 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

I trained the POS Tagger with our training data and tested on section 00 of the WSJ.

Result before this refactoring:
Accuracy: 0.9303351919226712

Result after refactoring:
Accuracy: 0.9635960474478483

> Refactor the PerceptronTrainer class to address a couple of problems
> --------------------------------------------------------------------
>
>                 Key: OPENNLP-199
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-199
>             Project: OpenNLP
>          Issue Type: Improvement
>          Components: Maxent
>    Affects Versions: maxent-3.0.1-incubating
>            Reporter: Jörn Kottmann
>            Assignee: Jason Baldridge
>             Fix For: tools-1.5.2-incubating, maxent-3.0.2-incubating
>
>
> - Changed the update to be the actual perceptron update: when a label
>   that is not the gold label is chosen for an event, the parameters
>   associated with that label are decremented, and the parameters
>   associated with the gold label are incremented. I checked this
>   empirically on several datasets, and it works better than the
>   previous update (and it involves fewer updates).
> - stepsize is decreased by stepsize/1.05 on every iteration, ensuring
>   better stability toward the end of training. This is actually the
>   main reason that the training set accuracy obtained during parameter
>   update continued to be different from that computed when parameters
>   aren't updated. Now, the parameters don't jump as much in later
>   iterations, so things settle down and those two accuracies converge
>   if enough iterations are allowed.
> - Training set accuracy is computed once per iteration.
> - Training stops if the current training set accuracy changes less
>   than a given tolerance from the accuracies obtained in each of the
>   previous three iterations.
> - Averaging is done differently than before. Rather than doing an
>   immediate update, parameters are simply accumulated after iterations
>   (this makes the code much easier to understand/maintain). Also, not
>   every iteration is used, as this tends to give too much weight to the
>   final iterations, which don't actually differ that much from one
>   another. I tried a few things and found a simple method that works
>   well: sum the parameters from the first 20 iterations and then sum
>   parameters from any further iterations that are perfect squares (25,
>   36, 49, etc). This gets a good (diverse) sample of parameters for
>   averaging since the distance between subsequent parameter sets gets
>   larger as the number of iterations gets bigger.
> - Added prepositional phrase attachment dataset to
>   src/test/resources/data/ppa. This is done with permission from
>   Adwait Ratnaparkhi -- see the README for details.
> - Created unit test to check perceptron training consistency, using
>   the prepositional phrase attachment data. It would be good to do the
>   same for maxent.
> - Added ListEventStream to make a stream out of List<Event>
> - Added some helper methods, e.g. maxIndex, to simplify the code in
>   the main algorithm.
> - The training stats aren't shown for every iteration. Now it is just
>   the first 10 and then every 10th iteration after that.
> - modelDistribution, params, evalParams and others are no longer class
>   variables. They have been pushed into the findParameters
>   method. Other variables could/should be made non-global too, but
>   leaving as is for now.
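The update rule, stopping criterion, and averaging schedule described above can be sketched roughly as follows. This is a simplified illustration, not the actual OpenNLP PerceptronTrainer code; all names are hypothetical, and it assumes "decreased by stepsize/1.05" means dividing the stepsize by 1.05 each iteration:

```java
// Simplified sketch of the training-loop ideas described in the issue:
// stepsize decay, perfect-square averaging, and tolerance-based stopping.
// Hypothetical names; not the OpenNLP implementation.
public class PerceptronTrainingSketch {

    // Stepsize after n iterations, assuming the stepsize is divided
    // by 1.05 once per iteration.
    static double decayedStepsize(double initial, int iterations) {
        double s = initial;
        for (int i = 0; i < iterations; i++) {
            s /= 1.05;
        }
        return s;
    }

    // Whether iteration n's parameters are summed into the average:
    // every one of the first 20 iterations, then only perfect squares
    // (25, 36, 49, ...).
    static boolean useForAveraging(int iteration) {
        if (iteration <= 20) {
            return true;
        }
        int root = (int) Math.round(Math.sqrt(iteration));
        return root * root == iteration;
    }

    // Stop when the current training-set accuracy differs by less than
    // `tolerance` from each of the previous three iterations' accuracies.
    static boolean converged(double[] lastThree, double current, double tolerance) {
        for (double prev : lastThree) {
            if (Math.abs(current - prev) >= tolerance) {
                return false;
            }
        }
        return true;
    }
}
```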


[jira] [Closed] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jörn Kottmann closed OPENNLP-199.
---------------------------------

    Resolution: Fixed


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13063709#comment-13063709 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

+1 That sounds good.

-- 
Jason Baldridge
Assistant Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge



[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jason Baldridge (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13045987#comment-13045987 ] 

Jason Baldridge commented on OPENNLP-199:
-----------------------------------------

By breakdown, I meant what is the P/R/F for person, P/R/F for location, etc. We might find that we've improved F-score for something that occurs less often (e.g. organizations), but have lost some person predictions, etc.

One other small thing to try: let it go for 5000 iterations, and use a smaller step size reduction, e.g. 1.01.
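As a rough illustration of that suggestion (hypothetical numbers, not OpenNLP code): a smaller step size reduction leaves the learner still adapting far later in training. Starting from a stepsize of 1.0:

```java
// Compare the stepsize remaining after n iterations under two decay
// divisors. With 1.05 the stepsize is nearly frozen by iteration 100;
// with 1.01 it is still a substantial fraction of its initial value,
// which is why it pairs naturally with many more iterations.
public class StepsizeDecayCompare {
    static double after(double divisor, int n) {
        return Math.pow(divisor, -n);  // initial stepsize 1.0
    }

    public static void main(String[] args) {
        System.out.println(after(1.05, 100)); // ~0.0076
        System.out.println(after(1.01, 100)); // ~0.37
    }
}
```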

> Refactor the PerceptronTrainer class to address a couple of problems
> --------------------------------------------------------------------
>
>                 Key: OPENNLP-199
>                 URL: https://issues.apache.org/jira/browse/OPENNLP-199
>             Project: OpenNLP
>          Issue Type: Improvement
>          Components: Maxent
>    Affects Versions: maxent-3.0.1-incubating
>            Reporter: Jörn Kottmann
>            Assignee: Jason Baldridge
>             Fix For: tools-1.5.2-incubating, maxent-3.0.2-incubating
>
>
> - Changed the update to be the actual perceptron update: when a label
>   that is not the gold label is chosen for an event, the parameters
>   associated with that label are decremented, and the parameters
>   associated with the gold label are incremented. I checked this
>   empirically on several datasets, and it works better than the
>   previous update (and it involves fewer updates).
> - stepsize is decreased by stepsize/1.05 on every iteration, ensuring
>   better stability toward the end of training. This is actually the
>   main reason that the training set accuracy obtained during parameter
>   update continued to be different from that computed when parameters
>   aren't updated. Now, the parameters don't jump as much in later
>   iterations, so things settle down and those two accuracies converge
>   if enough iterations are allowed.
> - Training set accuracy is computed once per iteration.
> - Training stops if the current training set accuracy changes less
>   than a given tolerance from the accuracies obtained in each of the
>   previous three iterations.
> - Averaging is done differently than before. Rather than doing an
>   immediate update, parameters are simply accumulated after iterations
>   (this makes the code much easier to understand/maintain). Also, not
>   every iteration is used, as this tends to give to much weight to the
>   final iterations, which don't actually differ that much from one
>   another. I tried a few things and found a simple method that works
>   well: sum the parameters from the first 20 iterations and then sum
>   parameters from any further iterations that are perfect squares (25,
>   36, 49, etc). This gets a good (diverse) sample of parameters for
>   averaging since the distance between subsequent parameter sets gets
>   larger as the number of iterations gets bigger.
> - Added prepositional phrase attachment dataset to
>   src/test/resources/data/ppa. This is done with permission from
>   Adwait Ratnarparkhi -- see the README for details. 
> - Created unit test to check perceptron training consistency, using
>   the prepositional phrase attachment data. It would be good to do the
>   same for maxent.
> - Added ListEventStream to make a stream out of List<Event>
> - Added some helper methods, e.g. maxIndex, to simplify the code in
>   the main algorithm.
> - The training stats aren't shown for every iteration. Now it is just
>   the first 10 and then every 10th iteration after that.
> - modelDistribution, params, evalParams and others are no longer class
>   variables. They have been pushed into the findParameters
>   method. Other variables could/should be made non-global too, but
>   leaving as is for now.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

[jira] [Updated] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jörn Kottmann updated OPENNLP-199:
----------------------------------

    Description: 
- Changed the update to be the actual perceptron update: when a label
  that is not the gold label is chosen for an event, the parameters
  associated with that label are decremented, and the parameters
  associated with the gold label are incremented. I checked this
  empirically on several datasets, and it works better than the
  previous update (and it involves fewer updates).

- stepsize is decreased by stepsize/1.05 on every iteration, ensuring
  better stability toward the end of training. This is actually the
  main reason that the training set accuracy obtained during parameter
  update continued to be different from that computed when parameters
  aren't updated. Now, the parameters don't jump as much in later
  iterations, so things settle down and those two accuracies converge
  if enough iterations are allowed.

- Training set accuracy is computed once per iteration.

- Training stops if the current training set accuracy changes less
  than a given tolerance from the accuracies obtained in each of the
  previous three iterations.

- Averaging is done differently than before. Rather than doing an
  immediate update, parameters are simply accumulated after iterations
  (this makes the code much easier to understand/maintain). Also, not
  every iteration is used, as this tends to give too much weight to the
  final iterations, which don't actually differ that much from one
  another. I tried a few things and found a simple method that works
  well: sum the parameters from the first 20 iterations and then sum
  parameters from any further iterations that are perfect squares (25,
  36, 49, etc). This gets a good (diverse) sample of parameters for
  averaging since the distance between subsequent parameter sets gets
  larger as the number of iterations gets bigger.

- Added ListEventStream to make a stream out of List<Event>

- Added some helper methods, e.g. maxIndex, to simplify the code in
  the main algorithm.

- The training stats aren't shown for every iteration. Now it is just
  the first 10 and then every 10th iteration after that.

- modelDistribution, params, evalParams and others are no longer class
  variables. They have been pushed into the findParameters
  method. Other variables could/should be made non-global too, but
  leaving as is for now.

  was:
- Changed the update to be the actual perceptron update: when a label
  that is not the gold label is chosen for an event, the parameters
  associated with that label are decremented, and the parameters
  associated with the gold label are incremented. I checked this
  empirically on several datasets, and it works better than the
  previous update (and it involves fewer updates).

- stepsize is decreased by stepsize/1.05 on every iteration, ensuring
  better stability toward the end of training. This is actually the
  main reason that the training set accuracy obtained during parameter
  update continued to be different from that computed when parameters
  aren't updated. Now, the parameters don't jump as much in later
  iterations, so things settle down and those two accuracies converge
  if enough iterations are allowed.

- Training set accuracy is computed once per iteration.

- Training stops if the current training set accuracy changes less
  than a given tolerance from the accuracies obtained in each of the
  previous three iterations.

- Averaging is done differently than before. Rather than doing an
  immediate update, parameters are simply accumulated after iterations
  (this makes the code much easier to understand/maintain). Also, not
  every iteration is used, as this tends to give too much weight to the
  final iterations, which don't actually differ that much from one
  another. I tried a few things and found a simple method that works
  well: sum the parameters from the first 20 iterations and then sum
  parameters from any further iterations that are perfect squares (25,
  36, 49, etc). This gets a good (diverse) sample of parameters for
  averaging since the distance between subsequent parameter sets gets
  larger as the number of iterations gets bigger.

- Added prepositional phrase attachment dataset to
  src/test/resources/data/ppa. This is done with permission from
  Adwait Ratnaparkhi -- see the README for details.

- Created unit test to check perceptron training consistency, using
  the prepositional phrase attachment data. It would be good to do the
  same for maxent.

- Added ListEventStream to make a stream out of List<Event>

- Added some helper methods, e.g. maxIndex, to simplify the code in
  the main algorithm.

- The training stats aren't shown for every iteration. Now it is just
  the first 10 and then every 10th iteration after that.

- modelDistribution, params, evalParams and others are no longer class
  variables. They have been pushed into the findParameters
  method. Other variables could/should be made non-global too, but
  leaving as is for now.



[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13060545#comment-13060545 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

No, but I can spend a little time here to get it finished.


[jira] [Commented] (OPENNLP-199) Refactor the PerceptronTrainer class to address a couple of problems

Posted by "Jörn Kottmann (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/jira/browse/OPENNLP-199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13046435#comment-13046435 ] 

Jörn Kottmann commented on OPENNLP-199:
---------------------------------------

The test case added with this issue fails on our build server, not sure why. Can you have a look?

Anyway, I noticed that at the end the accuracy is tested like this:
assertEquals(accuracy, 0.7813815300817034);

Doing a strict equals comparison on floating-point values almost always results in a bug. That is also the reason why JUnit does not even offer an assert method for floating-point values without a tolerance delta.
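For example, JUnit's assertEquals(expected, actual, delta) overload exists for exactly this reason. The sketch below mimics that overload so it is self-contained; the drift and delta values are illustrative, not taken from the failing test:

```java
// Illustration: compare floating-point results with a tolerance rather
// than strict equality. The helper mimics JUnit's
// assertEquals(double expected, double actual, double delta).
public class FloatAssertSketch {

    static void assertEqualsWithDelta(double expected, double actual, double delta) {
        if (Math.abs(expected - actual) > delta) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        double expected = 0.7813815300817034;
        // A tiny platform- or ordering-dependent drift breaks a strict
        // equals check but passes once a tolerance is allowed.
        double accuracy = expected + 1e-12;
        assertEqualsWithDelta(expected, accuracy, 1e-6); // passes
    }
}
```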
