Posted to dev@opennlp.apache.org by Alec Taylor <al...@gmail.com> on 2011/11/07 15:09:02 UTC
Re: [VOTE][RESULT] Release OpenNLP 1.5.2 RC 4
No worries.
What does non-binding mean?
On Mon, Nov 7, 2011 at 7:31 PM, Jörn Kottmann <ko...@gmail.com> wrote:
> This vote has been open for 72 hours, and is now closed.
> There were 4 binding +1s, 1 non-binding +1 and no 0s or -1s.
> The vote passes.
>
> The following people voted +1:
> William Colen (binding)
> Jörn Kottmann (binding)
> Alec Taylor (non-binding)
> Jason Baldridge (binding)
> James Kosin (binding)
>
> Thanks for voting,
> Jörn
>
>
> On 11/3/11 11:42 PM, Jörn Kottmann wrote:
>>
>> Hello,
>>
>> let's vote to release RC 4 as OpenNLP 1.5.2.
>>
>> The testing of it is documented here:
>> https://cwiki.apache.org/confluence/display/OPENNLP/TestPlan1.5.2
>>
>> Our tests on CONLL02 will be updated in the next few days, since there was
>> an encoding issue in the data I used for this test. However, the other tests
>> we did on the name finder strongly indicate that we don't have any
>> regressions.
>>
>> The RC can be downloaded here:
>> http://people.apache.org/~joern/releases/opennlp-1.5.2-incubating/rc4/
>>
>> Please vote to approve this release:
>> [ ] +1 Approve the release
>> [ ] -1 Veto the release (please provide specific comments)
>> [ ] 0 Don't care
>>
>> Please report any problems you may find.
>>
>> Jörn
>>
>
>
Re: [VOTE][RESULT] Release OpenNLP 1.5.2 RC 4
Posted by Jörn Kottmann <ko...@gmail.com>.
The vote is now cancelled on the general list, and we
need to prepare an additional release candidate.
Jörn
On 11/7/11 8:14 PM, william.colen@gmail.com wrote:
> On Mon, Nov 7, 2011 at 4:21 PM, Jörn Kottmann<ko...@gmail.com> wrote:
>
>> We got a good review of our release from sebb on the
>> general list, and he pointed out a couple of issues; we might
>> need to do one more release candidate and some fixing of
>> Subversion properties (see OPENNLP-357).
>>
>> William, should we just include the chunker documentation in the
>> next RC? I am happy to review it; it doesn't look like it would be much
>> work, and based on the user list discussion there seems to be a need for it.
>
> OK. I will change the issue version to 1.5.2.
>
> William
>
Re: [VOTE][RESULT] Release OpenNLP 1.5.2 RC 4
Posted by "william.colen@gmail.com" <wi...@gmail.com>.
On Mon, Nov 7, 2011 at 4:21 PM, Jörn Kottmann <ko...@gmail.com> wrote:
> We got a good review of our release from sebb on the
> general list, and he pointed out a couple of issues; we might
> need to do one more release candidate and some fixing of
> Subversion properties (see OPENNLP-357).
>
> William, should we just include the chunker documentation in the
> next RC? I am happy to review it; it doesn't look like it would be much
> work, and based on the user list discussion there seems to be a need for it.
OK. I will change the issue version to 1.5.2.
William
RE: any hints on how to get chunking info from Parse?
Posted by Boris Galitsky <bg...@hotmail.com>.
>
> You are doing it then twice. The chunk information is already present inside
> the parse tree.
I will now focus on getting it from there. While doing that, I will also keep adding more JUnit tests in the similarity component.
Boris
RE: any hints on how to get chunking info from Parse?
Posted by Boris Galitsky <bg...@hotmail.com>.
>
> Sorry I'm a bit late to this discussion. I think it is fine to have a
> default way that similarity is assessed, but it shouldn't be completely hidden
> from the user that there may be other choices. For example, have you done
> standard similarity based on the standard bag-of-words model?
I've got the code, which is pretty basic, and it will also do bag-of-words.
I will add it.
That's at
> least needed as a baseline to see whether the chunks and tree structures
> are helping. (Sorry in advance if this is not addressing the questions and
> such -- I seem to be missing some context in the discussion.)
>
> FWIW, it is entirely possible for a chunker to produce better local
> structure than a full parser. This is pretty well known in the dependency
> parsing literature (e.g. see the comparison of MaltParser and MSTParser by
> Nivre and McDonald). Also, if you want unsupervised chunks, you might check
> out work that Elias Ponvert, Katrin Erk, and I did on using HMMs for this
> (and cascading them to get full parses). Code and paper available here:
>
> http://elias.ponvert.net/upparse
Yes, will take a look
Regards
Boris
>
> Jason
>
> On Fri, Dec 2, 2011 at 10:12 AM, Boris Galitsky <bg...@hotmail.com>wrote:
>
> >
> > My philosophy for the similarity component is that an engineer without a
> > background in linguistics can do text processing.
> > He/she would install OpenNLP and call the assessRelevance(text1, text2)
> > function, without any knowledge of what is happening inside.
> > That would significantly extend the user base of OpenNLP.
> > The problem domains I used for illustration are search (a standard domain
> > for linguistic apps) and content generation (a state-of-the-art technology,
> > in my opinion). Again, to incorporate these into user apps, users do not
> > need to know anything about parsing, chunking, etc.
> > Regards
> > Boris
> >
> >
> >
> >
> >
> > > Date: Fri, 2 Dec 2011 13:10:23 +0100
> > > From: kottmann@gmail.com
> > > To: opennlp-dev@incubator.apache.org
> > > Subject: Re: any hints on how to get chunking info from Parse?
> > >
> > > On 12/1/11 8:08 PM, Boris Galitsky wrote:
> > > > I spent the last couple of weeks understanding how the OpenNLP parser does
> > chunking and how chunking occurs separately in opennlp.tools.chunker, and I
> > came to the conclusion that using an independently trained chunker on the results
> > of the parser gives significantly higher accuracy of the resultant parsing, and
> > therefore makes the 'similarity' component much more accurate as a result.
> > > > Let's look at an example (I added stars):
> > > > two NP & VP are extracted, but what kills the similarity component is the
> > last part of the latter:
> > > > ****to-TO drive-NN****
> > > > Parse Tree Chunk list = [NP [Its-PRP$ classy-JJ design-NN and-CC
> > the-DT Mercedes-NNP name-NN ], VP [make-VBP it-PRP a-DT very-RB cool-JJ
> > vehicle-NN *******to-TO drive-NN**** ]]
> > > >
> > > > When I apply the chunker, which has its own problems (but most
> > importantly was trained independently), I can then apply rules to fix these
> > cases for matching with other sub-VPs like 'to-VB'.
> > > > I understand it works slower that way.
> > > > I would propose we have two versions of similarity: one that just does
> > without the chunker, and one which uses it (and also an additional 'correction'
> > algo?).
> > > > I have now both versions, but only the latter passes current tests.
> > >
> > > Ok, sounds good to me, but we should assume that the user can run the
> > > parser and chunker themselves. Your similarity component simply accepts
> > > a parse tree in one case and a parse tree plus chunks in the other case.
> > >
> > > What do you think?
> > >
> > > Jörn
> >
> >
>
>
>
> --
> Jason Baldridge
> Associate Professor, Department of Linguistics
> The University of Texas at Austin
> http://www.jasonbaldridge.com
> http://twitter.com/jasonbaldridge
Re: any hints on how to get chunking info from Parse?
Posted by Jason Baldridge <ja...@gmail.com>.
Sorry I'm a bit late to this discussion. I think it is fine to have a
default way that similarity is assessed, but it shouldn't be completely hidden
from the user that there may be other choices. For example, have you done
standard similarity based on the standard bag-of-words model? That's at
least needed as a baseline to see whether the chunks and tree structures
are helping. (Sorry in advance if this is not addressing the questions and
such -- I seem to be missing some context in the discussion.)
FWIW, it is entirely possible for a chunker to produce better local
structure than a full parser. This is pretty well known in the dependency
parsing literature (e.g. see the comparison of MaltParser and MSTParser by
Nivre and McDonald). Also, if you want unsupervised chunks, you might check
out work that Elias Ponvert, Katrin Erk, and I did on using HMMs for this
(and cascading them to get full parses). Code and paper available here:
http://elias.ponvert.net/upparse
Jason
On Fri, Dec 2, 2011 at 10:12 AM, Boris Galitsky <bg...@hotmail.com>wrote:
>
> My philosophy for the similarity component is that an engineer without a
> background in linguistics can do text processing.
> He/she would install OpenNLP and call the assessRelevance(text1, text2)
> function, without any knowledge of what is happening inside.
> That would significantly extend the user base of OpenNLP.
> The problem domains I used for illustration are search (a standard domain
> for linguistic apps) and content generation (a state-of-the-art technology,
> in my opinion). Again, to incorporate these into user apps, users do not
> need to know anything about parsing, chunking, etc.
> Regards
> Boris
>
>
>
>
>
> > Date: Fri, 2 Dec 2011 13:10:23 +0100
> > From: kottmann@gmail.com
> > To: opennlp-dev@incubator.apache.org
> > Subject: Re: any hints on how to get chunking info from Parse?
> >
> > On 12/1/11 8:08 PM, Boris Galitsky wrote:
> > > I spent the last couple of weeks understanding how the OpenNLP parser does
> chunking and how chunking occurs separately in opennlp.tools.chunker, and I
> came to the conclusion that using an independently trained chunker on the results
> of the parser gives significantly higher accuracy of the resultant parsing, and
> therefore makes the 'similarity' component much more accurate as a result.
> > > Let's look at an example (I added stars):
> > > two NP & VP are extracted, but what kills the similarity component is the
> last part of the latter:
> > > ****to-TO drive-NN****
> > > Parse Tree Chunk list = [NP [Its-PRP$ classy-JJ design-NN and-CC
> the-DT Mercedes-NNP name-NN ], VP [make-VBP it-PRP a-DT very-RB cool-JJ
> vehicle-NN *******to-TO drive-NN**** ]]
> > >
> > > When I apply the chunker, which has its own problems (but most
> importantly was trained independently), I can then apply rules to fix these
> cases for matching with other sub-VPs like 'to-VB'.
> > > I understand it works slower that way.
> > > I would propose we have two versions of similarity: one that just does
> without the chunker, and one which uses it (and also an additional 'correction'
> algo?).
> > > I have now both versions, but only the latter passes current tests.
> >
> > Ok, sounds good to me, but we should assume that the user can run the
> > parser and chunker themselves. Your similarity component simply accepts
> > a parse tree in one case and a parse tree plus chunks in the other case.
> >
> > What do you think?
> >
> > Jörn
>
>
--
Jason Baldridge
Associate Professor, Department of Linguistics
The University of Texas at Austin
http://www.jasonbaldridge.com
http://twitter.com/jasonbaldridge
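[Editor's sketch] A bag-of-words baseline of the kind suggested in this thread could look roughly like the following. This is a toy illustration under stated assumptions, not code from the similarity component; the class name and the reuse of the assessRelevance name are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Toy bag-of-words cosine similarity, usable as a baseline against
// chunk/tree-based matching. Hypothetical helper, not an OpenNLP API.
public class BagOfWordsSimilarity {

    // Count whitespace-separated, lower-cased tokens.
    static Map<String, Integer> termCounts(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : text.toLowerCase().split("\\s+")) {
            if (!token.isEmpty()) {
                counts.merge(token, 1, Integer::sum);
            }
        }
        return counts;
    }

    // Cosine similarity between the two term-count vectors.
    public static double assessRelevance(String text1, String text2) {
        Map<String, Integer> a = termCounts(text1);
        Map<String, Integer> b = termCounts(text2);
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            normA += e.getValue() * e.getValue();
        }
        for (int v : b.values()) {
            normB += v * v;
        }
        return (normA == 0 || normB == 0)
                ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }
}
```

Comparing such a baseline's scores against the chunk/tree-based matcher on the same text pairs would show whether the deeper structure is actually helping.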
RE: any hints on how to get chunking info from Parse?
Posted by Boris Galitsky <bg...@hotmail.com>.
My philosophy for the similarity component is that an engineer without a background in linguistics can do text processing.
He/she would install OpenNLP and call the assessRelevance(text1, text2) function, without any knowledge of what is happening inside.
That would significantly extend the user base of OpenNLP.
The problem domains I used for illustration are search (a standard domain for linguistic apps) and content generation (a state-of-the-art technology, in my opinion). Again, to incorporate these into user apps, users do not need to know anything about parsing, chunking, etc.
Regards
Boris
> Date: Fri, 2 Dec 2011 13:10:23 +0100
> From: kottmann@gmail.com
> To: opennlp-dev@incubator.apache.org
> Subject: Re: any hints on how to get chunking info from Parse?
>
> On 12/1/11 8:08 PM, Boris Galitsky wrote:
> > I spent the last couple of weeks understanding how the OpenNLP parser does chunking and how chunking occurs separately in opennlp.tools.chunker, and I came to the conclusion that using an independently trained chunker on the results of the parser gives significantly higher accuracy of the resultant parsing, and therefore makes the 'similarity' component much more accurate as a result.
> > Let's look at an example (I added stars):
> > two NP & VP are extracted, but what kills the similarity component is the last part of the latter:
> > ****to-TO drive-NN****
> > Parse Tree Chunk list = [NP [Its-PRP$ classy-JJ design-NN and-CC the-DT Mercedes-NNP name-NN ], VP [make-VBP it-PRP a-DT very-RB cool-JJ vehicle-NN *******to-TO drive-NN**** ]]
> >
> > When I apply the chunker, which has its own problems (but most importantly was trained independently), I can then apply rules to fix these cases for matching with other sub-VPs like 'to-VB'.
> > I understand it works slower that way.
> > I would propose we have two versions of similarity: one that just does without the chunker, and one which uses it (and also an additional 'correction' algo?).
> > I have now both versions, but only the latter passes current tests.
>
> Ok, sounds good to me, but we should assume that the user can run the
> parser and chunker themselves. Your similarity component simply accepts
> a parse tree in one case and a parse tree plus chunks in the other case.
>
> What do you think?
>
> Jörn
Re: any hints on how to get chunking info from Parse?
Posted by Jörn Kottmann <ko...@gmail.com>.
On 12/1/11 8:08 PM, Boris Galitsky wrote:
> I spent the last couple of weeks understanding how the OpenNLP parser does chunking and how chunking occurs separately in opennlp.tools.chunker, and I came to the conclusion that using an independently trained chunker on the results of the parser gives significantly higher accuracy of the resultant parsing, and therefore makes the 'similarity' component much more accurate as a result.
> Let's look at an example (I added stars):
> two NP & VP are extracted, but what kills the similarity component is the last part of the latter:
> ****to-TO drive-NN****
> Parse Tree Chunk list = [NP [Its-PRP$ classy-JJ design-NN and-CC the-DT Mercedes-NNP name-NN ], VP [make-VBP it-PRP a-DT very-RB cool-JJ vehicle-NN *******to-TO drive-NN**** ]]
>
> When I apply the chunker, which has its own problems (but most importantly was trained independently), I can then apply rules to fix these cases for matching with other sub-VPs like 'to-VB'.
> I understand it works slower that way.
> I would propose we have two versions of similarity: one that just does without the chunker, and one which uses it (and also an additional 'correction' algo?).
> I have now both versions, but only the latter passes current tests.
Ok, sounds good to me, but we should assume that the user can run the
parser and chunker themselves. Your similarity component simply accepts
a parse tree in one case and a parse tree plus chunks in the other case.
What do you think?
Jörn
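[Editor's sketch] The two entry points described above could look roughly like this. All names and the scoring logic are illustrative stand-ins, not the component's actual API:

```java
import java.util.List;

// Sketch of a similarity component with two entry points, as proposed:
// one that takes only a parse tree, and one that additionally takes
// pre-computed chunks, so the component never re-runs parsing or
// chunking itself. Every name here is illustrative.
public class SimilarityAssessor {

    // Minimal stand-ins for a parse tree and a chunk span.
    public record ParseTree(String coveredText) {}
    public record Chunk(String type, String text) {}

    // Case 1: the caller ran only the parser.
    public double assessRelevance(ParseTree t1, ParseTree t2) {
        // Trivial exact-match score for this sketch.
        return t1.coveredText().equals(t2.coveredText()) ? 1.0 : 0.0;
    }

    // Case 2: the caller also ran a chunker and passes the chunks in.
    public double assessRelevance(ParseTree t1, List<Chunk> chunks1,
                                  ParseTree t2, List<Chunk> chunks2) {
        long matches = chunks1.stream()
                .filter(c -> chunks2.stream().anyMatch(d ->
                        d.type().equals(c.type()) && d.text().equals(c.text())))
                .count();
        int total = Math.max(chunks1.size(), chunks2.size());
        return total == 0 ? assessRelevance(t1, t2) : (double) matches / total;
    }
}
```

The design point is that both overloads accept pre-computed analyses, so the component is easy to test and never duplicates work the caller already did.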
RE: any hints on how to get chunking info from Parse?
Posted by Boris Galitsky <bg...@hotmail.com>.
Hi Jörn
I spent the last couple of weeks understanding how the OpenNLP parser does chunking and how chunking occurs separately in opennlp.tools.chunker, and I came to the conclusion that using an independently trained chunker on the results of the parser gives significantly higher accuracy of the resultant parsing, and therefore makes the 'similarity' component much more accurate as a result.
Let's look at an example (I added stars):
two NP & VP are extracted, but what kills the similarity component is the last part of the latter:
****to-TO drive-NN****
Parse Tree Chunk list = [NP [Its-PRP$ classy-JJ design-NN and-CC the-DT Mercedes-NNP name-NN ], VP [make-VBP it-PRP a-DT very-RB cool-JJ vehicle-NN *******to-TO drive-NN**** ]]
When I apply the chunker, which has its own problems (but most importantly was trained independently), I can then apply rules to fix these cases for matching with other sub-VPs like 'to-VB'.
I understand it works slower that way.
I would propose we have two versions of similarity: one that just does without the chunker, and one which uses it (and also an additional 'correction' algo?).
I have now both versions, but only the latter passes current tests.
Regards
Boris
> Date: Thu, 17 Nov 2011 19:49:50 +0100
> From: kottmann@gmail.com
> To: opennlp-dev@incubator.apache.org
> Subject: Re: any hints on how to get chunking info from Parse?
>
> On 11/17/11 7:08 PM, Boris Galitsky wrote:
> > Yes, I will try
> > opennlp.tools.parser.ChunkSampleStream
> > and meanwhile the question is: what is wrong with using
> > opennlp.tools.chunker ?
>
> You are doing it twice then. The chunk information is already present inside
> the parse tree. So if you have a Parse object already, you should extract the
> chunk information from it instead of running the chunker again.
>
> It is also harder to use, because a user then needs to provide you with
> a Parse object and a chunker instance. For the same reason it is harder
> to test as well.
> It will be slower because chunking needs to be done twice, and I guess
> there are a couple more reasons why this is not the preferred solution.
>
> Let me know if you need help.
>
> Jörn
Re: any hints on how to get chunking info from Parse?
Posted by Jörn Kottmann <ko...@gmail.com>.
On 11/17/11 7:08 PM, Boris Galitsky wrote:
> Yes, I will try
> opennlp.tools.parser.ChunkSampleStream
> and meanwhile the question is: what is wrong with using
> opennlp.tools.chunker ?
You are doing it twice then. The chunk information is already present inside
the parse tree. So if you have a Parse object already, you should extract the
chunk information from it instead of running the chunker again.
It is also harder to use, because a user then needs to provide you with
a Parse object and a chunker instance. For the same reason it is harder
to test as well.
It will be slower because chunking needs to be done twice, and I guess
there are a couple more reasons why this is not the preferred solution.
Let me know if you need help.
Jörn
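[Editor's sketch] Extracting the chunk information from the tree amounts to walking it and collecting the topmost phrase-level nodes. A minimal sketch of the idea, using a toy Node type as a stand-in for opennlp.tools.parser.Parse (assumed here to expose similar type/children accessors):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Collect chunk-like phrases (NP, VP, ...) by walking a parse tree,
// instead of running a separate chunker over the same sentence.
// Node is a toy stand-in for a parse-tree node.
public class ChunkFromParse {

    public record Node(String type, String text, List<Node> children) {
        static Node leaf(String type, String text) {
            return new Node(type, text, List.of());
        }
    }

    // Recursively collect the topmost nodes whose type is a chunk label.
    static void collectChunks(Node n, List<String> out) {
        if (Arrays.asList("NP", "VP", "PP", "ADJP").contains(n.type())) {
            out.add(n.type() + " [" + n.text() + "]");
            return; // do not descend into a chunk we already took
        }
        for (Node child : n.children()) {
            collectChunks(child, out);
        }
    }

    public static List<String> chunks(Node root) {
        List<String> out = new ArrayList<>();
        collectChunks(root, out);
        return out;
    }
}
```

The same traversal, written against the real Parse type, would give the chunk list without ever instantiating a chunker.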
RE: any hints on how to get chunking info from Parse?
Posted by Boris Galitsky <bg...@hotmail.com>.
Yes, I will try
opennlp.tools.parser.ChunkSampleStream
and meanwhile the question is: what is wrong with using
opennlp.tools.chunker ?
Regards
Boris
> Date: Thu, 17 Nov 2011 11:19:02 +0100
> From: kottmann@gmail.com
> To: opennlp-dev@incubator.apache.org
> Subject: Re: any hints on how to get chunking info from Parse?
>
> On 11/9/11 9:23 PM, Boris Galitsky wrote:
> >> Furthermore it would be nice if you could do the change you did for the
> >> POS tagger also for the chunker, where you extract the POS tags from the
> >> Parse objects instead of running the POS Tagger. The Parse object also
> >> includes the chunk information, so there should be no need to run the
> >> chunker.
> > Hi
> > I am doing further chunk processing which might be useful for other apps, not just this 'similarity' project.
> > I need to get all phrases grouped by type (noun, verb, adj, pp, ...) from chunking results, and it is not clear how I can get phrases other than noun from the 'Parse' object.
> > Once I get all phrases, I do matching inside my component for each phrase type separately.
> > So far I have to process chunking results [1..3 4..5 6...8 6..10] + POS + lemmas -> lists of phrases for each group.
> > I suspect there's a better way!
> > Regards
> > Boris
>
> Sorry for the late reply. As far as I know it should be possible.
>
> We have code in opennlp.tools.parser.ChunkSampleStream which does this to
> train a chunker based on Parse trees.
>
> Can you try this out and see if it works for you? I guess more people
> will need this anyway; maybe we should create a method somewhere to do this.
>
> Jörn
Re: any hints on how to get chunking info from Parse?
Posted by Jörn Kottmann <ko...@gmail.com>.
On 11/9/11 9:23 PM, Boris Galitsky wrote:
>> Furthermore it would be nice if you could do the change you did for the
>> POS tagger also for the chunker, where you extract the POS tags from the
>> Parse objects instead of running the POS Tagger. The Parse object also
>> includes the chunk information, so there should be no need to run the
>> chunker.
> Hi
> I am doing further chunk processing which might be useful for other apps, not just this 'similarity' project.
> I need to get all phrases grouped by type (noun, verb, adj, pp, ...) from chunking results, and it is not clear how I can get phrases other than noun from the 'Parse' object.
> Once I get all phrases, I do matching inside my component for each phrase type separately.
> So far I have to process chunking results [1..3 4..5 6...8 6..10] + POS + lemmas -> lists of phrases for each group.
> I suspect there's a better way!
> Regards
> Boris
Sorry for the late reply. As far as I know it should be possible.
We have code in opennlp.tools.parser.ChunkSampleStream which does this to
train a chunker based on Parse trees.
Can you try this out and see if it works for you? I guess more people
will need this anyway; maybe we should create a method somewhere to do this.
Jörn
any hints on how to get chunking info from Parse?
Posted by Boris Galitsky <bg...@hotmail.com>.
>Furthermore it would be nice if you could do the change you did for the
>POS tagger also for the chunker, where you extract the POS tags from the
>Parse objects instead of running the POS Tagger. The Parse object also
>includes the chunk information, so there should be no need to run the
>chunker.
Hi
I am doing further chunk processing which might be useful for other apps, not just this 'similarity' project.
I need to get all phrases grouped by type (noun, verb, adj, pp, ...) from chunking results, and it is not clear how I can get phrases other than noun from the 'Parse' object.
Once I get all phrases, I do matching inside my component for each phrase type separately.
So far I have to process chunking results [1..3 4..5 6...8 6..10] + POS + lemmas -> lists of phrases for each group.
I suspect there's a better way!
Regards
Boris
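[Editor's sketch] One common shape for this grouping step is decoding B-/I-/O chunk tags back into per-type phrase lists. A self-contained sketch, assuming BIO-style chunker output (toy code, not the similarity component's actual implementation):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Group chunker output into phrases per type (NP, VP, ...).
// Input: parallel arrays of tokens and B-/I-/O chunk tags, as produced
// by a BIO-style chunker. Toy sketch, not OpenNLP code.
public class PhraseGrouper {

    public static Map<String, List<String>> groupByType(String[] tokens, String[] tags) {
        Map<String, List<String>> phrases = new HashMap<>();
        StringBuilder current = new StringBuilder();
        String currentType = null;
        // Iterate one past the end so the final open phrase gets closed.
        for (int i = 0; i <= tokens.length; i++) {
            String tag = (i < tokens.length) ? tags[i] : "O";
            boolean continues = currentType != null && tag.equals("I-" + currentType);
            if (!continues && currentType != null) {
                // Close the currently open phrase.
                phrases.computeIfAbsent(currentType, k -> new ArrayList<>())
                       .add(current.toString());
                current.setLength(0);
                currentType = null;
            }
            if (i == tokens.length) break;
            if (tag.startsWith("B-")) {
                currentType = tag.substring(2);
                current.append(tokens[i]);
            } else if (continues) {
                current.append(' ').append(tokens[i]);
            }
        }
        return phrases;
    }
}
```

With the phrases grouped this way, matching can then run per phrase type, as described above.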
RE: how to continue?
Posted by Boris Galitsky <bg...@hotmail.com>.
Thanks, Jörn, for the recommendations.
I will be working on these items.
Regards
Boris
> Date: Mon, 7 Nov 2011 23:56:49 +0100
> From: kottmann@gmail.com
> To: opennlp-dev@incubator.apache.org
> Subject: Re: how to continue?
>
> Hi Boris,
>
> I think it would be a good idea to mature it a bit more and get everyone
> a bit more familiar with the code base.
>
> I created a jira to move the Porter Stemmer over to the tools package:
> https://issues.apache.org/jira/browse/OPENNLP-337
>
> This work includes the definition of an interface, and we would need to
> write a test for the stemmer so we know it works, should be easy to test.
>
> I just tried to compile the project and still get a couple of errors;
> it would be nice if you could fix these. It looks like the tests are
> referencing models which do not exist in my file system.
>
> Furthermore it would be nice if you could do the change you did for the
> POS tagger also for the chunker, where you extract the POS tags from the
> Parse objects instead of running the POS Tagger. The Parse object also
> includes the chunk information, so there should be no need to run the
> chunker.
>
> We would need a bit of documentation so that people can understand what it does
> and how it can be used.
>
> What do you think?
>
> Jörn
>
> On 11/7/11 11:38 PM, Boris Galitsky wrote:
> > Hi Jörn
> >
> > I think the 'similarity' module is in a good shape now, what would
> > be the next steps?
> >
> > Regards
> > Boris
> >
> >
>
Re: how to continue?
Posted by Jörn Kottmann <ko...@gmail.com>.
Hi Boris,
I think it would be a good idea to mature it a bit more and get everyone
a bit more familiar with the code base.
I created a jira to move the Porter Stemmer over to the tools package:
https://issues.apache.org/jira/browse/OPENNLP-337
This work includes the definition of an interface, and we would need to
write a test for the stemmer so we know it works, should be easy to test.
I just tried to compile the project and still get a couple of errors;
it would be nice if you could fix these. It looks like the tests are
referencing models which do not exist in my file system.
Furthermore it would be nice if you could do the change you did for the
POS tagger also for the chunker, where you extract the POS tags from the
Parse objects instead of running the POS Tagger. The Parse object also
includes the chunk information, so there should be no need to run the
chunker.
We would need a bit of documentation so that people can understand what it does
and how it can be used.
What do you think?
Jörn
On 11/7/11 11:38 PM, Boris Galitsky wrote:
> Hi Jörn
>
> I think the 'similarity' module is in a good shape now, what would
> be the next steps?
>
> Regards
> Boris
>
>
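[Editor's sketch] For the stemmer work item, the interface definition can be very small. A sketch of what it might look like, with a trivial suffix-stripping stand-in (not the Porter algorithm, and not necessarily the interface OpenNLP ends up with) to show how a test would exercise it:

```java
// Sketch of the kind of interface the Porter Stemmer could implement,
// plus a trivial suffix-stripping stand-in used only to show how a
// unit test would exercise it. The real Porter algorithm is far more
// involved than this.
public class StemmerSketch {

    public interface Stemmer {
        CharSequence stem(CharSequence word);
    }

    // Toy stemmer: strips a few common suffixes. NOT the Porter algorithm.
    public static class SuffixStemmer implements Stemmer {
        @Override
        public CharSequence stem(CharSequence word) {
            String w = word.toString();
            for (String suffix : new String[] {"ing", "ed", "es", "s"}) {
                // Only strip when a reasonably long stem remains.
                if (w.length() > suffix.length() + 2 && w.endsWith(suffix)) {
                    return w.substring(0, w.length() - suffix.length());
                }
            }
            return w;
        }
    }
}
```

A test for the real implementation would simply assert known word/stem pairs through the Stemmer interface, which also keeps the component easy to swap out.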
Re: [VOTE][RESULT] Release OpenNLP 1.5.2 RC 4
Posted by Jörn Kottmann <ko...@gmail.com>.
We got a good review of our release from sebb on the
general list, and he pointed out a couple of issues; we might
need to do one more release candidate and some fixing of
Subversion properties (see OPENNLP-357).
William, should we just include the chunker documentation in the
next RC? I am happy to review it; it doesn't look like it would be much
work, and based on the user list discussion there seems to be a need for it.
Jörn
On 11/7/11 3:15 PM, Jörn Kottmann wrote:
> At Apache only PMC members have binding votes. Everyone else is still
> invited to vote, but these votes are called non-binding and usually
> cannot veto a decision.
> In any case, at OpenNLP we will listen carefully to the reasons when we
> get a non-binding -1 vote.
>
> You can find more information about voting here:
> http://www.apache.org/foundation/voting.html
>
> Jörn
>
> On 11/7/11 3:09 PM, Alec Taylor wrote:
>> No worries.
>>
>> What does non-binding mean?
>>
>> On Mon, Nov 7, 2011 at 7:31 PM, Jörn Kottmann<ko...@gmail.com>
>> wrote:
>>> This vote has been open for 72 hours, and is now closed.
>>> There were 4 binding +1s, 1 non-binding +1 and no 0s or -1s.
>>> The vote passes.
>>>
>>> The following people voted +1:
>>> William Colen (binding)
>>> Jörn Kottmann (binding)
>>> Alec Taylor (non-binding)
>>> Jason Baldridge (binding)
>>> James Kosin (binding)
>>>
>>> Thanks for voting,
>>> Jörn
>>>
>>>
>>> On 11/3/11 11:42 PM, Jörn Kottmann wrote:
>>>> Hello,
>>>>
>>>> let's vote to release RC 4 as OpenNLP 1.5.2.
>>>>
>>>> The testing of it is documented here:
>>>> https://cwiki.apache.org/confluence/display/OPENNLP/TestPlan1.5.2
>>>>
>>>> Our tests on CONLL02 will be updated in the next few days, since there was
>>>> an encoding issue in the data I used for this test. However, the
>>>> other tests
>>>> we did on the name finder strongly indicate that we don't have any
>>>> regressions.
>>>>
>>>> The RC can be downloaded here:
>>>> http://people.apache.org/~joern/releases/opennlp-1.5.2-incubating/rc4/
>>>>
>>>> Please vote to approve this release:
>>>> [ ] +1 Approve the release
>>>> [ ] -1 Veto the release (please provide specific comments)
>>>> [ ] 0 Don't care
>>>>
>>>> Please report any problems you may find.
>>>>
>>>> Jörn
>>>>
>>>
>
Re: [VOTE][RESULT] Release OpenNLP 1.5.2 RC 4
Posted by Jörn Kottmann <ko...@gmail.com>.
At Apache only PMC members have binding votes. Everyone else is still
invited to vote, but these votes are called non-binding and usually
cannot veto a decision.
In any case, at OpenNLP we will listen carefully to the reasons when we
get a non-binding -1 vote.
You can find more information about voting here:
http://www.apache.org/foundation/voting.html
Jörn
On 11/7/11 3:09 PM, Alec Taylor wrote:
> No worries.
>
> What does non-binding mean?
>
> On Mon, Nov 7, 2011 at 7:31 PM, Jörn Kottmann<ko...@gmail.com> wrote:
>> This vote has been open for 72 hours, and is now closed.
>> There were 4 binding +1s, 1 non-binding +1 and no 0s or -1s.
>> The vote passes.
>>
>> The following people voted +1:
>> William Colen (binding)
>> Jörn Kottmann (binding)
>> Alec Taylor (non-binding)
>> Jason Baldridge (binding)
>> James Kosin (binding)
>>
>> Thanks for voting,
>> Jörn
>>
>>
>> On 11/3/11 11:42 PM, Jörn Kottmann wrote:
>>> Hello,
>>>
>>> let's vote to release RC 4 as OpenNLP 1.5.2.
>>>
>>> The testing of it is documented here:
>>> https://cwiki.apache.org/confluence/display/OPENNLP/TestPlan1.5.2
>>>
>>> Our tests on CONLL02 will be updated in the next few days, since there was
>>> an encoding issue in the data I used for this test. However, the other tests
>>> we did on the name finder strongly indicate that we don't have any
>>> regressions.
>>>
>>> The RC can be downloaded here:
>>> http://people.apache.org/~joern/releases/opennlp-1.5.2-incubating/rc4/
>>>
>>> Please vote to approve this release:
>>> [ ] +1 Approve the release
>>> [ ] -1 Veto the release (please provide specific comments)
>>> [ ] 0 Don't care
>>>
>>> Please report any problems you may find.
>>>
>>> Jörn
>>>
>>