Posted to dev@river.apache.org by Peter Firmstone <ji...@zeus.net.au> on 2012/12/01 04:43:20 UTC

Re: Open discussion on development process

On 30/11/2012 12:27 AM, Dan Creswell wrote:
> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>  wrote:
>
>> The last passing trunk versions:
>>
>> Jdk6 Ubuntu     1407017
>> Solaris  x86        1373770
>> Jdk7 Ubuntu     1379873
>> Windows           1373770
>>
>> Revision 1373770 looks the most stable, I think all platforms were passing
>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>
>> If we can confirm 1373770 as stable, maybe we should branch a release off
>> that, buying some time to stabilise what we're working on now.
>>
>>
> I think we should do that. I'm also tempted to suggest we consider limiting
> our development until we've fixed these tests up. Or alternatively control
> the rate of patch merging so we can pace it and make sure the tests get
> focus.
>
> That's a bit sledgehammer but...
>
>

Ok, sounds like a plan. How do you think we should best approach the task?

Create a branch in skunk, just for qa and run tests against released jars?

Regards,

Peter.



Re: Open discussion on development process

Posted by Peter Firmstone <ji...@zeus.net.au>.
Gregg Wonderly wrote:
> I still wonder why it doesn't feel right that the test suite be in the same branch as the associated "release".  
That was a comment I made; it's not a pressing concern. Right now 
we need to focus on the harness's ability to deal with concurrent 
code, and on reducing shared state - state shared via protected 
variables and collections.

The qa test suite hasn't had much attention for some time, and we're 
experiencing test failures due to concurrency issues.
> Some of the new code needs new tests that demonstrate "functionality" while other tests that demonstrate compatibility will be run on each release without change.  It seems to me, that in the end, when a release goes out the door, the tests that validated that release, are part of that "release".
>   
> If we need two different types of tests, and a migration path from "functionality tests" into "compatibility tests", then maybe we really need two trees for development of each release, and branching the whole test suite would be one branch and the new release would be the other.
>
> Is that how you guys are thinking about this?
>   

No, not really; we just want to fix and refactor the test suite.
> Gregg Wonderly
>
> On Nov 30, 2012, at 9:43 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>
>   
>> On 30/11/2012 12:27 AM, Dan Creswell wrote:
>>     
>>> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>  wrote:
>>>
>>>       
>>>> The last passing trunk versions:
>>>>
>>>> Jdk6 Ubuntu     1407017
>>>> Solaris  x86        1373770
>>>> Jdk7 Ubuntu     1379873
>>>> Windows           1373770
>>>>
>>>> Revision 1373770 looks the most stable, I think all platforms were passing
>>>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>>>
>>>> If we can confirm 1373770 as stable, maybe we should branch a release off
>>>> that, buying some time to stabilise what we're working on now.
>>>>
>>>>
>>>>         
>>> I think we should do that. I'm also tempted to suggest we consider limiting
>>> our development until we've fixed these tests up. Or alternatively control
>>> the rate of patch merging so we can pace it and make sure the tests get
>>> focus.
>>>
>>> That's a bit sledgehammer but...
>>>
>>>
>>>       
>> Ok, sounds like a plan, how do you think we should best approach the task?
>>
>> Create a branch in skunk, just for qa and run tests against released jars?
>>
>> Regards,
>>
>> Peter.
>>
>>
>>     
>
>
>   


Re: Open discussion on development process

Posted by Gregg Wonderly <ge...@cox.net>.
Well, if we had a slightly altered version of the service starter framework, we could, in fact, have the ability to start services without new processes, in the same JVM as the tests are running in, and use JUnit more often than not, I believe.  We'd just need some simpler-to-use programmatic APIs instead of the configuration inspection for the list of services to start.
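Gregg's idea could be sketched roughly as below. Every name here (Service, EchoService, InProcessStarter) is hypothetical, invented purely for illustration; none of these types exist in River's actual service starter framework:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch only: illustrates "start a service in the same
// JVM via a programmatic API instead of configuration inspection".
interface Service {
    void start() throws Exception;
    void stop();
    boolean isRunning();
}

class EchoService implements Service {
    private volatile boolean running;
    public void start() { running = true; }   // a real service would export a proxy here
    public void stop()  { running = false; }
    public boolean isRunning() { return running; }
}

class InProcessStarter {
    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Start the service on a pool thread in this JVM; no new process.
    public Service launch(Service s) throws Exception {
        pool.submit(() -> {
            try { s.start(); } catch (Exception e) { throw new RuntimeException(e); }
        }).get();  // block until start() has completed
        return s;
    }

    public void shutdown() { pool.shutdownNow(); }
}
```

A JUnit test could then launch and tear down services in setUp/tearDown methods without forking a VM per service.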

Gregg Wonderly

On Dec 3, 2012, at 1:15 AM, Peter Firmstone <ji...@zeus.net.au> wrote:

> We presently have about 83 junit tests; at least 1/3 are rewritten qa tests, so yes, it is possible in test cases where you don't need services or other infrastructure running.
> 
> On 3/12/2012 6:29 AM, Gregg Wonderly wrote:
>> I concur that including JUnit for unit testing would be a good thing, and then we can move tests to JUnit if they are more likely unit tests than qa tests.
>> 
>> Gregg
>> 
>> On Dec 2, 2012, at 12:35 PM, Dan Creswell<da...@gmail.com>  wrote:
>> 
>>> Nice to hear from you Patricia...
>>> 
>>> On 2 December 2012 10:29, Patricia Shanahan<pa...@acm.org>  wrote:
>>>> I hope you don't mind me throwing in some random comments on this. I think
>>>> there are two types of testing that need to be distinguished, system and
>>>> unit.
>>>> 
>>>> A system test looks at the external behavior of the whole system, so what
>>>> it is testing changes only when the API changes, and tests should apply
>>>> across many source code revisions. I can see separating those out.
>>>> 
>>>> However, I feel River has been weak on unit tests, tests that check the
>>>> implementation of e.g. a data structure against its javadoc comments. Those
>>>> tests need to change with internal interface changes.
>>>> 
>>>> Testing e.g. the multi-thread consistency of a complex data structure using
>>>> only external system tests can be a mistake. It may take a very large
>>>> configuration or many days of running to bring out a bug that could be found
>>>> relatively fast by a unit test that hits the data structure rapidly.
>>>> 
>>>> I'm a little concerned that the reconfiguration of the tests may represent
>>>> an increased commitment to only doing system tests, and not doing any unit
>>>> tests.
>>>> 
>>> It doesn't represent any such thing in my mind! I'd expect to do a
>>> bunch of those per release and for individual checkins etc.
>>> 
>>> That's one of the reasons I'm interested in maybe moving to junit.
>>> Maybe in fact, we bring in junit as a statement of intent for unit
>>> tests and leave jtreg and friends for the "big" stuff.
>>> 
>>> Feeling less concerned? ;)
>>> 
>>>> Patricia
>>>> 
>>>> 
>>>> 
>>>> On 12/2/2012 10:21 AM, Dan Creswell wrote:
>>>>> ...
>>>>> 
>>>>> On 30 November 2012 19:53, Gregg Wonderly<ge...@cox.net>  wrote:
>>>>>> I still wonder why it doesn't feel right that the test suite be in the
>>>>>> same branch as the associated "release".  Some of the new code needs new
>>>>>> tests that demonstrate "functionality" while other tests that demonstrate
>>>>>> compatibility will be run on each release without change.  It seems to me,
>>>>>> that in the end, when a release goes out the door, the tests that validated
>>>>>> that release, are part of that "release".
>>>>>> 
>>>>> I have some similar disquiet, here's what I'm thinking at the moment
>>>>> (subject to change faster than I can type!)...
>>>>> 
>>>>> Compatibility and similar is really "compliance test" and is closely
>>>>> linked to the APIs defined by the specs. Two flavours here:
>>>>> 
>>>>> (1) "Well-behaved service" tests - does a service do join properly etc.
>>>>> (2) Compliance tests - do the APIs behave right etc.
>>>>> 
>>>>> These are kind of slow moving as are the APIs at least for now. I feel
>>>>> right now like (1) might be a subproject applied to our own "built-in"
>>>>> services as well as others. I'm tempted to say the same about (2) save
>>>>> for the fact that if we give up on the idea someone else is going to
>>>>> build a River clone this stuff becomes part of the release/test phase
>>>>> for the core.
>>>>> 
>>>>> Any other testing we're doing over and above what falls into (1) and
>>>>> (2) above is part of tests for core and ought to be living in the same
>>>>> branch and run as part of release. However, that's a little
>>>>> uncomfortable when one wishes to freeze development of core to do
>>>>> major work on the test harness etc. You branch core and test suite to
>>>>> work purely on the suite.
>>>>> 
>>>>> Manageable, I guess... well, until you have the trunk moving on and
>>>>> breaking your already seriously under construction test suite where
>>>>> everything in trunk is "old style" and will be a b*stard to merge
>>>>> across but if you don't your branched test suite is gonna break for
>>>>> nuisance reasons.
>>>>> 
>>>>>> If we need two different types of tests, and a migration path from
>>>>>> "functionality tests" into "compatibility tests", then maybe we really need
>>>>>> two trees for development of each release, and branching the whole test
>>>>>> suite would be one branch and the new release would be the other.
>>>>>> 
>>>>>> Is that how you guys are thinking about this?
>>>>>> 
>>>>> You have my (current) thinking above...
>>>>> 
>>>>>> Gregg Wonderly
>>>>>> 
>>>>>> On Nov 30, 2012, at 9:43 PM, Peter Firmstone<ji...@zeus.net.au>  wrote:
>>>>>> 
>>>>>>> On 30/11/2012 12:27 AM, Dan Creswell wrote:
>>>>>>>> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>   wrote:
>>>>>>>> 
>>>>>>>>> The last passing trunk versions:
>>>>>>>>> 
>>>>>>>>> Jdk6 Ubuntu     1407017
>>>>>>>>> Solaris  x86        1373770
>>>>>>>>> Jdk7 Ubuntu     1379873
>>>>>>>>> Windows           1373770
>>>>>>>>> 
>>>>>>>>> Revision 1373770 looks the most stable, I think all platforms were
>>>>>>>>> passing
>>>>>>>>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>>>>>>>> 
>>>>>>>>> If we can confirm 1373770 as stable, maybe we should branch a release
>>>>>>>>> off
>>>>>>>>> that, buying some time to stabilise what we're working on now.
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> I think we should do that. I'm also tempted to suggest we consider
>>>>>>>> limiting
>>>>>>>> our development until we've fixed these tests up. Or alternatively
>>>>>>>> control
>>>>>>>> the rate of patch merging so we can pace it and make sure the tests get
>>>>>>>> focus.
>>>>>>>> 
>>>>>>>> That's a bit sledgehammer but...
>>>>>>>> 
>>>>>>>> 
>>>>>>> Ok, sounds like a plan, how do you think we should best approach the
>>>>>>> task?
>>>>>>> 
>>>>>>> Create a branch in skunk, just for qa and run tests against released
>>>>>>> jars?
>>>>>>> 
>>>>>>> Regards,
>>>>>>> 
>>>>>>> Peter.
>>>>>>> 
>>>>>>> 
> 


Re: Open discussion on development process

Posted by Peter Firmstone <ji...@zeus.net.au>.
We presently have about 83 junit tests; at least 1/3 are rewritten qa 
tests, so yes, it is possible in test cases where you don't need services 
or other infrastructure running.

On 3/12/2012 6:29 AM, Gregg Wonderly wrote:
> I concur that including JUnit for unit testing would be a good thing, and then we can move tests to JUnit if they are more likely unit tests than qa tests.
>
> Gregg
>
> On Dec 2, 2012, at 12:35 PM, Dan Creswell<da...@gmail.com>  wrote:
>
>> Nice to hear from you Patricia...
>>
>> On 2 December 2012 10:29, Patricia Shanahan<pa...@acm.org>  wrote:
>>> I hope you don't mind me throwing in some random comments on this. I think
>>> there are two types of testing that need to be distinguished, system and
>>> unit.
>>>
>>> A system test looks at the external behavior of the whole system, so what
>>> it is testing changes only when the API changes, and tests should apply
>>> across many source code revisions. I can see separating those out.
>>>
>>> However, I feel River has been weak on unit tests, tests that check the
>>> implementation of e.g. a data structure against its javadoc comments. Those
>>> tests need to change with internal interface changes.
>>>
>>> Testing e.g. the multi-thread consistency of a complex data structure using
>>> only external system tests can be a mistake. It may take a very large
>>> configuration or many days of running to bring out a bug that could be found
>>> relatively fast by a unit test that hits the data structure rapidly.
>>>
>>> I'm a little concerned that the reconfiguration of the tests may represent
>>> an increased commitment to only doing system tests, and not doing any unit
>>> tests.
>>>
>> It doesn't represent any such thing in my mind! I'd expect to do a
>> bunch of those per release and for individual checkins etc.
>>
>> That's one of the reasons I'm interested in maybe moving to junit.
>> Maybe in fact, we bring in junit as a statement of intent for unit
>> tests and leave jtreg and friends for the "big" stuff.
>>
>> Feeling less concerned? ;)
>>
>>> Patricia
>>>
>>>
>>>
>>> On 12/2/2012 10:21 AM, Dan Creswell wrote:
>>>> ...
>>>>
>>>> On 30 November 2012 19:53, Gregg Wonderly<ge...@cox.net>  wrote:
>>>>> I still wonder why it doesn't feel right that the test suite be in the
>>>>> same branch as the associated "release".  Some of the new code needs new
>>>>> tests that demonstrate "functionality" while other tests that demonstrate
>>>>> compatibility will be run on each release without change.  It seems to me,
>>>>> that in the end, when a release goes out the door, the tests that validated
>>>>> that release, are part of that "release".
>>>>>
>>>> I have some similar disquiet, here's what I'm thinking at the moment
>>>> (subject to change faster than I can type!)...
>>>>
>>>> Compatibility and similar is really "compliance test" and is closely
>>>> linked to the APIs defined by the specs. Two flavours here:
>>>>
>>>> (1) "Well-behaved service" tests - does a service do join properly etc.
>>>> (2) Compliance tests - do the APIs behave right etc.
>>>>
>>>> These are kind of slow moving as are the APIs at least for now. I feel
>>>> right now like (1) might be a subproject applied to our own "built-in"
>>>> services as well as others. I'm tempted to say the same about (2) save
>>>> for the fact that if we give up on the idea someone else is going to
>>>> build a River clone this stuff becomes part of the release/test phase
>>>> for the core.
>>>>
>>>> Any other testing we're doing over and above what falls into (1) and
>>>> (2) above is part of tests for core and ought to be living in the same
>>>> branch and run as part of release. However, that's a little
>>>> uncomfortable when one wishes to freeze development of core to do
>>>> major work on the test harness etc. You branch core and test suite to
>>>> work purely on the suite.
>>>>
>>>> Manageable, I guess... well, until you have the trunk moving on and
>>>> breaking your already seriously under construction test suite where
>>>> everything in trunk is "old style" and will be a b*stard to merge
>>>> across but if you don't your branched test suite is gonna break for
>>>> nuisance reasons.
>>>>
>>>>> If we need two different types of tests, and a migration path from
>>>>> "functionality tests" into "compatibility tests", then maybe we really need
>>>>> two trees for development of each release, and branching the whole test
>>>>> suite would be one branch and the new release would be the other.
>>>>>
>>>>> Is that how you guys are thinking about this?
>>>>>
>>>> You have my (current) thinking above...
>>>>
>>>>> Gregg Wonderly
>>>>>
>>>>> On Nov 30, 2012, at 9:43 PM, Peter Firmstone<ji...@zeus.net.au>  wrote:
>>>>>
>>>>>> On 30/11/2012 12:27 AM, Dan Creswell wrote:
>>>>>>> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>   wrote:
>>>>>>>
>>>>>>>> The last passing trunk versions:
>>>>>>>>
>>>>>>>> Jdk6 Ubuntu     1407017
>>>>>>>> Solaris  x86        1373770
>>>>>>>> Jdk7 Ubuntu     1379873
>>>>>>>> Windows           1373770
>>>>>>>>
>>>>>>>> Revision 1373770 looks the most stable, I think all platforms were
>>>>>>>> passing
>>>>>>>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>>>>>>>
>>>>>>>> If we can confirm 1373770 as stable, maybe we should branch a release
>>>>>>>> off
>>>>>>>> that, buying some time to stabilise what we're working on now.
>>>>>>>>
>>>>>>>>
>>>>>>> I think we should do that. I'm also tempted to suggest we consider
>>>>>>> limiting
>>>>>>> our development until we've fixed these tests up. Or alternatively
>>>>>>> control
>>>>>>> the rate of patch merging so we can pace it and make sure the tests get
>>>>>>> focus.
>>>>>>>
>>>>>>> That's a bit sledgehammer but...
>>>>>>>
>>>>>>>
>>>>>> Ok, sounds like a plan, how do you think we should best approach the
>>>>>> task?
>>>>>>
>>>>>> Create a branch in skunk, just for qa and run tests against released
>>>>>> jars?
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Peter.
>>>>>>
>>>>>>


Re: Open discussion on development process

Posted by Gregg Wonderly <gr...@gmail.com>.
I concur that including JUnit for unit testing would be a good thing, and then we can move tests to JUnit if they are more likely unit tests than qa tests.

Gregg

On Dec 2, 2012, at 12:35 PM, Dan Creswell <da...@gmail.com> wrote:

> Nice to hear from you Patricia...
> 
> On 2 December 2012 10:29, Patricia Shanahan <pa...@acm.org> wrote:
>> I hope you don't mind me throwing in some random comments on this. I think
>> there are two types of testing that need to be distinguished, system and
>> unit.
>> 
>> A system test looks at the external behavior of the whole system, so what
>> it is testing changes only when the API changes, and tests should apply
>> across many source code revisions. I can see separating those out.
>> 
>> However, I feel River has been weak on unit tests, tests that check the
>> implementation of e.g. a data structure against its javadoc comments. Those
>> tests need to change with internal interface changes.
>> 
>> Testing e.g. the multi-thread consistency of a complex data structure using
>> only external system tests can be a mistake. It may take a very large
>> configuration or many days of running to bring out a bug that could be found
>> relatively fast by a unit test that hits the data structure rapidly.
>> 
>> I'm a little concerned that the reconfiguration of the tests may represent
>> an increased commitment to only doing system tests, and not doing any unit
>> tests.
>> 
> 
> It doesn't represent any such thing in my mind! I'd expect to do a
> bunch of those per release and for individual checkins etc.
> 
> That's one of the reasons I'm interested in maybe moving to junit.
> Maybe in fact, we bring in junit as a statement of intent for unit
> tests and leave jtreg and friends for the "big" stuff.
> 
> Feeling less concerned? ;)
> 
>> Patricia
>> 
>> 
>> 
>> On 12/2/2012 10:21 AM, Dan Creswell wrote:
>>> 
>>> ...
>>> 
>>> On 30 November 2012 19:53, Gregg Wonderly <ge...@cox.net> wrote:
>>>> 
>>>> I still wonder why it doesn't feel right that the test suite be in the
>>>> same branch as the associated "release".  Some of the new code needs new
>>>> tests that demonstrate "functionality" while other tests that demonstrate
>>>> compatibility will be run on each release without change.  It seems to me,
>>>> that in the end, when a release goes out the door, the tests that validated
>>>> that release, are part of that "release".
>>>> 
>>> 
>>> I have some similar disquiet, here's what I'm thinking at the moment
>>> (subject to change faster than I can type!)...
>>> 
>>> Compatibility and similar is really "compliance test" and is closely
>>> linked to the APIs defined by the specs. Two flavours here:
>>> 
>>> (1) "Well-behaved service" tests - does a service do join properly etc.
>>> (2) Compliance tests - do the APIs behave right etc.
>>> 
>>> These are kind of slow moving as are the APIs at least for now. I feel
>>> right now like (1) might be a subproject applied to our own "built-in"
>>> services as well as others. I'm tempted to say the same about (2) save
>>> for the fact that if we give up on the idea someone else is going to
>>> build a River clone this stuff becomes part of the release/test phase
>>> for the core.
>>> 
>>> Any other testing we're doing over and above what falls into (1) and
>>> (2) above is part of tests for core and ought to be living in the same
>>> branch and run as part of release. However, that's a little
>>> uncomfortable when one wishes to freeze development of core to do
>>> major work on the test harness etc. You branch core and test suite to
>>> work purely on the suite.
>>> 
>>> Manageable, I guess... well, until you have the trunk moving on and
>>> breaking your already seriously under construction test suite where
>>> everything in trunk is "old style" and will be a b*stard to merge
>>> across but if you don't your branched test suite is gonna break for
>>> nuisance reasons.
>>> 
>>>> If we need two different types of tests, and a migration path from
>>>> "functionality tests" into "compatibility tests", then maybe we really need
>>>> two trees for development of each release, and branching the whole test
>>>> suite would be one branch and the new release would be the other.
>>>> 
>>>> Is that how you guys are thinking about this?
>>>> 
>>> 
>>> You have my (current) thinking above...
>>> 
>>>> Gregg Wonderly
>>>> 
>>>> On Nov 30, 2012, at 9:43 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>>>> 
>>>>> On 30/11/2012 12:27 AM, Dan Creswell wrote:
>>>>>> 
>>>>>> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>  wrote:
>>>>>> 
>>>>>>> The last passing trunk versions:
>>>>>>> 
>>>>>>> Jdk6 Ubuntu     1407017
>>>>>>> Solaris  x86        1373770
>>>>>>> Jdk7 Ubuntu     1379873
>>>>>>> Windows           1373770
>>>>>>> 
>>>>>>> Revision 1373770 looks the most stable, I think all platforms were
>>>>>>> passing
>>>>>>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>>>>>> 
>>>>>>> If we can confirm 1373770 as stable, maybe we should branch a release
>>>>>>> off
>>>>>>> that, buying some time to stabilise what we're working on now.
>>>>>>> 
>>>>>>> 
>>>>>> I think we should do that. I'm also tempted to suggest we consider
>>>>>> limiting
>>>>>> our development until we've fixed these tests up. Or alternatively
>>>>>> control
>>>>>> the rate of patch merging so we can pace it and make sure the tests get
>>>>>> focus.
>>>>>> 
>>>>>> That's a bit sledgehammer but...
>>>>>> 
>>>>>> 
>>>>> 
>>>>> Ok, sounds like a plan, how do you think we should best approach the
>>>>> task?
>>>>> 
>>>>> Create a branch in skunk, just for qa and run tests against released
>>>>> jars?
>>>>> 
>>>>> Regards,
>>>>> 
>>>>> Peter.
>>>>> 
>>>>> 
>>>> 
>> 


Re: Open discussion on development process

Posted by Peter Firmstone <ji...@zeus.net.au>.
+1  Peter.

On 3/12/2012 4:35 AM, Dan Creswell wrote:
> Nice to hear from you Patricia...
>
> On 2 December 2012 10:29, Patricia Shanahan<pa...@acm.org>  wrote:
>


Re: Open discussion on development process

Posted by Dan Creswell <da...@gmail.com>.
Nice to hear from you Patricia...

On 2 December 2012 10:29, Patricia Shanahan <pa...@acm.org> wrote:
> I hope you don't mind me throwing in some random comments on this. I think
> there are two types of testing that need to be distinguished, system and
> unit.
>
> A system test looks at the external behavior of the whole system, so what
> it is testing changes only when the API changes, and tests should apply
> across many source code revisions. I can see separating those out.
>
> However, I feel River has been weak on unit tests, tests that check the
> implementation of e.g. a data structure against its javadoc comments. Those
> tests need to change with internal interface changes.
>
> Testing e.g. the multi-thread consistency of a complex data structure using
> only external system tests can be a mistake. It may take a very large
> configuration or many days of running to bring out a bug that could be found
> relatively fast by a unit test that hits the data structure rapidly.
>
> I'm a little concerned that the reconfiguration of the tests may represent
> an increased commitment to only doing system tests, and not doing any unit
> tests.
>

It doesn't represent any such thing in my mind! I'd expect to do a
bunch of those per release and for individual checkins etc.

That's one of the reasons I'm interested in maybe moving to junit.
Maybe in fact, we bring in junit as a statement of intent for unit
tests and leave jtreg and friends for the "big" stuff.

Feeling less concerned? ;)
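Patricia's point about hammering a shared structure from many threads can be sketched in plain Java (no JUnit dependency, so the sketch stays self-contained; StressHarness is an invented name, not existing River code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a concurrency unit test: release many threads at once
// against one shared structure, then check the aggregate invariant.
// With AtomicLong this holds; swap in an unsynchronized long and the
// lost updates that might take days to surface in a system test show
// up in seconds.
class StressHarness {
    static long hammer(int threads, int opsPerThread) throws InterruptedException {
        final AtomicLong counter = new AtomicLong();   // structure under test
        final CountDownLatch ready = new CountDownLatch(1);
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                try { ready.await(); } catch (InterruptedException e) { return; }
                for (int j = 0; j < opsPerThread; j++) counter.incrementAndGet();
            });
            workers[i].start();
        }
        ready.countDown();                 // start all workers simultaneously
        for (Thread t : workers) t.join(); // wait for every worker to finish
        return counter.get();
    }
}
```

Under JUnit this would become a @Test asserting that hammer(threads, ops) equals threads * ops.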

> Patricia
>
>
>
> On 12/2/2012 10:21 AM, Dan Creswell wrote:
>>
>> ...
>>
>> On 30 November 2012 19:53, Gregg Wonderly <ge...@cox.net> wrote:
>>>
>>> I still wonder why it doesn't feel right that the test suite be in the
>>> same branch as the associated "release".  Some of the new code needs new
>>> tests that demonstrate "functionality" while other tests that demonstrate
>>> compatibility will be run on each release without change.  It seems to me,
>>> that in the end, when a release goes out the door, the tests that validated
>>> that release, are part of that "release".
>>>
>>
>> I have some similar disquiet, here's what I'm thinking at the moment
>> (subject to change faster than I can type!)...
>>
>> Compatibility and similar is really "compliance test" and is closely
>> linked to the APIs defined by the specs. Two flavours here:
>>
>> (1) "Well-behaved service" tests - does a service do join properly etc.
>> (2) Compliance tests - do the APIs behave right etc.
>>
>> These are kind of slow moving as are the APIs at least for now. I feel
>> right now like (1) might be a subproject applied to our own "built-in"
>> services as well as others. I'm tempted to say the same about (2) save
>> for the fact that if we give up on the idea someone else is going to
>> build a River clone this stuff becomes part of the release/test phase
>> for the core.
>>
>> Any other testing we're doing over and above what falls into (1) and
>> (2) above is part of tests for core and ought to be living in the same
>> branch and run as part of release. However, that's a little
>> uncomfortable when one wishes to freeze development of core to do
>> major work on the test harness etc. You branch core and test suite to
>> work purely on the suite.
>>
>> Manageable, I guess... well, until you have the trunk moving on and
>> breaking your already seriously under construction test suite where
>> everything in trunk is "old style" and will be a b*stard to merge
>> across but if you don't your branched test suite is gonna break for
>> nuisance reasons.
>>
>>> If we need two different types of tests, and a migration path from
>>> "functionality tests" into "compatibility tests", then maybe we really need
>>> two trees for development of each release, and branching the whole test
>>> suite would be one branch and the new release would be the other.
>>>
>>> Is that how you guys are thinking about this?
>>>
>>
>> You have my (current) thinking above...
>>
>>> Gregg Wonderly
>>>
>>> On Nov 30, 2012, at 9:43 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>>>
>>>> On 30/11/2012 12:27 AM, Dan Creswell wrote:
>>>>>
>>>>> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>  wrote:
>>>>>
>>>>>> The last passing trunk versions:
>>>>>>
>>>>>> Jdk6 Ubuntu     1407017
>>>>>> Solaris  x86        1373770
>>>>>> Jdk7 Ubuntu     1379873
>>>>>> Windows           1373770
>>>>>>
>>>>>> Revision 1373770 looks the most stable, I think all platforms were
>>>>>> passing
>>>>>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>>>>>
>>>>>> If we can confirm 1373770 as stable, maybe we should branch a release
>>>>>> off
>>>>>> that, buying some time to stabilise what we're working on now.
>>>>>>
>>>>>>
>>>>> I think we should do that. I'm also tempted to suggest we consider
>>>>> limiting
>>>>> our development until we've fixed these tests up. Or alternatively
>>>>> control
>>>>> the rate of patch merging so we can pace it and make sure the tests get
>>>>> focus.
>>>>>
>>>>> That's a bit sledgehammer but...
>>>>>
>>>>>
>>>>
>>>> Ok, sounds like a plan, how do you think we should best approach the
>>>> task?
>>>>
>>>> Create a branch in skunk, just for qa and run tests against released
>>>> jars?
>>>>
>>>> Regards,
>>>>
>>>> Peter.
>>>>
>>>>
>>>
>

Re: Open discussion on development process

Posted by Gregg Wonderly <gr...@gmail.com>.
Thanks for getting my mind to switch to the right thought process.  Unit/integration testing is what I meant when I said functionality testing, and it really gives the right division of testing as a starting point.  At some point, if a behavior becomes "spec", tests demonstrating the correct behavior can be moved over to the system tests.  Unit tests only need to be run as part of release development, and while they can continue to exist, if an API changes the tests might change, or go away to be replaced with something more applicable.

So, it seems to me that qa can be shoved into a branch because it is "system test" behavior.  Then we just need to figure out how to develop more tests that validate some of the issues we see with lookup misbehavior.  I think I should spend a moment doing some diffs in my fork of Jini 2.1 to see if there is something I tried to "fix" with lookup but have lost sight/memory of.  I have dealt with a number of different things over the years, and I do have a lookup management engine, different from the SDM, which uses notification and polling to manage a lookup cache.  We developed it during one larger project that was experiencing some odd lookup problems, where I wanted more explicit control and knowledge of the behaviors that were broken.

Gregg

On Dec 2, 2012, at 12:29 PM, Patricia Shanahan <pa...@acm.org> wrote:

> I hope you don't mind me throwing in some random comments on this. I think there are two types of testing that need to be distinguished, system and unit.
> 
> A system test looks at the external behavior of the whole system, so what it is testing changes only when the API changes, and tests should apply across many source code revisions. I can see separating those out.
> 
> However, I feel River has been weak on unit tests, tests that check the implementation of e.g. a data structure against its javadoc comments. Those tests need to change with internal interface changes.
> 
> Testing e.g. the multi-thread consistency of a complex data structure using only external system tests can be a mistake. It may take a very large configuration or many days of running to bring out a bug that could be found relatively fast by a unit test that hits the data structure rapidly.
> 
> I'm a little concerned that the reconfiguration of the tests may represent an increased commitment to only doing system tests, and not doing any unit tests.
> 
> Patricia
> 
> 
> On 12/2/2012 10:21 AM, Dan Creswell wrote:
>> ...
>> 
>> On 30 November 2012 19:53, Gregg Wonderly <ge...@cox.net> wrote:
>>> I still wonder why it doesn't feel right that the test suite be in the same branch as the associated "release".  Some of the new code needs new tests that demonstrate "functionality" while other tests that demonstrate compatibility will be run on each release without change.  It seems to me, that in the end, when a release goes out the door, the tests that validated that release, are part of that "release".
>>> 
>> 
>> I have some similar disquiet, here's what I'm thinking at the moment
>> (subject to change faster than I can type!)...
>> 
>> Compatibility and similar is really "compliance test" and is closely
>> linked to the APIs defined by the specs. Two flavours here:
>> 
>> (1) "Well-behaved service" tests - does a service do join properly etc.
>> (2) Compliance tests - do the APIs behave right etc.
>> 
>> These are kind of slow moving as are the APIs at least for now. I feel
>> right now like (1) might be a subproject applied to our own "built-in"
>> services as well as others. I'm tempted to say the same about (2) save
>> for the fact that if we give up on the idea someone else is going to
>> build a River clone this stuff becomes part of the release/test phase
>> for the core.
>> 
>> Any other testing we're doing over and above what falls into (1) and
>> (2) above is part of tests for core and ought to be living in the same
>> branch and run as part of release. However, that's a little
>> uncomfortable when one wishes to freeze development of core to do
>> major work on the test harness etc. You branch core and test suite to
>> work purely on the suite.
>> 
>> Manageable, I guess, until you have the trunk moving on and
>> breaking your already seriously under-construction test suite, where
>> everything in trunk is "old style" and will be a b*stard to merge
>> across; but if you don't, your branched test suite is gonna break for
>> nuisance reasons.
>> 
>>> If we need two different types of tests, and a migration path from "functionality tests" into "compatibility tests", then maybe we really need two trees for development of each release, and branching the whole test suite would be one branch and the new release would be the other.
>>> 
>>> Is that how you guys are thinking about this?
>>> 
>> 
>> You have my (current) thinking above...
>> 
>>> Gregg Wonderly
>>> 
>>> On Nov 30, 2012, at 9:43 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>>> 
>>>> On 30/11/2012 12:27 AM, Dan Creswell wrote:
>>>>> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>  wrote:
>>>>> 
>>>>>> The last passing trunk versions:
>>>>>> 
>>>>>> Jdk6 Ubuntu     1407017
>>>>>> Solaris  x86        1373770
>>>>>> Jdk7 Ubuntu     1379873
>>>>>> Windows           1373770
>>>>>> 
>>>>>> Revision 1373770 looks the most stable, I think all platforms were passing
>>>>>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>>>>> 
>>>>>> If we can confirm 1373770 as stable, maybe we should branch a release off
>>>>>> that, buying some time to stabilise what we're working on now.
>>>>>> 
>>>>>> 
>>>>> I think we should do that. I'm also tempted to suggest we consider limiting
>>>>> our development until we've fixed these tests up. Or alternatively control
>>>>> the rate of patch merging so we can pace it and make sure the tests get
>>>>> focus.
>>>>> 
>>>>> That's a bit sledgehammer but...
>>>>> 
>>>>> 
>>>> 
>>>> Ok, sounds like a plan, how do you think we should best approach the task?
>>>> 
>>>> Create a branch in skunk, just for qa and run tests against released jars?
>>>> 
>>>> Regards,
>>>> 
>>>> Peter.
>>>> 
>>>> 
>>> 
> 


Re: Open discussion on development process

Posted by Patricia Shanahan <pa...@acm.org>.
I hope you don't mind me throwing in some random comments on this. I 
think there are two types of testing that need to be distinguished, 
system and unit.

A system test looks at the external behavior of the whole system, so 
what it is testing changes only when the API changes, and tests should 
apply across many source code revisions. I can see separating those out.

However, I feel River has been weak on unit tests, tests that check the 
implementation of e.g. a data structure against its javadoc comments. 
Those tests need to change with internal interface changes.

Testing e.g. the multi-thread consistency of a complex data structure 
using only external system tests can be a mistake. It may take a very 
large configuration or many days of running to bring out a bug that 
could be found relatively fast by a unit test that hits the data 
structure rapidly.
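
Patricia's point can be sketched with a minimal, hypothetical unit test 
(not from the River qa suite; the class and names are made up for 
illustration): several threads hammer a shared structure, and an 
invariant is checked at the end. A race here fails in seconds, where an 
external system test might need days of running to hit it.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stress-style unit test: hammer a shared map from several
// threads, then verify that no update was lost.
public class MapStressTest {
    static final ConcurrentMap<Integer, AtomicInteger> COUNTS =
            new ConcurrentHashMap<Integer, AtomicInteger>();
    static final int THREADS = 8, OPS = 10000;

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        final CountDownLatch done = new CountDownLatch(THREADS);
        for (int t = 0; t < THREADS; t++) {
            pool.submit(new Runnable() {
                public void run() {
                    for (int i = 0; i < OPS; i++) {
                        Integer key = Integer.valueOf(i % 100);
                        AtomicInteger c = COUNTS.get(key);
                        if (c == null) {
                            // putIfAbsent keeps the check-then-act atomic;
                            // a plain get()/put() here silently loses updates
                            // under contention.
                            AtomicInteger fresh = new AtomicInteger();
                            c = COUNTS.putIfAbsent(key, fresh);
                            if (c == null) c = fresh;
                        }
                        c.incrementAndGet();
                    }
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        int total = 0;
        for (AtomicInteger c : COUNTS.values()) total += c.get();
        if (total != THREADS * OPS)
            throw new AssertionError("lost updates, total=" + total);
        System.out.println("total=" + total); // prints total=80000
    }
}
```

Something along these lines, written per data structure as shared state 
gets reduced, would complement rather than replace the qa system tests.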

I'm a little concerned that the reconfiguration of the tests may 
represent an increased commitment to only doing system tests, and not 
doing any unit tests.

Patricia


On 12/2/2012 10:21 AM, Dan Creswell wrote:
> ...
>
> On 30 November 2012 19:53, Gregg Wonderly <ge...@cox.net> wrote:
>> I still wonder why it doesn't feel right that the test suite be in the same branch as the associated "release".  Some of the new code needs new tests that demonstrate "functionality" while other tests that demonstrate compatibility will be run on each release without change.  It seems to me that, in the end, when a release goes out the door, the tests that validated that release are part of that "release".
>>
>
> I have some similar disquiet, here's what I'm thinking at the moment
> (subject to change faster than I can type!)...
>
> Compatibility and similar is really "compliance test" and is closely
> linked to the APIs defined by the specs. Two flavours here:
>
> (1) "Well-behaved service" tests - does a service do join properly etc.
> (2) Compliance tests - do the APIs behave right etc.
>
> These are kind of slow moving, as are the APIs, at least for now. I feel
> right now like (1) might be a subproject applied to our own "built-in"
> services as well as others. I'm tempted to say the same about (2), save
> for the fact that, if we give up on the idea that someone else is going to
> build a River clone, this stuff becomes part of the release/test phase
> for the core.
>
> Any other testing we're doing over and above what falls into (1) and
> (2) above is part of tests for core and ought to be living in the same
> branch and run as part of release. However, that's a little
> uncomfortable when one wishes to freeze development of core to do
> major work on the test harness etc. You branch core and test suite to
> work purely on the suite.
>
> Manageable, I guess, until you have the trunk moving on and
> breaking your already seriously under-construction test suite, where
> everything in trunk is "old style" and will be a b*stard to merge
> across; but if you don't, your branched test suite is gonna break for
> nuisance reasons.
>
>> If we need two different types of tests, and a migration path from "functionality tests" into "compatibility tests", then maybe we really need two trees for development of each release, and branching the whole test suite would be one branch and the new release would be the other.
>>
>> Is that how you guys are thinking about this?
>>
>
> You have my (current) thinking above...
>
>> Gregg Wonderly
>>
>> On Nov 30, 2012, at 9:43 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>>
>>> On 30/11/2012 12:27 AM, Dan Creswell wrote:
>>>> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>  wrote:
>>>>
>>>>> The last passing trunk versions:
>>>>>
>>>>> Jdk6 Ubuntu     1407017
>>>>> Solaris  x86        1373770
>>>>> Jdk7 Ubuntu     1379873
>>>>> Windows           1373770
>>>>>
>>>>> Revision 1373770 looks the most stable, I think all platforms were passing
>>>>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>>>>
>>>>> If we can confirm 1373770 as stable, maybe we should branch a release off
>>>>> that, buying some time to stabilise what we're working on now.
>>>>>
>>>>>
>>>> I think we should do that. I'm also tempted to suggest we consider limiting
>>>> our development until we've fixed these tests up. Or alternatively control
>>>> the rate of patch merging so we can pace it and make sure the tests get
>>>> focus.
>>>>
>>>> That's a bit sledgehammer but...
>>>>
>>>>
>>>
>>> Ok, sounds like a plan, how do you think we should best approach the task?
>>>
>>> Create a branch in skunk, just for qa and run tests against released jars?
>>>
>>> Regards,
>>>
>>> Peter.
>>>
>>>
>>


Re: Open discussion on development process

Posted by Dan Creswell <da...@gmail.com>.
...

On 30 November 2012 19:53, Gregg Wonderly <ge...@cox.net> wrote:
> I still wonder why it doesn't feel right that the test suite be in the same branch as the associated "release".  Some of the new code needs new tests that demonstrate "functionality" while other tests that demonstrate compatibility will be run on each release without change.  It seems to me that, in the end, when a release goes out the door, the tests that validated that release are part of that "release".
>

I have some similar disquiet, here's what I'm thinking at the moment
(subject to change faster than I can type!)...

Compatibility and similar is really "compliance test" and is closely
linked to the APIs defined by the specs. Two flavours here:

(1) "Well-behaved service" tests - does a service do join properly etc.
(2) Compliance tests - do the APIs behave right etc.

These are kind of slow moving, as are the APIs, at least for now. I feel
right now like (1) might be a subproject applied to our own "built-in"
services as well as others. I'm tempted to say the same about (2), save
for the fact that, if we give up on the idea that someone else is going to
build a River clone, this stuff becomes part of the release/test phase
for the core.

Any other testing we're doing over and above what falls into (1) and
(2) above is part of tests for core and ought to be living in the same
branch and run as part of release. However, that's a little
uncomfortable when one wishes to freeze development of core to do
major work on the test harness etc. You branch core and test suite to
work purely on the suite.

Manageable, I guess, until you have the trunk moving on and
breaking your already seriously under-construction test suite, where
everything in trunk is "old style" and will be a b*stard to merge
across; but if you don't, your branched test suite is gonna break for
nuisance reasons.

> If we need two different types of tests, and a migration path from "functionality tests" into "compatibility tests", then maybe we really need two trees for development of each release, and branching the whole test suite would be one branch and the new release would be the other.
>
> Is that how you guys are thinking about this?
>

You have my (current) thinking above...

> Gregg Wonderly
>
> On Nov 30, 2012, at 9:43 PM, Peter Firmstone <ji...@zeus.net.au> wrote:
>
>> On 30/11/2012 12:27 AM, Dan Creswell wrote:
>>> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>  wrote:
>>>
>>>> The last passing trunk versions:
>>>>
>>>> Jdk6 Ubuntu     1407017
>>>> Solaris  x86        1373770
>>>> Jdk7 Ubuntu     1379873
>>>> Windows           1373770
>>>>
>>>> Revision 1373770 looks the most stable, I think all platforms were passing
>>>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>>>
>>>> If we can confirm 1373770 as stable, maybe we should branch a release off
>>>> that, buying some time to stabilise what we're working on now.
>>>>
>>>>
>>> I think we should do that. I'm also tempted to suggest we consider limiting
>>> our development until we've fixed these tests up. Or alternatively control
>>> the rate of patch merging so we can pace it and make sure the tests get
>>> focus.
>>>
>>> That's a bit sledgehammer but...
>>>
>>>
>>
>> Ok, sounds like a plan, how do you think we should best approach the task?
>>
>> Create a branch in skunk, just for qa and run tests against released jars?
>>
>> Regards,
>>
>> Peter.
>>
>>
>

Re: Open discussion on development process

Posted by Gregg Wonderly <ge...@cox.net>.
I still wonder why it doesn't feel right that the test suite be in the same branch as the associated "release".  Some of the new code needs new tests that demonstrate "functionality" while other tests that demonstrate compatibility will be run on each release without change.  It seems to me that, in the end, when a release goes out the door, the tests that validated that release are part of that "release".

If we need two different types of tests, and a migration path from "functionality tests" into "compatibility tests", then maybe we really need two trees for development of each release, and branching the whole test suite would be one branch and the new release would be the other.

Is that how you guys are thinking about this?

Gregg Wonderly

On Nov 30, 2012, at 9:43 PM, Peter Firmstone <ji...@zeus.net.au> wrote:

> On 30/11/2012 12:27 AM, Dan Creswell wrote:
>> On 29 November 2012 13:11, Peter Firmstone<ji...@zeus.net.au>  wrote:
>> 
>>> The last passing trunk versions:
>>> 
>>> Jdk6 Ubuntu     1407017
>>> Solaris  x86        1373770
>>> Jdk7 Ubuntu     1379873
>>> Windows           1373770
>>> 
>>> Revision 1373770 looks the most stable, I think all platforms were passing
>>> on this,  1407017 only passed on Ubuntu jdk6, nothing else.
>>> 
>>> If we can confirm 1373770 as stable, maybe we should branch a release off
>>> that, buying some time to stabilise what we're working on now.
>>> 
>>> 
>> I think we should do that. I'm also tempted to suggest we consider limiting
>> our development until we've fixed these tests up. Or alternatively control
>> the rate of patch merging so we can pace it and make sure the tests get
>> focus.
>> 
>> That's a bit sledgehammer but...
>> 
>> 
> 
> Ok, sounds like a plan, how do you think we should best approach the task?
> 
> Create a branch in skunk, just for qa and run tests against released jars?
> 
> Regards,
> 
> Peter.
> 
> 


Re: Open discussion on development process

Posted by Dan Creswell <da...@gmail.com>.
>>>
>>> If we can confirm 1373770 as stable, maybe we should branch a release off
>>> that, buying some time to stabilise what we're working on now.
>>>
>>>
>> I think we should do that. I'm also tempted to suggest we consider
>> limiting
>> our development until we've fixed these tests up. Or alternatively control
>> the rate of patch merging so we can pace it and make sure the tests get
>> focus.
>>
>> That's a bit sledgehammer but...
>>
>>
>
> Ok, sounds like a plan, how do you think we should best approach the task?
>
> Create a branch in skunk, just for qa and run tests against released jars?
>

That's my gut instinct but let's thrash through the other stuff on
this thread with Gregg and then decide.

Okay?

> Regards,
>
> Peter.
>
>