Posted to dev@activemq.apache.org by artnaseef <ar...@artnaseef.com> on 2015/02/01 01:33:07 UTC

[DISCUSS] Releases and Testing

*Overview*
Defining a consistent approach to tests for releases will help us, both
near-term and long-term, to come to agreement on (a) how to maintain quality
releases and (b) how to improve the tests in a way that serves the needs of
releases.

As a general practice, tests that are unreliable raise a major question -
just how valuable are the tests?  With enough unreliable tests, can we ever
expect a single build to complete successfully?

In light of these unreliable tests, how can we ensure that the quality of
ActiveMQ is maintained and that the tests actually safeguard the solution
against the introduction of bugs?

*Ideally*
Putting some ideals here so we have the "end in mind" (Stephen Covey) --
i.e. so they can help us move in the right direction overall.  These are
definitely not feasible within any reasonable timeframe.

Putting on my "purist" hat -- ideally, we would analyze every test to
determine the possibility of FALSE-NEGATIVES *and* FALSE-POSITIVES generated
by the test.  From there, it would be possible to look for methods of
distinguishing false-negatives and false-positives (for example, by
reviewing logs) and improving the tests so they hopefully never end in false
results.
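For concreteness, here is a contrived JUnit 4 sketch of the false-positive
case (all names invented; this is not actual ActiveMQ test code). A false
negative is a test that fails although the code is correct; a false positive
is a test that passes although the code is broken. Here, the assertion runs
on a listener thread, so a failure there never reaches the JUnit runner and
the test goes green regardless of the outcome:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class FalsePositiveSketchTest {

        // False-positive-prone: the AssertionError is thrown on the spawned
        // thread and dies with it, so the test method itself always passes.
        @Test
        public void ackCountCheckedOnWrongThread() throws Exception {
            Thread listener = new Thread(new Runnable() {
                public void run() {
                    assertEquals("acks", 1, 2); // fails, but JUnit never sees it
                }
            });
            listener.start();
            listener.join();
            // no assertion ever runs on the test thread -> green regardless
        }
    }

Catching that kind of problem generally means funneling failures back to the
test thread (or reviewing logs, as suggested above).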

Another ideal approach - return to the drawing board and define all of the
test scenarios needed to ensure ActiveMQ operates properly, then determine
the most reliable way to cover those test scenarios.  Discard redundant
tests and replace unreliable ones with reliable ones.

*Approach for Releases*
Back to the focus of this thread - let's define an acceptable approach to
the release.  Here is an idea to get the discussion started:

- Run the build with the Maven "-fn" (fail-never) flag, so the build keeps
going past failing modules, then review all failed tests and determine a
course of action for each:
  - Re-run the test if there is reason (preferably a clear, documented
reason) to believe the failure was a false-negative (e.g. a test that
times-out too aggressively)
  - Declare the failure a bug (or at least, a suspected bug), create a Jira
entry, and resolve
  - Replace the test with a more reliable alternative that addresses the
same underlying concern as the original test (see the sketch just after
this list)
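As a sketch of that last option (hypothetical JUnit 4 code -- the queue below
merely stands in for an asynchronous broker path; none of this is actual
ActiveMQ test code):

    import static org.junit.Assert.assertTrue;

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    import org.junit.Test;

    public class DispatchTimingSketchTest {

        // Stand-in for an async delivery path: the "broker" hands the message
        // over on another thread after a variable delay.
        private final BlockingQueue<String> inbox = new LinkedBlockingQueue<String>();

        private void sendAsync(final String msg) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(200); // delivery latency varies in real life
                    } catch (InterruptedException ignored) {
                    }
                    inbox.offer(msg);
                }
            }).start();
        }

        // Fragile original: a fixed 100ms wait loses the race on a slow or
        // loaded machine -- a false negative, since delivery actually works.
        @Test
        public void receivesMessageFragile() throws Exception {
            sendAsync("hello");
            Thread.sleep(100);
            assertTrue(inbox.contains("hello"));
        }

        // More reliable replacement: poll up to a generous deadline and pass
        // as soon as the condition holds, so only a genuine bug (or a truly
        // pathological environment) exhausts the full 30-second wait.
        @Test
        public void receivesMessageRobust() throws Exception {
            sendAsync("hello");
            long deadline = System.currentTimeMillis() + 30000;
            while (inbox.isEmpty() && System.currentTimeMillis() < deadline) {
                Thread.sleep(50);
            }
            assertTrue("message not delivered within 30s", inbox.contains("hello"));
        }
    }

The same deadline-polling pattern also addresses the first bullet: with a
generous ceiling that passes early, "timed out too aggressively" re-runs
should rarely be needed.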

*Call for Feedback*
To move this discussion forward, please provide as much negative feedback as
necessary and, at the same time, please provide reasoning or ideas that can
help move things forward.  Criticism (unactionable feedback) is discouraging
and unwelcome.  On a similar note - the practice of throwing out "-1" votes,
even for small, easily-addressed issues, without any offer to assist is
getting old.  I dream of seeing "-1, file <x> needs an update; I'll take
care of that myself right now."

*Wrap-Up*
Let's get this solved, continue with frequent releases, and then move
forward in improving ActiveMQ and enjoying the results!

Expect another thread soon with ideas on improving the tests in general.




--
View this message in context: http://activemq.2283324.n4.nabble.com/DISCUSS-Releases-and-Testing-tp4690763.html
Sent from the ActiveMQ - Dev mailing list archive at Nabble.com.

Re: [DISCUSS] Releases and Testing

Posted by "Jamie G." <ja...@gmail.com>.
You mean on AMQ 5.12? Lots of production AMQ5 out there; moving to 6
may be problematic, so being able to maintain a healthy 5.x line is
important to a lot of users. If there are contributors out there
willing to pitch in, then great :)

Cheers,
Jamie

On Sun, Feb 1, 2015 at 4:51 PM, Clebert <cl...@gmail.com> wrote:
> I understand. I'm just saying this could be done through the newer branches. It's a better strategy for moving forward, IMHO.
> [...]

Re: [DISCUSS] Releases and Testing

Posted by Clebert <cl...@gmail.com>.
I understand. I'm just saying this could be done through the newer branches. It's a better strategy for moving forward, IMHO.

-- Clebert Suconic typing on the iPhone. 

> On Feb 1, 2015, at 13:13, Jamie G. <ja...@gmail.com> wrote:
> [...]

Re: [DISCUSS] Releases and Testing

Posted by "Jamie G." <ja...@gmail.com>.
ActiveMQ 5.x is in wide deployment, so improving the community's ability
to maintain the code and deliver service releases is good.

Breaking the tests into 'release' and 'deep testing' sets does make sense
in the context of 10-hour builds; one way to express such a split is
sketched below. The goal is still for end users to be able to run said
tests successfully. I'm just suggesting a community-oriented approach to
tackling the project.
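
For illustration, a hypothetical way to express that split with JUnit 4
categories (the marker interface names are invented; this is not actual
ActiveMQ test code):

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    public class QueuePurgeSketchTest {

        // Invented marker interfaces; tests are selected by category.
        public interface ReleaseTests {}
        public interface DeepTests {}

        @Test
        @Category(ReleaseTests.class)
        public void purgeEmptiesQueue() {
            // quick, deterministic check suitable for gating a release build
        }

        @Test
        @Category(DeepTests.class)
        public void purgeUnderSustainedLoad() {
            // long-running soak test, left to the nightly 'deep testing' pass
        }
    }

Surefire can then include or exclude a category per run (its "groups"
parameter takes the fully-qualified name of a category), so the release
build runs only the fast, reliable set.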

Cheers,
Jamie

On Sun, Feb 1, 2015 at 2:14 PM, Clebert <cl...@gmail.com> wrote:
> [...]

Re: [DISCUSS] Releases and Testing

Posted by Clebert <cl...@gmail.com>.
Please look at my post regarding the testsuite. Why don't you guys contribute effort towards the activemq-6 branch? There's an ongoing effort there.

-- Clebert Suconic typing on the iPhone. 

> On Feb 1, 2015, at 12:34, Jamie G. <ja...@gmail.com> wrote:
> [...]

Re: [DISCUSS] Releases and Testing

Posted by "Jamie G." <ja...@gmail.com>.
The choice to fix, refactor, or remove test cases should be reasonably
straightforward on a case-by-case basis - the real challenge in my
mind is the volume to be reviewed.

Perhaps the AMQ community could parcel the test cases into small sets,
each tracked by a Jira task. These sets could then be posted to a
community tracking page, showing which ones have been reviewed, which
are under review, and which ones have not been touched.

The reason I'd like to see a table tracking these test-case-set
reviews is that it would provide new contributors an easy way to see
where they could jump in and help out -- much like the old ServiceMix
community wish page (that's how I was able to jump in and start
helping effectively back in the day). Many hands make the work
light.

The overhead of having the tracking table, Jiras, and coordination
should be offset by spreading the work well over many people, and by
providing new contributors a great way to start interacting with the
community.

Cheers,
Jamie

On Sat, Jan 31, 2015 at 9:03 PM, artnaseef <ar...@artnaseef.com> wrote:
> [...]