Posted to dev@beehive.apache.org by Heather Stephens <he...@bea.com> on 2004/10/13 19:23:21 UTC

Beehive "Quality Processes" (WAS RE: [proposal] Beehive release strategy)

I'd like to split the QA/quality/code tidy/code review/etc. questions
out onto a separate thread and encapsulate in a separate document and
proposal.  Holler if you don't like that approach.  :)

-----Original Message-----
From: Ken Tam 
Sent: Tuesday, October 12, 2004 2:59 PM
To: Beehive Developers
Subject: RE: [proposal] Beehive release strategy

Overall strategy looks good.  Thanks Heather!

Drilling in:

1)  Need a little more definition around the role of automated QA
processes wrt releases.  Per Rotan's comment, I don't think there's a
low-cost mechanism to ensure that a checkin doesn't _happen_ unless some
set of tests pass, but we can and should e.g. set up a publicly
visible cruisecontrol machine to monitor and archive ongoing status wrt
test suites.  Logistics of how this works (physical location of this
machine, network perms, setup, etc) probably need to involve the PMC.
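To make the shape of such a monitor concrete, here is a minimal sketch
(not Beehive's actual configuration; the commands and field names are
illustrative) of the cycle a CruiseControl machine automates: update the
working copy, run the test suite, and archive the result so the ongoing
status stays visible:

```python
# Hypothetical sketch of a continuous-monitoring cycle. The `runner`
# argument is injectable so the logic can be exercised without a real
# svn/ant installation; CruiseControl itself handles scheduling,
# publishing, and notification on top of a loop like this.
import subprocess
import time


def run_cycle(commands, runner=subprocess.call):
    """Run each named build/test command in order; return a status record.

    `commands` is a list of (name, argv) pairs. Execution stops at the
    first failing command, mirroring a build that aborts on error.
    """
    results = {}
    for name, cmd in commands:
        results[name] = runner(cmd)          # 0 means success
        if results[name] != 0:
            break                            # stop at the first failure
    return {
        "timestamp": time.time(),
        "passed": all(code == 0 for code in results.values()),
        "results": results,
    }


# Example with a stubbed runner standing in for "svn update; ant drt":
history = []
commands = [("update", ["svn", "update"]), ("tests", ["ant", "drt"])]
history.append(run_cycle(commands, runner=lambda cmd: 0))
```

The archived `history` list is the piece that makes status publicly
inspectable over time, which is the point of the proposal above.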
  
<< snip >>

-----Original Message-----
From: Heather Stephens 
Sent: Tuesday, October 12, 2004 8:41 AM
To: Beehive Developers
Subject: RE: [proposal] Beehive release strategy

Regarding #2, as we're putting together the bootstrapped roadmap doc and
project plans, could everyone consider the code tidy request as well?

Once we've seeded the community with this information then all feature
planning, prioritization, etc., should obviously be done in the open.

-----Original Message-----
From: Steven Tocco 
Sent: Tuesday, October 12, 2004 8:17 AM
To: Beehive Developers
Subject: FW: Opinions?/Response to Rotan on beehive-dev -- FW:
[proposal] Beehive release strategy

Rotan,

Great questions!  I think Heather has Questions 3 and 4 covered.  Let me
see if we can start to assemble some responses in the other areas.

Rotan Question 1: Is there any way to validate that a successful unit
test sequence was executed?

I know of no way to enforce this right now.  Essentially our build
structure is independent of the source control system at this time.  But
not all hope is lost.  We currently run checkin/DRT tests using
CruiseControl to detect errors.  Thus, within a few hours of a harmful
checkin we will have a suspect list of the changes that caused the
break.

Perhaps there is some more advanced integration between SVN and Ant for
this type of checking, but I'm not aware of it.
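The "suspect list" above falls out of the build timestamps almost for
free. As a toy illustration (not Beehive's tooling; the data shapes are
hypothetical), any commit that landed after the last clean build and at
or before the first failing one is a candidate for the break:

```python
# Toy sketch: derive the suspect list for a broken build from the
# window between the last passing run and the first failing run.

def suspect_commits(commits, last_good_time, first_bad_time):
    """Return commits that landed between the two build runs.

    `commits` is an iterable of (revision, author, timestamp) tuples;
    timestamps are any comparable values (epoch seconds, here).
    """
    return [
        (rev, author, ts)
        for rev, author, ts in commits
        if last_good_time < ts <= first_bad_time
    ]


commits = [
    (101, "alice", 10),
    (102, "bob", 25),
    (103, "carol", 40),
]
# Build passed at t=20 and failed at t=45: revisions 102 and 103 are
# the suspects; revision 101 was already covered by the clean run.
print(suspect_commits(commits, 20, 45))
```

The more frequently the DRT cycle runs, the narrower this window gets,
which is why "within a few hours" is the practical bound.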

Rotan Question 2:  Is there a code tidy process?

We currently don't have such a tool in place, or the standards to hold
the code to.  I've used some of these tools before (Checkstyle, for
example) and think they are great.  Using them usually comes with three
issues:
a.	Getting agreement on style and such.
b.	The tool's ability to enforce such styles.
c.	When/how to enforce these standards?  (For example, getting a
Checkstyle-like tool to run as part of checkin/DRT tests may be
extreme; perhaps a code review step, perhaps a post process, perhaps
pre-release, etc.)

With all of that said, the first step to tackle is (a).  Steps b and c
would come after that.  I'll raise the issue with Heather to gauge the
priority of this relative to other issues.
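For a sense of what issue (b) looks like mechanically, here is a toy
sketch of the kind of rule a Checkstyle-like tool enforces. Real
Checkstyle has far richer, configurable checks; these three rules and
the 100-column limit are only illustrative:

```python
# Toy style checker: scan source text and report mechanical violations
# the way a Checkstyle-like tool would. Each finding is a
# (line_number, message) pair that a build step could turn into a
# pass/fail result or a review comment.

def check_style(source, max_line_length=100):
    """Return a list of (line_number, message) style violations."""
    violations = []
    for number, line in enumerate(source.splitlines(), start=1):
        if len(line) > max_line_length:
            violations.append((number, "line too long"))
        if line != line.rstrip():
            violations.append((number, "trailing whitespace"))
        if "\t" in line:
            violations.append((number, "tab character"))
    return violations


sample = "int x = 1;  \n\tint y = 2;\n"
for line_no, message in check_style(sample):
    print("line %d: %s" % (line_no, message))
```

Issue (a), agreeing on the rule set, is exactly the question of which
checks and thresholds go into a function like this; issue (c) is where
in the process its output gates anything.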
	
Rotan Question 5: We need a stable objective quality assessment
mechanism from which we can observe trends.

Some of this work is ongoing.  Let me describe what has and has not been
happening.  

There are current plans to get CruiseControl to run performance runs
regularly and archive the results.  The Beehive Controls team seems to
be leading the charge here, as they plan in the next few months to run
Controls performance scenarios continuously against SVN builds.

Each test will have an expected completion time, with a configurable
"drift" percentage to allow for fluctuations in the performance values.
This procedure has already worked well for xmlbeans.
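The drift check itself is simple. As a minimal sketch (the times and
threshold below are hypothetical, not xmlbeans' actual numbers), a run
passes if the measured time stays within a configurable percentage of
the expected time:

```python
# Minimal sketch of a "drift" tolerance check: a performance scenario
# passes if its measured time deviates from the expected time by no
# more than drift_percent, absorbing run-to-run fluctuation while
# still flagging real regressions (and suspicious speedups).

def within_drift(expected, measured, drift_percent):
    """True if `measured` is within drift_percent of `expected`."""
    allowed = expected * drift_percent / 100.0
    return abs(measured - expected) <= allowed


# Expect a 2.0s scenario; allow 10% drift either way.
print(within_drift(2.0, 2.15, 10))  # 7.5% slower: tolerated
print(within_drift(2.0, 2.5, 10))   # 25% slower: flagged
```

Feeding each run's result into the historical archive mentioned below
is what turns these point checks into observable trends.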

Another important part of this is some sort of historical archive so we
can catch regressions and observe improvements.

So I expect the performance analysis to be manual/ad hoc until we get
some performance infrastructure in place.  Once that is done, we can
start defining performance testing scenarios more precisely on top of
this chassis.  Until that time, the ad hoc results will likely be
posted to SVN/Wiki once they are available.

In addition to that, I feel we can begin to target a reference
performance platform now.  I think we should target a low-end
server-class specification for this: within reach, yet still
cost-effective to procure.

Do you have any suggestions?

Thanks
Steve

-----Original Message-----
From: Rotan Hanrahan [mailto:Rotan.Hanrahan@MobileAware.com] 
Sent: Tuesday, September 21, 2004 5:21 PM
To: beehive-dev@incubator.apache.org
Subject: FW: [proposal] Beehive release strategy

I'm sure the Dev team would be happy to contribute additional comments
on the proposed Beehive release strategy, so as requested I'm sending
"this" to "there".
---Rotan

	-----Original Message----- 
	From: Heather Stephens [mailto:heathers@bea.com] 
	Sent: Tue 21/09/2004 22:16 
	To: Rotan Hanrahan; beehive-ppmc@incubator.apache.org 
	Cc: 
	Subject: RE: [proposal] Beehive release strategy
	
	

	Hey Rotan-
	
	This is all great feedback, and these seem like good discussion
	items and questions for the team.  It doesn't seem like there is
	anything particularly private or sensitive in this email, so I
	think it would be great if we could have this discussion in a
	more public forum on beehive-dev.  Would you mind sending this
	there to open it up to the larger community?
	
	H.
	
	-----Original Message-----
	From: Rotan Hanrahan [mailto:Rotan.Hanrahan@MobileAware.com]
	Sent: Friday, September 17, 2004 9:52 AM
	To: beehive-ppmc@incubator.apache.org
	Subject: RE: [proposal] Beehive release strategy
	
	Quick feedback:
	
	0: Looks good.
	
	1: Is there any way to validate that a successful unit test
	sequence was executed?
	
	In effect, I'm wondering if there's a way to prevent check-in
	*unless* the tests have passed.
	
	2: Is there a code tidy process?
	
	This is a sweeper task that one or more people do. Look at code
	and tidy the comments or layout according to style rules we
	agree in advance. Ambiguous comments get referred to the author
	for clarification. This might sound like a minor task, but if we
	have a large community and not all are native speakers of the
	comment language (i.e. English) then someone has to make sure it
	is clear and makes sense. Preferably good coders with good
	communication skills. It also provides an avenue for
	contributions that may not be code mods, but would still be very
	useful to those who do the actual coding.
	
	3: If I have version X.y.z, will there be an easy way for me to
	determine the feature set?
	
	4: 'When appropriate, cut a "fix pack"...' needs clarification.
	
	Will there be a set of unambiguous criteria against which one
	can ascertain whether or not the time is 'appropriate' to cut?
	
	5: We need a stable objective quality assessment mechanism from
	which we can observe trends.
	
	For example, we could agree a hardware and o.s. reference
	environment, and then run an agreed set of tests on this
	platform, measuring key statistics as we go. Over time we will
	obtain some objective performance quality trends. We might then
	be able to sync a feature/bug introduction to a change in
	performance (+/-), which in turn would suggest an inspection of
	that code (to fix or to learn).
	
	
	Regards,
	---Rotan.
	
	
	-----Original Message-----
	From: Heather Stephens [mailto:heathers@bea.com]
	Sent: 14 September 2004 00:22
	To: beehive-ppmc@incubator.apache.org
	Subject: FW: [proposal] Beehive release strategy
	
	
	FYI.  Feedback appreciated.
	
	-----Original Message-----
	From: Heather Stephens
	Sent: Monday, September 13, 2004 4:20 PM
	To: Beehive Developers
	Subject: [proposal] Beehive release strategy
	
	Hi all-
	
	I've been putting some thought into a release strategy we might
	use for Beehive.  http://wiki.apache.org/beehive/Release_20Process
	
	Please take some time to review and assess it as the Beehive
	general release model.  If you would raise any concerns or
	suggest revisions/refinements on this alias for further
	discussion, that would be fabulous.
	
	Timeline goal:
	9/19/04:  Close on discussion and resolve any issues
	9/20/04:  Finalize proposal and send to a vote at the PPMC
	
	Cheers.
	Heather Stephens