Posted to dev@cloudstack.apache.org by David Nalley <da...@gnsa.us> on 2014/02/07 04:50:34 UTC

Code quality, QA, etc

Hi folks,

We continue to break things large and small in the codebase, and after
a number of different conversations, I thought I'd bring that
discussion here.

First - coding quality is only one factor that the PMC considers when
making someone a committer.

Second - CloudStack is a huge codebase with a ton of inter-related
pieces, and unintended consequences are easy to introduce.

We also have a pretty heady commit velocity - 20+ commits today alone.

Some communities have Review-then-commit - which would slow us down,
and presumably help us increase quality. However, I am not personally
convinced that it will do so measurably because even the most
experienced CloudStack developers occasionally break a build or worse.

We could have an automated pipeline that verifies a number of
different tests pass - before a patch/commit makes it into a mainline
branch. That is difficult with our current tooling; but perhaps
something worth considering.

At FOSDEM, Hugo and I were discussing his experiences with Gerrit and
OpenDaylight, and he thinks that's a viable option. I think it would
certainly be a step in the right direction.

Separately, Jake Farrell and I were discussing our git-related
proposal for ApacheCon, and broached the subject of Gerrit. Jake is
the current person bearing most of the load for git at the ASF, and
he's also run Gerrit in other contexts. He points out a number of
difficulties. (And I'd love for him to weigh in on this conversation,
hence the CC) He wants to expand RB significantly, including
pre-commit testing.

So - thoughts, comments, flames? How do we improve code quality and stop
needless breakage? Much of this is going to be cultural, I think, and I
personally believe we struggle with that. Many folks have voiced an
opinion about stopping continued commits when the build is broken, but
we haven't been able to do that.

--David

Re: Code quality, QA, etc

Posted by Amogh Vasekar <am...@citrix.com>.

On 2/11/14 3:54 PM, "Nate Gordon" <na...@appcore.com> wrote:

>My goal is to eventually do a full build, deploy, configure, and
>automated tests against a scratch built local cloud with every commit from
>selected branches or daily from master (Hooray for nested virtualization
>and excess hardware)

Had done something similar, along with trying to come up with a flow for
keeping master stable. Have put up the stuff here:
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Replicable+QA+Infrastructure+Design+Proposal


Re: Code quality, QA, etc

Posted by Nate Gordon <na...@appcore.com>.
Sorry for being a few days late on this.

I totally agree with the direction this conversation is going, but as
someone who has done build engineering in various incarnations over the
last couple of years, I would suggest two additional things: build per
branch, and gitflow.  We have had good success in quality control by
requiring developers to create a new branch for all activities, which
automatically creates a new build in the build system.  This allows people
to have full builds of their branch that, in theory, must be passing before
they merge back into master/dev/trunk.  This is a bit different from the
gerrit solution, where you are still committing to master regularly.  We
also use a review/pull-request system to regulate that merge process.  I
know it was said that limiting the input by having a review process would
be bad, but I'm a firm believer that having two sets of eyes look at
everything makes things generally better.

This does come with an increased build infrastructure cost, since running
builds for all of the branch changes can be costly.  We implement this
internally and have a build server set up in the office, which is pulling
several interesting branches from ACS and running builds daily so we can
keep up better with current status and test additional items that are
specific to our environment.  This is mirrored into our internal git repo,
where we can create branches for testing random fixes and such (we aren't
committers yet, but we can dream), but still have full build and test
support.  My goal is to eventually do a full build, deploy, configure, and
automated tests against a scratch-built local cloud with every commit from
selected branches, or daily from master (hooray for nested virtualization
and excess hardware).  But I'm also a bit of a build nerd.

Even without reviews, something of this nature could help improve quality
as well.


On Sat, Feb 8, 2014 at 12:19 AM, Rohit Yadav <bh...@apache.org> wrote:

> [snip]



-- 


*Nate Gordon* | Director of Technology | Appcore - the business of cloud computing®

Office +1.800.735.7104  |  Direct +1.515.612.7787
nate.gordon@appcore.com  |  www.appcore.com


Re: Code quality, QA, etc

Posted by Rohit Yadav <bh...@apache.org>.
On Fri, Feb 7, 2014 at 2:16 PM, Hugo Trippaers <hu...@trippaers.nl> wrote:
> Hey David,
>
> I would make a distinction between code issues and functional issues. Occasionally somebody just plainly breaks the build, I'm guilty of that myself actually, and that's just plain stupid. Luckily we have Jenkins to catch these errors quickly. I'm in a continuous struggle with Jenkins to get the build time to less than 5 minutes. I feel that is an acceptable time to get feedback on a commit; any longer and you have moved on to the next thing or gone home.

Why not do incremental builds since the last built git SHA, to speed
things up by cheating?

May I share a hack I use, based on an old method once shared by Edison,
to do fast builds locally:

1. If you've just cloned cloudstack, build once; this will probably take a
lot of time, also to fetch deps and whatnot.

2. Start making changes and build only the projects/modules that got
changed, using the one-liner below (you may create a shell function that
wraps this in your bashrc or zshrc etc.; see the sketch after step 3):

mvn -pl `git status --porcelain | sed -n '/\/src/p' | awk '{print $2}' | sed 's/\/src/$/' | cut -d $ -f 1 | uniq | tr "\n" "," | sed 's/,$/,client/'` clean install

3. Commit often on your branch after the build works, and when you've
implemented your stuff do a squash merge on master (or the target branch),
which results in a single (reviewable) commit. This will probably save
you from breaking builds.
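
For step 2, a minimal wrapper you could drop in your bashrc/zshrc (the
function name is made up):

incbuild() {
    # modules whose src/ trees have pending changes, plus client so the
    # final assembly gets refreshed
    local modules
    modules=$(git status --porcelain | sed -n '/\/src/p' | awk '{print $2}' \
        | sed 's/\/src/$/' | cut -d '$' -f 1 | uniq | tr '\n' ',' \
        | sed 's/,$/,client/')
    mvn -pl "$modules" clean install
}

And for step 3, the squash merge itself is just:

git checkout master
git merge --squash my-branch    # branch name is illustrative
git commit                      # one reviewable commit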

> Also, this kind of testing isn't really hard: run the build and unit tests. By introducing something like gerrit we can actually make this happen before committing it to the repo. Push a patch to gerrit, gerrit tells jenkins to test the patch, and if +1 from jenkins, commit; for non-committers the step would be to invite somebody for review as well. A second nice thing about jenkins is the post-review test: if a contributor submits a patch it's built by jenkins, and if a reviewer approves the patch, jenkins will again run a build to ensure that the patch will still apply and doesn't break the build. Very handy if there is some time between patch submission and patch review.

+1

I think it's a culture issue. While we may think that introducing and
forcing everyone to go through a code review process will slow everyone
down, IMHO over time this will inculcate in everyone the habit of doing
code reviews for others (their patches/branches etc.) so that others
would do the same for them. It can fail if there are not enough reviewers
available for a code review, or if they lack interest/time (our current
state with reviewboard).

So, the trick could be to have, in addition to the reviewers, a few
assigned maintainers who are responsible for churning out pending
reviews in the parts of the codebase they understand very well and can
help with the review process.

>
> Functional issues are much harder to track. For example, yesterday I found several issues in the contrail plugin that would not cause any pain in a contrail environment, but in any other environment creating a network would fail. These examples are too common and difficult to catch with unit tests. It can be done, but requires some serious effort on the developers' side, and we in general don't seem to be very active at writing unit tests. These kinds of issues can only be found by actually running CloudStack and executing a series of functional tests. Ideally that is what we have the BVT suite for, but I think our current BVT setup is not documented enough to give accurate feedback to a developer about which patch broke a certain piece of functionality. In jenkins the path from code to BVT is not kept yet, so it is almost impossible to see which commits were new in a particular run of the BVT suite.
>
> Personally I'm trying to get into the habit of running a series of tests on devcloud before committing something. It doesn't prove a lot, but it does guarantee that the bare basic developer functionality is working before committing something. After a commit at least I'm sure that anybody will be able to spin up devcloud and deploy an instance. I'm trying to get this automated as well so we can use it as feedback on a patch. Beers for anyone who writes an easy-to-use script that configures devcloud with a zone and tests if a user vm can be instantiated on an isolated sourcenat network. If we could include such a script in the tree it might help people with testing their patch before committing.

I once discussed an idea that we use something fast to do the testing:
instead of vms or nested vms we could use a mocked hypervisor
(simulator) or an LXC container on a VM locally. But again, in every case
they will be passing only a very small set of test cases.
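
(For reference, the in-tree simulator can be exercised with something like
the following; these flags are from memory, so double-check against the
wiki before relying on them:

mvn -Pdeveloper -Dsimulator clean install
mvn -Pdeveloper -pl developer -Ddeploydb
mvn -Pdeveloper -pl developer -Ddeploydb-simulator
mvn -pl client jetty:run -Dsimulator

That gives a management server backed by mocked hypervisors to run the
tests against.)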

Regards.

>
> I think we are seeing more and more reverts in the tree. Not necessarily a good thing, but at least people know that the option is there if a commit really breaks a build. Also, please help each other out; everybody can make a mistake and commit it. If it's a trivial mistake it might not be much effort to track it down and fix it, which is way better than a revert or a mail that something is broken.
>
> In short, we need to make testing more efficient and transparent to allow people to easily incorporate it in their personal workflow.
>
> Cheers,
>
> Hugo
>
> On 7 feb. 2014, at 04:50, David Nalley <da...@gnsa.us> wrote:
>
> >> [snip]
>

Re: Code quality, QA, etc

Posted by Jake Farrell <jf...@apache.org>.
We currently have jenkins pre-commit testing available [1] for patches
attached to jira tickets; I own the filter listed in step 5 and can help
get this set up if needed.

I am working on getting pre-commit testing available from reviewboard and
hopefully will have it available shortly. The reviewbot plugin for
jenkins is still very new and does not yet support multiple reviewboard
repositories, which we would need in order to use it as a possible
solution.

We currently have over 30k users on our reviewboard and it is working well
for the projects that are leveraging it. Gerrit setup has some pain points,
as it wants to have control of the repositories in order to allow merging
of patches. We would not want to run it on the same server that our
repositories run on, due to security and potential hardware contention.
That said, I am not completely against the use of Gerrit, but I think
we need to wait and review Gerrit and its potential use after we complete
the migration of git-wip to git.a.o.

I'm happy to discuss or help in any way I can to facilitate easier
workflows and pre-commit testing to ensure that our projects are providing
the highest quality software.

-Jake

[1]: http://wiki.apache.org/general/PreCommitBuilds


On Fri, Feb 7, 2014 at 2:16 PM, Yoshikazu Nojima <ma...@ynojima.net> wrote:

> +1 for pre-commit testing.
> [snip]

Re: Code quality, QA, etc

Posted by Yoshikazu Nojima <ma...@ynojima.net>.
+1 for pre-commit testing.
I put pre-commit testing into practice personally, and it has helped me a
lot. Before I submit a patch, I create a pull request in my GitHub repo. My
Jenkins subscribes to pull requests via the Jenkins plugin for GitHub, and
the build result is displayed in GitHub.
I heard there are plugins for ReviewBoard and for Gerrit that
automatically test submitted patches, e.g.:
https://wiki.jenkins-ci.org/display/JENKINS/Jenkins-Reviewbot
https://wiki.jenkins-ci.org/display/JENKINS/Gerrit+Trigger

If possible, executing automated end-to-end tests would be better, to
eliminate most regressions, but even executing the build and unit tests
will prevent minor mistakes and broken commits.
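
As a concrete illustration, the core of such a pre-commit job is tiny once
Jenkins has checked out the pull request branch. A sketch, not an actual
job from my setup:

#!/bin/sh
set -e   # fail the job on the first error

# merge onto current master first, to catch patches that have gone stale
git fetch origin master
git merge --no-edit origin/master

# build and run the unit tests; a non-zero exit marks the PR as broken
mvn -B clean install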


2014-02-07 9:28 GMT-07:00 Mike Tutkowski <mi...@solidfire.com>:
> I would love to see pre-commit testing such as what Hugo described.
> [snip]

Re: Code quality, QA, etc

Posted by Mike Tutkowski <mi...@solidfire.com>.
I would love to see pre-commit testing such as what Hugo described.

For the time being, I tend to run mvn -P developer,systemvm clean install to
make sure I have a clean build and run whatever tests it runs, then I run
my own suite of tests manually (I'd like to automate these when I have
time), then I check my code in.


On Fri, Feb 7, 2014 at 5:02 AM, Sudha Ponnaganti <
sudha.ponnaganti@citrix.com> wrote:

> +1 for pre- commit testing.  Whichever tool enforces it would be good
> choice.
> [snip]


-- 
*Mike Tutkowski*
*Senior CloudStack Developer, SolidFire Inc.*
e: mike.tutkowski@solidfire.com
o: 303.746.7302
Advancing the way the world uses the cloud *(tm)*
<http://solidfire.com/solution/overview/?video=play>

RE: Code quality, QA, etc

Posted by Sudha Ponnaganti <su...@citrix.com>.
+1 for pre-commit testing. Whichever tool enforces it would be a good choice.
For feature check-in, we (the community) require sanity tests to be submitted by feature owners; this was followed well in the 4.0 release, but there is a lapse in this practice now. It would be great if the RM could enforce this during check-ins - review unit tests and results before approving a check-in.

-----Original Message-----
From: Trippie [mailto:trippie@gmail.com] On Behalf Of Hugo Trippaers
Sent: Friday, February 07, 2014 12:46 AM
To: dev
Cc: jfarrell@apache.org
Subject: Re: Code quality, QA, etc

[snip]


Re: Code quality, QA, etc

Posted by Hugo Trippaers <hu...@trippaers.nl>.
Hey David,

I would make a distinction between code issues and functional issues. Occasionally somebody just plainly breaks the build, I'm guilty of that myself actually, and that's just plain stupid. Luckily we have Jenkins to catch these errors quickly. I'm in a continuous struggle with Jenkins to get the build time to less than 5 minutes. I feel that is an acceptable time to get feedback on a commit; any longer and you have moved on to the next thing or gone home. Also, this kind of testing isn't really hard: run the build and unit tests. By introducing something like gerrit we can actually make this happen before committing it to the repo. Push a patch to gerrit, gerrit tells jenkins to test the patch, and if +1 from jenkins, commit; for non-committers the step would be to invite somebody for review as well. A second nice thing about jenkins is the post-review test: if a contributor submits a patch it's built by jenkins, and if a reviewer approves the patch, jenkins will again run a build to ensure that the patch will still apply and doesn't break the build. Very handy if there is some time between patch submission and patch review.

Functional issues are much harder to track. For example, yesterday I found several issues in the contrail plugin that would not cause any pain in a contrail environment, but in any other environment creating a network would fail. These examples are too common and difficult to catch with unit tests. It can be done, but requires some serious effort on the developers' side, and we in general don't seem to be very active at writing unit tests. These kinds of issues can only be found by actually running CloudStack and executing a series of functional tests. Ideally that is what we have the BVT suite for, but I think our current BVT setup is not documented enough to give accurate feedback to a developer about which patch broke a certain piece of functionality. In jenkins the path from code to BVT is not kept yet, so it is almost impossible to see which commits were new in a particular run of the BVT suite.

Personally I'm trying to get into the habit of running a series of tests on devcloud before committing something. It doesn't prove a lot, but it does guarantee that the bare basic developer functionality is working before committing something. After a commit at least I'm sure that anybody will be able to spin up devcloud and deploy an instance. I'm trying to get this automated as well so we can use it as feedback on a patch. Beers for anyone who writes an easy-to-use script that configures devcloud with a zone and tests if a user vm can be instantiated on an isolated sourcenat network. If we could include such a script in the tree it might help people with testing their patch before committing.
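
To make the ask concrete, the script I have in mind would do little more
than this; the paths are from memory and the IDs are hand-waved, so treat
it as a sketch:

#!/bin/sh
set -e

# seed a devcloud zone using the Marvin config that ships in the tree
python tools/marvin/marvin/deployDataCenter.py -i tools/devcloud/devcloud.cfg

# then try to bring up a user vm on an isolated sourcenat network
cloudmonkey deploy virtualmachine serviceofferingid=<uuid> \
    templateid=<uuid> zoneid=<uuid>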

I think we are seeing more and more reverts in the tree. Not necessarily a good thing, but at least people know that the option is there if a commit really breaks a build. Also, please help each other out; everybody can make a mistake and commit it. If it's a trivial mistake it might not be much effort to track it down and fix it, which is way better than a revert or a mail that something is broken.

In short, we need to make testing more efficient and transparent to allow people to easily incorporate it in their personal workflow.

Cheers,

Hugo

On 7 feb. 2014, at 04:50, David Nalley <da...@gnsa.us> wrote:

> [snip]


Re: Code quality, QA, etc

Posted by Laszlo Hornyak <la...@gmail.com>.
Hi,

I used to work with gerrit on open source projects and I think the tool is
great; the integration with jenkins is cool.
One problem could be when jenkins infrastructure problems are frequent and
developers start to ignore warnings from jenkins.
With my particular project we were also frequently hit by gerrit outages. I
do not know the reason, since I did not operate the infrastructure, but
having 1-2 outages per week was normal.

This is the technical part and I am sure you can make a more reliable
service.

We also had a Review-then-commit process, and in general I had a bad
experience with it. I do believe code review is necessary in an open
source project and it can improve quality, but at the same time the
costs (in time and lost braincells) are very high, and the existence of a
process does not guarantee that quality will improve. No process has
replaced thinking so far.
Once I complained about having the 30th version of a patch that in my
opinion was quite simple, and then someone answered that he was already
over the 40th review. It took several months to push something through
the process, and those numbers just kept growing. We collected some of
the top reasons with my team:
- the review was not really a review: the reviewer only looked at the code
in firefox, never checked it out, never ran the tests.
- reviewer expectations varied, even within the same language and module;
unfortunately this was not documented, so you had to use a try-and-fail
process to learn individual reviewer preferences, which took quite a lot
of time since the team was huge.
- one had to wait for review, sometimes for several weeks. Meanwhile the
patch got outdated and had to be rewritten, and then the whole process
started over again.
- also, reviewers blocked on the first issue found in the patch. This was
usually the commit comment; they did not like it. So you change the
commit comment and hope that next time the guy will read some actual code.
Maybe he will block on something like not liking your variable name.
This is especially annoying when you send an urgent fix.
- the typical reason for merging a patch was the release deadline. Just a
few days before the deadline they merged everything. So we had spent
several months and still only the developer had tested the code.

In my opinion a review tool is not enough to make the review process
productive; you need good reviewers.

Regards,
Laszlo

On Fri, Feb 7, 2014 at 4:50 AM, David Nalley <da...@gnsa.us> wrote:

> [snip]
>


