Posted to dev@cloudstack.apache.org by Darren Shepherd <da...@gmail.com> on 2013/10/27 16:51:26 UTC

Tiered Quality

I don't know if a similar thing has been talked about before, but I
thought I'd just throw this out there.  The ultimate way to ensure
quality is to have unit test and integration test coverage on all
functionality.  That way somebody can author some code, commit it to,
for example, 4.2, and when we release 4.3, 4.4, etc., they aren't on
the hook to manually test the functionality with each release.  It is
the nature of a community project that people come and go.  If a
contributor wants to ensure the long-term viability of a component,
they should ensure that there are unit+integration tests.

Now, for whatever reason, good or bad, it's not always possible to
have full integration tests.  I don't want to throw down the gauntlet
and say everything must have coverage, because that would mean some
useful code or feature doesn't get in simply because coverage wasn't
possible at the time.

What I propose is that we place every feature or function in a tier
that reflects its quality (very similar to how OpenStack qualifies
its hypervisor integrations).  Tier A means unit test and integration
test coverage gates the release.  Tier B means unit test coverage
gates the release.  Tier C means: who knows, it compiled.  We can go
through and classify the components, and then as a community we can
try to get as much into Tier A as possible.

Darren
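
One lightweight way such tiers could be encoded, as a minimal sketch
(the marker interfaces, test class, and package name are assumptions
for illustration, not existing CloudStack code), is JUnit 4 categories:

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Hypothetical marker types for the proposed tiers.
    interface TierA {}  // unit + integration test coverage gates the release
    interface TierB {}  // unit test coverage gates the release

    // A feature-level suite tagged with its tier (name invented here).
    @Category(TierA.class)
    public class LiveStorageMigrationTest {
        @Test
        public void migrationPreservesVolumeContents() {
            // real assertions against a deployed test zone would go here
        }
    }

A release gate could then run everything tagged with a given tier,
e.g. with Maven Surefire roughly "mvn test -Dgroups=com.cloud.test.TierA"
(the package name being an assumption), and block the release if a
Tier A suite fails.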

Re: Tiered Quality

Posted by Daan Hoogland <da...@gmail.com>.
You are right, Laszlo, but I was reacting to the scoring-of-submissions
proposal. If the goal is getting coverage from 3.6% to 3.7%, we
definitely don't need anything but a keen eye on submissions.

On Fri, Nov 1, 2013 at 6:48 PM, Laszlo Hornyak <la...@gmail.com> wrote:
> I have heard about commercial tools that do more advanced coverage
> tracking. But in open source, I'm not sure Sonar really has an
> alternative. It is pretty cool anyway.
> Btw, the overall code coverage is about 3.6%; it is probably not worth
> trying something more advanced for that much.
>
>
> On Thu, Oct 31, 2013 at 9:12 PM, Daan Hoogland <da...@gmail.com>wrote:
>
>> one note on testing, guys,
>>
>> I see that the analysis site gives line coverage and branch coverage.
>> I don't see anything on distinct paths. What I mean is that the
>> program
>> if(a)
>>  A
>> else
>>  B
>> if(b)
>>  C
>> else
>>  D
>> if(c)
>>  E
>> else
>>  F
>> has eight (2^3) distinct paths. It is not enough to show that
>> A, B, C, D, E and F are all hit and hence every line and branch; all
>> combinations of a/!a, b/!b and c/!c also need to be hit.
>>
>> Now I am not saying that we should not score our code this way, but
>> it is kind of kidding ourselves if we don't face up to the fact that
>> coverage of lines of code or branches is not a completeness
>> criterion of any kind. I don't know whether any of the mentioned
>> tools does analysis this thorough, but if any does we should go for
>> that one.
>>
>> €0,02
>> Daan
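
To make the distinct-paths point concrete, a minimal Java sketch (an
editorial illustration, not code from the thread): two tests can reach
100% line and branch coverage of this method while exercising only
2 of its 2^3 = 8 paths.

    static int f(boolean a, boolean b, boolean c) {
        int x = 0;
        if (a) { x += 1; } else { x -= 1; }
        if (b) { x += 2; } else { x -= 2; }
        if (c) { x += 4; } else { x -= 4; }
        return x;
    }

    // f(true, true, true) and f(false, false, false) together hit every
    // line and every branch, yet they cover only 2 of the 8 paths; the
    // combination (true, false, true), for example, is never executed.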
>>
>> On Tue, Oct 29, 2013 at 2:21 AM, Darren Shepherd
>> <da...@gmail.com> wrote:
>> > Starting with the honor system might be good.  It's not so easy
>> > sometimes to relate lines of code to functionality.  Also, just
>> > because a test hits a line of code doesn't mean the code is really
>> > tested.
>> >
>> > Can't we just get people to put a check mark on some table in the
>> > wiki?
>> >
>> > Darren
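
Such a wiki table might look like the following mock-up (the rows are
invented, echoing the live-storage-migration example quoted further
down):

    Feature                       | Tier | Unit tests | Integration tests
    ------------------------------+------+------------+------------------
    Live storage migration (Xen)  |  A   |     x      |        x
    Live storage migration (KVM)  |  B   |     x      |
    Legacy component X            |  C   |            |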
>> >
>> >> On Oct 28, 2013, at 12:08 PM, Santhosh Edukulla <
>> santhosh.edukulla@citrix.com> wrote:
>> >>
>> >> 1. It seems we already have code coverage numbers from Sonar, as
>> >> below. It currently shows only the numbers for unit tests.
>> >>
>> >> https://analysis.apache.org/dashboard/index/100206
>> >>
>> >> 2. The below link has an explanation for using it for both
>> >> integration and unit tests.
>> >>
>> >>
>> http://docs.codehaus.org/display/SONAR/Code+Coverage+by+Integration+Tests+for+Java+Project
>> >>
>> >> 3. Many links suggest it has a good decision coverage facility
>> >> compared to other coverage tools.
>> >>
>> >>
>> http://onlysoftware.wordpress.com/2012/12/19/code-coverage-tools-jacoco-cobertura-emma-comparison-in-sonar/
>> >>
>> >> Regards,
>> >> Santhosh
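
Per the linked Sonar documentation, the setup of that era boiled down
to recording unit and integration coverage into separate JaCoCo .exec
files and pointing Sonar at both; a hedged sketch (legacy property
names, and the file paths are assumptions):

    # sonar-project.properties (illustrative)
    sonar.core.codeCoveragePlugin=jacoco
    # coverage recorded by the unit-test run
    sonar.jacoco.reportPath=target/jacoco.exec
    # coverage recorded while integration tests ran against the system
    sonar.jacoco.itReportPath=target/jacoco-it.exec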
>> >> ________________________________________
>> >> From: Laszlo Hornyak [laszlo.hornyak@gmail.com]
>> >> Sent: Monday, October 28, 2013 1:43 PM
>> >> To: dev@cloudstack.apache.org
>> >> Subject: Re: Tiered Quality
>> >>
>> >> Sonar already tracks the unit test coverage. It is also able to
>> >> track the integration test coverage; however, this might be a bit
>> >> more complicated in CS, since not all hardware/software
>> >> requirements are available in the Jenkins environment. However,
>> >> this could be a problem in any environment.
>> >>
>> >>> On Mon, Oct 28, 2013 at 5:53 AM, Prasanna Santhanam <ts...@apache.org>
>> wrote:
>> >>>
>> >>> We need a way to check coverage of (unit+integration) tests: how
>> >>> many lines of code are hit on a deployed system, corresponding to
>> >>> the component donated/committed. We don't have that for existing
>> >>> tests, which makes it hard to judge whether a feature that comes
>> >>> with tests covers enough of itself.
>> >>>
>> >>>> On Sun, Oct 27, 2013 at 11:00:46PM +0100, Laszlo Hornyak wrote:
>> >>>> Ok, makes sense, but that sounds like even more work :) Can you
>> >>>> share the plan for how this will work?
>> >>>>
>> >>>>
>> >>>> On Sun, Oct 27, 2013 at 7:54 PM, Darren Shepherd <
>> >>>> darren.s.shepherd@gmail.com> wrote:
>> >>>>
>> >>>>> I think it can't be at a component level, because components
>> >>>>> are too large. It needs to be at a feature or implementation
>> >>>>> level. For example, live storage migration for Xen and live
>> >>>>> storage migration for KVM (don't know if that's a real thing)
>> >>>>> would be two separate items.
>> >>>>>
>> >>>>> Darren
>> >>>>>
>> >>>>>> On Oct 27, 2013, at 10:57 AM, Laszlo Hornyak <
>> >>> laszlo.hornyak@gmail.com>
>> >>>>> wrote:
>> >>>>>>
>> >>>>>> I believe this will be very useful for users.
>> >>>>>> As far as I understand, someone will have to qualify the
>> >>>>>> components. What will be the method of qualification? I do not
>> >>>>>> think test coverage alone would be right. But then, if you want
>> >>>>>> to go deeper, you need a bigger effort testing the components.

RE: Tiered Quality

Posted by Sudha Ponnaganti <su...@citrix.com>.
Makes sense. 

Thanks
/Sudha


RE: Tiered Quality

Posted by Santhosh Edukulla <sa...@citrix.com>.
1. The below snapshot depicts numbers only for a KVM run (a sample run).

2. We will share a report and the analyzed information once we are through the full-run analysis. We can share individual details, but sharing the analyzed information gives the community a chance to provide better input.

Thanks!
Santhosh

RE: Tiered Quality

Posted by Sudha Ponnaganti <su...@citrix.com>.
Thanks, Santhosh, for the coverage numbers. Does this include only KVM BVT and regression runs, or other configurations as well?
Would it be possible to post package-level coverage details, so the community knows which packages are covered and which are not? The results are not available to drill down into.

Thanks
/sudha


RE: Tiered Quality

Posted by Santhosh Edukulla <sa...@citrix.com>.
Coverage information for both unit and integration tests for a sample regression run.

http://picpaste.com/pics/Coverage_Unit_Integration_KVM_Regression-HmxW9yva.1386758719.png 

Note: 

1. As the link is internal, I shared a sample picture depicting the current coverage numbers.
2. This is not meant to add any completeness criterion. It is one quality metric for assessing coverage and for identifying the areas to concentrate on to increase it. Current unit test coverage is low; maybe we can require more unit tests, at least for new additions.
3. We will share the report once we can add a report plugin to it.


Regards,
Santhosh

Re: Tiered Quality

Posted by Daan Hoogland <da...@gmail.com>.
Keep us posted before the breakthrough as well, please. I'm very interested.

and good hunting of course,
Daan


RE: Tiered Quality

Posted by Sudha Ponnaganti <su...@citrix.com>.
That is only for unit tests; we need to instrument code coverage for BVTs and regressions, i.e., integration tests. We are pursuing this in our lab, and if we get any breakthrough we will post it to the forum. Because of the customized nature of the automation framework, there are a few challenges there.

________________________________________
From: Laszlo Hornyak [laszlo.hornyak@gmail.com]
Sent: Friday, November 01, 2013 10:48 AM
To: dev@cloudstack.apache.org
Subject: Re: Tiered Quality

I have heard about commercial tools that do more advanced coverage
tracking. But if you think in open source, not sure Sonar really has an
alternative. It is pretty cool anyway.
Btw the overall code coverage is about 3.6%, probaly it is not worth trying
something more advanced for that much.


On Thu, Oct 31, 2013 at 9:12 PM, Daan Hoogland <da...@gmail.com>wrote:


--

EOF

Re: Tiered Quality

Posted by Laszlo Hornyak <la...@gmail.com>.
I have heard about commercial tools that do more advanced coverage
tracking, but in open source I am not sure Sonar really has an
alternative. It is pretty cool anyway.
Btw, the overall code coverage is about 3.6%; it is probably not
worth trying something more advanced for that much.




-- 

EOF

Re: Tiered Quality

Posted by Daan Hoogland <da...@gmail.com>.
One note on testing, guys.

I see that the analysis site gives line coverage and branch coverage.
I don't see anything on distinct paths. What I mean is that the
program
if(a)
 A
else
 B
if(b)
 C
else
 D
if(c)
 E
else
 F
has eight (2^3) distinct paths. It is not enough to show that
A, B, C, D, E, F are all hit and hence every line and branch. All
combinations of a/!a, b/!b and c/!c also need to be hit.
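
To make this concrete, here is a minimal Java sketch (hypothetical
code, nothing from our tree): two inputs are enough to take both
sides of every branch, yet only two of the eight paths ever execute.

// hypothetical illustration of branch coverage vs. path coverage
public class PathCoverage {

    static String run(boolean a, boolean b, boolean c) {
        StringBuilder out = new StringBuilder();
        out.append(a ? "A" : "B");  // if(a) A else B
        out.append(b ? "C" : "D");  // if(b) C else D
        out.append(c ? "E" : "F");  // if(c) E else F
        return out.toString();
    }

    public static void main(String[] args) {
        // both sides of all three branches are taken, so line and
        // branch coverage report 100%...
        assert run(true, true, true).equals("ACE");
        assert run(false, false, false).equals("BDF");
        // ...but ACF, ADE, ADF, BCE, BCF and BDE never ran
        System.out.println("2 of 8 paths tested");
    }
}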

Now, I am not saying that we should not score our code this way, but
it is kidding ourselves if we don't face up to the fact that line or
branch coverage is not a completeness criterion of any kind. I don't
know whether any of the mentioned tools does analysis this thorough,
but if any does, we should go for that one.

€0,02
Daan


Re: Tiered Quality

Posted by Darren Shepherd <da...@gmail.com>.
Starting with the honor system might be good.  It's not so easy
sometimes to relate lines of code to functionality.  Also, just
because a test hits a line of code doesn't mean it's really tested.

Can't we just get people to put a check mark in some table on the
wiki?
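
Purely as a made-up illustration (the rows and tiers below are
invented, not a real classification), the table could be as simple
as:

  Feature                       | Tier | Unit tests | Integration tests
  ------------------------------+------+------------+------------------
  Live storage migration (Xen)  | A    | yes        | yes
  Live storage migration (KVM)  | B    | yes        | no
  Some niche feature            | C    | no         | no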

Darren


RE: Tiered Quality

Posted by Santhosh Edukulla <sa...@citrix.com>.
1. It seems we already have code coverage numbers using Sonar, as below. It currently shows only the numbers for unit tests.

https://analysis.apache.org/dashboard/index/100206

2. The link below explains how to use it for both integration and unit tests (a rough configuration sketch is in point 4 below).

http://docs.codehaus.org/display/SONAR/Code+Coverage+by+Integration+Tests+for+Java+Project

3. Many links suggest it has a good decision coverage facility compared to other coverage tools.

http://onlysoftware.wordpress.com/2012/12/19/code-coverage-tools-jacoco-cobertura-emma-comparison-in-sonar/
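
4. For what it's worth, a minimal pom.xml sketch of that setup might
look roughly like the below. Treat it as illustrative only: the
plugin version, the .exec file names and the failsafe.argLine
property are my assumptions, and Sonar would additionally need
sonar.jacoco.reportPath / sonar.jacoco.itReportPath pointed at the
two files.

<!-- hypothetical sketch: separate JaCoCo agents for unit and
     integration tests, writing separate execution data files -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <version>0.6.3.201306030806</version> <!-- assumed version -->
  <executions>
    <execution>
      <id>ut-agent</id>
      <goals><goal>prepare-agent</goal></goals>
      <configuration>
        <destFile>${project.build.directory}/jacoco-ut.exec</destFile>
      </configuration>
    </execution>
    <execution>
      <id>it-agent</id>
      <phase>pre-integration-test</phase>
      <goals><goal>prepare-agent</goal></goals>
      <configuration>
        <destFile>${project.build.directory}/jacoco-it.exec</destFile>
        <!-- failsafe would then run with <argLine>${failsafe.argLine}</argLine> -->
        <propertyName>failsafe.argLine</propertyName>
      </configuration>
    </execution>
  </executions>
</plugin>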

Regards,
Santhosh

Re: Tiered Quality

Posted by Laszlo Hornyak <la...@gmail.com>.
Sonar already tracks the unit test coverage. It is also able to track
integration test coverage; however, that might be a bit more involved
in CloudStack, since not all hardware/software requirements are
available in the Jenkins environment. Then again, that would be a
problem in any environment.




-- 

EOF

Re: Tiered Quality

Posted by Prasanna Santhanam <ts...@apache.org>.
We need a way to check coverage of (unit+integration) tests: how many
lines of code are hit on a deployed system for the component
donated/committed. We don't have that for existing tests, so it is
hard to judge whether a feature that comes with tests covers enough
of itself.
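
One conceivable way to get that (a sketch only; nothing like this is
wired up today, and the agent path, host name and port below are
invented) would be to start the deployed management server JVM with
the JaCoCo agent in tcpserver mode, then pull the execution data
after a test run:

# hypothetical: attach the JaCoCo agent to the deployed JVM
JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/jacoco/jacocoagent.jar=output=tcpserver,address=*,port=6300"

# after the tests, fetch execution data from the running JVM
mvn org.jacoco:jacoco-maven-plugin:dump \
    -Djacoco.address=mgmt-host -Djacoco.port=6300 \
    -Djacoco.destFile=target/jacoco-it.exec

The resulting .exec file could then feed the same coverage reporting
we use for unit tests.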


-- 
Prasanna.,

------------------------
Powered by BigRock.com


Re: Tiered Quality

Posted by Laszlo Hornyak <la...@gmail.com>.
Ok, makes sense, but that sounds like even more work :) Can you share
the plan for how this will work?





-- 

EOF

Re: Tiered Quality

Posted by Darren Shepherd <da...@gmail.com>.
I think it can't be at a component level, because components are too large.  It needs to be at a feature or implementation level.  For example, live storage migration for Xen and live storage migration for KVM (don't know if that's a real thing) would be two separate items.

Darren


Re: Tiered Quality

Posted by Laszlo Hornyak <la...@gmail.com>.
I believe this will be very useful for users.
As far as I understand, someone will have to qualify the components.
What will be the method for qualification? I do not think test
coverage alone would be the right measure. But then, if you want to
go deeper, you need a bigger effort testing the components.






-- 

EOF