Posted to dev@tuscany.apache.org by ant elder <an...@gmail.com> on 2006/04/14 11:01:53 UTC

Testing, was: What are some good samples for Tuscany?

Moving some of the testing discussion out of the samples thread...

It's not completely clear to me what the distinction is between 'technology
samples' and functional tests. There are some JavaScript samples in the
samples directory:
http://svn.apache.org/repos/asf/incubator/tuscany/java/samples/JavaScript/,
which of these should be samples and which should be tests, and where should
they fit in the Tuscany directory structure? I'm quite happy to move some or
all of these or change them to be testcases, tell me what you'd like.

Could there be some specific examples of how we should be doing functional
and integration testing of things like the WS binding entryPoints and
externalServices? It was done by running the WS samples in testing/tomcat,
what's a better approach?

I really don't understand why samples shouldn't be tested as part of the
regular build. What is the old ground being rehashed? The best I can find
is the comment at the very end of this email, which no one posted any
disagreements to:
http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200603.mbox/%3c441FBF30.3090607@apache.org%3e

I'd be careful with -1'ing commits where you don't like the test coverage.
It would be far better to offer guidance and specific constructive
criticism, or even help add tests if you think some code is lacking. We need
to foster an environment where people want to join in and help, throwing
around vetos isn't going to do that, and if using vetos becomes common
practice they will likely be used back at you when you least expect or want
them. Everyone acknowledges the current code needs improved testing, so if
nothing else -1s would be a bit hypocritical. Vetos are always available as
an option of last resort, but I think they're best kept for that - a last
resort - after attempts to resolve a problem have failed.

   ...ant

On 4/14/06, Jeremy Boynes <jb...@apache.org> wrote:
>
> ant elder wrote:
> > Here are some specific ideas to kick around:
> >
> > 1) how about calling business samples 'demos' and technology samples
> > just 'samples'
> >
>
> I tend to agree with Simon that there could be a perception that "demos"
> are just mockups with no substance. I do think, though, that there is a
> difference between business and technology samples.
>
> In J2EE days the business samples were called "blueprints" so perhaps we
> could call them that as well - would be a natural place for a petstore :-)
>
> > 2) restructure the current samples folder to be something like:
> >     samples
> >       - demos
> >           - bigbank
> >           - petstore
> >           - ...
> >       - das
> >           - ...
> >       - sdo
> >           - ...
> >       - sca
> >           - bindings
> >               - jsonrpc
> >               - ws
> >               - ...
> >           - componentTypes
> >               - java
> >               - javascript
> >               - ...
>
> I'm a little concerned about the depth of the tree here but the idea
> looks good.
>
> >
> > 3) There should be a consistent set of samples within bindings,
> > componentTypes etc. so it's easy to copy things to create new samples
> > and add new function
> >
>
> +1
>
> > 4) samples are like functional tests so we should add a sample for
> > every bit of existing and new function
> >
>
> -1
> Business samples illustrate business applications and should focus on
> that. Technology samples illustrate how to use the technology and should
> focus on that.
>
> Functional tests test function and should focus on that. If possible
> they should run as part of the build using the maven test framework.
>
> > 5) Fix the testing/tomcat stuff so all the samples doing functional
> > testing get run as part of a regular build
> >
>
> -1
> Fix the functional/integration tests so that they are an integrated part
> of the build rather than a bolt-on using different infrastructure. As
> Jim said we have been here before.
>
> I will point out that there are huge areas of the code where we do not
> even have unit test coverage even though that is trivial to add to the
> build. We need to stop building samples for testing and put some effort
> into real testing and samples that clearly illustrate key technology
> and/or business application.
>
> I'll plead guilty here as there are very few unit tests for the loader
> framework. So right now I'm committing to go back and add tests there.
> How about if everyone volunteers to go write unit and/or integration
> tests for some part of the code they worked on?
>
> Let's go further - given we're all supposed to be reviewing commits, how
> about we start to -1 changes that don't have tests associated with them?
>
> --
> Jeremy
>

Re: Testing, was: What are some good samples for Tuscany?

Posted by Jim Marino <jm...@myromatours.com>.
On Apr 14, 2006, at 7:44 AM, Jeremy Boynes wrote:

> ant elder wrote:
>
>> Moving some of the testing discussion out of the samples thread...
>>
>> Its not completely clear to me what the distinction is between  
>> 'technology
>> samples' and  functional tests. There are some JavaScript samples  
>> in the
>> samples directory:
>> http://svn.apache.org/repos/asf/incubator/tuscany/java/samples/ 
>> JavaScript/,
>> which of these should be samples and which should be tests, and  
>> where should
>> they fit in the Tuscany directory structure? I'm quite happy to  
>> move some or
>> all of these or change them to be testcases, tell me what you'd like.
>>
>>
>
> To me, the purpose of a technology sample is to allow someone to learn
> and understand a particular piece of technology (such as a  
> construct in
> the programming model); it is a learning/teaching aid. As such there
> should be an emphasis on clearly describing the concept the sample is
> illustrating which requires things like clear source code, a lack of
> distracting constructs (like error handling), a simple but very
> explanatory UI, and so on. The sample is most valuable distributed in
> source form, perhaps with config files for different IDEs that make it
> easy to view.
>
> On the other hand, functional test of the same technology is  
> intended to
> check that the function works as advertised. It involves mechanical
> testing of not just the main code paths but also of documented but
> lesser used functions as well as error paths.
>
An example of this is the lifecycle tests in
o.a.t.container.java.scopes. They are designed to exercise, among
other things, ordered shutdown of component implementation instances.
Some of the scenarios would never come up in a
"sample/scenario/blueprint/demo/foo" application, but they need to have
test coverage nevertheless as they may be encountered in very complex
"real-world" applications. Another example is negative testing, of which
we have almost none. For example, we need to test how the runtime handles
cycles in wires - we obviously would not do that in a
"sample/scenario/blueprint/demo/foo" application.
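
To make that concrete, a negative test of this kind might look roughly
like the sketch below. The ModuleBuilder and WireCycleException names are
made up for illustration - they are not actual Tuscany classes - but the
shape is the point: it deliberately exercises an error path that no
sample would ever hit.

    import junit.framework.TestCase;

    public class WireCycleTestCase extends TestCase {

        public void testCycleInWiresIsRejected() throws Exception {
            // hypothetical builder API: wire componentA -> componentB -> componentA
            ModuleBuilder builder = new ModuleBuilder();
            builder.addComponent("componentA", "componentB");
            builder.addComponent("componentB", "componentA");
            try {
                builder.build();
                fail("Expected the runtime to reject a cycle in the wires");
            } catch (WireCycleException e) {
                // expected - the cycle was detected and reported
            }
        }
    }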


> In many ways these are exercising similar technology but they are  
> doing
> it for different purposes - one is illustration, one is verification.
>
> Taking the JavaScript samples, I think we should keep all of them as
> samples and build them so that they clearly illustrate how SCA can be
> used to build JavaScript components (including ones based on E4X) and
> how it works with JavaScript UI frameworks such as Dojo using JSON- 
> RPC.
>
> I also think we need to increase the amount of testing done in the  
> build
> of the container.js and binding.jsonrpc modules. I recently made a
> change to the model that impacted container.js but was not caught  
> in its
> test suite and was only caught by the sample. I think (and I think Jim
> agrees with me) that this is a problem - things like this should be
> caught by test coverage in the build and not just because it  
> happened to
> be used in the sample.
>
Yes, I agree with this. Actually, I recently made a change that broke
another area which was not covered in our test cases.
>
>> Could there be some specific examples of how we should be doing  
>> functional
>> and integration testing of things like the WS binding entryPoints and
>> externalServices? It was done by running the WS samples in testing/ 
>> tomcat,
>> whats a better approach?
>>
>>
>
> There is already an integration test for the WS entryPoint in the  
> Tomcat
> module itself that is run as part of the main build. It would not be
> hard to add one for externalService using a mock servlet to implement
> the provider (it would be similar to the one used to test the
> ModuleContext setup for the servlet environment). Given I have added
> /all/ the integration tests we currently have I would appreciate it if
> someone else would step up to the plate.
>
>
>> I really don't understand why samples shouldn't be tested as part  
>> of the
>> regular build.  What is the old ground being rehashed, the best I  
>> can find
>> is the comment at the very end of this email which no one posted any
>> disagreements to:
>> http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/ 
>> 200603.mbox/%3c441FBF30.3090607@apache.org%3e
>>
>>
>
> There's stuff here and on other threads
> http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200601.mbox/ 
> %3cOFDBD58BF9.63C0DE14- 
> ON882570F8.0073378D-872570F8.007B9CE3@sybase.com%3e
>
>
>> I'd be careful with -1'ing commits where you don't like the test  
>> coverage.
>> It would be far better to offer guidance and specific constructive
>> criticism, or even help add tests if you think some code is  
>> lacking. We need
>> to foster an environment where people want to join in and help,  
>> throwing
>> around vetos isn't going to do that, and if using vetos becomes  
>> common
>> practice they will likely be used back at you when you least  
>> expect or want
>> them. Everyone acknowledges the current code needs improved  
>> testing, so if
>> nothing else -1s would be a bit hypocritical. Vetos are always  
>> available as
>> an option of last resort, but I think they're best kept for that -  
>> a last
>> resort - after attempts to resolve a problem have failed.
>>
>>
>
> I was proposing (and plan to start) -1 commits with NO test coverage.
>
> We have attempted to resolve this problem through guidance and
> constructive criticism. You say we all acknowledge that the current  
> code
> needs improved testing; we may have agreed we have a problem but we  
> are
> not acting to make the improvements that resolve it. Vetoing changes
> from people who make the problem worse (and who are thereby acting
> against what we agreed on) is IMO an appropriate use of a veto.
>

We really need to raise the importance of testcases and, in
particular, unit and integration tests. We are never going to attract
the type of developers who build good communities if we maintain the
status quo test coverage. Code quality has to be paramount since most
people do not want to work on crappy projects (unless they are
masochists :-) ). We also need to apply equal standards to everyone.
For example, I would not accept a patch from someone that did not
have test coverage, was not formatted correctly, or did not follow
the practices we have laid out with regard to things such as
exception handling. If someone is not willing to do that (a pretty
low bar), I don't think they are really interested in the community
aspects of the project. Why should we expect anything less of
ourselves? I appreciate the need to be constructive but, as Jeremy
said, we are really beyond that and at the point of recidivism.

Since some of our code guidelines are strewn about, I am willing to  
collate those into a document that can be posted, perhaps as part of  
the wiki. Alternatively, it would also be a great way for someone to  
get involved in the project.

> Hypocritical? No, hypocrisy would be saying we need more testing but
> not doing anything about it. I already committed to add testing for  
> the
> loaders - which tests are you going to add?
>
So I will be going back and adding more test cases for component
lifecycles and the extensibility framework.
> --
> Jeremy
>


Re: Testing, was: What are some good samples for Tuscany?

Posted by Jeremy Boynes <jb...@apache.org>.
ant elder wrote:
> Moving some of the testing discussion out of the samples thread...
> 
> Its not completely clear to me what the distinction is between 'technology
> samples' and  functional tests. There are some JavaScript samples in the
> samples directory:
> http://svn.apache.org/repos/asf/incubator/tuscany/java/samples/JavaScript/,
> which of these should be samples and which should be tests, and where should
> they fit in the Tuscany directory structure? I'm quite happy to move some or
> all of these or change them to be testcases, tell me what you'd like.
> 

To me, the purpose of a technology sample is to allow someone to learn
and understand a particular piece of technology (such as a construct in
the programming model); it is a learning/teaching aid. As such there
should be an emphasis on clearly describing the concept the sample is
illustrating which requires things like clear source code, a lack of
distracting constructs (like error handling), a simple but very
explanatory UI, and so on. The sample is most valuable distributed in
source form, perhaps with config files for different IDEs that make it
easy to view.

On the other hand, a functional test of the same technology is intended to
check that the function works as advertised. It involves mechanical
testing of not just the main code paths but also of documented but
lesser used functions as well as error paths.

In many ways these are exercising similar technology but they are doing
it for different purposes - one is illustration, one is verification.

Taking the JavaScript samples, I think we should keep all of them as
samples and build them so that they clearly illustrate how SCA can be
used to build JavaScript components (including ones based on E4X) and
how it works with JavaScript UI frameworks such as Dojo using JSON-RPC.

I also think we need to increase the amount of testing done in the build
of the container.js and binding.jsonrpc modules. I recently made a
change to the model that impacted container.js but was not caught in its
test suite and was only caught by the sample. I think (and I think Jim
agrees with me) that this is a problem - things like this should be
caught by test coverage in the build and not just because it happened to
be used in the sample.
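
As a purely illustrative sketch of what that coverage could look like (all
class names below are hypothetical, not the real container.js API): a
build-time test that constructs the model for a JavaScript component
directly and asserts on what the container derives from it, so a model
change fails container.js's own suite instead of only the sample.

    import junit.framework.TestCase;

    public class JavaScriptIntrospectionTestCase extends TestCase {

        public void testComponentTypeDerivedFromModel() throws Exception {
            // hypothetical model and introspector types, for illustration only
            JavaScriptImplementation impl = new JavaScriptImplementation();
            impl.setScriptFile("HelloWorldImpl.js");

            ComponentType componentType = new JavaScriptIntrospector().introspect(impl);

            // if the model changes shape, this fails in container.js's build
            // rather than in a sample that happens to use the component
            assertNotNull(componentType.getService("HelloWorldService"));
        }
    }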

> Could there be some specific examples of how we should be doing functional
> and integration testing of things like the WS binding entryPoints and
> externalServices? It was done by running the WS samples in testing/tomcat,
> whats a better approach?
> 

There is already an integration test for the WS entryPoint in the Tomcat
module itself that is run as part of the main build. It would not be
hard to add one for externalService using a mock servlet to implement
the provider (it would be similar to the one used to test the
ModuleContext setup for the servlet environment). Given I have added
/all/ the integration tests we currently have I would appreciate it if
someone else would step up to the plate.
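
To give whoever picks this up a starting point, the mock provider could be
as simple as the sketch below; the class name and payload are illustrative
only. The integration test would point the externalService binding at the
servlet's URL and assert on the canned response.

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    /** Illustrative mock provider: returns a canned SOAP response to any request. */
    public class MockProviderServlet extends HttpServlet {
        protected void doPost(HttpServletRequest request, HttpServletResponse response)
                throws IOException {
            response.setContentType("text/xml");
            response.getWriter().write(
                "<soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'>"
                + "<soapenv:Body><getGreetingsResponse><return>Hello</return>"
                + "</getGreetingsResponse></soapenv:Body></soapenv:Envelope>");
        }
    }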

> I really don't understand why samples shouldn't be tested as part of the
> regular build.  What is the old ground being rehashed, the best I can find
> is the comment at the very end of this email which no one posted any
> disagreements to:
> http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200603.mbox/%3c441FBF30.3090607@apache.org%3e
> 

There's stuff here and on other threads
http://mail-archives.apache.org/mod_mbox/ws-tuscany-dev/200601.mbox/%3cOFDBD58BF9.63C0DE14-ON882570F8.0073378D-872570F8.007B9CE3@sybase.com%3e

> I'd be careful with -1'ing commits where you don't like the test coverage.
> It would be far better to offer guidance and specific constructive
> criticism, or even help add tests if you think some code is lacking. We need
> to foster an environment where people want to join in and help, throwing
> around vetos isn't going to do that, and if using vetos becomes common
> practice they will likely be used back at you when you least expect or want
> them. Everyone acknowledges the current code needs improved testing, so if
> nothing else -1s would be a bit hypocritical. Vetos are always available as
> an option of last resort, but I think they're best kept for that - a last
> resort - after attempts to resolve a problem have failed.
> 

I was proposing (and plan to start) -1 commits with NO test coverage.

We have attempted to resolve this problem through guidance and
constructive criticism. You say we all acknowledge that the current code
needs improved testing; we may have agreed we have a problem but we are
not acting to make the improvements that resolve it. Vetoing changes
from people who make the problem worse (and who are thereby acting
against what we agreed on) is IMO an appropriate use of a veto.

Hypocritical? No, hypocrisy would be saying we need more testing but
not doing anything about it. I already committed to add testing for the
loaders - which tests are you going to add?
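
For anyone picking up the loader tests, here is a rough sketch of the kind
of unit test that is missing (ComponentLoader and
InvalidConfigurationException are placeholder names, not the real loader
API): parse a small, deliberately broken XML fragment and check the error
path.

    import java.io.StringReader;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamReader;
    import junit.framework.TestCase;

    public class ComponentLoaderTestCase extends TestCase {

        public void testComponentWithNoNameIsRejected() throws Exception {
            // a <component> element missing its required name attribute
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader("<component/>"));
            reader.nextTag(); // position the reader on the start element
            try {
                new ComponentLoader().load(reader); // placeholder loader API
                fail("Expected a component with no name to be rejected");
            } catch (InvalidConfigurationException e) {
                // expected error path
            }
        }
    }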

--
Jeremy