Posted to dev@tuscany.apache.org by kelvin goodson <ke...@gmail.com> on 2007/05/01 13:55:00 UTC

[Java SDO CTS] thoughts on structure

Having spent some time getting to grips with the CTS there are some things I
think I'd like to improve.

First amongst them is to get some structure that allows us to get a feel for
how well the spec is covered by the tests.  One thing that concerns me is
that one of the most apparent things in the structure is the split between
the parameterized and the "single shot" junit tests.  This seems like a
junit technology driven split,  and I don't think it is necessary or
desirable. We should be able to apply the parameterization feature of junit
without it being so prominent in the source code structure.

I'd like to see more relation between spec features and test code packaging.
That way we are more likely to spot gaps or overlaps.  I feel sure that this
will throw up some other issues,  like testing certain features in
combination.

As a first step I'd like to propose refactoring the "paramatized" package.
As far as I can see our usage of the junit parameterized testing function is
aimed at ensuring consistency between operations performed on graphs when
the metadata has been produced a) from an xsd and b) by using the SDO API to
create  it dynamically.  I propose to rehouse these under
test.sdo21.consistency.
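For illustration, the consistency idea could be sketched in plain Java along the following lines (a toy sketch only — "TypeInfo" and the two factory methods are made up, not real SDO or JUnit API; the point is that one test body runs over both metadata sources without the package structure reflecting that):

```java
import java.util.List;
import java.util.function.Supplier;

// Illustrative sketch: "TypeInfo" stands in for SDO Type metadata, and the
// two suppliers stand in for (a) XSD-derived and (b) dynamically defined
// metadata. Not real SDO or JUnit code.
public class ConsistencyCheck {

    record TypeInfo(String name, List<String> propertyNames) {}

    // (a) metadata as it might come from an XSD
    static TypeInfo fromXsd() {
        return new TypeInfo("Customer", List.of("id", "name"));
    }

    // (b) the same metadata defined via a dynamic API
    static TypeInfo fromDynamicApi() {
        return new TypeInfo("Customer", List.of("id", "name"));
    }

    // One test body, parameterized over the ways the metadata was produced.
    static boolean consistent(List<Supplier<TypeInfo>> sources) {
        TypeInfo first = sources.get(0).get();
        return sources.stream().map(Supplier::get).allMatch(first::equals);
    }

    public static void main(String[] args) {
        boolean ok = consistent(List.of(ConsistencyCheck::fromXsd,
                                        ConsistencyCheck::fromDynamicApi));
        System.out.println("metadata consistent: " + ok);
    }
}
```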

--
Kelvin.

Re: [Java SDO CTS] thoughts on structure

Posted by Brian Murray <br...@gmail.com>.
I have opened Tuscany-1241 to provide the changes my team has made to the
CTS.  As is mentioned in the Jira, the patch is not entirely complete in and
of itself and further may not reflect some of the changes that have gone in
recently.  However, I'm aware that changes are being made to the CTS and
wanted to provide the patch quickly to minimize the complexity of the merge.

As per the above discussion, here are my two cents:

Packaging:

<kg>We should be able to apply the parameterization feature of junit without
it being so prominent in the source code structure.</kg>

I agree and this seems to be a consensus.

The Use of Parameterized Tests:

<kg>As far as I can see our usage of the junit parameterized testing
function is aimed at ensuring consistency between operations performed on
graphs when
the metadata has been produced a) from an xsd and b) by using the SDO API to
create  it dynamically.  I propose to rehouse these under
test.sdo21.consistency.</kg>

<rjm>However, I don't think that this should be packaged under a consistency
package - for me that has the same problems as being organized under
paramatized where you do not get a feel for complete API coverage.</rjm>

<fb>Verifying this by repeating every possible test on the exact same
DataObjects, just created differently, sounds a little inefficient. What I
think we need instead is to have some set of consistency tests that confirm
that the types created in various ways are in fact the same.</fb>

<kg>However, it's not clear to me whether we can now confirm the types are
as they should be through the metadata API alone,  or whether there are still
requirements on an implementation to preserve elements of metadata that can
only be detected by the behaviour seen through the data API.  However,  if
it's purely some facets of XSD behaviour that we needed to test empirically
then that wouldn't require a parameterized approach.</kg>

My group has been running our version of the CTS and has located several
instances of errors specific to one DataObject definition/creation
mechanism.  Granted, these are generally specific to the static case.  Given
this experience, I agree with Kelvin that running the parameterized
tests on the DataObject API adds value.  Creating tests that attempt to
foresee such instances by working through the metadata API alone would seem
to be another instance of "testing the testers" that Kelvin alluded to
earlier.


Miscellaneous:
<fb>I also wonder why ParameterizedTestUtil has its own equals code
(instead of just using EqualityHelper). </fb>

Kelvin addressed this, and he correctly guessed my intent when adding the
equals code to ParameterizedTestUtil.

<fb>I noticed that the TestHelper is unnecessarily complicated. Instead of
having all kinds of getXXXHelper() methods, it should just have one
getHelperContext() method - that's the only method that is implementation
dependent.</fb>

This will be addressed in the resubmission of the patch for Tuscany-1241.
It had been addressed in our version of the CTS, but I see that this is not
reflected in the first version of the patch provided.
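The simplification Frank suggests might look something like the sketch below (stubs only — this "HelperContext" is a stand-in, not the real commonj.sdo.helper API; only the context lookup stays implementation-dependent):

```java
// Stand-in stub for the real SDO HelperContext; in practice the various
// helpers (DataFactory, XMLHelper, ...) would be reached through it.
interface HelperContext {
    String id(); // placeholder for getDataFactory(), getXMLHelper(), etc.
}

// The whole TestHelper shrinks to the single implementation-dependent hook.
interface TestHelper {
    HelperContext getHelperContext();
}

public class TestHelperSketch {
    public static void main(String[] args) {
        // an implementation under test supplies just its context
        TestHelper tuscany = () -> () -> "tuscany-default-context";
        System.out.println(tuscany.getHelperContext().id());
    }
}
```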

Re: [Java SDO CTS] thoughts on structure

Posted by Frank Budinsky <fr...@ca.ibm.com>.
I added a few comments in-line.

Thanks,
Frank.

"kelvin goodson" <ke...@gmail.com> wrote on 05/02/2007 06:38:28 AM:

> I'm inclined to agree that a blanket approach to this kind of testing is not
> best.  The more directed the tests are the better we can understand how
> comprehensive the CTS is.
> However, it's not clear to me whether we can now confirm the types are as
> they should be through the metadata API alone,  or whether there are still
> requirements on an implementation to preserve elements of metadata that can
> only be detected by the behaviour seen through the data API.  However,  if
> it's purely some facets of XSD behaviour that we needed to test empirically
> then that wouldn't require a parameterized approach.

I think this is the case. The intersection of features that can be defined 
in multiple ways wouldn't have anything hidden. Some of the 
non-parameterized (e.g., XSD-specific) tests would test features that rely 
on hidden information.

> 
> I think this kind of parameterized testing is well suited to finding issues
> that might not otherwise be found when exercising code that must handle
> arbitrary graphs, but most of the tests we have in place are written with
> tight preconditions on the inputs.  So to run multiple graphs displaying
> interesting facets through the EqualityHelper for instance might be a good
> use of the technique. In that case the parameterized data sets would need to
> include a description of the expected result.  For the case of
> EqualityHelper that would be easy {true|false},  but for say the XMLHelper's
> serialization function it requires a bit more work, as the variability of
> output is permitted in the XML literal space.
> 
> There is an argument in favour of retaining at least some level of the
> current mode of parameterized testing,  related to testing of static classes.

This is a good point. It would argue for making every test parameterized, 
so that we can optionally run them with statically generated classes. But, 
on the other hand, I wonder if it wouldn't be simpler to just have a 
simple subclass that registers the static classes. The subclass would 
probably also want to add some static tests (i.e., ones that call the 
generated methods) in addition to the dynamic tests that are provided in 
the base class. It's not sufficient to simply test the dynamic behavior of 
static data objects.
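In plain-Java terms the subclassing idea might be sketched like this (illustrative stand-in code only — the class and method names are made up, not real CTS or JUnit API):

```java
import java.util.ArrayList;
import java.util.List;

// Base suite: dynamic-API tests that every implementation runs.
class DynamicTestBase {
    protected final List<String> results = new ArrayList<>();

    void runDynamicTests() {
        results.add("dynamic: get/set via property name");
    }
}

// Subclass: registers statically generated classes and adds tests that
// call the generated methods, on top of the inherited dynamic tests.
class StaticTestSuite extends DynamicTestBase {
    StaticTestSuite() {
        // static class registration would go here
    }

    void runStaticTests() {
        // e.g. a test calling a generated accessor such as getName()
        results.add("static: generated accessor");
    }
}

public class SuiteDemo {
    public static void main(String[] args) {
        StaticTestSuite suite = new StaticTestSuite();
        suite.runDynamicTests(); // dynamic behaviour of static objects
        suite.runStaticTests();  // static-specific additions
        System.out.println(suite.results.size() + " tests ran");
    }
}
```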

> The spec
> doesn't cover static classes yet,  but the parameterized infrastructure that
> we have in place permits an implementation to augment the set of inputs with
> some of its own.  So for the case we currently have,  where metadata has
> been generated by the implementation independent CTS infrastructure from an
> XSD and via the dynamic API,  Tuscany for example could make use of the
> call-out in BaseSDOParameterizedTest's data() method to add one or more
> sets of data created from static classes and those would be run against the
> tests.
> 
> I can to some extent see a theoretical/academic reason for having
> implementation independent equality testing code in the CTS,  but
> practically this leads to a "who tests the testers" scenario.  If we 
build
> tests that make the assumption that the equality helper of the
> implementation under tests is trusted, then we have to ensure that the 
suite
> of tests applied to the equality helper itself warrants that trust.

I assume that one of the test cases in the test suite will explicitly test 
the EqualityHelper, after which it can be trusted.
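The ordering this implies could be sketched roughly as follows (a toy stand-in — `equal` here is an invented placeholder for the real EqualityHelper, and the known-answer cases are made up):

```java
// Sketch: exercise the equality helper against known-answer cases first;
// only if that passes do the remaining tests lean on it.
public class TrustedEquality {
    // stand-in for EqualityHelper.equal(a, b); real graphs would go here
    static boolean equal(String a, String b) { return a.equals(b); }

    // explicit self-test with known expected verdicts
    static boolean selfTest() {
        return equal("x", "x") && !equal("x", "y");
    }

    public static void main(String[] args) {
        if (!selfTest()) {
            System.out.println("equality helper untrusted; aborting");
            return;
        }
        // downstream tests may now use equal(...) in their assertions
        System.out.println("equality helper trusted");
    }
}
```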

> 
> +1 to simplifying the interface of the TestHelper.
> 
> Kelvin.
> 
> On 01/05/07, Frank Budinsky <fr...@ca.ibm.com> wrote:
> >
> > I think this approach sounds a little too brute force. Regardless of how
> > the metadata is defined, once instances are created in memory, they will
> > be exactly the same. Verifying this by repeating every possible test on
> > the exact same DataObjects, just created differently, sounds a little
> > inefficient. What I think we need instead is to have some set of
> > consistency tests that confirm that the types created in various ways are
> > in fact the same. The parameterized tests approach might be a good way to
> > do that, but the tests that need to run to confirm this is a small subset
> > of all the functionality of SDO. Testing every API N times is definitely
> > overkill IMO.
> >
> > Actually, it's probably sufficient to have a parameterized test that
> > simply walks through the metadata and confirms the types and properties
> > are as expected. All the DataObject tests do not need to be parameterized
> > at all.
> >
> > I've noticed some overlap between the parameterized and non parameterized
> > tests. It also looks like the parameterized tests make a lot of
> > Tuscany-specific assumptions. I also wonder why ParameterizedTestUtil has
> > its own equals code (instead of just using EqualityHelper). Maybe we
> > should just remove all these tests, and then resubmit/merge any unique
> > tests with the appropriate non parameterized tests.
> >
> > One more thing, I noticed that the TestHelper is unnecessarily
> > complicated. Instead of having all kinds of getXXXHelper() methods, it
> > should just have one getHelperContext() method - that's the only method
> > that is implementation dependent. Other methods, e.g., createPropertyDef()
> > are also not implementation dependent, so they shouldn't be in the
> > TestHelper interface. I think we should clean this up and simplify it now,
> > before we have so many tests that we won't want to change it anymore.
> >
> > Thoughts?
> >
> > Frank.
> >
> > "Robbie Minshall" <my...@gmail.com> wrote on 05/01/2007 12:06:03 PM:
> >
> > > I agree that the tests should be structured in a way that is spec and
> > > functionally oriented.  I have never really liked the split between
> > > parameterized and non parameterized tests so getting rid of this is just
> > > fine.
> > >
> > > Other than that I think that the test cases are more or less organized
> > > by API though I am sure some changes could be beneficial.
> > >
> > > The idea behind the parameterized tests does indeed lean towards
> > > consistency.  In general the SDO API should apply regardless of the
> > > creation means for the DataObject ( static, dynamic, mixed, the old
> > > relational DB DAS or any other datasource ).  This is done simply by
> > > injecting a dataObject instance into a common set of tests.
> > >
> > > However, I don't think that this should be packaged under a
> > > consistency package - for me that has the same problems as being
> > > organized under parameterized where you do not get a feel for complete
> > > API coverage.
> > >
> > > If you want to get rid of that problem you should just have a single
> > > source tree organized by API and have both parameterized and non
> > > parameterized tests in that single tree.
> > >
> > > I would note that while slightly diluted ( moved more to an interface
> > > to XML with the lack of work on the RDB DAS ) the initial conception of
> > > SDO as a common API to many datasources should still be maintained.
> > > In my view this means that API tests etc should be performed on a
> > > variety of dataobject creation mechanisms and parameterized tests are
> > > the way to go.
> > >
> > > cheers,
> > > Robbie.
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > On 5/1/07, kelvin goodson <ke...@gmail.com> wrote:
> > > > Having spent some time getting to grips with the CTS there are some
> > > > things I think I'd like to improve.
> > > >
> > > > First amongst them is to get some structure that allows us to get a
> > > > feel for how well the spec is covered by the tests.  One thing that
> > > > concerns me is that one of the most apparent things in the structure
> > > > is the split between the parameterized and the "single shot" junit
> > > > tests.  This seems like a junit technology driven split,  and I don't
> > > > think it is necessary or desirable. We should be able to apply the
> > > > parameterization feature of junit without it being so prominent in
> > > > the source code structure.
> > > >
> > > > I'd like to see more relation between spec features and test code
> > > > packaging. That way we are more likely to spot gaps or overlaps.  I
> > > > feel sure that this will throw up some other issues,  like testing
> > > > certain features in combination.
> > > >
> > > > As a first step I'd like to propose refactoring the "paramatized"
> > > > package. As far as I can see our usage of the junit parameterized
> > > > testing function is aimed at ensuring consistency between operations
> > > > performed on graphs when the metadata has been produced a) from an
> > > > xsd and b) by using the SDO API to create it dynamically.  I propose
> > > > to rehouse these under test.sdo21.consistency.
> > > >
> > > > --
> > > > Kelvin.
> > > >
> > > >
> > >
> > >


---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org


Re: [Java SDO CTS] thoughts on structure

Posted by kelvin goodson <ke...@gmail.com>.
I'm inclined to agree that a blanket approach to this kind of testing is not
best.  The more directed the tests are the better we can understand how
comprehensive the CTS is.
However, it's not clear to me whether we can now confirm the types are as
they should be through the metadata API alone,  or whether there are still
requirements on an implementation to preserve elements of metadata that can
only be detected by the behaviour seen through the data API.  However,  if
it's purely some facets of XSD behaviour that we needed to test empirically
then that wouldn't require a parameterized approach.

I think this kind of parameterized testing is well suited to finding issues
that might not otherwise be found when exercising code that must handle
arbitrary graphs, but most of the tests we have in place are written with
tight preconditions on the inputs.  So to run multiple graphs displaying
interesting facets through the EqualityHelper for instance might be a good
use of the technique. In that case the parameterized data sets would need to
include a description of the expected result.  For the case of
EqualityHelper that would be easy {true|false},  but for say the XMLHelper's
serialization function it requires a bit more work, as the variability of
output is permitted in the XML literal space.
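The idea of data sets that carry their expected result could be sketched like this (stand-in code — the strings are placeholders for data graphs, and `Row` is invented, not SDO or CTS API):

```java
import java.util.List;

// Sketch: each parameterized row pairs two graphs with the expected
// verdict, as suggested for EqualityHelper above.
public class EqualityDataSets {
    record Row(String graphA, String graphB, boolean expectedEqual) {}

    public static void main(String[] args) {
        List<Row> rows = List.of(
            new Row("<customer id='1'/>", "<customer id='1'/>", true),
            new Row("<customer id='1'/>", "<customer id='2'/>", false));

        // the "test" is the same for every row; only the data varies
        boolean allPass = rows.stream()
            .allMatch(r -> r.graphA().equals(r.graphB()) == r.expectedEqual());
        System.out.println("all rows pass: " + allPass);
    }
}
```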

There is an argument in favour of retaining at least some level of the current
mode of parameterized testing,  related to testing of static classes.  The spec
doesn't cover static classes yet,  but the parameterized infrastructure that
we have in place permits an implementation to augment the set of inputs with
some of its own.  So for the case we currently have,  where metadata has
been generated by the implementation independent CTS infrastructure from an
XSD and via the dynamic API,  Tuscany for example could make use of the
call-out in BaseSDOParameterizedTest's data() method to add one or more
sets of data created from static classes and those would be run against the
tests.

I can to some extent see a theoretical/academic reason for having
implementation independent equality testing code in the CTS,  but
practically this leads to a "who tests the testers" scenario.  If we build
tests that make the assumption that the equality helper of the
implementation under test is trusted, then we have to ensure that the suite
of tests applied to the equality helper itself warrants that trust.

+1 to simplifying the interface of the TestHelper.

Kelvin.

On 01/05/07, Frank Budinsky <fr...@ca.ibm.com> wrote:
>
> I think this approach sounds a little too brute force. Regardless of how
> the metadata is defined, once instances are created in memory, they will
> be exactly the same. Verifying this by repeating every possible test on
> the exact same DataObjects, just created differently, sounds a little
> inefficient. What I think we need instead is to have some set of
> consistency tests that confirm that the types created in various ways are
> in fact the same. The parameterized tests approach might be a good way to
> do that, but the tests that need to run to confirm this is a small subset
> of all the functionality of SDO. Testing every API N times is definitely
> overkill IMO.
>
> Actually, it's probably sufficient to have a parameterized test that
> simply walks through the metadata and confirms the types and properties
> are as expected. All the DataObject tests do not need to be parameterized
> at all.
>
> I've noticed some overlap between the parameterized and non parameterized
> tests. It also looks like the parameterized tests make a lot of
> Tuscany-specific assumptions. I also wonder why ParameterizedTestUtil has
> its own equals code (instead of just using EqualityHelper). Maybe we
> should just remove all these tests, and then resubmit/merge any unique
> tests with the appropriate non parameterized tests.
>
> One more thing, I noticed that the TestHelper is unnecessarily
> complicated. Instead of having all kinds of getXXXHelper() methods, it
> should just have one getHelperContext() method - that's the only method
> that is implementation dependent. Other methods, e.g., createPropertyDef()
> are also not implementation dependent, so they shouldn't be in the
> TestHelper interface. I think we should clean this up and simplify it now,
> before we have so many tests that we won't want to change it anymore.
>
> Thoughts?
>
> Frank.
>
> "Robbie Minshall" <my...@gmail.com> wrote on 05/01/2007 12:06:03 PM:
>
> > I agree that the tests should be structured in a way that is spec and
> > functionally oriented.  I have never really liked the split between
> > parameterized and non parameterized tests so getting rid of this is just
> > fine.
> >
> > Other than that I think that the test cases are more or less organized
> > by API though I am sure some changes could be beneficial.
> >
> > The idea behind the parameterized tests does indeed lean towards
> > consistency.  In general the SDO API should apply regardless of the
> > creation means for the DataObject ( static, dynamic, mixed, the old
> > relational DB DAS or any other datasource ).  This is done simply by
> > injecting a dataObject instance into a common set of tests.
> >
> > However, I don't think that this should be packaged under a
> > consistency package - for me that has the same problems as being
> > organized under parameterized where you do not get a feel for complete
> > API coverage.
> >
> > If you want to get rid of that problem you should just have a single
> > source tree organized by API and have both parameterized and non
> > parameterized tests in that single tree.
> >
> > I would note that while slightly diluted ( moved more to an interface
> > to XML with the lack of work on the RDB DAS ) the initial conception of
> > SDO as a common API to many datasources should still be maintained.
> > In my view this means that API tests etc should be performed on a
> > variety of dataobject creation mechanisms and parameterized tests are
> > the way to go.
> >
> > cheers,
> > Robbie.
> >
> >
> >
> >
> >
> >
> >
> >
> > On 5/1/07, kelvin goodson <ke...@gmail.com> wrote:
> > > Having spent some time getting to grips with the CTS there are some
> > > things I think I'd like to improve.
> > >
> > > First amongst them is to get some structure that allows us to get a
> > > feel for how well the spec is covered by the tests.  One thing that
> > > concerns me is that one of the most apparent things in the structure
> > > is the split between the parameterized and the "single shot" junit
> > > tests.  This seems like a junit technology driven split,  and I don't
> > > think it is necessary or desirable. We should be able to apply the
> > > parameterization feature of junit without it being so prominent in
> > > the source code structure.
> > >
> > > I'd like to see more relation between spec features and test code
> > > packaging. That way we are more likely to spot gaps or overlaps.  I
> > > feel sure that this will throw up some other issues,  like testing
> > > certain features in combination.
> > >
> > > As a first step I'd like to propose refactoring the "paramatized"
> > > package. As far as I can see our usage of the junit parameterized
> > > testing function is aimed at ensuring consistency between operations
> > > performed on graphs when the metadata has been produced a) from an
> > > xsd and b) by using the SDO API to create it dynamically.  I propose
> > > to rehouse these under test.sdo21.consistency.
> > >
> > > --
> > > Kelvin.
> > >
> >
> >
>

Re: [Java SDO CTS] thoughts on structure

Posted by Frank Budinsky <fr...@ca.ibm.com>.
I think this approach sounds a little too brute force. Regardless of how 
the metadata is defined, once instances are created in memory, they will 
be exactly the same. Verifying this by repeating every possible test on 
the exact same DataObjects, just created differently, sounds a little 
inefficient. What I think we need instead is to have some set of 
consistency tests that confirm that the types created in various ways are 
in fact the same. The parameterized tests approach might be a good way to 
do that, but the tests that need to run to confirm this is a small subset 
of all the functionality of SDO. Testing every API N times is definitely 
overkill IMO.

Actually, it's probably sufficient to have a parameterized test that 
simply walks through the metadata and confirms the types and properties 
are as expected. All the DataObject tests do not need to be parameterized 
at all.
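The metadata-walk test suggested above might be sketched roughly as follows (illustrative only — the maps stand in for SDO Types and Properties, and the type/property names are invented):

```java
import java.util.List;
import java.util.Map;

// Sketch: one parameterized test walks the metadata and checks types and
// properties, instead of re-running every DataObject test per creation
// mechanism.
public class MetadataWalk {
    static boolean matchesExpected(Map<String, List<String>> actualTypes) {
        Map<String, List<String>> expected =
            Map.of("Customer", List.of("id", "name"),
                   "Order", List.of("id", "items"));
        return expected.equals(actualTypes);
    }

    public static void main(String[] args) {
        // metadata as an implementation under test might report it
        Map<String, List<String>> reported =
            Map.of("Customer", List.of("id", "name"),
                   "Order", List.of("id", "items"));
        System.out.println("metadata as expected: "
                           + matchesExpected(reported));
    }
}
```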

I've noticed some overlap between the parameterized and non parameterized 
tests. It also looks like the parameterized tests make a lot of 
Tuscany-specific assumptions. I also wonder why ParameterizedTestUtil has 
its own equals code (instead of just using EqualityHelper). Maybe we 
should just remove all these tests, and then resubmit/merge any unique 
tests with the appropriate non parameterized tests.

One more thing, I noticed that the TestHelper is unnecessarily 
complicated. Instead of having all kinds of getXXXHelper() methods, it 
should just have one getHelperContext() method - that's the only method 
that is implementation dependent. Other methods, e.g., createPropertyDef() 
are also not implementation dependent, so they shouldn't be in the 
TestHelper interface. I think we should clean this up and simplify it now, 
before we have so many tests that we won't want to change it anymore.

Thoughts?

Frank.

"Robbie Minshall" <my...@gmail.com> wrote on 05/01/2007 12:06:03 PM:

> I agree that the tests should be structured in a way that is spec and
> functionally oriented.  I have never really liked the split between
> parameterized and non parameterized tests so getting rid of this is just
> fine.
> 
> Other than that I think that the test cases are more or less organized
> by API though I am sure some changes could be beneficial.
> 
> The idea behind the parameterized tests does indeed lean towards
> consistency.  In general the SDO API should apply regardless of the
> creation means for the DataObject ( static, dynamic, mixed, the old
> relational DB DAS or any other datasource ).  This is done simply by
> injecting a dataObject instance into a common set of tests.
> 
> However, I don't think that this should be packaged under a
> consistency package - for me that has the same problems as being
> organized under parameterized where you do not get a feel for complete
> API coverage.
> 
> If you want to get rid of that problem you should just have a single
> source tree organized by API and have both parameterized and non
> parameterized tests in that single tree.
> 
> I would note that while slightly diluted ( moved more to an interface
> to XML with the lack of work on the RDB DAS ) the initial conception of
> SDO as a common API to many datasources should still be maintained.
> In my view this means that API tests etc should be performed on a
> variety of dataobject creation mechanisms and parameterized tests are
> the way to go.
> 
> cheers,
> Robbie.
> 
> 
> 
> 
> 
> 
> 
> 
> On 5/1/07, kelvin goodson <ke...@gmail.com> wrote:
> > Having spent some time getting to grips with the CTS there are some
> > things I think I'd like to improve.
> >
> > First amongst them is to get some structure that allows us to get a
> > feel for how well the spec is covered by the tests.  One thing that
> > concerns me is that one of the most apparent things in the structure
> > is the split between the parameterized and the "single shot" junit
> > tests.  This seems like a junit technology driven split,  and I don't
> > think it is necessary or desirable. We should be able to apply the
> > parameterization feature of junit without it being so prominent in
> > the source code structure.
> >
> > I'd like to see more relation between spec features and test code
> > packaging. That way we are more likely to spot gaps or overlaps.  I
> > feel sure that this will throw up some other issues,  like testing
> > certain features in combination.
> >
> > As a first step I'd like to propose refactoring the "paramatized"
> > package. As far as I can see our usage of the junit parameterized
> > testing function is aimed at ensuring consistency between operations
> > performed on graphs when the metadata has been produced a) from an
> > xsd and b) by using the SDO API to create it dynamically.  I propose
> > to rehouse these under test.sdo21.consistency.
> >
> > --
> > Kelvin.
> >
> 
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org


Re: [Java SDO CTS] thoughts on structure

Posted by Robbie Minshall <my...@gmail.com>.
I agree that the tests should be structured in a way that is spec and
functionally oriented.  I have never really liked the split between
parameterized and non parameterized tests so getting rid of this is just
fine.

Other than that I think that the test cases are more or less organized
by API though I am sure some changes could be beneficial.

The idea behind the parameterized tests does indeed lean towards
consistency.  In general the SDO API should apply regardless of the
creation means for the DataObject ( static, dynamic, mixed, the old
relational DB DAS or any other datasource ).  This is done simply by
injecting a dataObject instance into a common set of tests.

However, I don't think that this should be packaged under a
consistency package - for me that has the same problems as being
organized under parameterized where you do not get a feel for complete
API coverage.

If you want to get rid of that problem you should just have a single
source tree organized by API and have both parameterized and non
parameterized tests in that single tree.

I would note that while slightly diluted ( moved more to an interface
to XML with the lack of work on the RDB DAS ) the initial conception of
SDO as a common API to many datasources should still be maintained.
In my view this means that API tests etc should be performed on a
variety of dataobject creation mechanisms and parameterized tests are
the way to go.

cheers,
Robbie.








On 5/1/07, kelvin goodson <ke...@gmail.com> wrote:
> Having spent some time getting to grips with the CTS there are some things I
> think I'd like to improve.
>
> First amongst them is to get some structure that allows us to get a feel for
> how well the spec is covered by the tests.  One thing that concerns me is
> that one of the most apparent things in the structure is the split between
> the parameterized and the "single shot" junit tests.  This seems like a
> junit technology driven split,  and I don't think it is necessary or
> desirable. We should be able to apply the parameterization feature of junit
> without it being so prominent in the source code structure.
>
> I'd like to see more relation between spec features and test code packaging.
> That way we are more likely to spot gaps or overlaps.  I feel sure that this
> will throw up some other issues,  like testing certain features in
> combination.
>
> As a first step I'd like to propose refactoring the "paramatized" package.
> As far as I can see our usage of the junit parameterized testing function is
> aimed at ensuring consistency between operations performed on graphs when
> the metadata has been produced a) from an xsd and b) by using the SDO API to
> create  it dynamically.  I propose to rehouse these under
> test.sdo21.consistency.
>
> --
> Kelvin.
>


-- 
* * * Charlie * * *
Check out some pics of little Charlie at
http://www.flickr.com/photos/83388211@N00/sets/

Check out Charlie's al crapo blog at http://robbieminshall.blogspot.com

* * * Address * * *
1914 Overland Drive
Chapel Hill
NC 27517

* * * Number * * *
919-225-1553

---------------------------------------------------------------------
To unsubscribe, e-mail: tuscany-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: tuscany-dev-help@ws.apache.org