Posted to derby-dev@db.apache.org by Ole Solberg <Ol...@Sun.COM> on 2005/05/04 16:24:53 UTC
Regression testing
Hi,
Are regression test results for "head of trunk" of Derby available
somewhere? (I.e. tests run at some specific svn revision of Derby.)
I am asking because I would like to compare my own test results with the
current "official" state of Derby: e.g.
- if errors I see are due to problems with my own setup/environment, and
- what errors are *expected* to be seen on the revisions where
regression tests were run.
Regards
Ole Solberg
Re: Regression testing
Posted by David Van Couvering <Da...@Sun.COM>.
Hi, Andrew. I see your point about increasing traffic on the list, but
it's just one email a day; that really isn't going to increase the noise
much.
Where I used to work, we got an email when the build or tests *failed*.
This was very useful, as it created an automatic impetus for all of
us to be more careful, because everyone knows when you messed up :) If
our tools could support that, that would be fine with me.
Regarding the tinderbox, I'll talk around here to see what we can dig
up; making the results available could be accomplished by posting the
results to a public web site. I think we should be able to work this
out; it seems worth it if it increases the quality of the product.
David
Andrew McIntyre wrote:
>
> On May 4, 2005, at 2:44 PM, Ole Solberg wrote:
>
>> We also build and test on a few platforms daily and could provide
>> those results.
>>
>> My level of ambition would be to just send out the results without any
>> deep analysis. (Just catching and filtering obvious local
> setup/environment blunders etc.)
>>
>> I think communicating daily regression test results could be a good
>> way to present the state of Derby.
>
>
> It's great to hear that other derby-dev'rs are building Derby nightly
> and running the tests! I think it would be a very good thing to be
> sharing test results, but I'm a bit concerned about sending them to
> derby-dev itself. I personally feel that nightly automated mail to the
> list would simply decrease the signal-to-noise ratio on the list and be
> a bit of a nuisance (and ultimately not likely to be read). But there
> are alternatives to sending nightly test results to the list, like
> posting them in a specific location on the Derby website, as I'm
> currently doing with the doc/javadoc build. Or we could have a page on
> the Derby website with links to locations where derby-dev'rs are
> publishing their test results.
>
> In an ideal situation, it would be great to have a tinderbox approach,
> as David suggested, with constantly active build/test cycles running.
> But there's always the complicated question of who's going to provide
> the hardware and put the box out on the net for all to see when going
> that route.
>
> andrew
>
Re: Regression testing
Posted by Myrna van Lunteren <m....@gmail.com>.
I agree a web page would be best. How would we go about doing that?
Andrew, can you set something up like with the javadoc? You've got the
karma... I'll work on a little program that grabs our nightly jdk142
functiontests (derbyall) results for windows and linux.
Ole, do your nightly results include the jdk15 or J2SE 5.0 results?
re tinderbox & constant running & maximum turnaround...Let's start
with the results from our nightly run in California & yours in Norge,
and go from there...
I also wonder if someone at Sun is running the J2EE CTS suite with
Derby regularly? If so, maybe that can be posted?
Myrna
On 5/10/05, Ole Solberg <Ol...@sun.com> wrote:
> Andrew McIntyre wrote:
> >
> > On May 4, 2005, at 2:44 PM, Ole Solberg wrote:
> >
> >> We also build and test on a few platforms daily and could provide
> >> those results.
> >>
> >> My level of ambition would be to just send out the results without any
> >> deep analysis. (Just catching and filtering obvious local
> > setup/environment blunders etc.)
> >>
> >> I think communicating daily regression test results could be a good
> >> way to present the state of Derby.
> >
> >
> > It's great to hear that other derby-dev'rs are building Derby nightly
> > and running the tests! I think it would be a very good thing to be
> > sharing test results, but I'm a bit concerned about sending them to
> > derby-dev itself. I personally feel that nightly automated mail to the
> > list would simply decrease the signal-to-noise ratio on the list and be
> > a bit of a nuisance (and ultimately not likely to be read). But there
> > are alternatives to sending nightly test results to the list, like
> > posting them in a specific location on the Derby website, as I'm
> > currently doing with the doc/javadoc build. Or we could have a page on
> > the Derby website with links to locations where derby-dev'rs are
> > publishing their test results.
>
> I agree that having a page on the Derby website linking to the
> actual test reports would be a much better solution than "polluting"
> derby-dev with lots of e-mails.
> Anyone with the necessary rights willing to create such a page?
>
> >
> > In an ideal situation, it would be great to have a tinderbox approach,
> > as David suggested, with constantly active build/test cycles running.
> > But there's always the complicated question of who's going to provide
> > the hardware and put the box out on the net for all to see when going
> > that route.
>
> Which tests should be included in such an approach?
> What must the maximum turnaround time for this be to be considered useful?
>
> >
> > andrew
> >
>
> -- Ole
>
Re: Regression testing
Posted by David Van Couvering <Da...@Sun.COM>.
Andrew McIntyre wrote:
>
> On May 10, 2005, at 11:00 AM, David Van Couvering wrote:
>
>> Again, I'm not sure why there would be "lots of emails" for a nightly
>> build. Also, if we email failures, that would be great, as I know I
>> personally won't be checking the test web site on a regular basis.
>
>
> Let's get hypothetical and say 2 derby-dev'rs decide to run the tests
> nightly against their own builds (on whatever platform/jdk revision
> they're interested in) and send the output to the list. Now, make that
> 5, or 10. Imagine if, by accident, someone checks in a change that
> happens to break all of the tests, and we have 5 people sending their
> test result failures to the list. Suddenly, 300 people have 100
> megabytes of test diffs in their inbox. Not fun.
>
Oh, OK, I wasn't thinking about it that way. I was thinking there is
one official nightly run of regression tests, and a report is sent out
by email. This report would include all covered platforms. I wasn't
thinking of this as a distributed job.
But you're probably right; it makes more sense to distribute the work,
have individual derby-dev'rs choose the platforms they are
interested in, and have a common web site where the results are posted...
> But, what if instead derby-dev'r X running tests on platform Y and JDK Z
> posts to the list with "there is a serious failure going on, I think
> it's due to {whatever} and here's a link to the failure information:
> .... " - that's a lot more useful than automatic notification, even in
> the case of failure. And, with a page on the website that can collect
> the locations of where specific platform/jvm tests are running, at least
> all of that useful test information is available without derby-dev
> receiving a lot of email that is only of interest to a small subsection
> of people. If there are serious issues on a specific platform/jvm, then
> the people responsible for those tests can provide a more detailed
> description and bug report to derby-dev than if a failure notification
> is mailed to the list. Well, that's my opinion, anyway.
>
>> Well, it seems to me you just have a cycle of "pull, build, full
> regressions, post results." If we have a powerful enough machine, it
>> shouldn't take that long, and we'd have fresh results every, say, four
>> hours. Not bad!
>>
>> If we want a faster turnaround, we could, as you suggest, identify a
>> subset of tests.
>
>
> I don't think four hours for a full build/test cycle is unreasonable for
> a tinderbox, and I think with fairly recent hardware and the relaxed
> durability option for testing, that the time for a full build/test cycle
> could probably be quite a bit less than that, probably around two hours
> on top-of-the-line hardware. i.e. I think that running the full test
> suite in a tinderbox approach is a better idea than a subset of the tests.
OK, I think we're in agreement here then... Now it's just a mere matter
of implementing it :)
Thanks,
David
>
> andrew
>
Re: Regression testing
Posted by Ole Solberg <Ol...@Sun.COM>.
Daniel John Debrunner wrote:
> Ole Solberg - Sun Norway wrote:
>
>
>>Our (Sun DBTG) regression test results are now available at
>>
>>http://www.multinet.no/~solberg/public/Apache/Derby/index.html
>>
>
>
>
> Ole, I see now that the build date is available when you click on a svn
> revision number to go to its results, I still think that a date in the
> 'Tested Revisions' table on the main page would be useful.
>
> My next question is (and again I probably missed the information) where
> is the JVM information for the runs, I see platform information but
> nothing about which JVM is being used (1.3, 1.4, 1.5?).
>
> Thanks,
> Dan.
>
Thanks for the feedback!
I am in the process of enhancing the pages in accordance with these
wishes, and at the same time trying to make my scripts usable outside
our site.
--
Ole Solberg, Database Technology Group,
Sun Microsystems, Trondheim, Norway
Re: Regression testing
Posted by Daniel John Debrunner <dj...@debrunners.com>.
Andrew McIntyre wrote:
> On 5/26/05, Daniel John Debrunner <dj...@debrunners.com> wrote:
>
>>My next question is (and again I probably missed the information) where
>>is the JVM information for the runs, I see platform information but
>>nothing about which JVM is being used (1.3, 1.4, 1.5?).
>
>
> Links to failures show that the JVM being tested is JDK 1.5. I would
> suggest putting this information on the front page so curious users
> could find it more easily.
Thanks, especially hard to find if there are no failures. :-)
Dan.
Re: Regression testing
Posted by Andrew McIntyre <mc...@gmail.com>.
On 5/26/05, Daniel John Debrunner <dj...@debrunners.com> wrote:
> My next question is (and again I probably missed the information) where
> is the JVM information for the runs, I see platform information but
> nothing about which JVM is being used (1.3, 1.4, 1.5?).
Links to failures show that the JVM being tested is JDK 1.5. I would
suggest putting this information on the front page so curious users
could find it more easily.
andrew
Re: Regression testing
Posted by Daniel John Debrunner <dj...@debrunners.com>.
Ole Solberg - Sun Norway wrote:
> Our (Sun DBTG) regression test results are now available at
>
> http://www.multinet.no/~solberg/public/Apache/Derby/index.html
>
Ole, I see now that the build date is available when you click on a svn
revision number to go to its results, I still think that a date in the
'Tested Revisions' table on the main page would be useful.
My next question is (and again I probably missed the information) where
is the JVM information for the runs, I see platform information but
nothing about which JVM is being used (1,3, 1.4, 1.5?).
Thanks,
Dan.
Re: Regression testing
Posted by Myrna van Lunteren <m....@gmail.com>.
On 5/20/05, Daniel John Debrunner <dj...@debrunners.com> wrote:
> Ole Solberg - Sun Norway wrote:
>
> > Our (Sun DBTG) regression test results are now available at
> >
> > http://www.multinet.no/~solberg/public/Apache/Derby/index.html
> >
> > Have a look and see if this is useful.
>
> That looks great!
>
> Have you considered adding date information to the report?
> Eg. always have the date next to a revision
>
> 170978 2005/05/19
>
> I think this would be great to get linked to from the Derby site.
>
> Dan.
>
>
wow.
That looks great.
It's much more than I was planning to do...
I was planning on only the contents of the derbyall_fail.txt...
And I haven't even started on it yet...
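A sketch of what such a little program might look like, purely hypothetical: the format of derbyall_fail.txt (one failed test per line, blank lines ignored) and the platform labels below are assumptions, not the actual harness output.

```python
# Hypothetical sketch: collect the failed-test lists from several nightly
# runs and print a one-line summary per platform. The fail-file format
# (one failed test per line, blanks ignored) is an assumption.

def read_failures(text):
    """Return the list of failed tests named in a derbyall_fail.txt body."""
    return [line.strip() for line in text.splitlines() if line.strip()]

def summarize(results):
    """results maps a platform label to the text of its fail file."""
    lines = []
    for platform, text in sorted(results.items()):
        failures = read_failures(text)
        status = "clean" if not failures else f"{len(failures)} failing"
        lines.append(f"{platform}: {status}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Made-up sample data for illustration only
    sample = {
        "win-jdk142": "lang/wisconsin.java\nstore/bootLock.java\n",
        "linux-jdk142": "",
    }
    print(summarize(sample))
```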
Myrna
Re: Regression testing
Posted by Ole Solberg - Sun Norway <Ol...@Sun.COM>.
Myrna,
Sun, 22 May 2005 17:54:31 -0700 (PDT) Myrna van Lunteren wrote:
> Ole,
>
> The more I think about this, the more I wonder if I can make anything
> as nice as what you've done in a reasonable time frame. Lots of kudos!
>
> Would it be possible for you to share/contribute your program for
> gathering information?
It is just a set of simple shell scripts; I will go through them and
generalize any site-specific stuff, then I'll send them to you for a try.
If it works ok at your site we could publish it for others interested.
> If not, I'll just forge ahead with our stuff, but it's going to be much simpler.
>
> Also, I have to add that our testing is more on a variety of jvms and
> insane and sane jars than a variety of operating systems (for
> Cloudscape, we did those OS tests as part of the QA cycle for a
> release, not nightly).
I think that is where we also are heading.
I assume that the differences that we're seeing on different platforms
now might simply be because I haven't got the environment (like locale
stuff) set up correctly everywhere.
>
> Myrna
>
> On 5/22/05, Jean T. Anderson <jt...@bristowhill.com> wrote:
>
>>Daniel John Debrunner wrote:
>>
>>>Ole Solberg - Sun Norway wrote:
>>>
>>>
>>>
>>>>Our (Sun DBTG) regression test results are now available at
>>>>
>>>>http://www.multinet.no/~solberg/public/Apache/Derby/index.html
>>>>
>>>>Have a look and see if this is useful.
>>>
>>>
>>>That looks great!
>>>
>>>Have you considered adding date information to the report?
>>>Eg. always have the date next to a revision
>>>
>>>170978 2005/05/19
>>>
>>>I think this would be great to get linked to from the Derby site.
>>>
>>>Dan.
>>>
>>
>>Added to
>>http://incubator.apache.org/derby/derby_downloads.html#How+to+test+Derby ,
>>committed revision 171382. (Scott H, thanks for the poke on the web site
>>update.) I suspect there's a better home for this link, so anybody
>>should feel free to suggest one.
>>
>>This change will become visible with the next rsync to www.apache.org
>>(currently scheduled for every 4 hours).
>>
>> -jean
>>
>>
>>
>>
--
Ole Solberg, Database Technology Group,
Sun Microsystems, Trondheim, Norway
Re: Regression testing
Posted by Myrna van Lunteren <m....@gmail.com>.
Ole,
The more I think about this, the more I wonder if I can make anything
as nice as what you've done in a reasonable time frame. Lots of kudos!
Would it be possible for you to share/contribute your program for
gathering information?
If not, I'll just forge ahead with our stuff, but it's going to be much simpler.
Also, I have to add that our testing is more on a variety of jvms and
insane and sane jars than a variety of operating systems (for
Cloudscape, we did those OS tests as part of the QA cycle for a
release, not nightly).
Myrna
On 5/22/05, Jean T. Anderson <jt...@bristowhill.com> wrote:
> Daniel John Debrunner wrote:
> > Ole Solberg - Sun Norway wrote:
> >
> >
> >>Our (Sun DBTG) regression test results are now available at
> >>
> >>http://www.multinet.no/~solberg/public/Apache/Derby/index.html
> >>
> >>Have a look and see if this is useful.
> >
> >
> > That looks great!
> >
> > Have you considered adding date information to the report?
> > Eg. always have the date next to a revision
> >
> > 170978 2005/05/19
> >
> > I think this would be great to get linked to from the Derby site.
> >
> > Dan.
> >
>
> Added to
> http://incubator.apache.org/derby/derby_downloads.html#How+to+test+Derby ,
> committed revision 171382. (Scott H, thanks for the poke on the web site
> update.) I suspect there's a better home for this link, so anybody
> should feel free to suggest one.
>
> This change will become visible with the next rsync to www.apache.org
> (currently scheduled for every 4 hours).
>
> -jean
>
>
>
>
Re: Regression testing
Posted by "Jean T. Anderson" <jt...@bristowhill.com>.
Daniel John Debrunner wrote:
> Ole Solberg - Sun Norway wrote:
>
>
>>Our (Sun DBTG) regression test results are now available at
>>
>>http://www.multinet.no/~solberg/public/Apache/Derby/index.html
>>
>>Have a look and see if this is useful.
>
>
> That looks great!
>
> Have you considered adding date information to the report?
> Eg. always have the date next to a revision
>
> 170978 2005/05/19
>
> I think this would be great to get linked to from the Derby site.
>
> Dan.
>
Added to
http://incubator.apache.org/derby/derby_downloads.html#How+to+test+Derby ,
committed revision 171382. (Scott H, thanks for the poke on the web site
update.) I suspect there's a better home for this link, so anybody
should feel free to suggest one.
This change will become visible with the next rsync to www.apache.org
(currently scheduled for every 4 hours).
-jean
Re: Regression testing
Posted by Daniel John Debrunner <dj...@debrunners.com>.
Ole Solberg - Sun Norway wrote:
> Our (Sun DBTG) regression test results are now available at
>
> http://www.multinet.no/~solberg/public/Apache/Derby/index.html
>
> Have a look and see if this is useful.
That looks great!
Have you considered adding date information to the report?
Eg. always have the date next to a revision
170978 2005/05/19
I think this would be great to get linked to from the Derby site.
Dan.
Re: Regression testing
Posted by Ole Solberg - Sun Norway <Ol...@Sun.COM>.
Our (Sun DBTG) regression test results are now available at
http://www.multinet.no/~solberg/public/Apache/Derby/index.html
Have a look and see if this is useful.
Do we have an Apache Derby web page where links to such materials could
be added?
--
Ole Solberg, Database Technology Group,
Sun Microsystems, Trondheim, Norway
Re: Regression testing
Posted by Andrew McIntyre <mc...@gmail.com>.
On May 10, 2005, at 11:00 AM, David Van Couvering wrote:
> Again, I'm not sure why there would be "lots of emails" for a nightly
> build. Also, if we email failures, that would be great, as I know I
> personally won't be checking the test web site on a regular basis.
Let's get hypothetical and say 2 derby-dev'rs decide to run the tests
nightly against their own builds (on whatever platform/jdk revision
they're interested in) and send the output to the list. Now, make that
5, or 10. Imagine if, by accident, someone checks in a change that
happens to break all of the tests, and we have 5 people sending their
test result failures to the list. Suddenly, 300 people have 100
megabytes of test diffs in their inbox. Not fun.
But, what if instead derby-dev'r X running tests on platform Y and JDK
Z posts to the list with "there is a serious failure going on, I think
it's due to {whatever} and here's a link to the failure information:
.... " - that's a lot more useful than automatic notification, even in
the case of failure. And, with a page on the website that can collect
the locations of where specific platform/jvm tests are running, at
least all of that useful test information is available without
derby-dev receiving a lot of email that is only of interest to a small
subsection of people. If there are serious issues on a specific
platform/jvm, then the people responsible for those tests can provide a
more detailed description and bug report to derby-dev than if a failure
notification is mailed to the list. Well, that's my opinion, anyway.
> Well, it seems to me you just have a cycle of "pull, build, full
> regressions, post results." If we have a powerful enough machine, it
> shouldn't take that long, and we'd have fresh results every, say, four
> hours. Not bad!
>
> If we want a faster turnaround, we could, as you suggest, identify a
> subset of tests.
I don't think four hours for a full build/test cycle is unreasonable
for a tinderbox, and I think with fairly recent hardware and the
relaxed durability option for testing, that the time for a full
build/test cycle could probably be quite a bit less than that, probably
around two hours on top-of-the-line hardware. i.e. I think that running
the full test suite in a tinderbox approach is a better idea than a
subset of the tests.
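(For reference, the relaxed durability mode mentioned above is, as far as I recall, switched on with a system property along these lines; the exact property name should be double-checked against the Derby docs:)

```
# derby.properties for test runs only -- trades durability guarantees
# for speed, so never use this with data you care about
derby.system.durability=test
```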
andrew
Re: Regression testing
Posted by David Van Couvering <Da...@Sun.COM>.
Ole Solberg wrote:
> Andrew McIntyre wrote:
>
>>
>> On May 4, 2005, at 2:44 PM, Ole Solberg wrote:
>>
>>> We also build and test on a few platforms daily and could provide
>>> those results.
>>>
>>> My level of ambition would be to just send out the results without
>>> any deep analysis. (Just catching and filtering obvious local
>>> setup/environment blunders etc.)
>>>
>>> I think communicating daily regression test results could be a good
>>> way to present the state of Derby.
>>
>>
>>
>> It's great to hear that other derby-dev'rs are building Derby nightly
>> and running the tests! I think it would be a very good thing to be
>> sharing test results, but I'm a bit concerned about sending them to
>> derby-dev itself. I personally feel that nightly automated mail to the
>> list would simply decrease the signal-to-noise ratio on the list and
>> be a bit of a nuisance (and ultimately not likely to be read). But
>> there are alternatives to sending nightly test results to the list,
>> like posting them in a specific location on the Derby website, as I'm
>> currently doing with the doc/javadoc build. Or we could have a page on
>> the Derby website with links to locations where derby-dev'rs are
>> publishing their test results.
>
>
> I agree that having a page on the Derby website linking to the
> actual test reports would be a much better solution than "polluting"
> derby-dev with lots of e-mails.
> Anyone with the necessary rights willing to create such a page?
Again, I'm not sure why there would be "lots of emails" for a nightly
build. Also, if we email failures, that would be great, as I know I
personally won't be checking the test web site on a regular basis.
>
>>
>> In an ideal situation, it would be great to have a tinderbox approach,
>> as David suggested, with constantly active build/test cycles running.
>> But there's always the complicated question of who's going to provide
>> the hardware and put the box out on the net for all to see when going
>> that route.
>
>
> Which tests should be included in such an approach?
> What must the maximum turnaround time for this be to be considered useful?
Well, it seems to me you just have a cycle of "pull, build, full
regressions, post results." If we have a powerful enough machine, it
shouldn't take that long, and we'd have fresh results every, say, four
hours. Not bad!
If we want a faster turnaround, we could, as you suggest, identify a
subset of tests.
>
>>
>> andrew
>>
>
> -- Ole
Re: Regression testing
Posted by Ole Solberg <Ol...@Sun.COM>.
Andrew McIntyre wrote:
>
> On May 4, 2005, at 2:44 PM, Ole Solberg wrote:
>
>> We also build and test on a few platforms daily and could provide
>> those results.
>>
>> My level of ambition would be to just send out the results without any
>> deep analysis. (Just catching and filtering obvious local
>> setup/environment blunders etc.)
>>
>> I think communicating daily regression test results could be a good
>> way to present the state of Derby.
>
>
> It's great to hear that other derby-dev'rs are building Derby nightly
> and running the tests! I think it would be a very good thing to be
> sharing test results, but I'm a bit concerned about sending them to
> derby-dev itself. I personally feel that nightly automated mail to the
> list would simply decrease the signal-to-noise ratio on the list and be
> a bit of a nuisance (and ultimately not likely to be read). But there
> are alternatives to sending nightly test results to the list, like
> posting them in a specific location on the Derby website, as I'm
> currently doing with the doc/javadoc build. Or we could have a page on
> the Derby website with links to locations where derby-dev'rs are
> publishing their test results.
I agree that having a page on the Derby website linking to the
actual test reports would be a much better solution than "polluting"
derby-dev with lots of e-mails.
Anyone with the necessary rights willing to create such a page?
>
> In an ideal situation, it would be great to have a tinderbox approach,
> as David suggested, with constantly active build/test cycles running.
> But there's always the complicated question of who's going to provide
> the hardware and put the box out on the net for all to see when going
> that route.
Which tests should be included in such an approach?
What must the maximum turnaround time for this be to be considered useful?
>
> andrew
>
-- Ole
Re: Regression testing
Posted by Andrew McIntyre <mc...@gmail.com>.
On May 4, 2005, at 2:44 PM, Ole Solberg wrote:
> We also build and test on a few platforms daily and could provide
> those results.
>
> My level of ambition would be to just send out the results without any
> deep analysis. (Just catching and filtering obvious local
> setup/environment blunders etc.)
>
> I think communicating daily regression test results could be a good
> way to present the state of Derby.
It's great to hear that other derby-dev'rs are building Derby nightly
and running the tests! I think it would be a very good thing to be
sharing test results, but I'm a bit concerned about sending them to
derby-dev itself. I personally feel that nightly automated mail to the
list would simply decrease the signal-to-noise ratio on the list and be
a bit of a nuisance (and ultimately not likely to be read). But there
are alternatives to sending nightly test results to the list, like
posting them in a specific location on the Derby website, as I'm
currently doing with the doc/javadoc build. Or we could have a page on
the Derby website with links to locations where derby-dev'rs are
publishing their test results.
In an ideal situation, it would be great to have a tinderbox approach,
as David suggested, with constantly active build/test cycles running.
But there's always the complicated question of who's going to provide
the hardware and put the box out on the net for all to see when going
that route.
andrew
Re: Regression testing
Posted by David Van Couvering <Da...@Sun.COM>.
It seems to me we could leverage each others' resources and allocate
platforms across the two groups. I would *love* to get a nightly report
of test results and test failures. Why? Because then I feel that each
developer can pull back on the amount of testing we have to do before
checking in/submitting a patch. We could identify a set of MATS and a
policy around running them prior to checkin, and shorten the checkin
lifecycle considerably.
It would be even more effective if we had a "tinderbox" approach where
the tinderbox machine was pulling changes, building and testing
continuously. The sooner a failure is caught, the easier it is to track
it down; obviously having each developer run full regressions is the
best way to catch this, but when it takes hours and hours to run tests
before checkin, we start wasting precious developer resources and it
makes it harder to turn around fixes and patches.
David
Ole Solberg wrote:
> Hi,
>
> We also build and test on a few platforms daily and could provide those
> results.
>
> My level of ambition would be to just send out the results without any
> deep analysis. (Just catching and filtering obvious local
> setup/environment blunders etc.)
>
> I think communicating daily regression test results could be a good way
> to present the state of Derby.
>
>
>
> Ole
>
>
> Wed, 04 May 2005 Myrna van Lunteren wrote:
>
>> Hi...
>>
>> At IBM we build the jars & run the tests nightly on a small set of
>> platforms...we could work on sending an automated list of the failures
>> to the community...
>>
>> I'd propose sharing the list of failures for insane jar runs, Sun's
>> jdk142 for windows & (suse) linux (barring unexpected machine
>> outages)... I won't promise an actual analysis, but proactive
>> individuals could volunteer...
>>
>> Is there interest in this?
>>
>> Myrna
>>
>> On 5/4/05, Ole Solberg <Ol...@sun.com> wrote:
>>
>>> Hi,
>>>
>>> Are regression test results for "head of trunk" of Derby available
>>> somewhere? (I.e. tests run at some specific svn revision of Derby.)
>>>
>>> I am asking because I would like to compare my own test results with the
>>> current "official" state of Derby: e.g.
>>> - if errors I see are due to problems with my own setup/environment, and
>>> - what errors are *expected* to be seen on the revisions where
>>> regression tests were run.
>>>
>>> Regards
>>> Ole Solberg
>>>
Re: Regression testing
Posted by Ole Solberg <Ol...@Sun.COM>.
Hi,
We also build and test on a few platforms daily and could provide those
results.
My level of ambition would be to just send out the results without any
deep analysis. (Just catching and filtering obvious local
setup/environment blunders etc.)
I think communicating daily regression test results could be a good way
to present the state of Derby.
Ole
Wed, 04 May 2005 Myrna van Lunteren wrote:
> Hi...
>
> At IBM we build the jars & run the tests nightly on a small set of
> platforms...we could work on sending an automated list of the failures
> to the community...
>
> I'd propose sharing the list of failures for insane jar runs, Sun's
> jdk142 for windows & (suse) linux (barring unexpected machine
> outages)... I won't promise an actual analysis, but proactive
> individuals could volunteer...
>
> Is there interest in this?
>
> Myrna
>
> On 5/4/05, Ole Solberg <Ol...@sun.com> wrote:
>
>>Hi,
>>
>>Are regression test results for "head of trunk" of Derby available
>>somewhere? (I.e. tests run at some specific svn revision of Derby.)
>>
>>I am asking because I would like to compare my own test results with the
>>current "official" state of Derby: e.g.
>>- if errors I see are due to problems with my own setup/environment, and
>>- what errors are *expected* to be seen on the revisions where
>>regression tests were run.
>>
>>Regards
>>Ole Solberg
>>
Re: Regression testing
Posted by Myrna van Lunteren <m....@gmail.com>.
Hi...
At IBM we build the jars & run the tests nightly on a small set of
platforms...we could work on sending an automated list of the failures
to the community...
I'd propose sharing the list of failures for insane jar runs, Sun's
jdk142 for windows & (suse) linux (barring unexpected machine
outages)... I won't promise an actual analysis, but proactive
individuals could volunteer...
Is there interest in this?
Myrna
On 5/4/05, Ole Solberg <Ol...@sun.com> wrote:
> Hi,
>
> Are regression test results for "head of trunk" of Derby available
> somewhere? (I.e. tests run at some specific svn revision of Derby.)
>
> I am asking because I would like to compare my own test results with the
> current "official" state of Derby: e.g.
> - if errors I see are due to problems with my own setup/environment, and
> - what errors are *expected* to be seen on the revisions where
> regression tests were run.
>
> Regards
> Ole Solberg
>