Posted to dev@harmony.apache.org by Stepan Mishura <st...@gmail.com> on 2007/03/30 10:31:39 UTC

[general] Discussion: how to keep up stability and fast progress all together?

Hi,

We have made big progress in improving the project's code base. For
example, the total number of excluded tests in the class library was
reduced by about 60 entries in two months. Also, taking into account that
the Windows x86_64 build was enabled and the test suite is growing, I
think this is a good progress indicator for the class library. The same
is true for DRLVM: new testing modes have been added, and the total
number of excluded tests in most modes is decreasing. Let's keep up the
good progress!

But I'd like to draw attention to stability issues. I've been monitoring
the CruiseControl (CC) status for a couple of months, and my impression
is that the stability situation has become worse: the number of failure
reports is growing. I'm afraid that if we set stability aside, it may
affect the project's overall progress.

I'd like to encourage everybody to pay attention to the stability side,
and I'd like to hear ideas on how to improve the situation.

Thanks,
Stepan Mishura
Intel Enterprise Solutions Software Division

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Stepan Mishura <st...@gmail.com>.
On 3/30/07, Alexey Petrenko wrote:
> As far as I understand, a noticeable number of the failures are false
> failures. For example, yesterday we had CC failures on all platforms
> because of a "build clean" target failure...
>
> Is my feeling correct?
>

I didn't take such failures into account. I'm talking about instabilities
and regressions caused by new updates.

Thanks,
Stepan.

> SY, Alexey
>
> 2007/3/30, Stepan Mishura <st...@gmail.com>:
> > <SNIP>
>


-- 
Stepan Mishura
Intel Enterprise Solutions Software Division

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Elena Semukhina <el...@gmail.com>.
On 3/30/07, Alexey Petrenko <al...@gmail.com> wrote:
>
> As far as I understand, a noticeable number of the failures are false
> failures. For example, yesterday we had CC failures on all platforms
> because of a "build clean" target failure...
>
> Is my feeling correct?


This week we saw a number of CC failures because of timeouts. Some DRLVM
smoke tests started hanging last weekend. Two issues have already been
reported and announced in another thread.

As for improving stability, I think we need to implement iterative runs
of the DRLVM tests, similar to the iterative runs of the classlib tests,
and put them under CC.

When I ran the tests iteratively on my machine with a primitive shell
script, I managed to reveal a number of intermittent failures and
reported them to JIRA. I think we need to have such testing on a
permanent basis.
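
A minimal sh sketch of such an iterative run (not Elena's actual script;
TEST_CMD and N are illustrative placeholders, not the real DRLVM test
invocation):

```shell
# Run a test command N times and count failures, to surface intermittent
# failures. TEST_CMD and N are illustrative placeholders.
N=${N:-10}
TEST_CMD=${TEST_CMD:-true}
fails=0
i=1
while [ "$i" -le "$N" ]; do
    # Capture each run's output so a failing run leaves a log behind.
    if ! $TEST_CMD >"run_$i.log" 2>&1; then
        fails=$((fails + 1))
        echo "run $i failed (see run_$i.log)"
    fi
    i=$((i + 1))
done
echo "$fails of $N runs failed"
```

An intermittent failure then shows up as a nonzero failure count across
otherwise identical runs, with a per-run log to attach to the JIRA issue.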

Thanks,
Elena


> SY, Alexey
>
> 2007/3/30, Stepan Mishura <st...@gmail.com>:
> > <SNIP>
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexey Petrenko <al...@gmail.com>.
As far as I understand, a noticeable number of the failures are false
failures. For example, yesterday we had CC failures on all platforms
because of a "build clean" target failure...

Is my feeling correct?

SY, Alexey

2007/3/30, Stepan Mishura <st...@gmail.com>:
> <SNIP>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexey Petrenko <al...@gmail.com>.
Stepan,

what are your ideas on stability improvements? :)

+1 from me for milestones.

SY, Alexey

2007/3/30, Stepan Mishura <st...@gmail.com>:
> <SNIP>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexey Petrenko <al...@gmail.com>.
Mikhail,

thanks for your comments.

SY, Alexey

2007/4/15, Mikhail Fursov <mi...@gmail.com>:
> After more than 10 runs of jEdit, I found that it hangs a couple of
> times during startup. It may be a threading issue, and jEdit here is not
> a good reproducer. We have several easy-to-reproduce JIRA issues for the
> threading subsystem, and I hope fixing them will improve the jEdit stats
> too.
>
> On 4/15/07, Mikhail Fursov <mi...@gmail.com> wrote:
> >
> > I also tried jEdit today (version 4.3 pre9) - it started fine in the
> > default JIT mode. I opened and edited several documents and found no
> > errors. I'll try more with different verification levels enabled.
> >
> > On 4/13/07, Alexei Zakharov <al...@gmail.com> wrote:
> > >
> > > > jEdit?
> > > > But I'm not sure that it works ok on current class library :)
> > >
> > > I've tried to run jEdit on Harmony recently. I was able to start it
> > > on the IBM VME only - it fails to start on the JITed version of
> > > DRLVM, and startup takes too long in DRLVM's interpreter mode
> > > (however, it starts). But even on the IBM VME it cannot work for
> > > longer than about 10 minutes.
> > >
> > > Regards,
> > >
> > > 2007/4/4, Alexey Petrenko < alexey.a.petrenko@gmail.com>:
> > > > 2007/4/4, Stepan Mishura <st...@gmail.com>:
> > > > > On 4/4/07, Alexey Petrenko wrote:
> > > > > <SNIP>
> > > > > > > > I'd like to propose the following approach that may help
> > > > > > > > us learn about instabilities: develop a scenario for
> > > > > > > > testing stability (or take an existing one, for example,
> > > > > > > > Eclipse hello world) and configure CC to run it at all
> > > > > > > > times. The stability scenario must be the only scenario
> > > > > > > > for CC; it must be short (no longer than an hour), test
> > > > > > > > the JRE under stress conditions, and cover most of the
> > > > > > > > functionality. If the scenario fails, then all newly
> > > > > > > > committed updates are subject to investigation and fix
> > > > > > > > (or rollback).
> > > > > > > Actually, I prefer something without GUI
> > > > > > I do not think that removing GUI testing from CC and other
> > > > > > stability testing is a good way to go, because the awt and
> > > > > > swing modules are really big and complicated pieces of code.
> > > > > >
> > > > >
> > > > > Sorry for the confusion - I agree that we should continue running
> > > > > AWT/Swing tests under CC. But we are talking about a scenario
> > > > > that can be used for testing stability in terms of race
> > > > > conditions. The first scenario that sprang to mind was the
> > > > > Eclipse hello world scenario: it is quite short, verifies core
> > > > > functionality, and so on.
> > > > > But Vladimir claimed that there might be some issues related to GUI
> > > > > testing and we may have a number of 'false alarms'.
> > > > In fact, Eclipse does not use awt or swing at all, so it cannot
> > > > be used as a test for these modules.
> > > >
> > > >
> > > > > BTW, do you have any scenario in mind that can be used as a
> > > > > stability criterion (of course, in terms of race conditions)?
> > > > jEdit?
> > > > But I'm not sure that it works ok on current class library :)
> > > >
> > > > SY, Alexey
> > > >
> > > > > > > or at least without using special "GUI testing" tools. It
> > > > > > > should improve the quality of this testing (the fewer
> > > > > > > tools, the more predictable the results :)). The current
> > > > > > > "Eclipse hello world" scenario is based on AutoIt for
> > > > > > > Windows and X11GuiTest for Linux. We also have this
> > > > > > > scenario based on API calls, which should emulate the GUI
> > > > > > > scenario. Of these two approaches I prefer the second, to
> > > > > > > minimize 'false alarms'. Or maybe some other (non-GUI)
> > > > > > > scenarios?
> > > > > > >
> > > > > > >  Thanks, Vladimir
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > Thought? Objections?
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > > Stepan.
> > > > > > > >
> > > > > > > > > I read the discussion on naming, and M1, M2, ... is
> > > > > > > > > fine by me. How about we pick a proposed date for
> > > > > > > > > Apache Harmony M1?
> > >
> > >
> > > --
> > > Alexei Zakharov,
> > > Intel ESSD
> > >
> >
> >
> >
> > --
> > Mikhail Fursov
>
>
>
>
> --
> Mikhail Fursov
>
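
For illustration of the "short, time-limited scenario" idea in the quoted
proposal, a watchdog wrapper in plain sh might look like this sketch
(SCENARIO_CMD and the variable names are hypothetical, not part of the
actual CC setup; the proposal allows up to an hour, LIMIT here defaults
lower just for illustration):

```shell
# Run a single stability scenario under a time limit and report a nonzero
# result on failure or timeout. SCENARIO_CMD is a hypothetical stand-in
# for the real scenario (e.g. an Eclipse hello-world run).
SCENARIO_CMD=${SCENARIO_CMD:-true}
LIMIT=${LIMIT:-5}

$SCENARIO_CMD &               # start the scenario in the background
scenario=$!

# Watchdog: kill the scenario if it exceeds the limit.
( sleep "$LIMIT" && kill "$scenario" 2>/dev/null ) &
watchdog=$!

wait "$scenario"
result=$?                     # 143 (SIGTERM) usually indicates a timeout
kill "$watchdog" 2>/dev/null
echo "stability scenario result: $result"
```

CC could then treat a nonzero result as a broken build and flag the most
recently committed updates for investigation.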

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Mikhail Fursov <mi...@gmail.com>.
After more than 10 runs of jEdit, I found that it hangs a couple of times
during startup. It may be a threading issue, and jEdit here is not a good
reproducer. We have several easy-to-reproduce JIRA issues for the
threading subsystem, and I hope fixing them will improve the jEdit stats
too.

On 4/15/07, Mikhail Fursov <mi...@gmail.com> wrote:
> <SNIP>




-- 
Mikhail Fursov

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Mikhail Fursov <mi...@gmail.com>.
I used Windows 32-bit and my Centrino-based laptop.

On 4/19/07, Alexei Zakharov <al...@gmail.com> wrote:
>
> Mikhail,
>
> On which system have you tried to start it?
> I can't say I was so lucky. Today I've tried to start jEdit (I have
> the most recent stable version 4.2) several times on my Debian Linux
> (32bit) system.
>
> The first time, it hung after pressing a couple of buttons, with:
> free(): invalid pointer 0x<some hex number>
> free(): invalid pointer 0x<another hex number>
>
> When I started it again it crashed with:
> SIGSEGV in VM code.
> Stack trace:
>
> I don't want to say that jEdit is the best possible reproducer; I'd just
> like to remind everyone that we still have problems with it.
>
> With Best Regards,
>
> 2007/4/15, Mikhail Fursov <mi...@gmail.com>:
> > <SNIP>
>



-- 
Mikhail Fursov

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexei Zakharov <al...@gmail.com>.
Mikhail,

On which system have you tried to start it?
I can't say I was so lucky. Today I've tried to start jEdit (I have
the most recent stable version 4.2) several times on my Debian Linux
(32bit) system.

The first time, it hung after pressing a couple of buttons, with:
free(): invalid pointer 0x<some hex number>
free(): invalid pointer 0x<another hex number>

When I started it again it crashed with:
SIGSEGV in VM code.
Stack trace:

I don't want to say that jEdit is the best possible reproducer; I'd just
like to remind everyone that we still have problems with it.

With Best Regards,

2007/4/15, Mikhail Fursov <mi...@gmail.com>:
> <SNIP>

-- 
Alexei Zakharov,
Intel ESSD

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Mikhail Fursov <mi...@gmail.com>.
I also tried jEdit today (version 4.3 pre9) - it started fine in the
default JIT mode. I opened and edited several documents and found no
errors. I'll try more with different verification levels enabled.

On 4/13/07, Alexei Zakharov <al...@gmail.com> wrote:
> <SNIP>



-- 
Mikhail Fursov

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexey Petrenko <al...@gmail.com>.
So it looks like we have a few issues to fix :)

SY, Alexey

2007/4/13, Alexei Zakharov <al...@gmail.com>:
> > <SNIP>
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexei Zakharov <al...@gmail.com>.
> jEdit?
> But I'm not sure that it works ok on current class library :)

I've tried to run jEdit on Harmony recently. I was able to start it on
the IBM VME only - it fails to start on the JITed version of DRLVM, and
startup takes too long in DRLVM's interpreter mode (however, it does
start). But even on the IBM VME it is not able to work for longer than
about 10 minutes.

Regards,

2007/4/4, Alexey Petrenko <al...@gmail.com>:
> 2007/4/4, Stepan Mishura <st...@gmail.com>:
> > On 4/4/07, Alexey Petrenko wrote:
> > <SNIP>
> > > > > I'd like to propose the next approach that may help us to know about
> > > > > instabilities: develop (or take existing one, for example, Eclipse
> > > > > hello world) a scenario for testing stability and configure CC to run
> > > > > it at all times. The stability scenario must be the only one scenario
> > > > > for CC; it must be short (no longer then an hour), test JRE in stress
> > > > > conditions and cover most of functionality. If the scenario fails then
> > > > > all newly committed updates are subject for investigation and fix (or
> > > > > rollback).
> > > > Actually, I prefer something without GUI
> > > I do not think that remove GUI testing from CC and other stability
> > > testing is a good way to go. Because awt and swing modules are really
> > > big and complicated pieces of code.
> > >
> >
> > Sorry for the confusion - I agree that we should continue running
> > AWT/Swing tests under CC. But we are talking about scenario that can
> > be used for testing stability in terms of race conditions. The first
> > scenario that spread in my mind was Eclipse hello world testing
> > scenario: it is quite short, verifies core functionality and so on.
> > But Vladimir claimed that there might be some issues related to GUI
> > testing and we may have a number of 'false alarms'.
> In fact Eclipse does not use awt and swing at all so it can not be
> used as a test for these modules.
>
>
> > BTW, do you have any scenario in mind that can be used a stability
> > criteria (of cause in terms of race conditions)?
> jEdit?
> But I'm not sure that it works ok on current class library :)
>
> SY, Alexey
>
> > > > or at least without using
> > > > special 'GUI testing" tools. It should improve quality of this testing
> > > > (than less tools than more predictable results :)) Current "Eclipse
> > > > hello world" scenario based on the AutoIT for Win and X11GuiTest for
> > > > Linux platform. Also we have this scenario based on API calls which
> > > > should emulate GUI scenario. From these 2 approaches I prefer second
> > > > to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
> > > >
> > > >  Thanks, Vladimir
> > > >
> > > >
> > > > >
> > > > > Thought? Objections?
> > > > >
> > > > > Thanks,
> > > > > Stepan.
> > > > >
> > > > > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > > > > about we pick a proposed date for Apache Harmony M1?


-- 
Alexei Zakharov,
Intel ESSD

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexey Petrenko <al...@gmail.com>.
2007/4/4, Stepan Mishura <st...@gmail.com>:
> On 4/4/07, Alexey Petrenko wrote:
> <SNIP>
> > > > I'd like to propose the next approach that may help us to know about
> > > > instabilities: develop (or take existing one, for example, Eclipse
> > > > hello world) a scenario for testing stability and configure CC to run
> > > > it at all times. The stability scenario must be the only one scenario
> > > > for CC; it must be short (no longer then an hour), test JRE in stress
> > > > conditions and cover most of functionality. If the scenario fails then
> > > > all newly committed updates are subject for investigation and fix (or
> > > > rollback).
> > > Actually, I prefer something without GUI
> > I do not think that remove GUI testing from CC and other stability
> > testing is a good way to go. Because awt and swing modules are really
> > big and complicated pieces of code.
> >
>
> Sorry for the confusion - I agree that we should continue running
> AWT/Swing tests under CC. But we are talking about scenario that can
> be used for testing stability in terms of race conditions. The first
> scenario that spread in my mind was Eclipse hello world testing
> scenario: it is quite short, verifies core functionality and so on.
> But Vladimir claimed that there might be some issues related to GUI
> testing and we may have a number of 'false alarms'.
In fact, Eclipse does not use AWT and Swing at all, so it cannot be
used as a test for these modules.


> BTW, do you have any scenario in mind that can be used a stability
> criteria (of cause in terms of race conditions)?
jEdit?
But I'm not sure that it works OK on the current class library :)

SY, Alexey

> > > or at least without using
> > > special 'GUI testing" tools. It should improve quality of this testing
> > > (than less tools than more predictable results :)) Current "Eclipse
> > > hello world" scenario based on the AutoIT for Win and X11GuiTest for
> > > Linux platform. Also we have this scenario based on API calls which
> > > should emulate GUI scenario. From these 2 approaches I prefer second
> > > to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
> > >
> > >  Thanks, Vladimir
> > >
> > >
> > > >
> > > > Thought? Objections?
> > > >
> > > > Thanks,
> > > > Stepan.
> > > >
> > > > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > > > about we pick a proposed date for Apache Harmony M1?
> > > > >
> > > > > Regards,
> > > > > Tim
> > > > >
>
> --
> Stepan Mishura
> Intel Enterprise Solutions Software Division
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Stepan Mishura <st...@gmail.com>.
On 4/4/07, Alexey Petrenko wrote:
<SNIP>
> > > I'd like to propose the next approach that may help us to know about
> > > instabilities: develop (or take existing one, for example, Eclipse
> > > hello world) a scenario for testing stability and configure CC to run
> > > it at all times. The stability scenario must be the only one scenario
> > > for CC; it must be short (no longer then an hour), test JRE in stress
> > > conditions and cover most of functionality. If the scenario fails then
> > > all newly committed updates are subject for investigation and fix (or
> > > rollback).
> > Actually, I prefer something without GUI
> I do not think that remove GUI testing from CC and other stability
> testing is a good way to go. Because awt and swing modules are really
> big and complicated pieces of code.
>

Sorry for the confusion - I agree that we should continue running
AWT/Swing tests under CC. But we are talking about a scenario that can
be used for testing stability in terms of race conditions. The first
scenario that sprang to mind was the Eclipse hello world testing
scenario: it is quite short, verifies core functionality, and so on.
But Vladimir claimed that there might be some issues related to GUI
testing and that we may get a number of 'false alarms'.

BTW, do you have any scenario in mind that can be used as a stability
criterion (of course, in terms of race conditions)?

Thanks,
Stepan.

> SY, Alexey
>
> > or at least without using
> > special 'GUI testing" tools. It should improve quality of this testing
> > (than less tools than more predictable results :)) Current "Eclipse
> > hello world" scenario based on the AutoIT for Win and X11GuiTest for
> > Linux platform. Also we have this scenario based on API calls which
> > should emulate GUI scenario. From these 2 approaches I prefer second
> > to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
> >
> >  Thanks, Vladimir
> >
> >
> > >
> > > Thought? Objections?
> > >
> > > Thanks,
> > > Stepan.
> > >
> > > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > > about we pick a proposed date for Apache Harmony M1?
> > > >
> > > > Regards,
> > > > Tim
> > > >

-- 
Stepan Mishura
Intel Enterprise Solutions Software Division

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexey Petrenko <al...@gmail.com>.
2007/4/3, Vladimir Ivanov <iv...@gmail.com>:
> On 4/2/07, Stepan Mishura <st...@gmail.com> wrote:
> > On 3/30/07, Tim Ellison wrote:
> > > Stepan Mishura wrote:
> > > > We made a big progress in improving the project's code base. Just for
> > > > example, a total number of excluded tests in the class library were
> > > > reduced by ~ 60 entries in two months. Also taking into account that
> > > > Windows x86_64 build was enabled and the test suite is growing I think
> > > > this is a good progress indicator for the Class library. The same for
> > > > DRL VM – new testing modes are added and a total number of excluded
> > > > tests for most of modes are reducing. Let's keep up good progress!
> > > >
> > > > But I'd like to attract attention to stability issues. I've been
> > > > monitoring CC status for a couple of months and I my impression that
> > > > situation with stability become worse – a number of reports about
> > > > failures is growing. I'm afraid that if we set keeping good stability
> > > > aside then it may affect the overall project's progress.
> > > >
> > > > I'd like to encourage everybody to pay attention to stability side and
> > > > to hear ideas how to improve the situation?
> > >
> > > Caveat:  I'm still 200 emails behind on the dev list, a good sign of the
> > > project's liveliness, but my apologies in advance for any repetition...
> > >
> > > IMO we won't achieve rock solid stability without focusing on it as an
> > > explicit goal; and delivering a Milestone release is the best way to get
> > > that focus.
> > >
> >
> > Yes, I agree that without focusing on stability it is hard to achive it.
> >
> > > Not exactly a novel or radical idea, but Milestones have a number of
> > > benefits not least of which is that they demonstrate we, as a diverse
> > > group, can converge on a delivery of the code we are working on.  Some
> > > projects will rumble on forever without committing to a stable, tested,
> > > and likely imperfect, packaging of something.
> > >
> > > If the Milestones are time-boxed they also form a natural boundary for
> > > feature planning, and afford some predictability to the project that is
> > > also important.
> > >
> > > In my experience, something like 6 to 8 weeks between Milestones is a
> > > good period of time.  Four weeks is too short to get big ticket items in
> > > and stable, and 12 weeks (a quarter year) is too long such that
> > > instability can set-in.
> > >
> > > In that 6 to 8 week period there should be a time at the end where we
> > > hold back from introducing cool new function, and emphasize testing and
> > > fixing.  Maybe that is the last seven days leading up to the Milestone,
> > > and of course, if instability exists we slip the date until we can
> > > declare a stable point.
> > >
> >
> > Sure this approach makes sense and I think we should accept and follow
> > it. I see only one issue here - it lets instabilities get accumulated
> > and present unnoticed (ignored?) in the code for some period of time.
> > This may result that minor update can have unintended consequences.
> >
> > Currently if we identify a regression we try to find a guilty commit
> > and to fix it or do rollback. I think it is the right way – we keep
> > code base in a good shape and don't let a number of known problems to
> > grow. This approach showed its efficiency and the only thing I can do
> > here is only encourage all contributors to run all available tests
> > after doing any non-trivial change. But it seems for intermittent
> > failures the approach with running all testing scenarios doesn't work
> > well – usually they are not immediately detected. And it's hard to
> > find guilty update after a long time so we tend to put such tests to
> > exclude list.
> >
>
> > I'd like to propose the next approach that may help us to know about
> > instabilities: develop (or take existing one, for example, Eclipse
> > hello world) a scenario for testing stability and configure CC to run
> > it at all times. The stability scenario must be the only one scenario
> > for CC; it must be short (no longer then an hour), test JRE in stress
> > conditions and cover most of functionality. If the scenario fails then
> > all newly committed updates are subject for investigation and fix (or
> > rollback).
> Actually, I prefer something without GUI
I do not think that removing GUI testing from CC and other stability
testing is a good way to go, because the AWT and Swing modules are
really big and complicated pieces of code.

SY, Alexey

> or at least without using
> special 'GUI testing" tools. It should improve quality of this testing
> (than less tools than more predictable results :)) Current "Eclipse
> hello world" scenario based on the AutoIT for Win and X11GuiTest for
> Linux platform. Also we have this scenario based on API calls which
> should emulate GUI scenario. From these 2 approaches I prefer second
> to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
>
>  Thanks, Vladimir
>
>
> >
> > Thought? Objections?
> >
> > Thanks,
> > Stepan.
> >
> > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > about we pick a proposed date for Apache Harmony M1?
> > >
> > > Regards,
> > > Tim
> > >
> >
> >
> > --
> > Stepan Mishura
> > Intel Enterprise Solutions Software Division
> >
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexey Petrenko <al...@gmail.com>.
2007/4/5, Alexey Varlamov <al...@gmail.com>:
> 2007/4/5, Mikhail Loenko <ml...@gmail.com>:
> > 2007/4/4, Vladimir Ivanov <iv...@gmail.com>:
> > > On 4/3/07, Stepan Mishura <st...@gmail.com> wrote:
> > > > On 4/3/07, Vladimir Ivanov wrote:
> > > > <SNIP>
> > > > >
> > > > > > I'd like to propose the next approach that may help us to know about
> > > > > > instabilities: develop (or take existing one, for example, Eclipse
> > > > > > hello world) a scenario for testing stability and configure CC to run
> > > > > > it at all times. The stability scenario must be the only one scenario
> > > > > > for CC; it must be short (no longer then an hour), test JRE in stress
> > > > > > conditions and cover most of functionality. If the scenario fails then
> > > > > > all newly committed updates are subject for investigation and fix (or
> > > > > > rollback).
> > > > >
> > > > > Actually, I prefer something without GUI or at least without using
> > > > > special 'GUI testing" tools. It should improve quality of this testing
> > > > > (than less tools than more predictable results :)) Current "Eclipse
> > > > > hello world" scenario based on the AutoIT for Win and X11GuiTest for
> > > > > Linux platform. Also we have this scenario based on API calls which
> > > > > should emulate GUI scenario. From these 2 approaches I prefer second
> > > > > to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
> > > > >
> > > >
> > >
> > > > Did I understand you correctly that there may be 'false alarms' caused
> > > > by using external 'GUI testing' tools? If yes which kind of 'false
> > > > alarms' are there?
> > >
> > > Seems, in some cases the input from the X11GuiTest can be lost in the system :(
> > > Also timeouts between different symbols may depends on the system load etc
> >
> > How often does EHWA fail? Given that this scenario exercises VM very well
> > and the scenario itself is pretty fast we may run it twice and report
> > a problem if
> > it fails both attempts.
>
> As Vladimir already mentioned, drlvm build provides automated EHWA
> test driven via Eclipse APIs, it is reliable and very fast - why not
> use it instead of GUI robots, indeed?
But it would be nice to have some non-Eclipse GUI testing, since
Eclipse does not use AWT and Swing.

SY, Alexey

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Alexey Varlamov <al...@gmail.com>.
2007/4/5, Mikhail Loenko <ml...@gmail.com>:
> 2007/4/4, Vladimir Ivanov <iv...@gmail.com>:
> > On 4/3/07, Stepan Mishura <st...@gmail.com> wrote:
> > > On 4/3/07, Vladimir Ivanov wrote:
> > > <SNIP>
> > > >
> > > > > I'd like to propose the next approach that may help us to know about
> > > > > instabilities: develop (or take existing one, for example, Eclipse
> > > > > hello world) a scenario for testing stability and configure CC to run
> > > > > it at all times. The stability scenario must be the only one scenario
> > > > > for CC; it must be short (no longer then an hour), test JRE in stress
> > > > > conditions and cover most of functionality. If the scenario fails then
> > > > > all newly committed updates are subject for investigation and fix (or
> > > > > rollback).
> > > >
> > > > Actually, I prefer something without GUI or at least without using
> > > > special 'GUI testing" tools. It should improve quality of this testing
> > > > (than less tools than more predictable results :)) Current "Eclipse
> > > > hello world" scenario based on the AutoIT for Win and X11GuiTest for
> > > > Linux platform. Also we have this scenario based on API calls which
> > > > should emulate GUI scenario. From these 2 approaches I prefer second
> > > > to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
> > > >
> > >
> >
> > > Did I understand you correctly that there may be 'false alarms' caused
> > > by using external 'GUI testing' tools? If yes which kind of 'false
> > > alarms' are there?
> >
> > Seems, in some cases the input from the X11GuiTest can be lost in the system :(
> > Also timeouts between different symbols may depends on the system load etc
>
> How often does EHWA fail? Given that this scenario exercises VM very well
> and the scenario itself is pretty fast we may run it twice and report
> a problem if
> it fails both attempts.

As Vladimir already mentioned, the DRLVM build provides an automated
EHWA test driven via Eclipse APIs; it is reliable and very fast - why
not use it instead of GUI robots, indeed?

>
> Thanks,
> Mikhail
>
> >
> >  Thanks, Vladimir
> >
> > >
> > > Thanks,
> > > Stepan.
> > >
> > > >  Thanks, Vladimir
> > > >
> > > >
> > > > >
> > > > > Thought? Objections?
> > > > >
> > > > > Thanks,
> > > > > Stepan.
> > > > >
> > > > > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > > > > about we pick a proposed date for Apache Harmony M1?
> > > > > >
> > > > > > Regards,
> > > > > > Tim
> > > > > >
> > > > >
> > >
> > > --
> > > Stepan Mishura
> > > Intel Enterprise Solutions Software Division
> > >
> >
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Vladimir Ivanov <iv...@gmail.com>.
On 4/5/07, Mikhail Loenko <ml...@gmail.com> wrote:
> 2007/4/4, Vladimir Ivanov <iv...@gmail.com>:
> > On 4/3/07, Stepan Mishura <st...@gmail.com> wrote:
> > > On 4/3/07, Vladimir Ivanov wrote:
> > > <SNIP>
> > > >
> > > > > I'd like to propose the next approach that may help us to know about
> > > > > instabilities: develop (or take existing one, for example, Eclipse
> > > > > hello world) a scenario for testing stability and configure CC to run
> > > > > it at all times. The stability scenario must be the only one scenario
> > > > > for CC; it must be short (no longer then an hour), test JRE in stress
> > > > > conditions and cover most of functionality. If the scenario fails then
> > > > > all newly committed updates are subject for investigation and fix (or
> > > > > rollback).
> > > >
> > > > Actually, I prefer something without GUI or at least without using
> > > > special 'GUI testing" tools. It should improve quality of this testing
> > > > (than less tools than more predictable results :)) Current "Eclipse
> > > > hello world" scenario based on the AutoIT for Win and X11GuiTest for
> > > > Linux platform. Also we have this scenario based on API calls which
> > > > should emulate GUI scenario. From these 2 approaches I prefer second
> > > > to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
> > > >
> > >
> >
> > > Did I understand you correctly that there may be 'false alarms' caused
> > > by using external 'GUI testing' tools? If yes which kind of 'false
> > > alarms' are there?
> >
> > Seems, in some cases the input from the X11GuiTest can be lost in the system :(
> > Also timeouts between different symbols may depends on the system load etc
>

> How often does EHWA fail? Given that this scenario exercises VM very well
> and the scenario itself is pretty fast we may run it twice and report
> a problem if
> it fails both attempts.

It depends on the platform: almost never on Windows (at least with
AutoIt), very rarely on Linux x86, and often on Linux x86_64 (about 1
time in ~5 runs). Since the failure is usually a hang, I manually skip
sending the notification. So for this testing I also prefer a non-GUI
scenario: it can be unified across all platforms.
 thanks, Vladimir
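
Since the typical failure mode described here is a hang, a CC wrapper
could guard the scenario with a watchdog so the build reports an
ordinary failure instead of blocking indefinitely. A minimal
portable-shell sketch (the function name and timeout are illustrative
assumptions, not part of Harmony's actual CC setup):

```shell
# Hypothetical sketch: run a test scenario under a watchdog so a hang
# becomes an ordinary failure instead of blocking the CC build.
# Usage: run_with_timeout <seconds> <command...>
run_with_timeout() {
    secs=$1; shift
    "$@" &                                        # start the scenario
    pid=$!
    ( sleep "$secs"; kill "$pid" 2>/dev/null ) &  # watchdog
    watchdog=$!
    wait "$pid"            # non-zero status if killed by the watchdog
    status=$?
    kill "$watchdog" 2>/dev/null
    return "$status"
}

# A fast command finishes before the watchdog fires:
run_with_timeout 5 sleep 1 && echo "scenario finished in time"
```

A hung scenario is killed after the timeout, so CC sees a non-zero exit
status and can send its usual failure notification.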

>
> Thanks,
> Mikhail
>
> >
> >  Thanks, Vladimir
> >
> > >
> > > Thanks,
> > > Stepan.
> > >
> > > >  Thanks, Vladimir
> > > >
> > > >
> > > > >
> > > > > Thought? Objections?
> > > > >
> > > > > Thanks,
> > > > > Stepan.
> > > > >
> > > > > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > > > > about we pick a proposed date for Apache Harmony M1?
> > > > > >
> > > > > > Regards,
> > > > > > Tim
> > > > > >
> > > > >
> > >
> > > --
> > > Stepan Mishura
> > > Intel Enterprise Solutions Software Division
> > >
> >
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Mikhail Loenko <ml...@gmail.com>.
2007/4/4, Vladimir Ivanov <iv...@gmail.com>:
> On 4/3/07, Stepan Mishura <st...@gmail.com> wrote:
> > On 4/3/07, Vladimir Ivanov wrote:
> > <SNIP>
> > >
> > > > I'd like to propose the next approach that may help us to know about
> > > > instabilities: develop (or take existing one, for example, Eclipse
> > > > hello world) a scenario for testing stability and configure CC to run
> > > > it at all times. The stability scenario must be the only one scenario
> > > > for CC; it must be short (no longer then an hour), test JRE in stress
> > > > conditions and cover most of functionality. If the scenario fails then
> > > > all newly committed updates are subject for investigation and fix (or
> > > > rollback).
> > >
> > > Actually, I prefer something without GUI or at least without using
> > > special 'GUI testing" tools. It should improve quality of this testing
> > > (than less tools than more predictable results :)) Current "Eclipse
> > > hello world" scenario based on the AutoIT for Win and X11GuiTest for
> > > Linux platform. Also we have this scenario based on API calls which
> > > should emulate GUI scenario. From these 2 approaches I prefer second
> > > to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
> > >
> >
>
> > Did I understand you correctly that there may be 'false alarms' caused
> > by using external 'GUI testing' tools? If yes which kind of 'false
> > alarms' are there?
>
> Seems, in some cases the input from the X11GuiTest can be lost in the system :(
> Also timeouts between different symbols may depends on the system load etc

How often does EHWA fail? Given that this scenario exercises the VM
very well and the scenario itself is pretty fast, we may run it twice
and report a problem only if it fails both attempts.

Thanks,
Mikhail
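
The run-it-twice policy above is easy to prototype as a CC wrapper. A
sketch in portable shell (the function name is illustrative; Harmony's
CC configuration is not actually structured this way):

```shell
# Hypothetical sketch of the "run twice" policy: report a failure only
# when both attempts fail, filtering out intermittent GUI 'false alarms'.
retry_twice() {
    for attempt in 1 2; do
        if "$@"; then
            echo "PASS on attempt $attempt"
            return 0
        fi
        echo "attempt $attempt failed" >&2
    done
    echo "FAIL: both attempts failed - investigate recent commits" >&2
    return 1
}

retry_twice true    # always-passing scenario: prints "PASS on attempt 1"
```

The trade-off is that a genuine race condition may also pass on the
retry, so persistent intermittent failures still deserve investigation.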

>
>  Thanks, Vladimir
>
> >
> > Thanks,
> > Stepan.
> >
> > >  Thanks, Vladimir
> > >
> > >
> > > >
> > > > Thought? Objections?
> > > >
> > > > Thanks,
> > > > Stepan.
> > > >
> > > > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > > > about we pick a proposed date for Apache Harmony M1?
> > > > >
> > > > > Regards,
> > > > > Tim
> > > > >
> > > >
> >
> > --
> > Stepan Mishura
> > Intel Enterprise Solutions Software Division
> >
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Vladimir Ivanov <iv...@gmail.com>.
On 4/3/07, Stepan Mishura <st...@gmail.com> wrote:
> On 4/3/07, Vladimir Ivanov wrote:
> <SNIP>
> >
> > > I'd like to propose the next approach that may help us to know about
> > > instabilities: develop (or take existing one, for example, Eclipse
> > > hello world) a scenario for testing stability and configure CC to run
> > > it at all times. The stability scenario must be the only one scenario
> > > for CC; it must be short (no longer then an hour), test JRE in stress
> > > conditions and cover most of functionality. If the scenario fails then
> > > all newly committed updates are subject for investigation and fix (or
> > > rollback).
> >
> > Actually, I prefer something without GUI or at least without using
> > special 'GUI testing" tools. It should improve quality of this testing
> > (than less tools than more predictable results :)) Current "Eclipse
> > hello world" scenario based on the AutoIT for Win and X11GuiTest for
> > Linux platform. Also we have this scenario based on API calls which
> > should emulate GUI scenario. From these 2 approaches I prefer second
> > to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
> >
>

> Did I understand you correctly that there may be 'false alarms' caused
> by using external 'GUI testing' tools? If yes which kind of 'false
> alarms' are there?

It seems that in some cases input from X11GuiTest can be lost in the
system :( Also, the delays between individual keystrokes may depend on
the system load, etc.

 Thanks, Vladimir

>
> Thanks,
> Stepan.
>
> >  Thanks, Vladimir
> >
> >
> > >
> > > Thought? Objections?
> > >
> > > Thanks,
> > > Stepan.
> > >
> > > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > > about we pick a proposed date for Apache Harmony M1?
> > > >
> > > > Regards,
> > > > Tim
> > > >
> > >
>
> --
> Stepan Mishura
> Intel Enterprise Solutions Software Division
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Stepan Mishura <st...@gmail.com>.
On 4/3/07, Vladimir Ivanov wrote:
<SNIP>
>
> > I'd like to propose the next approach that may help us to know about
> > instabilities: develop (or take existing one, for example, Eclipse
> > hello world) a scenario for testing stability and configure CC to run
> > it at all times. The stability scenario must be the only one scenario
> > for CC; it must be short (no longer then an hour), test JRE in stress
> > conditions and cover most of functionality. If the scenario fails then
> > all newly committed updates are subject for investigation and fix (or
> > rollback).
>
> Actually, I prefer something without GUI or at least without using
> special 'GUI testing" tools. It should improve quality of this testing
> (than less tools than more predictable results :)) Current "Eclipse
> hello world" scenario based on the AutoIT for Win and X11GuiTest for
> Linux platform. Also we have this scenario based on API calls which
> should emulate GUI scenario. From these 2 approaches I prefer second
> to minimize 'false alarms'. Or may be some other scenarios (non-GUI)?
>

Did I understand you correctly that there may be 'false alarms' caused
by using external 'GUI testing' tools? If yes, what kinds of 'false
alarms' are there?

Thanks,
Stepan.

>  Thanks, Vladimir
>
>
> >
> > Thought? Objections?
> >
> > Thanks,
> > Stepan.
> >
> > > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > > about we pick a proposed date for Apache Harmony M1?
> > >
> > > Regards,
> > > Tim
> > >
> >

-- 
Stepan Mishura
Intel Enterprise Solutions Software Division

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Vladimir Ivanov <iv...@gmail.com>.
On 4/2/07, Stepan Mishura <st...@gmail.com> wrote:
> On 3/30/07, Tim Ellison wrote:
> > Stepan Mishura wrote:
> > > We made a big progress in improving the project's code base. Just for
> > > example, a total number of excluded tests in the class library were
> > > reduced by ~ 60 entries in two months. Also taking into account that
> > > Windows x86_64 build was enabled and the test suite is growing I think
> > > this is a good progress indicator for the Class library. The same for
> > > DRL VM – new testing modes are added and a total number of excluded
> > > tests for most of modes are reducing. Let's keep up good progress!
> > >
> > > But I'd like to attract attention to stability issues. I've been
> > > monitoring CC status for a couple of months and I my impression that
> > > situation with stability become worse – a number of reports about
> > > failures is growing. I'm afraid that if we set keeping good stability
> > > aside then it may affect the overall project's progress.
> > >
> > > I'd like to encourage everybody to pay attention to stability side and
> > > to hear ideas how to improve the situation?
> >
> > Caveat:  I'm still 200 emails behind on the dev list, a good sign of the
> > project's liveliness, but my apologies in advance for any repetition...
> >
> > IMO we won't achieve rock solid stability without focusing on it as an
> > explicit goal; and delivering a Milestone release is the best way to get
> > that focus.
> >
>
> Yes, I agree that without focusing on stability it is hard to achive it.
>
> > Not exactly a novel or radical idea, but Milestones have a number of
> > benefits not least of which is that they demonstrate we, as a diverse
> > group, can converge on a delivery of the code we are working on.  Some
> > projects will rumble on forever without committing to a stable, tested,
> > and likely imperfect, packaging of something.
> >
> > If the Milestones are time-boxed they also form a natural boundary for
> > feature planning, and afford some predictability to the project that is
> > also important.
> >
> > In my experience, something like 6 to 8 weeks between Milestones is a
> > good period of time.  Four weeks is too short to get big ticket items in
> > and stable, and 12 weeks (a quarter year) is too long such that
> > instability can set-in.
> >
> > In that 6 to 8 week period there should be a time at the end where we
> > hold back from introducing cool new function, and emphasize testing and
> > fixing.  Maybe that is the last seven days leading up to the Milestone,
> > and of course, if instability exists we slip the date until we can
> > declare a stable point.
> >
>
> Sure this approach makes sense and I think we should accept and follow
> it. I see only one issue here - it lets instabilities get accumulated
> and present unnoticed (ignored?) in the code for some period of time.
> This may result that minor update can have unintended consequences.
>
> Currently if we identify a regression we try to find a guilty commit
> and to fix it or do rollback. I think it is the right way – we keep
> code base in a good shape and don't let a number of known problems to
> grow. This approach showed its efficiency and the only thing I can do
> here is only encourage all contributors to run all available tests
> after doing any non-trivial change. But it seems for intermittent
> failures the approach with running all testing scenarios doesn't work
> well – usually they are not immediately detected. And it's hard to
> find guilty update after a long time so we tend to put such tests to
> exclude list.
>

> I'd like to propose the next approach that may help us to know about
> instabilities: develop (or take existing one, for example, Eclipse
> hello world) a scenario for testing stability and configure CC to run
> it at all times. The stability scenario must be the only one scenario
> for CC; it must be short (no longer then an hour), test JRE in stress
> conditions and cover most of functionality. If the scenario fails then
> all newly committed updates are subject for investigation and fix (or
> rollback).

Actually, I prefer something without a GUI, or at least without using
special 'GUI testing' tools. That should improve the quality of this
testing (the fewer tools, the more predictable the results :)). The
current "Eclipse hello world" scenario is based on AutoIT on Windows
and X11GuiTest on Linux. We also have a variant of this scenario based
on API calls that emulates the GUI scenario. Of these two approaches I
prefer the second, to minimize 'false alarms'. Or maybe some other
(non-GUI) scenarios?

 Thanks, Vladimir


>
> Thought? Objections?
>
> Thanks,
> Stepan.
>
> > I read the discussion on naming, and M1, M2, ... is fine by me.  How
> > about we pick a proposed date for Apache Harmony M1?
> >
> > Regards,
> > Tim
> >
>
>
> --
> Stepan Mishura
> Intel Enterprise Solutions Software Division
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Mikhail Loenko <ml...@gmail.com>.
2007/4/3, Tim Ellison <t....@gmail.com>:
> Stepan Mishura wrote:
> > Sure this approach makes sense and I think we should accept and follow
> > it. I see only one issue here - it lets instabilities get accumulated
> > and present unnoticed (ignored?) in the code for some period of time.
> > This may result that minor update can have unintended consequences.
>
> Apologies for not being so clear, we should continue to ensure that the
> automated tests and pre-commit tests are run as we do today to ensure
> day-to-day stability.  Of course, as work progresses there may be
> regressions that require a commit to be backed-out (or better still, fixed).
>
> The stability period approaching a Milestone is a pact that only
> stability enhancing fixes are applied to ensure there is no last minute
> disruption during the enhanced testing and declaration of the release.

I agree with "feature freeze" periods. An open question is the frequency
of milestones and the duration of the "feature freeze" periods before them.
I think we should also have code freeze periods, when we do last-minute
testing to validate the milestone candidate build.

Thanks,
Mikhail

>
> > Currently if we identify a regression we try to find a guilty commit
> > and to fix it or do rollback. I think it is the right way – we keep
> > code base in a good shape and don't let a number of known problems to
> > grow. This approach showed its efficiency and the only thing I can do
> > here is only encourage all contributors to run all available tests
> > after doing any non-trivial change.
>
> Agreed.
>
> > But it seems for intermittent
> > failures the approach with running all testing scenarios doesn't work
> > well – usually they are not immediately detected. And it's hard to
> > find guilty update after a long time so we tend to put such tests to
> > exclude list.
>
> Right, and we should triage these and draw extra focus on them during
> the stability pass.
>
> > I'd like to propose the next approach that may help us to know about
> > instabilities: develop (or take existing one, for example, Eclipse
> > hello world) a scenario for testing stability and configure CC to run
> > it at all times. The stability scenario must be the only one scenario
> > for CC; it must be short (no longer then an hour), test JRE in stress
> > conditions and cover most of functionality. If the scenario fails then
> > all newly committed updates are subject for investigation and fix (or
> > rollback).
>
> None from me.
>
> Regards,
> Tim
>

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Tim Ellison <t....@gmail.com>.
Stepan Mishura wrote:
> Sure this approach makes sense and I think we should accept and follow
> it. I see only one issue here - it lets instabilities get accumulated
> and present unnoticed (ignored?) in the code for some period of time.
> This may result that minor update can have unintended consequences.

Apologies for not being so clear, we should continue to ensure that the
automated tests and pre-commit tests are run as we do today to ensure
day-to-day stability.  Of course, as work progresses there may be
regressions that require a commit to be backed-out (or better still, fixed).

The stability period approaching a Milestone is a pact that only
stability enhancing fixes are applied to ensure there is no last minute
disruption during the enhanced testing and declaration of the release.

> Currently if we identify a regression we try to find a guilty commit
> and to fix it or do rollback. I think it is the right way – we keep
> code base in a good shape and don't let a number of known problems to
> grow. This approach showed its efficiency and the only thing I can do
> here is only encourage all contributors to run all available tests
> after doing any non-trivial change.

Agreed.

> But it seems for intermittent
> failures the approach with running all testing scenarios doesn't work
> well – usually they are not immediately detected. And it's hard to
> find guilty update after a long time so we tend to put such tests to
> exclude list.

Right, and we should triage these and draw extra focus on them during
the stability pass.

> I'd like to propose the next approach that may help us to know about
> instabilities: develop (or take existing one, for example, Eclipse
> hello world) a scenario for testing stability and configure CC to run
> it at all times. The stability scenario must be the only one scenario
> for CC; it must be short (no longer then an hour), test JRE in stress
> conditions and cover most of functionality. If the scenario fails then
> all newly committed updates are subject for investigation and fix (or
> rollback).

None from me.

Regards,
Tim

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Stepan Mishura <st...@gmail.com>.
On 3/30/07, Tim Ellison wrote:
> Stepan Mishura wrote:
> > We made a big progress in improving the project's code base. Just for
> > example, a total number of excluded tests in the class library were
> > reduced by ~ 60 entries in two months. Also taking into account that
> > Windows x86_64 build was enabled and the test suite is growing I think
> > this is a good progress indicator for the Class library. The same for
> > DRL VM – new testing modes are added and a total number of excluded
> > tests for most of modes are reducing. Let's keep up good progress!
> >
> > But I'd like to attract attention to stability issues. I've been
> > monitoring CC status for a couple of months and I my impression that
> > situation with stability become worse – a number of reports about
> > failures is growing. I'm afraid that if we set keeping good stability
> > aside then it may affect the overall project's progress.
> >
> > I'd like to encourage everybody to pay attention to stability side and
> > to hear ideas how to improve the situation?
>
> Caveat:  I'm still 200 emails behind on the dev list, a good sign of the
> project's liveliness, but my apologies in advance for any repetition...
>
> IMO we won't achieve rock solid stability without focusing on it as an
> explicit goal; and delivering a Milestone release is the best way to get
> that focus.
>

Yes, I agree that without focusing on stability it is hard to achieve it.

> Not exactly a novel or radical idea, but Milestones have a number of
> benefits not least of which is that they demonstrate we, as a diverse
> group, can converge on a delivery of the code we are working on.  Some
> projects will rumble on forever without committing to a stable, tested,
> and likely imperfect, packaging of something.
>
> If the Milestones are time-boxed they also form a natural boundary for
> feature planning, and afford some predictability to the project that is
> also important.
>
> In my experience, something like 6 to 8 weeks between Milestones is a
> good period of time.  Four weeks is too short to get big ticket items in
> and stable, and 12 weeks (a quarter year) is too long such that
> instability can set-in.
>
> In that 6 to 8 week period there should be a time at the end where we
> hold back from introducing cool new function, and emphasize testing and
> fixing.  Maybe that is the last seven days leading up to the Milestone,
> and of course, if instability exists we slip the date until we can
> declare a stable point.
>

Sure, this approach makes sense and I think we should accept and follow
it. I see only one issue here - it lets instabilities accumulate and
sit unnoticed (ignored?) in the code for some period of time. As a
result, a minor update can have unintended consequences.

Currently, if we identify a regression we try to find the guilty commit
and fix it or roll it back. I think this is the right way – we keep
the code base in good shape and don't let the number of known problems
grow. This approach has shown its efficiency, and the only thing I can
do here is encourage all contributors to run all available tests
after making any non-trivial change. But it seems that for intermittent
failures the approach of running all testing scenarios doesn't work
well – usually they are not detected immediately. And since it is hard
to find the guilty update after a long time, we tend to put such tests
on the exclude list.

I'd like to propose an approach that may help us learn about
instabilities: develop (or take an existing one, for example, Eclipse
hello world) a scenario for testing stability and configure CC to run
it at all times. The stability scenario must be the only scenario for
CC; it must be short (no longer than an hour), test the JRE under
stress conditions and cover most of the functionality. If the scenario
fails, then all newly committed updates are subject to investigation
and fix (or rollback).
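For concreteness, here is a rough sketch (plain Java, nothing
Harmony-specific; the class name, thread counts, and checks are all
invented for illustration) of the kind of short, API-level stress
scenario described above - several threads hammering a few classlib
areas and failing fast on any unexpected behaviour:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Illustrative sketch only, not an actual Harmony test.
public class StabilityScenario {

    static String runScenario(int threads, final int iterations) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<?>> tasks = new ArrayList<Future<?>>();
        for (int t = 0; t < threads; t++) {
            tasks.add(pool.submit(new Runnable() {
                public void run() {
                    for (int i = 0; i < iterations; i++) {
                        // collections + autoboxing
                        List<Integer> list = new ArrayList<Integer>();
                        int sum = 0;
                        for (int j = 0; j < 10; j++) {
                            list.add(j);
                            sum += list.get(j);
                        }
                        // string formatting
                        String s = String.format("sum=%d", sum);
                        if (!"sum=45".equals(s)) {
                            throw new AssertionError("bad format result: " + s);
                        }
                        // reflection
                        try {
                            Class.forName("java.lang.String").getMethod("length");
                        } catch (Exception e) {
                            throw new AssertionError(e);
                        }
                    }
                }
            }));
        }
        // get() rethrows any failure from a worker thread
        for (Future<?> f : tasks) {
            f.get(10, TimeUnit.MINUTES);
        }
        pool.shutdown();
        return "PASS";
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runScenario(4, 10000));
    }
}
```

A real scenario would of course cover far more of the classlib and VM
(I/O, networking, class loading, GC pressure), but the shape is the
same: bounded running time, concurrent load, and a single pass/fail
verdict that CC can act on.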

Thoughts? Objections?

Thanks,
Stepan.

> I read the discussion on naming, and M1, M2, ... is fine by me.  How
> about we pick a proposed date for Apache Harmony M1?
>
> Regards,
> Tim
>


-- 
Stepan Mishura
Intel Enterprise Solutions Software Division

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Yang Paulex <pa...@gmail.com>.
+1 from me on milestones, if a developer release will be delivered with
them.

Especially since it seems some people are going to start Java 6 work;
sharing Stepan's concern, I would prefer to release at least once at the
Java 5 API level before we head into a substantial Java 6 upgrade.

2007/3/30, Tim Ellison <t....@gmail.com>:
>
> Stepan Mishura wrote:
> > We made a big progress in improving the project's code base. Just for
> > example, a total number of excluded tests in the class library were
> > reduced by ~ 60 entries in two months. Also taking into account that
> > Windows x86_64 build was enabled and the test suite is growing I think
> > this is a good progress indicator for the Class library. The same for
> > DRL VM – new testing modes are added and a total number of excluded
> > tests for most of modes are reducing. Let's keep up good progress!
> >
> > But I'd like to attract attention to stability issues. I've been
> > monitoring CC status for a couple of months and I my impression that
> > situation with stability become worse – a number of reports about
> > failures is growing. I'm afraid that if we set keeping good stability
> > aside then it may affect the overall project's progress.
> >
> > I'd like to encourage everybody to pay attention to stability side and
> > to hear ideas how to improve the situation?
>
> Caveat:  I'm still 200 emails behind on the dev list, a good sign of the
> project's liveliness, but my apologies in advance for any repetition...
>
> IMO we won't achieve rock solid stability without focusing on it as an
> explicit goal; and delivering a Milestone release is the best way to get
> that focus.
>
> Not exactly a novel or radical idea, but Milestones have a number of
> benefits not least of which is that they demonstrate we, as a diverse
> group, can converge on a delivery of the code we are working on.  Some
> projects will rumble on forever without committing to a stable, tested,
> and likely imperfect, packaging of something.
>
> If the Milestones are time-boxed they also form a natural boundary for
> feature planning, and afford some predictability to the project that is
> also important.
>
> In my experience, something like 6 to 8 weeks between Milestones is a
> good period of time.  Four weeks is too short to get big ticket items in
> and stable, and 12 weeks (a quarter year) is too long such that
> instability can set-in.
>
> In that 6 to 8 week period there should be a time at the end where we
> hold back from introducing cool new function, and emphasize testing and
> fixing.  Maybe that is the last seven days leading up to the Milestone,
> and of course, if instability exists we slip the date until we can
> declare a stable point.
>
> I read the discussion on naming, and M1, M2, ... is fine by me.  How
> about we pick a proposed date for Apache Harmony M1?
>
> Regards,
> Tim
>



-- 
Paulex Yang
China Software Development laboratory
IBM

Re: [general] Discussion: how to keep up stability and fast progress all together?

Posted by Tim Ellison <t....@gmail.com>.
Stepan Mishura wrote:
> We made a big progress in improving the project's code base. Just for
> example, a total number of excluded tests in the class library were
> reduced by ~ 60 entries in two months. Also taking into account that
> Windows x86_64 build was enabled and the test suite is growing I think
> this is a good progress indicator for the Class library. The same for
> DRL VM – new testing modes are added and a total number of excluded
> tests for most of modes are reducing. Let's keep up good progress!
> 
> But I'd like to attract attention to stability issues. I've been
> monitoring CC status for a couple of months and I my impression that
> situation with stability become worse – a number of reports about
> failures is growing. I'm afraid that if we set keeping good stability
> aside then it may affect the overall project's progress.
> 
> I'd like to encourage everybody to pay attention to stability side and
> to hear ideas how to improve the situation?

Caveat:  I'm still 200 emails behind on the dev list, a good sign of the
project's liveliness, but my apologies in advance for any repetition...

IMO we won't achieve rock solid stability without focusing on it as an
explicit goal; and delivering a Milestone release is the best way to get
that focus.

Not exactly a novel or radical idea, but Milestones have a number of
benefits not least of which is that they demonstrate we, as a diverse
group, can converge on a delivery of the code we are working on.  Some
projects will rumble on forever without committing to a stable, tested,
and likely imperfect, packaging of something.

If the Milestones are time-boxed they also form a natural boundary for
feature planning, and afford some predictability to the project that is
also important.

In my experience, something like 6 to 8 weeks between Milestones is a
good period of time.  Four weeks is too short to get big ticket items in
and stable, and 12 weeks (a quarter year) is too long such that
instability can set in.

In that 6 to 8 week period there should be a time at the end where we
hold back from introducing cool new function, and emphasize testing and
fixing.  Maybe that is the last seven days leading up to the Milestone,
and of course, if instability exists we slip the date until we can
declare a stable point.

I read the discussion on naming, and M1, M2, ... is fine by me.  How
about we pick a proposed date for Apache Harmony M1?

Regards,
Tim