Posted to dev@harmony.apache.org by Mikhail Loenko <ml...@gmail.com> on 2006/09/14 10:05:39 UTC

[testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Hi Rana

2006/9/14, Rana Dasgupta <rd...@gmail.com>:
<SNIP>
>  One way to write the test would be to loop N times on a scenario that
> kicks in the optimization, say array bounds check elimination, and then
> loop N times on a very similar scenario where the bounds check does not
> get eliminated. Then the test should pass only if the difference in
> timing is at least X on any platform.

I tried to create a similar test when I was testing that resolved IP
addresses are cached. I finally figured out that this test is not the
best pre-commit test, as it may accidentally fail if I run other apps
on the same machine where I run the tests.
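
Roughly, the idea was something like the following (a reconstruction, not
the actual test; the host name and the 10x threshold are made up):

  import java.net.InetAddress;

  public class DnsCacheTest {
      public static void main(String[] args) throws Exception {
          long t1 = System.nanoTime();
          InetAddress.getByName("www.apache.org"); // first lookup, goes to the network
          long first = System.nanoTime() - t1;

          long t2 = System.nanoTime();
          InetAddress.getByName("www.apache.org"); // second lookup, should hit the cache
          long second = System.nanoTime() - t2;

          // Expect at least a 10x speedup from the cache. Any other load on
          // the machine can distort the numbers, hence the flakiness.
          if (second * 10 > first) {
              throw new AssertionError("resolved address does not look cached");
          }
      }
  }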

And as you know, an unstable failure is not the most pleasant thing to deal with :)

Thanks,
Mikhail


>  I have been forced to do this several times :-) So I couldn't resist
> spreading the pain.
>
> Thanks,
> Rana
>
>
>
> > On 14 Sep 2006 12:10:19 +0700, Egor Pasko <egor.pasko@gmail.com> wrote:
> > >
> > > Weldon, I am afraid this is a performance issue, and the test would
> > > show nothing more than a serious performance boost after the fix. I'll
> > > find someone with a test like this :) and ask them to attach it to JIRA.
> > > But... do we need performance tests in the regression suite?
> > >
> > > Apart from this issue, I see that the JIT infrastructure is not as
> > > test-oriented as one would expect. JIT tests sometimes need to be more
> > > sophisticated than those in vm/tests and, I guess, we need a separate
> > > place for them in the JIT tree.
> > >
> > > Many JIT tests are sensitive to various JIT options and cannot be
> > > reproduced in the default mode. For example, to catch a bug in OPT with
> > > a small test you will have to provide the "-Xem opt" option. Thus, in a
> > > regression test we will need:
> > > (a) extra options to the VM,
> > > (b) sources (often in Jasmin or C++, for hand-crafted IRs),
> > > (c) and even *.emconfig files to set custom sequences of optimizations.
> > >
> > > (Anything else?)
> > > I am afraid we will have to hack a lot on top of JUnit to get all this.
> > >
> > > Let's decide whether we need a framework like this at this point. We can
> > > make a first version quite quickly and improve it further on an as-needed
> > > basis. The design is not quite clear yet, though I expect this to be a
> > > fast-converging discussion.
> > >
> > >
> > > --
> > > Egor Pasko, Intel Managed Runtime Division



Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Rana Dasgupta <rd...@gmail.com>.
Hi Mikhail,
  I share the same observation as Egor: JIT optimizations should be
much more deterministic and their impact predictable. I understand the
unpredictability of the situation you described.

Thanks,
Rana






Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Egor Pasko <eg...@gmail.com>.
On the 0x1E4 day of Apache Harmony Mikhail Loenko wrote:
> In the example I've mentioned before, the difference between optimized and
> non-optimized calls was about 1000x. But the test sometimes failed anyway.

Yet, I think, pure Java performance is more predictable than network
performance. I am also afraid to include performance tests in the
pre-commit process, but having these tests around would be nice for
tracking performance regressions from time to time.


-- 
Egor Pasko, Intel Managed Runtime Division




Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Mikhail Loenko <ml...@gmail.com>.
In the example I've mentioned before, the difference between optimized and
non-optimized calls was about 1000x. But the test sometimes failed anyway.

Thanks,
Mikhail



Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Egor Pasko <eg...@gmail.com>.
On the 0x1E4 day of Apache Harmony Pavel Ozhdikhin wrote:
> > When I think of an optimization which gives a 1% improvement on some
> > simple workload, or a 3% improvement on EM64T platforms only, I doubt it
> > can be easily detected with a general-purpose test suite. IMO, performance
> > regression testing should have a specialized framework and a stable
> > environment which guarantees that no user application can spoil the results.
> 
> > The right solution might also be a JIT testing framework which would
> > understand the JIT IRs and check whether some code patterns have been
> > optimized as expected. This way we can guarantee that the necessary
> > optimizations are done independently of the user environment.

Pavel, Rana,

Sometimes a performance issue reproduces well with a microbenchmark on
all platforms. Basically, you can compare execution times with
some_optpass=on and some_optpass=off. If the difference is less than,
say, 20%, the test fails. In this case, it is easier to write a test
like this than to stick to IR-level testing.
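
A rough sketch of such a test, as a harness that times the same workload in
two child VMs (the option strings and the Workload class are placeholders,
not real DRLVM flags):

  import java.io.InputStream;

  public class OptPassTest {
      // Runs the given workload class in a child VM with one extra VM option
      // and returns the wall-clock time in milliseconds.
      static long timeChildVm(String vmOption) throws Exception {
          ProcessBuilder pb = new ProcessBuilder("java", vmOption, "Workload");
          pb.redirectErrorStream(true);
          long start = System.currentTimeMillis();
          Process p = pb.start();
          InputStream out = p.getInputStream();
          byte[] buf = new byte[4096];
          while (out.read(buf) != -1) { /* drain output so the child never blocks */ }
          if (p.waitFor() != 0) {
              throw new RuntimeException("workload failed under " + vmOption);
          }
          return System.currentTimeMillis() - start;
      }

      public static void main(String[] args) throws Exception {
          long on  = timeChildVm("-Dsome_optpass=on");   // placeholder option syntax
          long off = timeChildVm("-Dsome_optpass=off");
          // Fail unless the optimized run is at least 20% faster.
          if (on * 100 > off * 80) {
              throw new AssertionError("optimization buys less than 20%");
          }
      }
  }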

Sometimes, a performance issue is more sophisticated and we need an
IR-oriented test.

I would vote for having *both* kinds of tests in the JIT regression test base.

P.S.: Are we out of ideas, and is it time to implement something?


-- 
Egor Pasko, Intel Managed Runtime Division




Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Rana Dasgupta <rd...@gmail.com>.
On 15 Sep 2006 11:26:40 +0700, Egor Pasko <eg...@gmail.com> wrote:
> On the 0x1E4 day of Apache Harmony Mikhail Fursov wrote:
> > This would be the best solution to test whether an optimization works as
> > expected.
> > We can create the following framework inside the Jitrino compiler to test
> > individual optimizations and optimization inter-dependencies:
> >
> > Create a special optimization ("test") that works only for a "special"
> > Java method (jitrino.TestCase.testJitrino) during compilation.
> > It works in the following way:
> > 1) Cleans the current IR
> > 2) Sets up some kind of template IR: e.g. an IR with one loop and a const
> > inside the loop
> > 3) Runs a test that uses the internal Jitrino API and checks the results:
> > e.g. runs some loop optimizations and checks that the constant is moved
> > out of the loop.
> > 4) Restores the initial IR of the method.
> >
> > Such tests could be run from JUnit with a special adapter, as usual Java
> > tests.
> >
> > Does it make sense?
>
> I like it! Thanks!


This looks like a good idea for cases where the optimization is more subtle
and may not immediately translate to a tangible perf gain. These (at least
some of them) can be included in the pre-commit tests as correctness tests,
rather than performance tests.


> > Any other ideas or experience with how to test compiler optimizations
> > predictably?
>
> Although they involve performance measurements, NULLSTONE-like tests are
> quite predictable and relatively easy to write.


Yes. As Egor said above, we need both types. Nullstone-like perf tests (for
cases where they can be expressed in this way) should be quite predictable
and platform-independent. We can play with a reasonably safe confidence
interval (e.g., within X% of the manual optimization). These should be OK
for pre-commit too. If they are unsafe, we will know soon :-)

Thanks for the good ideas,
Rana



Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Pavel Ozhdikhin <pa...@gmail.com>.
On 18 Sep 2006 12:00:57 +0700, Egor Pasko <eg...@gmail.com> wrote:
> On the 0x1E5 day of Apache Harmony Pavel Ozhdikhin wrote:
> > Thanks for explaining. This is another variant of the bytecode-based
> > regression tests.
>
> This variant is also adaptable to Java-based and IR-based regression tests.
>

The IR-based framework proposed by Mikhail Fursov does not imply
running the transformed code, so what you have just proposed makes a
new type of IR-based test. Good idea for extending the framework,
thank you!


-Pavel Ozhdikhin



Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Egor Pasko <eg...@gmail.com>.
On the 0x1E5 day of Apache Harmony Pavel Ozhdikhin wrote:
> Thanks for explaining. This is another variant of the bytecode-based
> regression tests.

This variant is also adaptable to Java-based and IR-based regression tests.


-- 
Egor Pasko, Intel Managed Runtime Division




Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Pavel Ozhdikhin <pa...@gmail.com>.
Thanks for explaining. This is another variant of the bytecode-based
regression tests.

-Pavel



Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Egor Pasko <eg...@gmail.com>.
On the 0x1E5 day of Apache Harmony Pavel Ozhdikhin wrote:
>  Egor,
>
> How do Nullstone tests differ from what Rana proposed and Mikhail L.
> prototyped? Could you please elaborate?

The idea is simple. You have two versions of a test: the first is
unoptimized; the second is the same algorithm, but optimized by hand with
a specific optimization. If the execution times differ much, then the
optimization is not done properly in the compiler under test.

It works best with optimizations that are easy to represent in a
high-level language (e.g., Java), such as "load hoisting", "loop
unrolling", etc.

see http://www.nullstone.com for more info
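
For instance, a minimal pair for loop-invariant code motion might look like
this (the iteration count and the 20% threshold are arbitrary):

  public class LicmPairTest {
      static final int N = 100000000;

      // Version the JIT is expected to optimize: x * y is loop-invariant.
      static int plain(int x, int y) {
          int sum = 0;
          for (int i = 0; i < N; i++) sum += x * y + i;
          return sum;
      }

      // Same algorithm with the invariant hoisted by hand.
      static int hoisted(int x, int y) {
          int sum = 0;
          int t = x * y;
          for (int i = 0; i < N; i++) sum += t + i;
          return sum;
      }

      public static void main(String[] args) {
          plain(3, 7); hoisted(3, 7); // warm-up, so both get JIT-compiled

          long t0 = System.nanoTime();
          int a = plain(3, 7);
          long plainTime = System.nanoTime() - t0;

          t0 = System.nanoTime();
          int b = hoisted(3, 7);
          long hoistedTime = System.nanoTime() - t0;

          if (a != b) throw new AssertionError("versions disagree");
          // Pass only if the compiled plain version comes within 20% of the
          // hand-optimized one.
          if (plainTime * 100 > hoistedTime * 120) {
              throw new AssertionError("invariant code not hoisted?");
          }
      }
  }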

-- 
Egor Pasko, Intel Managed Runtime Division




Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Pavel Ozhdikhin <pa...@gmail.com>.
Egor,

How do Nullstone tests differ from what Rana proposed and Mikhail L.
prototyped? Could you please elaborate?

Thanks,
Pavel


> > Any other ideas or experience with how to test compiler optimizations
> > predictably?
>
> Although they involve performance measurements, NULLSTONE-like tests are
> quite predictable and relatively easy to write.
>
> --
> Egor Pasko, Intel Managed Runtime Division

Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Egor Pasko <eg...@gmail.com>.
On the 0x1E4 day of Apache Harmony Mikhail Fursov wrote:
> This would be the best solution to test whether an optimization works as
> expected.
> We can create the following framework inside the Jitrino compiler to test
> individual optimizations and optimization inter-dependencies:
>
> Create a special optimization ("test") that works only for a "special"
> Java method (jitrino.TestCase.testJitrino) during compilation.
> It works in the following way:
> 1) Cleans the current IR
> 2) Sets up some kind of template IR: e.g. an IR with one loop and a const
> inside the loop
> 3) Runs a test that uses the internal Jitrino API and checks the results:
> e.g. runs some loop optimizations and checks that the constant is moved
> out of the loop.
> 4) Restores the initial IR of the method.
>
> Such tests could be run from JUnit with a special adapter, as usual Java
> tests.
>
> Does it make sense?

I like it! Thanks!

> Any other ideas or experience with how to test compiler optimizations
> predictably?

Although they involve performance measurements, NULLSTONE-like tests are
quite predictable and relatively easy to write.

-- 
Egor Pasko, Intel Managed Runtime Division




Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Mikhail Fursov <mi...@gmail.com>.
This would be the best solution to test whether an optimization works as
expected.
We can create the following framework inside the Jitrino compiler to test
individual optimizations and optimization inter-dependencies:

Create a special optimization ("test") that works only for a "special"
Java method (jitrino.TestCase.testJitrino) during compilation.
It works in the following way:
1) Cleans the current IR
2) Sets up some kind of template IR: e.g. an IR with one loop and a const
inside the loop
3) Runs a test that uses the internal Jitrino API and checks the results:
e.g. runs some loop optimizations and checks that the constant is moved
out of the loop.
4) Restores the initial IR of the method.

Such tests could be run from JUnit with a special adapter, as usual Java
tests.
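
The Java side could be as small as this (only the method name comes from
the proposal above; the harness behavior and the iteration count are
guesswork):

  package jitrino;

  public class TestCase {
      // The "test" optimization is keyed on exactly this method: during its
      // compilation the pass throws away the real IR, builds the template
      // IR, runs the optimization under test, and checks the result through
      // the internal Jitrino API. The Java body is just a placeholder.
      public void testJitrino() {
      }

      public static void main(String[] args) {
          // Invoke the method repeatedly so it is certain to be JIT-compiled
          // rather than interpreted. A JUnit adapter would do the same and
          // turn a compiler-side check failure (e.g. a VM abort or bail-out)
          // into an ordinary test failure.
          TestCase t = new TestCase();
          for (int i = 0; i < 100000; i++) {
              t.testJitrino();
          }
      }
  }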

Does it make sense?
Any other ideas or experience with how to test compiler optimizations
predictably?


On 9/14/06, Pavel Ozhdikhin <pa...@gmail.com> wrote:
>
> *Re-sending to the new thread:*
>
>
> The right solution might also be a JIT testing framework which would
> understand the JIT IRs and check whether some code patterns have been
> optimized as expected. This way we can guarantee that the necessary
> optimizations are done independently of the user environment.
>
>

-- 
Mikhail Fursov

Re: [testing] optimization regressions (was: Re: [result] Re: [vote] HARMONY-1363 - DRLVM fixes and additions)

Posted by Pavel Ozhdikhin <pa...@gmail.com>.
*Re-sending to the new thread:*

Hello Rana,

When I think of an optimization which gives a 1% improvement on some
simple workload, or a 3% improvement on EM64T platforms only, I doubt it
can be easily detected with a general-purpose test suite. IMO, performance
regression testing should have a specialized framework and a stable
environment which guarantees that no user application can spoil the results.

The right solution might also be a JIT testing framework which would
understand the JIT IRs and check whether some code patterns have been
optimized as expected. This way we can guarantee that the necessary
optimizations are done independently of the user environment.

Thanks,
Pavel


