Posted to dev@harmony.apache.org by Vladimir Strigun <vs...@gmail.com> on 2006/08/01 14:55:53 UTC

[performance] performance measurement of HDK

Our project is now in good shape, and we have started encouraging people
to try real applications on top of Harmony. An important part of that
feedback will be the general impression of the stability, reliability
and performance of the HDK. We are doing a pretty good job of fixing
bugs and developing new functionality in the HDK, so it might be a good
time for us to start thinking about performance as well. It may be a
little premature, but should we consider having any performance targets?
If so, we may need to focus at least some of our effort on benchmarking
and tuning the overall performance of the HDK.

One of the main questions here is what our targets should be and how we
should measure our performance. There are several ways to measure
performance, such as commercial benchmarks, free benchmarks, application
startup time, small micro suites, etc. Some free benchmarks have been
mentioned in JIRA issues and on the dev list; nevertheless, at the
moment we don't have any performance goals. As part of the
application-enabling initiative, it might be good to treat publicly
available benchmarks as an additional list of software applications
that we enable on Harmony.

So I suggest we start discussing performance techniques and methods
that can be used to compare performance between the RI and the HDK. I
think that if we do not consider performance, we may get negative
feedback from users even when an application starts without any errors,
exceptions, etc.

One of the benchmarks that has been mentioned is DaCapo [1]. It's a
free, open-source benchmark suite, and I believe it can be used for
regular performance measurement of the HDK. I've tried to find other
free suites and came up with the following list:

Telco - mostly stresses BigInteger/BigDecimal functionality
GcOld - the purpose of this one is clear from the name :)
SciMark - a Java benchmark for scientific and numerical computing
Linpack Java - the well-known benchmark for solving linear equations
The Plasma Benchmark - creates an animated display by continuously
summing four sine waves in an applet
JavaWorld Benchmark - a benchmark for low-level operations: loops,
variable access, method invocation, arithmetic operators, casting,
instantiation, exception handling, thread creation and switching
CaffeineMark 3.0 - a low-level benchmark suite, including the sieve of
Eratosthenes, sorting, logic ops, method invocation, floating point,
simple graphics and GUI ops
JavaGrande benchmark suite - a set of benchmarks stressing different
areas of Java.

Bearing in mind that the list of publicly available benchmarks is not
very long, it will sometimes be necessary to create micro benchmarks
for individual patches (for instance, HARMONY-935). IMO a micro
benchmark should be run whenever we change code that it covers.
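
Just to illustrate what I mean (the class and the workload below are
made up; a real benchmark would exercise whatever code the patch, e.g.
HARMONY-935, actually touches), such a micro benchmark can be as small
as this:

import java.util.Random;

// Hypothetical micro benchmark: times String.indexOf over random data.
// Replace the body of run() with the code the patch actually changes.
public class MicroBench {
    private static final int WARMUP = 5;      // untimed runs to warm up the JIT
    private static final int ITERATIONS = 10; // timed runs

    static long run() {
        Random r = new Random(42);
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 100000; i++) {
            sb.append((char) ('a' + r.nextInt(26)));
        }
        String data = sb.toString();
        long start = System.currentTimeMillis();
        int hits = 0;
        for (int i = 0; i < 1000; i++) {
            if (data.indexOf("harmony", i) >= 0) {
                hits++;
            }
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("hits=" + hits); // keep the result live
        return elapsed;
    }

    public static void main(String[] args) {
        for (int i = 0; i < WARMUP; i++) {
            run();
        }
        long total = 0;
        for (int i = 0; i < ITERATIONS; i++) {
            total += run();
        }
        System.out.println("average ms per run: " + (total / ITERATIONS));
    }
}

The same class would be run on the RI and on Harmony and the averages
compared; if a single run takes only a few milliseconds, the workload
needs to be made bigger before the numbers mean anything.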

Another interesting, and possibly more productive, way to compare
performance between different implementations is to use non-free
benchmarks. For instance, we could use benchmarks from SPEC [2], like
SPECjvm, SPECjbb and SPECjAppServer. Unfortunately we would first need
to obtain licenses for them, but I believe this issue can be solved
with the help of the companies participating in Harmony :)

Thoughts? Comments?

[1] http://dacapobench.org
[2] http://www.spec.org

--
Vladimir Strigun,
Intel Middleware Products Division



Re: [performance] performance measurement of HDK

Posted by Geir Magnusson Jr <ge...@pobox.com>.
That's very cool - thanks for the link

Wes Felter wrote:
> Check out the continuous performance measurement the CACAO team is doing:
> 
> http://www.complang.tuwien.ac.at/cacaojvm/tgolem/
> http://www.complang.tuwien.ac.at/cacaojvm/tgolem/testing/benchmark_history.html
> 
> 
> Wes Felter - wesley@felter.org
> 
> 
> 
> 
> 



Re: [performance] performance measurement of HDK

Posted by Wes Felter <we...@felter.org>.
Check out the continuous performance measurement the CACAO team is doing:

http://www.complang.tuwien.ac.at/cacaojvm/tgolem/
http://www.complang.tuwien.ac.at/cacaojvm/tgolem/testing/benchmark_history.html

Wes Felter - wesley@felter.org




Re: [performance] performance measurement of HDK

Posted by Geir Magnusson Jr <ge...@pobox.com>.

Vladimir Strigun wrote:
> On 8/2/06, Geir Magnusson Jr <ge...@pobox.com> wrote:
>>
>>
>> Vladimir Strigun wrote:
>> >
>> > Telco - this one mostly stresses BigInteger/BigDecimal functionality
>> > GcOld - the purpose of this one is clear from the name :)
>> > SciMark - java benchmark for scientific and numerical computing
>> > Linpack java - well-known benchmark solving linear equations The
>> > Plasma Benchmark - creates an animated display by continuously summing
>> > four sine waves in an applet
>> > JavaWorld Benchmark - benchmark for low-level operations: loops,
>> > accessing variables, method invocation, arithmetic operators, casting,
>> > instantiation, exception handling, thread creation and switching.
>> > CaffeineMark 3.0 - low-level benchmark suite, including sieve of
>> > Eratosthenes, sorting, logic ops, method invocation, floating point,
>> > simple graphics and GUI ops
>> > JavaGrande benchmark suite - a set of benchmarks stressing different
>> > areas of java.
>>
>> These are good.  I it would be nice to just hook them into the
>> build-test framework as optional parts.
> 
> I'll try to hook them into build-test framework. Do you think that all
> of them should be added to the framework? I suppose benchmarks should
> be downloaded the same way as other dependencies in Harmony?

Well, I myself tend to try to get things working simply first, and then
"enhance the experience".

So I think that, as an example, getting the JavaWorld benchmark (or
whatever) working as an optional part of the CI - which would simply be
an Ant script - would be cool as a first step, with the user having to
go get the benchmark him/herself; then, if we like it, we can automate
the fetch. (But yes, automating the fetch would be cool...)

But this sounds good.  I'm going to go get the JavaWorld one, since
I've never heard of it and wonder how up-to-date it is.

geir



Re: [performance] performance measurement of HDK

Posted by Vladimir Strigun <vs...@gmail.com>.
On 8/2/06, Geir Magnusson Jr <ge...@pobox.com> wrote:
>
>
> Vladimir Strigun wrote:
> > Our project is now in a good shape and we started encouraging people
> > to try real applications on top of Harmony. One of the important parts
> > of the feedback will be general impression of stability, reliability
> > and performance of the HDK. We are doing pretty good in fixing bugs
> > and developing new functionality in HDK, and it might be a good time
> > for us to start thinking of performance as well. It might be a little
> > bit preliminary, but do we consider having any performance targets for
> > us? If yes, we may need to focus at least some of our efforts on
> > benchmarking and tuning overall performance of HDK.
>
> Well, I wouldn't say the HDK, as much as Harmony.
>
> >
> > One of the main questions here is what should be the targets for us
> > and how should we measure our performance. There are several ways for
> > measuring performance, such us commercial benchmarks, free benchmarks,
> > application startup, small micro suites, etc. Some of free benchmarks
> > have been mentioned in JIRA issues and dev list, nevertheless at the
> > moment we don't have any goals for performance. In spite of
> > application enabling initiative it might be good to consider publicly
> > available benchmarks as the additional list of the software
> > applications which we would enable on Harmony.
>
> Yep.
>
> >
> > So I suggest to start discussing performance techniques and methods
> > that can be used for comparing performance between RI and HDK. I think
>
> s/HDK/Harmony
>
> > in case we do not consider performance issues, we can get negative
> > feedback from users even if application starts without any errors,
> > exceptions, etc.
> >
> > One of the benchmarks that was mentioned is DaCapo[1]. It's a free
> > open-source benchmark suite and I believe it can be used for regular
> > performance measurement of HDK. I've tried to find other free suites
> > and got the following list:
> >
> > Telco - this one mostly stresses BigInteger/BigDecimal functionality
> > GcOld - the purpose of this one is clear from the name :)
> > SciMark - java benchmark for scientific and numerical computing
> > Linpack java - well-known benchmark solving linear equations The
> > Plasma Benchmark - creates an animated display by continuously summing
> > four sine waves in an applet
> > JavaWorld Benchmark - benchmark for low-level operations: loops,
> > accessing variables, method invocation, arithmetic operators, casting,
> > instantiation, exception handling, thread creation and switching.
> > CaffeineMark 3.0 - low-level benchmark suite, including sieve of
> > Eratosthenes, sorting, logic ops, method invocation, floating point,
> > simple graphics and GUI ops
> > JavaGrande benchmark suite - a set of benchmarks stressing different
> > areas of java.
>
> These are good.  I it would be nice to just hook them into the
> build-test framework as optional parts.

I'll try to hook them into the build-test framework. Do you think that
all of them should be added to the framework? I suppose the benchmarks
should be downloaded in the same way as other Harmony dependencies?

> >
> > Having in mind that the list of publicly available benchmarks is not
> > too big, sometimes it will be necessary to create micro benches for
> > some of patches (for instance, Harmony-935). IMO micro should be
> > started in case we change some code that the bench covers.
>
> Sure
>
> >
> > Other interesting and possibly more productive way for comparing
> > performance between different implementations are to use non-free
> > benchmarks. For instance, we can use benchmarks from Spec[n], like
> > SpecJVM, SpecJBB, SpecJAppserver. Unfortunately first we should get
> > license for it, but I believe this issue can be solved within the help
> > of companies participating in Harmony :)
>
> Maybe.  What we want to have is those benchmarks available to anyone in
> the project, although we'll take what we can get for now...
>
> geir
>
> >
> > Thoughts? Comments?
> >
> > [1] http://dacapobench.org
> > [2] http://www.spec.org
> >
> > --
> > Vladimir Strigun,
> > Intel Middleware Products Division
> >
> >
> >
> >
>
>
>



Re: [performance] performance measurement of HDK

Posted by Geir Magnusson Jr <ge...@pobox.com>.

Vladimir Strigun wrote:
> Our project is now in a good shape and we started encouraging people
> to try real applications on top of Harmony. One of the important parts
> of the feedback will be general impression of stability, reliability
> and performance of the HDK. We are doing pretty good in fixing bugs
> and developing new functionality in HDK, and it might be a good time
> for us to start thinking of performance as well. It might be a little
> bit preliminary, but do we consider having any performance targets for
> us? If yes, we may need to focus at least some of our efforts on
> benchmarking and tuning overall performance of HDK.

Well, I wouldn't say the HDK, as much as Harmony.

> 
> One of the main questions here is what should be the targets for us
> and how should we measure our performance. There are several ways for
> measuring performance, such us commercial benchmarks, free benchmarks,
> application startup, small micro suites, etc. Some of free benchmarks
> have been mentioned in JIRA issues and dev list, nevertheless at the
> moment we don't have any goals for performance. In spite of
> application enabling initiative it might be good to consider publicly
> available benchmarks as the additional list of the software
> applications which we would enable on Harmony.

Yep.

> 
> So I suggest to start discussing performance techniques and methods
> that can be used for comparing performance between RI and HDK. I think

s/HDK/Harmony

> in case we do not consider performance issues, we can get negative
> feedback from users even if application starts without any errors,
> exceptions, etc.
> 
> One of the benchmarks that was mentioned is DaCapo[1]. It's a free
> open-source benchmark suite and I believe it can be used for regular
> performance measurement of HDK. I've tried to find other free suites
> and got the following list:
> 
> Telco - this one mostly stresses BigInteger/BigDecimal functionality
> GcOld - the purpose of this one is clear from the name :)
> SciMark - java benchmark for scientific and numerical computing
> Linpack java - well-known benchmark solving linear equations The
> Plasma Benchmark - creates an animated display by continuously summing
> four sine waves in an applet
> JavaWorld Benchmark - benchmark for low-level operations: loops,
> accessing variables, method invocation, arithmetic operators, casting,
> instantiation, exception handling, thread creation and switching.
> CaffeineMark 3.0 - low-level benchmark suite, including sieve of
> Eratosthenes, sorting, logic ops, method invocation, floating point,
> simple graphics and GUI ops
> JavaGrande benchmark suite - a set of benchmarks stressing different
> areas of java.

These are good.  It would be nice to just hook them into the
build-test framework as optional parts.

> 
> Having in mind that the list of publicly available benchmarks is not
> too big, sometimes it will be necessary to create micro benches for
> some of patches (for instance, Harmony-935). IMO micro should be
> started in case we change some code that the bench covers.

Sure

> 
> Other interesting and possibly more productive way for comparing
> performance between different implementations are to use non-free
> benchmarks. For instance, we can use benchmarks from Spec[n], like
> SpecJVM, SpecJBB, SpecJAppserver. Unfortunately first we should get
> license for it, but I believe this issue can be solved within the help
> of companies participating in Harmony :)

Maybe.  What we want to have is those benchmarks available to anyone in
the project, although we'll take what we can get for now...

geir

> 
> Thoughts? Comments?
> 
> [1] http://dacapobench.org
> [2] http://www.spec.org
> 
> -- 
> Vladimir Strigun,
> Intel Middleware Products Division
> 
> 
> 
> 



Re: [performance] performance measurement of HDK

Posted by Anton Luht <an...@gmail.com>.
Vladimir,

I've found some more Java benchmarks in my bookmarks; maybe some of
them will be interesting for someone.

- Richards and deltaBlue [1] - first "simulates the task dispatcher in
the kernel of an operating system", second is "constraint solver
benchmark in the Java programming language."
- Copier [2] - "The source transmits 10,000 numbers through a large
number of copiers (1,000 to 20,000) to the sink"

As far as I understand, there are Richards and Copier implementations
available under the Creative Commons license.

[1] http://research.sun.com/people/mario/java_benchmarking/
[2] http://pws.prserv.net/dlissett/ben/copier1.htm

> Telco - this one mostly stresses BigInteger/BigDecimal functionality
> GcOld - the purpose of this one is clear from the name :)
> SciMark - java benchmark for scientific and numerical computing
> Linpack java - well-known benchmark solving linear equations The
> Plasma Benchmark - creates an animated display by continuously summing
> four sine waves in an applet
> JavaWorld Benchmark - benchmark for low-level operations: loops,
> accessing variables, method invocation, arithmetic operators, casting,
> instantiation, exception handling, thread creation and switching.
> CaffeineMark 3.0 - low-level benchmark suite, including sieve of
> Eratosthenes, sorting, logic ops, method invocation, floating point,
> simple graphics and GUI ops
> JavaGrande benchmark suite - a set of benchmarks stressing different
> areas of java.
>
> Having in mind that the list of publicly available benchmarks is not
> too big, sometimes it will be necessary to create micro benches for
> some of patches (for instance, Harmony-935). IMO micro should be
> started in case we change some code that the bench covers.
>
> Other interesting and possibly more productive way for comparing
> performance between different implementations are to use non-free
> benchmarks. For instance, we can use benchmarks from Spec[n], like
> SpecJVM, SpecJBB, SpecJAppserver. Unfortunately first we should get
> license for it, but I believe this issue can be solved within the help
> of companies participating in Harmony :)
>
> Thoughts? Comments?
>
> [1] http://dacapobench.org
> [2] http://www.spec.org
>
> --
> Vladimir Strigun,
> Intel Middleware Products Division
>
>
>


-- 
Regards,
Anton Luht,
Intel Middleware Products Division



Re: [performance] performance measurement of HDK

Posted by Vladimir Strigun <vs...@gmail.com>.
On 8/2/06, Geir Magnusson Jr <ge...@pobox.com> wrote:
>
>
> Vladimir Strigun wrote:
>
> [SNIP]
>
> > JavaWorld Benchmark - benchmark for low-level operations: loops,
> > accessing variables, method invocation, arithmetic operators, casting,
> > instantiation, exception handling, thread creation and switching.
>
> Is this the Volano suite?  I tried to run it, but it has hardwired
> locations for the JRE, and all popular implementations of 1.3.  Very funny.

By the JavaWorld benchmark I meant this one:
http://www.javaworld.com/javaworld/jw-04-1997/jw-04-optimize_p.html


> > CaffeineMark 3.0 - low-level benchmark suite, including sieve of
> > Eratosthenes, sorting, logic ops, method invocation, floating point,
> > simple graphics and GUI ops
>
> I was able to get this to work.
>
> I was impressed that it actually worked. (Ok, there are graphics
> glitches...)
>
> With a debug build on Ubuntu6 on a T42 w/ 1G Ram, two runs gave me to
> following.  NUmbers for Sun's JDK are in parens
>
> Sieve = 15347 / 17075        (24830)
> Loop = 50043 / 50138         (55529)
> Logic = 32705 / 32699        (34982)
> String = 12504 / 17341       (19216)
> Float = 29280 / 32209        (45124)
> Method = 45405 / 45361       (37331)
> Graphics = 552 / 628         (9069)
> Image = 26 / 33              (7966)
> Dialog = 103 / 118           (1463)
> CaffineMark == 4399 / 4934   (17635)
>
> I'm quite impressed.  IMO, we're holding our own everywhere except
> graphcs, and this is a debug build.
>
> This is fun.
>
> geir
>
>
>
>
>
>



Re: [performance] performance measurement of HDK

Posted by Geir Magnusson Jr <ge...@pobox.com>.

Vladimir Strigun wrote:

[SNIP]

> JavaWorld Benchmark - benchmark for low-level operations: loops,
> accessing variables, method invocation, arithmetic operators, casting,
> instantiation, exception handling, thread creation and switching.

Is this the Volano suite?  I tried to run it, but it has hard-wired
locations for the JRE and for all the popular 1.3 implementations.
Very funny.

> CaffeineMark 3.0 - low-level benchmark suite, including sieve of
> Eratosthenes, sorting, logic ops, method invocation, floating point,
> simple graphics and GUI ops

I was able to get this to work.

I was impressed that it actually worked. (Ok, there are graphics
glitches...)

With a debug build on Ubuntu 6 on a T42 with 1GB RAM, two runs gave me
the following.  Numbers for Sun's JDK are in parens:

Sieve = 15347 / 17075        (24830)
Loop = 50043 / 50138         (55529)
Logic = 32705 / 32699        (34982)
String = 12504 / 17341       (19216)
Float = 29280 / 32209        (45124)
Method = 45405 / 45361       (37331)
Graphics = 552 / 628         (9069)
Image = 26 / 33              (7966)
Dialog = 103 / 118           (1463)
CaffeineMark = 4399 / 4934   (17635)

I'm quite impressed.  IMO, we're holding our own everywhere except
graphics, and this is a debug build.

This is fun.

geir






Re: [performance] performance measurement of HDK

Posted by Tim Ellison <t....@gmail.com>.
Rana Dasgupta wrote:
> Before tracking detailed EM/JIT profiling information( which we may need at
> some point ), it may be useful to initially just track benchmark raw scores
> weekly to see overall progress/regression and make it publicly available.
> If there are licensing issues with SpecJVM and SpecJBB, we could use a
> broad
> public suite like JavaGrande. We could just run this externally weekly
> on the same reference machine and post the numbers.

I think it will be interesting to do both if we can.  Different people
will get different things from the profiling and benchmark scores and I
think both are useful.  Capturing the data regularly gives us history to
help track down any regressions.

The build machine at IBM now has the logs for loads of Harmony builds;
it would be interesting to plot the JUnit-reported times for each test
run across all the builds and see how they have changed.
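
I'm not sure exactly how the build logs are laid out, but assuming the
standard Ant junit task XML reports (TEST-*.xml) are kept in one
directory per build - the reports/BUILD_ID/ layout below is just a
guess - a quick sketch like this would pull the suite times out into
CSV for plotting:

import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Assumed layout: reports/BUILD_ID/TEST-*.xml (standard Ant junit output).
// Prints "build,suite,seconds" so the history can be plotted with any tool.
public class JUnitTimes {
    public static void main(String[] args) throws Exception {
        File root = new File(args.length > 0 ? args[0] : "reports");
        File[] builds = root.listFiles();
        if (builds == null) {
            System.err.println("no report directories under " + root);
            return;
        }
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        for (int i = 0; i < builds.length; i++) {
            File[] reports = builds[i].listFiles();
            if (reports == null) continue;
            for (int j = 0; j < reports.length; j++) {
                if (!reports[j].getName().endsWith(".xml")) continue;
                Document doc = dbf.newDocumentBuilder().parse(reports[j]);
                NodeList suites = doc.getElementsByTagName("testsuite");
                for (int k = 0; k < suites.getLength(); k++) {
                    Element suite = (Element) suites.item(k);
                    System.out.println(builds[i].getName() + ","
                            + suite.getAttribute("name") + ","
                            + suite.getAttribute("time"));
                }
            }
        }
    }
}

That would give one line per test suite per build, which is enough to
see trends without any extra infrastructure.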

> We need to decide what we want to use  benchmarks for. If it to be used
> internally primarily for performance health, we could consider requiring
> reporting before and after scores as part of JIRA code submissions along
> with smoke test logs.

No, JIRA issues should be zero-cost to create; we want to encourage
everyone to raise lots of relevant issues.  Don't put hurdles in their
way.

As I said above, I believe the performance measurements will be used
at different levels of analysis by people in Harmony.

> For this, in addition to the jitted code oriented ones like Linpack,
> Scimark, we also need the memory benchmarks like decapo. We could
> also start filing perf bugs. But given that DRLVM will change 
> significantly for some time, it seems too early to do this.

Do you mean the MMTk work?  I don't know what the timeline is for
that, but in the meantime fixing performance problems as they are
uncovered makes sense, doesn't it?

> To report/publish competitive scores we will need the rights to run
> specJVM98, specjbb2005, specjAPPServer etc. In addition to licensing, this
> also has some minimal infrastructure needs. Again, we can do do the prep
> work, but it maybe  too early to post competitive scores.

I agree it is too early; there are enough benchmarks to keep us going.

Regards,
Tim


> Thanks,
> Rana
> 
> 
> On 8/2/06, Mikhail Fursov <mi...@gmail.com> wrote:
>>
>> In my opinion this is a very good idea to have public performance profile
>> with a hotspots identified.
>> So, if this idea is accepted by community we can start a discussion which
>> kind of profile might be useful.
>>
>> I know that execution manager and optimizing JIT in DRLVM have a command
>> line keys to dump a lot of useful profiling information. I hope that
>> other
>> components have such switches too. So the only thing we need to do first
>> (if
>> the your proposal is accepted) is to write a tool to parses this data and
>> shows as webpage. I can help to anyone with this task (importing profile
>> from DRLVM JIT/EM) or just find a time and do it by myself if no
>> volunteers
>> will be found..
>>
>> On 8/2/06, Stefano Mazzocchi <st...@apache.org> wrote:
>> >
>> > one thing that happened in mozilla-land that catalized the community in
>> > fixing leaks and performance issues was adding profiling information to
>> > the tests and start plotting them overtime.
>> >
>> > Not only that gives an idea of the evolution of the program performance
>> > overtime, but it also keeps people honest because profiling is not
>> > something that should be done once and being forgotten but something
>> > that should be considered part of the feature of the program.
>> >
>> > --
>> > Stefano.
>> >
>> >
>>
>> -- 
>> Mikhail Fursov
>>
>>
> 

-- 

Tim Ellison (t.p.ellison@gmail.com)
IBM Java technology centre, UK.



Re: [performance] performance measurement of HDK

Posted by Geir Magnusson Jr <ge...@pobox.com>.

Rana Dasgupta wrote:
> Before tracking detailed EM/JIT profiling information( which we may need at
> some point ), it may be useful to initially just track benchmark raw scores
> weekly to see overall progress/regression and make it publicly available.
> If there are licensing issues with SpecJVM and SpecJBB, we could use a
> broad
> public suite like JavaGrande. We could just run this externally weekly
> on the same reference machine and post the numbers.

Sure - but there's no harm in the profiling info either.

> 
> We need to decide what we want to use  benchmarks for. If it to be used
> internally primarily for performance health, we could consider requiring
> reporting before and after scores as part of JIRA code submissions along
> with smoke test logs. For this, in addition to the jitted code oriented
> ones
> like Linpack, Scimark, we also need the memory benchmarks like decapo. We
> could also start filing perf bugs. But given that DRLVM will change
> significantly for some time, it seems too early to do this.

Yep.  And a real discouragement for people to contribute.

> 
> To report/publish competitive scores we will need the rights to run
> specJVM98, specjbb2005, specjAPPServer etc. In addition to licensing, this
> also has some minimal infrastructure needs. Again, we can do do the prep
> work, but it maybe  too early to post competitive scores.

Right - I'm not so worried about the spec scores at this point.  Just
working with apps that people can really identify with should be a good
start.

geir

> 
> Thanks,
> Rana
> 
> 
> On 8/2/06, Mikhail Fursov <mi...@gmail.com> wrote:
>>
>> In my opinion this is a very good idea to have public performance profile
>> with a hotspots identified.
>> So, if this idea is accepted by community we can start a discussion which
>> kind of profile might be useful.
>>
>> I know that execution manager and optimizing JIT in DRLVM have a command
>> line keys to dump a lot of useful profiling information. I hope that
>> other
>> components have such switches too. So the only thing we need to do first
>> (if
>> the your proposal is accepted) is to write a tool to parses this data and
>> shows as webpage. I can help to anyone with this task (importing profile
>> from DRLVM JIT/EM) or just find a time and do it by myself if no
>> volunteers
>> will be found..
>>
>> On 8/2/06, Stefano Mazzocchi <st...@apache.org> wrote:
>> >
>> > one thing that happened in mozilla-land that catalized the community in
>> > fixing leaks and performance issues was adding profiling information to
>> > the tests and start plotting them overtime.
>> >
>> > Not only that gives an idea of the evolution of the program performance
>> > overtime, but it also keeps people honest because profiling is not
>> > something that should be done once and being forgotten but something
>> > that should be considered part of the feature of the program.
>> >
>> > --
>> > Stefano.
>> >
>> >
>>
>> -- 
>> Mikhail Fursov
>>
>>
> 



Re: [performance] performance measurement of HDK

Posted by Rana Dasgupta <rd...@gmail.com>.
Before tracking detailed EM/JIT profiling information (which we may
need at some point), it may be useful to initially just track raw
benchmark scores weekly, to see overall progress/regression, and make
them publicly available. If there are licensing issues with SpecJVM
and SpecJBB, we could use a broad public suite like JavaGrande. We
could just run this externally every week on the same reference
machine and post the numbers.
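
To sketch the mechanics only (the file name, VM label and benchmark
name below are made up), the weekly run just needs to append one line
per benchmark to a history file that we publish:

import java.io.FileWriter;
import java.io.PrintWriter;
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical score recorder: appends "date,vm,benchmark,score" to a
// CSV that gets posted after each weekly run on the reference machine.
public class RecordScore {
    public static void main(String[] args) throws Exception {
        if (args.length != 3) {
            System.err.println("usage: RecordScore vm benchmark score");
            return;
        }
        String date = new SimpleDateFormat("yyyy-MM-dd").format(new Date());
        PrintWriter out = new PrintWriter(new FileWriter("scores.csv", true));
        try {
            out.println(date + "," + args[0] + "," + args[1] + "," + args[2]);
        } finally {
            out.close();
        }
    }
}

For example: "java RecordScore harmony JavaGrande.Section2 1234.5"
(the VM label and the score format are whatever we agree on).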

We need to decide what we want to use benchmarks for. If they are to be
used internally, primarily for performance health, we could consider
requiring before and after scores as part of JIRA code submissions,
along with smoke test logs. For this, in addition to the JIT-oriented
ones like Linpack and SciMark, we also need memory benchmarks like
DaCapo. We could also start filing perf bugs. But given that DRLVM will
change significantly for some time, it seems too early to do this.

To report/publish competitive scores we will need the rights to run
SPECjvm98, SPECjbb2005, SPECjAppServer, etc. In addition to licensing,
this also has some minimal infrastructure needs. Again, we can do the
prep work, but it may be too early to post competitive scores.

Thanks,
Rana


On 8/2/06, Mikhail Fursov <mi...@gmail.com> wrote:
>
> In my opinion this is a very good idea to have public performance profile
> with a hotspots identified.
> So, if this idea is accepted by community we can start a discussion which
> kind of profile might be useful.
>
> I know that execution manager and optimizing JIT in DRLVM have a command
> line keys to dump a lot of useful profiling information. I hope that other
> components have such switches too. So the only thing we need to do first
> (if
> the your proposal is accepted) is to write a tool to parses this data and
> shows as webpage. I can help to anyone with this task (importing profile
> from DRLVM JIT/EM) or just find a time and do it by myself if no
> volunteers
> will be found..
>
> On 8/2/06, Stefano Mazzocchi <st...@apache.org> wrote:
> >
> > one thing that happened in mozilla-land that catalized the community in
> > fixing leaks and performance issues was adding profiling information to
> > the tests and start plotting them overtime.
> >
> > Not only that gives an idea of the evolution of the program performance
> > overtime, but it also keeps people honest because profiling is not
> > something that should be done once and being forgotten but something
> > that should be considered part of the feature of the program.
> >
> > --
> > Stefano.
> >
> >
>
> --
> Mikhail Fursov
>
>

Re: [performance] performance measurement of HDK

Posted by Mikhail Fursov <mi...@gmail.com>.
On 8/2/06, Geir Magnusson Jr <ge...@pobox.com> wrote:
>
> Why don't you kick it off by adding a page to the website on how to do
> this?  At least how to get the info out...


Geir,
I linked the file with a description of the DRLVM Execution Manager
configuration to the document prepared by Nadya:
http://issues.apache.org/jira/browse/HARMONY-1058

I do not know when this JIRA will be accepted, but anyone who is
interested in hacking on DRLVM performance today has something to
start from.


-- 
Mikhail Fursov

Re: [performance] performance measurement of HDK

Posted by Geir Magnusson Jr <ge...@pobox.com>.

Mikhail Fursov wrote:
> In my opinion this is a very good idea to have public performance profile
> with a hotspots identified.
> So, if this idea is accepted by community we can start a discussion which
> kind of profile might be useful.

I didn't think it was even a question :)

> 
> I know that execution manager and optimizing JIT in DRLVM have a command
> line keys to dump a lot of useful profiling information. I hope that other
> components have such switches too. So the only thing we need to do first
> (if
> the your proposal is accepted) is to write a tool to parses this data and
> shows as webpage. I can help to anyone with this task (importing profile
> from DRLVM JIT/EM) or just find a time and do it by myself if no volunteers
> will be found..

Why don't you kick it off by adding a page to the website on how to do
this?  At least how to get the info out...

geir

> 
> On 8/2/06, Stefano Mazzocchi <st...@apache.org> wrote:
>>
>> one thing that happened in mozilla-land that catalized the community in
>> fixing leaks and performance issues was adding profiling information to
>> the tests and start plotting them overtime.
>>
>> Not only that gives an idea of the evolution of the program performance
>> overtime, but it also keeps people honest because profiling is not
>> something that should be done once and being forgotten but something
>> that should be considered part of the feature of the program.
>>
>> -- 
>> Stefano.
>>
>>
> 



Re: [performance] performance measurement of HDK

Posted by Mikhail Fursov <mi...@gmail.com>.
In my opinion it is a very good idea to have a public performance
profile with hotspots identified. So, if this idea is accepted by the
community, we can start a discussion about which kind of profile might
be useful.

I know that the execution manager and the optimizing JIT in DRLVM have
command-line keys to dump a lot of useful profiling information. I hope
that other components have such switches too. So the only thing we need
to do first (if your proposal is accepted) is to write a tool that
parses this data and shows it as a web page. I can help anyone with
this task (importing the profile from the DRLVM JIT/EM), or just find
the time and do it myself if no volunteers are found.
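
I don't want to guess at the exact EM/JIT dump format here, so purely
as a sketch (the two-column "method count" input below is an
assumption), the web page generator could start out as small as this:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Assumed input: one "methodName count" pair per line, as dumped by the VM.
// Output: hotspots.html listing the methods sorted by count, hottest first.
public class ProfileToHtml {
    public static void main(String[] args) throws Exception {
        BufferedReader in = new BufferedReader(new FileReader(args[0]));
        List rows = new ArrayList();
        String line;
        while ((line = in.readLine()) != null) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length == 2) {
                rows.add(new String[] { parts[0], parts[1] });
            }
        }
        in.close();
        Collections.sort(rows, new Comparator() {
            public int compare(Object a, Object b) {
                long ca = Long.parseLong(((String[]) a)[1]);
                long cb = Long.parseLong(((String[]) b)[1]);
                return ca < cb ? 1 : (ca > cb ? -1 : 0);
            }
        });
        PrintWriter out = new PrintWriter(new FileWriter("hotspots.html"));
        out.println("<html><body><table border='1'>");
        out.println("<tr><th>method</th><th>count</th></tr>");
        for (int i = 0; i < rows.size(); i++) {
            String[] r = (String[]) rows.get(i);
            out.println("<tr><td>" + r[0] + "</td><td>" + r[1] + "</td></tr>");
        }
        out.println("</table></body></html>");
        out.close();
    }
}

Once the real dump format is known, only the parsing part needs to
change; the sorting and HTML generation stay the same.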

On 8/2/06, Stefano Mazzocchi <st...@apache.org> wrote:
>
> one thing that happened in mozilla-land that catalized the community in
> fixing leaks and performance issues was adding profiling information to
> the tests and start plotting them overtime.
>
> Not only that gives an idea of the evolution of the program performance
> overtime, but it also keeps people honest because profiling is not
> something that should be done once and being forgotten but something
> that should be considered part of the feature of the program.
>
> --
> Stefano.
>
>

-- 
Mikhail Fursov

Re: [performance] performance measurement of HDK

Posted by Stefano Mazzocchi <st...@apache.org>.
Vladimir Strigun wrote:
> Our project is now in a good shape and we started encouraging people
> to try real applications on top of Harmony. One of the important parts
> of the feedback will be general impression of stability, reliability
> and performance of the HDK. We are doing pretty good in fixing bugs
> and developing new functionality in HDK, and it might be a good time
> for us to start thinking of performance as well. It might be a little
> bit preliminary, but do we consider having any performance targets for
> us? If yes, we may need to focus at least some of our efforts on
> benchmarking and tuning overall performance of HDK.
> 
> One of the main questions here is what should be the targets for us
> and how should we measure our performance. There are several ways for
> measuring performance, such us commercial benchmarks, free benchmarks,
> application startup, small micro suites, etc. Some of free benchmarks
> have been mentioned in JIRA issues and dev list, nevertheless at the
> moment we don't have any goals for performance. In spite of
> application enabling initiative it might be good to consider publicly
> available benchmarks as the additional list of the software
> applications which we would enable on Harmony.
> 
> So I suggest to start discussing performance techniques and methods
> that can be used for comparing performance between RI and HDK. I think
> in case we do not consider performance issues, we can get negative
> feedback from users even if application starts without any errors,
> exceptions, etc.
> 
> One of the benchmarks that was mentioned is DaCapo[1]. It's a free
> open-source benchmark suite and I believe it can be used for regular
> performance measurement of HDK. I've tried to find other free suites
> and got the following list:
> 
> Telco - this one mostly stresses BigInteger/BigDecimal functionality
> GcOld - the purpose of this one is clear from the name :)
> SciMark - java benchmark for scientific and numerical computing
> Linpack java - well-known benchmark solving linear equations The
> Plasma Benchmark - creates an animated display by continuously summing
> four sine waves in an applet
> JavaWorld Benchmark - benchmark for low-level operations: loops,
> accessing variables, method invocation, arithmetic operators, casting,
> instantiation, exception handling, thread creation and switching.
> CaffeineMark 3.0 - low-level benchmark suite, including sieve of
> Eratosthenes, sorting, logic ops, method invocation, floating point,
> simple graphics and GUI ops
> JavaGrande benchmark suite - a set of benchmarks stressing different
> areas of java.
> 
> Having in mind that the list of publicly available benchmarks is not
> too big, sometimes it will be necessary to create micro benches for
> some of patches (for instance, Harmony-935). IMO micro should be
> started in case we change some code that the bench covers.
> 
> Other interesting and possibly more productive way for comparing
> performance between different implementations are to use non-free
> benchmarks. For instance, we can use benchmarks from Spec[n], like
> SpecJVM, SpecJBB, SpecJAppserver. Unfortunately first we should get
> license for it, but I believe this issue can be solved within the help
> of companies participating in Harmony :)
> 
> Thoughts? Comments?

One thing that happened in Mozilla-land that catalyzed the community
into fixing leaks and performance issues was adding profiling
information to the tests and starting to plot it over time.

Not only does that give an idea of the evolution of the program's
performance over time, it also keeps people honest, because profiling
is not something that should be done once and then forgotten but
something that should be considered part of the feature set of the
program.

-- 
Stefano.

