Posted to dev@mina.apache.org by Adam Fisk <ad...@gmail.com> on 2007/05/24 17:45:01 UTC

grizzly versus mina

The slides were just posted from this JavaOne session claiming Grizzly
blows MINA away performance-wise, and I'm just curious as to people's views
on it.  They present some interesting ideas about optimizing selector
threading and ByteBuffer use.

http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5

Maybe someone could comment on the performance improvements in MINA 2.0?  It
might also be useful to look at Grizzly's techniques to see if MINA could
incorporate them.  I know at least Scott Oaks from Grizzly is a solid
performance guy, so their numbers are likely correct.

Quick note:  I'm not trying to spark a Grizzly/MINA battle by any means.  I
just started using MINA after having implemented several generic NIO
frameworks myself, and I absolutely love MINA's approach.  It allowed me to
code a STUN server in 2 days, and I'm porting my SIP server now.

Thanks,

Adam

Re: grizzly versus mina

Posted by Adam Fisk <ad...@gmail.com>.
Hmm... I don't think you're reading the benchmarks correctly.  Slide 19
shows an improvement of over 50% with Grizzly.

I think the MINA coders should feel very proud too.  I love the framework
and have no plans to stop using it.

-Adam


On 5/24/07, John Preston <by...@gmail.com> wrote:
>
> I think that the MINA coders should feel very proud. If I read the
> benchmarks correct then we are talking about 10% difference and that's
> within the margin of error of almost anything. Considering the issues
> mentioned previously about tuning for HTTP probably MINA and Grizzly
> are equals, at a fraction of the cost.
>
> John
>
> On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> > I agree on the tendency to manipulate benchmarks, but that doesn't mean
> > benchmarks aren't a useful tool.  How else can we evaluate
> performance?  I
> > guess I'm most curious about what the two projects might be able to
> learn
> > from each other.  I would suspect MINA's APIs are significantly easier
> to
> > use than Grizzly's, for example, and it wouldn't surprise me at all if
> Sun's
> > benchmarks were somewhat accurate.  I hate Sun's java.net projects as
> much
> > as the next guy, but that doesn't mean there's not an occasional jewel
> in
> > there.
> >
> > It would at least be worth running independent tests.  If the
> differences
> > are even close to the claims, it would make a ton of sense to just copy
> > their ideas.  No need for too much pride on either side!  Just seems
> like
> > they've put a ton of work into rigorously analyzing the performance
> > tradeoffs of different design decisions, and it might make sense to take
> > advantage of that.  If their benchmarks are off and MINA performs
> better,
> > then they should go ahead and copy MINA.
> >
> > That's all assuming the performance tweaks don't make the existing APIs
> > unworkable.
> >
> > -Adam
> >
> >
> > On 5/24/07, Alex Karasulu <ak...@apache.org> wrote:
> > >
> > > On 5/24/07, Mladen Turk <mt...@apache.org> wrote:
> > > >
> > > > Adam Fisk wrote:
> > > > > The slides were just posted from this Java One session claiming
> > > Grizzly
> > > > > blows MINA away performance-wise, and I'm just curious as to
> people's
> > > > views
> > > > > on it.  They present some interesting ideas about optimizing
> selector
> > > > > threading and ByteBuffer use.
> > > > >
> > > > >
> > > >
> > >
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> > > > >
> > > >
> > > > I love the slide 20!
> > > > JFA finally admitted that Tomcat's APR-NIO is faster then JDK one ;)
> > > > However last time I did benchmarks it was much faster then 10%.
> > > >
> > > > >
> > > > > Maybe someone could comment on the performance improvements in
> MINA
> > > > > 2.0?
> > > >
> > > > He probably compared MINA's Serial IO, and that is not usable
> > > > for production (jet). I wonder how it would look with real
> > > > async http server.
> > > > Nevertheless, benchmarks are like assholes. Everyone has one.
> > >
> > >
> > > Exactly!
> > >
> > > Incidentally SUN has been trying to attack several projects via the
> > > performance angle for
> > > some time now.  Just recently I received a cease and desist letter
> from
> > > them
> > > when I
> > > compiled some performance metrics.  The point behind it is was that we
> > > were
> > > not correctly
> > > configuring their products.  I guess they just want to make sure
> things
> > > are
> > > setup to their
> > > advantage.  That's what all these metrics revolve around and if you
> ask me
> > > they're not worth
> > > a damn.  There is a million ways to make one product perform better
> than
> > > another depending
> > > on configuration, environment and the application.  However is raw
> > > performance metrics as
> > > important as a good flexible design?
> > >
> > > Alex
> > >
> >
>

Re: grizzly versus mina

Posted by John Preston <by...@gmail.com>.
I think that the MINA coders should feel very proud. If I read the
benchmarks correctly, then we are talking about a 10% difference, and
that's within the margin of error of almost anything. Considering the
issues mentioned previously about tuning for HTTP, MINA and Grizzly are
probably equals, at a fraction of the cost.

John

On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> I agree on the tendency to manipulate benchmarks, but that doesn't mean
> benchmarks aren't a useful tool.  How else can we evaluate performance?  I
> guess I'm most curious about what the two projects might be able to learn
> from each other.  I would suspect MINA's APIs are significantly easier to
> use than Grizzly's, for example, and it wouldn't surprise me at all if Sun's
> benchmarks were somewhat accurate.  I hate Sun's java.net projects as much
> as the next guy, but that doesn't mean there's not an occasional jewel in
> there.
>
> It would at least be worth running independent tests.  If the differences
> are even close to the claims, it would make a ton of sense to just copy
> their ideas.  No need for too much pride on either side!  Just seems like
> they've put a ton of work into rigorously analyzing the performance
> tradeoffs of different design decisions, and it might make sense to take
> advantage of that.  If their benchmarks are off and MINA performs better,
> then they should go ahead and copy MINA.
>
> That's all assuming the performance tweaks don't make the existing APIs
> unworkable.
>
> -Adam
>
>
> On 5/24/07, Alex Karasulu <ak...@apache.org> wrote:
> >
> > On 5/24/07, Mladen Turk <mt...@apache.org> wrote:
> > >
> > > Adam Fisk wrote:
> > > > The slides were just posted from this Java One session claiming
> > Grizzly
> > > > blows MINA away performance-wise, and I'm just curious as to people's
> > > views
> > > > on it.  They present some interesting ideas about optimizing selector
> > > > threading and ByteBuffer use.
> > > >
> > > >
> > >
> > http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> > > >
> > >
> > > I love the slide 20!
> > > JFA finally admitted that Tomcat's APR-NIO is faster then JDK one ;)
> > > However last time I did benchmarks it was much faster then 10%.
> > >
> > > >
> > > > Maybe someone could comment on the performance improvements in MINA
> > > > 2.0?
> > >
> > > He probably compared MINA's Serial IO, and that is not usable
> > > for production (jet). I wonder how it would look with real
> > > async http server.
> > > Nevertheless, benchmarks are like assholes. Everyone has one.
> >
> >
> > Exactly!
> >
> > Incidentally SUN has been trying to attack several projects via the
> > performance angle for
> > some time now.  Just recently I received a cease and desist letter from
> > them
> > when I
> > compiled some performance metrics.  The point behind it is was that we
> > were
> > not correctly
> > configuring their products.  I guess they just want to make sure things
> > are
> > setup to their
> > advantage.  That's what all these metrics revolve around and if you ask me
> > they're not worth
> > a damn.  There is a million ways to make one product perform better than
> > another depending
> > on configuration, environment and the application.  However is raw
> > performance metrics as
> > important as a good flexible design?
> >
> > Alex
> >
>

Re: grizzly versus mina

Posted by Trustin Lee <tr...@gmail.com>.
On 5/25/07, John Preston <by...@gmail.com> wrote:
> My thought was that when comparing Glassfish that is built on top of
> Grizzly, and Tomcat with it own NIO engine you only get 10%
> improvement. But when you compare AsyncWeb on top of MINA or Grizzly
> you get a 50% difference. That would tell me that MINA is way slower
> than the IO engine for Tomcat. But I haven't seen this.

AsyncWeb has its own request-response pipeline, and I found that it
causes a performance slowdown.  I made the pipeline optional so the
AsyncWeb protocol codec works directly with MINA, and got much better
performance test results.

BTW, I'm not sure about non-keepalive connections, because I couldn't
test them properly due to the TIME_WAIT limitation.

HTH,
Trustin
-- 
what we call human nature is actually human habit
--
http://gleamynode.net/
--
PGP Key ID: 0x0255ECA6

Re: grizzly versus mina

Posted by jian wu <he...@gmail.com>.
Hi,

Just want to look at the bright side: this presentation also gives a lot of
evidence that MINA made the right design decisions up front.

For example, on slide 67, "Tip#6 To Thread or not to Thread", it says:
"We've benchmarked all of the above options and found that the one that
  perform the best is option C:
  * Execute OP_ACCEPT using the Selector thread and
    OP_READ on a separate thread
    ... ..."

This actually proves that MINA made a brilliant design decision when it
introduced "ThreadPoolFilter" in MINA 0.9 and "ExecutorFilter"/"ThreadModel"
in MINA 1.0 at the framework level, so MINA applications get the best
performance by using ExecutorFilter/ThreadPoolFilter :-)
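Slide 67's option C can be sketched with plain JDK NIO.  The following is an
illustrative toy, not MINA's or Grizzly's actual code (the class name, the
pool size of 4, and the one-shot echo protocol are all made up for the demo):
the selector thread services OP_ACCEPT itself and hands each OP_READ off to a
worker pool, which is roughly the division of labor that
ThreadPoolFilter/ExecutorFilter give a MINA application.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OptionCDemo {

    /** Runs a one-shot echo server and returns what a client gets back. */
    public static String demo() throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        Thread selectorThread = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    selector.select(200);
                    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next();
                        it.remove();
                        if (key.isAcceptable()) {
                            // OP_ACCEPT is serviced on the selector thread itself
                            SocketChannel c = ((ServerSocketChannel) key.channel()).accept();
                            c.configureBlocking(false);
                            c.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            // stop watching this key, then hand the read to a worker thread
                            key.interestOps(0);
                            SocketChannel c = (SocketChannel) key.channel();
                            workers.submit(() -> {
                                try {
                                    ByteBuffer buf = ByteBuffer.allocate(512);
                                    c.read(buf);
                                    buf.flip();
                                    c.write(buf);  // echo back
                                    c.close();     // one-shot connection for the demo
                                } catch (IOException ignored) { }
                            });
                        }
                    }
                }
            } catch (IOException ignored) { }
        });
        selectorThread.start();

        // Plain blocking client: send "ping", read the 4-byte echo.
        String reply;
        try (Socket s = new Socket("127.0.0.1", port)) {
            s.getOutputStream().write("ping".getBytes());
            s.getOutputStream().flush();
            reply = new String(s.getInputStream().readNBytes(4));
        }

        selectorThread.interrupt();
        selectorThread.join();
        workers.shutdown();
        server.close();
        selector.close();
        return reply;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("echoed: " + demo());
    }
}
```

Note the key.interestOps(0) before the hand-off: without it, the
still-readable channel would keep the selector spinning, and the same read
could be dispatched to several workers at once.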

Thanks a lot!

Jian




On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> Oh I see.  That is certainly odd.  Maybe the previous post about Tomcat IO
> being faster than Java IO is a clue?
>
> On 5/24/07, John Preston <by...@gmail.com> wrote:
> >
> > My thought was that when comparing Glassfish that is built on top of
> > Grizzly, and Tomcat with it own NIO engine you only get 10%
> > improvement. But when you compare AsyncWeb on top of MINA or Grizzly
> > you get a 50% difference. That would tell me that MINA is way slower
> > than the IO engine for Tomcat. But I haven't seen this.
> >
> > John
> >
> > On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> > > The benchmark was swapping MINA and Grizzly, both using AsyncWeb...  I
> > think
> > > you're maybe thinking of Grizzly as synonymous with Glassfish?  They
> > pulled
> > > it out into a generic NIO framework along the lines of MINA.
> > >
> > > On 5/24/07, John Preston <by...@gmail.com> wrote:
> > > >
> > > > OK. I was looking at the Tomcat vd grizzly benchmark. But then its a
> > > > bit strange. If your'e only 10% faster than tomcat but 50% faster than
> > > > MINA. That 50% is with AsyncWeb on MINA. So its not a bench mark of
> > > > MINA alone the application on MINA.
> > > >
> > > > I chose MINA for a simple fast scalable server that would server up
> > > > data files via HTTP requests and MINA for me at the time (about 1 year
> > > > ago) was the quickest, most simple to use. I remember trying tomcat
> > > > but it was too big and wasn't that fast for simple responses, so I'm
> > > > not sure that the 50% is MINA or AsyncWeb.
> > > >
> > > > I also agree java.net has some very useful projects, and for me, I
> > > > appreciate being able to read other implementation details and see
> > > > whether they have any use for me. Also lets remember that SUN, like
> > > > everybody else has the right to beat their chest and say they are the
> > > > best. Its for us to ignore them when we see that its more ego than
> > > > anything substantial.
> > > >
> > > > Anyway, back to the matter of benchmarks. it might be nice to have a
> > > > set of classes that would allow one to create a test of various
> > > > operations using MINA, and so from version to version, patches
> > > > included, we could keep track of whether we are improving things.
> > > >
> > > > John
> > > > On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> > > > > I hear you.  Sun's generally just annoying.  It would just probably
> > be
> > > > worth
> > > > > taking a look under the hood to see if there's any real magic there
> > > > > regardless of all th politics.  Wish I could volunteer to do it, but
> > > > I've
> > > > > got a startup to run!
> > > > >
> > > > > Thanks.
> > > > >
> > > > > -Adam
> > > > >
> > > > >
> > > > > On 5/24/07, Alex Karasulu <ak...@apache.org> wrote:
> > > > > >
> > > > > > Oh yes I agree with you completely.  I was really referring to how
> > > > > > benchmarks are
> > > > > > being used as marketing tools and published to discredit other
> > > > projects.
> > > > > > Also I
> > > > > > believe that there are jewels at java.net as well.  And you read
> > me
> > > > right:
> > > > > > I'm no fan
> > > > > > of SUN nor it's "open source" efforts.
> > > > > >
> > > > > > <OT>
> > > > > > Back in the day when Bill Joy and Scott McNealy were at the helm I
> > had
> > > > a
> > > > > > profound sense of
> > > > > > respect for SUN.  I actually wanted to become an engineer
> > there.  Now,
> > > > > > IMO,
> > > > > > they're a completely
> > > > > > different beast driven by marketing rather than engineering
> > > > principals.  I
> > > > > > feel they resort to base
> > > > > > practices that show a different character than the noble SUN I was
> > > > used
> > > > > > to.
> > > > > > It's sad to know that
> > > > > > the SUN many of us respected and looked up to has long since died.
> > > > > > </OT>
> > > > > >
> > > > > > Regarding benchmarks they are great for internal metrics and
> > shedding
> > > > > > light
> > > > > > on differences in
> > > > > > architecture that could produce more efficient software.  I'm a
> > big
> > > > fan of
> > > > > > competing
> > > > > > against our own releases - meaning benchmarking a baseline and
> > looking
> > > > at
> > > > > > the
> > > > > > performance progression of the software as it evolves with
> > time.  Also
> > > > > > testing other
> > > > > > frameworks is good for just showing how different scenarios are
> > > > handled
> > > > > > better
> > > > > > with different architectures: I agree that we can learn a lot from
> > > > these
> > > > > > tests.
> > > > > >
> > > > > > I just don't want to use metrics to put down other projects.  It's
> > all
> > > > > > about
> > > > > > how you use
> > > > > > the metrics which I think was my intent on the last post.  This
> > > > perhaps is
> > > > > > why I am a
> > > > > > bit disgusted with these tactics which are not in line with open
> > > > source
> > > > > > etiquette but
> > > > > > rather the mark of commercially driven and marketing oriented OSS
> > > > efforts.
> > > > > >
> > > > > > Alex
> > > > > >
> > > > > > On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> > > > > > >
> > > > > > > I agree on the tendency to manipulate benchmarks, but that
> > doesn't
> > > > mean
> > > > > > > benchmarks aren't a useful tool.  How else can we evaluate
> > > > > > performance?  I
> > > > > > > guess I'm most curious about what the two projects might be able
> > to
> > > > > > learn
> > > > > > > from each other.  I would suspect MINA's APIs are significantly
> > > > easier
> > > > > > to
> > > > > > > use than Grizzly's, for example, and it wouldn't surprise me at
> > all
> > > > if
> > > > > > > Sun's
> > > > > > > benchmarks were somewhat accurate.  I hate Sun's java.net projects
> > > > as
> > > > > > much
> > > > > > > as the next guy, but that doesn't mean there's not an occasional
> > > > jewel
> > > > > > in
> > > > > > > there.
> > > > > > >
> > > > > > > It would at least be worth running independent tests.  If the
> > > > > > differences
> > > > > > > are even close to the claims, it would make a ton of sense to
> > just
> > > > copy
> > > > > > > their ideas.  No need for too much pride on either side!  Just
> > seems
> > > > > > like
> > > > > > > they've put a ton of work into rigorously analyzing the
> > performance
> > > > > > > tradeoffs of different design decisions, and it might make sense
> > to
> > > > take
> > > > > > > advantage of that.  If their benchmarks are off and MINA
> > performs
> > > > > > better,
> > > > > > > then they should go ahead and copy MINA.
> > > > > > >
> > > > > > > That's all assuming the performance tweaks don't make the
> > existing
> > > > APIs
> > > > > > > unworkable.
> > > > > > >
> > > > > > > -Adam
> > > > > > >
> > > > > > >
> > > > > > > On 5/24/07, Alex Karasulu <ak...@apache.org> wrote:
> > > > > > > >
> > > > > > > > On 5/24/07, Mladen Turk <mt...@apache.org> wrote:
> > > > > > > > >
> > > > > > > > > Adam Fisk wrote:
> > > > > > > > > > The slides were just posted from this Java One session
> > > > claiming
> > > > > > > > Grizzly
> > > > > > > > > > blows MINA away performance-wise, and I'm just curious as
> > to
> > > > > > > people's
> > > > > > > > > views
> > > > > > > > > > on it.  They present some interesting ideas about
> > optimizing
> > > > > > > selector
> > > > > > > > > > threading and ByteBuffer use.
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > >
> > http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > I love the slide 20!
> > > > > > > > > JFA finally admitted that Tomcat's APR-NIO is faster then
> > JDK
> > > > one ;)
> > > > > > > > > However last time I did benchmarks it was much faster then
> > 10%.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Maybe someone could comment on the performance
> > improvements in
> > > > > > MINA
> > > > > > > > > > 2.0?
> > > > > > > > >
> > > > > > > > > He probably compared MINA's Serial IO, and that is not
> > usable
> > > > > > > > > for production (jet). I wonder how it would look with real
> > > > > > > > > async http server.
> > > > > > > > > Nevertheless, benchmarks are like assholes. Everyone has
> > one.
> > > > > > > >
> > > > > > > >
> > > > > > > > Exactly!
> > > > > > > >
> > > > > > > > Incidentally SUN has been trying to attack several projects
> > via
> > > > the
> > > > > > > > performance angle for
> > > > > > > > some time now.  Just recently I received a cease and desist
> > letter
> > > > > > from
> > > > > > > > them
> > > > > > > > when I
> > > > > > > > compiled some performance metrics.  The point behind it is was
> > > > that we
> > > > > > > > were
> > > > > > > > not correctly
> > > > > > > > configuring their products.  I guess they just want to make
> > sure
> > > > > > things
> > > > > > > > are
> > > > > > > > setup to their
> > > > > > > > advantage.  That's what all these metrics revolve around and
> > if
> > > > you
> > > > > > ask
> > > > > > > me
> > > > > > > > they're not worth
> > > > > > > > a damn.  There is a million ways to make one product perform
> > > > better
> > > > > > than
> > > > > > > > another depending
> > > > > > > > on configuration, environment and the application.  However is
> > raw
> > > > > > > > performance metrics as
> > > > > > > > important as a good flexible design?
> > > > > > > >
> > > > > > > > Alex
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: grizzly versus mina

Posted by Adam Fisk <ad...@gmail.com>.
Oh I see.  That is certainly odd.  Maybe the previous post about Tomcat IO
being faster than Java IO is a clue?

On 5/24/07, John Preston <by...@gmail.com> wrote:
>
> My thought was that when comparing Glassfish that is built on top of
> Grizzly, and Tomcat with it own NIO engine you only get 10%
> improvement. But when you compare AsyncWeb on top of MINA or Grizzly
> you get a 50% difference. That would tell me that MINA is way slower
> than the IO engine for Tomcat. But I haven't seen this.
>
> John
>
> On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> > The benchmark was swapping MINA and Grizzly, both using AsyncWeb...  I
> think
> > you're maybe thinking of Grizzly as synonymous with Glassfish?  They
> pulled
> > it out into a generic NIO framework along the lines of MINA.
> >
> > On 5/24/07, John Preston <by...@gmail.com> wrote:
> > >
> > > OK. I was looking at the Tomcat vd grizzly benchmark. But then its a
> > > bit strange. If your'e only 10% faster than tomcat but 50% faster than
> > > MINA. That 50% is with AsyncWeb on MINA. So its not a bench mark of
> > > MINA alone the application on MINA.
> > >
> > > I chose MINA for a simple fast scalable server that would server up
> > > data files via HTTP requests and MINA for me at the time (about 1 year
> > > ago) was the quickest, most simple to use. I remember trying tomcat
> > > but it was too big and wasn't that fast for simple responses, so I'm
> > > not sure that the 50% is MINA or AsyncWeb.
> > >
> > > I also agree java.net has some very useful projects, and for me, I
> > > appreciate being able to read other implementation details and see
> > > whether they have any use for me. Also lets remember that SUN, like
> > > everybody else has the right to beat their chest and say they are the
> > > best. Its for us to ignore them when we see that its more ego than
> > > anything substantial.
> > >
> > > Anyway, back to the matter of benchmarks. it might be nice to have a
> > > set of classes that would allow one to create a test of various
> > > operations using MINA, and so from version to version, patches
> > > included, we could keep track of whether we are improving things.
> > >
> > > John
> > > On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> > > > I hear you.  Sun's generally just annoying.  It would just probably
> be
> > > worth
> > > > taking a look under the hood to see if there's any real magic there
> > > > regardless of all th politics.  Wish I could volunteer to do it, but
> > > I've
> > > > got a startup to run!
> > > >
> > > > Thanks.
> > > >
> > > > -Adam
> > > >
> > > >
> > > > On 5/24/07, Alex Karasulu <ak...@apache.org> wrote:
> > > > >
> > > > > Oh yes I agree with you completely.  I was really referring to how
> > > > > benchmarks are
> > > > > being used as marketing tools and published to discredit other
> > > projects.
> > > > > Also I
> > > > > believe that there are jewels at java.net as well.  And you read
> me
> > > right:
> > > > > I'm no fan
> > > > > of SUN nor it's "open source" efforts.
> > > > >
> > > > > <OT>
> > > > > Back in the day when Bill Joy and Scott McNealy were at the helm I
> had
> > > a
> > > > > profound sense of
> > > > > respect for SUN.  I actually wanted to become an engineer
> there.  Now,
> > > > > IMO,
> > > > > they're a completely
> > > > > different beast driven by marketing rather than engineering
> > > principals.  I
> > > > > feel they resort to base
> > > > > practices that show a different character than the noble SUN I was
> > > used
> > > > > to.
> > > > > It's sad to know that
> > > > > the SUN many of us respected and looked up to has long since died.
> > > > > </OT>
> > > > >
> > > > > Regarding benchmarks they are great for internal metrics and
> shedding
> > > > > light
> > > > > on differences in
> > > > > architecture that could produce more efficient software.  I'm a
> big
> > > fan of
> > > > > competing
> > > > > against our own releases - meaning benchmarking a baseline and
> looking
> > > at
> > > > > the
> > > > > performance progression of the software as it evolves with
> time.  Also
> > > > > testing other
> > > > > frameworks is good for just showing how different scenarios are
> > > handled
> > > > > better
> > > > > with different architectures: I agree that we can learn a lot from
> > > these
> > > > > tests.
> > > > >
> > > > > I just don't want to use metrics to put down other projects.  It's
> all
> > > > > about
> > > > > how you use
> > > > > the metrics which I think was my intent on the last post.  This
> > > perhaps is
> > > > > why I am a
> > > > > bit disgusted with these tactics which are not in line with open
> > > source
> > > > > etiquette but
> > > > > rather the mark of commercially driven and marketing oriented OSS
> > > efforts.
> > > > >
> > > > > Alex
> > > > >
> > > > > On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> > > > > >
> > > > > > I agree on the tendency to manipulate benchmarks, but that
> doesn't
> > > mean
> > > > > > benchmarks aren't a useful tool.  How else can we evaluate
> > > > > performance?  I
> > > > > > guess I'm most curious about what the two projects might be able
> to
> > > > > learn
> > > > > > from each other.  I would suspect MINA's APIs are significantly
> > > easier
> > > > > to
> > > > > > use than Grizzly's, for example, and it wouldn't surprise me at
> all
> > > if
> > > > > > Sun's
> > > > > > benchmarks were somewhat accurate.  I hate Sun's java.net projects
> > > as
> > > > > much
> > > > > > as the next guy, but that doesn't mean there's not an occasional
> > > jewel
> > > > > in
> > > > > > there.
> > > > > >
> > > > > > It would at least be worth running independent tests.  If the
> > > > > differences
> > > > > > are even close to the claims, it would make a ton of sense to
> just
> > > copy
> > > > > > their ideas.  No need for too much pride on either side!  Just
> seems
> > > > > like
> > > > > > they've put a ton of work into rigorously analyzing the
> performance
> > > > > > tradeoffs of different design decisions, and it might make sense
> to
> > > take
> > > > > > advantage of that.  If their benchmarks are off and MINA
> performs
> > > > > better,
> > > > > > then they should go ahead and copy MINA.
> > > > > >
> > > > > > That's all assuming the performance tweaks don't make the
> existing
> > > APIs
> > > > > > unworkable.
> > > > > >
> > > > > > -Adam
> > > > > >
> > > > > >
> > > > > > On 5/24/07, Alex Karasulu <ak...@apache.org> wrote:
> > > > > > >
> > > > > > > On 5/24/07, Mladen Turk <mt...@apache.org> wrote:
> > > > > > > >
> > > > > > > > Adam Fisk wrote:
> > > > > > > > > The slides were just posted from this Java One session
> > > claiming
> > > > > > > Grizzly
> > > > > > > > > blows MINA away performance-wise, and I'm just curious as
> to
> > > > > > people's
> > > > > > > > views
> > > > > > > > > on it.  They present some interesting ideas about
> optimizing
> > > > > > selector
> > > > > > > > > threading and ByteBuffer use.
> > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > >
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> > > > > > > > >
> > > > > > > >
> > > > > > > > I love the slide 20!
> > > > > > > > JFA finally admitted that Tomcat's APR-NIO is faster then
> JDK
> > > one ;)
> > > > > > > > However last time I did benchmarks it was much faster then
> 10%.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Maybe someone could comment on the performance
> improvements in
> > > > > MINA
> > > > > > > > > 2.0?
> > > > > > > >
> > > > > > > > He probably compared MINA's Serial IO, and that is not
> usable
> > > > > > > > for production (jet). I wonder how it would look with real
> > > > > > > > async http server.
> > > > > > > > Nevertheless, benchmarks are like assholes. Everyone has
> one.
> > > > > > >
> > > > > > >
> > > > > > > Exactly!
> > > > > > >
> > > > > > > Incidentally SUN has been trying to attack several projects
> via
> > > the
> > > > > > > performance angle for
> > > > > > > some time now.  Just recently I received a cease and desist
> letter
> > > > > from
> > > > > > > them
> > > > > > > when I
> > > > > > > compiled some performance metrics.  The point behind it is was
> > > that we
> > > > > > > were
> > > > > > > not correctly
> > > > > > > configuring their products.  I guess they just want to make
> sure
> > > > > things
> > > > > > > are
> > > > > > > setup to their
> > > > > > > advantage.  That's what all these metrics revolve around and
> if
> > > you
> > > > > ask
> > > > > > me
> > > > > > > they're not worth
> > > > > > > a damn.  There is a million ways to make one product perform
> > > better
> > > > > than
> > > > > > > another depending
> > > > > > > on configuration, environment and the application.  However is
> raw
> > > > > > > performance metrics as
> > > > > > > important as a good flexible design?
> > > > > > >
> > > > > > > Alex
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

Re: grizzly versus mina

Posted by John Preston <by...@gmail.com>.
My thought was that when comparing Glassfish, which is built on top of
Grizzly, against Tomcat with its own NIO engine, you only get a 10%
improvement. But when you compare AsyncWeb on top of MINA versus Grizzly,
you get a 50% difference. That would tell me that MINA is way slower
than the IO engine for Tomcat. But I haven't seen this.

John

On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> The benchmark was swapping MINA and Grizzly, both using AsyncWeb...  I think
> you're maybe thinking of Grizzly as synonymous with Glassfish?  They pulled
> it out into a generic NIO framework along the lines of MINA.
>
> On 5/24/07, John Preston <by...@gmail.com> wrote:
> >
> > OK. I was looking at the Tomcat vd grizzly benchmark. But then its a
> > bit strange. If your'e only 10% faster than tomcat but 50% faster than
> > MINA. That 50% is with AsyncWeb on MINA. So its not a bench mark of
> > MINA alone the application on MINA.
> >
> > I chose MINA for a simple fast scalable server that would server up
> > data files via HTTP requests and MINA for me at the time (about 1 year
> > ago) was the quickest, most simple to use. I remember trying tomcat
> > but it was too big and wasn't that fast for simple responses, so I'm
> > not sure that the 50% is MINA or AsyncWeb.
> >
> > I also agree java.net has some very useful projects, and for me, I
> > appreciate being able to read other implementation details and see
> > whether they have any use for me. Also lets remember that SUN, like
> > everybody else has the right to beat their chest and say they are the
> > best. Its for us to ignore them when we see that its more ego than
> > anything substantial.
> >
> > Anyway, back to the matter of benchmarks. it might be nice to have a
> > set of classes that would allow one to create a test of various
> > operations using MINA, and so from version to version, patches
> > included, we could keep track of whether we are improving things.
> >
> > John

Re: grizzly versus mina

Posted by Adam Fisk <ad...@gmail.com>.
The benchmark was swapping MINA and Grizzly, both using AsyncWeb...  I think
you're maybe thinking of Grizzly as synonymous with GlassFish?  They pulled
it out into a generic NIO framework along the lines of MINA.

On 5/24/07, John Preston <by...@gmail.com> wrote:
>
> OK. I was looking at the Tomcat vs. Grizzly benchmark. But then it's a
> bit strange: if you're only 10% faster than Tomcat, how are you 50%
> faster than MINA? That 50% is with AsyncWeb on MINA, so it's not a
> benchmark of MINA alone but of the application on MINA.
>
> I chose MINA for a simple, fast, scalable server that would serve up
> data files via HTTP requests, and MINA at the time (about 1 year ago)
> was the quickest and simplest to use. I remember trying Tomcat, but it
> was too big and wasn't that fast for simple responses, so I'm not sure
> whether that 50% is down to MINA or to AsyncWeb.
>
> I also agree java.net has some very useful projects, and for me, I
> appreciate being able to read other implementation details and see
> whether they have any use for me. Also let's remember that SUN, like
> everybody else, has the right to beat their chest and say they are the
> best. It's for us to ignore them when we see that it's more ego than
> anything substantial.
>
> Anyway, back to the matter of benchmarks: it might be nice to have a
> set of classes that would allow one to create a test of various
> operations using MINA, so that from version to version, patches
> included, we could keep track of whether we are improving things.
>
> John

Re: grizzly versus mina

Posted by John Preston <by...@gmail.com>.
OK. I was looking at the Tomcat vs. Grizzly benchmark. But then it's a
bit strange: if you're only 10% faster than Tomcat, how are you 50%
faster than MINA? That 50% is with AsyncWeb on MINA, so it's not a
benchmark of MINA alone but of the application on MINA.

I chose MINA for a simple, fast, scalable server that would serve up
data files via HTTP requests, and MINA at the time (about 1 year ago)
was the quickest and simplest to use. I remember trying Tomcat, but it
was too big and wasn't that fast for simple responses, so I'm not sure
whether that 50% is down to MINA or to AsyncWeb.

I also agree java.net has some very useful projects, and for me, I
appreciate being able to read other implementation details and see
whether they have any use for me. Also let's remember that SUN, like
everybody else, has the right to beat their chest and say they are the
best. It's for us to ignore them when we see that it's more ego than
anything substantial.

Anyway, back to the matter of benchmarks: it might be nice to have a
set of classes that would allow one to create a test of various
operations using MINA, so that from version to version, patches
included, we could keep track of whether we are improving things.
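Such a harness could start very small. A sketch in plain Java follows
(all names here are hypothetical and nothing below is MINA API; a real
run would drive an actual MINA client/server workload instead of the
stand-in operation):

```java
import java.util.concurrent.TimeUnit;

/**
 * Rough sketch of a version-to-version benchmark harness: time an
 * operation, report throughput, and compare it against a recorded
 * baseline so regressions show up from release to release.
 */
public class Benchmark {

    /** Time a workload and return operations per second. */
    public static double run(Runnable op, int iterations) {
        // Warm up first so the JIT has compiled the hot path before timing.
        for (int i = 0; i < iterations / 10; i++) {
            op.run();
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            op.run();
        }
        long elapsedNanos = System.nanoTime() - start;
        return iterations * (double) TimeUnit.SECONDS.toNanos(1) / elapsedNanos;
    }

    /** Percentage change of current vs. a recorded baseline (positive = faster). */
    public static double changePercent(double baseline, double current) {
        return (current - baseline) / baseline * 100.0;
    }

    public static void main(String[] args) {
        // Stand-in workload; substitute a real request/response cycle.
        double opsPerSec = run(() -> new StringBuilder("ping").reverse(), 100_000);
        System.out.printf("ops/sec: %.0f%n", opsPerSec);
        // E.g. a baseline of 10,000 req/s vs. 15,000 req/s on trunk:
        System.out.printf("change: %.0f%%%n", changePercent(10_000, 15_000)); // 50%
    }
}
```

Recording the ops/sec figure per release, plus the change against the
previous baseline, would make a 10% vs. 50% claim something we could
check for ourselves.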

John
On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> I hear you.  Sun's generally just annoying.  It would just probably be worth
> taking a look under the hood to see if there's any real magic there
> regardless of all the politics. Wish I could volunteer to do it, but I've
> got a startup to run!
>
> Thanks.
>
> -Adam

Re: grizzly versus mina

Posted by Adam Fisk <ad...@gmail.com>.
I hear you.  Sun's generally just annoying.  It would just probably be worth
taking a look under the hood to see if there's any real magic there
regardless of all the politics.  Wish I could volunteer to do it, but I've
got a startup to run!

Thanks.

-Adam


On 5/24/07, Alex Karasulu <ak...@apache.org> wrote:
>
> Oh yes I agree with you completely.  I was really referring to how
> benchmarks are
> being used as marketing tools and published to discredit other projects.
> Also I
> believe that there are jewels at java.net as well.  And you read me right:
> I'm no fan
> of SUN nor its "open source" efforts.
>
> <OT>
> Back in the day when Bill Joy and Scott McNealy were at the helm I had a
> profound sense of
> respect for SUN.  I actually wanted to become an engineer there.  Now,
> IMO,
> they're a completely
> different beast driven by marketing rather than engineering principles.  I
> feel they resort to base
> practices that show a different character than the noble SUN I was used
> to.
> It's sad to know that
> the SUN many of us respected and looked up to has long since died.
> </OT>
>
> Regarding benchmarks they are great for internal metrics and shedding
> light
> on differences in
> architecture that could produce more efficient software.  I'm a big fan of
> competing
> against our own releases - meaning benchmarking a baseline and looking at
> the
> performance progression of the software as it evolves with time.  Also
> testing other
> frameworks is good for just showing how different scenarios are handled
> better
> with different architectures: I agree that we can learn a lot from these
> tests.
>
> I just don't want to use metrics to put down other projects.  It's all
> about
> how you use
> the metrics which I think was my intent on the last post.  This perhaps is
> why I am a
> bit disgusted with these tactics which are not in line with open source
> etiquette but
> rather the mark of commercially driven and marketing oriented OSS efforts.
>
> Alex
>
> On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> >
> > I agree on the tendency to manipulate benchmarks, but that doesn't mean
> > benchmarks aren't a useful tool.  How else can we evaluate
> performance?  I
> > guess I'm most curious about what the two projects might be able to
> learn
> > from each other.  I would suspect MINA's APIs are significantly easier
> to
> > use than Grizzly's, for example, and it wouldn't surprise me at all if
> > Sun's
> > benchmarks were somewhat accurate.  I hate Sun's java.net projects as
> much
> > as the next guy, but that doesn't mean there's not an occasional jewel
> in
> > there.
> >
> > It would at least be worth running independent tests.  If the
> differences
> > are even close to the claims, it would make a ton of sense to just copy
> > their ideas.  No need for too much pride on either side!  Just seems
> like
> > they've put a ton of work into rigorously analyzing the performance
> > tradeoffs of different design decisions, and it might make sense to take
> > advantage of that.  If their benchmarks are off and MINA performs
> better,
> > then they should go ahead and copy MINA.
> >
> > That's all assuming the performance tweaks don't make the existing APIs
> > unworkable.
> >
> > -Adam
> >
> >
> > On 5/24/07, Alex Karasulu <ak...@apache.org> wrote:
> > >
> > > On 5/24/07, Mladen Turk <mt...@apache.org> wrote:
> > > >
> > > > Adam Fisk wrote:
> > > > > The slides were just posted from this Java One session claiming
> > > Grizzly
> > > > > blows MINA away performance-wise, and I'm just curious as to
> > people's
> > > > views
> > > > > on it.  They present some interesting ideas about optimizing
> > selector
> > > > > threading and ByteBuffer use.
> > > > >
> > > > >
> > > >
> > >
> >
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> > > > >
> > > >
> > > > I love the slide 20!
> > > > JFA finally admitted that Tomcat's APR-NIO is faster then JDK one ;)
> > > > However last time I did benchmarks it was much faster then 10%.
> > > >
> > > > >
> > > > > Maybe someone could comment on the performance improvements in
> MINA
> > > > > 2.0?
> > > >
> > > > He probably compared MINA's Serial IO, and that is not usable
> > > > for production (jet). I wonder how it would look with real
> > > > async http server.
> > > > Nevertheless, benchmarks are like assholes. Everyone has one.
> > >
> > >
> > > Exactly!
> > >
> > > Incidentally SUN has been trying to attack several projects via the
> > > performance angle for
> > > some time now.  Just recently I received a cease and desist letter
> from
> > > them
> > > when I
> > > compiled some performance metrics.  The point behind it is was that we
> > > were
> > > not correctly
> > > configuring their products.  I guess they just want to make sure
> things
> > > are
> > > setup to their
> > > advantage.  That's what all these metrics revolve around and if you
> ask
> > me
> > > they're not worth
> > > a damn.  There is a million ways to make one product perform better
> than
> > > another depending
> > > on configuration, environment and the application.  However is raw
> > > performance metrics as
> > > important as a good flexible design?
> > >
> > > Alex
> > >
> >
>

Re: grizzly versus mina

Posted by Alex Karasulu <ak...@apache.org>.
Oh yes, I agree with you completely.  I was really referring to how
benchmarks are being used as marketing tools and published to discredit
other projects.  Also I believe that there are jewels at java.net as
well.  And you read me right: I'm no fan of SUN nor its "open source"
efforts.

<OT>
Back in the day when Bill Joy and Scott McNealy were at the helm I had a
profound sense of respect for SUN.  I actually wanted to become an
engineer there.  Now, IMO, they're a completely different beast driven
by marketing rather than engineering principles.  I feel they resort to
base practices that show a different character than the noble SUN I was
used to.  It's sad to know that the SUN many of us respected and looked
up to has long since died.
</OT>

Regarding benchmarks, they are great for internal metrics and for
shedding light on differences in architecture that could produce more
efficient software.  I'm a big fan of competing against our own
releases - meaning benchmarking a baseline and tracking the performance
progression of the software as it evolves over time.  Also, testing
other frameworks is good for showing how different scenarios are
handled better with different architectures: I agree that we can learn
a lot from these tests.

I just don't want to use metrics to put down other projects.  It's all
about how you use the metrics, which I think was the intent of my last
post.  This perhaps is why I am a bit disgusted with these tactics,
which are not in line with open source etiquette but are rather the
mark of commercially driven, marketing-oriented OSS efforts.

Alex

On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
>
> I agree on the tendency to manipulate benchmarks, but that doesn't mean
> benchmarks aren't a useful tool.  How else can we evaluate performance?  I
> guess I'm most curious about what the two projects might be able to learn
> from each other.  I would suspect MINA's APIs are significantly easier to
> use than Grizzly's, for example, and it wouldn't surprise me at all if
> Sun's
> benchmarks were somewhat accurate.  I hate Sun's java.net projects as much
> as the next guy, but that doesn't mean there's not an occasional jewel in
> there.
>
> It would at least be worth running independent tests.  If the differences
> are even close to the claims, it would make a ton of sense to just copy
> their ideas.  No need for too much pride on either side!  Just seems like
> they've put a ton of work into rigorously analyzing the performance
> tradeoffs of different design decisions, and it might make sense to take
> advantage of that.  If their benchmarks are off and MINA performs better,
> then they should go ahead and copy MINA.
>
> That's all assuming the performance tweaks don't make the existing APIs
> unworkable.
>
> -Adam

Re: grizzly versus mina

Posted by Adam Fisk <ad...@gmail.com>.
I agree on the tendency to manipulate benchmarks, but that doesn't mean
benchmarks aren't a useful tool.  How else can we evaluate performance?  I
guess I'm most curious about what the two projects might be able to learn
from each other.  I would suspect MINA's APIs are significantly easier to
use than Grizzly's, for example, and it wouldn't surprise me at all if Sun's
benchmarks were somewhat accurate.  I hate Sun's java.net projects as much
as the next guy, but that doesn't mean there's not an occasional jewel in
there.

It would at least be worth running independent tests.  If the differences
are even close to the claims, it would make a ton of sense to just copy
their ideas.  No need for too much pride on either side!  Just seems like
they've put a ton of work into rigorously analyzing the performance
tradeoffs of different design decisions, and it might make sense to take
advantage of that.  If their benchmarks are off and MINA performs better,
then they should go ahead and copy MINA.

That's all assuming the performance tweaks don't make the existing APIs
unworkable.

-Adam


On 5/24/07, Alex Karasulu <ak...@apache.org> wrote:
>
> Exactly!
>
> Incidentally SUN has been trying to attack several projects via the
> performance angle for some time now.  Just recently I received a cease
> and desist letter from them when I compiled some performance metrics.
> The point behind it was that we were not correctly configuring their
> products.  I guess they just want to make sure things are set up to
> their advantage.  That's what all these metrics revolve around, and if
> you ask me they're not worth a damn.  There are a million ways to make
> one product perform better than another depending on configuration,
> environment, and the application.  However, are raw performance metrics
> as important as a good flexible design?
>
> Alex
>

Re: grizzly versus mina

Posted by Alex Karasulu <ak...@apache.org>.
On 5/24/07, Mladen Turk <mt...@apache.org> wrote:
>
> Adam Fisk wrote:
> > The slides were just posted from this Java One session claiming Grizzly
> > blows MINA away performance-wise, and I'm just curious as to people's
> > views on it.  They present some interesting ideas about optimizing
> > selector threading and ByteBuffer use.
> >
> >
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> >
>
> I love the slide 20!
> JFA finally admitted that Tomcat's APR-NIO is faster than the JDK one ;)
> However, the last time I did benchmarks the difference was much more than 10%.
>
> >
> > Maybe someone could comment on the performance improvements in MINA
> > 2.0?
>
> He probably compared MINA's Serial IO, and that is not usable
> for production (yet). I wonder how it would look with a real
> async HTTP server.
> Nevertheless, benchmarks are like assholes. Everyone has one.


Exactly!

Incidentally, SUN has been trying to attack several projects via the
performance angle for some time now.  Just recently I received a cease and
desist letter from them when I compiled some performance metrics.  The point
behind it was that we were not correctly configuring their products.  I
guess they just want to make sure things are set up to their advantage.
That's what all these metrics revolve around, and if you ask me they're not
worth a damn.  There are a million ways to make one product perform better
than another depending on configuration, environment and the application.
However, are raw performance metrics as important as a good, flexible
design?

Alex

Re: grizzly versus mina

Posted by Mladen Turk <mt...@apache.org>.
Adam Fisk wrote:
> The slides were just posted from this Java One session claiming Grizzly
> blows MINA away performance-wise, and I'm just curious as to people's views
> on it.  They present some interesting ideas about optimizing selector
> threading and ByteBuffer use.
> 
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5 
> 

I love the slide 20!
JFA finally admitted that Tomcat's APR-NIO is faster than the JDK one ;)
However, the last time I did benchmarks the difference was much more than 10%.

> 
> Maybe someone could comment on the performance improvements in MINA 
> 2.0?

He probably compared MINA's Serial IO, and that is not usable
for production (yet). I wonder how it would look with a real
async HTTP server.
Nevertheless, benchmarks are like assholes. Everyone has one.

Regards,
Mladen.

Re: grizzly versus mina

Posted by John Preston <by...@gmail.com>.
As they say, don't shoot the messenger, deal with the message.

John

On 5/25/07, Ashish Sharma <as...@gmail.com> wrote:
> It's strange to see how X vs. Y wars can generate so many responses.
>
> If they are good we can adapt; if we are good we have nothing to worry about.
>
> On 5/25/07, Trustin Lee <tr...@gmail.com> wrote:
> > Hi Holger,
> >
> > On 5/25/07, Holger Hoffstaette <ho...@wizards.de> wrote:
> > > On Fri, 25 May 2007 08:45:34 +0900, Trustin Lee wrote:
> > >
> > > The HEAD revision of AsyncWeb depends heavily on MINA and has seen
> > > significant performance improvements.  It seems like they used an older
> > > release (or didn't use a lightweight AsyncWeb example).  Moreover, I
> > > don't think MINA has any noticeably big margin for improvement in
> > > performance, at least from the following performance test report.
> > > >
> > > > http://mina.apache.org/performance-test-reports.html
> > >
> > > That's a fantastic report! However it would be fair to the poor old Apache
> > > if you enabled mod_cache and used a newer version (2.2) with a threaded
> > > mpm - I assume you ran the worker model? While that is popular for
> > > deployments for many good reasons, it's not a fair comparison to what Mina
> > > does.
> > > Other than that keep up the great work :)
> >
> > Thanks for pointing out the issues with my tests.  I'd like to run the
> > test again with more powerful machines and update the report someday.
> >
> > Trustin
> > --
> > what we call human nature is actually human habit
> > --
> > http://gleamynode.net/
> > --
> > PGP Key ID: 0x0255ECA6
> >
>

Re: grizzly versus mina

Posted by Ashish Sharma <as...@gmail.com>.
It's strange to see how X vs. Y wars can generate so many responses.

If they are good we can adapt; if we are good we have nothing to worry about.

On 5/25/07, Trustin Lee <tr...@gmail.com> wrote:
> Hi Holger,
>
> On 5/25/07, Holger Hoffstaette <ho...@wizards.de> wrote:
> > On Fri, 25 May 2007 08:45:34 +0900, Trustin Lee wrote:
> >
> > > The HEAD revision of AsyncWeb depends heavily on MINA and has seen
> > > significant performance improvements.  It seems like they used an older
> > > release (or didn't use a lightweight AsyncWeb example).  Moreover, I
> > > don't think MINA has any noticeably big margin for improvement in
> > > performance, at least from the following performance test report.
> > >
> > > http://mina.apache.org/performance-test-reports.html
> >
> > That's a fantastic report! However it would be fair to the poor old Apache
> > if you enabled mod_cache and used a newer version (2.2) with a threaded
> > mpm - I assume you ran the worker model? While that is popular for
> > deployments for many good reasons, it's not a fair comparison to what Mina
> > does.
> > Other than that keep up the great work :)
>
> Thanks for pointing out the issues with my tests.  I'd like to run the
> test again with more powerful machines and update the report someday.
>
> Trustin
> --
> what we call human nature is actually human habit
> --
> http://gleamynode.net/
> --
> PGP Key ID: 0x0255ECA6
>

Re: grizzly versus mina

Posted by Trustin Lee <tr...@gmail.com>.
Hi Holger,

On 5/25/07, Holger Hoffstaette <ho...@wizards.de> wrote:
> On Fri, 25 May 2007 08:45:34 +0900, Trustin Lee wrote:
>
> > The HEAD revision of AsyncWeb depends heavily on MINA and has seen
> > significant performance improvements.  It seems like they used an older
> > release (or didn't use a lightweight AsyncWeb example).  Moreover, I
> > don't think MINA has any noticeably big margin for improvement in
> > performance, at least from the following performance test report.
> >
> > http://mina.apache.org/performance-test-reports.html
>
> That's a fantastic report! However it would be fair to the poor old Apache
> if you enabled mod_cache and used a newer version (2.2) with a threaded
> mpm - I assume you ran the worker model? While that is popular for
> deployments for many good reasons, it's not a fair comparison to what Mina
> does.
> Other than that keep up the great work :)

Thanks for pointing out the issues with my tests.  I'd like to run the
test again with more powerful machines and update the report someday.

Trustin
-- 
what we call human nature is actually human habit
--
http://gleamynode.net/
--
PGP Key ID: 0x0255ECA6

Re: grizzly versus mina

Posted by Holger Hoffstaette <ho...@wizards.de>.
On Fri, 25 May 2007 08:45:34 +0900, Trustin Lee wrote:

> The HEAD revision of AsyncWeb depends heavily on MINA and has seen
> significant performance improvements.  It seems like they used an older
> release (or didn't use a lightweight AsyncWeb example).  Moreover, I don't
> think MINA has any noticeably big margin for improvement in performance,
> at least from the following performance test report.
> 
> http://mina.apache.org/performance-test-reports.html

That's a fantastic report! However it would be fair to the poor old Apache
if you enabled mod_cache and used a newer version (2.2) with a threaded
mpm - I assume you ran the worker model? While that is popular for
deployments for many good reasons, it's not a fair comparison to what Mina
does.
Other than that keep up the great work :)

-h



Re: grizzly versus mina

Posted by Trustin Lee <tr...@gmail.com>.
Hi guys,

On 5/25/07, peter royal <pr...@apache.org> wrote:
> On May 24, 2007, at 8:45 AM, Adam Fisk wrote:
> > The slides were just posted from this Java One session claiming
> > Grizzly
> > blows MINA away performance-wise, and I'm just curious as to
> > people's views
> > on it.  They present some interesting ideas about optimizing selector
> > threading and ByteBuffer use.
> >
> > http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?
> > sessn=TS-2992&yr=2007&track=5
>
> I just read over that this morning..
>
> Without really knowing more details, my first blush is that on HTTP,
> Grizzly is probably faster due to the fact that it was initially
> tuned for HTTP. We know that MINA has extra latency wrt accepting
> connections, so that's a definite spot that grizzly can beat us out at.
>
> MINA was designed to be protocol agnostic from day one, whereas
> Grizzly was designed for HTTP. (They have been working to remove the
> HTTP-centric nature of the design from what I read on their blogs,
> and their 1.5 release has easy support for doing any kind of protocol
> now)

I haven't read the slides yet.  Let me read them soon...

The HEAD revision of AsyncWeb depends heavily on MINA and has seen
significant performance improvements.  It seems like they used an older
release (or didn't use a lightweight AsyncWeb example).  Moreover, I
don't think MINA has any noticeably big margin for improvement in
performance, at least judging from the following performance test
report.

http://mina.apache.org/performance-test-reports.html

Trustin
-- 
what we call human nature is actually human habit
--
http://gleamynode.net/
--
PGP Key ID: 0x0255ECA6

Re: grizzly versus mina

Posted by peter royal <pr...@apache.org>.
On May 24, 2007, at 8:45 AM, Adam Fisk wrote:
> The slides were just posted from this Java One session claiming  
> Grizzly
> blows MINA away performance-wise, and I'm just curious as to  
> people's views
> on it.  They present some interesting ideas about optimizing selector
> threading and ByteBuffer use.
>
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp? 
> sessn=TS-2992&yr=2007&track=5

I just read over that this morning..

Without really knowing more details, my first blush is that on HTTP,  
Grizzly is probably faster due to the fact that it was initially  
tuned for HTTP. We know that MINA has extra latency wrt accepting  
connections, so that's a definite spot where Grizzly can beat us.

MINA was designed to be protocol agnostic from day one, whereas  
Grizzly was designed for HTTP. (They have been working to remove the  
HTTP-centric nature of the design from what I read on their blogs,  
and their 1.5 release has easy support for doing any kind of protocol  
now)
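Concretely, the selector-threading idea under discussion amounts to keeping
a dedicated acceptor selector and handing accepted channels off to separate
reader selectors, so accept latency is decoupled from read traffic.  A
rough Java NIO sketch (names like `AcceptorSketch` are mine; this is not
code from MINA or Grizzly):

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Sketch: one selector dedicated to OP_ACCEPT, with accepted channels
// distributed round-robin over a pool of reader selectors.
public final class AcceptorSketch {
    private final Selector[] readers;
    private int next;

    public AcceptorSketch(int readerCount) throws IOException {
        readers = new Selector[readerCount];
        for (int i = 0; i < readerCount; i++) {
            readers[i] = Selector.open();
        }
    }

    // Pick the reader selector for the next accepted connection.
    Selector nextReader() {
        Selector s = readers[next];
        next = (next + 1) % readers.length;
        return s;
    }

    public void acceptLoop(ServerSocketChannel server) throws IOException {
        Selector acceptSelector = Selector.open();
        server.configureBlocking(false);
        server.register(acceptSelector, SelectionKey.OP_ACCEPT);
        while (acceptSelector.isOpen()) {
            acceptSelector.select();
            for (SelectionKey key : acceptSelector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
                    if (ch != null) {
                        ch.configureBlocking(false);
                        // In real code, registration must be coordinated with
                        // the reader's select loop (wakeup + pending queue);
                        // elided here for brevity.
                        ch.register(nextReader(), SelectionKey.OP_READ);
                    }
                }
            }
            acceptSelector.selectedKeys().clear();
        }
    }
}
```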

-pete


-- 
proyal@apache.org - http://fotap.org/~osi




Re: grizzly versus mina

Posted by Adam Fisk <ad...@gmail.com>.
I just wanted to reiterate that I didn't mean to be discouraging to the MINA
devs at all.  If I hadn't come across the Grizzly tidbit, I was planning on
posting a "thank you" for writing such a kick-ass framework that's saving me
a ton of time, money, and stress.

Asyncweb rocks too, by the way.  I'm now grabbing your handy state-machine
code for my SIP server, since SIP and HTTP message parsing is almost
identical.  I'll hopefully get around to adapting it to a generic HTTP-style
protocol parsing framework, but no promises yet!

If anyone else has leaned towards writing a state machine for their protocol
parsing, I'd recommend checking out the AsyncWeb code.
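To give a flavour of the approach: both SIP and HTTP messages are a start
line, headers, then a blank line, so the parser states map onto that
structure directly.  A toy sketch (the `HeadParser` name is mine; the real
AsyncWeb decoder is considerably more complete):

```java
import java.util.ArrayList;
import java.util.List;

// A toy state machine for HTTP/SIP-style message heads: a start line,
// then headers, terminated by a blank line.
public final class HeadParser {
    enum State { START_LINE, HEADERS, DONE }

    private State state = State.START_LINE;
    private String startLine;
    private final List<String> headers = new ArrayList<>();

    // Feed one decoded line (without CRLF); returns true once the head ends.
    public boolean onLine(String line) {
        switch (state) {
            case START_LINE:
                startLine = line;
                state = State.HEADERS;
                break;
            case HEADERS:
                if (line.isEmpty()) {
                    state = State.DONE;   // blank line ends the head
                } else {
                    headers.add(line);
                }
                break;
            case DONE:
                break;
        }
        return state == State.DONE;
    }

    public String startLine() { return startLine; }
    public List<String> headers() { return headers; }
}
```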

All the Best,

-Adam


On 5/25/07, Ersin Er <er...@gmail.com> wrote:
>
> Hi,
>
> MINA started a new trend in NIO based frameworks. It's elegant and I
> am sure it can be further optimized for performance. What I care now
> is how easily I can integrate it into my application and MINA with new
> enhancements seems to be quite cool in this job.
>
> And it's Apache!
>
> On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> > The slides were just posted from this Java One session claiming Grizzly
> > blows MINA away performance-wise, and I'm just curious as to people's
> > views on it.  They present some interesting ideas about optimizing
> > selector threading and ByteBuffer use.
> >
> >
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> >
> > Maybe someone could comment on the performance improvements in MINA
> > 2.0?  It might also be useful to look at Grizzly's techniques to see if
> > MINA could incorporate them.  I know at least Scott Oaks from Grizzly is
> > a solid performance guy, so their numbers are likely correct.
> >
> > Quick note:  I'm not trying to spark a Grizzly/MINA battle by any
> > means.  I just started using MINA after having implemented several
> > generic NIO frameworks myself, and I absolutely love MINA's approach.
> > It allowed me to code a STUN server in 2 days, and I'm porting my SIP
> > server now.
> >
> > Thanks,
> >
> > Adam
> >
>
>
> --
> Ersin
>

Re: grizzly versus mina

Posted by Ersin Er <er...@gmail.com>.
Hi,

MINA started a new trend in NIO based frameworks. It's elegant and I
am sure it can be further optimized for performance. What I care now
is how easily I can integrate it into my application and MINA with new
enhancements seems to be quite cool in this job.

And it's Apache!

On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> The slides were just posted from this Java One session claiming Grizzly
> blows MINA away performance-wise, and I'm just curious as to people's views
> on it.  They present some interesting ideas about optimizing selector
> threading and ByteBuffer use.
>
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
>
> Maybe someone could comment on the performance improvements in MINA 2.0?  It
> might also be useful to look at Grizzly's techniques to see if MINA could
> incorporate them.  I know at least Scott Oaks from Grizzly is a solid
> performance guy, so their numbers are likely correct.
>
> Quick note:  I'm not trying to spark a Grizzly/MINA battle by any means.  I
> just started using MINA after having implemented several generic NIO
> frameworks myself, and I absolutely love MINA's approach.  It allowed me to
> code a STUN server in 2 days, and I'm porting my SIP server now.
>
> Thanks,
>
> Adam
>


-- 
Ersin

Re: grizzly versus mina

Posted by Julien Vermillard <jv...@archean.fr>.
On Thursday 24 May 2007 at 11:45 -0400, Adam Fisk wrote:
> The slides were just posted from this Java One session claiming Grizzly
> blows MINA away performance-wise, and I'm just curious as to people's views
> on it.  They present some interesting ideas about optimizing selector
> threading and ByteBuffer use.
> 
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
> 
> Maybe someone could comment on the performance improvements in MINA 2.0?  It
> might also be useful to look at Grizzly's techniques to see if MINA could
> incorporate them.  I know at least Scott Oaks from Grizzly is a solid
> performance guy, so their numbers are likely correct.
> 
> Quick note:  I'm not trying to spark a Grizzly/MINA battle by any means.  I
> just started using MINA after having implemented several generic NIO
> frameworks myself, and I absolutely love MINA's approach.  It allowed me to
> code a STUN server in 2 days, and I'm porting my SIP server now.
> 
> Thanks,
> 
> Adam

Do you think they would mind publishing the tested code, so we can take a
look?

It sounds very funny, especially the comparison to a 'C based server'
without more details.

Julien


Re: grizzly versus mina

Posted by Emmanuel Lecharny <el...@apache.org>.
Mehmet D. AKIN wrote:

>
> Still, the benchmarks presented in that document seem to contradict
> Trustin's benchmarks.  The presentation says Grizzly with AsyncWeb is just
> a little faster than a "C based web server", but almost two times faster
> and more scalable than MINA.  Yet Trustin's test shows that the MINA-based
> AsyncWeb server is a little faster than a C-based server (Apache HTTP), so
> something fishy is going on here...
>
> If only we had a wider benchmark suite that tested different types of
> protocols and loads, instead of microbenchmarks.
>
> Mehmet


I think that Mladen just summarized the situation with benchmarks:
everyone has one... :)

From my POV, and as I have conducted a lot of benchmarks on the Apache
Directory Project, benchmarks are good for internal use, just to see how
good you are or how far you are from the competitors, or to be able to
answer stupid questions like "can you handle 100 000 requests per second?"
when you *know* that your potential client expects 100 req/s max ;) ...

Nothing more. But doing benchmarks, fixing the code, and getting better
numbers is definitely great fun!

Emmanuel


Re: grizzly versus mina

Posted by "Mehmet D. AKIN" <md...@gmail.com>.
On 5/24/07, Adam Fisk <ad...@gmail.com> wrote:
> The slides were just posted from this Java One session claiming Grizzly
> blows MINA away performance-wise, and I'm just curious as to people's views
> on it.  They present some interesting ideas about optimizing selector
> threading and ByteBuffer use.
>
> http://developers.sun.com/learning/javaoneonline/j1sessn.jsp?sessn=TS-2992&yr=2007&track=5
>
> Maybe someone could comment on the performance improvements in MINA 2.0?  It
> might also be useful to look at Grizzly's techniques to see if MINA could
> incorporate them.  I know at least Scott Oaks from Grizzly is a solid
> performance guy, so their numbers are likely correct.
>
> Quick note:  I'm not trying to spark a Grizzly/MINA battle by any means.  I
> just started using MINA after having implemented several generic NIO
> frameworks myself, and I absolutely love MINA's approach.  It allowed me to
> code a STUN server in 2 days, and I'm porting my SIP server now.
>
> Thanks,
>
> Adam
>

Still, the benchmarks presented in that document seem to contradict
Trustin's benchmarks.  The presentation says Grizzly with AsyncWeb is just
a little faster than a "C based web server", but almost two times faster
and more scalable than MINA.  Yet Trustin's test shows that the MINA-based
AsyncWeb server is a little faster than a C-based server (Apache HTTP), so
something fishy is going on here...

If only we had a wider benchmark suite that tested different types of
protocols and loads, instead of microbenchmarks.

Mehmet