Posted to users@tapestry.apache.org by Rural Hunter <ru...@gmail.com> on 2013/05/07 03:48:09 UTC

Re: Tapestry performance

Here is the latest framework benchmark: 
http://www.techempower.com/benchmarks/#section=data-r4
What do you guys think about it?

On 2013/1/25 9:39, Howard Lewis Ship wrote:
> Performance can be black magic.
>
> On the one hand, I've heard some specific things about Node.js performance
> that are very positive. Certainly, when developing for Node, the virtually
> instant server startup is wonderful. Java has beaten us down into thinking
> that code needs lots of time to start up.
>
> However, Node.js is constantly switching contexts in significant ways. All
> those callbacks executing in, effectively, a chaotic order means that
> memory must be swept into and out of core quite constantly. In a Java-style
> synchronous server, the operating system has more clues about what memory
> should stay resident based on which threads have been recently active.
>   Operating systems are very good at managing virtual memory, when you let
> them!
>
> It's probably impossible to compare things in an apples-to-apples way,
> however. I appreciate the single-thread simplicity of Node ... no locks,
> no deadlocks, no concerns about memory visibility between threads, no
> synchronize, no ConcurrentReadWriteLock, huzzah!
>
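To make the list above concrete: in a threaded Java server even a trivial shared request counter needs explicit concurrency control, where single-threaded Node would use a plain variable. A minimal, purely illustrative sketch (the class is not from any project discussed here):

    import java.util.concurrent.atomic.AtomicLong;

    // Hypothetical request counter shared by all worker threads of a threaded
    // Java server. Without AtomicLong (or synchronized), increments could be
    // lost and other threads might not see the latest value.
    public class RequestStats {
        private final AtomicLong requestCount = new AtomicLong();

        // Called concurrently from many request-handling threads.
        public long recordRequest() {
            return requestCount.incrementAndGet();
        }

        public long total() {
            return requestCount.get();
        }
    }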
> The flip side is that in Node you have to always be on your toes w.r.t.
> writing everything in the callback style. That's not everyone's cup of tea
> ... and in some cases it can be truly complicated to manage, especially
> figuring out how to respond to errors when you are in the middle of an
> indeterminate number of partially executed workflows.
>
>
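To make the "errors in the middle of a partially executed workflow" point concrete, here is a minimal sketch in Java (NIO.2 asynchronous channels rather than Node, but the shape is the same): every step is a callback, and every callback needs its own failure path. Illustrative only; nothing here comes from the projects being discussed.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousServerSocketChannel;
    import java.nio.channels.AsynchronousSocketChannel;
    import java.nio.channels.CompletionHandler;

    public class CallbackEchoServer {
        public static void main(String[] args) throws Exception {
            AsynchronousServerSocketChannel server =
                    AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(8080));

            // Step 1: accept is a callback...
            server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
                @Override
                public void completed(AsynchronousSocketChannel client, Void att) {
                    server.accept(null, this); // keep accepting further connections
                    ByteBuffer buf = ByteBuffer.allocate(1024);

                    // Step 2: ...which starts a read with another callback...
                    client.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                        @Override
                        public void completed(Integer bytesRead, ByteBuffer b) {
                            if (bytesRead < 0) { closeQuietly(client); return; }
                            b.flip();
                            // Step 3: ...which starts a write with yet another callback.
                            client.write(b, b, new CompletionHandler<Integer, ByteBuffer>() {
                                @Override
                                public void completed(Integer bytesWritten, ByteBuffer b2) {
                                    closeQuietly(client);
                                }
                                @Override
                                public void failed(Throwable exc, ByteBuffer b2) {
                                    // Write failed mid-workflow: the read already succeeded,
                                    // so the cleanup logic has to know how far we got.
                                    closeQuietly(client);
                                }
                            });
                        }
                        @Override
                        public void failed(Throwable exc, ByteBuffer b) {
                            closeQuietly(client); // read failed: a separate error path
                        }
                    });
                }
                @Override
                public void failed(Throwable exc, Void att) {
                    // accept failed: yet another error path
                }
            });

            // The main thread is done; everything else happens in callbacks.
            Thread.currentThread().join();
        }

        private static void closeQuietly(AsynchronousSocketChannel ch) {
            try { ch.close(); } catch (IOException ignored) { }
        }
    }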
> On Thu, Jan 24, 2013 at 5:05 PM, Lenny Primak <lp...@hope.nyc.ny.us> wrote:
>
>> I was also surprised by the results when I saw them.
>> This was a C++ project, not Java, but the performance characteristics
>> wouldn't be any different.
>> This was also a proprietary project for one of my old companies, so I can't
>> divulge anything else about it.
>> What I can tell you is that it comes down to a very simple fact:
>> in all async servers, any single I/O operation comes down to two calls:
>> poll() (or whatever equivalent you want to use) and read()/write().
>> There are also setup costs for poll() or its equivalents that are not
>> present in a synchronous server.
>> With a synchronous server there is no poll(), just the read and write, so
>> the overhead of poll() and its setup is eliminated.
>> Now it all comes down to OS threading and I/O performance, and the real
>> surprise was that multiple threads, even hundreds of thousands of them,
>> all doing I/O, were not bogging down the system at all.
>>
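A rough sketch of the two shapes being described, in Java (the class and method names are mine, purely for illustration): the synchronous server just blocks in read(), while the selector-based server pays for a select()/poll() pass, plus channel registration, before each batch of reads.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.SocketChannel;
    import java.util.Iterator;

    public class TwoServerShapes {

        // Synchronous, thread-per-connection: each I/O is just a blocking read().
        static void handleBlocking(Socket socket) throws IOException {
            byte[] buf = new byte[1024];
            InputStream in = socket.getInputStream();
            int n;
            while ((n = in.read(buf)) != -1) {   // one syscall per read
                // process n bytes ...
            }
        }

        static void threadPerConnection(ServerSocket server) throws IOException {
            while (true) {
                Socket socket = server.accept();
                new Thread(() -> {
                    try { handleBlocking(socket); } catch (IOException ignored) { }
                }).start();
            }
        }

        // Asynchronous, selector-based: every read is preceded by a select()
        // (the poll() equivalent), plus the cost of registering channels.
        static void selectorLoop(Selector selector) throws IOException {
            ByteBuffer buf = ByteBuffer.allocate(1024);
            while (true) {
                selector.select();                       // extra syscall: poll/epoll
                Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                while (keys.hasNext()) {
                    SelectionKey key = keys.next();
                    keys.remove();
                    if (key.isReadable()) {
                        SocketChannel ch = (SocketChannel) key.channel();
                        buf.clear();
                        ch.read(buf);                    // then the actual read
                        // process buf ...
                    }
                }
            }
        }
    }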
>> I know there is a lot of hype right now around async servers, but in the
>> real world, async is just slow.
>>
>> I also believe that lightweight threads (green threads) were eliminated
>> from the JVM a long time ago.
>>
>> On Jan 24, 2013, at 7:51 PM, Robert Zeigler wrote:
>>
>>> I find this very difficult to swallow, at least for Java apps, unless,
>>> maybe, you're using a Java implementation that uses native threads instead
>>> of lightweight Java threads; then I might believe it. I would also believe
>>> it if the async server is poorly written, and I can believe that many an
>>> async server is poorly written. It also depends a LOT on whether your
>>> connections are short- or long-lived. For something like a web server, where
>>> you typically have very short-lived client connections, I can also believe
>>> this. I'm rather skeptical of general claims that an async server is
>>> slower, and would love to see some of the "space research project" worth of
>>> data backing the claims.
>>> Robert
>>>
>>>> On Jan 24, 2013, at 9:29 AM, Lenny Primak <lp...@hope.nyc.ny.us> wrote:
>>>> I've done extensive (no, not just extensive: really, really extensive,
>>>> worthy-of-a-space-research-project extensive) testing of async I/O
>>>> performance vs. threaded server performance.
>>>> The conclusion is that unless you have over 10,000 active users, async
>>>> I/O gives about half the performance of the usual thread-per-connection
>>>> approach.
>>>> By active users I mean connections that are actually doing I/O all the
>>>> time, as opposed to connections just sitting idle.
>>>> If you really, really do have that many users (an amazon.com-type shop),
>>>> your bottleneck won't be at the web server level anyway, so the right
>>>> thing to do is to load balance and scale out.
>>>> Async I/O won't solve any of these problems; it will just introduce bugs
>>>> and complexity and actually degrade performance by a significant margin.
>>>> On Jan 24, 2013, at 7:06 AM, "Thiago H de Paula Figueiredo" <
>> thiagohp@gmail.com> wrote:
>>>>> On Thu, 24 Jan 2013 09:26:45 -0200, Muhammad Gelbana <
>> m.gelbana@gmail.com> wrote:
>>>>>> Can someone clarify why Play! would be better than Tapestry in this
>>>>>> test?
>>>>> I guess only someone who knows Play!'s internal architecture can tell
>>>>> you this for sure. I also think it is probable that its use of Netty
>>>>> (https://netty.io/), which uses NIO and asynchronous I/O, instead of
>>>>> servlet containers (usually synchronous), is an important factor. I'm
>>>>> playing with the idea of running Tapestry over Vert.X (http://vertx.io/),
>>>>> but no code written yet.
>>>>> --
>>>>> Thiago H. de Paula Figueiredo
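As far as I know no such Tapestry-on-Vert.x integration exists; here is a very rough, hypothetical sketch of what the entry point might look like with the Vert.x 3+ Java API. The dispatchToTapestry call below is invented purely for illustration; it stands in for an adapter that nobody has written.

    import io.vertx.core.Vertx;
    import io.vertx.core.http.HttpServerRequest;

    public class VertxTapestrySketch {
        public static void main(String[] args) {
            Vertx vertx = Vertx.vertx();
            vertx.createHttpServer()
                 .requestHandler(VertxTapestrySketch::handle)
                 .listen(8080);
        }

        static void handle(HttpServerRequest request) {
            // Hypothetical: somewhere here the request/response pair would be
            // adapted to Tapestry's Request/Response abstractions and handed
            // to its request pipeline. No such adapter exists today.
            String html = dispatchToTapestry(request.path());
            request.response()
                   .putHeader("content-type", "text/html")
                   .end(html);
        }

        // Placeholder standing in for a (non-existent) Tapestry bridge.
        static String dispatchToTapestry(String path) {
            return "<html><body>Rendered page for " + path + "</body></html>";
        }
    }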




Re: Tapestry performance

Posted by Denis Stepanov <de...@gmail.com>.
> Isn't scaling out using multiple instances the way to go?

Yes, but I would rather use more than one thread per process.
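For what it's worth, a minimal sketch of that model in Java (illustrative only): one process, a bounded pool of worker threads sized to the machine, and scaling out to additional instances only once a single process is saturated.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PooledServer {
        public static void main(String[] args) throws IOException {
            // One process, many threads: size the pool to the machine, and only
            // then scale out to additional instances behind a load balancer.
            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors() * 8);

            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket socket = server.accept();
                    pool.execute(() -> handle(socket));
                }
            }
        }

        static void handle(Socket socket) {
            try (Socket s = socket) {
                // read the request and write a response here
            } catch (IOException e) {
                // log and drop the connection
            }
        }
    }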



Re: Tapestry performance

Posted by Dmitry Gusev <dm...@gmail.com>.
On Tue, May 7, 2013 at 2:30 PM, Denis Stepanov <de...@gmail.com> wrote:

> Finally they have added a test that actually tests web frameworks, the
> "Fortunes" test, but no Tapestry yet. Is anyone going to contribute the
> Fortunes test?
>
> It reminds me of a time a few years back when Ruby on Rails was the way to
> go: no threads, and scale out using multiple instances.
>
>
Isn't scaling out using multiple instances the way to go?


> Denis
>


-- 
Dmitry Gusev

AnjLab Team
http://anjlab.com

Re: Tapestry performance

Posted by Denis Stepanov <de...@gmail.com>.
Finally they have added a test that actually tests web frameworks, the "Fortunes" test, but no Tapestry yet. Is anyone going to contribute the Fortunes test?
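Not a contribution yet, but a rough sketch of what the page class might look like with the tapestry-hibernate module, just to show the amount of code involved. The Fortune entity, its accessors, and my reading of the test rules (add one fortune at request time, then sort by message) are assumptions; the matching Fortunes.tml template would render a table with a t:loop over "fortunes".

    // --- Fortune.java (assumed entity: an id and a message column) ---
    import javax.persistence.Entity;
    import javax.persistence.Id;

    @Entity
    public class Fortune {
        @Id
        private Integer id;
        private String message;

        public Fortune() { }                       // required by Hibernate

        public Fortune(Integer id, String message) {
            this.id = id;
            this.message = message;
        }

        public Integer getId() { return id; }
        public String getMessage() { return message; }
    }

    // --- Fortunes.java (the page class; Fortunes.tml loops over "fortunes") ---
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    import org.apache.tapestry5.annotations.Property;
    import org.apache.tapestry5.ioc.annotations.Inject;
    import org.hibernate.Session;

    public class Fortunes {

        @Inject
        private Session session;   // per-thread Hibernate Session from tapestry-hibernate

        @Property
        private Fortune fortune;   // loop variable referenced by the template's t:loop

        @SuppressWarnings("unchecked")
        public List<Fortune> getFortunes() {
            List<Fortune> fortunes =
                    new ArrayList<Fortune>(session.createCriteria(Fortune.class).list());

            // My reading of the test: add one fortune at request time, then sort
            // the whole list by message text before rendering.
            fortunes.add(new Fortune(0, "Additional fortune added at request time."));
            Collections.sort(fortunes, new Comparator<Fortune>() {
                public int compare(Fortune a, Fortune b) {
                    return a.getMessage().compareTo(b.getMessage());
                }
            });
            return fortunes;
        }
    }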

>> Performance can be black magic.
>> 
>> On the one hand, I've heard some specific things about Node.js performance
>> that are very positive. Certainly, when developing for Node, the virtually
>> instant server startup is wonderful. Java has beaten us down into thinking
>> that code needs lots of time to start up.

It reminds me of a time a few years back when Ruby on Rails was the way to go: no threads, and scale out using multiple instances.

Denis



Re: Tapestry performance

Posted by Ulrich Stärk <ul...@spielviel.de>.
I think this benchmark is fundamentally flawed. It compares apples to oranges and is too
simplistic to generalize from. JSON serialization? Really? You aren't testing web framework
performance with that. Interpreted languages in the same benchmark as compiled languages? How can
you even try comparing these? ORM performance of frameworks with a built-in OR mapper against the
performance of frameworks that don't even have the glue code? Are you serious?

Benchmarks are hard to do well if they are to stand up to scientific standards. This one clearly doesn't.

Basically it's community-provided marketing material for their in-house "Gemini" framework.

Uli

On 07.05.2013 03:48, Rural Hunter wrote:
> Here is the latest framework benchmark: http://www.techempower.com/benchmarks/#section=data-r4
> What do you guys think about it?
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
For additional commands, e-mail: users-help@tapestry.apache.org