Posted to users@tomcat.apache.org by yogesh hingmire <yo...@gmail.com> on 2013/05/04 12:59:46 UTC

Designing for Load on Tomcat

While planning / designing a web app that must scale to 2000
concurrent users, distributed across 5 Tomcat nodes in a cluster, Apache at
the front of course, and the ability to serve 20 concurrent requests per
second during business hours with a page response time of 5 seconds, how
would we go about this? What Apache / Tomcat / System (CPU/JVM)
parameters should be considered for this design?

Thank you,
Yogesh

Re: Designing for Load on Tomcat

Posted by Jakub 1983 <jj...@gmail.com>.
Yogesh

"with a page response time of 5 seconds"

Is it the current response time, or the acceptable wait time?
If it is the current response time, when was it measured: during max load or
during min load? How many concurrent requests were sent?


Why so long? Is the processor busy, or is it communication with external
systems: database, web service, filesystem?
I am not a specialist, but here are my doubts:
  if you do a processor-intensive task, setting far more threads than CPU
cores/hyperthreads may not help; it may even slow things down. The question
is how many more threads than cores is a sensible value: cores + 1? 2 * cores?
  if the time is taken mostly by communication (sockets, filesystem,
database, web services), my doubt is:
      assumption: Tomcat uses NIO (non-blocking I/O), so the above operations
shouldn't block, so my question is: why and when do we need more threads?
Isn't a thread doing other work while awaiting NIO?

I would be grateful if somebody more experienced could explain this and
expand on this topic.
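The two doubts above correspond to two common rules of thumb, neither of which comes from this thread: for CPU-bound work, size the pool near the core count; for IO-bound work, scale it up by the fraction of time a thread spends blocked. A hedged sketch (class and method names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {

    // CPU-bound: extra threads mostly add context-switch overhead, so
    // cores + 1 is a common starting point (the +1 covers an occasional stall).
    public static int cpuBoundPoolSize(int cores) {
        return cores + 1;
    }

    // IO-bound: threads spend most of their time waiting, so the pool can be
    // larger; a rough rule is cores / (1 - fraction of time spent blocked).
    public static int ioBoundPoolSize(int cores, double blockedFraction) {
        return (int) Math.ceil(cores / (1.0 - blockedFraction));
    }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService cpuPool = Executors.newFixedThreadPool(cpuBoundPoolSize(cores));
        cpuPool.shutdown();
        // Example: 4 cores, threads blocked 75% of the time -> 16 threads.
        System.out.println(ioBoundPoolSize(4, 0.75));
    }
}
```

Note that even with an NIO connector accepting connections, a standard servlet still occupies a thread for the whole request, which is why the IO-bound rule still matters.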

regards
Jakub




Re: Designing for Load on Tomcat

Posted by David Kerber <dc...@verizon.net>.
On 5/4/2013 1:24 PM, Mark Thomas wrote:
> On 04/05/2013 16:01, Yogesh wrote:
>> Well my question is: is it a common design practice from your experience to configure one node (maxThreads) for the scenario where all the other nodes amongst which the load was distributed fail?
>
> You design for whatever level of resilience you need to meet the
> availability requirements.
>
> Mark

Which IME means allow for either one or two of the cluster nodes to 
fail, depending on how many you have to start with.  Never all but one, 
unless you only have two to begin with.




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: Designing for Load on Tomcat

Posted by Mark Thomas <ma...@apache.org>.
On 04/05/2013 16:01, Yogesh wrote:
> Well my question is: is it a common design practice from your experience to configure one node (maxThreads) for the scenario where all the other nodes amongst which the load was distributed fail?

You design for whatever level of resilience you need to meet the
availability requirements.

Mark





Re: Designing for Load on Tomcat

Posted by Yogesh <yo...@gmail.com>.
Well my question is: is it a common design practice from your experience to configure one node (maxThreads) for the scenario where all the other nodes amongst which the load was distributed fail?

On the cluster part, w.r.t. Tomcats talking to each other, do you mean the session replication feature or something else?

Sent from my iPhone




Re: Designing for Load on Tomcat

Posted by André Warnier <aw...@ice-sa.com>.
yogesh hingmire wrote:


 > Thanks André, and sorry for not mentioning the other content that is
 > actually requested by HTTP GETs from the served JSP.
 > There are quite a lot of Ajax calls and static content that could be
 > served out of httpd, but as of now it is not. I know it's not the best way,

but you can read the on-line documentation, I presume?

 > so I
 > assume I have to increase my thread count correspondingly.

Well yes, because then you do not have 20 requests per second, you have more.
Only you would know how many more, and how long they take to serve, but the calculation is 
similar.

 >
 > While planning threads on a single node, do I have to take into account
 > the failure scenario where, say, all other 4 nodes fail and just this one
 > node has to serve the entire web app load? For that, do I have to
 > provision the thread count as much as 4 times what I arrive at for a
 > single node?
 >
 > Your thoughts?

I think that you can figure that one out by yourself, no?

One more thing, to avoid you looking in the wrong direction: having one Apache httpd 
front-end distributing calls to several back-end Tomcats does not make the Tomcat 
servers constitute a "cluster".  "Cluster" is the name more usually used when the 
Tomcats are talking to each other.  In this case, they would not be.  It would just be the 
connector, on the Apache httpd side, which distributes the load between the back-end 
Tomcats and detects when one or more is no longer working.
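For reference, the per-node thread count being discussed is configured via the maxThreads attribute on the Tomcat HTTP connector, and acceptCount bounds the connection queue once all threads are busy. A minimal sketch of the relevant conf/server.xml element (the values are illustrative, not a recommendation):

```xml
<!-- Sketch only: maxThreads = request-processing threads per node;
     acceptCount = connection backlog once all threads are busy. -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="200"
           acceptCount="100" />
```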






Re: Designing for Load on Tomcat

Posted by yogesh hingmire <yo...@gmail.com>.
Thanks André, and sorry for not mentioning the other content that is
actually requested by HTTP GETs from the served JSP.
There are quite a lot of Ajax calls and static content that could be
served out of httpd, but as of now it is not. I know it's not the best way, so I
assume I have to increase my thread count correspondingly.

While planning threads on a single node, do I have to take into account
the failure scenario where, say, all other 4 nodes fail and just this one
node has to serve the entire web app load? For that, do I have to
provision the thread count as much as 4 times what I arrive at for a single
node?

Your thoughts?
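One way to frame the failure question is to decide how many node failures the cluster must absorb (elsewhere in this thread the advice is one or two, not all four), then divide the total thread budget among the survivors. A sketch under those assumptions (the class and method names are mine, for illustration):

```java
public class FailoverHeadroom {

    // Threads each surviving node must carry if `failed` of `nodes` are down
    // and the cluster still has to serve the full load sized at `totalThreads`.
    public static int threadsPerNode(int totalThreads, int nodes, int failed) {
        int surviving = nodes - failed;
        if (surviving < 1) {
            throw new IllegalArgumentException("at least one node must survive");
        }
        return (int) Math.ceil((double) totalThreads / surviving);
    }

    public static void main(String[] args) {
        // 200 threads total (100 steady-state, doubled for margin), 5 nodes:
        System.out.println(threadsPerNode(200, 5, 1)); // 50 per node, 1 node down
        System.out.println(threadsPerNode(200, 5, 2)); // 67 per node, 2 nodes down
        System.out.println(threadsPerNode(200, 5, 4)); // 200: the all-but-one case
    }
}
```

Provisioning every node for the all-but-one case (the last line) is the 4x figure asked about; sizing for one or two failures is the cheaper middle ground.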



Re: Designing for Load on Tomcat

Posted by André Warnier <aw...@ice-sa.com>.
yogesh hingmire wrote:
> While planning / designing a web app that must scale to 2000
> concurrent users, distributed across 5 Tomcat nodes in a cluster, Apache at
> the front of course, and the ability to serve 20 concurrent requests per
> second during business hours with a page response time of 5 seconds, how
> would we go about this? What Apache / Tomcat / System (CPU/JVM)
> parameters should be considered for this design?
> 

I will provide the ABC, and leave the details for someone else.
You have 20 requests arriving per second, and it takes 5 seconds to process one request 
and return the response.
So, over time, it will look like this

Time   new requests   requests in-process  requests terminated

0        20              20                      0
+1s      20              40                      0
+2s      20              60                      0
+3s      20              80                      0
+4s      20             100                      0
+5s      20             100                     20
+6s      20             100                     40
+7s      20             100                     60
etc...

So, in principle, and assuming nothing else is going on, you need 100 concurrent threads 
in Tomcat to process these requests.
(I would take a healthy safety margin and double that.)
Whether for that you need a cluster of Tomcats is another discussion.
And how much memory you need to allocate to your Tomcat(s) JVM(s) is a function of what 
your webapp needs to process one request.

The number of concurrent users should be relatively irrelevant, if all you mean by that is 
that some of these requests come from the same user, but they are otherwise independent of 
one another.

Note that I suspect that what you describe as "requests" above probably only counts the 
requests to your webapp code, and does not count the additional requests for stylesheets, 
images, etc., which may be embedded in any page that the user's browser eventually 
displays.
So unless you plan on serving those directly from the Apache httpd front-end, you should 
take them into account too.
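André's table is an instance of Little's law: the number of requests in flight converges to arrival rate times response time. A minimal sketch of the same arithmetic, with his suggested doubling as a margin (the class and method names are mine, for illustration only):

```java
// Sketch of the capacity arithmetic above (Little's law: L = lambda * W).
public class CapacityPlan {

    // Steady-state number of requests in flight, which is also the number
    // of Tomcat threads needed to keep up: arrival rate * response time.
    public static int inFlight(int requestsPerSecond, int responseSeconds) {
        return requestsPerSecond * responseSeconds;
    }

    // A healthy safety margin: double the steady-state figure.
    public static int withMargin(int requestsPerSecond, int responseSeconds) {
        return 2 * inFlight(requestsPerSecond, responseSeconds);
    }

    public static void main(String[] args) {
        System.out.println(inFlight(20, 5));    // 100, matching the table's plateau
        System.out.println(withMargin(20, 5));  // 200 threads across the cluster
    }
}
```

The same formula also shows why static content matters: every extra request per second for a stylesheet or image at the same response time adds proportionally to the in-flight count.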


