Posted to users@tomcat.apache.org by Owen Rubel <or...@gmail.com> on 2017/08/11 15:39:29 UTC

Per EndPoint Threads???

Hi All,

I'm looking for a way (or a tool) in Tomcat to associate threads with
endpoints.

The reason is that, on the whole, threads are not used by the system as a
whole but are distributed dynamically to specific pieces. Tomcat repeats
this process over and over but never retains the knowledge of which
endpoints continually have high-volume traffic and which have lower-volume
traffic.

Even at startup/restart, these individual endpoints in the system should
start with a higher number of threads by DEFAULT as a result of the
continual higher traffic.

Is there a way to assign/distribute the number of threads across available
endpoints, much like 'load balancing'???

ie:
localhost/v0.1/user/show: 50%
localhost/v0.1/user/create: 10%
localhost/v0.1/user/edit: 5%
localhost/v0.1/user/delete: 2%
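
For illustration, a minimal sketch (plain Java) of how such percentages
could be turned into per-endpoint thread budgets. The class, names, and
numbers are hypothetical, not an existing Tomcat API:

    import java.util.Map;

    public class EndpointBudgets {
        public static void main(String[] args) {
            int maxThreads = 200; // e.g. the connector's maxThreads
            // Hypothetical traffic shares per endpoint:
            Map<String, Double> share = Map.of(
                    "/v0.1/user/show",   0.50,
                    "/v0.1/user/create", 0.10,
                    "/v0.1/user/edit",   0.05,
                    "/v0.1/user/delete", 0.02);
            // Derive a per-endpoint budget, guaranteeing at least 1 thread.
            share.forEach((path, pct) -> System.out.printf(
                    "%s -> %d threads%n", path,
                    Math.max(1, (int) (maxThreads * pct))));
        }
    }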

Owen Rubel
orubel@gmail.com

Re: Per EndPoint Threads???

Posted by Owen Rubel <or...@gmail.com>.
On Sat, Aug 12, 2017 at 9:36 AM, Christopher Schultz <
chris@christopherschultz.net> wrote:

> Owen,
>
> On 8/12/17 11:21 AM, Owen Rubel wrote:
> > On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas <ma...@apache.org>
> > wrote:
> >
> >> On 12/08/17 06:00, Christopher Schultz wrote:
> >>> Owen,
> >>>
> >>> Please do not top-post. I have re-ordered your post to be
> >>> bottom-post.
> >>>
> >>> On 8/11/17 10:12 PM, Owen Rubel wrote:
> >>>> On Fri, Aug 11, 2017 at 5:58 PM, <ch...@baus.net>
> >>>> wrote:
> >>>
> >>>>>> Hi All,
> >>>>>>
> >>>>>> I'm looking for a way (or a tool) in Tomcat to associate
> >>>>>> threads with endpoints.
> >>>>>
> >>>>> It isn't clear to me why this would be necessary. Threads
> >>>>> should be allocated on demand to individual requests. If
> >>>>> one route sees more traffic, then it should automatically
> >>>>> be allocated more threads. This could starve some requests
> >>>>> if the maximum number of threads had been allocated to a
> >>>>> lesser-used route, while available threads went unused for a
> >>>>> more commonly used route.
> >>>
> >>>> Absolutely but it could ramp up more threads as needed.
> >>>
> >>>> I base the logic on neurons and neurotransmitters. When
> >>>> neurons talk to each other, they send back neurotransmitters
> >>>> to reinforce that pathway.
> >>>
> >>>> If we could do the same through threads by adding additional
> >>>> threads for endpoints that receive more traffic vs those
> >>>> which do not, it would reinforce better and faster
> >>>> communication on those paths.
> >>>> The current way Tomcat does it is not dynamic and it just
> >>>> applies to ALL pathways equally, which is not efficient.
> >>> How would this improve efficiency at all?
> >>>
> >>> There is nothing inherently "showy" or "edity" about a
> >>> particular thread; each request-processing thread is
> >>> indistinguishable from any other. I don't believe there is a
> >>> way to improve the situation even if "per-endpoint" (whatever
> >>> that would mean) threads were a possibility.
> >>>
> >>> What would you attach to a thread that would make it any better
> >>> at editing records? Or deleting them?
> >>
> >> And I'll add that the whole original proposal ignores a number of
> >> rather fundamental points about how Servlet containers (and web
> >> servers in general) work. To name a few:
> >>
> >> - Until the request has been parsed (which requires a thread)
> >> Tomcat doesn't know which Servlet (endpoint) the request is
> >> destined for. Switching processing to a different thread at that
> >> point would add significant overhead for no benefit.
> >>
> >> - Even after parsing, the actual Servlet that processes the
> >> request (if any) can change during processing (e.g. a Filter that
> >> conditionally forwards to a different Servlet, authentication,
> >> etc.)
> >>
> >> There is nothing about an endpoint-specific thread that would
> >> allow it to process a request more efficiently than a general
> >> thread.
> >>
> >> Any per-endpoint thread-pool solution will require the
> >> additional overhead to switch processing from the general parsing
> >> thread to the endpoint-specific thread. This additional cost
> >> comes with zero benefits hence it will always be less efficient.
> >>
> >> In short, there is no way pre-allocating threads to particular
> >> endpoints can improve performance compared to just adding the
> >> same number of additional threads to the general thread pool.
>
> > Ah, OK, thank you for the very concise answer. I am chasing a pipe
> > dream, I guess. Maybe there is another way to get this kind of benefit.
> The answer is caching, and that can be done at many levels, but the
> thread-level makes the least sense due to the reasons Mark outlined
> above.
>
> -chris
>
I think I understand the confusion.

The old API pattern binds communication and business logic together in
centralized architectures, causing an architectural cross-cutting concern in
distributed architectures (i.e. proxy, API server, message queue, etc.).

To fix this you have to unbind communication from business logic so that
you can share I/O (request/response) without duplication and entanglement.

You may be under the assumption that the endpoint is the CONTROLLER/BUSINESS
LOGIC; this is false. It is the communication layer, since the communication
layer can call back to itself. The communication layer can forward, redirect,
or talk to the service/business logic to gather the resource, etc. (it
doesn't even have to return the resource), but the communication layer in
the API service is where the endpoint exists.

What I am trying to do is dynamically assign resources to endpoints as
traffic increases/decreases.

*In an embedded Tomcat instance, I thought this would be possible.*
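
For reference, a minimal sketch of an embedded Tomcat wired to one shared
executor, which is the stock behaviour discussed above (the executor name
and pool sizes are illustrative):

    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.core.StandardThreadExecutor;
    import org.apache.catalina.startup.Tomcat;

    public class EmbeddedTomcatExecutor {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setPort(8080);

            // One shared pool serves every endpoint; Tomcat grows and
            // shrinks it on demand between the bounds below.
            StandardThreadExecutor executor = new StandardThreadExecutor();
            executor.setName("sharedPool");
            executor.setMinSpareThreads(10);
            executor.setMaxThreads(200);
            tomcat.getService().addExecutor(executor);

            Connector connector = tomcat.getConnector();
            connector.getProtocolHandler().setExecutor(executor);

            tomcat.start();
            tomcat.getServer().await();
        }
    }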

Re: Per EndPoint Threads???

Posted by Owen Rubel <or...@gmail.com>.
Owen Rubel
orubel@gmail.com

On Tue, Aug 15, 2017 at 8:23 AM, Christopher Schultz <
chris@christopherschultz.net> wrote:

> Owen,
>
> On 8/13/17 10:46 AM, Owen Rubel wrote:
> > Owen Rubel orubel@gmail.com
> >
> > On Sun, Aug 13, 2017 at 5:57 AM, Christopher Schultz <
> > chris@christopherschultz.net> wrote:
> >
> > Owen,
> >
> > On 8/12/17 12:47 PM, Owen Rubel wrote:
> >>>> What I am talking about is something that improves
> >>>> communication as we notice that a communication channel needs
> >>>> more resources. Not caching what is communicated... improving
> >>>> the CHANNEL for communicating the resource (whatever it may
> >>>> be).
> >
> > If the channel is an HTTP connection (or TCP; the application
> > protocol isn't terribly relevant), then you are limited by the
> > following:
> >
> > 1. Network bandwidth
> > 2. Available threads (to service a particular request)
> > 3. Hardware resources on the server (CPU/memory/disk/etc.)
> >
> > Let's ignore 1 and 3 for now, since you are primarily concerned
> > with concurrency, and concurrency is useless if the other resources
> > are constrained or otherwise limiting the equation.
> >
> > Let's say we had "per endpoint" thread pools, so that e.g. /create
> > had its own thread pool, and /show had another one, etc. What would
> > that buy us?
> >
> > (Let's ignore for now the fact that one set of threads must always
> > be used to decode the request to decide where it's going, like
> > /create or /show.)
> >
> > If we have a limited total number of threads (e.g. 10), then we
> > could "reserve" some of them so that we could always have 2 threads
> > for /create even if all the other threads in the system (the other
> > 8) were being used for something else. If we had 2 threads for
> > /create and 2 threads for /show, then only 6 would remain for e.g.
> > /edit or /delete. So if 6 threads were already being used for /edit
> > or /delete, the 7th incoming request would be queued, but anyone
> > making a request for /show or /create would (if a thread in those
> > pools is available) be serviced immediately.
> >
> > I can see some utility in this ability, because it would allow the
> > container to ensure that some resources were never starved... or,
> > rather, that they have some priority over certain other services.
> > In other words, the service could enjoy guaranteed provisioning
> > for certain endpoints.
> >
> > As it stands, Tomcat (and, I would venture a guess, most if not
> > all other containers) implements a fair request pipeline where
> > requests are (at least roughly) serviced in the order in which they
> > are received. Rather than guaranteeing provisioning for a
> > particular endpoint, the closest thing that could be implemented
> > (at the application level) would be a
> > resource-availability-limiting mechanism, such as counting the
> > number of in-flight requests and rejecting those which exceed some
> > threshold with e.g. a 503 response.
> >
> > Unfortunately, that doesn't actually prioritize some requests, it
> > merely rejects others in order to attempt to prioritize those
> > others. It also starves endpoints even when there is no reason to
> > do so (e.g. in the 10-thread scenario, if all 4 /show and /create
> > threads are idle, but 6 requests are already in process for the
> > other endpoints, a 7th request for those other endpoints will be
> > rejected).
> >
> > I believe that per-endpoint provisioning is a possibility, but I
> > don't think that the potential gains are worth the certain
> > complexity of the system required to implement it.
> >
> > There are other ways to handle heterogeneous service requests in a
> > way that doesn't starve one type of request in favor of another.
> > One obvious solution is horizontal scaling with a load-balancer. An
> > LB can be used to implement a sort of guaranteed-provisioning for
> > certain endpoints by providing more back-end servers for certain
> > endpoints. If you want to make sure that /show can be called by any
> > client at any time, then make sure you spin-up 1000 /show servers
> > and register them with the load-balancer. You can survive with only
> > maybe 10 nodes servicing /delete requests; others will either wait
> > in a queue or receive a 503 from the lb.
> >
> > For my money, I'd maximize the number of threads available for all
> > requests (whether within a single server, or across a large
> > cluster) and not require that they be available for any particular
> > endpoint. Once you have to depart from a single server, you MUST
> > have something like a load-balancer involved, and therefore the
> > above solution becomes not only more practical but also more
> > powerful.
> >
> > Since relying on a one-box-wonder to run a high-availability web
> > service isn't practical, provisioning is necessarily above the
> > cluster-node level, and so the problem has effectively moved from
> > the app server to the load-balancer (or reverse proxy). I believe
> > the application server is an inappropriate place to implement this
> > type of provisioning because it's too small-scale. The app server
> > should serve requests as quickly as possible, and arranging for
> > this kind of provisioning would add a level of complexity that
> > would jeopardize performance of all requests within the application
> > server.
> >
> >>>> But like you said, this is not something that is doable so
> >>>> I'll look elsewhere.
> >
> > I think it's doable, just not worth it given the orthogonal
> > solutions available. Some things are better-implemented at other
> > layers of the application (as a whole system) and perhaps not the
> > application server itself.
> >
> > Someone with intimate experience with Obidos should be familiar
> > with the benefits of separation of these kinds of concerns ;)
> >
> > If you are really more concerned with threads that are tied-up
> > with I/O-bound work, then Websocket really is your friend. The
> > complex threading model of Websocket allows applications to do Real
> > Work on application threads and then delegate the work of pushing
> > bytes across the wire to the container, resulting in very few
> > I/O-bound threads.
> >
> > But the way you have phrased your questions seems like you were
> > more interested in guaranteed provisioning than avoiding I/O-bound
> > threads.
> >
> > -chris
> >
> >> If we have a limited total number of threads (e.g. 10), then we
> >> could "reserve" some of them so that we could always have 2
> >> threads for /create even if all the other threads in the system
> >> (the other 8) were being used for something else. If we had 2
> >> threads for /create and 2 threads for /show, then only 6 would
> >> remain for e.g. /edit or /delete. So if 6 threads were already
> >> being used for /edit or /delete, the 7th incoming request would
> >> be queued, but anyone making a request for /show or /create would
> >> (if a thread in those pools is available) be serviced
> >> immediately.
> >
> > Use percentages like most load balancers do to solve that problem
> > and then adjust the percentages as traffic changes.
> >
> >
> > So say we have the following assigned thread percentages:
> >
> > person/show - 5%
> > person/create - 2%
> > person/edit - 2%
> > person/delete - 1%
>
> What happened to the remaining 90% of threads? If they don't exist,
> then everything above needs to be multiplied by 10x. If they do exist,
> then they either need to be "provisioned" to a specific endpoint, or
> they need to be explicitly defined to be "unprovisioned", meaning
> that they can be used by/for any endpoint.
>
> > *(always guaranteeing that each would have 1 thread shared from the
> > pool at all times)
>
> You have been talking about guaranteed provisioning and not really
> talking about any kind of "shared" pool. I'm not entirely sure what a
> hybrid approach would look like, here, but it really all goes back to the
> fact that all threads are created equal, unless you are really
> trying to create persistent connections (e.g. Websocket, HTTP keepalive
> between lb/reverse-proxy and app server endpoints).
>
> > If suddenly traffic starts to spike on 'person/edit', we steal
> > from 'person/show'. Why? 'person/show' had those threads created
> > dynamically and may not be using them all currently.
>
> Sounds like a plain-old shared thread pool.
>
> > We steal from the highest percentages during spikes because we
> > currently have a new highest percentage.
> >
> > And if that changes, they will steal back.
> >
> > At least this is what I was envisioning for an implementation.
>
> There is no penalty for "stealing" a thread from another pool, so the
> result is that all pools are equal, and a single pool will do the job
> just as well.
>
> I'm obviously missing something fundamental about your reasoning, here.
>
> If it's communication channels you are concerned with, then I think
> there is an argument to be made for guaranteed provisioning. But for
> threads, there is no property of the thread that can make it any
> better-suited for handling requests for endpoint A versus endpoint B.
>
> -chris
>
Well, you only steal when you need to steal resources, so no... it would
NEVER be the same; certain endpoints would always be balanced differently.

Think of it like 'load balancing per endpoint' but with threads.
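
Something close to that can be approximated at the application level with a
per-endpoint Semaphore in a servlet Filter. A minimal sketch, with
hypothetical paths and caps (this limits concurrency per endpoint; it does
not make any thread endpoint-specific):

    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.Semaphore;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class EndpointLimitFilter implements Filter {
        // Hypothetical per-endpoint concurrency caps.
        private final Map<String, Semaphore> limits = Map.of(
                "/v0.1/user/show",   new Semaphore(100),
                "/v0.1/user/create", new Semaphore(20),
                "/v0.1/user/edit",   new Semaphore(10),
                "/v0.1/user/delete", new Semaphore(4));

        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res,
                FilterChain chain) throws IOException, ServletException {
            String path = ((HttpServletRequest) req).getRequestURI();
            Semaphore cap = limits.get(path);
            if (cap == null) {              // endpoint with no cap
                chain.doFilter(req, res);
            } else if (cap.tryAcquire()) {  // within budget: proceed
                try {
                    chain.doFilter(req, res);
                } finally {
                    cap.release();
                }
            } else {                        // over budget: shed load
                ((HttpServletResponse) res).sendError(503);
            }
        }
    }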

Re: Per EndPoint Threads???

Posted by Owen Rubel <or...@gmail.com>.
Owen Rubel
orubel@gmail.com

On Sun, Aug 13, 2017 at 5:57 AM, Christopher Schultz <
chris@christopherschultz.net> wrote:

> Owen,
>
> On 8/12/17 12:47 PM, Owen Rubel wrote:
> > What I am talking about is something that improves communication as
> > we notice that a communication channel needs more resources. Not
> > caching what is communicated... improving the CHANNEL for
> > communicating the resource (whatever it may be).
>
> If the channel is an HTTP connection (or TCP; the application protocol
> isn't terribly relevant), then you are limited by the following:
>
> 1. Network bandwidth
> 2. Available threads (to service a particular request)
> 3. Hardware resources on the server (CPU/memory/disk/etc.)
>
> Let's ignore 1 and 3 for now, since you are primarily concerned with
> concurrency, and concurrency is useless if the other resources are
> constrained or otherwise limiting the equation.
>
> Let's say we had "per endpoint" thread pools, so that e.g. /create had
> its own thread pool, and /show had another one, etc. What would that
> buy us?
>
> (Let's ignore for now the fact that one set of threads must always be
> used to decode the request to decide where it's going, like /create or
> /show.)
>
> If we have a limited total number of threads (e.g. 10), then we could
> "reserve" some of them so that we could always have 2 threads for
> /create even if all the other threads in the system (the other 8) were
> being used for something else. If we had 2 threads for /create and 2
> threads for /show, then only 6 would remain for e.g. /edit or /delete.
> So if 6 threads were already being used for /edit or /delete, the 7th
> incoming request would be queued, but anyone making a request for
> /show or /create would (if a thread in those pools is available) be
> serviced immediately.
>
> I can see some utility in this ability, because it would allow the
> container to ensure that some resources were never starved... or,
> rather, that they have some priority over certain other services. In
> other words, the service could enjoy guaranteed provisioning for
> certain endpoints.
>
> As it stands, Tomcat (and, I would venture a guess, most if not all
> other containers) implements a fair request pipeline where requests
> are (at least roughly) serviced in the order in which they are
> received. Rather than guaranteeing provisioning for a particular
> endpoint, the closest thing that could be implemented (at the
> application level) would be a resource-availability-limiting
> mechanism, such as counting the number of in-flight requests and
> rejecting those which exceed some threshold with e.g. a 503 response.
>
> Unfortunately, that doesn't actually prioritize some requests, it
> merely rejects others in order to attempt to prioritize those others.
> It also starves endpoints even when there is no reason to do so (e.g.
> in the 10-thread scenario, if all 4 /show and /create threads are
> idle, but 6 requests are already in process for the other endpoints, a
> 7th request for those other endpoints will be rejected).
>
> I believe that per-endpoint provisioning is a possibility, but I don't
> think that the potential gains are worth the certain complexity of the
> system required to implement it.
>
> There are other ways to handle heterogeneous service requests in a way
> that doesn't starve one type of request in favor of another. One
> obvious solution is horizontal scaling with a load-balancer. An LB can
> be used to implement a sort of guaranteed-provisioning for certain
> endpoints by providing more back-end servers for certain endpoints. If
> you want to make sure that /show can be called by any client at any
> time, then make sure you spin-up 1000 /show servers and register them
> with the load-balancer. You can survive with only maybe 10 nodes
> servicing /delete requests; others will either wait in a queue or
> receive a 503 from the lb.
>
> For my money, I'd maximize the number of threads available for all
> requests (whether within a single server, or across a large cluster)
> and not require that they be available for any particular endpoint.
> Once you have to depart from a single server, you MUST have something
> like a load-balancer involved, and therefore the above solution
> becomes not only more practical but also more powerful.
>
> Since relying on a one-box-wonder to run a high-availability web
> service isn't practical, provisioning is necessarily above the
> cluster-node level, and so the problem has effectively moved from the
> app server to the load-balancer (or reverse proxy). I believe the
> application server is an inappropriate place to implement this type of
> provisioning because it's too small-scale. The app server should serve
> requests as quickly as possible, and arranging for this kind of
> provisioning would add a level of complexity that would jeopardize
> performance of all requests within the application server.
>
> > But like you said, this is not something that is doable so I'll
> > look elsewhere.
>
> I think it's doable, just not worth it given the orthogonal solutions
> available. Some things are better-implemented at other layers of the
> application (as a whole system) and perhaps not the application server
> itself.
>
> Someone with intimate experience with Obidos should be familiar with
> the benefits of separation of these kinds of concerns ;)
>
> If you are really more concerned with threads that are tied-up with
> I/O-bound work, then Websocket really is your friend. The complex
> threading model of Websocket allows applications to do Real Work on
> application threads and then delegate the work of pushing bytes across
> the wire to the container, resulting in very few I/O-bound threads.
>
> But the way you have phrased your questions seems like you were more
> interested in guaranteed provisioning than avoiding I/O-bound threads.
>
> -chris
>

>If we have a limited total number of threads (e.g. 10), then we could
>"reserve" some of them so that we could always have 2 threads for
>/create even if all the other threads in the system (the other 8) were
>being used for something else. If we had 2 threads for /create and 2
>threads for /show, then only 6 would remain for e.g. /edit or /delete.
>So if 6 threads were already being used for /edit or /delete, the 7th
>incoming request would be queued, but anyone making a request for
>/show or /create would (if a thread in those pools is available) be
>serviced immediately.

Use percentages like most load balancers do to solve that problem and then
adjust the percentages as traffic changes.


So say we have the following assigned thread percentages:

person/show - 5%
person/create - 2%
person/edit - 2%
person/delete - 1%

*(always guaranteeing that each would have 1 thread shared from the pool at
all times)

If suddenly traffic starts to spike on 'person/edit', we steal from
'person/show'. Why? 'person/show' had those threads created
dynamically and may not be using them all currently.

We steal from the highest percentages during spikes because we currently
have a new highest percentage.

And if that changes, they will steal back.

At least this is what I was envisioning for an implementation.
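
A rough sketch of that envisioned rebalancing (hypothetical code, not a
Tomcat feature): count hits per endpoint over a window, then recompute each
endpoint's share of a fixed permit pool, so busier endpoints "steal" budget
from quieter ones:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.LongAdder;

    public class AdaptiveBudgets {
        private final Map<String, LongAdder> hits = new ConcurrentHashMap<>();
        private final int totalPermits;

        public AdaptiveBudgets(int totalPermits) {
            this.totalPermits = totalPermits;
        }

        // Call once per request, e.g. from a Filter or Valve.
        public void record(String endpoint) {
            hits.computeIfAbsent(endpoint, k -> new LongAdder()).increment();
        }

        // Call periodically: endpoints that saw more traffic in the last
        // window take permits from those that saw less.
        public Map<String, Integer> rebalance() {
            long total = hits.values().stream().mapToLong(LongAdder::sum).sum();
            Map<String, Integer> budgets = new HashMap<>();
            hits.forEach((endpoint, count) -> {
                double share = total == 0 ? 0.0 : count.sum() / (double) total;
                // Always guarantee each endpoint at least 1 thread.
                budgets.put(endpoint, Math.max(1, (int) (share * totalPermits)));
            });
            hits.clear(); // start a fresh observation window
            return budgets;
        }
    }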

Re: Per EndPoint Threads???

Posted by Owen Rubel <or...@gmail.com>.
On Sat, Aug 12, 2017 at 3:13 PM, Christopher Schultz <
chris@christopherschultz.net> wrote:

> Owen,
>
> On 8/12/17 12:47 PM, Owen Rubel wrote:
> > On Sat, Aug 12, 2017 at 9:36 AM, Christopher Schultz <
> > chris@christopherschultz.net> wrote:
> >
> > Owen,
> >
> > On 8/12/17 11:21 AM, Owen Rubel wrote:
> >>>> On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas
> >>>> <ma...@apache.org> wrote:
> >>>>
> >>>>> On 12/08/17 06:00, Christopher Schultz wrote:
> >>>>>> Owen,
> >>>>>>
> >>>>>> Please do not top-post. I have re-ordered your post to
> >>>>>> be bottom-post.
> >>>>>>
> >>>>>> On 8/11/17 10:12 PM, Owen Rubel wrote:
> >>>>>>> On Fri, Aug 11, 2017 at 5:58 PM,
> >>>>>>> <ch...@baus.net> wrote:
> >>>>>>
> >>>>>>>>> Hi All,
> >>>>>>>>>
> >>>>>>>>> I'm looking for a way (or a tool) in Tomcat to
> >>>>>>>>> associate threads with endpoints.
> >>>>>>>>
> >>>>>>>> It isn't clear to me why this would be necessary.
> >>>>>>>> Threads should be allocated on demand to individual
> >>>>>>>> requests. If one route sees more traffic, then it
> >>>>>>>> should automatically be allocated more threads. This
> >>>>>>>> could starve some requests if the maximum number of
> >>>>>>>> threads had been allocated to a lesser-used route,
> >>>>>>>> while available threads went unused for a more commonly
> >>>>>>>> used route.
> >>>>>>
> >>>>>>> Absolutely but it could ramp up more threads as
> >>>>>>> needed.
> >>>>>>
> >>>>>>> I base the logic on neurons and neurotransmitters.
> >>>>>>> When neurons talk to each other, they send back
> >>>>>>> neurotransmitters to reinforce that pathway.
> >>>>>>
> >>>>>>> If we could do the same through threads by adding
> >>>>>>> additional threads for endpoints that receive more
> >>>>>>> traffic vs those which do not, it would reinforce better
> >>>>>>> and faster communication on those paths.
> >>>>>>> The current way Tomcat does it is not dynamic and it just
> >>>>>>> applies to ALL pathways equally, which is not efficient.
> >>>>>> How would this improve efficiency at all?
> >>>>>>
> >>>>>> There is nothing inherently "showy" or "edity" about a
> >>>>>> particular thread; each request-processing thread is
> >>>>>> indistinguishable from any other. I don't believe there
> >>>>>> is a way to improve the situation even if "per-endpoint"
> >>>>>> (whatever that would mean) threads were a possibility.
> >>>>>>
> >>>>>> What would you attach to a thread that would make it any
> >>>>>> better at editing records? Or deleting them?
> >>>>>
> >>>>> And I'll add that the whole original proposal ignores a
> >>>>> number of rather fundamental points about how Servlet
> >>>>> containers (and web servers in general) work. To name a
> >>>>> few:
> >>>>>
> >>>>> - Until the request has been parsed (which requires a
> >>>>> thread) Tomcat doesn't know which Servlet (endpoint) the
> >>>>> request is destined for. Switching processing to a
> >>>>> different thread at that point would add significant
> >>>>> overhead for no benefit.
> >>>>>
> >>>>> - Even after parsing, the actual Servlet that processes
> >>>>> the request (if any) can change during processing (e.g. a
> >>>>> Filter that conditionally forwards to a different Servlet,
> >>>>> authentication, etc.)
> >>>>>
> >>>>> There is nothing about an endpoint-specific thread that
> >>>>> would allow it to process a request more efficiently than a
> >>>>> general thread.
> >>>>>
> >>>>> Any per-endpoint thread-pool solution will require the
> >>>>> additional overhead to switch processing from the general
> >>>>> parsing thread to the endpoint-specific thread. This
> >>>>> additional cost comes with zero benefits hence it will
> >>>>> always be less efficient.
> >>>>>
> >>>>> In short, there is no way pre-allocating threads to
> >>>>> particular endpoints can improve performance compared to
> >>>>> just adding the same number of additional threads to the
> >>>>> general thread pool.
> >
> >>>> Ah, OK, thank you for the very concise answer. I am chasing a
> >>>> pipe dream, I guess. Maybe there is another way to get this
> >>>> kind of benefit.
> > The answer is caching, and that can be done at many levels, but
> > the thread-level makes the least sense due to the reasons Mark
> > outlined above.
> >
> > -chris
> > Well, caching is:
> > - related to the resource, not the communication
> > - a one-time thing that has to have a version check every time
> >
> > What I am talking about is something that improves communication as
> > we notice that a communication channel needs more resources. Not
> > caching what is communicated... improving the CHANNEL for
> > communicating the resource (whatever it may be).
> >
> > But like you said, this is not something that is doable so I'll
> > look elsewhere. Thanks again. :)
>
> If you want to improve communication efficiency, I think that HTTP
> isn't the protocol for you. Perhaps Websocket?
>
> -chris
>
Well I'll try not to take offense at that...heh. You may not be aware of my
contributions to date. :)

But again, appreciate the feedback. So thank you.
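
For completeness, a minimal sketch of the Websocket direction suggested
above, using the standard javax.websocket API (the path is illustrative):

    import javax.websocket.OnMessage;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/v0.1/user")
    public class UserEndpoint {
        @OnMessage
        public void onMessage(Session session, String message) {
            // The container owns the socket I/O: this queues the send and
            // returns without blocking the application thread.
            session.getAsyncRemote().sendText("echo: " + message);
        }
    }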

Re: Per EndPoint Threads???

Posted by Christopher Schultz <ch...@christopherschultz.net>.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Owen,

On 8/12/17 12:47 PM, Owen Rubel wrote:
> On Sat, Aug 12, 2017 at 9:36 AM, Christopher Schultz < 
> chris@christopherschultz.net> wrote:
> 
> Owen,
> 
> On 8/12/17 11:21 AM, Owen Rubel wrote:
>>>> On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas
>>>> <ma...@apache.org> wrote:
>>>> 
>>>>> On 12/08/17 06:00, Christopher Schultz wrote:
>>>>>> Owen,
>>>>>> 
>>>>>> Please do not top-post. I have re-ordered your post to
>>>>>> be bottom-post.
>>>>>> 
>>>>>> On 8/11/17 10:12 PM, Owen Rubel wrote:
>>>>>>> On Fri, Aug 11, 2017 at 5:58 PM,
>>>>>>> <ch...@baus.net> wrote:
>>>>>> 
>>>>>>>>> Hi All,
>>>>>>>>> 
>>>>>>>>> I'm looking for a way (or a tool) in Tomcat to
>>>>>>>>> associate threads with endpoints.
>>>>>>>> 
>>>>>>>> It isn't clear to me why this would be necessary.
>>>>>>>> Threads should be allocated on demand to individual
>>>>>>>> requests. If one route sees more traffic, then it
>>>>>>>> should automatically be allocated more threads. This
>>>>>>>> could starve some requests if the maximum number of
>>>>>>>> threads had been allocated to a lessor used route,
>>>>>>>> while available threads went unused for more commonly
>>>>>>>> used route.
>>>>>> 
>>>>>>> Absolutely but it could ramp up more threads as
>>>>>>> needed.
>>>>>> 
>>>>>>> I base the logic on neuron and neuralTransmitters.
>>>>>>> When neurons talk to each other, they send back neural 
>>>>>>> transmitters to enforce that pathway.
>>>>>> 
>>>>>>> If we could do the same through threads by adding
>>>>>>> additional threads for endpoints that receive more
>>>>>>> traffic vs those which do not, it would enforce better
>>>>>>> and faster communication on those paths.> The current
>>>>>>> way Tomcat does it is not dynamic and it just applies
>>>>>>> to ALL pathways equally which is not efficient.
>>>>>> How would this improve efficiency at all?
>>>>>> 
>>>>>> There is nothing inherently "showy" or "edity" about a 
>>>>>> particular thread; each request-processing thread is 
>>>>>> indistinguishable from any other. I don't believe there
>>>>>> is a way to improve the situation even if "per-endpoint"
>>>>>> (whatever that would mean) threads were a possibility.
>>>>>> 
>>>>>> What would you attach to a thread that would make it any
>>>>>> better at editing records? Or deleting them?
>>>>> 
>>>>> And I'll add that the whole original proposal ignores a
>>>>> number of rather fundamental points about how Servlet
>>>>> containers (and web servers in general) work. To name a
>>>>> few:
>>>>> 
>>>>> - Until the request has been parsed (which requires a
>>>>> thread) Tomcat doesn't know which Servlet (endpoint) the
>>>>> request is destined for. Switching processing to a
>>>>> different thread at that point would add significant
>>>>> overhead for no benefit.
>>>>> 
>>>>> - Even after parsing, the actual Servlet that processes
>>>>> the request (if any) can change during processing (e.g. a
>>>>> Filter that conditionally forwards to a different Servlet,
>>>>> authentication, etc.)
>>>>> 
>>>>> There is nothing about a endpoint specific thread that
>>>>> would allow it to process a request more efficiently than a
>>>>> general thread.
>>>>> 
>>>>> Any per-endpoint thread-pool solution will require the 
>>>>> additional overhead to switch processing from the general
>>>>> parsing thread to the endpoint specific thread. This
>>>>> additional cost comes with zero benefits hence it will
>>>>> always be less efficient.
>>>>> 
>>>>> In short, there is no way pre-allocating threads to
>>>>> particular endpoints can improve performance compared to
>>>>> just adding the same number of additional threads to the
>>>>> general thread pool.
> 
>>>> Ah ok thank you for very concise answer. am chasing a pipe
>>>> dream I guess. Maybe there is another way to get this kind of
>>>> benefit.
> The answer is caching, and that can be done at many levels, but
> the thread-level makes the least sense due to the reasons Mark
> outlined abov e.
> 
> -chris
>> 
>> ---------------------------------------------------------------------
>>
>> 
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
>> 
>> 
> Well caching is: - related to resource not communication - is a one
> time thing and has to have a version check every time.
> 
> What I am talking about is something that improves communication as
> we notice that communication channel needing more resources. Not
> caching what is communicated... improving the CHANNEL for
> communicating the resource (whatever it may be).
> 
> But like you said, this is not something that is doable so I'll
> look elsewhere. Thanks again. :)

If you want to improve communication efficiency, I think that HTTP
isn't the protocol for you. Perhaps Websocket?

- -chris
-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - http://gpgtools.org
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIzBAEBCAAdFiEEMmKgYcQvxMe7tcJcHPApP6U8pFgFAlmPfYEACgkQHPApP6U8
pFivQg//RACaNIne3hC5nlpoQATZ1qDl6zJLrGD9qSyooRkAeQxmOS2AlIZgrdzP
c69Ze3hIRSaxGW0ecdm0weRINimtKqjJqGsEa6AOWsWyC6KUMk6L63cLIvmoNl7t
7kUK/YeOfD4dVnHBcy4QeOY0svVHP9CLPT7wnNv4qH02+ZzB3CtEXu5gOFmUg7Zk
H7R8tKKce0RtNWtf2UDrlYy2ZKrpTsp5G4KI8p3hmKeIxY1UyItNbxRTL53OMDSU
9negiFcHGxf8JKQPq+Uqh08Bj+ZGlY4tdlhmY+MWNEJnu4DQPlx67QfHGlbmz1Cc
Eegnb2Rc5DEBvnaj8Ow7vgjTAosng3BQ2dLJR9m+nfzBpGfsAFbMPDp5LEsPQH3P
Erw/OY9gUt41jqqOq0K5uiB//tu0KMfeR4XPGZ0avq12lv2zYfbp6oDIidFsAPpd
TiZOV1GsJhLVe61nG28+QTDxBuzWuaoBqPmVmNb+vT3DC37VbRG1v/9dnX3mn373
dB87iKnb8VGTbAP2loTV0OsyBtpn8ruc3WNURHgAgxKADmettG6c47WxDN2HwPpH
L6avgIkiGhS/3y7quDNo8JD08VQpuMtxiVsdt5xwsC6fdHtkNgbSG/2qCAgKQcKQ
DNiadO15ASuTm59e3Tqmk6vVhtMvsc+cWeq6T5x5tXyXPSP2VpA=
=sDX7
-----END PGP SIGNATURE-----

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: Per EndPoint Threads???

Posted by Owen Rubel <or...@gmail.com>.
On Sat, Aug 12, 2017 at 9:36 AM, Christopher Schultz <
chris@christopherschultz.net> wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
> Owen,
>
> On 8/12/17 11:21 AM, Owen Rubel wrote:
> > On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas <ma...@apache.org>
> > wrote:
> >
> >> On 12/08/17 06:00, Christopher Schultz wrote:
> >>> Owen,
> >>>
> >>> Please do not top-post. I have re-ordered your post to be
> >>> bottom-post.
> >>>
> >>> On 8/11/17 10:12 PM, Owen Rubel wrote:
> >>>> On Fri, Aug 11, 2017 at 5:58 PM, <ch...@baus.net>
> >>>> wrote:
> >>>
> >>>>>> Hi All,
> >>>>>>
> >>>>>> I'm looking for a way (or a tool) in Tomcat to associate
> >>>>>> threads with endpoints.
> >>>>>
> >>>>> It isn't clear to me why this would be necessary. Threads
> >>>>> should be allocated on demand to individual requests. If
> >>>>> one route sees more traffic, then it should automatically
> >>>>> be allocated more threads. This could starve some requests
> >>>>> if the maximum number of threads had been allocated to a
> >>>>> lesser-used route, while available threads went unused for
> >>>>> a more commonly used route.
> >>>
> >>>> Absolutely but it could ramp up more threads as needed.
> >>>
> >>>> I base the logic on neurons and neurotransmitters. When
> >>>> neurons talk to each other, they send back neurotransmitters
> >>>> to reinforce that pathway.
> >>>
> >>>> If we could do the same through threads by adding additional
> >>>> threads for endpoints that receive more traffic vs. those
> >>>> which do not, it would reinforce better and faster
> >>>> communication on those paths.
> >>>>
> >>>> The current way Tomcat does it is not dynamic and it just
> >>>> applies to ALL pathways equally, which is not efficient.
> >>> How would this improve efficiency at all?
> >>>
> >>> There is nothing inherently "showy" or "edity" about a
> >>> particular thread; each request-processing thread is
> >>> indistinguishable from any other. I don't believe there is a
> >>> way to improve the situation even if "per-endpoint" (whatever
> >>> that would mean) threads were a possibility.
> >>>
> >>> What would you attach to a thread that would make it any better
> >>> at editing records? Or deleting them?
> >>
> >> And I'll add that the whole original proposal ignores a number of
> >> rather fundamental points about how Servlet containers (and web
> >> servers in general) work. To name a few:
> >>
> >> - Until the request has been parsed (which requires a thread)
> >> Tomcat doesn't know which Servlet (endpoint) the request is
> >> destined for. Switching processing to a different thread at that
> >> point would add significant overhead for no benefit.
> >>
> >> - Even after parsing, the actual Servlet that processes the
> >> request (if any) can change during processing (e.g. a Filter that
> >> conditionally forwards to a different Servlet, authentication,
> >> etc.)
> >>
> >> There is nothing about an endpoint-specific thread that would
> >> allow it to process a request more efficiently than a general
> >> thread.
> >>
> >> Any per-endpoint thread-pool solution will require the
> >> additional overhead to switch processing from the general parsing
> >> thread to the endpoint-specific thread. This additional cost
> >> comes with zero benefits, hence it will always be less efficient.
> >>
> >> In short, there is no way pre-allocating threads to particular
> >> endpoints can improve performance compared to just adding the
> >> same number of additional threads to the general thread pool.
>
> > Ah, OK, thank you for the very concise answer. I am chasing a pipe
> > dream, I guess. Maybe there is another way to get this kind of benefit.
>
> The answer is caching, and that can be done at many levels, but the
> thread-level makes the least sense due to the reasons Mark outlined
> above.
>
> -chris
>
>
Well, caching is:
- related to the resource, not the communication
- a one-time thing that has to have a version check every time.

What I am talking about is something that improves communication as we
notice that a communication channel needs more resources. Not caching what
is communicated... improving the CHANNEL for communicating the resource
(whatever it may be).

But like you said, this is not something that is doable, so I'll look
elsewhere. Thanks again. :)

Re: Per EndPoint Threads???

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Owen,

On 8/12/17 11:21 AM, Owen Rubel wrote:
> On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas <ma...@apache.org>
> wrote:
> 
>> On 12/08/17 06:00, Christopher Schultz wrote:
>>> Owen,
>>> 
>>> Please do not top-post. I have re-ordered your post to be
>>> bottom-post.
>>> 
>>> On 8/11/17 10:12 PM, Owen Rubel wrote:
>>>> On Fri, Aug 11, 2017 at 5:58 PM, <ch...@baus.net>
>>>> wrote:
>>> 
>>>>>> Hi All,
>>>>>> 
>>>>>> I'm looking for a way (or a tool) in Tomcat to associate 
>>>>>> threads with endpoints.
>>>>> 
>>>>> It isn't clear to me why this would be necessary. Threads
>>>>> should be allocated on demand to individual requests. If
>>>>> one route sees more traffic, then it should automatically
>>>>> be allocated more threads. This could starve some requests
>>>>> if the maximum number of threads had been allocated to a
>>>>> lesser-used route, while available threads went unused for
>>>>> a more commonly used route.
>>> 
>>>> Absolutely but it could ramp up more threads as needed.
>>> 
>>>> I base the logic on neurons and neurotransmitters. When
>>>> neurons talk to each other, they send back neurotransmitters
>>>> to reinforce that pathway.
>>> 
>>>> If we could do the same through threads by adding additional
>>>> threads for endpoints that receive more traffic vs. those
>>>> which do not, it would reinforce better and faster
>>>> communication on those paths.
>>>>
>>>> The current way Tomcat does it is not dynamic and it just
>>>> applies to ALL pathways equally, which is not efficient.
>>> How would this improve efficiency at all?
>>> 
>>> There is nothing inherently "showy" or "edity" about a
>>> particular thread; each request-processing thread is
>>> indistinguishable from any other. I don't believe there is a
>>> way to improve the situation even if "per-endpoint" (whatever
>>> that would mean) threads were a possibility.
>>> 
>>> What would you attach to a thread that would make it any better
>>> at editing records? Or deleting them?
>> 
>> And I'll add that the whole original proposal ignores a number of
>> rather fundamental points about how Servlet containers (and web
>> servers in general) work. To name a few:
>> 
>> - Until the request has been parsed (which requires a thread)
>> Tomcat doesn't know which Servlet (endpoint) the request is
>> destined for. Switching processing to a different thread at that
>> point would add significant overhead for no benefit.
>> 
>> - Even after parsing, the actual Servlet that processes the
>> request (if any) can change during processing (e.g. a Filter that
>> conditionally forwards to a different Servlet, authentication,
>> etc.)
>> 
>> There is nothing about an endpoint-specific thread that would
>> allow it to process a request more efficiently than a general
>> thread.
>> 
>> Any per-endpoint thread-pool solution will require the
>> additional overhead to switch processing from the general parsing
>> thread to the endpoint-specific thread. This additional cost
>> comes with zero benefits, hence it will always be less efficient.
>> 
>> In short, there is no way pre-allocating threads to particular
>> endpoints can improve performance compared to just adding the
>> same number of additional threads to the general thread pool.

> Ah, OK, thank you for the very concise answer. I am chasing a pipe
> dream, I guess. Maybe there is another way to get this kind of benefit.

The answer is caching, and that can be done at many levels, but the
thread-level makes the least sense due to the reasons Mark outlined
above.
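
As one sketch of caching at the HTTP level (the fixed ETag value and the
60-second lifetime below are invented for the example; a real filter would
derive the ETag from the resource's actual version):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A Filter that answers conditional requests: when the client already
// has the current version, the endpoint's work is skipped entirely.
public class RevalidationFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String etag = "\"v1\""; // illustrative; derive from the real resource version

        if (etag.equals(request.getHeader("If-None-Match"))) {
            // Client copy is current: 304, no body, no endpoint work.
            response.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
            return;
        }

        response.setHeader("ETag", etag);
        response.setHeader("Cache-Control", "max-age=60"); // recheck after 60s
        chain.doFilter(req, res);
    }

    public void init(FilterConfig config) {}

    public void destroy() {}
}

The same idea applies at other levels too: a cache in front of Tomcat, or
an object cache inside the application.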

-chris



Re: Per EndPoint Threads???

Posted by Owen Rubel <or...@gmail.com>.
Ah, OK, thank you for the very concise answer. I am chasing a pipe dream,
I guess. Maybe there is another way to get this kind of benefit.

Thanks again for your answer.

Owen Rubel
orubel@gmail.com

On Sat, Aug 12, 2017 at 1:19 AM, Mark Thomas <ma...@apache.org> wrote:

> On 12/08/17 06:00, Christopher Schultz wrote:
> > Owen,
> >
> > Please do not top-post. I have re-ordered your post to be bottom-post.
> >
> > On 8/11/17 10:12 PM, Owen Rubel wrote:
> >> On Fri, Aug 11, 2017 at 5:58 PM, <ch...@baus.net> wrote:
> >
> >>>> Hi All,
> >>>>
> >>>> I'm looking for a way (or a tool) in Tomcat to associate
> >>>> threads with endpoints.
> >>>
> >>> It isn't clear to me why this would be necessary. Threads should
> >>> be allocated on demand to individual requests. If one route sees
> >>> more traffic, then it should automatically be allocated more
> >>> threads. This could starve some requests if the maximum number of
> >>> threads had been allocated to a lesser-used route, while
> >>> available threads went unused for a more commonly used route.
> >
> >> Absolutely but it could ramp up more threads as needed.
> >
> >> I base the logic on neurons and neurotransmitters. When neurons
> >> talk to each other, they send back neurotransmitters to reinforce
> >> that pathway.
> >
> >> If we could do the same through threads by adding additional
> >> threads for endpoints that receive more traffic vs. those which do
> >> not, it would reinforce better and faster communication on those
> >> paths.
> >>
> >> The current way Tomcat does it is not dynamic and it just
> >> applies to ALL pathways equally, which is not efficient.
> > How would this improve efficiency at all?
> >
> > There is nothing inherently "showy" or "edity" about a particular
> > thread; each request-processing thread is indistinguishable from any
> > other. I don't believe there is a way to improve the situation even if
> > "per-endpoint" (whatever that would mean) threads were a possibility.
> >
> > What would you attach to a thread that would make it any better at
> > editing records? Or deleting them?
>
> And I'll add that the whole original proposal ignores a number of rather
> fundamental points about how Servlet containers (and web servers in
> general) work. To name a few:
>
> - Until the request has been parsed (which requires a thread) Tomcat
> doesn't know which Servlet (endpoint) the request is destined for.
> Switching processing to a different thread at that point would add
> significant overhead for no benefit.
>
> - Even after parsing, the actual Servlet that processes the request (if
> any) can change during processing (e.g. a Filter that conditionally
> forwards to a different Servlet, authentication, etc.)
>
> There is nothing about an endpoint-specific thread that would allow it to
> process a request more efficiently than a general thread.
>
> Any per-endpoint thread-pool solution will require the additional
> overhead to switch processing from the general parsing thread to the
> endpoint-specific thread. This additional cost comes with zero benefits,
> hence it will always be less efficient.
>
> In short, there is no way pre-allocating threads to particular endpoints
> can improve performance compared to just adding the same number of
> additional threads to the general thread pool.
>
> Mark
>
>

Re: Per EndPoint Threads???

Posted by Mark Thomas <ma...@apache.org>.
On 12/08/17 06:00, Christopher Schultz wrote:
> Owen,
> 
> Please do not top-post. I have re-ordered your post to be bottom-post.
> 
> On 8/11/17 10:12 PM, Owen Rubel wrote:
>> On Fri, Aug 11, 2017 at 5:58 PM, <ch...@baus.net> wrote:
> 
>>>> Hi All,
>>>>
>>>> I'm looking for a way (or a tool) in Tomcat to associate
>>>> threads with endpoints.
>>>
>>> It isn't clear to me why this would be necessary. Threads should
>>> be allocated on demand to individual requests. If one route sees
>>> more traffic, then it should automatically be allocated more
>>> threads. This could starve some requests if the maximum number of
>>> threads had been allocated to a lesser-used route, while
>>> available threads went unused for a more commonly used route.
> 
>> Absolutely but it could ramp up more threads as needed.
> 
>> I base the logic on neurons and neurotransmitters. When neurons
>> talk to each other, they send back neurotransmitters to reinforce
>> that pathway.
> 
>> If we could do the same through threads by adding additional
>> threads for endpoints that receive more traffic vs. those which do
>> not, it would reinforce better and faster communication on those
>> paths.
>>
>> The current way Tomcat does it is not dynamic and it just
>> applies to ALL pathways equally, which is not efficient.
> How would this improve efficiency at all?
> 
> There is nothing inherently "showy" or "edity" about a particular
> thread; each request-processing thread is indistinguishable from any
> other. I don't believe there is a way to improve the situation even if
> "per-endpoint" (whatever that would mean) threads were a possibility.
> 
> What would you attach to a thread that would make it any better at
> editing records? Or deleting them?

And I'll add that the whole original proposal ignores a number of rather
fundamental points about how Servlet containers (and web servers in
general) work. To name a few:

- Until the request has been parsed (which requires a thread) Tomcat
doesn't know which Servlet (endpoint) the request is destined for.
Switching processing to a different thread at that point would add
significant overhead for no benefit.

- Even after parsing, the actual Servlet that processes the request (if
any) can change during processing (e.g. a Filter that conditionally
forwards to a different Servlet, authentication, etc.)

There is nothing about an endpoint-specific thread that would allow it to
process a request more efficiently than a general thread.

Any per-endpoint thread-pool solution will require the additional
overhead to switch processing from the general parsing thread to the
endpoint-specific thread. This additional cost comes with zero benefits,
hence it will always be less efficient.

In short, there is no way pre-allocating threads to particular endpoints
can improve performance compared to just adding the same number of
additional threads to the general thread pool.
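
To make the hand-off cost concrete, a toy sketch (hypothetical code, not
Tomcat internals): both paths do the same work, but the second one first
pays for a queue insertion, a worker wake-up, and a context switch.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class HandoffDemo {

    // Stand-in for a hypothetical per-endpoint pool.
    static final ExecutorService showPool = Executors.newFixedThreadPool(8);

    // General pool model: the thread that parsed the request handles it.
    static String handleOnSameThread(String request) {
        return "handled " + request;
    }

    // Per-endpoint pool model: same work, plus the hand-off overhead.
    static Future<String> handleWithHandoff(String request) {
        return showPool.submit(() -> "handled " + request);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleOnSameThread("GET /v0.1/user/show"));
        System.out.println(handleWithHandoff("GET /v0.1/user/show").get());
        showPool.shutdown();
    }
}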

Mark



Re: Per EndPoint Threads???

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Owen,

Please do not top-post. I have re-ordered your post to be bottom-post.

On 8/11/17 10:12 PM, Owen Rubel wrote:
> On Fri, Aug 11, 2017 at 5:58 PM, <ch...@baus.net> wrote:
> 
>>> Hi All,
>>> 
>>> I'm looking for a way (or a tool) in Tomcat to associate
>>> threads with endpoints.
>> 
>> It isn't clear to me why this would be necessary. Threads should
>> be allocated on demand to individual requests. If one route sees
>> more traffic, then it should automatically be allocated more
>> threads. This could starve some requests if the maximum number of
>> threads had been allocated to a lesser-used route, while
>> available threads went unused for a more commonly used route.
> 
> Absolutely but it could ramp up more threads as needed.
> 
> I base the logic on neurons and neurotransmitters. When neurons
> talk to each other, they send back neurotransmitters to reinforce
> that pathway.
> 
> If we could do the same through threads by adding additional
> threads for endpoints that receive more traffic vs. those which do
> not, it would reinforce better and faster communication on those
> paths.
>
> The current way Tomcat does it is not dynamic and it just
> applies to ALL pathways equally, which is not efficient.

How would this improve efficiency at all?

There is nothing inherently "showy" or "edity" about a particular
thread; each request-processing thread is indistinguishable from any
other. I don't believe there is a way to improve the situation even if
"per-endpoint" (whatever that would mean) threads were a possibility.

What would you attach to a thread that would make it any better at
editing records? Or deleting them?

-chris



Re: Per EndPoint Threads???

Posted by Owen Rubel <or...@gmail.com>.
Absolutely but it could ramp up more threads as needed.

I base the logic on neurons and neurotransmitters. When neurons talk to
each other, they send back neurotransmitters to reinforce that pathway.

If we could do the same through threads by adding additional threads for
endpoints that receive more traffic vs. those which do not, it would
reinforce better and faster communication on those paths.

The current way Tomcat does it is not dynamic and it just applies to ALL
pathways equally, which is not efficient.


Owen Rubel
orubel@gmail.com

On Fri, Aug 11, 2017 at 5:58 PM, <ch...@baus.net> wrote:

> > Hi All,
> >
> > I'm looking for a way (or a tool) in Tomcat to associate threads with
> > endpoints.
>
> It isn't clear to me why this would be necessary. Threads should be
> allocated on demand to individual requests. If one route sees more
> traffic, then it should automatically be allocated more threads. This
> could starve some requests if the maximum number of threads had been
> allocated to a lesser-used route, while available threads went unused
> for a more commonly used route.
>

Re: Per EndPoint Threads???

Posted by ch...@baus.net.
> Hi All,
> 
> I'm looking for a way (or a tool) in Tomcat to associate threads with
> endpoints.

It isn't clear to me why this would be necessary. Threads should be
allocated on demand to individual requests. If one route sees more
traffic, then it should automatically be allocated more threads. This
could starve some requests if the maximum number of threads had been
allocated to a lesser-used route, while available threads went unused
for a more commonly used route.
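
A toy sketch of that on-demand model with a plain java.util.concurrent
pool (the routes and request counts are invented for the example): no
static per-route split is configured, yet the busy route ends up holding
most of the shared threads simply because its requests keep arriving.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SharedPoolDemo {

    static void handle(String route) {
        System.out.println(Thread.currentThread().getName() + " -> " + route);
    }

    public static void main(String[] args) throws InterruptedException {
        // One shared pool, like a connector's maxThreads.
        ExecutorService pool = Executors.newFixedThreadPool(10);

        // 50 "show" requests vs. 2 "delete" requests: the busy route
        // claims free threads as its requests arrive; nothing is reserved.
        for (int i = 0; i < 50; i++) pool.submit(() -> handle("/v0.1/user/show"));
        for (int i = 0; i < 2; i++)  pool.submit(() -> handle("/v0.1/user/delete"));

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}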

