Posted to httpclient-users@hc.apache.org by Thomas Boniface <th...@stickyads.tv> on 2015/05/07 11:22:08 UTC

Pool congestion

Hi,

I have an application that receives an HTTP request from a user and
contacts multiple external servers. When each external server has responded
(or the servlet timeout is reached), an HTTP response is built from the
external servers' responses.
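
In rough Java terms, the flow is something like this (just a sketch: the
host names are made up, servletTimeoutMs stands for our servlet timeout, and
error handling is omitted):

    import java.util.Arrays;
    import java.util.List;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import org.apache.http.HttpResponse;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.concurrent.FutureCallback;
    import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
    import org.apache.http.impl.nio.client.HttpAsyncClients;

    CloseableHttpAsyncClient client = HttpAsyncClients.createDefault();
    client.start();

    List<HttpGet> upstreamRequests = Arrays.asList(
            new HttpGet("http://upstream-a.example.com/"),
            new HttpGet("http://upstream-b.example.com/"));
    final CountDownLatch latch = new CountDownLatch(upstreamRequests.size());
    final Queue<HttpResponse> responses =
            new ConcurrentLinkedQueue<HttpResponse>();

    // Fan out: fire all upstream requests without blocking.
    for (HttpGet request : upstreamRequests) {
        client.execute(request, new FutureCallback<HttpResponse>() {
            public void completed(HttpResponse response) {
                responses.add(response);
                latch.countDown();
            }
            public void failed(Exception ex) {
                latch.countDown();
            }
            public void cancelled() {
                latch.countDown();
            }
        });
    }

    // Wait until every upstream has answered, or give up once the servlet
    // timeout is reached, then build the user response from 'responses'.
    latch.await(servletTimeoutMs, TimeUnit.MILLISECONDS);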

I think that in such a case, when one of the external servers has trouble
responding as fast as it should, my application will become less and less
responsive: it will wait for the servlet timeout before responding to the
client, and other incoming client requests will experience the same
problem.

Thinking about how to prevent such cases, I first considered ways to
decrease the number of requests made to routes with bad performance (by
implementing an exponential backoff mechanism, for instance), but it
occurred to me that it may be possible to prevent this just by modifying
the pool configuration. My idea would be to greatly decrease the connection
request timeout (setConnectionRequestTimeout); my understanding is that if
the http async client cannot lease a connection within, say, 5 ms, the
route is probably overloaded.
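
Concretely, I would configure something like this (a minimal sketch against
HttpAsyncClient 4.x; the timeout values are only examples):

    import org.apache.http.client.config.RequestConfig;
    import org.apache.http.impl.nio.client.CloseableHttpAsyncClient;
    import org.apache.http.impl.nio.client.HttpAsyncClients;

    // Fail fast when no pooled connection for the route becomes available.
    RequestConfig requestConfig = RequestConfig.custom()
            .setConnectionRequestTimeout(5) // ms to wait for a connection lease
            .setConnectTimeout(1000)
            .setSocketTimeout(1000)
            .build();

    CloseableHttpAsyncClient client = HttpAsyncClients.custom()
            .setDefaultRequestConfig(requestConfig)
            .build();
    client.start();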

Is this the right approach for this type of scenario?

Thanks,
Thomas

Re: Pool congestion

Posted by Stefan Magnus Landrø <st...@gmail.com>.
It might be worthwhile to check out Hystrix instead:
https://github.com/Netflix/Hystrix
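
E.g. each upstream call could be wrapped in a command along these lines (a
rough sketch against Hystrix 1.x; the executeHttpCall helper is hypothetical):

    import com.netflix.hystrix.HystrixCommand;
    import com.netflix.hystrix.HystrixCommandGroupKey;

    public class UpstreamCallCommand extends HystrixCommand<String> {

        private final String url;

        public UpstreamCallCommand(String url) {
            super(HystrixCommandGroupKey.Factory.asKey("upstream"));
            this.url = url;
        }

        @Override
        protected String run() throws Exception {
            // The actual HTTP call goes here; Hystrix applies a timeout and
            // short-circuits the route when it keeps failing.
            return executeHttpCall(url); // hypothetical helper
        }

        @Override
        protected String getFallback() {
            // Returned when the upstream is slow, failing or short-circuited.
            return "";
        }
    }

A command instance is then run with execute() (blocking) or queue()
(asynchronous), and the circuit breaker takes care of cutting off an
upstream that keeps timing out.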

2015-05-13 9:16 GMT+02:00 Thomas Boniface <th...@stickyads.tv>:

> Thanks for the pointers, this does seem similar to what I was thinking of.
> My understanding is that this is currently not possible with the http
> async client. Applying this logic to the httpasyncclient would mean
> dynamically changing the route max connections depending on how well they
> behave. Is that right?
>
> Thomas
>
> 2015-05-08 9:46 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:
>
> > On Thu, 2015-05-07 at 11:22 +0200, Thomas Boniface wrote:
> > > Hi,
> > >
> > > I have an application that receives an HTTP request from a user and
> > > contacts multiple external servers. When each external server has
> > > responded (or the servlet timeout is reached), an HTTP response is built
> > > from the external servers' responses.
> > >
> > > I think that in such a case, when one of the external servers has trouble
> > > responding as fast as it should, my application will become less and less
> > > responsive: it will wait for the servlet timeout before responding to the
> > > client, and other incoming client requests will experience the same
> > > problem.
> > >
> > > Thinking about how to prevent such cases, I first considered ways to
> > > decrease the number of requests made to routes with bad performance (by
> > > implementing an exponential backoff mechanism, for instance), but it
> > > occurred to me that it may be possible to prevent this just by modifying
> > > the pool configuration. My idea would be to greatly decrease the
> > > connection request timeout (setConnectionRequestTimeout); my
> > > understanding is that if the http async client cannot lease a connection
> > > within, say, 5 ms, the route is probably overloaded.
> > >
> > > Is this the right approach for this type of scenario?
> > >
> > > Thanks,
> > > Thomas
> >
> > Thomas
> >
> > HttpClient ships with a so-called AIMD back-off manager
> >
> > http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/AIMDBackoffManager.html
> >
> > http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/DefaultBackoffStrategy.html
> >
> > You might be able to use it (or some custom implementation) for your
> > purpose.
> >
> > Oleg
> >
> >
>



-- 
BEKK Open
http://open.bekk.no

TesTcl - a unit test framework for iRules
http://testcl.com

Re: Pool congestion

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Wed, 2015-05-27 at 15:42 +0200, Thomas Boniface wrote:
> Oleg, when you say "You really do not want several dozen threads trying to
> initiate a request all at the same time", I'm a bit surprised: this is
> basically what our application needs to do, and what I thought http async
> was about. This leads me to the following questions:
> 
> Why shouldn't we do this

Because this is what is causing the lock contention you have been seeing.
There are various ways of making sure that does not happen, but they all
imply certain design decisions.

>  or should we use http-async in a way compliant with
> its design? Should we look into another technology if http-async does not
> fit this use case?
> 

One generally uses non-blocking i/o in order to have few threads manage
many connections. In your case, though, you have quite a number of threads
managing incoming connections and the same number managing outgoing
connections. If all those threads race to execute a request on an
outgoing connection, you get what you get: contention on the lock that
guards the connection pool. So, you have to decide if that is what you
want. If it is not, you have options: (1) build your own lock-less pool,
at the expense of not being able to impose a strict max total / max per
route limit; (2) use an intermediate queue for requests and a small
number of threads that submit them to the client for execution, at the
expense of higher complexity.
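
A rough sketch of (2), with made-up names ('client' being the shared
CloseableHttpAsyncClient and callbackFor() being whatever produces your
completion callback):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.http.client.methods.HttpUriRequest;

    final BlockingQueue<HttpUriRequest> pending =
            new ArrayBlockingQueue<HttpUriRequest>(10000);

    // A small, fixed number of submitter threads; only they ever touch the
    // pool lock, so contention stays bounded no matter how many servlet
    // threads there are.
    ExecutorService submitters = Executors.newFixedThreadPool(2);
    for (int i = 0; i < 2; i++) {
        submitters.execute(new Runnable() {
            public void run() {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        HttpUriRequest request = pending.take();
                        client.execute(request, callbackFor(request));
                    }
                } catch (InterruptedException ex) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    // Servlet threads only enqueue, and can fail fast when the queue is full:
    boolean accepted = pending.offer(request);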

Oleg 
PS: generally, building a good HTTP proxy / gateway is _hard_.

> Thanks
> 
> 2015-05-27 13:26 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:
> 
> > On Wed, 2015-05-27 at 10:51 +0200, Thomas Boniface wrote:
> > > Hi,
> > >
> > > We managed to observe our issue once again, this time with connection
> > > logging. Here is the situation:
> > >
> > > No activity was detected in the application log for a few seconds,
> > > triggering a Tomcat thread dump.
> > >
> > > The thread dump shows all http-nio-127.0.0.1-8080-exec-* threads waiting
> > on
> > >
> > org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.requestConnection
> > > while all I/O dispatcher threads are waiting on
> > >
> > org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection.
> > >
> > > The application log (covering the last 2 seconds of activity before the
> > > application got stuck) shows activity on the http async client up to the
> > > point where we received requests that should trigger http communication
> > > to upstream servers, but nothing is logged by the http client (I assume
> > > because the threads handling incoming requests are waiting to lease a
> > > connection).
> > >
> > > Here are the corresponding logs:
> > > http://s000.tinyupload.com/index.php?file_id=15495126975418045294
> > >
> > > Thanks,
> > > Thomas
> > >
> >
> > Thomas
> > I am sorry, but I really cannot sift through 300MB of logs. What I can
> > glean from a cursory look is that there appears to be no deadlock, just a
> > lot of threads contending for the lock <0x00000000be315a50>. The lock
> > does not appear to be held by any thread.
> >
> > Therefore this is unlikely to be due to a bug in HttpClient and more
> > likely to be due to the design of your application. One thing that I
> > find bizarre is why so many threads are contending for the lock in the
> > first place. This looks wrong. You really do not want several dozen
> > threads trying to initiate a request all at the same time.
> >
> > Oleg
> >
> >
> >
> >





Re: Pool congestion

Posted by Thomas Boniface <th...@stickyads.tv>.
Oleg, when you say "You really do not want several dozen threads trying to
initiate a request all at the same time", I'm a bit surprised: this is
basically what our application needs to do, and what I thought http async
was about. This leads me to the following questions:

Why shouldn't we do this, or should we use http-async in a way compliant
with its design? Should we look into another technology if http-async does
not fit this use case?

Thanks

2015-05-27 13:26 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:

> On Wed, 2015-05-27 at 10:51 +0200, Thomas Boniface wrote:
> > Hi,
> >
> > We managed to observe our issue once again, this time with connection
> > logging. Here is the situation:
> >
> > No activity was detected in the application log for a few seconds,
> > triggering a Tomcat thread dump.
> >
> > The thread dump shows all http-nio-127.0.0.1-8080-exec-* threads waiting
> on
> >
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.requestConnection
> > while all I/O dispatcher threads are waiting on
> >
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection.
> >
> > The application log (covering the last 2 seconds of activity before the
> > application got stuck) shows activity on the http async client up to the
> > point where we received requests that should trigger http communication
> > to upstream servers, but nothing is logged by the http client (I assume
> > because the threads handling incoming requests are waiting to lease a
> > connection).
> >
> > Here are the corresponding logs:
> > http://s000.tinyupload.com/index.php?file_id=15495126975418045294
> >
> > Thanks,
> > Thomas
> >
>
> Thomas
> I am sorry, but I really cannot sift through 300MB of logs. What I can
> glean from a cursory look is that there appears to be no deadlock, just a
> lot of threads contending for the lock <0x00000000be315a50>. The lock
> does not appear to be held by any thread.
>
> Therefore this is unlikely to be due to a bug in HttpClient and more
> likely to be due to the design of your application. One thing that I
> find bizarre is why so many threads are contending for the lock in the
> first place. This looks wrong. You really do not want several dozen
> threads trying to initiate a request all at the same time.
>
> Oleg
>
>
>
>

Re: Pool congestion

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Wed, 2015-05-27 at 10:51 +0200, Thomas Boniface wrote:
> Hi,
> 
> We managed to observe our issue once again, this time with connection
> logging. Here is the situation:
> 
> No activity was detected in the application log for a few seconds,
> triggering a Tomcat thread dump.
> 
> The thread dump shows all http-nio-127.0.0.1-8080-exec-* threads waiting on
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.requestConnection
> while all I/O dispatcher threads are waiting on
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection.
> 
> The application log (covering the last 2 seconds of activity before the
> application got stuck) shows activity on the http async client up to the
> point where we received requests that should trigger http communication
> to upstream servers, but nothing is logged by the http client (I assume
> because the threads handling incoming requests are waiting to lease a
> connection).
> 
> Here are the corresponding logs:
> http://s000.tinyupload.com/index.php?file_id=15495126975418045294
> 
> Thanks,
> Thomas
> 

Thomas
I am sorry, but I really cannot sift through 300MB of logs. What I can
glean from a cursory look is that there appears to be no deadlock, just a
lot of threads contending for the lock <0x00000000be315a50>. The lock
does not appear to be held by any thread.

Therefore this is unlikely to be due to a bug in HttpClient and more
likely to be due to the design of your application. One thing that I
find bizarre is why so many threads are contending for the lock in the
first place. This looks wrong. You really do not want several dozen
threads trying to initiate a request all at the same time.

Oleg






Re: Pool congestion

Posted by Thomas Boniface <th...@stickyads.tv>.
Hi,

We managed to observe our issue once again, this time with connection
logging. Here is the situation:

No activity was detected in the application log for a few seconds,
triggering a Tomcat thread dump.

The thread dump shows all http-nio-127.0.0.1-8080-exec-* threads waiting on
org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.requestConnection
while all I/O dispatcher threads are waiting on
org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.releaseConnection.

The application log (covering the last 2 seconds of activity before the
application got stuck) shows activity on the http async client up to the
point where we received requests that should trigger http communication
to upstream servers, but nothing is logged by the http client (I assume
because the threads handling incoming requests are waiting to lease a
connection).

Here are the corresponding logs:
http://s000.tinyupload.com/index.php?file_id=15495126975418045294

Thanks,
Thomas

2015-05-21 11:46 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:

> On Wed, 2015-05-20 at 17:53 +0200, Thomas Boniface wrote:
> > Just to make sure: as I'm using log4j2, the configuration is slightly
> > different. When testing locally I get logs that look like this:
> >
> > org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> > 2015-05-20 17:45:56,595 DEBUG http-nio-8080-exec-3 [Req_12] [    ]
> > Connection request: [route: {}->http://sandbox.stickyadstv.com:80][total
> > kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 10]
> > org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> > 2015-05-20 17:45:56,617 DEBUG I/O dispatcher 6 [    ] [    ] Connection
> > leased: [id: http-outgoing-5][route:
> > {}->http://sandbox.stickyadstv.com:80][total
> > kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 10]
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,617 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:]: Set attribute
> > http.nio.exchange-handler
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Event set [w]
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Set timeout 0
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Set attribute
> > http.nio.http-exchange-state
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Set timeout 60000
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Event set [w]
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:w]: 883 bytes written
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:w]: Event cleared [w]
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:r]: 1349 bytes read
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:r]: Remove attribute
> > http.nio.exchange-handler
> > org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> > 2015-05-20 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] Releasing
> > connection: [id: http-outgoing-5][route:
> > {}->http://sandbox.stickyadstv.com:80][total kept alive: 0; route
> > allocated: 1 of 10; total allocated: 1 of 10]
> > org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> > 2015-05-20 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] Connection
> > [id: http-outgoing-5][route: {}->http://sandbox.stickyadstv.com:80] can
> be
> > kept alive for 15.0 seconds
> > org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl:
> 2015-05-20
> > 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> > 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:r]: Set timeout 0
> > org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> > 2015-05-20 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] Connection
> > released: [id: http-outgoing-5][route:
> > {}->http://sandbox.stickyadstv.com:80][total
> > kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 10]
> >
> > Would it be helpful to log this in a real production environment (this
> > will represent a high volume of data considering the number of requests
> > processed: it could be up to 200k lines per second) to find out why the
> > lock contention seems to occur?
> >
> > Thomas
> >
>
> Hi Thomas
>
> I might be able to take a cursory look at the log, but you should try to
> isolate events that happen at the time of high lock contention and
> understand what the client was trying to do. If you find all your i/o
> dispatch threads trying to grab or release a connection at approximately
> the same time, that should explain the lock contention.
>
> Oleg
>
>
>

Re: Pool congestion

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Wed, 2015-05-20 at 17:53 +0200, Thomas Boniface wrote:
> Just to make sure: as I'm using log4j2, the configuration is slightly
> different. When testing locally I get logs that look like this:
> 
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> 2015-05-20 17:45:56,595 DEBUG http-nio-8080-exec-3 [Req_12] [    ]
> Connection request: [route: {}->http://sandbox.stickyadstv.com:80][total
> kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 10]
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> 2015-05-20 17:45:56,617 DEBUG I/O dispatcher 6 [    ] [    ] Connection
> leased: [id: http-outgoing-5][route:
> {}->http://sandbox.stickyadstv.com:80][total
> kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 10]
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,617 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:]: Set attribute
> http.nio.exchange-handler
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Event set [w]
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Set timeout 0
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Set attribute
> http.nio.http-exchange-state
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Set timeout 60000
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Event set [w]
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:w]: 883 bytes written
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:w]: Event cleared [w]
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:r]: 1349 bytes read
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:r]: Remove attribute
> http.nio.exchange-handler
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> 2015-05-20 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] Releasing
> connection: [id: http-outgoing-5][route:
> {}->http://sandbox.stickyadstv.com:80][total kept alive: 0; route
> allocated: 1 of 10; total allocated: 1 of 10]
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> 2015-05-20 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] Connection
> [id: http-outgoing-5][route: {}->http://sandbox.stickyadstv.com:80] can be
> kept alive for 15.0 seconds
> org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
> 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
> 192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:r]: Set timeout 0
> org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
> 2015-05-20 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] Connection
> released: [id: http-outgoing-5][route:
> {}->http://sandbox.stickyadstv.com:80][total
> kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 10]
> 
> Would it be helpful to log this in a real production environment (this will
> represent a high volume of data considering the number of requests
> processed: it could be up to 200k lines per second) to find out why the
> lock contention seems to occur?
> 
> Thomas
> 

Hi Thomas

I might be able to take a cursory look at the log, but you should try to
isolate events that happen at the time of high lock contention and
understand what the client was trying to do. If you find all your i/o
dispatch threads trying to grab or release a connection at approximately
the same time, that should explain the lock contention.

Oleg





Re: Pool congestion

Posted by Thomas Boniface <th...@stickyads.tv>.
Just to make sure: as I'm using log4j2, the configuration is slightly
different. When testing locally I get logs that look like this:

org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
2015-05-20 17:45:56,595 DEBUG http-nio-8080-exec-3 [Req_12] [    ]
Connection request: [route: {}->http://sandbox.stickyadstv.com:80][total
kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 10]
org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
2015-05-20 17:45:56,617 DEBUG I/O dispatcher 6 [    ] [    ] Connection
leased: [id: http-outgoing-5][route:
{}->http://sandbox.stickyadstv.com:80][total
kept alive: 0; route allocated: 1 of 10; total allocated: 1 of 10]
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,617 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:]: Set attribute
http.nio.exchange-handler
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Event set [w]
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Set timeout 0
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Set attribute
http.nio.http-exchange-state
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Set timeout 60000
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:]: Event set [w]
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][rw:w]: 883 bytes written
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,618 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:w]: Event cleared [w]
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:r]: 1349 bytes read
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:r]: Remove attribute
http.nio.exchange-handler
org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
2015-05-20 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] Releasing
connection: [id: http-outgoing-5][route:
{}->http://sandbox.stickyadstv.com:80][total kept alive: 0; route
allocated: 1 of 10; total allocated: 1 of 10]
org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
2015-05-20 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] Connection
[id: http-outgoing-5][route: {}->http://sandbox.stickyadstv.com:80] can be
kept alive for 15.0 seconds
org.apache.http.impl.nio.conn.ManagedNHttpClientConnectionImpl: 2015-05-20
17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] http-outgoing-5
192.168.0.89:54298<->5.135.147.172:80[ACTIVE][r:r]: Set timeout 0
org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager:
2015-05-20 17:45:56,653 DEBUG I/O dispatcher 6 [    ] [    ] Connection
released: [id: http-outgoing-5][route:
{}->http://sandbox.stickyadstv.com:80][total
kept alive: 1; route allocated: 1 of 10; total allocated: 1 of 10]

Would it be helpful to log this in a real production environment (this will
represent a high volume of data considering the number of requests
processed: it could be up to 200k lines per second) to find out why the
lock contention seems to occur?

Thomas

2015-05-20 16:09 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:

> On Wed, 2015-05-20 at 16:07 +0200, Thomas Boniface wrote:
> > I assume that when using the http async client the package to log is
> > org.apache.http.impl.nio.conn instead of org.apache.http.impl.conn?
> >
>
> Yes. Or some such.
>
> Oleg
>
> > We are using the latest stable release already.
> >
> > 2015-05-19 10:30 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:
> >
> > > On Mon, 2015-05-18 at 11:11 +0200, Thomas Boniface wrote:
> > > > Thanks for your answers; Hystrix also seems pretty interesting. I'll
> > > > have a look into it.
> > > >
> > > > Regarding my problem, I observed some cases where my application
> > > > apparently becomes stuck. After nothing happens in my application log
> > > > for a couple of seconds, a thread dump is made. The thread dump showed
> > > > that all I/O dispatcher threads and all the http nio threads were
> > > > waiting for a lock from AbstractNIOConnPool.
> > > >
> > >
> > > Please run the client with context logging for connection management
> > > turned on as described here to find out why HttpClient is trying to
> > > acquire the pool lock.
> > >
> > > http://hc.apache.org/httpcomponents-client-4.4.x/logging.html
> > >
> > > Please also make sure you are using the latest stable release of
> > > HttpAsyncClient (which is 4.1).
> > >
> > > Oleg
> > >
> > >
> > >
>
>
>

Re: Pool congestion

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Wed, 2015-05-20 at 16:07 +0200, Thomas Boniface wrote:
> I assume that when using the http async client the package to log is
> org.apache.http.impl.nio.conn instead of org.apache.http.impl.conn?
> 

Yes. Or some such.

Oleg

> We are using the latest stable release already.
> 
> 2015-05-19 10:30 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:
> 
> > On Mon, 2015-05-18 at 11:11 +0200, Thomas Boniface wrote:
> > > Thanks for your answers; Hystrix also seems pretty interesting. I'll
> > > have a look into it.
> > >
> > > Regarding my problem, I observed some cases where my application
> > > apparently becomes stuck. After nothing happens in my application log
> > > for a couple of seconds, a thread dump is made. The thread dump showed
> > > that all I/O dispatcher threads and all the http nio threads were
> > > waiting for a lock from AbstractNIOConnPool.
> > >
> >
> > Please run the client with context logging for connection management
> > turned on as described here to find out why HttpClient is trying to
> > acquire the pool lock.
> >
> > http://hc.apache.org/httpcomponents-client-4.4.x/logging.html
> >
> > Please also make sure you are using the latest stable release of
> > HttpAsyncClient (which is 4.1).
> >
> > Oleg
> >
> >
> >





Re: Pool congestion

Posted by Thomas Boniface <th...@stickyads.tv>.
I assume that when using the http async client the package to log is
org.apache.http.impl.nio.conn instead of org.apache.http.impl.conn?

We are using the latest stable release already.

2015-05-19 10:30 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:

> On Mon, 2015-05-18 at 11:11 +0200, Thomas Boniface wrote:
> > Thanks for your answers; Hystrix also seems pretty interesting. I'll
> > have a look into it.
> >
> > Regarding my problem, I observed some cases where my application
> > apparently becomes stuck. After nothing happens in my application log
> > for a couple of seconds, a thread dump is made. The thread dump showed
> > that all I/O dispatcher threads and all the http nio threads were
> > waiting for a lock from AbstractNIOConnPool.
> >
>
> Please run the client with context logging for connection management
> turned on as described here to find out why HttpClient is trying to
> acquire the pool lock.
>
> http://hc.apache.org/httpcomponents-client-4.4.x/logging.html
>
> Please also make sure you are using the latest stable release of
> HttpAsyncClient (which is 4.1).
>
> Oleg
>
>
>

Re: Pool congestion

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Mon, 2015-05-18 at 11:11 +0200, Thomas Boniface wrote:
> Thanks for your answers; Hystrix also seems pretty interesting. I'll
> have a look into it.
>
> Regarding my problem, I observed some cases where my application
> apparently becomes stuck. After nothing happens in my application log
> for a couple of seconds, a thread dump is made. The thread dump showed
> that all I/O dispatcher threads and all the http nio threads were
> waiting for a lock from AbstractNIOConnPool.
> 

Please run the client with context logging for connection management
turned on as described here to find out why HttpClient is trying to
acquire the pool lock.

http://hc.apache.org/httpcomponents-client-4.4.x/logging.html

Please also make sure you are using the latest stable release of
HttpAsyncClient (which is 4.1).

Oleg





Re: Pool congestion

Posted by Thomas Boniface <th...@stickyads.tv>.
Thanks for your answers; Hystrix also seems pretty interesting. I'll have a
look into it.

Regarding my problem, I observed some cases where my application apparently
becomes stuck. After nothing happens in my application log for a couple
of seconds, a thread dump is made. The thread dump showed that all I/O
dispatcher threads and all the http nio threads were waiting for a lock
from AbstractNIOConnPool.

Here is an example:

"http-nio-127.0.0.1-8080-exec-100" daemon prio=10 tid=0x00007f7a64a21000
nid=0x56c2 waiting on condition [0x00007f7a4e8e6000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x00000000ba27ded8> (a
java.util.concurrent.locks.ReentrantLock$NonfairSync)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
        at
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
        at
java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
        at
java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
        at
org.apache.http.nio.pool.AbstractNIOConnPool.lease(AbstractNIOConnPool.java:271)
        at
org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.requestConnection(PoolingNHttpClientConnectionManager.java:265)
        at
org.apache.http.impl.nio.client.AbstractClientExchangeHandler.requestConnection(AbstractClientExchangeHandler.java:358)
        at
org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.start(DefaultClientExchangeHandlerImpl.java:125)
        at
org.apache.http.impl.nio.client.InternalHttpAsyncClient.execute(InternalHttpAsyncClient.java:141)
        at
org.apache.http.impl.nio.client.CloseableHttpAsyncClient.execute(CloseableHttpAsyncClient.java:74)
        at
org.apache.http.impl.nio.client.CloseableHttpAsyncClient.execute(CloseableHttpAsyncClient.java:107)
        at
org.apache.http.impl.nio.client.CloseableHttpAsyncClient.execute(CloseableHttpAsyncClient.java:91)
        at
com.stickyadstv.adex.bidder.openrtb.OpenRTBBuyerPlatform.sendBidRequest(OpenRTBBuyerPlatform.java:117)
        at
com.stickyadstv.adex.Auctioneer.sendBidRequests(Auctioneer.java:338)
        at com.stickyadstv.adex.Auctioneer.startAuction(Auctioneer.java:152)
        at
com.stickyadstv.adex.bidder.marketplace.MarketPlaceBuyerPlatform.startMarketPlaceAuction(MarketPlaceBuyerPlatform.java:144)
        at
com.stickyadstv.adex.bidder.marketplace.MarketPlaceBuyerPlatform.sendBidRequest(MarketPlaceBuyerPlatform.java:82)
        at
com.stickyadstv.adex.Auctioneer.sendBidRequests(Auctioneer.java:338)
        at com.stickyadstv.adex.Auctioneer.startAuction(Auctioneer.java:152)
        at
networkComm.commands.SwfIndexCommand.getProtocolSpecificResponse(SwfIndexCommand.java:66)
        at networkComm.commands.HttpCommand.getResponse(HttpCommand.java:70)
        at
com.stickyadstv.web.SwfIndexServlet.doGet(SwfIndexServlet.java:39)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:624)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:731)
        at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:303)
        at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
        at
org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
        at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
        at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
        at
com.stickyadstv.deliveryengine.http.CORSAndNoCacheFilter.doFilter(CORSAndNoCacheFilter.java:34)
        at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
        at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
        at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
        at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
        at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
        at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
        at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
        at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
        at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:423)
        at
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1079)
        at
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:620)
        at
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1741)
        at
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1700)
        - locked <0x00000000d4a6a4a0> (a
org.apache.tomcat.util.net.NioChannel)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
        at java.lang.Thread.run(Thread.java:745)

Is it possible that the lock leaked in a particular use case (this did not
happen at a request peak)?

Thanks


2015-05-13 10:20 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:

> On Wed, 2015-05-13 at 09:16 +0200, Thomas Boniface wrote:
> > Thanks for the pointers, this does seem similar to what I was thinking of.
> > My understanding is that this is currently not possible with the http
> > async client. Applying this logic to the httpasyncclient would mean
> > dynamically changing the route max connections depending on how well they
> > behave. Is that right?
> >
>
> Sounds correct.
>
> Oleg
>
> > Thomas
> >
> > 2015-05-08 9:46 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:
> >
> > > On Thu, 2015-05-07 at 11:22 +0200, Thomas Boniface wrote:
> > > > Hi,
> > > >
> > > > I have an application that receives an HTTP request from a user and
> > > > contacts multiple external servers. When each external server has
> > > > responded (or the servlet timeout is reached), an HTTP response is
> > > > built from the external servers' responses.
> > > >
> > > > I think that in such a case, when one of the external servers has
> > > > trouble responding as fast as it should, my application will become
> > > > less and less responsive: it will wait for the servlet timeout before
> > > > responding to the client, and other incoming client requests will
> > > > experience the same problem.
> > > >
> > > > Thinking about how to prevent such cases, I first considered ways to
> > > > decrease the number of requests made to routes with bad performance
> > > > (by implementing an exponential backoff mechanism, for instance), but
> > > > it occurred to me that it may be possible to prevent this just by
> > > > modifying the pool configuration. My idea would be to greatly decrease
> > > > the connection request timeout (setConnectionRequestTimeout); my
> > > > understanding is that if the http async client cannot lease a
> > > > connection within, say, 5 ms, the route is probably overloaded.
> > > >
> > > > Is this the right approach for this type of scenario?
> > > >
> > > > Thanks,
> > > > Thomas
> > >
> > > Thomas
> > >
> > > HttpClient ships with a so-called AIMD back-off manager
> > >
> > > http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/AIMDBackoffManager.html
> > >
> > > http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/DefaultBackoffStrategy.html
> > >
> > > You might be able to use it (or some custom implementation) for your
> > > purpose.
> > >
> > > Oleg
> > >
> > >
>
>
>

Re: Pool congestion

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Wed, 2015-05-13 at 09:16 +0200, Thomas Boniface wrote:
> Thanks for the pointers, this does seem similar to what I was thinking of.
> My understanding is that this is currently not possible with the http
> async client. Applying this logic to the httpasyncclient would mean
> dynamically changing the route max connections depending on how well they
> behave. Is that right?
> 

Sounds correct.

Oleg

> Thomas
> 
> 2015-05-08 9:46 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:
> 
> > On Thu, 2015-05-07 at 11:22 +0200, Thomas Boniface wrote:
> > > Hi,
> > >
> > > I have an application that receives an HTTP request from a user and
> > > contacts multiple external servers. When each external server has
> > > responded (or the servlet timeout is reached), an HTTP response is built
> > > from the external servers' responses.
> > >
> > > I think that in such a case, when one of the external servers has trouble
> > > responding as fast as it should, my application will become less and less
> > > responsive: it will wait for the servlet timeout before responding to the
> > > client, and other incoming client requests will experience the same
> > > problem.
> > >
> > > Thinking about how to prevent such cases, I first considered ways to
> > > decrease the number of requests made to routes with bad performance (by
> > > implementing an exponential backoff mechanism, for instance), but it
> > > occurred to me that it may be possible to prevent this just by modifying
> > > the pool configuration. My idea would be to greatly decrease the
> > > connection request timeout (setConnectionRequestTimeout); my
> > > understanding is that if the http async client cannot lease a connection
> > > within, say, 5 ms, the route is probably overloaded.
> > >
> > > Is this the right approach for this type of scenario?
> > >
> > > Thanks,
> > > Thomas
> >
> > Thomas
> >
> > HttpClient ships with a so-called AIMD back-off manager
> >
> > http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/AIMDBackoffManager.html
> >
> > http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/DefaultBackoffStrategy.html
> >
> > You might be able to use it (or some custom implementation) for your
> > purpose.
> >
> > Oleg
> >
> >





Re: Pool congestion

Posted by Thomas Boniface <th...@stickyads.tv>.
Thanks for the pointers, this does seem similar to what I was thinking of.
My understanding is that this is currently not possible with the http async
client. Applying this logic to the httpasyncclient would mean dynamically
changing the route max connections depending on how well they behave. Is
that right?
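
For instance, something like this at runtime (just a sketch: the host name
and the limits are made up, and connectionManager stands for the client's
PoolingNHttpClientConnectionManager):

    import org.apache.http.HttpHost;
    import org.apache.http.conn.routing.HttpRoute;

    // PoolingNHttpClientConnectionManager implements ConnPoolControl<HttpRoute>,
    // so per-route limits can be adjusted while the client is running.
    HttpRoute slowRoute =
            new HttpRoute(new HttpHost("slow-upstream.example.com", 80));
    connectionManager.setMaxPerRoute(slowRoute, 2);  // back off a slow route
    // ... and later, once the route recovers:
    connectionManager.setMaxPerRoute(slowRoute, 20);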

Thomas

2015-05-08 9:46 GMT+02:00 Oleg Kalnichevski <ol...@apache.org>:

> On Thu, 2015-05-07 at 11:22 +0200, Thomas Boniface wrote:
> > Hi,
> >
> > I have an application that receives an HTTP request from a user and
> > contacts multiple external servers. When each external server has
> > responded (or the servlet timeout is reached), an HTTP response is built
> > from the external servers' responses.
> >
> > I think that in such a case, when one of the external servers has trouble
> > responding as fast as it should, my application will become less and less
> > responsive: it will wait for the servlet timeout before responding to the
> > client, and other incoming client requests will experience the same
> > problem.
> >
> > Thinking about how to prevent such cases, I first considered ways to
> > decrease the number of requests made to routes with bad performance (by
> > implementing an exponential backoff mechanism, for instance), but it
> > occurred to me that it may be possible to prevent this just by modifying
> > the pool configuration. My idea would be to greatly decrease the
> > connection request timeout (setConnectionRequestTimeout); my
> > understanding is that if the http async client cannot lease a connection
> > within, say, 5 ms, the route is probably overloaded.
> >
> > Is this the right approach for this type of scenario?
> >
> > Thanks,
> > Thomas
>
> Thomas
>
> HttpClient ships with a so-called AIMD back-off manager
>
> http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/AIMDBackoffManager.html
>
> http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/DefaultBackoffStrategy.html
>
> You might be able to use it (or some custom implementation) for your
> purpose.
>
> Oleg
>
>

Re: Pool congestion

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Thu, 2015-05-07 at 11:22 +0200, Thomas Boniface wrote:
> Hi,
> 
> I have an application that receives an HTTP request from a user and
> contacts multiple external servers. When each external server has responded
> (or the servlet timeout is reached), an HTTP response is built from the
> external servers' responses.
> 
> I think that in such a case, when one of the external servers has trouble
> responding as fast as it should, my application will become less and less
> responsive: it will wait for the servlet timeout before responding to the
> client, and other incoming client requests will experience the same
> problem.
> 
> Thinking about how to prevent such cases, I first considered ways to
> decrease the number of requests made to routes with bad performance (by
> implementing an exponential backoff mechanism, for instance), but it
> occurred to me that it may be possible to prevent this just by modifying
> the pool configuration. My idea would be to greatly decrease the connection
> request timeout (setConnectionRequestTimeout); my understanding is that if
> the http async client cannot lease a connection within, say, 5 ms, the
> route is probably overloaded.
> 
> Is this the right approach for this type of scenario?
> 
> Thanks,
> Thomas

Thomas

HttpClient ships with a so-called AIMD back-off manager:
http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/AIMDBackoffManager.html
http://hc.apache.org/httpcomponents-client-4.4.x/httpclient/apidocs/org/apache/http/impl/client/DefaultBackoffStrategy.html

You might be able to use it (or some custom implementation) for your
purpose.
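
A minimal sketch of wiring these together (note that this is the classic
blocking HttpClient; HttpAsyncClient has no equivalent hook):

    import org.apache.http.impl.client.AIMDBackoffManager;
    import org.apache.http.impl.client.CloseableHttpClient;
    import org.apache.http.impl.client.DefaultBackoffStrategy;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

    PoolingHttpClientConnectionManager cm =
            new PoolingHttpClientConnectionManager();

    // The back-off manager shrinks the per-route limit when the strategy
    // signals trouble (e.g. 503 responses or connect failures) and slowly
    // grows it back as the route recovers.
    CloseableHttpClient client = HttpClients.custom()
            .setConnectionManager(cm)
            .setBackoffManager(new AIMDBackoffManager(cm))
            .setConnectionBackoffStrategy(new DefaultBackoffStrategy())
            .build();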

Oleg 


---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-users-unsubscribe@hc.apache.org
For additional commands, e-mail: httpclient-users-help@hc.apache.org