Posted to httpclient-users@hc.apache.org by Sam Crawford <sa...@gmail.com> on 2009/04/14 15:20:16 UTC

Gracefully handling half-closed connections (encore!)

Afternoon all,
A few months back we had an issue with handling half-closed TCP connections
with HttpClient, and at the time I was advised to include something akin to
the IdleConnectionEvictor. We did, and it's working very nicely in nearly
all scenarios.
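
For reference, here's a minimal sketch of the kind of evictor thread we
mean, against the 4.x ClientConnectionManager API (timings illustrative,
not our exact code):

import java.util.concurrent.TimeUnit;
import org.apache.http.conn.ClientConnectionManager;

public class IdleConnectionEvictor extends Thread {
    private final ClientConnectionManager connMgr;
    private volatile boolean shutdown;

    public IdleConnectionEvictor(ClientConnectionManager connMgr) {
        this.connMgr = connMgr;
        setDaemon(true);
    }

    @Override
    public void run() {
        try {
            while (!shutdown) {
                synchronized (this) {
                    wait(5000); // wake up every 5 seconds
                }
                // drop connections that have sat idle for over 30 seconds
                connMgr.closeIdleConnections(30, TimeUnit.SECONDS);
            }
        } catch (InterruptedException ex) {
            // terminate
        }
    }

    public void shutdown() {
        shutdown = true;
        synchronized (this) {
            notifyAll();
        }
    }
}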

However, in the past few days we've encountered a few WebLogic based hosts
that aren't playing fair.

The following is one (extreme) example of the issue we're encountering:

Time (ms)    TCP action
  0.0000     Client > Server [SYN]
  0.5634     Server > Client [SYN,ACK]
  1.2092     Client > Server [ACK]       <-- TCP session established
312.5276     Server > Client [FIN,ACK]
313.1309     Client > Server [ACK]
401.5089     Client > Server [HTTP POST /blah]
403.2986     Server > Client [RST]

In the above example, the server closes its side of the connection only
~300ms after establishment (by sending the FIN). (As an aside, I'm curious
why HttpClient takes ~400ms after the TCP connection has been established
to send the request - any suggestions are much appreciated, but this
doesn't happen often.)

But the above is an extreme example. We see other cases where the WebLogic
server closes a keep-alive connection around 10-15 seconds after the last
request. Our IdleConnectionEvictor doesn't run that often, so we end up
with unusable connections in the pool. We could just run the
IdleConnectionEvictor more often, but that's not really desirable.

I'm going to be digging into the WebLogic side of things this afternoon (to
see if there are any limits we can modify there), but it does seem as though
there should be a nice way for HttpClient to detect such cases. I've got
stale connection checking enabled already, by the way.

I'm interested in any feedback/ideas here! I can include a wire capture as
an example if it would be helpful.

Thanks again,

Sam

Re: Gracefully handling half-closed connections (encore!)

Posted by Sam Crawford <sa...@gmail.com>.
Anyone?

Thanks,

Sam


2009/4/23 Sam Crawford <sa...@gmail.com>

> Oleg and all,
> I think I'm getting closer to tracking down the root cause of this bizarre
> issue. We've been having a few occurrences every day, and it's getting worse
> as load is increasing.
>
> I've attached a screenshot of wireshark, a HttpClient wire trace, and a
> code snippet.
>
> The wireshark screenshot shows the httpclient host (10.69.13.28) connecting
> to the server (10.96.109.6) and establishing a TCP connection (frames 1-3).
> Then nearly 10 seconds later, without any traffic being sent, the server
> sends a FIN. Nearly 20 seconds after that the client gives up, PSH's the
> last of it's data and FIN's its side of the connection too.
>
> The HttpClient wire trace shows the request being started at 09:08:33 (the
> same time as the wireshark capture starts), and everything seems to progress
> normally initially (connection is established, headers are sent, etc).
> However, the wire trace shows the headers being sent, but the wireshark
> capture does not reflect this. I'm not blaming HttpClient for this, because
> frame 30397 (the PSH) in the packet capture shows the headers being sent but
> with no POST body. It looks to me like the InputStream that's being given to
> HttpClient is somehow causing the issue.
>
> Now, the actual application of HttpClient here is a reverse proxy. It runs
> on a GlassFish v2u2 J2EE container. I'm beginning to suspect that GlassFish
> itself may be causing the issue. The code snippet attached shows how I'm
> reading the input stream from the incoming HttpServletRequest and passing it
> to HttpClient.
>
> I'm performing some additional packet captures now that will hopefully help
> determine if:
> (1) it's the originating client somehow malforming it's post data, which
> causes the inputstream never to be fully read
> (2) the originating client's request is fine and there's some issue with
> our J2EE container.
>
> Does this make sense to you? Any additional thoughts/input would be
> appreciated. I don't believe this is an issue with HttpClient, but thought
> you may have some useful insights into the matter.
>
> Thanks,
>
> Sam
>
>
>
>
>
> 2009/4/14 Oleg Kalnichevski <ol...@apache.org>
>
>> Sam Crawford wrote:
>>
>>  Afternoon all,
>>> A few months back we had an issue with handling half closed TCP
>>> connections
>>> with HttpClient, and at the time I was advised to include something akin
>>> to
>>> the IdleConnectionEvictor - which we did and it's working very nicely in
>>> nearly all scenarios.
>>>
>>> However, in the past few days we've encountered a few WebLogic based
>>> hosts
>>> that aren't playing fair.
>>>
>>> The following is one (extreme) example of the issue we're encountering:
>>>
>>> Time (ms)    TCP action
>>> 0.0000         Client > Server [SYN]
>>> 0.5634         Server > Client [SYN,ACK]
>>> 1.2092         Client > Server [ACK]          <-- TCP session established
>>> 312.5276         Server > Client [FIN,ACK]
>>> 313.1309         Client > Server [ACK]
>>> 401.5089         Client > Server [HTTP POST /blah]
>>> 403.2986         Server > Client [RST]
>>>
>>> In the above example, the server closes its side of the connection only
>>> 300ms after establishment (by sending the FIN). (As an aside I'm curious
>>> as
>>> to why HttpClient is taking 400ms after the TCP connection has been
>>> established to send the request - any suggestions are also much
>>> appreciated,
>>> but this doesn't happen often).
>>>
>>>
>> This does not sound right. The stale connection check may cause a 20 to 30
>> millisecond delay (and generally should be avoided) but this is a bit too
>> much. Can you produce a wire / context log of the session?
>>
>>
>>  But the above is an extreme example. We see other cases where the
>>> WebLogic
>>> server is closing the connection of a keep-alive connection around 10-15
>>> seconds after the last request.
>>>
>>
>> Does the server send a 'Keep-alive' header with the response?
>>
>>
>>  Our IdleConnectionEvictor doesn't run that
>>
>>> often, so we end up with unusable connections. We could just run
>>> IdleConnectionEvictor more often, but that's not really desirable.
>>>
>>> I'm going to be digging into the WebLogic side of things this afternoon
>>> (to
>>> see if there's any limits we can modify there), but it does seem as
>>> though
>>> there should be a nice way for HttpClient to detect such cases. I've got
>>> stale connection checking enabled already by the way.
>>>
>>>
>> Stale connection checking is (in most cases) evil and should be avoided.
>>
>>  I'm interested in any feedback/ideas here! I can include a wire capture
>>> as
>>> an example if it would be helpful.
>>>
>>>
>> A wire / context log that correlates with the TCP dump would be great.
>>
>> Oleg
>>
>>  Thanks again,
>>>
>>> Sam
>>>
>>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: httpclient-users-unsubscribe@hc.apache.org
>> For additional commands, e-mail: httpclient-users-help@hc.apache.org
>>
>>
>

Re: Gracefully handling half-closed connections (encore!)

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Mon, Apr 27, 2009 at 02:14:55PM +0100, Sam Crawford wrote:
> No problem, thanks for getting back to me. Your response has at least
> confirmed my suspicion that GlassFish *could* be the cause.

I did not want to imply that. However, reducing the complexity of the problem
domain might help.

Oleg




Re: Gracefully handling half-closed connections (encore!)

Posted by Sam Crawford <sa...@gmail.com>.
No problem, thanks for getting back to me. Your response has at least
confirmed my suspicion that GlassFish *could* be the cause. Packet captures
have yet to prove or disprove the theory, as it happens quite infrequently.

I'll update this thread when I crack it :-)

Thanks,

Sam




Re: Gracefully handling half-closed connections (encore!)

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Thu, Apr 23, 2009 at 11:12:34AM +0100, Sam Crawford wrote:
> Oleg and all,
> I think I'm getting closer to tracking down the root cause of this bizarre
> issue. We've been having a few occurrences every day, and it's getting worse
> as load is increasing.
> 
> I've attached a screenshot of Wireshark, an HttpClient wire trace, and a code
> snippet.
> 
> The Wireshark screenshot shows the HttpClient host (10.69.13.28) connecting
> to the server (10.96.109.6) and establishing a TCP connection (frames 1-3).
> Then nearly 10 seconds later, without any traffic being sent, the server
> sends a FIN. Nearly 20 seconds after that the client gives up, PSH's the
> last of its data and FIN's its side of the connection too.
> 
> The HttpClient wire trace shows the request being started at 09:08:33 (the
> same time as the Wireshark capture starts), and everything seems to progress
> normally initially (connection is established, headers are sent, etc).
> However, the wire trace shows the headers being sent, but the wireshark
> capture does not reflect this.

Please note that HttpClient successfully committing output data to the
socket's SNDBUF does not guarantee that the JVM succeeds in transmitting
the data across the wire. A write operation that is successful from the
HttpClient standpoint may still fail from the TCP/IP stack standpoint.


> I'm not blaming HttpClient for this, because
> frame 30397 (the PSH) in the packet capture shows the headers being sent but
> with no POST body. It looks to me like the InputStream that's being given to
> HttpClient is somehow causing the issue.
> 
> Now, the actual application of HttpClient here is a reverse proxy. It runs
> on a GlassFish v2u2 J2EE container. I'm beginning to suspect that GlassFish
> itself may be causing the issue. The code snippet attached shows how I'm
> reading the input stream from the incoming HttpServletRequest and passing it
> to HttpClient.
> 
> I'm performing some additional packet captures now that will hopefully help
> determine if:
> (1) it's the originating client somehow malforming its POST data, which
> causes the InputStream never to be fully read
> (2) the originating client's request is fine and there's some issue with our
> J2EE container.
> 
> Does this make sense to you? Any additional thoughts/input would be
> appreciated. I don't believe this is an issue with HttpClient, but thought
> you may have some useful insights into the matter.
> 

If you want to eliminate GlassFish as a contributing factor to the problem
consider trying out this reverse proxy as a testbed: 

http://svn.apache.org/repos/asf/httpcomponents/httpcore/trunk/httpcore/src/examples/org/apache/http/examples/ElementalReverseProxy.java

This application implements a very basic reverse proxy using HttpCore
classes only, giving you full control over the entire HTTP transport on
both the client and the server side.

Unfortunately this is all I can do for you.

Oleg




Re: Gracefully handling half-closed connections (encore!)

Posted by Sam Crawford <sa...@gmail.com>.
Oleg and all,
I think I'm getting closer to tracking down the root cause of this bizarre
issue. We've been having a few occurrences every day, and it's getting worse
as load is increasing.

I've attached a screenshot of Wireshark, an HttpClient wire trace, and a
code snippet.

The Wireshark screenshot shows the HttpClient host (10.69.13.28) connecting
to the server (10.96.109.6) and establishing a TCP connection (frames 1-3).
Then nearly 10 seconds later, without any traffic being sent, the server
sends a FIN. Nearly 20 seconds after that the client gives up, PSH's the
last of its data and FIN's its side of the connection too.

The HttpClient wire trace shows the request starting at 09:08:33 (the same
time as the Wireshark capture starts), and everything seems to progress
normally at first (connection established, headers sent, etc). However,
while the wire trace shows the headers being sent, the Wireshark capture
does not reflect this. I'm not blaming HttpClient for this, because frame
30397 (the PSH) in the packet capture shows the headers being sent but with
no POST body. It looks to me like the InputStream that's being given to
HttpClient is somehow causing the issue.

Now, the actual application of HttpClient here is a reverse proxy. It runs
on a GlassFish v2u2 J2EE container. I'm beginning to suspect that GlassFish
itself may be causing the issue. The code snippet attached shows how I'm
reading the input stream from the incoming HttpServletRequest and passing it
to HttpClient.
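
In outline it does something like the following (a simplified, illustrative
sketch, not the attached snippet itself; the class name, method name and
'targetUri' are placeholders, and the real code also relays headers and the
response):

import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import org.apache.http.HttpResponse;
import org.apache.http.client.HttpClient;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.InputStreamEntity;

public class ForwardingSketch {
    static HttpResponse forward(HttpClient httpClient, String targetUri,
            HttpServletRequest request) throws IOException {
        HttpPost post = new HttpPost(targetUri);
        // Stream the servlet request body straight through to HttpClient.
        // A content length of -1 (unknown) makes the entity go out chunked.
        InputStreamEntity entity = new InputStreamEntity(
                request.getInputStream(), request.getContentLength());
        entity.setContentType(request.getContentType());
        post.setEntity(entity);
        return httpClient.execute(post);
    }
}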

I'm performing some additional packet captures now that will hopefully help
determine if:
(1) it's the originating client somehow malforming its POST data, which
causes the InputStream never to be fully read
(2) the originating client's request is fine and there's some issue with our
J2EE container.

Does this make sense to you? Any additional thoughts/input would be
appreciated. I don't believe this is an issue with HttpClient, but thought
you may have some useful insights into the matter.

Thanks,

Sam






Re: Gracefully handling half-closed connections (encore!)

Posted by Sam Crawford <sa...@gmail.com>.
For the reference of anyone else who might be following this thread...

I've extended the DefaultConnectionKeepAliveStrategy to allow a default
timeout to be specified (which is used if the Keep-Alive header doesn't
exist).
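
Something along these lines (a sketch; the class name and the
defaultKeepAliveMs knob are ours, the rest is the stock 4.x API):

import org.apache.http.HttpResponse;
import org.apache.http.impl.client.DefaultConnectionKeepAliveStrategy;
import org.apache.http.protocol.HttpContext;

// Fall back to a fixed keep-alive timeout when the response carries no
// usable Keep-Alive header (the default strategy returns -1, i.e.
// "keep alive indefinitely", in that case).
public class DefaultingKeepAliveStrategy
        extends DefaultConnectionKeepAliveStrategy {
    private final long defaultKeepAliveMs;

    public DefaultingKeepAliveStrategy(long defaultKeepAliveMs) {
        this.defaultKeepAliveMs = defaultKeepAliveMs;
    }

    @Override
    public long getKeepAliveDuration(HttpResponse response,
            HttpContext context) {
        long duration = super.getKeepAliveDuration(response, context);
        return duration > 0 ? duration : defaultKeepAliveMs;
    }
}

With DefaultHttpClient this gets plugged in via setKeepAliveStrategy(...).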

I've then removed the connectionManager.closeIdleConnections(30000,
TimeUnit.SECONDS) line in my IdleConnectionEvictor and now rely upon the
connectionManager.closeExpiredConnections() method to handle the closure of
connections.
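
So the eviction pass is now essentially just (sketch, illustrative
interval):

import org.apache.http.conn.ClientConnectionManager;

public class ExpiredConnectionSweeper extends Thread {
    private final ClientConnectionManager connMgr;

    public ExpiredConnectionSweeper(ClientConnectionManager connMgr) {
        this.connMgr = connMgr;
        setDaemon(true);
    }

    @Override
    public void run() {
        try {
            while (true) {
                Thread.sleep(5000);
                // Close only connections whose keep-alive deadline
                // (as set by the strategy above) has passed.
                connMgr.closeExpiredConnections();
            }
        } catch (InterruptedException ex) {
            // terminate
        }
    }
}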

Seems to be working well so far.

Thanks,

Sam



Re: Gracefully handling half-closed connections (encore!)

Posted by Sam Crawford <sa...@gmail.com>.
Perfect, thanks. I think the custom ConnectionKeepAliveStrategy is the way
forward, as you suggest.

I'll have a tinker with disabling stale connection checking too and see how
it affects performance.

Thanks again,

Sam



Re: Gracefully handling half-closed connections (encore!)

Posted by Oleg Kalnichevski <ol...@apache.org>.
On Tue, Apr 14, 2009 at 11:39:01PM +0100, Sam Crawford wrote:
> A couple of closing questions if I may:
> 
> 1) Is there any way with the ClientConnectionManager to perform
> closeIdleConnections only for a specific HttpRoute? If the answer is
> "override the closeIdleConnections method and implement it" then that's fine
> by me :-)

A better approach could be implementing a custom
ConnectionKeepAliveStrategy and setting a lower keep-alive timeout for
known naughty servers.
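
For instance (a sketch; the host name is a placeholder for whatever
list of naughty servers you maintain):

import org.apache.http.HttpHost;
import org.apache.http.HttpResponse;
import org.apache.http.impl.client.DefaultConnectionKeepAliveStrategy;
import org.apache.http.protocol.ExecutionContext;
import org.apache.http.protocol.HttpContext;

public class PerHostKeepAliveStrategy
        extends DefaultConnectionKeepAliveStrategy {
    @Override
    public long getKeepAliveDuration(HttpResponse response,
            HttpContext context) {
        long duration = super.getKeepAliveDuration(response, context);
        HttpHost target = (HttpHost) context.getAttribute(
                ExecutionContext.HTTP_TARGET_HOST);
        if (target != null && "naughty.example.com".equalsIgnoreCase(
                target.getHostName())) {
            // This server FINs idle connections after ~30s; expire
            // ours well before that.
            long cap = 25 * 1000L;
            return duration > 0 ? Math.min(duration, cap) : cap;
        }
        return duration;
    }
}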


> 2) You said the Stale Connection check is (in most cases) "evil". Is there
> any docs detailing exactly what it does and what we will lose if we disable
> it? (I believe it's enabled by default). The tiny speed hit is not a massive
> issue for us at the moment (but could be later).
> 

Stale connection checking simply cannot be 100% reliable, as there is
always a window of time between a successful stale check and the request
execution in which the connection can go stale on the unsuspecting
HttpClient. A well-designed application has to have recovery code for such
situations anyway, which makes stale connection checking pretty much
pointless.

The performance hit is not that tiny, either: 30 milliseconds on top of
small requests that take only 3-5 milliseconds to execute is a lot.
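
Concretely, turning it off and relying on retries looks something like
this (a sketch against the 4.0 API):

import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.client.DefaultHttpRequestRetryHandler;
import org.apache.http.params.HttpConnectionParams;

public class NoStaleCheckSetup {
    static DefaultHttpClient newClient() {
        DefaultHttpClient client = new DefaultHttpClient();
        // Skip the per-request stale check...
        HttpConnectionParams.setStaleCheckingEnabled(
                client.getParams(), false);
        // ...and recover from dead pooled connections by retrying
        // (up to 3 times; requests already fully sent are not retried).
        client.setHttpRequestRetryHandler(
                new DefaultHttpRequestRetryHandler(3, false));
        return client;
    }
}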

Oleg



Re: Gracefully handling half-closed connections (encore!)

Posted by Sam Crawford <sa...@gmail.com>.
Oleg,

Thanks for the quick reply, as always.

I'm afraid I haven't been able to get a wire log of the extreme scenario I
highlighted earlier (I have a tcpdump capture of it from last night, but I
doubt that's of much use to you).

I have, however, got a wire log of the issue as it's manifesting itself
this evening. Inspection of the tcpdump traces reveals that this particular
webserver is sending FINs for idle TCP connections after 30 seconds,
whereas the timeouts on most of the other servers we're dealing with are
much higher (on the order of 5 minutes or so). This server does not reply
with a Keep-Alive header, despite Connection: Keep-Alive being sent as a
request header (I appreciate it's under no obligation to obey the client's
wishes).
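
(For comparison, a cooperative server would send something like
"Keep-Alive: timeout=300, max=100" alongside "Connection: Keep-Alive".)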

I've attached the wire log anyway, but I think for the moment I'm going to
have to make my IdleConnectionEvictor run at least every 30 seconds (it
runs every two minutes at present).

A couple of closing questions if I may:

1) Is there any way with the ClientConnectionManager to perform
closeIdleConnections only for a specific HttpRoute? If the answer is
"override the closeIdleConnections method and implement it" then that's fine
by me :-)
2) You said the Stale Connection check is (in most cases) "evil". Are there
any docs detailing exactly what it does and what we will lose if we disable
it? (I believe it's enabled by default.) The small speed hit is not a
massive issue for us at the moment (but could be later).

Thanks again,

Sam




Re: Gracefully handling half-closed connections (encore!)

Posted by Oleg Kalnichevski <ol...@apache.org>.
Sam Crawford wrote:
> Afternoon all,
> A few months back we had an issue with handling half-closed TCP connections
> with HttpClient, and at the time I was advised to include something akin to
> the IdleConnectionEvictor - which we did and it's working very nicely in
> nearly all scenarios.
> 
> However, in the past few days we've encountered a few WebLogic based hosts
> that aren't playing fair.
> 
> The following is one (extreme) example of the issue we're encountering:
> 
> Time (ms)    TCP action
> 0.0000         Client > Server [SYN]
> 0.5634         Server > Client [SYN,ACK]
> 1.2092         Client > Server [ACK]          <-- TCP session established
> 312.5276         Server > Client [FIN,ACK]
> 313.1309         Client > Server [ACK]
> 401.5089         Client > Server [HTTP POST /blah]
> 403.2986         Server > Client [RST]
> 
> In the above example, the server closes its side of the connection only
> 300ms after establishment (by sending the FIN). (As an aside I'm curious as
> to why HttpClient is taking 400ms after the TCP connection has been
> established to send the request - any suggestions are also much appreciated,
> but this doesn't happen often).
> 

This does not sound right. The stale connection check may cause a 20 to
30 millisecond delay (and generally should be avoided), but 400ms is a bit
too much. Can you produce a wire / context log of the session?


> But the above is an extreme example. We see other cases where the WebLogic
> server is closing the connection of a keep-alive connection around 10-15
> seconds after the last request.

Does the server send a 'Keep-alive' header with the response?


> Our IdleConnectionEvictor doesn't run that
> often, so we end up with unusable connections. We could just run
> IdleConnectionEvictor more often, but that's not really desirable.
> 
> I'm going to be digging into the WebLogic side of things this afternoon (to
> see if there's any limits we can modify there), but it does seem as though
> there should be a nice way for HttpClient to detect such cases. I've got
> stale connection checking enabled already by the way.
> 

Stale connection checking is (in most cases) evil and should be avoided.

> I'm interested in any feedback/ideas here! I can include a wire capture as
> an example if it would be helpful.
> 

A wire / context log that correlates with the TCP dump would be great.

Oleg

> Thanks again,
> 
> Sam
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: httpclient-users-unsubscribe@hc.apache.org
For additional commands, e-mail: httpclient-users-help@hc.apache.org