Posted to users@tomcat.apache.org by Nabble User <ho...@gmail.com> on 2010/08/10 20:46:13 UTC

Anyone using Tomcat 6 Comet with Apache mod_proxy?

I have a CometProcessor servlet similar to the one in the sample here:
http://tomcat.apache.org/tomcat-6.0-doc/aio.html#Example_code

One difference is that we close the writer after sending the message in the
MessageSender thread (a rough sketch of that send-and-close follows the
proxy config below).  It all works fine hitting Tomcat directly.  However,
I'd like to proxy the requests through Apache (for various reasons).  So, I
set up a ProxyPass like this:

ProxyPass /comet http://localhost:8081/CometApp/
ProxyPassReverse /comet http://localhost:8081/CometApp/
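
For context, here is roughly what our MessageSender send-and-close looks
like (a simplified sketch following the aio.html sample; "connections" is
the queued response list from that sample, and the writer.close() is our
addition):

for (HttpServletResponse connection : connections) {
    try {
        PrintWriter writer = connection.getWriter();
        writer.println(message);
        writer.flush();
        // Our difference from the stock sample: finish the response
        // immediately instead of leaving it open.
        writer.close();
    } catch (IOException e) {
        // Drop the broken connection.
    }
}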

This seems to work sometimes but other times I get a 502 response and see
the following in the Apache log:

[Tue Aug 10 16:00:45 2010] [error] [client xx.xx.xx.xx] (104)Connection
reset by peer: proxy: error reading status line from remote server localhost
[Tue Aug 10 16:00:45 2010] [error] [client xx.xx.xx.xx] proxy: Error reading
from remote server returned by /comet/wait

On my local Windows box I added the following line to the Apache config to
disable keepalives and the problem went away in that environment:
SetEnv proxy-nokeepalive 1
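
If I'm remembering the 2.2 mod_proxy docs right, the per-worker equivalent
is the disablereuse connection-pool parameter, e.g.:

ProxyPass /comet http://localhost:8081/CometApp/ disablereuse=On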

I figured this was an OK workaround until I have time to come back and
figure out the real issue.


I just released the app to a CentOS 5.4 server and started seeing the 502s
again.  So, I disabled keepalives in the Apache config there as well.
However, in that environment with keepalives disabled, the response to the
client is not closed until the Comet request timeout is exceeded (30
seconds).  So, that hack won't fly (even temporarily).
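
(For reference on the 30 seconds: aio.html documents a per-event
CometEvent.setTimeout(), so assuming the timeout is set on BEGIN it would
look something like this; it may also just be the connector default:)

if (event.getEventType() == CometEvent.EventType.BEGIN) {
    // Assumed: the 30s "Comet Request timeout" comes from here
    // (or from the connector's default if never set).
    event.setTimeout(30 * 1000);  // milliseconds
}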

Now I am back to trying to figure out why this fails with keepalives on.  It
seems like on some requests Apache thinks the connection is still valid but
Tomcat thinks otherwise.  On CentOS I can easily reproduce it by hitting the
URL 8 times.  On the 9th hit I get the 502.  I will then get 8 x 502
responses, then 8 good responses, and so on.  I can see 8 connections open
(netstat) after the first 8 hits, and as I get the 502s I see the
connections terminated one by one.
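
The netstat check is just watching the Apache-to-Tomcat sockets on the
connector port, with something like:

netstat -tan | grep 8081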

I tried messing with various timeouts, etc., to no avail.

I also notice that if I do something like the following in the servlet
(instead of queuing up the connections and sending a result later) I can hit
the URL all day long with no problem:

if (event.getEventType() == CometEvent.EventType.BEGIN) {

    PrintWriter writer = response.getWriter();
    writer.println("<!doctype html public \"-//w3c//dtd html 4.0 transitional//en\">");
    writer.println("<head><title>JSP Chat</title></head><body bgcolor=\"#FFFFFF\">");
    writer.println("Message Sent:" + sentCount);
    writer.flush();

    event.close();
}

So, it seems only to be an issue when sending the message and closing the
response writer from the queued connections.
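
For contrast, the failing path only queues the response on BEGIN and leaves
the write/close to the MessageSender thread, along the lines of the
aio.html sample:

if (event.getEventType() == CometEvent.EventType.BEGIN) {
    // Queue the connection; the MessageSender thread writes the message
    // and closes the writer later, outside the event callback.
    synchronized (connections) {
        connections.add(event.getHttpServletResponse());
    }
}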


Here are the CentOS 5.4 environment details:

Apache 2.2.3
Tomcat 6.0.26
OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode)

Anyone have any ideas what might be going on here?  Anyone have a
configuration like this actually working?

Thanks in advance!

Re: Anyone using Tomcat 6 Comet with Apache mod_proxy?

Posted by Nabble User <ho...@gmail.com>.
A little more info.

I mentioned that on CentOS, when I disabled keepalives, the response to the
client would not close right away (as it does properly on my Windows
machine).  I downgraded Tomcat on my Windows machine from 6.0.28 to 6.0.26
(same as I have on CentOS) and I can now duplicate this issue on Windows.

I have not tried upgrading the CentOS box to a newer Tomcat yet, but I am
assuming this will fix the issue when keepalives are disabled.  I'm still
wondering why keepalives have the problem in mod_proxy though.


Re: Anyone using Tomcat 6 Comet with Apache mod_proxy?

Posted by Nabble User <ho...@gmail.com>.
FYI - I am using org.apache.coyote.http11.Http11NioProtocol as my connector
protocol.  My "long polling" requests are pretty quick (less than 2
seconds), so it's not a long-connection issue.
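
For completeness, the connector is declared in server.xml along these lines
(the port matches the ProxyPass target above; the other attributes are just
the usual defaults):

<Connector port="8081" protocol="org.apache.coyote.http11.Http11NioProtocol"
           connectionTimeout="20000" redirectPort="8443" />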