Posted to users@tomcat.apache.org by Peter Chamberlain <pe...@htk.co.uk> on 2021/04/09 10:53:51 UTC

Understanding issues with connection refused when redirecting internally

Hello,
I've been trying to understand the behaviour of Tomcat when handling
internal redirects. I'm testing with Tomcat 9.0.38 on JDK 8 (1.8.0_265).
My main test cases have been two forwards to the same servlet and then a
response, or two redirects to the same servlet and then a response.
Servlet as follows:

import java.io.IOException;
import java.net.URL;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(loadOnStartup = 1, value = "/")
public class ConnectorLimitServlet extends HttpServlet {

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws IOException, ServletException {
    int number = Integer.parseInt(req.getParameter("number"));
    // Fake some work done at each stage of processing
    try { Thread.sleep(500); } catch (InterruptedException e) {}
    resp.setContentType("text/plain");
    if (number <= 1) {
      resp.getWriter().write("Finished " + req.getServletPath());
      return;
    }
    switch (req.getServletPath()) {
      case "/redirect":
        // External redirect back to this servlet with the counter decremented;
        // the client sees a 302 and issues a new request.
        resp.sendRedirect(new URL(req.getScheme() + "://" + req.getServerName()
            + ":" + req.getServerPort() + req.getRequestURI()
            + "?number=" + (number - 1)).toString());
        return;
      case "/forward":
        // Internal forward; the client sees only a single HTTP request.
        final String forwardAddress = "/forward?number=" + (number - 1);
        getServletContext().getRequestDispatcher(forwardAddress).forward(req, resp);
    }
  }
}


It seems that under high load (1000 threads in JMeter) Tomcat will
refuse some of the connections for nio2 connectors but not for nio;
further, these failures happen considerably earlier than the
configuration page suggests they should. The configuration documentation
suggests that if acceptCount is high enough for the number of connections
then they will be queued before reaching the processing threads, so a
small number of processing threads can be fed from a queue of
connections; connections shouldn't be refused until connectionTimeout is
reached, but that is not what occurs. In fact acceptCount seems to have
very little effect.
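For reference, the kind of Connector definition I have been varying looks
roughly like this (the values here are illustrative rather than the exact
ones from my tests):

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11Nio2Protocol"
           maxThreads="200"
           acceptCount="1000"
           maxConnections="10000"
           connectionTimeout="20000" />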
In short, my questions are:
Why is the nio2 connector type worse at this than the nio type?
Why are connections refused before acceptCount is reached, or before
connectionTimeout is reached?
I'm guessing that each forward or redirect effectively counts as an
extra connection, as removing the redirects and multiplying the number
of JMeter threads suggests that is the case - am I correct here?

Also, I feel like it would help if there were better documentation
around the differences between nio2 and nio, as, for example, the
connector comparison part makes them sound almost the same.

Apologies if this has been covered elsewhere before; I have been
searching but haven't found anything particularly clear covering this.
Best regards, Peter

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: Understanding issues with connection refused when redirecting internally

Posted by Peter Chamberlain <pe...@htk.co.uk>.
On Mon, 12 Apr 2021, 09:07 Mark Thomas, <ma...@apache.org> wrote:

> On 11/04/2021 11:03, Peter Chamberlain wrote:
>
> <snip/>
>
> > I've been investigating this some more, as I'm not convinced nio2 isn't
> > behaving strangely in this case. I think there may of been some sort of
> > reversion as it is much less likely to refuse connections for nio2 in
> > tomcat 9.0.13 when compared to 9.0.14. I'm wondering if it has something
> to
> > do with:
> >
> >           Avoid using a dedicated thread for accept on the NIO2
> connector,
> > it is always less efficient. (remm)
> >
> > And if it is hitting some sort of accept thread starvation case when it
> is
> > fully loaded. In tomcat 9.0.13 I can hit a maxTheads=200 nio2 connector
> > with 5000 jmeter threads and not experience a connection refused, but in
> > 9.0.14 I can't reach 1000 without refused connections. It doesn't seem to
> > be related to forwards or redirects either. If I just sleep for 1500
> > milliseconds for every servlet run and not redirect or forward and it
> > behaves the same.
> > We've been using nio2 in our tomcats exclusively for some time, as we hit
> > an issue with nio in the past (can't remember what it was, it is likely
> > fixed by now I would think), so I guess we're more likely to notice this
> > sort of thing.
>
> I think you are asking the wrong question(s). 200 threads with a 1500ms
> wait means I would expect Tomcat to be processing ~133 requests per
> second. (Assuming you have at least 200 client threads as well). Higher
> numbers of client threads, the timeouts configured on the client, the
> timeouts configured on Tomcat, the accept count etc shouldn't change the
> requests per second results. What will change is the failure scenarios
> you observe - and I think that is what you are seeing here between
> 9.0.13 and 9.0.14. 9.0.13 might be accepting more connections but that
> doesn't mean those connections are being processed faster. Depending on
> timeouts, they might (eventually) get processed or they might timeout.
>
> You might want to try the following:
> - Limit the number of loops to, say, 10 so you get 50,000 requests. Look
> at the response time stats. What is the average? What is the min/max?
> - Repeat the test. Do the results remain consistent?
> - Repeat the test with more loops. Do the results remain consistent?
> - Repeat the test with fewer client threads. At what point do you start
> to get consistent results?
>
> It may well be that changes to Tomcat over time have changed the way
> Tomcat behaves under various (overloaded network) failure scenarios.
>
> My reading of the change that you reference above does mean that Tomcat
> will only accept a new connection over NIO2 when it has a processing
> thread available to process it. That will change the way Tomcat behaves
> when presented with a large spike of new connections. (Significantly)
> increasing the acceptCount (a.k.a. backlog) to more than the number
> connections expected in a single "spike" in 9.0.14 should give 9.0.13
> like behaviour.
>
> HTH,
>
> Mark
>

I understand what you are saying. I'm only actually hitting it with 1000
requests in total, and approximately 300 are failing with connection
refused. This isn't just the first run either, so it isn't a JVM warm-up
issue. I am overloading the number of threads (200), but Tomcat doesn't
handle that overload in the way that might be expected (simply delaying
processing): it is failing some requests within 7 seconds, even with a high
acceptCount, maxConnections, and connectionTimeout. Essentially we're
looking at cases where we are overloaded for short periods and trying to
cope with that without a bad customer experience. This is for a link server
of sorts, so the result at present is that people clicking links get
failures rather than delays. Obviously we can increase the number of
threads to mitigate this to some degree (although that increases the
resources used), we're looking at improving performance too, and we can
spread the load over more servers if necessary. I'm still concerned this is
likely to happen for this application, so I have recommended we switch back
to nio instead, as it seems to cope better with it. There is a difficult
balance here between sufficient performance and coping with DDoS attempts,
so I understand it's not really a simple area. Just thought you should know
that 9.0.14 made it much worse compared to 9.0.13, in case this query comes
up again.
Obviously waiting a long time for link clicks to work is also undesirable;
we are really just looking at worst-case scenarios here.

Best regards, Peter.



Re: Understanding issues with connection refused when redirecting internally

Posted by Mark Thomas <ma...@apache.org>.
On 11/04/2021 11:03, Peter Chamberlain wrote:

<snip/>

> I've been investigating this some more, as I'm not convinced nio2 isn't
> behaving strangely in this case. I think there may of been some sort of
> reversion as it is much less likely to refuse connections for nio2 in
> tomcat 9.0.13 when compared to 9.0.14. I'm wondering if it has something to
> do with:
> 
>           Avoid using a dedicated thread for accept on the NIO2 connector,
> it is always less efficient. (remm)
> 
> And if it is hitting some sort of accept thread starvation case when it is
> fully loaded. In tomcat 9.0.13 I can hit a maxTheads=200 nio2 connector
> with 5000 jmeter threads and not experience a connection refused, but in
> 9.0.14 I can't reach 1000 without refused connections. It doesn't seem to
> be related to forwards or redirects either. If I just sleep for 1500
> milliseconds for every servlet run and not redirect or forward and it
> behaves the same.
> We've been using nio2 in our tomcats exclusively for some time, as we hit
> an issue with nio in the past (can't remember what it was, it is likely
> fixed by now I would think), so I guess we're more likely to notice this
> sort of thing.

I think you are asking the wrong question(s). 200 threads with a 1500ms 
wait means I would expect Tomcat to be processing ~133 requests per 
second (200 threads / 1.5 s per request), assuming you have at least 200 
client threads as well. Higher numbers of client threads, the timeouts 
configured on the client, the timeouts configured on Tomcat, the accept 
count etc. shouldn't change the requests-per-second results. What will 
change is the failure scenarios you observe - and I think that is what 
you are seeing here between 9.0.13 and 9.0.14. 9.0.13 might be accepting 
more connections but that doesn't mean those connections are being 
processed faster. Depending on timeouts, they might (eventually) get 
processed or they might time out.

You might want to try the following:
- Limit the number of loops to, say, 10 so you get 50,000 requests. Look 
at the response time stats. What is the average? What is the min/max?
- Repeat the test. Do the results remain consistent?
- Repeat the test with more loops. Do the results remain consistent?
- Repeat the test with fewer client threads. At what point do you start 
to get consistent results?

It may well be that changes to Tomcat over time have changed the way 
Tomcat behaves under various (overloaded network) failure scenarios.

My reading of the change that you reference above is that Tomcat will 
now only accept a new connection over NIO2 when it has a processing 
thread available to process it. That will change the way Tomcat behaves 
when presented with a large spike of new connections. (Significantly) 
increasing the acceptCount (a.k.a. backlog) to more than the number of 
connections expected in a single "spike" in 9.0.14 should give 
9.0.13-like behaviour.
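
For example (values purely illustrative; the key point is that acceptCount 
comfortably exceeds the number of connections arriving in the spike):

<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11Nio2Protocol"
           maxThreads="200"
           acceptCount="5000"
           connectionTimeout="20000" />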

HTH,

Mark



Re: Understanding issues with connection refused when redirecting internally

Posted by Peter Chamberlain <pe...@htk.co.uk>.
On Fri, 9 Apr 2021 at 18:12, Peter Chamberlain <pe...@htk.co.uk>
wrote:

>
>
> On Fri, 9 Apr 2021, 14:10 Christopher Schultz, <
> chris@christopherschultz.net> wrote:
>
>> Peter,
>>
>> On 4/9/21 06:53, Peter Chamberlain wrote:
>> > Hello,
>> > I've been trying to understand the behaviour of tomcat when handling
>> > internal redirects. I'm testing using tomcat 9.0.38. I'm testing using
>> > jdk8 1.8.0_265. My main test cases have been 2 forwards to the same
>> > servlet, and then a response. Or 2 redirects to the same servlet and
>> > then a response. Servlet as follows:
>> >
>> > @WebServlet(loadOnStartup = 1, value = "/")
>> > public class ConnectorLimitServlet extends HttpServlet {
>> >
>> >    @Override
>> >    protected void doGet(HttpServletRequest req, HttpServletResponse
>> > resp) throws IOException, ServletException {
>> >      int number = Integer.parseInt(req.getParameter("number"));
>> >      // Fake some work done at each stage of processing
>> >      try { Thread.sleep(500); } catch (InterruptedException e) {}
>> >      resp.setContentType("text/plain");
>> >      if (number <= 1) {
>> >        resp.getWriter().write("Finished " + req.getServletPath());
>> >        return;
>> >      }
>> >      switch (req.getServletPath()) {
>> >        case "/redirect":
>> >          resp.sendRedirect(new URL(req.getScheme() + "://" +
>> > req.getServerName() + ":" + req.getServerPort() +
>> >              req.getRequestURI() + "?number=" + (number -
>> 1)).toString());
>> >          return;
>> >        case "/forward":
>> >          final String forwardAddress = "/forward?number=" + (number -
>> 1);
>> >
>> getServletContext().getRequestDispatcher(forwardAddress).forward(req,
>> > resp);
>> >      }
>> >    }
>> > }
>> >
>> >
>> > It seems that under high load, 1000 threads in jmeter, Tomcat will
>> > refuse some of the connections for nio2 connections but not for nio,
>> > further it seems that these failures happen considerably earlier than
>> > the configuration page would suggest would be the case. The
>> > configuration suggests that if acceptCount is high enough for the
>> > number of connections then they will be queued prior to reaching the
>> > processing threads, so a small number of processing threads can exist
>> > with a queue of connection feeding them, it seems like until
>> > connectionTimeout is reached connections shouldn't be refused, but
>> > that is not what occurs. In fact acceptCount seems to have very little
>> > effect.
>>
>> Are you testing on localhost, or over a real network connection? If a
>> real network, what kind of network? How many JMeter instances vs Tomcat
>> instances?
>>
>>
> Localhost on Windows,  although similar has been seen across the network
> on Linux,  this was an attempt to replicate a live issue in a minimal code
> approach.
>
> > In short, my questions are:
>> > Why is the nio2 connector type worse at this than nio type?
>>
>> Let's table that for now.
>>
>> > Why are connections refused before acceptCount is reached, or
>> > connectionTimeout is reached?
>>
>> How are you measuring the size of the OS's TCP connection queue? What
>> makes you think that the OS has allocated exactly acceptCount entries in
>> the TCP connection queue? What makes you think acceptCount has been
>> reached? Or not yet reached?
>>
>> What do you think connectionTimeout does, and when do you think it
>> applies?
>>
>>
>>
> I was attempting to use netstat for the queue. Tbh, I found it almost
> impossible so was trying to gauge it mostly from jmeter results. I found
> that it was important to leave a gap between tests as otherwise it was more
> likely to fail.
>
> I was just reading the configuration,  and it sounded like acceptCount
> connections would be queued, after maxThreads, until connectionTimeout
> expired, but it seems connections were refused before then. From Marks
> response it sounds like acceptCount is more of a hint than a precise value,
> and may not be used at all. And also there are likely to be other factors
> outside of these settings that have impacts on these sorts of cases.
>
> > I'm guessing that each forward or redirect effectively counts as an
>> > extra connection, as removing the redirects and multipling the number
>> > of jmeter threads suggests that is the case, am I correct here?
>>
>> A redirect will cause one connection to be terminated (at least
>> logically) and a new connection established. Assuming you are using
>> KeepAlives from JMeter, the same underlying TCP connection will likely
>> be used for the first and second requests. acceptCount probably doesn't
>> apply, since the connection has definitely been established.
>>
>> For a "forward", the connection is definitely maintained. The client is
>> unaware of the fact that it is being sent back through the
>> request-processing pipeline as if there were a new request being made.
>> At this point, acceptCount, connectionTimeout, and everything else
>> you've been talking about is no longer an issue because the connection
>> has been accepted and request-processing has begun.
>>
>>
> I expect the issue I was seeing wasn't necessarily related to forwarding
> or redirecting, more the extra sleeptime and context switching. Although it
> wasn't exactly consistent, so it's hard to say.
>
> > Also, I feel like it would help if there were better documentation
>> > around the differences between nio2 and nio, as, for example, the
>> > connector comparison part makes them sound almost the same.
>>
>> The differences are mostly in the uses of the underlying Java APIs. If
>> you are familiar with the differences between NIO and NIO2 in Java, then
>> the differences between the connectors will be self-evident. If you are
>> unfamiliar with those differences, listing them won't help very much.
>>
>> NIO is significantly different from BIO (blocking I/O) and therefore
>> requires a very different I/O model than BIO. NIO and NIO2 are much more
>> similar to each other. When NIO2 was introduced, it looked as though NIO
>> had been a stepping-stone between BIO and NIO2 and that NIO2 would
>> definitely be the way to go into the future, as the APIs were cleaner
>> and generally offered the best performance. The Java VM has been
>> undergoing a re-implementation of NIO to bring some of those performance
>> improvements "back" to NIO from NIO2 and so the difference is becoming
>> less important at this point. It pretty much comes down to API usage at
>> this point.
>>
>> Hope that helps,
>> -chris
>>
>
> I think I'm much clearer on this in general now. Just wanted to check
> there wasn't some magic setting I was missing, but it sounds like this is
> expected behaviour in certain cases (greatly exceeding the maxThreads with
> requests). Knowing this, we can factor it in better.
>
> Thanks, Peter.
>
I've been investigating this some more, as I'm not convinced nio2 isn't
behaving strangely in this case. I think there may have been some sort of
regression, as it is much less likely to refuse connections for nio2 in
Tomcat 9.0.13 when compared to 9.0.14. I'm wondering if it has something to
do with:

         Avoid using a dedicated thread for accept on the NIO2 connector,
it is always less efficient. (remm)

and whether it is hitting some sort of accept-thread starvation case when
it is fully loaded. In Tomcat 9.0.13 I can hit a maxThreads=200 nio2
connector with 5000 JMeter threads and not experience a connection refused,
but in 9.0.14 I can't reach 1000 without refused connections. It doesn't
seem to be related to forwards or redirects either: if I just sleep for
1500 milliseconds for every servlet run and don't redirect or forward, it
behaves the same.
We've been using nio2 in our Tomcats exclusively for some time, as we hit
an issue with nio in the past (I can't remember what it was; it is likely
fixed by now), so I guess we're more likely to notice this sort of thing.

Best regards, Peter

Re: Understanding issues with connection refused when redirecting internally

Posted by Peter Chamberlain <pe...@htk.co.uk>.
On Fri, 9 Apr 2021, 14:10 Christopher Schultz, <ch...@christopherschultz.net>
wrote:

> Peter,
>
> On 4/9/21 06:53, Peter Chamberlain wrote:
> > Hello,
> > I've been trying to understand the behaviour of tomcat when handling
> > internal redirects. I'm testing using tomcat 9.0.38. I'm testing using
> > jdk8 1.8.0_265. My main test cases have been 2 forwards to the same
> > servlet, and then a response. Or 2 redirects to the same servlet and
> > then a response. Servlet as follows:
> >
> > @WebServlet(loadOnStartup = 1, value = "/")
> > public class ConnectorLimitServlet extends HttpServlet {
> >
> >    @Override
> >    protected void doGet(HttpServletRequest req, HttpServletResponse
> > resp) throws IOException, ServletException {
> >      int number = Integer.parseInt(req.getParameter("number"));
> >      // Fake some work done at each stage of processing
> >      try { Thread.sleep(500); } catch (InterruptedException e) {}
> >      resp.setContentType("text/plain");
> >      if (number <= 1) {
> >        resp.getWriter().write("Finished " + req.getServletPath());
> >        return;
> >      }
> >      switch (req.getServletPath()) {
> >        case "/redirect":
> >          resp.sendRedirect(new URL(req.getScheme() + "://" +
> > req.getServerName() + ":" + req.getServerPort() +
> >              req.getRequestURI() + "?number=" + (number -
> 1)).toString());
> >          return;
> >        case "/forward":
> >          final String forwardAddress = "/forward?number=" + (number - 1);
> >
> getServletContext().getRequestDispatcher(forwardAddress).forward(req,
> > resp);
> >      }
> >    }
> > }
> >
> >
> > It seems that under high load, 1000 threads in jmeter, Tomcat will
> > refuse some of the connections for nio2 connections but not for nio,
> > further it seems that these failures happen considerably earlier than
> > the configuration page would suggest would be the case. The
> > configuration suggests that if acceptCount is high enough for the
> > number of connections then they will be queued prior to reaching the
> > processing threads, so a small number of processing threads can exist
> > with a queue of connection feeding them, it seems like until
> > connectionTimeout is reached connections shouldn't be refused, but
> > that is not what occurs. In fact acceptCount seems to have very little
> > effect.
>
> Are you testing on localhost, or over a real network connection? If a
> real network, what kind of network? How many JMeter instances vs Tomcat
> instances?
>
>
Localhost on Windows, although similar behaviour has been seen across the
network on Linux; this was an attempt to replicate a live issue with a
minimal piece of code.

> In short, my questions are:
> > Why is the nio2 connector type worse at this than nio type?
>
> Let's table that for now.
>
> > Why are connections refused before acceptCount is reached, or
> > connectionTimeout is reached?
>
> How are you measuring the size of the OS's TCP connection queue? What
> makes you think that the OS has allocated exactly acceptCount entries in
> the TCP connection queue? What makes you think acceptCount has been
> reached? Or not yet reached?
>
> What do you think connectionTimeout does, and when do you think it applies?
>
>
>
I was attempting to use netstat for the queue. To be honest, I found it
almost impossible, so I was trying to gauge it mostly from the JMeter
results. I found that it was important to leave a gap between tests, as
otherwise the next run was more likely to fail.

I was just reading the configuration documentation, and it sounded like
acceptCount connections would be queued, once maxThreads was exhausted,
until connectionTimeout expired, but it seems connections were refused
before then. From Mark's response it sounds like acceptCount is more of a
hint than a precise value, and may not be used at all, and there are likely
to be other factors outside of these settings that affect these sorts of
cases.

> I'm guessing that each forward or redirect effectively counts as an
> > extra connection, as removing the redirects and multipling the number
> > of jmeter threads suggests that is the case, am I correct here?
>
> A redirect will cause one connection to be terminated (at least
> logically) and a new connection established. Assuming you are using
> KeepAlives from JMeter, the same underlying TCP connection will likely
> be used for the first and second requests. acceptCount probably doesn't
> apply, since the connection has definitely been established.
>
> For a "forward", the connection is definitely maintained. The client is
> unaware of the fact that it is being sent back through the
> request-processing pipeline as if there were a new request being made.
> At this point, acceptCount, connectionTimeout, and everything else
> you've been talking about is no longer an issue because the connection
> has been accepted and request-processing has begun.
>
>
I expect the issue I was seeing wasn't necessarily related to forwarding or
redirecting, but more to the extra sleep time and context switching,
although it wasn't exactly consistent, so it's hard to say.

> Also, I feel like it would help if there were better documentation
> > around the differences between nio2 and nio, as, for example, the
> > connector comparison part makes them sound almost the same.
>
> The differences are mostly in the uses of the underlying Java APIs. If
> you are familiar with the differences between NIO and NIO2 in Java, then
> the differences between the connectors will be self-evident. If you are
> unfamiliar with those differences, listing them won't help very much.
>
> NIO is significantly different from BIO (blocking I/O) and therefore
> requires a very different I/O model than BIO. NIO and NIO2 are much more
> similar to each other. When NIO2 was introduced, it looked as though NIO
> had been a stepping-stone between BIO and NIO2 and that NIO2 would
> definitely be the way to go into the future, as the APIs were cleaner
> and generally offered the best performance. The Java VM has been
> undergoing a re-implementation of NIO to bring some of those performance
> improvements "back" to NIO from NIO2 and so the difference is becoming
> less important at this point. It pretty much comes down to API usage at
> this point.
>
> Hope that helps,
> -chris
>

I think I'm much clearer on this in general now. Just wanted to check there
wasn't some magic setting I was missing, but it sounds like this is
expected behaviour in certain cases (when the number of requests greatly
exceeds maxThreads). Knowing this, we can factor it in better.

Thanks, Peter.


Re: Understanding issues with connection refused when redirecting internally

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Peter,

On 4/9/21 06:53, Peter Chamberlain wrote:
> Hello,
> I've been trying to understand the behaviour of tomcat when handling
> internal redirects. I'm testing using tomcat 9.0.38. I'm testing using
> jdk8 1.8.0_265. My main test cases have been 2 forwards to the same
> servlet, and then a response. Or 2 redirects to the same servlet and
> then a response. Servlet as follows:
> 
> @WebServlet(loadOnStartup = 1, value = "/")
> public class ConnectorLimitServlet extends HttpServlet {
> 
>    @Override
>    protected void doGet(HttpServletRequest req, HttpServletResponse
> resp) throws IOException, ServletException {
>      int number = Integer.parseInt(req.getParameter("number"));
>      // Fake some work done at each stage of processing
>      try { Thread.sleep(500); } catch (InterruptedException e) {}
>      resp.setContentType("text/plain");
>      if (number <= 1) {
>        resp.getWriter().write("Finished " + req.getServletPath());
>        return;
>      }
>      switch (req.getServletPath()) {
>        case "/redirect":
>          resp.sendRedirect(new URL(req.getScheme() + "://" +
> req.getServerName() + ":" + req.getServerPort() +
>              req.getRequestURI() + "?number=" + (number - 1)).toString());
>          return;
>        case "/forward":
>          final String forwardAddress = "/forward?number=" + (number - 1);
>          getServletContext().getRequestDispatcher(forwardAddress).forward(req,
> resp);
>      }
>    }
> }
> 
> 
> It seems that under high load, 1000 threads in jmeter, Tomcat will
> refuse some of the connections for nio2 connections but not for nio,
> further it seems that these failures happen considerably earlier than
> the configuration page would suggest would be the case. The
> configuration suggests that if acceptCount is high enough for the
> number of connections then they will be queued prior to reaching the
> processing threads, so a small number of processing threads can exist
> with a queue of connection feeding them, it seems like until
> connectionTimeout is reached connections shouldn't be refused, but
> that is not what occurs. In fact acceptCount seems to have very little
> effect.

Are you testing on localhost, or over a real network connection? If a 
real network, what kind of network? How many JMeter instances vs Tomcat 
instances?

> In short, my questions are:
> Why is the nio2 connector type worse at this than nio type?

Let's table that for now.

> Why are connections refused before acceptCount is reached, or
> connectionTimeout is reached?

How are you measuring the size of the OS's TCP connection queue? What 
makes you think that the OS has allocated exactly acceptCount entries in 
the TCP connection queue? What makes you think acceptCount has been 
reached? Or not yet reached?

What do you think connectionTimeout does, and when do you think it applies?

> I'm guessing that each forward or redirect effectively counts as an
> extra connection, as removing the redirects and multipling the number
> of jmeter threads suggests that is the case, am I correct here?

A redirect will cause one connection to be terminated (at least 
logically) and a new connection established. Assuming you are using 
KeepAlives from JMeter, the same underlying TCP connection will likely 
be used for the first and second requests. acceptCount probably doesn't 
apply, since the connection has definitely been established.

For a "forward", the connection is definitely maintained. The client is 
unaware of the fact that it is being sent back through the 
request-processing pipeline as if there were a new request being made. 
At this point, acceptCount, connectionTimeout, and everything else 
you've been talking about is no longer an issue because the connection 
has been accepted and request-processing has begun.

> Also, I feel like it would help if there were better documentation
> around the differences between nio2 and nio, as, for example, the
> connector comparison part makes them sound almost the same.

The differences are mostly in the uses of the underlying Java APIs. If 
you are familiar with the differences between NIO and NIO2 in Java, then 
the differences between the connectors will be self-evident. If you are 
unfamiliar with those differences, listing them won't help very much.

NIO is significantly different from BIO (blocking I/O) and therefore 
requires a very different I/O model than BIO. NIO and NIO2 are much more 
similar to each other. When NIO2 was introduced, it looked as though NIO 
had been a stepping-stone between BIO and NIO2 and that NIO2 would 
definitely be the way to go into the future, as the APIs were cleaner 
and generally offered the best performance. The Java VM has been 
undergoing a re-implementation of NIO to bring some of those performance 
improvements "back" to NIO from NIO2, so the difference is becoming 
less important. It pretty much comes down to API usage at this point.

Hope that helps,
-chris



Re: Understanding issues with connection refused when redirecting internally

Posted by Peter Chamberlain <pe...@htk.co.uk>.
On Fri, 9 Apr 2021, 14:29 Mark Thomas, <ma...@apache.org> wrote:

> On 09/04/2021 11:53, Peter Chamberlain wrote:
> > Hello,
> > I've been trying to understand the behaviour of tomcat when handling
> > internal redirects. I'm testing using tomcat 9.0.38. I'm testing using
> > jdk8 1.8.0_265. My main test cases have been 2 forwards to the same
> > servlet, and then a response. Or 2 redirects to the same servlet and
> > then a response.
>
> The forward case looks like a single HTTP request to both Tomcat and the
> client.
>
> The redirect case looks like 3 separate HTTP requests to both Tomcat and
> the client. The first two receive a 302 response (no body) and finally a
> 200 response with a body. Depending on how the client and Tomcat are
> configured these requests may occur on a single network connection (HTTP
> keep-alive is enabled) or may require a separate connection for each
> request (HTTP keep-alive is disabled).
>
> Once you get into the situation where the network layer is over-loaded,
> behaviour is very much system dependent. It will vary between operating
> systems and between major Java versions.
>
> Note that the OS treats any accept count setting more as a guideline
> than a hard rule and may ignore it completely. Under heavy load you also
> often see other effects (such as port exhaustion impacting the results).
>
> If the backlog is considered to be full, any subsequent connection
> attempts will be refused immediately.
>
> Connection timeout is measured from when the server first tries to read
> the request. From that point the client has connectionTimeout to send
> the first byte.
>
> NIO uses a Poller/Selector approach whereas NIO2 uses completion
> handlers. In many ways there isn't that much difference between them. I
> suspect that NIO will perform better on some systems and NIO2 on others.
>
> When I have looked at this sort of thing in the past, the results have
> nearly always been skewed by other factors. Only by significantly
> reducing the number of client threads and Tomcat threads (less than 10
> each) was I able to start to see the sort of behaviour expected around
> dropped connections, backlog etc and even then it took a fair amount of
> analysis to confirm that what I was observing was as expected.
>
> Mark
>

Okay, that's very helpful. I did find it very difficult to get repeatable
results, so I suspect other layers are causing the issues I've noticed. So
long as I'm not misunderstanding the configuration options or missing
anything, that's fine.

Thanks a lot,

Peter

>
>   Servlet as follows:
> >
> > @WebServlet(loadOnStartup = 1, value = "/")
> > public class ConnectorLimitServlet extends HttpServlet {
> >
> >    @Override
> >    protected void doGet(HttpServletRequest req, HttpServletResponse
> > resp) throws IOException, ServletException {
> >      int number = Integer.parseInt(req.getParameter("number"));
> >      // Fake some work done at each stage of processing
> >      try { Thread.sleep(500); } catch (InterruptedException e) {}
> >      resp.setContentType("text/plain");
> >      if (number <= 1) {
> >        resp.getWriter().write("Finished " + req.getServletPath());
> >        return;
> >      }
> >      switch (req.getServletPath()) {
> >        case "/redirect":
> >          resp.sendRedirect(new URL(req.getScheme() + "://" +
> > req.getServerName() + ":" + req.getServerPort() +
> >              req.getRequestURI() + "?number=" + (number -
> 1)).toString());
> >          return;
> >        case "/forward":
> >          final String forwardAddress = "/forward?number=" + (number - 1);
> >
> getServletContext().getRequestDispatcher(forwardAddress).forward(req,
> > resp);
> >      }
> >    }
> > }
> >
> >
> > It seems that under high load, 1000 threads in jmeter, Tomcat will
> > refuse some of the connections for nio2 connections but not for nio,
> > further it seems that these failures happen considerably earlier than
> > the configuration page would suggest would be the case. The
> > configuration suggests that if acceptCount is high enough for the
> > number of connections then they will be queued prior to reaching the
> > processing threads, so a small number of processing threads can exist
> > with a queue of connection feeding them, it seems like until
> > connectionTimeout is reached connections shouldn't be refused, but
> > that is not what occurs. In fact acceptCount seems to have very little
> > effect.
> > In short, my questions are:
> > Why is the nio2 connector type worse at this than nio type?
> > Why are connections refused before acceptCount is reached, or
> > connectionTimeout is reached?
> > I'm guessing that each forward or redirect effectively counts as an
> > extra connection, as removing the redirects and multipling the number
> > of jmeter threads suggests that is the case, am I correct here?
> >
> > Also, I feel like it would help if there were better documentation
> > around the differences between nio2 and nio, as, for example, the
> > connector comparison part makes them sound almost the same.
> >
> > Apologies if this has been covered elsewhere before, I have been
> > searching but haven't found anything particularly clear covering this.
> > Best regards, Peter
> >

Re: Understanding issues with connection refused when redirecting internally

Posted by Mark Thomas <ma...@apache.org>.
On 09/04/2021 11:53, Peter Chamberlain wrote:
> Hello,
> I've been trying to understand the behaviour of tomcat when handling
> internal redirects. I'm testing using tomcat 9.0.38. I'm testing using
> jdk8 1.8.0_265. My main test cases have been 2 forwards to the same
> servlet, and then a response. Or 2 redirects to the same servlet and
> then a response.

The forward case looks like a single HTTP request to both Tomcat and the 
client.

The redirect case looks like 3 separate HTTP requests to both Tomcat and 
the client. The first two receive a 302 response (no body) and finally a 
200 response with a body. Depending on how the client and Tomcat are 
configured these requests may occur on a single network connection (HTTP 
keep-alive is enabled) or may require a separate connection for each 
request (HTTP keep-alive is disabled).
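
For reference, keep-alive on the Tomcat side is governed by Connector
attributes along these lines (values illustrative):

<!-- setting maxKeepAliveRequests="1" disables HTTP keep-alive entirely;
     keepAliveTimeout defaults to the connectionTimeout value -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxKeepAliveRequests="100"
           keepAliveTimeout="20000"
           connectionTimeout="20000" />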

Once you get into the situation where the network layer is over-loaded, 
behaviour is very much system dependent. It will vary between operating 
systems and between major Java versions.

Note that the OS treats any accept count setting more as a guideline 
than a hard rule and may ignore it completely. Under heavy load you also 
often see other effects (such as port exhaustion impacting the results).

If the backlog is considered to be full, any subsequent connection 
attempts will be refused immediately.

Connection timeout is measured from when the server first tries to read 
the request. From that point the client has connectionTimeout to send 
the first byte.

NIO uses a Poller/Selector approach whereas NIO2 uses completion 
handlers. In many ways there isn't that much difference between them. I 
suspect that NIO will perform better on some systems and NIO2 on others.
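
Selecting one or the other is just a matter of the protocol attribute on
the Connector, for example (protocol="HTTP/1.1" normally resolves to NIO
in Tomcat 9):

<!-- NIO -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200" />

<!-- NIO2 -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11Nio2Protocol"
           maxThreads="200" />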

When I have looked at this sort of thing in the past, the results have 
nearly always been skewed by other factors. Only by significantly 
reducing the number of client threads and Tomcat threads (less than 10 
each) was I able to start to see the sort of behaviour expected around 
dropped connections, backlog etc and even then it took a fair amount of 
analysis to confirm that what I was observing was as expected.

Mark



  Servlet as follows:
> 
> @WebServlet(loadOnStartup = 1, value = "/")
> public class ConnectorLimitServlet extends HttpServlet {
> 
>    @Override
>    protected void doGet(HttpServletRequest req, HttpServletResponse
> resp) throws IOException, ServletException {
>      int number = Integer.parseInt(req.getParameter("number"));
>      // Fake some work done at each stage of processing
>      try { Thread.sleep(500); } catch (InterruptedException e) {}
>      resp.setContentType("text/plain");
>      if (number <= 1) {
>        resp.getWriter().write("Finished " + req.getServletPath());
>        return;
>      }
>      switch (req.getServletPath()) {
>        case "/redirect":
>          resp.sendRedirect(new URL(req.getScheme() + "://" +
> req.getServerName() + ":" + req.getServerPort() +
>              req.getRequestURI() + "?number=" + (number - 1)).toString());
>          return;
>        case "/forward":
>          final String forwardAddress = "/forward?number=" + (number - 1);
>          getServletContext().getRequestDispatcher(forwardAddress).forward(req,
> resp);
>      }
>    }
> }
> 
> 
> It seems that under high load, 1000 threads in jmeter, Tomcat will
> refuse some of the connections for nio2 connections but not for nio,
> further it seems that these failures happen considerably earlier than
> the configuration page would suggest would be the case. The
> configuration suggests that if acceptCount is high enough for the
> number of connections then they will be queued prior to reaching the
> processing threads, so a small number of processing threads can exist
> with a queue of connection feeding them, it seems like until
> connectionTimeout is reached connections shouldn't be refused, but
> that is not what occurs. In fact acceptCount seems to have very little
> effect.
> In short, my questions are:
> Why is the nio2 connector type worse at this than nio type?
> Why are connections refused before acceptCount is reached, or
> connectionTimeout is reached?
> I'm guessing that each forward or redirect effectively counts as an
> extra connection, as removing the redirects and multipling the number
> of jmeter threads suggests that is the case, am I correct here?
> 
> Also, I feel like it would help if there were better documentation
> around the differences between nio2 and nio, as, for example, the
> connector comparison part makes them sound almost the same.
> 
> Apologies if this has been covered elsewhere before, I have been
> searching but haven't found anything particularly clear covering this.
> Best regards, Peter
> 