Posted to users@tomcat.apache.org by Arshiya Shariff <ar...@ericsson.com.INVALID> on 2020/09/28 16:58:38 UTC

HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Hi All,
With 200 threads (users), a ramp-up duration of 2 seconds, a loop count of 80, and 1000 HTTP/2 requests/sec sent from a JMeter client to an embedded Tomcat application, we did not observe any memory issue. But on sending 1000 HTTP/2 requests/sec with 1000 or 2000 users from JMeter, the application's heap space of 20 GB is occupied within 2 minutes, and after 2 full GCs the memory clears and comes down to 4 GB (expected).

Embedded Tomcat version: 9.0.38
Max threads: 200
All other properties are the Tomcat defaults.

Why is Tomcat not able to process this many connections?
Why does the memory fill up when the connections are increased? Are there any parameters to tune connections?
Please let us know.

Thanks and Regards
Arshiya Shariff

Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Arshiya,

On 9/28/20 12:58, Arshiya Shariff wrote:
> With 200 threads(users) , ramp up duration of 2 seconds , loop count
> 80 and by sending 1000 http2 requests/sec from JMeter Client to an
> embedded tomcat application we did not observe any memory issue , but
> on sending 1000 http2 requests/sec with 2000 or 1000 users from
> JMeter , the application's heap space of 20 GB is occupied in 2
> minutes and after 2 full GCs the memory clears and comes down to 4GB
> (expected) .

So a full GC releases the memory?

> Embedded tomcat Version:9.0.38
> Max Threads : 200
> All other properties are the tomcat defaults.
> 
> Why is tomcat not able to process many connections ?

What evidence is there that Tomcat cannot process that many connections?
You said above the only concern was used-heap space.

If you have 200 threads, then you can only handle 200 active requests at
a time. If you need to process 2000 requests *simultaneously*, then you
need 2000 threads.

If you only need 2000 *users* at the "same time", then 200 threads
should be able to handle the load, depending upon the application's
performance characteristics.
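
If you do decide you need more request-processing threads, maxThreads
can be raised on the embedded connector. A minimal sketch (the value
2000 is illustrative, not a recommendation):

    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.startup.Tomcat;

    // inside e.g. main(String[] args) throws Exception (sketch)
    Tomcat tomcat = new Tomcat();
    tomcat.setPort(8080);
    Connector connector = tomcat.getConnector();
    // "maxThreads" is a standard connector attribute; setProperty
    // forwards it to the underlying protocol handler
    connector.setProperty("maxThreads", "2000");
    tomcat.start();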

> Why is the memory filled when the connections are increased, are
> there any parameters to tune connections ?

Are you using HttpSessions?

-chris



Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by "André Warnier (tomcat/perl)" <aw...@ice-sa.com>.
On 30.09.2020 07:42, Arshiya Shariff wrote:
> Hi Martin ,
> 
> Thank you for the response.
> 
> With a payload of 200 bytes we were able to send 20K requests/sec with 200 users from JMeter without any memory issue. On increasing the payload to 5 KB and the number of users to 1000 in JMeter and sending 1000 requests per second, the heap of 20 GB got filled in 2 minutes.

How long does it typically take (at the beginning of the test) for tomcat to *process* one 
of these requests?

> With 200 users the memory is cleared in the G1 mixed GC itself, but with 1000 users the
> memory is not cleared in the mixed GC; it takes full GCs of 7 to 10 seconds to clear the
> memory. These cases were executed with maxThreads 200 in tomcat, so we tried increasing
> the maxThreads from 200 to 1000, but GC was still struggling.
> 
> When we tried with 10 instances of JMeter, each with 100 users, where each instance was started with a delay of 1 minute, we were able to see 1000 connections created in tomcat without any memory issues. But when 1000 users are created using a single instance of JMeter in 20 seconds, tomcat's memory fills fast: 20 GB in 2 minutes.
> We suspect that the burst of connections being opened is the problem. Please help us with the same.
> 
> On analyzing the heap dump we see org.apache.tomcat.util.collections.SynchronizedStack occupying around 93% of the 3 GB of live data; the remaining 17 GB is garbage in the heap dump.
> 
> Thanks and Regards
> Arshiya Shariff
> 
> -----Original Message-----
> From: Martin Grigorov <mg...@apache.org>
> Sent: Monday, September 28, 2020 11:44 PM
> To: Tomcat Users List <us...@tomcat.apache.org>
> Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> Hi Arshiya,
> 
> 
> On Mon, Sep 28, 2020 at 7:59 PM Arshiya Shariff <ar...@ericsson.com.invalid> wrote:
> 
>> Hi All,
>> With 200 threads(users) , ramp up duration of 2 seconds , loop count
>> 80 and by sending 1000 http2 requests/sec from JMeter Client to an
>> embedded tomcat application we did not observe any memory issue , but
>> on sending
>> 1000 http2 requests/sec with 2000 or 1000 users from JMeter , the
>> application's heap space of 20 GB is occupied in 2 minutes and after 2
>> full GCs the memory clears and comes down to 4GB (expected) .
>>
> 
> I am not sure whether you follow the other discussions at users@.
> In another email thread we discuss load testing Tomcat HTTP2 and we are able to make around 12K reqs/s with another load testing tool - https://github.com/tsenart/vegeta
> For me JMeter itself failed with OOM when increasing the number of the virtual users above 2K.
> There are several improvements in Tomcat master and 9.0.x in the HTTP2 area. Some of the changes are not yet backported to 9.0.x. We are still testing them, trying to avoid introducing regressions in 9.0.x.
> 
> 
>>
>> Embedded tomcat Version:9.0.38
>> Max Threads : 200
>>
> 
> The number of threads should be lower if you do only CPU work without IO/network. If your app blocks on IO/network calls then you need more spare threads.
> With more threads there will be more context switches and less throughput.
> That's why there is no one golden rule that applies to all applications.
> 200 is a good default that works for most applications. But you need to test with different values to see which one gives the best performance for your scenario.
> 
> 
>> All other properties are the tomcat defaults.
>>
>> Why is tomcat not able to process many connections ?
>>
> 
> You can tell us by enabling -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=<file-or-dir-path>. Once you have the .hprof file you can examine it with Eclipse Memory Analyzer tool and see what is leaking.
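> For example (illustrative values; the dump path and jar name are assumptions), start the JVM with:
> 
>     java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdumps -jar your-app.jar
> 
> or take a dump on demand with: jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>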
> I will try to reproduce this issue tomorrow with Vegeta.
> 
> 
>> Why is the memory filled when the connections are increased, are there
>> any parameters to tune connections ?
>> Please let us know.
>>
>> Thanks and Regards
>> Arshiya Shariff
>>
> 
> 




RE: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Arshiya Shariff <ar...@ericsson.com.INVALID>.
Thank you so much, Mark.
We will test and keep you posted.

Thanks and Regards
Arshiya Shariff

-----Original Message-----
From: Mark Thomas <ma...@apache.org> 
Sent: Thursday, October 1, 2020 2:59 PM
To: users@tomcat.apache.org
Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

On 30/09/2020 18:47, Martin Grigorov wrote:
> On Wed, Sep 30, 2020 at 7:47 PM Mark Thomas <ma...@apache.org> wrote:
>> On 30/09/2020 16:17, Mark Thomas wrote:

<snip/>

>>> That is helpful. Looks like you have found a way to reproduce the 
>>> buffer issues reported in 
>>> https://bz.apache.org/bugzilla/show_bug.cgi?id=64710
>>
>> Can you share the command you used to trigger those errors please.
>>
> 
> The Vegeta command I used is:
> 
> jq -ncM '{"method": "POST", "url": 
> "https://localhost:8080/testbed/plaintext",
> "body":"payload=Some
> sdgggwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwsdgssfshffheeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeessssffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffpayload"
> | @base64, header: {"Content-Type":
> ["application/x-www-form-urlencoded"]}}' | vegeta attack -format=json
> -http2 -rate=1000 -max-workers=8 -insecure -duration=2m | vegeta 
> encode > /tmp/http2.json; and vegeta report -type=json /tmp/http2.json | jq .
> 
> The app is at
> https://github.com/martin-g/http2-server-perf-tests/tree/master/java/tomcat.
> Just start EmbeddedTomcat#main() with -Dtomcat.http2=true
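> For reference, a minimal sketch (assumed here, not necessarily the exact code in that repo) of what enabling HTTP/2 on an embedded Tomcat connector looks like:
> 
>     import org.apache.coyote.http2.Http2Protocol;
> 
>     // "tomcat" is the org.apache.catalina.startup.Tomcat instance;
>     // TLS/ALPN config is still needed for h2, without TLS this enables h2c upgrade
>     tomcat.getConnector().addUpgradeProtocol(new Http2Protocol());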

Definitely timing related as I am unable to reproduce the problem with that command or some variations.

However, I think I have managed to track down the root cause. The good news is that the BufferOverflowException is largely harmless. It is a side-effect of the connection being closed due to an error. My guess is that the error was a combination of vegeta sending an unexpected reset frame and Tomcat maintaining state for a very small number of streams in some circumstances.

If you could retest with the latest 9.0.x that would be very helpful.
The memory usage, stream state maintenance and this BufferOverflowException should all be fixed.

Mark




RE: FW: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Arshiya Shariff <ar...@ericsson.com.INVALID>.
Hi all, 

Thanks for the update Christopher.

I tried modifying our application as below, i.e. by adding a ReadListener and processing the request after all the data is read (onDataAvailable()).
But I can still see the StackOverflowError printed.

		AsyncContext asyncContext = req.startAsync( req, resp );
		ServletInputStream input = req.getInputStream();
		ReadListenerImpl listener = new ReadListenerImpl( input, asyncContext, req, resp, map, myExecutor, this );
		input.setReadListener( listener );
		. . .
		public class ReadListenerImpl implements ReadListener
		{
			// sb is a StringBuilder field that accumulates the request body
			.
			.
			.
			@Override
			public void onDataAvailable() throws IOException
			{
				int len = -1;
				byte[] b = new byte[4 * 1024];
				// read only while data is available without blocking
				while( input.isReady() && !input.isFinished() )
				{
					len = input.read( b );
					if( len > 0 )
					{
						// note: uses the platform default charset
						sb.append( new String( b, 0, len ) );
					}
				}
			}

			@Override
			public void onError(Throwable throwable)
			{
				asyncContext.complete();
				throwable.printStackTrace();
			}

			@Override
			public void onAllDataRead() throws IOException
			{
				String jsonData = sb.toString();
				.
				.
				.
			}
		}

Unfortunately I am still not able to reproduce this exception with a test application; I am trying.

Thanks and Regards
Arshiya Shariff


-----Original Message-----
From: Christopher Schultz <ch...@christopherschultz.net> 
Sent: Wednesday, October 21, 2020 7:53 PM
To: users@tomcat.apache.org
Subject: Re: FW: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Arshiya,

On 10/21/20 00:34, Arshiya Shariff wrote:
> Hi,
> 
> Christopher, Please find the answer in-line:
> How... exactly?
> private String getRequestBody(HttpServletRequest request) throws IOException
> 	{
> 		StringBuilder sb = new StringBuilder();
> 		BufferedReader reader = request.getReader();
> 		try
> 		{
> 			String line;
> 			while( ( line = reader.readLine() ) != null )
> 			{
> 				sb.append( line ).append( '\n' );

Note that this may modify the incoming request. Are you sure you want to return a value which does not match the exact POST body?

> 			}
> 		}
> 		finally
> 		{
> 			reader.close();
> 		}
> 		return sb.toString();
> 	}		
>  

Is that method run from within the asynchronous context or before you begin async processing? I'm not an expert at servlet-async, but I think you should be reading the request entirely before entering the async context. Reading the request from async may cause problems.

Instead of using blocking reads of the request body in asynchronous
mode, you should do this:
request.getInputStream().setReadListener(new ReadListener() {
  public void onAllDataRead() { ... }
  public void onDataAvailable() { ... }
  public void onError(Throwable t) { ... }
});


> I am trying to reproduce the StackOverflowError with a sample 
> application , Once it is reproduced I will share it across.

See https://www.slideshare.net/SimoneBordet/servlet-31-async-io

Specifically slides 43-53.

-chris

> -----Original Message-----
> From: Christopher Schultz <ch...@christopherschultz.net>
> Sent: Thursday, October 15, 2020 12:01 AM
> To: users@tomcat.apache.org
> Subject: Re: FW: HTTP2: memory filled up fast on increasing the 
> connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> Arshiya,
> 
> On 10/14/20 01:23, Arshiya Shariff wrote:
>> Please find the answers in-line Mark.
>>
>> Http2 requests with message payload of  34KB are pumped from JMeter 
>> at
>> 20 TPS with 700 connections to an application with Embedded tomcat
>> - 9.0.39 (max-Threads : 200, all other values are the tomcat
>> defaults)
>>
>>> What does that URL do with the POSTed content? Ignore it? Read it 
>>> from an InputStream? Read it via getParameter()?
>>
>> The posted content is read via BufferedReader reader =
>> request.getReader() and processed asynchronously
> How... exactly?
>> Is JMeter run on the same machine as Tomcat?
>> JMeter is run from a different machine.
>>
>> Do you use the JMeter GUI or the command line?
>> Launched via Command line (JMeter heap increased to 10 GB )
>>
>> What are the specs of the server(s) being used?
>> The server is a VM with 12 CPUs and 120 GB RAM
>>
>> Please let us know  if you require more details.
> 
> This would probably be easier if you'd just provide a test-case: a sample (simple!) web application which reproduces what you are reporting.
> 
> -chris
> 
>> -----Original Message-----
>> From: Mark Thomas <ma...@apache.org>
>> Sent: Monday, October 12, 2020 7:28 PM
>> To: users@tomcat.apache.org
>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>
>> On 12/10/2020 08:02, Arshiya Shariff wrote:
>>> Hi Mark ,
>>>
>>> The issue is reproduced with version 9.0.39 as well. Max threads in Tomcat is 200.
>>>
>>> Please find the case:
>>> Client:JMeter 5.2.1 (With http2 plugin)
>>> TPS: around 20
>>> No of users from JMeter : 700
>>> Message payload size: 6 KB to 34 KB
>>> Loop: Infinite
>>> We let the loop run infinitely and see the java.lang.StackOverflowError trace printed multiple times in the log within few minutes of starting the test.
>>
>> POSTing to what URL?
>>
>> What does that URL do with the POSTed content? Ignore it? Read it from an InputStream? Read it via getParameter()?
>>
>> Is JMeter run on the same machine as Tomcat?
>>
>> Do you use the JMeter GUI or the command line?
>>
>> What are the specs of the server(s) being used?
>>
>> You need to provide the exact steps to recreate this issue on a clean install of Tomcat 9.0.39 as provided by the ASF.
>>
>> Mark
>>
>>
>>> Please help us with this . What is the impact of StackOverflowError ?
>>>
>>> Thanks and Regards
>>> Arshiya Shariff
>>>
>>> -----Original Message-----
>>> From: Mark Thomas <ma...@apache.org>
>>> Sent: Friday, October 9, 2020 5:31 PM
>>> To: users@tomcat.apache.org
>>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>>
>>> On 09/10/2020 12:32, Arshiya Shariff wrote:
>>>> Hi,
>>>>
>>>> Mark , with the test runs that I performed over clean 9.0.x branch I was not able to reproduce this.
>>>
>>> Good. But I'd really like to understand why...
>>>
>>>> But with 9.0.38 and the jars built from 9.0.x with hash: c8ec2d4cde3a31b0e9df9a30e7915d77ba725545  , with 700 or 1000 users (connections) and on sending 1000 Requests per second (or even lesser) , payload of 16K  from JMeter I can see that this Exception occurs within few minutes of starting the test . The maxThreads configured in tomcat is 200 .
>>>>
>>>> How often do you see these errors in your test run?
>>>> Randomly, at times 2 or 3 such traces.
>>>
>>> OK. Definitely a timing issue then.
>>>
>>>> Do you have the other end of that stack trace?
>>>> It is only the two lines that are recursively printed till the end, about ~500 times in one trace:
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>
>>> Doesn't tell me much unfortunately.
>>>
>>>> I see the trace starting with :
>>>> Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError 
>>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
>>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>
>>>>  		(OR)
>>>>
>>>> Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
>>>>         at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         .....
>>>>         .....
>>>>         .....
>>>>         .....
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>
>>>> Is there anything that was fixed around this in latest 9.0.x branch ?
>>>
>>> Not obviously. I've reviewed every commit since c8ec2d4c. There is nothing that directly works with the I/O. There is 1e97ab2 which fixes a relatively recent regression in the HTTP/2 code. I guess it is possible (but it seems a bit of a stretch) that that bug is triggering an issue in JMeter which in turn is sending invalid HTTP/2 packets.
>>>
>>> I think at this point, given the relatively small number of commits between c8ec2d4c and HEAD, the most useful thing you could do is run a binary search to find out at which commit the issue is fixed. If we know which commit to look at that should help track down the root cause.
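>>> For example, a sketch using git's custom bisect terms (since here the older commit is the broken one and HEAD is the fixed one):
>>>
>>>     git bisect start --term-old=broken --term-new=fixed
>>>     git bisect broken c8ec2d4cde3a31b0e9df9a30e7915d77ba725545
>>>     git bisect fixed HEAD
>>>     # build the jars, re-run the load test, then mark each step:
>>>     git bisect broken   # or: git bisect fixed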
>>>
>>> Mark
>>>
>>>>
>>>> Thanks and Regards
>>>> Arshiya Shariff
>>>>
>>>> -----Original Message-----
>>>> From: Mark Thomas <ma...@apache.org>
>>>> Sent: Monday, October 5, 2020 9:52 PM
>>>> To: users@tomcat.apache.org
>>>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>>>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>>>
>>>> On 05/10/2020 10:56, Arshiya Shariff wrote:
>>>>> Hi All,
>>>>>
>>>>> Thank you so much Mark . 
>>>>> We tested the jars built from latest 9.0.x  with 2000 / 5000 users
>>>>> (connections) from JMeter , We see a very good improvement with 
>>>>> the heap usage
>>>>
>>>> Good news. As is the fact that the other errors have been cleared up.
>>>>
>>>>> But I see this exception printed multiple times , I am not sure why this occurs :
>>>>> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>
>>>> That looks like an infinite loop reading an incoming frame.
>>>> New frames are read using a 9 byte buffer for the header and a 16k buffer for the payload (since Tomcat sets this as the max frame size).
>>>>
>>>> The loop is occurring because one of those buffers is simultaneously both full and still has more data to read. That should not be possible and I haven't yet been able to figure out how this is happening.
>>>>
>>>> How easy is this to reproduce?
>>>>
>>>> How often do you see these errors in your test run?
>>>>
>>>> Do you have a reliable test case that reproduces this on a clean Tomcat 9.0.x build? If so, can you share the details?
>>>>
>>>> Do you have the other end of that stack trace? I'm interested in how the code enters the loop.
>>>>
>>>> Thanks,
>>>>
>>>> Mark
>>>>



Re: FW: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Arshiya,

On 10/21/20 00:34, Arshiya Shariff wrote:
> Hi,
> 
> Christopher, Please find the answer in-line:
> How... exactly?
> private String getRequestBody(HttpServletRequest request) throws IOException
> 	{
> 		StringBuilder sb = new StringBuilder();
> 		BufferedReader reader = request.getReader();
> 		try
> 		{
> 			String line;
> 			while( ( line = reader.readLine() ) != null )
> 			{
> 				sb.append( line ).append( '\n' );

Note that this may modify the incoming request. Are you sure you want to
return a value which does not match the exact POST body?

> 			}
> 		}
> 		finally
> 		{
> 			reader.close();
> 		}
> 		return sb.toString();
> 	}		
>  
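
If you need the body to match byte-for-byte, a sketch of a
byte-preserving variant (assuming Java 9+ for readAllBytes() and a
UTF-8 request body):

    // reads the raw bytes without re-joining lines
    byte[] body = request.getInputStream().readAllBytes();
    String exact = new String(body, java.nio.charset.StandardCharsets.UTF_8);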

Is that method run from within the asynchronous context or before you
begin async processing? I'm not an expert at servlet-async, but I think
you should be reading the request entirely before entering the async
context. Reading the request from async may cause problems.

Instead of using blocking reads of the request body in asynchronous
mode, you should do this:

request.getInputStream().setReadListener(new ReadListener() {
  public void onAllDataRead() { ... }
  public void onDataAvailable() { ... }
  public void onError(Throwable t) { ... }
});


> I am trying to reproduce the StackOverflowError with a sample
> application , Once it is reproduced I will share it across.

See https://www.slideshare.net/SimoneBordet/servlet-31-async-io

Specifically slides 43-53.

-chris

> -----Original Message-----
> From: Christopher Schultz <ch...@christopherschultz.net> 
> Sent: Thursday, October 15, 2020 12:01 AM
> To: users@tomcat.apache.org
> Subject: Re: FW: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> Arshiya,
> 
> On 10/14/20 01:23, Arshiya Shariff wrote:
>> Please find the answers in-line Mark.
>>
>> Http2 requests with message payload of  34KB are pumped from JMeter at 
>> 20 TPS with 700 connections to an application with Embedded tomcat
>> - 9.0.39 (max-Threads : 200, all other values are the tomcat
>> defaults)
>>
>>> What does that URL do with the POSTed content? Ignore it? Read it 
>>> from an InputStream? Read it via getParameter()?
>>
>> The posted content is read via BufferedReader reader =
>> request.getReader() and processed asynchronously
> How... exactly?
>> Is JMeter run on the same machine as Tomcat?
>> JMeter is run from a different machine.
>>
>> Do you use the JMeter GUI or the command line?
>> Launched via Command line (JMeter heap increased to 10 GB )
>>
>> What are the specs of the server(s) being used?
>> The server is a VM with 12 CPUs and 120 GB RAM
>>
>> Please let us know  if you require more details.
> 
> This would probably be easier if you'd just provide a test-case: a sample (simple!) web application which reproduces what you are reporting.
> 
> -chris
> 
>> -----Original Message-----
>> From: Mark Thomas <ma...@apache.org>
>> Sent: Monday, October 12, 2020 7:28 PM
>> To: users@tomcat.apache.org
>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>
>> On 12/10/2020 08:02, Arshiya Shariff wrote:
>>> Hi Mark ,
>>>
>>> The issue is reproduced with version 9.0.39 as well. Max threads in Tomcat is 200.
>>>
>>> Please find the case:
>>> Client:JMeter 5.2.1 (With http2 plugin)
>>> TPS: around 20
>>> No of users from JMeter : 700
>>> Message payload size: 6 KB to 34 KB
>>> Loop: Infinite
>>> We let the loop run infinitely and see the java.lang.StackOverflowError trace printed multiple times in the log within few minutes of starting the test.
>>
>> POSTing to what URL?
>>
>> What does that URL do with the POSTed content? Ignore it? Read it from an InputStream? Read it via getParameter()?
>>
>> Is JMeter run on the same machine as Tomcat?
>>
>> Do you use the JMeter GUI or the command line?
>>
>> What are the specs of the server(s) being used?
>>
>> You need to provide the exact steps to recreate this issue on a clean install of Tomcat 9.0.39 as provided by the ASF.
>>
>> Mark
>>
>>
>>> Please help us with this . What is the impact of StackOverflowError ?
>>>
>>> Thanks and Regards
>>> Arshiya Shariff
>>>
>>> -----Original Message-----
>>> From: Mark Thomas <ma...@apache.org>
>>> Sent: Friday, October 9, 2020 5:31 PM
>>> To: users@tomcat.apache.org
>>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>>
>>> On 09/10/2020 12:32, Arshiya Shariff wrote:
>>>> Hi,
>>>>
>>>> Mark , with the test runs that I performed over clean 9.0.x branch I was not able to reproduce this.
>>>
>>> Good. But I'd really like to understand why...
>>>
>>>> But with 9.0.38 and the jars built from 9.0.x with hash: c8ec2d4cde3a31b0e9df9a30e7915d77ba725545  , with 700 or 1000 users (connections) and on sending 1000 Requests per second (or even lesser) , payload of 16K  from JMeter I can see that this Exception occurs within few minutes of starting the test . The maxThreads configured in tomcat is 200 .
>>>>
>>>> How often do you see these errors in your test run?
>>>> Randomly, at times 2 or 3 such traces.
>>>
>>> OK. Definitely a timing issue then.
>>>
>>>> Do you have the other end of that stack trace?
>>>> It is only the two lines that are recursively printed till the end, about ~500 times in one trace:
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>
>>> Doesn't tell me much unfortunately.
>>>
>>>> I see the trace starting with :
>>>> Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError 
>>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
>>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>
>>>>  		(OR)
>>>>
>>>> Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
>>>>         at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         .....
>>>>         .....
>>>>         .....
>>>>         .....
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>
>>>> Is there anything that was fixed around this in latest 9.0.x branch ?
>>>
>>> Not obviously. I've reviewed every commit since c8ec2d4c. There is nothing that directly works with the I/O. There is 1e97ab2 which fixes a relatively recent regression in the HTTP/2 code. I guess it is possible (but it seems a bit of a stretch) that that bug is triggering an issue in JMeter which in turn is sending invalid HTTP/2 packets.
>>>
>>> I think at this point, given the relatively small number of commits between c8ec2d4c and HEAD, the most useful thing you could do is run a binary search to find out at which commit the issue is fixed. If we know which commit to look at that should help track down the root cause.
>>>
>>> Mark
>>>
>>>>
>>>> Thanks and Regards
>>>> Arshiya Shariff
>>>>
>>>> -----Original Message-----
>>>> From: Mark Thomas <ma...@apache.org>
>>>> Sent: Monday, October 5, 2020 9:52 PM
>>>> To: users@tomcat.apache.org
>>>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>>>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>>>
>>>> On 05/10/2020 10:56, Arshiya Shariff wrote:
>>>>> Hi All,
>>>>>
>>>>> Thank you so much Mark . 
>>>>> We tested the jars built from latest 9.0.x  with 2000 / 5000 users
>>>>> (connections) from JMeter , We see a very good improvement with the 
>>>>> heap usage
>>>>
>>>> Good news. As is the fact that the other errors have been cleared up.
>>>>
>>>>> But I see this exception printed multiple times , I am not sure why this occurs :
>>>>> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>
>>>> That looks like an infinite loop reading an incoming frame.
>>>> New frames are read using a 9 byte buffer for the header and a 16k buffer for the payload (since Tomcat sets this as the max frame size).
>>>>
>>>> The loop is occurring because one of those buffers is simultaneously both full and still has more data to read. That should not be possible and I haven't yet been able to figure out how this is happening.
>>>>
>>>> How easy is this to reproduce?
>>>>
>>>> How often do you see these errors in your test run?
>>>>
>>>> Do you have a reliable test case that reproduces this on a clean Tomcat 9.0.x build? If so, can you share the details?
>>>>
>>>> Do you have the other end of that stack trace? I'm interested in how the code enters the loop.
>>>>
>>>> Thanks,
>>>>
>>>> Mark
>>>>



RE: FW: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Arshiya Shariff <ar...@ericsson.com.INVALID>.
Hi,

Christopher, Please find the answer in-line:
How... exactly?
private String getRequestBody(HttpServletRequest request) throws IOException
	{
		StringBuilder sb = new StringBuilder();
		BufferedReader reader = request.getReader();
		try
		{
			String line;
			while( ( line = reader.readLine() ) != null )
			{
				sb.append( line ).append( '\n' );
			}
		}
		finally
		{
			reader.close();
		}
		return sb.toString();
	}		
 
I am trying to reproduce the StackOverflowError with a sample application , Once it is reproduced I will share it across.

Thanks and Regards
Arshiya Shariff

-----Original Message-----
From: Christopher Schultz <ch...@christopherschultz.net> 
Sent: Thursday, October 15, 2020 12:01 AM
To: users@tomcat.apache.org
Subject: Re: FW: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Arshiya,

On 10/14/20 01:23, Arshiya Shariff wrote:
> Please find the answers in-line Mark.
> 
> Http2 requests with message payload of  34KB are pumped from JMeter at 
> 20 TPS with 700 connections to an application with Embedded tomcat
> - 9.0.39 (max-Threads : 200, all other values are the tomcat
> defaults)
> 
>> What does that URL do with the POSTed content? Ignore it? Read it 
>> from an InputStream? Read it via getParameter()?
>
> The posted content is read via BufferedReader reader =
> request.getReader() and processed asynchronously
How... exactly?
> Is JMeter run on the same machine as Tomcat?
> JMeter is run from a different machine.
> 
> Do you use the JMeter GUI or the command line?
> Launched via Command line (JMeter heap increased to 10 GB )
> 
> What are the specs of the server(s) being used?
> The server is a VM with 12 CPUs and 120 GB RAM
> 
> Please let us know  if you require more details.

This would probably be easier if you'd just provide a test-case: a sample (simple!) web application which reproduces what you are reporting.

-chris

> -----Original Message-----
> From: Mark Thomas <ma...@apache.org>
> Sent: Monday, October 12, 2020 7:28 PM
> To: users@tomcat.apache.org
> Subject: Re: HTTP2: memory filled up fast on increasing the 
> connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> On 12/10/2020 08:02, Arshiya Shariff wrote:
>> Hi Mark ,
>>
>> The issue is reproduced with version 9.0.39 as well. Max threads in Tomcat is 200.
>>
>> Please find the case:
>> Client:JMeter 5.2.1 (With http2 plugin)
>> TPS: around 20
>> No of users from JMeter : 700
>> Message payload size: 6 KB to 34 KB
>> Loop: Infinite
>> We let the loop run infinitely and see the java.lang.StackOverflowError trace printed multiple times in the log within few minutes of starting the test.
> 
> POSTing to what URL?
> 
> What does that URL do with the POSTed content? Ignore it? Read it from an InputStream? Read it via getParameter()?
> 
> Is JMeter run on the same machine as Tomcat?
> 
> Do you use the JMeter GUI or the command line?
> 
> What are the specs of the server(s) being used?
> 
> You need to provide the exact steps to recreate this issue on a clean install of Tomcat 9.0.39 as provided by the ASF.
> 
> Mark
> 
> 
>> Please help us with this . What is the impact of StackOverflowError ?
>>
>> Thanks and Regards
>> Arshiya Shariff
>>
>> -----Original Message-----
>> From: Mark Thomas <ma...@apache.org>
>> Sent: Friday, October 9, 2020 5:31 PM
>> To: users@tomcat.apache.org
>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>
>> On 09/10/2020 12:32, Arshiya Shariff wrote:
>>> Hi,
>>>
>>> Mark , with the test runs that I performed over clean 9.0.x branch I was not able to reproduce this.
>>
>> Good. But I'd really like to understand why...
>>
>>> But with 9.0.38 and the jars built from 9.0.x with hash: c8ec2d4cde3a31b0e9df9a30e7915d77ba725545  , with 700 or 1000 users (connections) and on sending 1000 Requests per second (or even lesser) , payload of 16K  from JMeter I can see that this Exception occurs within few minutes of starting the test . The maxThreads configured in tomcat is 200 .
>>>
>>> How often do you see these errors in your test run?
>>> Randomly, at times 2 or 3 such traces.
>>
>> OK. Definitely a timing issue then.
>>
>>> Do you have the other end of that stack trace?
>>> It is only the two lines that are recursively printed till the end, about ~500 times in one trace:
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>
>> Doesn't tell me much unfortunately.
>>
>>> I see the trace starting with :
>>> Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError 
>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>
>>>  		(OR)
>>>
>>> Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
>>>         at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         .....
>>>         .....
>>>         .....
>>>         .....
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>
>>> Is there anything that was fixed around this in latest 9.0.x branch ?
>>
>> Not obviously. I've reviewed every commit since c8ec2d4c. There is nothing that directly works with the I/O. There is 1e97ab2 which fixes a relatively recent regression in the HTTP/2 code. I guess it is possible (but it seems a bit of a stretch) that that bug is triggering an issue in JMeter which in turn is sending invalid HTTP/2 packets.
>>
>> I think at this point, given the relatively small number of commits between c8ec2d4c and HEAD, the most useful thing you could do is run a binary search to find out at which commit the issue is fixed. If we know which commit to look at that should help track down the root cause.
>>
>> Mark
>>
>>>
>>> Thanks and Regards
>>> Arshiya Shariff
>>>
>>> -----Original Message-----
>>> From: Mark Thomas <ma...@apache.org>
>>> Sent: Monday, October 5, 2020 9:52 PM
>>> To: users@tomcat.apache.org
>>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>>
>>> On 05/10/2020 10:56, Arshiya Shariff wrote:
>>>> Hi All,
>>>>
>>>> Thank you so much Mark . 
>>>> We tested the jars built from latest 9.0.x  with 2000 / 5000 users
>>>> (connections) from JMeter , We see a very good improvement with the 
>>>> heap usage
>>>
>>> Good news. As is the fact that the other errors have been cleared up.
>>>
>>>> But I see this exception printed multiple times , I am not sure why this occurs :
>>>> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>
>>> That looks like an infinite loop reading an incoming frame.
>>> New frames are read using a 9 byte buffer for the header and a 16k buffer for the payload (since Tomcat sets this as the max frame size).
>>>
>>> The loop is occurring because one of those buffers is simultaneously both full and still has more data to read. That should not be possible and I haven't yet been able to figure out how this is happening.
>>>
>>> How easy is this to reproduce?
>>>
>>> How often do you see these errors in your test run?
>>>
>>> Do you have a reliable test case that reproduces this on a clean Tomcat 9.0.x build? If so, can you share the details?
>>>
>>> Do you have the other end of that stack trace? I'm interested in how the code enters the loop.
>>>
>>> Thanks,
>>>
>>> Mark
>>>



Re: FW: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Christopher Schultz <ch...@christopherschultz.net>.
Arshiya,

On 10/14/20 01:23, Arshiya Shariff wrote:
> Please find the answers in-line Mark.
> 
> Http2 requests with message payload of  34KB are pumped from JMeter
> at 20 TPS with 700 connections to an application with Embedded tomcat
> - 9.0.39 (max-Threads : 200, all other values are the tomcat
> defaults)
> 
>> What does that URL do with the POSTed content? Ignore it? Read it 
>> from an InputStream? Read it via getParameter()?
>
> The posted content is read via BufferedReader reader =
> request.getReader() and processed asynchronously
How... exactly?

> Is JMeter run on the same machine as Tomcat?
> JMeter is run from a different machine.
> 
> Do you use the JMeter GUI or the command line?
> Launched via Command line (JMeter heap increased to 10 GB )
> 
> What are the specs of the server(s) being used?
> The server is a VM with 12 CPUs and 120 GB RAM
> 
> Please let us know  if you require more details.

This would probably be easier if you'd just provide a test-case: a
sample (simple!) web application which reproduces what you are reporting.

-chris

> -----Original Message-----
> From: Mark Thomas <ma...@apache.org> 
> Sent: Monday, October 12, 2020 7:28 PM
> To: users@tomcat.apache.org
> Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> On 12/10/2020 08:02, Arshiya Shariff wrote:
>> Hi Mark ,
>>
>> The issue is reproduced with version 9.0.39 as well. Max threads in Tomcat is 200.
>>
>> Please find the case:
>> Client:JMeter 5.2.1 (With http2 plugin)
>> TPS: around 20
>> No of users from JMeter : 700
>> Message payload size: 6 KB to 34 KB
>> Loop: Infinite
>> We let the loop run infinitely and see the java.lang.StackOverflowError trace printed multiple times in the log within few minutes of starting the test.
> 
> POSTing to what URL?
> 
> What does that URL do with the POSTed content? Ignore it? Read it from an InputStream? Read it via getParameter()?
> 
> Is JMeter run on the same machine as Tomcat?
> 
> Do you use the JMeter GUI or the command line?
> 
> What are the specs of the server(s) being used?
> 
> You need to provide the exact steps to recreate this issue on a clean install of Tomcat 9.0.39 as provided by the ASF.
> 
> Mark
> 
> 
>> Please help us with this . What is the impact of StackOverflowError ?
>>
>> Thanks and Regards
>> Arshiya Shariff
>>
>> -----Original Message-----
>> From: Mark Thomas <ma...@apache.org>
>> Sent: Friday, October 9, 2020 5:31 PM
>> To: users@tomcat.apache.org
>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>
>> On 09/10/2020 12:32, Arshiya Shariff wrote:
>>> Hi,
>>>
>>> Mark , with the test runs that I performed over clean 9.0.x branch I was not able to reproduce this.
>>
>> Good. But I'd really like to understand why...
>>
>>> But with 9.0.38 and the jars built from 9.0.x with hash: c8ec2d4cde3a31b0e9df9a30e7915d77ba725545  , with 700 or 1000 users (connections) and on sending 1000 Requests per second (or even lesser) , payload of 16K  from JMeter I can see that this Exception occurs within few minutes of starting the test . The maxThreads configured in tomcat is 200 .
>>>
>>> How often do you see these errors in your test run?
>>> Randomly, at times 2 or 3 such traces.
>>
>> OK. Definitely a timing issue then.
>>
>>> Do you have the other end of that stack trace?
>>> It is only the two lines that are recursively printed till the end, about ~500 times in one trace:
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>
>> Doesn't tell me much unfortunately.
>>
>>> I see the trace starting with :
>>> Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError 
>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>
>>>  		(OR)
>>>
>>> Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
>>>         at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         .....
>>>         .....
>>>         .....
>>>         .....
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>
>>> Is there anything that was fixed around this in latest 9.0.x branch ?
>>
>> Not obviously. I've reviewed every commit since c8ec2d4c. There is nothing that directly works with the I/O. There is 1e97ab2 which fixes a relatively recent regression in the HTTP/2 code. I guess it is possible (but it seems a bit of a stretch) that that bug is triggering an issue in JMeter which in turn is sending invalid HTTP/2 packets.
>>
>> I think at this point, given the relatively small number of commits between c8ec2d4c and HEAD, the most useful thing you could do is run a binary search to find out at which commit the issue is fixed. If we know which commit to look at that should help track down the root cause.
>>
>> Mark
>>
>>>
>>> Thanks and Regards
>>> Arshiya Shariff
>>>
>>> -----Original Message-----
>>> From: Mark Thomas <ma...@apache.org>
>>> Sent: Monday, October 5, 2020 9:52 PM
>>> To: users@tomcat.apache.org
>>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>>
>>> On 05/10/2020 10:56, Arshiya Shariff wrote:
>>>> Hi All,
>>>>
>>>> Thank you so much, Mark.
>>>> We tested the jars built from the latest 9.0.x with 2000/5000 users
>>>> (connections) from JMeter, and we see a very good improvement with the
>>>> heap usage.
>>>
>>> Good news. As is the fact that the other errors have been cleared up.
>>>
>>>> But I see this exception printed multiple times; I am not sure why this occurs:
>>>> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>
>>> That looks like an infinite loop reading an incoming frame.
>>> New frames are read using a 9 byte buffer for the header and a 16k buffer for the payload (since Tomcat sets this as the max frame size).
>>>
>>> The loop is occurring because one of those buffers is simultaneously both full and still has more data to read. That should not be possible and I haven't yet been able to figure out how this is happening.
>>>
>>> How easy is this to reproduce?
>>>
>>> How often do you see these errors in your test run?
>>>
>>> Do you have a reliable test case that reproduces this on a clean Tomcat 9.0.x build? If so, can you share the details?
>>>
>>> Do you have the other end of that stack trace? I'm interested in how the code enters the loop.
>>>
>>> Thanks,
>>>
>>> Mark
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>>> For additional commands, e-mail: users-help@tomcat.apache.org
>>>
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
>>
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


FW: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Arshiya Shariff <ar...@ericsson.com.INVALID>.
Hi ,

Please find the answers in-line Mark.

HTTP/2 requests with a message payload of 34 KB are pumped from JMeter at 20 TPS over 700 connections to an application with embedded Tomcat 9.0.39 (maxThreads: 200; all other values are the Tomcat defaults).

What does that URL do with the POSTed content? Ignore it? Read it from an InputStream? Read it via getParameter()?
The posted content is read via BufferedReader reader = request.getReader() and processed asynchronously.
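
A minimal sketch of what such a handler could look like - the servlet name, URL pattern and worker pool below are illustrative assumptions, not the actual application code:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    @WebServlet(urlPatterns = "/ingest", asyncSupported = true)
    public class IngestServlet extends HttpServlet {

        // Worker pool so the container thread is released while the payload is processed.
        private final ExecutorService pool = Executors.newFixedThreadPool(8);

        @Override
        protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // Read the whole body on the container thread first ...
            StringBuilder body = new StringBuilder();
            try (BufferedReader reader = req.getReader()) {
                char[] buf = new char[8192];
                for (int n; (n = reader.read(buf)) != -1; ) {
                    body.append(buf, 0, n);
                }
            }
            // ... then hand the processing off and complete the exchange asynchronously.
            AsyncContext ctx = req.startAsync();
            pool.submit(() -> {
                process(body.toString()); // placeholder for the real application logic
                ((HttpServletResponse) ctx.getResponse()).setStatus(HttpServletResponse.SC_OK);
                ctx.complete();
            });
        }

        private void process(String payload) {
            // application-specific work
        }
    }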

Is JMeter run on the same machine as Tomcat?
JMeter is run from a different machine.

Do you use the JMeter GUI or the command line?
Launched via the command line (JMeter heap increased to 10 GB).

What are the specs of the server(s) being used?
The server is a VM with 12 CPUs and 120 GB RAM

Please let us know  if you require more details.

Thanks and Regards
Arshiya Shariff
-----Original Message-----
From: Mark Thomas <ma...@apache.org> 
Sent: Monday, October 12, 2020 7:28 PM
To: users@tomcat.apache.org
Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

On 12/10/2020 08:02, Arshiya Shariff wrote:
> Hi Mark,
> 
> The issue is reproduced with version 9.0.39 as well. Max threads in Tomcat is 200.
> 
> Please find the case:
> Client: JMeter 5.2.1 (with the http2 plugin)
> TPS: around 20
> No of users from JMeter : 700
> Message payload size: 6 KB to 34 KB
> Loop: Infinite
> We let the loop run infinitely and see the java.lang.StackOverflowError trace printed multiple times in the log within a few minutes of starting the test.

POSTing to what URL?

What does that URL do with the POSTed content? Ignore it? Read it from an InputStream? Read it via getParameter()?

Is JMeter run on the same machine as Tomcat?

Do you use the JMeter GUI or the command line?

What are the specs of the server(s) being used?

You need to provide the exact steps to recreate this issue on a clean install of Tomcat 9.0.39 as provided by the ASF.

Mark


> Please help us with this. What is the impact of the StackOverflowError?
> 
> Thanks and Regards
> Arshiya Shariff
> 
> -----Original Message-----
> From: Mark Thomas <ma...@apache.org>
> Sent: Friday, October 9, 2020 5:31 PM
> To: users@tomcat.apache.org
> Subject: Re: HTTP2: memory filled up fast on increasing the 
> connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> On 09/10/2020 12:32, Arshiya Shariff wrote:
>> Hi,
>>
>> Mark, with the test runs that I performed over the clean 9.0.x branch I was not able to reproduce this.
> 
> Good. But I'd really like to understand why...
> 
>> But with 9.0.38 and the jars built from 9.0.x with hash c8ec2d4cde3a31b0e9df9a30e7915d77ba725545, with 700 or 1000 users (connections) and on sending 1000 requests per second (or even fewer), payload of 16K from JMeter, I can see that this exception occurs within a few minutes of starting the test. The maxThreads configured in Tomcat is 200.
>>
>> How often do you see these errors in your test run?
>> Randomly, at times 2 or 3 such traces.
> 
> OK. Definitely a timing issue then.
> 
>> Do you have the other end of that stack trace?
>> It is only the two lines that are recursively printed till the end, about ~500 times in one trace:
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
> 
> Doesn't tell me much unfortunately.
> 
>> I see the trace starting with :
>> Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError 
>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>
>>  		(OR)
>>
>> Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
>>         at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         .....
>>         .....
>>         .....
>>         .....
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>
>> Is there anything that was fixed around this in latest 9.0.x branch ?
> 
> Not obviously. I've reviewed every commit since c8ec2d4c. There is nothing that directly works with the I/O. There is 1e97ab2 which fixes a relatively recent regression in the HTTP/2 code. I guess it is possible (but it seems a bit of a stretch) that that bug is triggering an issue in JMeter which in turn is sending invalid HTTP/2 packets.
> 
> I think at this point, given the relatively small number of commits between c8ec2d4c and HEAD, the most useful thing you could do is run a binary search to find out at which commit the issue is fixed. If we know which commit to look at that should help track down the root cause.
> 
> Mark
> 
>>
>> Thanks and Regards
>> Arshiya Shariff
>>
>> -----Original Message-----
>> From: Mark Thomas <ma...@apache.org>
>> Sent: Monday, October 5, 2020 9:52 PM
>> To: users@tomcat.apache.org
>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>
>> On 05/10/2020 10:56, Arshiya Shariff wrote:
>>> Hi All,
>>>
>>> Thank you so much, Mark.
>>> We tested the jars built from the latest 9.0.x with 2000/5000 users
>>> (connections) from JMeter, and we see a very good improvement with the
>>> heap usage.
>>
>> Good news. As is the fact that the other errors have been cleared up.
>>
>>> But I see this exception printed multiple times; I am not sure why this occurs:
>>> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>
>> That looks like an infinite loop reading an incoming frame.
>> New frames are read using a 9 byte buffer for the header and a 16k buffer for the payload (since Tomcat sets this as the max frame size).
>>
>> The loop is occurring because one of those buffers is simultaneously both full and still has more data to read. That should not be possible and I haven't yet been able to figure out how this is happening.
>>
>> How easy is this to reproduce?
>>
>> How often do you see these errors in your test run?
>>
>> Do you have a reliable test case that reproduces this on a clean Tomcat 9.0.x build? If so, can you share the details?
>>
>> Do you have the other end of that stack trace? I'm interested in how the code enters the loop.
>>
>> Thanks,
>>
>> Mark
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
>>
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org




Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Mark Thomas <ma...@apache.org>.
On 12/10/2020 08:02, Arshiya Shariff wrote:
> Hi Mark,
> 
> The issue is reproduced with version 9.0.39 as well. Max threads in Tomcat is 200.
> 
> Please find the case:
> Client: JMeter 5.2.1 (with the http2 plugin)
> TPS: around 20
> No of users from JMeter : 700
> Message payload size: 6 KB to 34 KB
> Loop: Infinite 
> We let the loop run infinitely and see the java.lang.StackOverflowError trace printed multiple times in the log within a few minutes of starting the test.

POSTing to what URL?

What does that URL do with the POSTed content? Ignore it? Read it from
an InputStream? Read it via getParameter()?

Is JMeter run on the same machine as Tomcat?

Do you use the JMeter GUI or the command line?

What are the specs of the server(s) being used?

You need to provide the exact steps to recreate this issue on a clean
install of Tomcat 9.0.39 as provided by the ASF.
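
For anyone assembling such steps, a bare-bones embedded Tomcat 9 skeleton along these lines is usually enough (the port, base directory and the IngestServlet from the earlier sketch are placeholder assumptions; with no TLS configured the connector negotiates h2c):

    import java.io.File;
    import org.apache.catalina.Context;
    import org.apache.catalina.connector.Connector;
    import org.apache.catalina.startup.Tomcat;
    import org.apache.coyote.http2.Http2Protocol;

    public class ReproServer {
        public static void main(String[] args) throws Exception {
            Tomcat tomcat = new Tomcat();
            tomcat.setBaseDir("target/tomcat"); // scratch work directory, placeholder

            // The default connector; adding Http2Protocol enables HTTP/2.
            Connector connector = tomcat.getConnector();
            connector.setPort(8080);
            connector.addUpgradeProtocol(new Http2Protocol());

            // Register the servlet under test against the root context.
            Context ctx = tomcat.addContext("", new File(".").getAbsolutePath());
            Tomcat.addServlet(ctx, "ingest", new IngestServlet());
            ctx.addServletMappingDecoded("/ingest", "ingest");

            tomcat.start();
            tomcat.getServer().await();
        }
    }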

Mark


> Please help us with this. What is the impact of the StackOverflowError?
> 
> Thanks and Regards
> Arshiya Shariff
> 
> -----Original Message-----
> From: Mark Thomas <ma...@apache.org> 
> Sent: Friday, October 9, 2020 5:31 PM
> To: users@tomcat.apache.org
> Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> On 09/10/2020 12:32, Arshiya Shariff wrote:
>> Hi,
>>
>> Mark, with the test runs that I performed over the clean 9.0.x branch I was not able to reproduce this.
> 
> Good. But I'd really like to understand why...
> 
>> But with 9.0.38 and the jars built from 9.0.x with hash c8ec2d4cde3a31b0e9df9a30e7915d77ba725545, with 700 or 1000 users (connections) and on sending 1000 requests per second (or even fewer), payload of 16K from JMeter, I can see that this exception occurs within a few minutes of starting the test. The maxThreads configured in Tomcat is 200.
>>
>> How often do you see these errors in your test run?
>> Randomly, at times 2 or 3 such traces.
> 
> OK. Definitely a timing issue then.
> 
>> Do you have the other end of that stack trace?
>> It is only the two lines that are recursively printed till the end, about ~500 times in one trace:
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
> 
> Doesn't tell me much unfortunately.
> 
>> I see the trace starting with :
>> Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError 
>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>
>>  		(OR)
>>
>> Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
>>         at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         .....
>>         .....
>>         .....
>>         .....
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>
>> Is there anything that was fixed around this in latest 9.0.x branch ?
> 
> Not obviously. I've reviewed every commit since c8ec2d4c. There is nothing that directly works with the I/O. There is 1e97ab2 which fixes a relatively recent regression in the HTTP/2 code. I guess it is possible (but it seems a bit of a stretch) that that bug is triggering an issue in JMeter which in turn is sending invalid HTTP/2 packets.
> 
> I think at this point, given the relatively small number of commits between c8ec2d4c and HEAD, the most useful thing you could do is run a binary search to find out at which commit the issue is fixed. If we know which commit to look at that should help track down the root cause.
> 
> Mark
> 
>>
>> Thanks and Regards
>> Arshiya Shariff
>>
>> -----Original Message-----
>> From: Mark Thomas <ma...@apache.org>
>> Sent: Monday, October 5, 2020 9:52 PM
>> To: users@tomcat.apache.org
>> Subject: Re: HTTP2: memory filled up fast on increasing the 
>> connections to 1000/2000 (Embedded tomcat 9.0.38)
>>
>> On 05/10/2020 10:56, Arshiya Shariff wrote:
>>> Hi All,
>>>
>>> Thank you so much, Mark.
>>> We tested the jars built from the latest 9.0.x with 2000/5000 users
>>> (connections) from JMeter, and we see a very good improvement with the
>>> heap usage.
>>
>> Good news. As is the fact that the other errors have been cleared up.
>>
>>> But I see this exception printed multiple times; I am not sure why this occurs:
>>> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>
>> That looks like an infinite loop reading an incoming frame.
>> New frames are read using a 9 byte buffer for the header and a 16k buffer for the payload (since Tomcat sets this as the max frame size).
>>
>> The loop is occurring because one of those buffers is simultaneously both full and still has more data to read. That should not be possible and I haven't yet been able to figure out how this is happening.
>>
>> How easy is this to reproduce?
>>
>> How often do you see these errors in your test run?
>>
>> Do you have a reliable test case that reproduces this on a clean Tomcat 9.0.x build? If so, can you share the details?
>>
>> Do you have the other end of that stack trace? I'm interested in how the code enters the loop.
>>
>> Thanks,
>>
>> Mark
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
>>
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


RE: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Arshiya Shariff <ar...@ericsson.com.INVALID>.
Hi Mark,

The issue is reproduced with version 9.0.39 as well. Max threads in Tomcat is 200.

Please find the case:
Client: JMeter 5.2.1 (with the http2 plugin)
TPS: around 20
No of users from JMeter : 700
Message payload size: 6 KB to 34 KB
Loop: Infinite 
We let the loop run infinitely and see the java.lang.StackOverflowError trace printed multiple times in the log within a few minutes of starting the test.

Please help us with this. What is the impact of the StackOverflowError?

Thanks and Regards
Arshiya Shariff

-----Original Message-----
From: Mark Thomas <ma...@apache.org> 
Sent: Friday, October 9, 2020 5:31 PM
To: users@tomcat.apache.org
Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

On 09/10/2020 12:32, Arshiya Shariff wrote:
> Hi,
> 
> Mark, with the test runs that I performed over the clean 9.0.x branch I was not able to reproduce this.

Good. But I'd really like to understand why...

> But with 9.0.38 and the jars built from 9.0.x with hash c8ec2d4cde3a31b0e9df9a30e7915d77ba725545, with 700 or 1000 users (connections) and on sending 1000 requests per second (or even fewer), payload of 16K from JMeter, I can see that this exception occurs within a few minutes of starting the test. The maxThreads configured in Tomcat is 200.
> 
> How often do you see these errors in your test run?
> Randomly, at times 2 or 3 such traces.

OK. Definitely a timing issue then.

> Do you have the other end of that stack trace?
> It is only the two lines that are recursively printed till the end, about ~500 times in one trace:
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)

Doesn't tell me much unfortunately.

> I see the trace starting with :
> Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError 
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
> 
>  		(OR)
> 
> Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
>         at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         .....
>         .....
>         .....
>         .....
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
> 
> Is there anything that was fixed around this in latest 9.0.x branch ?

Not obviously. I've reviewed every commit since c8ec2d4c. There is nothing that directly works with the I/O. There is 1e97ab2 which fixes a relatively recent regression in the HTTP/2 code. I guess it is possible (but it seems a bit of a stretch) that that bug is triggering an issue in JMeter which in turn is sending invalid HTTP/2 packets.

I think at this point, given the relatively small number of commits between c8ec2d4c and HEAD, the most useful thing you could do is run a binary search to find out at which commit the issue is fixed. If we know which commit to look at that should help track down the root cause.

Mark

> 
> Thanks and Regards
> Arshiya Shariff
> 
> -----Original Message-----
> From: Mark Thomas <ma...@apache.org>
> Sent: Monday, October 5, 2020 9:52 PM
> To: users@tomcat.apache.org
> Subject: Re: HTTP2: memory filled up fast on increasing the 
> connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> On 05/10/2020 10:56, Arshiya Shariff wrote:
>> Hi All,
>>
>> Thank you so much, Mark.
>> We tested the jars built from the latest 9.0.x with 2000/5000 users
>> (connections) from JMeter, and we see a very good improvement with the
>> heap usage.
> 
> Good news. As is the fact that the other errors have been cleared up.
> 
>> But I see this exception printed multiple times; I am not sure why this occurs:
>> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
> 
> That looks like an infinite loop reading an incoming frame.
> New frames are read using a 9 byte buffer for the header and a 16k buffer for the payload (since Tomcat sets this as the max frame size).
> 
> The loop is occurring because one of those buffers is simultaneously both full and still has more data to read. That should not be possible and I haven't yet been able to figure out how this is happening.
> 
> How easy is this to reproduce?
> 
> How often do you see these errors in your test run?
> 
> Do you have a reliable test case that reproduces this on a clean Tomcat 9.0.x build? If so, can you share the details?
> 
> Do you have the other end of that stack trace? I'm interested in how the code enters the loop.
> 
> Thanks,
> 
> Mark
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org




Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Mark Thomas <ma...@apache.org>.
On 09/10/2020 12:32, Arshiya Shariff wrote:
> Hi, 
> 
> Mark, with the test runs that I performed over the clean 9.0.x branch I was not able to reproduce this.

Good. But I'd really like to understand why...

> But with 9.0.38 and the jars built from 9.0.x with hash c8ec2d4cde3a31b0e9df9a30e7915d77ba725545, with 700 or 1000 users (connections) and on sending 1000 requests per second (or even fewer), payload of 16K from JMeter, I can see that this exception occurs within a few minutes of starting the test. The maxThreads configured in Tomcat is 200.
> 
> How often do you see these errors in your test run?
> Randomly, at times 2 or 3 such traces.

OK. Definitely a timing issue then.

> Do you have the other end of that stack trace?
> It is only the two lines that are recursively printed till the end, about ~500 times in one trace:
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)

Doesn't tell me much unfortunately.

> I see the trace starting with :
> Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError 
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
> 
>  		(OR)
> 
> Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
>         at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         .....
>         .....
>         .....
>         .....
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
> 
> Is there anything that was fixed around this in latest 9.0.x branch ?

Not obviously. I've reviewed every commit since c8ec2d4c. There is
nothing that directly works with the I/O. There is 1e97ab2 which fixes a
relatively recent regression in the HTTP/2 code. I guess it is possible
(but it seems a bit of a stretch) that that bug is triggering an issue
in JMeter which in turn is sending invalid HTTP/2 packets.

I think at this point, given the relatively small number of commits
between c8ec2d4c and HEAD, the most useful thing you could do is run a
binary search to find out at which commit the issue is fixed. If we know
which commit to look at that should help track down the root cause.

Mark

> 
> Thanks and Regards
> Arshiya Shariff
> 
> -----Original Message-----
> From: Mark Thomas <ma...@apache.org> 
> Sent: Monday, October 5, 2020 9:52 PM
> To: users@tomcat.apache.org
> Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> On 05/10/2020 10:56, Arshiya Shariff wrote:
>> Hi All,
>>
>> Thank you so much, Mark.
>> We tested the jars built from the latest 9.0.x with 2000/5000 users
>> (connections) from JMeter, and we see a very good improvement with the
>> heap usage.
> 
> Good news. As is the fact that the other errors have been cleared up.
> 
>> But I see this exception printed multiple times; I am not sure why this occurs:
>> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
> 
> That looks like an infinite loop reading an incoming frame.
> New frames are read using a 9 byte buffer for the header and a 16k buffer for the payload (since Tomcat sets this as the max frame size).
> 
> The loop is occurring because one of those buffers is simultaneously both full and still has more data to read. That should not be possible and I haven't yet been able to figure out how this is happening.
> 
> How easy is this to reproduce?
> 
> How often do you see these errors in your test run?
> 
> Do you have a reliable test case that reproduces this on a clean Tomcat 9.0.x build? If so, can you share the details?
> 
> Do you have the other end of that stack trace? I'm interested in how the code enters the loop.
> 
> Thanks,
> 
> Mark
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Martin Grigorov <mg...@apache.org>.
Hi Arshiya,


On Fri, Oct 9, 2020 at 2:33 PM Arshiya Shariff
<ar...@ericsson.com.invalid> wrote:

> Hi,
>
> Mark, with the test runs that I performed over the clean 9.0.x branch I was
> not able to reproduce this. But with 9.0.38 and the jars built from 9.0.x
> with hash c8ec2d4cde3a31b0e9df9a30e7915d77ba725545, with 700 or 1000
> users (connections) and on sending 1000 requests per second (or even
> fewer), payload of 16K from JMeter, I can see that this exception occurs
> within a few minutes of starting the test. The maxThreads configured in
> Tomcat is 200.
>
> How often do you see these errors in your test run?
> Randomly, at times 2 or 3 such traces.
>
> Do you have the other end of that stack trace?
> It is only the two lines that are recursively printed till the end, about
> ~500 times in one trace:
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>
> I see the trace starting with :
> Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>
>                 (OR)
>
> Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
>         at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         .....
>         .....
>         .....
>         .....
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>
> Is there anything that was fixed around this in latest 9.0.x branch ?
>

9.0.39 is being voted on now on the dev@ mailing list:

It can be obtained from:
https://dist.apache.org/repos/dist/dev/tomcat/tomcat-9/v9.0.39/
The Maven staging repo is:
https://repository.apache.org/content/repositories/orgapachetomcat-1281/

Give it a try and vote!
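
One pitfall when swapping jars in an embedded setup is a stale tomcat-embed-core left on the classpath. A quick check with Tomcat's own ServerInfo utility confirms which build actually loaded (the utility class is real; the small wrapper around it is just an illustration):

    import org.apache.catalina.util.ServerInfo;

    public class VersionCheck {
        public static void main(String[] args) {
            // Prints e.g. "Apache Tomcat/9.0.39" for the jar that is actually on the classpath.
            System.out.println(ServerInfo.getServerInfo());
        }
    }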


> Thanks and Regards
> Arshiya Shariff
>
> -----Original Message-----
> From: Mark Thomas <ma...@apache.org>
> Sent: Monday, October 5, 2020 9:52 PM
> To: users@tomcat.apache.org
> Subject: Re: HTTP2: memory filled up fast on increasing the connections to
> 1000/2000 (Embedded tomcat 9.0.38)
>
> On 05/10/2020 10:56, Arshiya Shariff wrote:
> > Hi All,
> >
> > Thank you so much, Mark.
> > We tested the jars built from the latest 9.0.x with 2000/5000 users
> > (connections) from JMeter, and we see a very good improvement with the
> > heap usage.
>
> Good news. As is the fact that the other errors have been cleared up.
>
> > But I see this exception printed multiple times; I am not sure why this occurs:
> > Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError
> >         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
> >         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
> >         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
> >         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
> >         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
> >         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
> >         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
> >         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
> >         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
> >         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>
> That looks like an infinite loop reading an incoming frame.
> New frames are read using a 9 byte buffer for the header and a 16k buffer
> for the payload (since Tomcat sets this as the max frame size).
>
> The loop is occurring because one of those buffers is simultaneously both
> full and still has more data to read. That should not be possible and I
> haven't yet been able to figure out how this is happening.
>
> How easy is this to reproduce?
>
> How often do you see these errors in your test run?
>
> Do you have a reliable test case that reproduces this on a clean Tomcat
> 9.0.x build? If so, can you share the details?
>
> Do you have the other end of that stack trace? I'm interested in how the
> code enters the loop.
>
> Thanks,
>
> Mark
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>

RE: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Arshiya Shariff <ar...@ericsson.com.INVALID>.
Hi, 

Mark, with the test runs that I performed over the clean 9.0.x branch I was not able to reproduce this. But with 9.0.38 and the jars built from 9.0.x with hash c8ec2d4cde3a31b0e9df9a30e7915d77ba725545, with 700 or 1000 users (connections) and on sending 1000 requests per second (or even fewer), payload of 16K from JMeter, I can see that this exception occurs within a few minutes of starting the test. The maxThreads configured in Tomcat is 200.

How often do you see these errors in your test run?
Randomly, at times 2 or 3 such traces.

Do you have the other end of that stack trace?
It is only the two lines that are recursively printed till the end, about ~500 times in one trace:
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
        at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)

I see the trace starting with :
Exception in thread "http-nio-x.y.z-1090-exec-107" java.lang.StackOverflowError 
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:446)
        at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
        at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)

 		(OR)

Exception in thread "http-nio-x.y.z-1090-exec-87" java.lang.StackOverflowError
        at sun.nio.ch.IOVecWrapper.get(IOVecWrapper.java:96)
        at sun.nio.ch.IOUtil.read(IOUtil.java:240)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
        at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
        at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
        at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
        .....
        .....
        .....
        .....
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
        at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
        at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)

Is there anything that was fixed around this in latest 9.0.x branch ?

Thanks and Regards
Arshiya Shariff

-----Original Message-----
From: Mark Thomas <ma...@apache.org> 
Sent: Monday, October 5, 2020 9:52 PM
To: users@tomcat.apache.org
Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

On 05/10/2020 10:56, Arshiya Shariff wrote:
> Hi All,
> 
> Thank you so much, Mark.
> We tested the jars built from the latest 9.0.x with 2000/5000 users
> (connections) from JMeter, and we see a very good improvement with the
> heap usage.

Good news. As is the fact that the other errors have been cleared up.

> But I see this exception printed multiple times; I am not sure why this occurs:
> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)

That looks like an infinite loop reading an incoming frame.
New frames are read using a 9 byte buffer for the header and a 16k buffer for the payload (since Tomcat sets this as the max frame size).

The loop is occurring because one of those buffers is simultaneously both full and still has more data to read. That should not be possible and I haven't yet been able to figure out how this is happening.

How easy is this to reproduce?

How often do you see these errors in your test run?

Do you have a reliable test case that reproduces this on a clean Tomcat 9.0.x build? If so, can you share the details?

Do you have the other end of that stack trace? I'm interested in how the code enters the loop.

Thanks,

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Mark Thomas <ma...@apache.org>.
On 05/10/2020 10:56, Arshiya Shariff wrote:
> Hi All, 
> 
> Thank you so much, Mark.
> We tested the jars built from the latest 9.0.x with 2000/5000 users (connections) from JMeter, and we see a very good improvement with the heap usage.

Good news. As is the fact that the other errors have been cleared up.

> But I see this exception printed multiple times; I am not sure why this occurs:
> Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
>         at sun.nio.ch.IOUtil.read(IOUtil.java:240)
>         at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
>         at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>         at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
>         at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)

That looks like an infinite loop reading an incoming frame.
New frames are read using a 9 byte buffer for the header and a 16k
buffer for the payload (since Tomcat sets this as the max frame size).

The loop is occurring because one of those buffers is simultaneously
both full and still has more data to read. That should not be possible
and I haven't yet been able to figure out how this is happening.
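
To make the failure mode concrete - this is a deliberately simplified model, not Tomcat's actual code - consider a completion handler that re-issues a read whenever it believes more data is pending. If a full buffer is ever mis-detected as "more to read", each completion re-enters the read on the same stack, and the recursion can only end in a StackOverflowError:

    import java.nio.ByteBuffer;

    public class CompletionLoopDemo {

        interface Handler { void completed(int bytesRead, ByteBuffer buf); }

        // Models a read into a buffer that is already full: nothing can be transferred.
        static int fakeRead(ByteBuffer buf) {
            return 0;
        }

        // Models a synchronous completion: the handler runs on the caller's stack.
        static void read(ByteBuffer buf, Handler h) {
            h.completed(fakeRead(buf), buf);
        }

        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocate(16 * 1024);
            buf.position(buf.limit()); // simulate a payload buffer that is already full

            Handler h = new Handler() {
                @Override
                public void completed(int bytesRead, ByteBuffer buf) {
                    boolean morePending = true; // the mis-detected condition
                    if (morePending && !buf.hasRemaining()) {
                        // Nothing was consumed, yet the read is re-issued:
                        // read -> completed -> read -> ... until the stack is exhausted.
                        read(buf, this);
                    }
                }
            };
            read(buf, h); // throws java.lang.StackOverflowError
        }
    }

A side note for anyone capturing these traces: HotSpot truncates exception backtraces at -XX:MaxJavaStackTraceDepth frames (1024 by default), which matches the ~500 repeating two-line pairs reported earlier; raising that limit may expose the frames where the loop was first entered.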

How easy is this to reproduce?

How often do you see these errors in your test run?

Do you have a reliable test case that reproduces this on a clean Tomcat
9.0.x build? If so, can you share the details?

Do you have the other end of that stack trace? I'm interested in how the
code enters the loop.

Thanks,

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


RE: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Arshiya Shariff <ar...@ericsson.com.INVALID>.
Hi All, 

Thank you so much, Mark.
We tested the jars built from the latest 9.0.x with 2000/5000 users (connections) from JMeter, and we see a very good improvement with the heap usage.

But I see this exception printed multiple times; I am not sure why this occurs:
Exception in thread "http-nio-x.y.z-1234-exec-213" java.lang.StackOverflowError 
        at sun.nio.ch.IOUtil.read(IOUtil.java:240)
        at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:440)
        at org.apache.tomcat.util.net.NioChannel.read(NioChannel.java:174)
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1468)
        at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
        at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
        at org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1100)
        at org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)

Any help with this please.

Thanks and Regards
Arshiya Shariff
        
-----Original Message-----
From: Mark Thomas <ma...@apache.org> 
Sent: Thursday, October 1, 2020 2:59 PM
To: users@tomcat.apache.org
Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

On 30/09/2020 18:47, Martin Grigorov wrote:
> On Wed, Sep 30, 2020 at 7:47 PM Mark Thomas <ma...@apache.org> wrote:
>> On 30/09/2020 16:17, Mark Thomas wrote:

<snip/>

>>> That is helpful. Looks like you have found a way to reproduce the 
>>> buffer issues reported in 
>>> https://bz.apache.org/bugzilla/show_bug.cgi?id=64710
>>
>> Can you share the command you used to trigger those errors please.
>>
> 
> The Vegeta command I used is:
> 
> jq -ncM '{"method": "POST", "url": 
> "https://localhost:8080/testbed/plaintext",
> "body":"payload=Some
> sdgggwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwsdgssfshffheeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeessssffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffpayload"
> | @base64, header: {"Content-Type":
> ["application/x-www-form-urlencoded"]}}' | vegeta attack -format=json
> -http2 -rate=1000 -max-workers=8 -insecure -duration=2m | vegeta 
> encode > /tmp/http2.json; and vegeta report -type=json /tmp/http2.json | jq .
> 
> The app is at
> https://github.com/martin-g/http2-server-perf-tests/tree/master/java/tomcat.
> Just start EmbeddedTomcat#main() with -Dtomcat.http2=true

Definitely timing related as I am unable to reproduce the problem with that command or some variations.

However, I think I have managed to track down the root cause. The good news is that the BufferOverflowException is largely harmless. It is a side-effect of the connection being closed due to an error. My guess is that the error was a combination of vegeta sending an unexpected reset frame and Tomcat maintaining state for a very small number of streams in some circumstances.

If you could retest with the latest 9.0.x that would be very helpful.
The memory usage, stream state maintenance and this BufferOverflowException should all be fixed.

Mark


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Martin Grigorov <mg...@apache.org>.
On Thu, Oct 1, 2020 at 12:29 PM Mark Thomas <ma...@apache.org> wrote:

> On 30/09/2020 18:47, Martin Grigorov wrote:
> > On Wed, Sep 30, 2020 at 7:47 PM Mark Thomas <ma...@apache.org> wrote:
> >> On 30/09/2020 16:17, Mark Thomas wrote:
>
> <snip/>
>
> >>> That is helpful. Looks like you have found a way to reproduce the
> buffer
> >>> issues reported in
> https://bz.apache.org/bugzilla/show_bug.cgi?id=64710
> >>
> >> Can you share the command you used to trigger those errors please.
> >>
> >
> > The Vegeta command I used is:
> >
> > jq -ncM '{"method": "POST", "url": "
> https://localhost:8080/testbed/plaintext",
> > "body":"payload=Some
> >
> sdgggwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwsdgssfshffheeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeessssffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffpayload"
> > | @base64, header: {"Content-Type":
> > ["application/x-www-form-urlencoded"]}}' | vegeta attack -format=json
> > -http2 -rate=1000 -max-workers=8 -insecure -duration=2m | vegeta encode >
> > /tmp/http2.json; and vegeta report -type=json /tmp/http2.json | jq .
> >
> > The app is at
> >
> https://github.com/martin-g/http2-server-perf-tests/tree/master/java/tomcat
> .
> > Just start EmbeddedTomcat#main() with -Dtomcat.http2=true
>
> Definitely timing related as I am unable to reproduce the problem with
> that command or some variations.
>
> However, I think I have managed to track down the root cause. The good
> news is that the BufferOverflowException is largely harmless. It is a
> side-effect of the connection being closed due to an error. My guess is
> that the error was a combination of vegeta sending an unexpected reset
> frame and Tomcat maintaining state for a very small number of streams in
> some circumstances.
>
> If you could retest with the latest 9.0.x that would be very helpful.
> The memory usage, stream state maintenance and this
> BufferOverflowException should all be fixed.
>

Yesterday it was very easy to reproduce it here.
It looks good now - neither exception type occurred in several runs!

But something new broke:


SEVERE: Servlet.service() for servlet [plaintext] in context with path []
threw exception
java.lang.NullPointerException: Cannot throw exception because "ioe" is null
at
org.apache.coyote.http2.Http2UpgradeHandler.handleAppInitiatedIOException(Http2UpgradeHandler.java:797)
at
org.apache.coyote.http2.Http2AsyncUpgradeHandler.handleAsyncException(Http2AsyncUpgradeHandler.java:276)
at
org.apache.coyote.http2.Http2AsyncUpgradeHandler.writeWindowUpdate(Http2AsyncUpgradeHandler.java:252)
at org.apache.coyote.http2.Stream$StreamInputBuffer.doRead(Stream.java:1088)
at org.apache.coyote.Request.doRead(Request.java:555)
at
org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:336)
at
org.apache.catalina.connector.InputBuffer.checkByteBufferEof(InputBuffer.java:632)
at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:362)
at
org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:132)
at org.apache.catalina.connector.Request.readPostBody(Request.java:3308)
at org.apache.catalina.connector.Request.parseParameters(Request.java:3241)
at org.apache.catalina.connector.Request.getParameter(Request.java:1124)
at
org.apache.catalina.connector.RequestFacade.getParameter(RequestFacade.java:381)
at info.mgsolutions.tomcat.PlainTextServlet.doPost(PlainTextServlet.java:41)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:652)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:733)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)

I will improve it!
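
For context, the servlet frame at the bottom of that trace does nothing unusual - roughly the following (a hypothetical reconstruction from the stack trace, not the exact PlainTextServlet from the repo):

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class PlainTextServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // For a POST with Content-Type application/x-www-form-urlencoded,
        // getParameter() makes Tomcat read the request body
        // (Request.parseParameters/readPostBody in the trace), which on
        // HTTP/2 triggers the stream window update that raised the NPE above.
        String payload = req.getParameter("payload");
        resp.setContentType("text/plain");
        resp.getWriter().write(payload == null ? "" : payload);
    }
}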


> Mark
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>

Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Mark Thomas <ma...@apache.org>.
On 30/09/2020 18:47, Martin Grigorov wrote:
> On Wed, Sep 30, 2020 at 7:47 PM Mark Thomas <ma...@apache.org> wrote:
>> On 30/09/2020 16:17, Mark Thomas wrote:

<snip/>

>>> That is helpful. Looks like you have found a way to reproduce the buffer
>>> issues reported in https://bz.apache.org/bugzilla/show_bug.cgi?id=64710
>>
>> Can you share the command you used to trigger those errors please.
>>
> 
> The Vegeta command I used is:
> 
> jq -ncM '{"method": "POST", "url": "https://localhost:8080/testbed/plaintext",
> "body":"payload=Some
> sdgggwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwsdgssfshffheeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeessssffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffpayload"
> | @base64, header: {"Content-Type":
> ["application/x-www-form-urlencoded"]}}' | vegeta attack -format=json
> -http2 -rate=1000 -max-workers=8 -insecure -duration=2m | vegeta encode >
> /tmp/http2.json; and vegeta report -type=json /tmp/http2.json | jq .
> 
> The app is at
> https://github.com/martin-g/http2-server-perf-tests/tree/master/java/tomcat.
> Just start EmbeddedTomcat#main() with -Dtomcat.http2=true

Definitely timing related as I am unable to reproduce the problem with
that command or some variations.

However, I think I have managed to track down the root cause. The good
news is that the BufferOverflowException is largely harmless. It is a
side-effect of the connection being closed due to an error. My guess is
that the error was a combination of vegeta sending an unexpected reset
frame and Tomcat maintaining state for a very small number of streams in
some circumstances.

If you could retest with the latest 9.0.x that would be very helpful.
The memory usage, stream state maintenance and this
BufferOverflowException should all be fixed.

Mark


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Martin Grigorov <mg...@apache.org>.
On Wed, Sep 30, 2020 at 7:47 PM Mark Thomas <ma...@apache.org> wrote:

> On 30/09/2020 16:17, Mark Thomas wrote:
> > On 30/09/2020 13:53, Martin Grigorov wrote:
> >> On Wed, Sep 30, 2020 at 12:50 PM Martin Grigorov <mg...@apache.org>
> >
> >
> > <snip/>
> >
> >> When I load test HTTP2 with POST (with big bodies) there are many errors
> >> like:
> >>
> >> 1)
> >> Exception in thread "https-jsse-nio-8080-exec-5"
> >> java.nio.BufferOverflowException
> >> at java.base/java.nio.ByteBuffer.put(ByteBuffer.java:957)
> >> at java.base/java.nio.HeapByteBuffer.put(HeapByteBuffer.java:247)
> >> at
> >> org.apache.tomcat.util.net
> .SocketBufferHandler.unReadReadBuffer(SocketBufferHandler.java:100)
> >> at
> >> org.apache.tomcat.util.net
> .SocketWrapperBase.unRead(SocketWrapperBase.java:401)
> >> at
> >>
> org.apache.coyote.http2.Http2AsyncParser$FrameCompletionHandler.completed(Http2AsyncParser.java:307)
> >> at
> >>
> org.apache.coyote.http2.Http2AsyncParser$FrameCompletionHandler.completed(Http2AsyncParser.java:164)
> >> at
> >> org.apache.tomcat.util.net
> .SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1087)
> >> at
> >> org.apache.tomcat.util.net
> .NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
> >> at
> >>
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
> >> at
> >>
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
> >> at
> >>
> org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
> >> at java.base/java.lang.Thread.run(Thread.java:832)
> >>
> >> 2)
> >> Sep 30, 2020 3:44:04 PM org.apache.tomcat.util.net.NioEndpoint$Poller
> events
> >> SEVERE: Failed to register socket with selector from poller
> >> java.nio.channels.ClosedChannelException
> >> at
> >>
> java.base/java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:222)
> >> at
> >> org.apache.tomcat.util.net
> .NioEndpoint$Poller.events(NioEndpoint.java:609)
> >> at org.apache.tomcat.util.net
> .NioEndpoint$Poller.run(NioEndpoint.java:703)
> >> at java.base/java.lang.Thread.run(Thread.java:832)
> >
> > That is helpful. Looks like you have found a way to reproduce the buffer
> > issues reported in https://bz.apache.org/bugzilla/show_bug.cgi?id=64710
>
> Can you share the command you used to trigger those errors please.
>

The Vegeta command I used is:

jq -ncM '{"method": "POST", "url": "https://localhost:8080/testbed/plaintext",
"body":"payload=Some
sdgggwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwwsdgssfshffheeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeessssffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffpayload"
| @base64, header: {"Content-Type":
["application/x-www-form-urlencoded"]}}' | vegeta attack -format=json
-http2 -rate=1000 -max-workers=8 -insecure -duration=2m | vegeta encode >
/tmp/http2.json; and vegeta report -type=json /tmp/http2.json | jq .

The app is at
https://github.com/martin-g/http2-server-perf-tests/tree/master/java/tomcat.
Just start EmbeddedTomcat#main() with -Dtomcat.http2=true
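
For anyone reproducing this without cloning the repo, the wiring is roughly the following (a minimal sketch, not the repo's exact EmbeddedTomcat; the context path and servlet body are assumptions, and real h2 over TLS additionally needs an SSLHostConfig with a certificate, omitted here):

import java.io.File;
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.catalina.Context;
import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;
import org.apache.coyote.http2.Http2Protocol;

public class EmbeddedTomcatSketch {

    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);

        // getConnector() creates the default NIO HTTP/1.1 connector on demand;
        // adding Http2Protocol enables HTTP/2 on it (h2c upgrade on cleartext).
        Connector connector = tomcat.getConnector();
        if (Boolean.getBoolean("tomcat.http2")) {
            connector.addUpgradeProtocol(new Http2Protocol());
        }

        Context ctx = tomcat.addContext("/testbed", new File(".").getAbsolutePath());
        Tomcat.addServlet(ctx, "plaintext", new HttpServlet() {
            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.setContentType("text/plain");
                resp.getWriter().write("OK: " + req.getParameter("payload"));
            }
        });
        ctx.addServletMappingDecoded("/plaintext", "plaintext");

        tomcat.start();
        tomcat.getServer().await();
    }
}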

Martin


>
> Thanks,
>
> Mark
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>

Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Mark Thomas <ma...@apache.org>.
On 30/09/2020 16:17, Mark Thomas wrote:
> On 30/09/2020 13:53, Martin Grigorov wrote:
>> On Wed, Sep 30, 2020 at 12:50 PM Martin Grigorov <mg...@apache.org>
> 
> 
> <snip/>
> 
>> When I load test HTTP2 with POST (with big bodies) there are many errors
>> like:
>>
>> 1)
>> Exception in thread "https-jsse-nio-8080-exec-5"
>> java.nio.BufferOverflowException
>> at java.base/java.nio.ByteBuffer.put(ByteBuffer.java:957)
>> at java.base/java.nio.HeapByteBuffer.put(HeapByteBuffer.java:247)
>> at
>> org.apache.tomcat.util.net.SocketBufferHandler.unReadReadBuffer(SocketBufferHandler.java:100)
>> at
>> org.apache.tomcat.util.net.SocketWrapperBase.unRead(SocketWrapperBase.java:401)
>> at
>> org.apache.coyote.http2.Http2AsyncParser$FrameCompletionHandler.completed(Http2AsyncParser.java:307)
>> at
>> org.apache.coyote.http2.Http2AsyncParser$FrameCompletionHandler.completed(Http2AsyncParser.java:164)
>> at
>> org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1087)
>> at
>> org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
>> at
>> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
>> at
>> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
>> at
>> org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
>> at java.base/java.lang.Thread.run(Thread.java:832)
>>
>> 2)
>> Sep 30, 2020 3:44:04 PM org.apache.tomcat.util.net.NioEndpoint$Poller events
>> SEVERE: Failed to register socket with selector from poller
>> java.nio.channels.ClosedChannelException
>> at
>> java.base/java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:222)
>> at
>> org.apache.tomcat.util.net.NioEndpoint$Poller.events(NioEndpoint.java:609)
>> at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:703)
>> at java.base/java.lang.Thread.run(Thread.java:832)
> 
> That is helpful. Looks like you have found a way to reproduce the buffer
> issues reported in https://bz.apache.org/bugzilla/show_bug.cgi?id=64710

Can you share the command you used to trigger those errors please.

Thanks,

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Mark Thomas <ma...@apache.org>.
On 30/09/2020 13:53, Martin Grigorov wrote:
> On Wed, Sep 30, 2020 at 12:50 PM Martin Grigorov <mg...@apache.org>


<snip/>

> When I load test HTTP2 with POST (with big bodies) there are many errors
> like:
> 
> 1)
> Exception in thread "https-jsse-nio-8080-exec-5"
> java.nio.BufferOverflowException
> at java.base/java.nio.ByteBuffer.put(ByteBuffer.java:957)
> at java.base/java.nio.HeapByteBuffer.put(HeapByteBuffer.java:247)
> at
> org.apache.tomcat.util.net.SocketBufferHandler.unReadReadBuffer(SocketBufferHandler.java:100)
> at
> org.apache.tomcat.util.net.SocketWrapperBase.unRead(SocketWrapperBase.java:401)
> at
> org.apache.coyote.http2.Http2AsyncParser$FrameCompletionHandler.completed(Http2AsyncParser.java:307)
> at
> org.apache.coyote.http2.Http2AsyncParser$FrameCompletionHandler.completed(Http2AsyncParser.java:164)
> at
> org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1087)
> at
> org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
> at
> org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
> at java.base/java.lang.Thread.run(Thread.java:832)
> 
> 2)
> Sep 30, 2020 3:44:04 PM org.apache.tomcat.util.net.NioEndpoint$Poller events
> SEVERE: Failed to register socket with selector from poller
> java.nio.channels.ClosedChannelException
> at
> java.base/java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:222)
> at
> org.apache.tomcat.util.net.NioEndpoint$Poller.events(NioEndpoint.java:609)
> at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:703)
> at java.base/java.lang.Thread.run(Thread.java:832)

That is helpful. Looks like you have found a way to reproduce the buffer
issues reported in https://bz.apache.org/bugzilla/show_bug.cgi?id=64710

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Martin Grigorov <mg...@apache.org>.
On Wed, Sep 30, 2020 at 12:50 PM Martin Grigorov <mg...@apache.org>
wrote:

> Hi,
>
> On Wed, Sep 30, 2020 at 11:35 AM Mark Thomas <ma...@apache.org> wrote:
>
>> On 30/09/2020 06:42, Arshiya Shariff wrote:
>> > Hi Martin ,
>> >
>> > Thank you for the response.
>> >
>> > With a payload of 200 bytes we were able to send 20K requests/sec with
>> 200 users from Jmeter without any memory issue . On increasing the payload
>> to 5Kb and the number of users to 1000 in Jmeter and sending 1000 requests
>> per second , the heap of 20GB got filled in 2 minutes . With 200 users the
>> memory is cleared in the G1 mixed GC itself , but with 1000 users the
>> memory is not cleared in the mixed GC , it takes full GCs of 7 to 10
>> seconds to clear the memory. These cases were executed with maxThreads 200
>> in tomcat , so we tried increasing the maxThreads from 200 to 1000, but
>> still GC was struggling .
>> >
>> > When we tried with 10 instances of JMeter , each with 100 users , where
>> each instance was started with a delay of 1 minute we were able to see 1000
>> connections created in tomcat without any memory issues. But when 1000
>> users are created using single instance of JMeter in 20 seconds , tomcat's
>> memory is filling fast- 20GB in 2 minutes.
>> > We suspect that the burst of connections being opened has a problem .
>> Please help us with the same .
>> >
>> > On analyzing the heap dump we see
>> org.apache.tomcat.util.collections.SynchronizedStack occupying around 93%
>> of 3GB live data ,the remaining 17GB is Garbage collected in the heap dump.
>>
>> You can't have high throughput, low GC pauses and small heap sizes.
>> Broadly you can have any two of those three at the expense of the third.
>>
>> The way Tomcat currently retains information about completed h2 streams
>> means you are likely to need a large heap under heavy load. There are
>> some changes already in 10.0.x that I plan to back-port to 9.0.x and
>> 8.5.x later today that should significantly reduce the heap requirements.
>>
>
> Here is a screenshot of me loading Tomcat HTTP2 9.0.x+the changes from
> 10.0.x with Vegeta for 3 mins:
> https://pasteboard.co/JtshrAs.png
> As you can see the GC is properly cleaning the heap. At the end the memory
> is not released until the GC kicks in.
>
> Note: this is with a GET request without a body! I'm going to start a new
> email thread for POST with body - there I get GOAWAY+ENHANCE_YOUR_CALM for
> very low load.
>

When I load test HTTP2 with POST (with big bodies) there are many errors
like:

1)
Exception in thread "https-jsse-nio-8080-exec-5"
java.nio.BufferOverflowException
at java.base/java.nio.ByteBuffer.put(ByteBuffer.java:957)
at java.base/java.nio.HeapByteBuffer.put(HeapByteBuffer.java:247)
at
org.apache.tomcat.util.net.SocketBufferHandler.unReadReadBuffer(SocketBufferHandler.java:100)
at
org.apache.tomcat.util.net.SocketWrapperBase.unRead(SocketWrapperBase.java:401)
at
org.apache.coyote.http2.Http2AsyncParser$FrameCompletionHandler.completed(Http2AsyncParser.java:307)
at
org.apache.coyote.http2.Http2AsyncParser$FrameCompletionHandler.completed(Http2AsyncParser.java:164)
at
org.apache.tomcat.util.net.SocketWrapperBase$VectoredIOCompletionHandler.completed(SocketWrapperBase.java:1087)
at
org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper$NioOperationState.run(NioEndpoint.java:1511)
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.base/java.lang.Thread.run(Thread.java:832)

2)
Sep 30, 2020 3:44:04 PM org.apache.tomcat.util.net.NioEndpoint$Poller events
SEVERE: Failed to register socket with selector from poller
java.nio.channels.ClosedChannelException
at
java.base/java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:222)
at
org.apache.tomcat.util.net.NioEndpoint$Poller.events(NioEndpoint.java:609)
at org.apache.tomcat.util.net.NioEndpoint$Poller.run(NioEndpoint.java:703)
at java.base/java.lang.Thread.run(Thread.java:832)

And the client gets these kind of error responses:

"Post \"https://localhost:8080/testbed/plaintext\": http2: server sent
GOAWAY and closed the connection; LastStreamID=7, ErrCode=PROTOCOL_ERROR,
debug=\"Connection [9354], Stream [7], The content length header value
[255] does not agree with the size of the data received [0]\"",
"Post \"https://localhost:8080/testbed/plaintext\": http2: server sent
GOAWAY and closed the connection; LastStreamID=31,
ErrCode=ENHANCE_YOUR_CALM, debug=\"Connection [9355], Too much overhead so
the connection will be closed\""
"Post \"https://localhost:8080/testbed/plaintext\": http2: server sent
GOAWAY and closed the connection; LastStreamID=6155, ErrCode=STREAM_CLOSED,
debug=\"Connection [10048], Stream [6153], State [CLOSED_RX], Frame type
[RST]\""

"status_codes": {
    "0": 286,
    "200": 35373
}
, i.e. just a small portion of the requests fail.

If I tell Vegeta to use HTTP1.1 (-http2=f) then everything is OK. It could
be a problem in Vegeta too.

The memory usage is just fine (as in the chart for GET).
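
The ENHANCE_YOUR_CALM GOAWAY above is Tomcat's HTTP/2 overhead protection closing the connection ("Too much overhead so the connection will be closed"). If a load test legitimately trips it, the thresholds can be relaxed on the upgrade protocol before it is added to the connector - a sketch under that assumption (the values are illustrative only; lowering them weakens the DoS protection, and the HTTP/2 connector docs have the defaults and exact semantics):

import org.apache.catalina.connector.Connector;
import org.apache.coyote.http2.Http2Protocol;

public final class Http2OverheadTuning {

    // Sketch: add HTTP/2 to a connector with relaxed overhead protection.
    static void addRelaxedHttp2(Connector connector) {
        Http2Protocol http2 = new Http2Protocol();
        http2.setOverheadCountFactor(1);           // weight of non-payload frames such as RST
        http2.setOverheadDataThreshold(0);         // 0 disables the small-DATA-frame check
        http2.setOverheadWindowUpdateThreshold(0); // 0 disables the small-window-update check
        connector.addUpgradeProtocol(http2);
    }
}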

Martin


> Martin
>
>
>>
>> Mark
>>
>>
>> >
>> > Thanks and Regards
>> > Arshiya Shariff
>> >
>> > -----Original Message-----
>> > From: Martin Grigorov <mg...@apache.org>
>> > Sent: Monday, September 28, 2020 11:44 PM
>> > To: Tomcat Users List <us...@tomcat.apache.org>
>> > Subject: Re: HTTP2: memory filled up fast on increasing the connections
>> to 1000/2000 (Embedded tomcat 9.0.38)
>> >
>> > Hi Arshiya,
>> >
>> >
>> > On Mon, Sep 28, 2020 at 7:59 PM Arshiya Shariff <
>> arshiya.shariff@ericsson.com.invalid> wrote:
>> >
>> >> Hi All,
>> >> With 200 threads(users) , ramp up duration of 2 seconds , loop count
>> >> 80 and by sending 1000 http2 requests/sec from JMeter Client to an
>> >> embedded tomcat application we did not observe any memory issue , but
>> >> on sending
>> >> 1000 http2 requests/sec with 2000 or 1000 users from JMeter , the
>> >> application's heap space of 20 GB is occupied in 2 minutes and after 2
>> >> full GCs the memory clears and comes down to 4GB (expected) .
>> >>
>> >
>> > I am not sure whether you follow the other discussions at users@.
>> > In another email thread we discuss load testing Tomcat HTTP2 and we are
>> able to make around 12K reqs/s with another load testing tool -
>> https://github.com/tsenart/vegeta
>> > For me JMeter itself failed with OOM when increasing the number of the
>> virtual users above 2K.
>> > There are several improvements in Tomcat master and 9.0.x in the HTTP2
>> area. Some of the changes are not yet backported to 9.0.x. We still test
>> them, trying to avoid introducing regressions in 9.0.x.
>> >
>> >
>> >>
>> >> Embedded tomcat Version:9.0.38
>> >> Max Threads : 200
>> >>
>> >
>> > The number of threads should be less if you do only CPU calculations
>> without IO/network. If your app blocks on IO/network calls then you need
>> more spare threads.
>> > With more threads there will be more context switches and less
>> throughput.
>> > That's why there is no one golden rule that applies to all applications.
>> > 200 is a good default that works for most of the applications. But you
>> need to test with different values to see which one gives the best
>> performance for your scenario.
>> >
>> >
>> >> All other properties are the tomcat defaults.
>> >>
>> >> Why is tomcat not able to process many connections ?
>> >>
>> >
>> > You can tell us by enabling -XX:+HeapDumpOnOutOfMemoryError and
>> -XX:HeapDumpPath=<file-or-dir-path>. Once you have the .hprof file you can
>> examine it with Eclipse Memory Analyzer tool and see what is leaking.
>> > I will try to reproduce this issue tomorrow with Vegeta.
>> >
>> >
>> >> Why is the memory filled when the connections are increased, are there
>> >> any parameters to tune connections ?
>> >> Please let us know.
>> >>
>> >> Thanks and Regards
>> >> Arshiya Shariff
>> >>
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> > For additional commands, e-mail: users-help@tomcat.apache.org
>> >
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
>> For additional commands, e-mail: users-help@tomcat.apache.org
>>
>>

Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Martin Grigorov <mg...@apache.org>.
Hi,

On Wed, Sep 30, 2020 at 11:35 AM Mark Thomas <ma...@apache.org> wrote:

> On 30/09/2020 06:42, Arshiya Shariff wrote:
> > Hi Martin ,
> >
> > Thank you for the response.
> >
> > With a payload of 200 bytes we were able to send 20K requests/sec with
> 200 users from Jmeter without any memory issue . On increasing the payload
> to 5Kb and the number of users to 1000 in Jmeter and sending 1000 requests
> per second , the heap of 20GB got filled in 2 minutes . With 200 users the
> memory is cleared in the G1 mixed GC itself , but with 1000 users the
> memory is not cleared in the mixed GC , it takes full GCs of 7 to 10
> seconds to clear the memory. These cases were executed with maxThreads 200
> in tomcat , so we tried increasing the maxThreads from 200 to 1000, but
> still GC was struggling .
> >
> > When we tried with 10 instances of JMeter , each with 100 users , where
> each instance was started with a delay of 1 minute we were able to see 1000
> connections created in tomcat without any memory issues. But when 1000
> users are created using single instance of JMeter in 20 seconds , tomcat's
> memory is filling fast- 20GB in 2 minutes.
> > We suspect that the burst of connections being opened has a problem .
> Please help us with the same .
> >
> > On analyzing the heap dump we see
> org.apache.tomcat.util.collections.SynchronizedStack occupying around 93%
> of 3GB live data ,the remaining 17GB is Garbage collected in the heap dump.
>
> You can't have high throughput, low GC pauses and small heap sizes.
> Broadly you can have any two of those three at the expense of the third.
>
> The way Tomcat currently retains information about completed h2 streams
> means you are likely to need a large heap under heavy load. There are
> some changes already in 10.0.x that I plan to back-port to 9.0.x and
> 8.5.x later today that should significantly reduce the heap requirements.
>

Here is a screenshot of me loading Tomcat HTTP2 9.0.x+the changes from
10.0.x with Vegeta for 3 mins:
https://pasteboard.co/JtshrAs.png
As you can see the GC is properly cleaning the heap. At the end the memory
is not released until the GC kicks in.

Note: this is with a GET request without a body! I'm going to start a new
email thread for POST with body - there I get GOAWAY+ENHANCE_YOUR_CALM for
very low load.

Martin


>
> Mark
>
>
> >
> > Thanks and Regards
> > Arshiya Shariff
> >
> > -----Original Message-----
> > From: Martin Grigorov <mg...@apache.org>
> > Sent: Monday, September 28, 2020 11:44 PM
> > To: Tomcat Users List <us...@tomcat.apache.org>
> > Subject: Re: HTTP2: memory filled up fast on increasing the connections
> to 1000/2000 (Embedded tomcat 9.0.38)
> >
> > Hi Arshiya,
> >
> >
> > On Mon, Sep 28, 2020 at 7:59 PM Arshiya Shariff <
> arshiya.shariff@ericsson.com.invalid> wrote:
> >
> >> Hi All,
> >> With 200 threads(users) , ramp up duration of 2 seconds , loop count
> >> 80 and by sending 1000 http2 requests/sec from JMeter Client to an
> >> embedded tomcat application we did not observe any memory issue , but
> >> on sending
> >> 1000 http2 requests/sec with 2000 or 1000 users from JMeter , the
> >> application's heap space of 20 GB is occupied in 2 minutes and after 2
> >> full GCs the memory clears and comes down to 4GB (expected) .
> >>
> >
> > I am not sure whether you follow the other discussions at users@.
> > In another email thread we discuss load testing Tomcat HTTP2 and we are
> able to make around 12K reqs/s with another load testing tool -
> https://github.com/tsenart/vegeta
> > For me JMeter itself failed with OOM when increasing the number of the
> virtual users above 2K.
> > There are several improvements in Tomcat master and 9.0.x in the HTTP2
> area. Some of the changes are not yet backported to 9.0.x. We still test
> them, trying to avoid introducing regressions in 9.0.x.
> >
> >
> >>
> >> Embedded tomcat Version:9.0.38
> >> Max Threads : 200
> >>
> >
> > The number of threads should be less if you do only CPU calculations
> without IO/network. If your app blocks on IO/network calls then you need
> more spare threads.
> > With more threads there will be more context switches and less
> throughput.
> > That's why there is no one golden rule that applies to all applications.
> > 200 is a good default that works for most of the applications. But you
> need to test with different values to see which one gives the best
> performance for your scenario.
> >
> >
> >> All other properties are the tomcat defaults.
> >>
> >> Why is tomcat not able to process many connections ?
> >>
> >
> > You can tell us by enabling -XX:+HeapDumpOnOutOfMemoryError and
> -XX:HeapDumpPath=<file-or-dir-path>. Once you have the .hprof file you can
> examine it with Eclipse Memory Analyzer tool and see what is leaking.
> > I will try to reproduce this issue tomorrow with Vegeta.
> >
> >
> >> Why is the memory filled when the connections are increased, are there
> >> any parameters to tune connections ?
> >> Please let us know.
> >>
> >> Thanks and Regards
> >> Arshiya Shariff
> >>
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> > For additional commands, e-mail: users-help@tomcat.apache.org
> >
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
>
>

Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Mark Thomas <ma...@apache.org>.
On 30/09/2020 06:42, Arshiya Shariff wrote:
> Hi Martin , 
> 
> Thank you for the response. 
> 
> With a payload of 200 bytes we were able to send 20K requests/sec with 200 users from Jmeter without any memory issue . On increasing the payload to 5Kb and the number of users to 1000 in Jmeter and sending 1000 requests per second , the heap of 20GB got filled in 2 minutes . With 200 users the memory is cleared in the G1 mixed GC itself , but with 1000 users the memory is not cleared in the mixed GC , it takes full GCs of 7 to 10 seconds to clear the memory. These cases were executed with maxThreads 200 in tomcat , so we tried increasing the maxThreads from 200 to 1000, but still GC was struggling .
> 
> When we tried with 10 instances of JMeter , each with 100 users , where each instance was started with a delay of 1 minute we were able to see 1000 connections created in tomcat without any memory issues. But when 1000 users are created using single instance of JMeter in 20 seconds , tomcat's memory is filling fast- 20GB in 2 minutes. 
> We suspect that the burst of connections being opened has a problem . Please help us with the same .
> 
> On analyzing the heap dump we see org.apache.tomcat.util.collections.SynchronizedStack occupying around 93% of 3GB live data ,the remaining 17GB is Garbage collected in the heap dump.

You can't have high throughput, low GC pauses and small heap sizes.
Broadly you can have any two of those three at the expense of the third.

The way Tomcat currently retains information about completed h2 streams
means you are likely to need a large heap under heavy load. There are
some changes already in 10.0.x that I plan to back-port to 9.0.x and
8.5.x later today that should significantly reduce the heap requirements.

Mark


> 
> Thanks and Regards
> Arshiya Shariff
> 
> -----Original Message-----
> From: Martin Grigorov <mg...@apache.org> 
> Sent: Monday, September 28, 2020 11:44 PM
> To: Tomcat Users List <us...@tomcat.apache.org>
> Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> Hi Arshiya,
> 
> 
> On Mon, Sep 28, 2020 at 7:59 PM Arshiya Shariff <ar...@ericsson.com.invalid> wrote:
> 
>> Hi All,
>> With 200 threads(users) , ramp up duration of 2 seconds , loop count 
>> 80 and by sending 1000 http2 requests/sec from JMeter Client to an 
>> embedded tomcat application we did not observe any memory issue , but 
>> on sending
>> 1000 http2 requests/sec with 2000 or 1000 users from JMeter , the 
>> application's heap space of 20 GB is occupied in 2 minutes and after 2 
>> full GCs the memory clears and comes down to 4GB (expected) .
>>
> 
> I am not sure whether you follow the other discussions at users@.
> In another email thread we discuss load testing Tomcat HTTP2 and we are able to make around 12K reqs/s with another load testing tool - https://github.com/tsenart/vegeta
> For me JMeter itself failed with OOM when increasing the number of the virtual users above 2K.
> There are several improvements in Tomcat master and 9.0.x in the HTTP2 area. Some of the changes are not yet backported to 9.0.x. We still test them, trying to avoid introducing regressions in 9.0.x.
> 
> 
>>
>> Embedded tomcat Version:9.0.38
>> Max Threads : 200
>>
> 
> The number of threads should be less if you do only CPU calculations without IO/network. If your app blocks on IO/network calls then you need more spare threads.
> With more threads there will be more context switches and less throughput.
> That's why there is no one golden rule that applies to all applications.
> 200 is a good default that works for most of the applications. But you need to test with different values to see which one gives the best performance for your scenario.
> 
> 
>> All other properties are the tomcat defaults.
>>
>> Why is tomcat not able to process many connections ?
>>
> 
> You can tell us by enabling -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=<file-or-dir-path>. Once you have the .hprof file you can examine it with Eclipse Memory Analyzer tool and see what is leaking.
> I will try to reproduce this issue tomorrow with Vegeta.
> 
> 
>> Why is the memory filled when the connections are increased, are there 
>> any parameters to tune connections ?
>> Please let us know.
>>
>> Thanks and Regards
>> Arshiya Shariff
>>
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


RE: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Arshiya Shariff <ar...@ericsson.com.INVALID>.
Hi Martin,

Thank you for the response. 

With a payload of 200 bytes we were able to send 20K requests/sec with 200 users from JMeter without any memory issue. On increasing the payload to 5 KB and the number of users to 1000 in JMeter, and sending 1000 requests per second, the heap of 20 GB filled in 2 minutes. With 200 users the memory is cleared in the G1 mixed GC itself, but with 1000 users the memory is not cleared in the mixed GC; it takes full GCs of 7 to 10 seconds to clear the memory. These cases were executed with maxThreads 200 in Tomcat, so we tried increasing maxThreads from 200 to 1000, but the GC still struggled.

When we tried with 10 instances of JMeter, each with 100 users, where each instance was started with a delay of 1 minute, we saw 1000 connections created in Tomcat without any memory issues. But when 1000 users are created using a single instance of JMeter in 20 seconds, Tomcat's memory fills fast: 20 GB in 2 minutes.
We suspect that the burst of connections being opened is the problem. Please help us with the same.

On analyzing the heap dump we see org.apache.tomcat.util.collections.SynchronizedStack occupying around 93% of the 3 GB of live data; the remaining 17 GB is garbage collected in the heap dump.

Thanks and Regards
Arshiya Shariff

-----Original Message-----
From: Martin Grigorov <mg...@apache.org> 
Sent: Monday, September 28, 2020 11:44 PM
To: Tomcat Users List <us...@tomcat.apache.org>
Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Hi Arshiya,


On Mon, Sep 28, 2020 at 7:59 PM Arshiya Shariff <ar...@ericsson.com.invalid> wrote:

> Hi All,
> With 200 threads(users) , ramp up duration of 2 seconds , loop count 
> 80 and by sending 1000 http2 requests/sec from JMeter Client to an 
> embedded tomcat application we did not observe any memory issue , but 
> on sending
> 1000 http2 requests/sec with 2000 or 1000 users from JMeter , the 
> application's heap space of 20 GB is occupied in 2 minutes and after 2 
> full GCs the memory clears and comes down to 4GB (expected) .
>

I am not sure whether you follow the other discussions at users@.
In another email thread we discuss load testing Tomcat HTTP2 and we are able to make around 12K reqs/s with another load testing tool - https://github.com/tsenart/vegeta
For me JMeter itself failed with OOM when increasing the number of the virtual users above 2K.
There are several improvements in Tomcat master and 9.0.x in the HTTP2 area. Some of the changes are not yet backported to 9.0.x. We still test them, trying to avoid introducing regressions in 9.0.x.


>
> Embedded tomcat Version:9.0.38
> Max Threads : 200
>

The number of threads should be less if you do only CPU calculations without IO/network. If your app blocks on IO/network calls then you need more spare threads.
With more threads there will be more context switches and less throughput.
That's why there is no one golden rule that applies to all applications.
200 is a good default that works for most of the applications. But you need to test with different values to see which one gives the best performance for your scenario.


> All other properties are the tomcat defaults.
>
> Why is tomcat not able to process many connections ?
>

You can tell us by enabling -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=<file-or-dir-path>. Once you have the .hprof file you can examine it with Eclipse Memory Analyzer tool and see what is leaking.
I will try to reproduce this issue tomorrow with Vegeta.


> Why is the memory filled when the connections are increased, are there 
> any parameters to tune connections ?
> Please let us know.
>
> Thanks and Regards
> Arshiya Shariff
>

Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Martin Grigorov <mg...@apache.org>.
Hi Arshiya,


On Mon, Sep 28, 2020 at 7:59 PM Arshiya Shariff
<ar...@ericsson.com.invalid> wrote:

> Hi All,
> With 200 threads(users) , ramp up duration of 2 seconds , loop count 80
> and by sending 1000 http2 requests/sec from JMeter Client to an embedded
> tomcat application we did not observe any memory issue , but on sending
> 1000 http2 requests/sec with 2000 or 1000 users from JMeter , the
> application's heap space of 20 GB is occupied in 2 minutes and after 2 full
> GCs the memory clears and comes down to 4GB (expected) .
>

I am not sure whether you follow the other discussions at users@.
In another email thread we discuss load testing Tomcat HTTP2 and we are
able to make around 12K reqs/s with another load testing tool -
https://github.com/tsenart/vegeta
For me JMeter itself failed with OOM when increasing the number of the
virtual users above 2K.
There are several improvements in Tomcat master and 9.0.x in the HTTP2
area. Some of the changes are not yet backported to 9.0.x. We still test
them, trying to avoid introducing regressions in 9.0.x.


>
> Embedded tomcat Version:9.0.38
> Max Threads : 200
>

The number of threads should be less if you do only CPU calculations
without IO/network. If your app blocks on IO/network calls then you need
more spare threads.
With more threads there will be more context switches and less throughput.
That's why there is no one golden rule that applies to all applications.
200 is a good default that works for most of the applications. But you need
to test with different values to see which one gives the best performance
for your scenario.
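
For embedded Tomcat that experiment is only a couple of lines per run; a sketch (400 is an arbitrary example value, not a recommendation):

import org.apache.catalina.connector.Connector;
import org.apache.coyote.AbstractProtocol;

public final class ThreadPoolTuning {

    // Sketch: size the connector's worker pool; must run before tomcat.start().
    static void setMaxThreads(Connector connector, int maxThreads) {
        ((AbstractProtocol<?>) connector.getProtocolHandler()).setMaxThreads(maxThreads);
    }
}

e.g. ThreadPoolTuning.setMaxThreads(tomcat.getConnector(), 400);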


> All other properties are the tomcat defaults.
>
> Why is tomcat not able to process many connections ?
>

You can tell us by enabling -XX:+HeapDumpOnOutOfMemoryError and
-XX:HeapDumpPath=<file-or-dir-path>. Once you have the .hprof file you can
examine it with Eclipse Memory Analyzer tool and see what is leaking.
I will try to reproduce this issue tomorrow with Vegeta.
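
Note that since the heap here recovers after a full GC rather than hitting an OutOfMemoryError, the flag above may never fire; a dump can also be taken on demand while the heap is full, either with "jcmd <pid> GC.heap_dump <file>" or programmatically via the HotSpot diagnostic MXBean - a sketch:

import java.io.IOException;
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public final class HeapDumper {

    // Writes an .hprof snapshot of the current heap. live=true dumps only
    // objects that are still reachable; pass false to capture garbage as well,
    // which is what you want when investigating a "filled but collectable" heap.
    public static void dump(String path, boolean live) throws IOException {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, live); // path should end with .hprof on recent JDKs
    }
}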


> Why is the memory filled when the connections are increased, are there any
> parameters to tune connections ?
> Please let us know.
>
> Thanks and Regards
> Arshiya Shariff
>

Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Mark Thomas <ma...@apache.org>.
On 30/09/2020 09:32, Arshiya Shariff wrote:
> Thank you for the response Mark ,
> 
>>> Are you able to test with a custom Tomcat build and/or build Tomcat 9 from source for testing?
> Yes Mark , we will be able to test with the jars built from Tomcat 9 source for testing .

The reduced memory footprint changes have now been back-ported to 9.0.x.
Please let us know how you get on with your testing.

Mark


> 
> Thanks and Regards
> Arshiya Shariff
> 
> -----Original Message-----
> From: Mark Thomas <ma...@apache.org> 
> Sent: Tuesday, September 29, 2020 12:25 AM
> To: users@tomcat.apache.org
> Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)
> 
> On 28/09/2020 17:58, Arshiya Shariff wrote:
>> Hi All,
>> With 200 threads(users) , ramp up duration of 2 seconds , loop count 80 and by sending 1000 http2 requests/sec from JMeter Client to an embedded tomcat application we did not observe any memory issue , but on sending 1000 http2 requests/sec with 2000 or 1000 users from JMeter , the application's heap space of 20 GB is occupied in 2 minutes and after 2 full GCs the memory clears and comes down to 4GB (expected) .
>>
>> Embedded tomcat Version:9.0.38
>> Max Threads : 200
>> All other properties are the tomcat defaults.
>>
>> Why is tomcat not able to process many connections ?
> 
> You haven't provided any evidence that Tomcat isn't able to process "many" connections.
> 
>> Why is the memory filled when the connections are increased, are there any parameters to tune connections ?
> 
> It looks like users == HTTP/2 Connection. Connections are required to maintain state for closed streams for both prioritisation and for error handling. More connections == more state == more memory.
> 
> Given the number of connections increased by a factor of between 12.5 and 25, that the memory usage only increased by a factor of 5 looks to be a positive result rather than an issue.
> 
> There are significant improvements to memory usage in this area in Tomcat 10.0.x that will get back-ported to 9.0.x but more testing is required.
> 
> Are you able to test with a custom Tomcat build and/or build Tomcat 9 from source for testing?
> 
> Mark
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
> For additional commands, e-mail: users-help@tomcat.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


RE: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Arshiya Shariff <ar...@ericsson.com.INVALID>.
Thank you for the response, Mark.

>> Are you able to test with a custom Tomcat build and/or build Tomcat 9 from source for testing?
Yes Mark, we will be able to test with the jars built from the Tomcat 9 source.

Thanks and Regards
Arshiya Shariff

-----Original Message-----
From: Mark Thomas <ma...@apache.org> 
Sent: Tuesday, September 29, 2020 12:25 AM
To: users@tomcat.apache.org
Subject: Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

On 28/09/2020 17:58, Arshiya Shariff wrote:
> Hi All,
> With 200 threads(users) , ramp up duration of 2 seconds , loop count 80 and by sending 1000 http2 requests/sec from JMeter Client to an embedded tomcat application we did not observe any memory issue , but on sending 1000 http2 requests/sec with 2000 or 1000 users from JMeter , the application's heap space of 20 GB is occupied in 2 minutes and after 2 full GCs the memory clears and comes down to 4GB (expected) .
> 
> Embedded tomcat Version:9.0.38
> Max Threads : 200
> All other properties are the tomcat defaults.
> 
> Why is tomcat not able to process many connections ?

You haven't provided any evidence that Tomcat isn't able to process "many" connections.

> Why is the memory filled when the connections are increased, are there any parameters to tune connections ?

It looks like users == HTTP/2 Connection. Connections are required to maintain state for closed streams for both prioritisation and for error handling. More connections == more state == more memory.

Given the number of connections increased by a factor of between 12.5 and 25, that the memory usage only increased by a factor of 5 looks to be a positive result rather than an issue.

There are significant improvements to memory usage in this area in Tomcat 10.0.x that will get back-ported to 9.0.x but more testing is required.

Are you able to test with a custom Tomcat build and/or build Tomcat 9 from source for testing?

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: HTTP2: memory filled up fast on increasing the connections to 1000/2000 (Embedded tomcat 9.0.38)

Posted by Mark Thomas <ma...@apache.org>.
On 28/09/2020 17:58, Arshiya Shariff wrote:
> Hi All,
> With 200 threads(users) , ramp up duration of 2 seconds , loop count 80 and by sending 1000 http2 requests/sec from JMeter Client to an embedded tomcat application we did not observe any memory issue , but on sending 1000 http2 requests/sec with 2000 or 1000 users from JMeter , the application's heap space of 20 GB is occupied in 2 minutes and after 2 full GCs the memory clears and comes down to 4GB (expected) .
> 
> Embedded tomcat Version:9.0.38
> Max Threads : 200
> All other properties are the tomcat defaults.
> 
> Why is tomcat not able to process many connections ?

You haven't provided any evidence that Tomcat isn't able to process
"many" connections.

> Why is the memory filled when the connections are increased, are there any parameters to tune connections ?

It looks like users == HTTP/2 Connection. Connections are required to
maintain state for closed streams for both prioritisation and for error
handling. More connections == more state == more memory.

Given the number of connections increased by a factor of between 12.5
and 25, that the memory usage only increased by a factor of 5 looks to
be a positive result rather than an issue.

There are significant improvements to memory usage in this area in
Tomcat 10.0.x that will get back-ported to 9.0.x but more testing is
required.

Are you able to test with a custom Tomcat build and/or build Tomcat 9
from source for testing?

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org