Posted to users@tomcat.apache.org by Olivier Jaquemet <ol...@jalios.com> on 2019/04/23 09:58:13 UTC

OutOfMemory on large file download with AJP and cachingAllowed=false

Hi all,

We were able to reproduce an OutOfMemoryError when using AJP and the 
Resources cachingAllowed=false directive.
It looks like a bug in the AJP connector(s), as it does not occur with 
the other HTTP connectors.

Could you confirm that the behavior described below is indeed a bug? (If 
you do, I'll create the issue on Bugzilla.)

To reproduce:

  * Use latest tomcat 8.5 version (tested with Tomcat 8.5.40)
  * Add an AJP connector to server.xml
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
  * Add the following directive to context.xml :
    <Resources cachingAllowed="false" />
  * Create a large file in the samples webapp, for example :
    cd webapps/examples
    dd if=/dev/zero of=large.txt bs=1k count=2000000
  * Start Tomcat with a 1024 MB heap size (JAVA_OPTS="-Xms1024m -Xmx1024m")
  * Configure Apache HTTPD to use mod_proxy_ajp, or mod_jk (both will
    have the same issue) [1]
  * Start Apache HTTPD
  * Download file through default HTTP connector
    http://localhost:8080/examples/large.txt
    --> OK
  * Download file through Apache/AJP http://localhost/examples/large.txt
    --> BUG : OutOfMemory error occurs
    Exception in thread "ajp-nio-8009-exec-10"
    java.lang.OutOfMemoryError: Java heap space
             at org.apache.catalina.webresources.FileResource.getContent(FileResource.java:207)
             at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:992)
             at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:438)
             at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)
             ...

Additional information:

  * The bug could be reproduced on both Linux and Windows.
  * Tested with the latest OpenJDK 8.

Regards,

Olivier


[1] The HTTPD configuration:

  * if using mod_proxy_ajp (sudo a2enmod proxy proxy_ajp)

    ProxyPass /  ajp://127.0.0.1:8009/

  * if using mod_jk : (sudo a2enmod jk)

    JkWorkerProperty worker.list=tomcat
    JkWorkerProperty worker.tomcat.type=ajp13
    JkWorkerProperty worker.tomcat.host=localhost
    JkWorkerProperty worker.tomcat.port=8009

    JkMount / tomcat
    JkMount /* tomcat




Re: OutOfMemory on large file download with AJP and cachingAllowed=false

Posted by Mark Thomas <ma...@apache.org>.
On 26/04/2019 21:07, Olivier Jaquemet wrote:

<snip/>

> PS : completely unrelated to this matter, I just found out that the page
> https://tomcat.apache.org/svn.html contains outdated information and
> should probably be removed as it has been replaced with
> https://tomcat.apache.org/source.html

There is meant to be a rewrite rule in place for that. I'll see if I can
figure out why it isn't working.

Mark

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tomcat.apache.org
For additional commands, e-mail: users-help@tomcat.apache.org


Re: OutOfMemory on large file download with AJP and cachingAllowed=false

Posted by Olivier Jaquemet <ol...@jalios.com>.
On 26/04/2019 09:56, Mark Thomas wrote:

> There was an extra copy but Chris's suggestion got me thinking and I
> found a much better solution.
>
> The patch has been applied to 9.0.x and 8.5.x and will be in the next
> release of both. 7.0.x is not affected.
>
> The patch fixes the OutOfMemoryError and the ArrayIndexOutOfBoundsException.
>
> Mark

Thank you Mark and Christopher for your work on this. As always, you rock.

For the record, if anyone is looking for the corresponding issue/bug, 
none was entered (quite unfortunate from my point of view regarding 
defect tracking), but the corresponding commits are here:

  * master (9.x)
    https://github.com/apache/tomcat/commit/a8f1e96a456d8493a8e64dfe743a8ae663b28ce
  * 8.5 :
    https://github.com/apache/tomcat/commit/4ab58e9881ebdc039a657f5f77caf66b673f934b

(and also some commits were added to improve the getContent javadoc)

Thanks again.
Olivier

PS : completely unrelated to this matter, I just found out that the page 
https://tomcat.apache.org/svn.html contains outdated information and 
should probably be removed as it has been replaced with 
https://tomcat.apache.org/source.html


Re: OutOfMemory on large file download with AJP and cachingAllowed=false

Posted by Mark Thomas <ma...@apache.org>.
On 25/04/2019 21:37, Mark Thomas wrote:
> On 25/04/2019 21:16, Christopher Schultz wrote:
>> On 4/25/19 15:55, Mark Thomas wrote:

<snip/>

>> If the resources are caching-aware, then I think the DefaultServlet
>> can just always use Resource.getInputStream.
>>
>> Hmm. That might cause a lot of unnecessary IO if the bytes are
>> actually available.
> 
> That is a very tempting solution. The result is a LOT cleaner than the
> patch I just wrote. CachingResource is smart enough to cache the bytes
> and wrap them in a ByteArrayInputStream if Resource.getInputStream is
> called. My only concern is that I think this introduces an additional copy
> of the data. I need to check that.

There was an extra copy but Chris's suggestion got me thinking and I
found a much better solution.

The patch has been applied to 9.0.x and 8.5.x and will be in the next
release of both. 7.0.x is not affected.

The patch fixes the OutOfMemoryError and the ArrayIndexOutOfBoundsException.

Mark



Re: OutOfMemory on large file download with AJP and cachingAllowed=false

Posted by Mark Thomas <ma...@apache.org>.
On 25/04/2019 21:16, Christopher Schultz wrote:
> Mark,
> 
> On 4/25/19 15:55, Mark Thomas wrote:
>> On 23/04/2019 16:29, Olivier Jaquemet wrote:
>>> On 23/04/2019 16:12, Christopher Schultz wrote:
>>>> On 4/23/19 05:58, Olivier Jaquemet wrote:
> 
>> <snip/>
> 
>>>>> * Add the following directive to context.xml : <Resources 
>>>>> cachingAllowed="false" />
>>>> Okay. Why context.xml, by the way?
>>> I don't even know (yet...) why this setting was added in the
>>> first place in the environment where it was present... ! so why
>>> this file... I don't know either :)
> 
>> DefaultServlet is assuming caching is in place. If you disable it,
>> you hit this issue.
> 
>>>>> * Create a large file in the samples webapp, for example :
>>>>> cd webapps/examples dd if=/dev/zero of=large.txt bs=1k
>>>>> count=2000000
> 
>> <snip/>
> 
>>>> Reading the code for FileResource.getContent, it's clear that
>>>> the entire file is being loaded into memory, which obviously
>>>> isn't going to work, here. I'm wondering why that's happening
>>>> since streaming is the correct behavior when caching=false.
>>>> Also strange is that DefaultServlet will attempt to call
>>>> FileResource.getContent() -- which returns a byte[] -- and, if
>>>> that returns null, it will call FileResource.getInputStream
>>>> which ... calls this.getContent. So this looks like a
>>>> special-case for FileResource just trying to implement that
>>>> interface in the simplest way possible.
> 
>> It is assuming it is working with a CachedResource instance rather
>> than directly with a FileResource instance.
> 
>>>> FileResource seems to implement in-memory caching whether it's
>>>> enabled or not.
>>>>
>>>> I can't understand why this doesn't fail for the other kind of 
>>>> connector. Everything else is the same? You have two separate 
>>>> connectors in one instance, or are you changing the connector
>>>> between tests?
>>>
>>> Everything is exactly the same as I have only one instance with
>>> two separate connectors (AJP+HTTP).
> 
>> I suspect HTTP avoids it because sendfile is enabled.
> 
>> The DefaultServlet logic needs a little refactoring.
> 
> And maybe FileResource, too?
> 
> I wasn't able to follow the logic of whether caching or not caching
> was enabled. I only did cursory checking, but it seemed like none of
> the resources implementations included any caching-aware code at all.
> Was I looking in the wrong place?

Don't think so. When caching is enabled everything gets wrapped in
CachedResource.

> If the resources are caching-aware, then I think the DefaultServlet
> can just always use Resource.getInputStream.
> 
> Hmm. That might cause a lot of unnecessary IO if the bytes are
> actually available.

That is a very tempting solution. The result is a LOT cleaner than the
patch I just wrote. CachingResource is smart enough to cache the bytes
and wrap them in a ByteArrayInputStream if Resource.getInputStream is
called. My only concern is that I think this introduces an additional copy
of the data. I need to check that.
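The CachingResource behaviour described here can be sketched roughly as follows. This is an illustrative toy (the class name CachingResourceSketch and the loader callback are inventions for this sketch, not the real org.apache.catalina.webresources code): the first read materializes the bytes, and getInputStream() then wraps the cached copy instead of touching disk again.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.function.Supplier;

// Toy model of a caching resource; NOT the actual Tomcat implementation.
class CachingResourceSketch {
    private final Supplier<byte[]> loader;  // stands in for the disk read
    private byte[] cached;

    CachingResourceSketch(Supplier<byte[]> loader) { this.loader = loader; }

    byte[] getContent() {
        if (cached == null) {
            cached = loader.get();  // single load, kept on the heap
        }
        return cached;
    }

    InputStream getInputStream() {
        // Serves the cached bytes directly; note this shares the array rather
        // than copying it, which is where an accidental extra copy could creep in.
        return new ByteArrayInputStream(getContent());
    }
}
```

The interesting property is that repeated getInputStream() calls cost no extra IO once the bytes are cached, which is the "smart enough" behaviour mentioned above.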

> Maybe when caching is disabled, we need to wrap resources in an
> UncachedResource object which always returns null from getContent()
> and forces the use of an InputStream?

My instinct is that that would be too much, but I'll keep it in mind in case I
end up in a logic hole that it digs me out of.

Mark



Re: OutOfMemory on large file download with AJP and cachingAllowed=false

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Mark,

On 4/25/19 15:55, Mark Thomas wrote:
> On 23/04/2019 16:29, Olivier Jaquemet wrote:
>> On 23/04/2019 16:12, Christopher Schultz wrote:
>>> On 4/23/19 05:58, Olivier Jaquemet wrote:
> 
> <snip/>
> 
>>>> * Add the following directive to context.xml : <Resources 
>>>> cachingAllowed="false" />
>>> Okay. Why context.xml, by the way?
>> I don't even know (yet...) why this setting was added in the
>> first place in the environment where it was present... ! so why
>> this file... I don't know either :)
> 
> DefaultServlet is assuming caching is in place. If you disable it,
> you hit this issue.
> 
>>>> * Create a large file in the samples webapp, for example :
>>>> cd webapps/examples dd if=/dev/zero of=large.txt bs=1k
>>>> count=2000000
> 
> <snip/>
> 
>>> Reading the code for FileResource.getContent, it's clear that
>>> the entire file is being loaded into memory, which obviously
>>> isn't going to work, here. I'm wondering why that's happening
>>> since streaming is the correct behavior when caching=false.
>>> Also strange is that DefaultServlet will attempt to call
>>> FileResource.getContent() -- which returns a byte[] -- and, if
>>> that returns null, it will call FileResource.getInputStream
>>> which ... calls this.getContent. So this looks like a
>>> special-case for FileResource just trying to implement that
>>> interface in the simplest way possible.
> 
> It is assuming it is working with a CachedResource instance rather
> than directly with a FileResource instance.
> 
>>> FileResource seems to implement in-memory caching whether it's
>>> enabled or not.
>>> 
>>> I can't understand why this doesn't fail for the other kind of 
>>> connector. Everything else is the same? You have two separate 
>>> connectors in one instance, or are you changing the connector
>>> between tests?
>> 
>> Everything is exactly the same as I have only one instance with
>> two separate connectors (AJP+HTTP).
> 
> I suspect HTTP avoids it because sendfile is enabled.
> 
> The DefaultServlet logic needs a little refactoring.

And maybe FileResource, too?

I wasn't able to follow the logic of whether caching or not caching
was enabled. I only did cursory checking, but it seemed like none of
the resources implementations included any caching-aware code at all.
Was I looking in the wrong place?

If the resources are caching-aware, then I think the DefaultServlet
can just always use Resource.getInputStream.

Hmm. That might cause a lot of unnecessary IO if the bytes are
actually available.

Maybe when caching is disabled, we need to wrap resources in an
UncachedResource object which always returns null from getContent()
and forces the use of an InputStream?
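The wrapper suggested here could look something like the following minimal sketch. The SimpleResource interface and class names are hypothetical stand-ins for illustration, not the real Tomcat WebResource API: the wrapper always returns null from getContent(), forcing callers onto the streaming path.

```java
import java.io.IOException;
import java.io.InputStream;

// Hypothetical, simplified resource contract for this sketch only.
interface SimpleResource {
    byte[] getContent();
    InputStream getInputStream() throws IOException;
}

// Delegating wrapper that refuses to hand out the full byte[], so callers
// such as DefaultServlet must use the InputStream instead.
class UncachedResource implements SimpleResource {
    private final SimpleResource delegate;

    UncachedResource(SimpleResource delegate) { this.delegate = delegate; }

    @Override public byte[] getContent() { return null; }

    @Override public InputStream getInputStream() throws IOException {
        return delegate.getInputStream();
    }
}
```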

-chris



Re: OutOfMemory on large file download with AJP and cachingAllowed=false

Posted by Mark Thomas <ma...@apache.org>.
On 23/04/2019 16:29, Olivier Jaquemet wrote:
> On 23/04/2019 16:12, Christopher Schultz wrote:
>> On 4/23/19 05:58, Olivier Jaquemet wrote:

<snip/>

>>> * Add the following directive to context.xml : <Resources
>>> cachingAllowed="false" />
>> Okay. Why context.xml, by the way?
> I don't even know (yet...) why this setting was added in the first place
> in the environment where it was present... !
> so why this file... I don't know either :)

DefaultServlet is assuming caching is in place. If you disable it, you
hit this issue.

>>> * Create a large file in the samples webapp, for example : cd
>>> webapps/examples dd if=/dev/zero of=large.txt bs=1k count=2000000

<snip/>

>> Reading the code for FileResource.getContent, it's clear that the
>> entire file is being loaded into memory, which obviously isn't going
>> to work, here. I'm wondering why that's happening since streaming is
>> the correct behavior when caching=false. Also strange is that
>> DefaultServlet will attempt to call FileResource.getContent() -- which
>> returns a byte[] -- and, if that returns null, it will call
>> FileResource.getInputStream which ... calls this.getContent. So this
>> looks like a special-case for FileResource just trying to implement
>> that interface in the simplest way possible.

It is assuming it is working with a CachedResource instance rather than
directly with a FileResource instance.

>> FileResource seems to implement in-memory caching whether it's enabled
>> or not.
>>
>> I can't understand why this doesn't fail for the other kind of
>> connector. Everything else is the same? You have two separate
>> connectors in one instance, or are you changing the connector between
>> tests?
> 
> Everything is exactly the same as I have only one instance with two
> separate connectors (AJP+HTTP).

I suspect HTTP avoids it because sendfile is enabled.

The DefaultServlet logic needs a little refactoring.

Mark



Re: OutOfMemory on large file download with AJP and cachingAllowed=false

Posted by Olivier Jaquemet <ol...@jalios.com>.
On 23/04/2019 16:12, Christopher Schultz wrote:
> Olivier,
Hi Christopher,
Thanks for your answer.
> On 4/23/19 05:58, Olivier Jaquemet wrote:
>> Hi all,
>>
>> We were able to reproduce a OutOfMemory error when using AJP and
>> the Resources cachingAllowed=false directive. It looks like a bug
>> of AJP connector(s), as it does not occurs with other HTTP
>> connectors.
>>
>> Could you confirm the behavior described below is indeed bug ? (if
>> you do, I'll create the issue on bugzilla)
>>
>> To reproduce :
>>
>> * Use latest tomcat 8.5 version (tested with Tomcat 8.5.40) * Add
>> an AJP connector to server.xml <Connector port="8009"
>> protocol="AJP/1.3" redirectPort="8443" />
> nb: no compression
>
> nb: NIO connector is in use ; no APR (see stack trace for thread name)
>
>> * Add the following directive to context.xml : <Resources
>> cachingAllowed="false" />
> Okay. Why context.xml, by the way?
I don't even know (yet...) why this setting was added in the first place 
in the environment where it was present... !
so why this file... I don't know either :)
>> * Create a large file in the samples webapp, for example : cd
>> webapps/examples dd if=/dev/zero of=large.txt bs=1k count=2000000
> ~2GiB static file
>
>> * Start Tomcat with a 1024 mb heap size (JAVA_OPTS="-Xms1024m
>> -Xmx1024m" * Configure Apache HTTPD to use mod_proxy_ajp, or mod_jk
>> (both will have the same issue) [1] * Start Apache HTTPD * Download
>> file through default HTTP connector
>> http://localhost:8080/examples/large.txt --> OK * Download file
>> through Apache/AJP http://localhost/examples/large.txt --> BUG :
>> OutOfMemory error occurs Exception in thread
>> "ajp-nio-8009-exec-10" java.lang.OutOfMemoryError: Java heap space
>> at org.apache.catalina.webresources.FileResource.getContent(FileResource.java:207)
>> at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:992)
>> at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:438)
>> at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)
>> ...
> That's ... interesting. What if you request the file more than once
> via the default connector? I would have expected these code paths to
> be relatively the same.
There are no problems at all if I request the file many times in parallel 
using the default HTTP connector.
> Can you generate a heap dump when the OOME occurs? Use the following
> JVM startup options:
>
> -XX:+HeapDumpOnOutOfMemoryError
> -XX:HeapDumpPath=/path/to/heap-dump-file
Sure. Here are heap dumps and log files from two different runs:
https://www.dropbox.com/sh/q6iqpe42fxvsvin/AAARR4uOnn-qJ4PCg8e_Ll8La?dl=0
> If you generate a heap dump, are you able to navigate it? I've heard
> that Eclipse MAT is good. I've mostly used YourKit and it's quite
> good, able to pick-out the "big things" and help locate what kind of
> object(s) is/are taking up all the space.

I've used MAT a few times before, and indeed it's a really nice tool!
However, if you are familiar with YourKit, you might find the culprit 
faster than me...
Can you have a look at the heap dumps I shared above?

I did have a look at them with MAT... however I could not reach any 
definitive conclusion...
There is a BufferedWriter of an AccessLogValve which references a large 
char array full of \0 (like the content of the large generated file...).
There are many (leaking?) 
java.util.zip.ZipFile$ZipFileInflaterInputStream instances (however you 
said there was no compression, yet here they are)
...

> Reading the code for FileResource.getContent, it's clear that the
> entire file is being loaded into memory, which obviously isn't going
> to work, here. I'm wondering why that's happening since streaming is
> the correct behavior when caching=false. Also strange is that
> DefaultServlet will attempt to call FileResource.getContent() -- which
> returns a byte[] -- and, if that returns null, it will call
> FileResource.getInputStream which ... calls this.getContent. So this
> looks like a special-case for FileResource just trying to implement
> that interface in the simplest way possible.
>
> FileResource seems to implement in-memory caching whether it's enabled
> or not.
>
> I can't understand why this doesn't fail for the other kind of
> connector. Everything else is the same? You have two separate
> connectors in one instance, or are you changing the connector between
> tests?

Everything is exactly the same as I have only one instance with two 
separate connectors (AJP+HTTP).

One last (confusing) piece of information, in the form of an exception that 
I could *not* reproduce in our test environments, but that I saw on the 
server where the symptom first occurred.
I don't think this is related to the OOM, but it might be another 
symptom of the same resource configuration.

java.lang.ArrayIndexOutOfBoundsException: Unable to return 
[/path/to/file/being/downloaded.ext] as a byte array since the resource 
is [2,637,615,704] bytes in size which is larger than the maximum size 
of a byte array
    at org.apache.catalina.webresources.FileResource.getContent(FileResource.java:196)
    at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:1000)
    at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:438)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
[...]
    at org.apache.catalina.servlets.DefaultServlet.service(DefaultServlet.java:418)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:199)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:496)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342)
    at org.apache.coyote.ajp.AjpProcessor.service(AjpProcessor.java:486)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:790)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1459)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:748)
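As an aside, the ArrayIndexOutOfBoundsException above is structural: Java arrays are int-indexed, so no byte[] can hold more than Integer.MAX_VALUE bytes (real VMs cap arrays a few elements lower still). A quick sanity check, using a hypothetical helper name, makes the limit concrete; note the ~2 GB file from the repro steps (2,000,000 KiB = 2,048,000,000 bytes) squeaks under the limit and so OOMs on heap instead, while the 2,637,615,704-byte file above cannot fit at all.

```java
// Illustrative helper (hypothetical, not part of Tomcat): can a resource of
// this size even be returned as a single byte[]? A byte[] holds at most
// Integer.MAX_VALUE elements, so this check is necessary (though real VMs
// cap arrays a few elements below that, so it is not quite sufficient).
final class ByteArrayLimit {
    static boolean mightFitInByteArray(long resourceSizeBytes) {
        return resourceSizeBytes >= 0 && resourceSizeBytes <= Integer.MAX_VALUE;
    }
}
```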

Olivier



Re: OutOfMemory on large file download with AJP and cachingAllowed=false

Posted by Christopher Schultz <ch...@christopherschultz.net>.

Olivier,

On 4/23/19 05:58, Olivier Jaquemet wrote:
> Hi all,
> 
> We were able to reproduce a OutOfMemory error when using AJP and
> the Resources cachingAllowed=false directive. It looks like a bug
> of AJP connector(s), as it does not occurs with other HTTP
> connectors.
> 
> Could you confirm the behavior described below is indeed bug ? (if
> you do, I'll create the issue on bugzilla)
> 
> To reproduce :
> 
> * Use latest tomcat 8.5 version (tested with Tomcat 8.5.40) * Add
> an AJP connector to server.xml <Connector port="8009"
> protocol="AJP/1.3" redirectPort="8443" />

nb: no compression

nb: NIO connector is in use; no APR (see the stack trace for the thread name)

> * Add the following directive to context.xml : <Resources
> cachingAllowed="false" />

Okay. Why context.xml, by the way?

> * Create a large file in the samples webapp, for example : cd
> webapps/examples dd if=/dev/zero of=large.txt bs=1k count=2000000

~2GiB static file

> * Start Tomcat with a 1024 mb heap size (JAVA_OPTS="-Xms1024m
> -Xmx1024m" * Configure Apache HTTPD to use mod_proxy_ajp, or mod_jk
> (both will have the same issue) [1] * Start Apache HTTPD * Download
> file through default HTTP connector 
> http://localhost:8080/examples/large.txt --> OK * Download file
> through Apache/AJP http://localhost/examples/large.txt --> BUG :
> OutOfMemory error occurs Exception in thread
> "ajp-nio-8009-exec-10" java.lang.OutOfMemoryError: Java heap space
>     at org.apache.catalina.webresources.FileResource.getContent(FileResource.java:207)
>     at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:992)
>     at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:438)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:635)
> ...

That's ... interesting. What if you request the file more than once
via the default connector? I would have expected these code paths to
be relatively the same.

Can you generate a heap dump when the OOME occurs? Use the following
JVM startup options:

-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/path/to/heap-dump-file

If you generate a heap dump, are you able to navigate it? I've heard
that Eclipse MAT is good. I've mostly used YourKit and it's quite
good, able to pick-out the "big things" and help locate what kind of
object(s) is/are taking up all the space.

Reading the code for FileResource.getContent, it's clear that the
entire file is being loaded into memory, which obviously isn't going
to work, here. I'm wondering why that's happening since streaming is
the correct behavior when caching=false. Also strange is that
DefaultServlet will attempt to call FileResource.getContent() -- which
returns a byte[] -- and, if that returns null, it will call
FileResource.getInputStream which ... calls this.getContent. So this
looks like a special-case for FileResource just trying to implement
that interface in the simplest way possible.
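The call pattern described here can be sketched with a minimal model. Everything below is hypothetical (the Resource interface, DefaultServletSketch, and serve are simplified stand-ins, not the real Tomcat classes): the servlet prefers the in-memory byte[] and only streams when getContent() returns null, so a resource that always materializes the whole file forces the buffered branch.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical, simplified model of the resource contract; NOT Tomcat's API.
interface Resource {
    byte[] getContent();                       // whole file as a byte[], or null
    InputStream getInputStream() throws IOException;
}

class DefaultServletSketch {
    // Mirrors the problematic pattern: try the in-memory byte[] first, and
    // fall back to streaming only when getContent() returns null.
    static String serve(Resource resource, OutputStream out) throws IOException {
        byte[] content = resource.getContent();
        if (content != null) {
            out.write(content);                // entire file on the heap at once
            return "buffered";
        }
        try (InputStream in = resource.getInputStream()) {
            byte[] buf = new byte[8192];       // bounded memory, any file size
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        return "streamed";
    }
}
```

Under this model, a FileResource-like implementation whose getContent() eagerly reads the file can never reach the streaming branch, which matches the OOME seen with cachingAllowed=false.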

FileResource seems to implement in-memory caching whether it's enabled
or not.

I can't understand why this doesn't fail for the other kind of
connector. Everything else is the same? You have two separate
connectors in one instance, or are you changing the connector between
tests?

-chris
