Posted to issues@activemq.apache.org by "Francesco Nigro (Jira)" <ji...@apache.org> on 2019/09/10 17:22:00 UTC

[jira] [Updated] (ARTEMIS-2482) Large messages could leak native ByteBuffers

     [ https://issues.apache.org/jira/browse/ARTEMIS-2482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Francesco Nigro updated ARTEMIS-2482:
-------------------------------------
    Description: 
JournalStorageManager::addBytesToLargeMessage and LargeServerMessageImpl::DecodingContext::encode rely on the direct ByteBuffer pooling performed internally by NIO.

Those buffers are pooled up to a certain size limit (i.e. jdk.nio.maxCachedBufferSize, as described in https://bugs.openjdk.java.net/browse/JDK-8147468); anything larger is freed right after the write succeeds.

If the property jdk.nio.maxCachedBufferSize isn't set, direct buffers are always pooled regardless of their size, leading to OOM issues under a high load of variable-sized writes, because the allocated direct memory is released late or not at all.
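
For illustration, a minimal standalone reproducer of this behaviour might look like the sketch below. This is not Artemis code: the file name and sizes are made up, and the cached-buffer behaviour it triggers is the NIO internal described in JDK-8147468.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.ThreadLocalRandom;

// Writing a heap ByteBuffer through a FileChannel makes NIO copy it into a
// temporary direct ByteBuffer that is cached per thread; without
// -Djdk.nio.maxCachedBufferSize those cached buffers can be arbitrarily large.
public final class DirectBufferLeakSketch {

    public static void main(String[] args) throws IOException {
        try (FileChannel channel = FileChannel.open(
                Paths.get("large-message.tmp"),
                StandardOpenOption.CREATE,
                StandardOpenOption.WRITE)) {
            for (int i = 0; i < 1_000; i++) {
                // variable-sized heap buffer, up to 8 MiB
                int size = ThreadLocalRandom.current().nextInt(1, 8 * 1024 * 1024);
                ByteBuffer heap = ByteBuffer.allocate(size);
                // NIO copies 'heap' into a cached per-thread direct buffer
                // before the actual write, and the cache retains large buffers
                channel.write(heap);
            }
            // with many writer threads the per-thread caches add up and can
            // exhaust direct memory (observable via NativeMemoryTracking)
        }
    }
}
{code}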

The proposed solutions are:

- perform ad hoc direct ByteBuffer caching on the write path, relying on the read lock already held there for thread safety (see the first sketch below)
- replace the NIO SequentialFile usage with RandomAccessFile, which provides the right API to append a byte[] without creating additional native copies (see the second sketch below)
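
A minimal sketch of the first option follows. The class and method names are hypothetical, not the actual Artemis API, and the read lock mentioned above is assumed to guard every call.

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Illustrative sketch only: a single direct buffer is cached and reused for
// appends; the caller is assumed to hold the storage manager's read lock,
// which is what makes the unsynchronized reuse of 'cached' safe.
final class CachedDirectBufferAppender {

    private ByteBuffer cached = ByteBuffer.allocateDirect(64 * 1024);

    void append(FileChannel channel, byte[] bytes) throws IOException {
        if (cached.capacity() < bytes.length) {
            // grow to the new high-water mark; the old buffer's native
            // memory is reclaimed when the GC collects it
            cached = ByteBuffer.allocateDirect(bytes.length);
        }
        cached.clear();
        cached.put(bytes);
        cached.flip();
        long position = channel.size();
        while (cached.hasRemaining()) {
            // writing a direct buffer bypasses NIO's internal
            // copy-into-a-cached-temporary-buffer step entirely
            position += channel.write(cached, position);
        }
    }
}
{code}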
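
And a minimal sketch of the second option, again with hypothetical names rather than the actual Artemis change:

{code:java}
import java.io.IOException;
import java.io.RandomAccessFile;

// Illustrative sketch only: RandomAccessFile accepts a byte[] directly, so no
// temporary direct ByteBuffer is allocated or cached by NIO on the write path.
final class RandomAccessFileAppender {

    static void append(RandomAccessFile file, byte[] bytes) throws IOException {
        file.seek(file.length());           // position at the end of the file
        file.write(bytes, 0, bytes.length); // append straight from the heap array
    }

    public static void main(String[] args) throws IOException {
        try (RandomAccessFile file = new RandomAccessFile("large-message.tmp", "rw")) {
            append(file, new byte[8 * 1024 * 1024]);
        }
    }
}
{code}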

> Large messages could leak native ByteBuffers
> --------------------------------------------
>
>                 Key: ARTEMIS-2482
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-2482
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>          Components: AMQP, Broker, OpenWire
>    Affects Versions: 2.10.0
>            Reporter: Francesco Nigro
>            Priority: Major



--
This message was sent by Atlassian Jira
(v8.3.2#803003)