Posted to issues@activemq.apache.org by "Clebert Suconic (Jira)" <ji...@apache.org> on 2022/06/08 03:08:00 UTC

[jira] [Commented] (ARTEMIS-3848) High cpu usage on ReadWrite locks

    [ https://issues.apache.org/jira/browse/ARTEMIS-3848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17551367#comment-17551367 ] 

Clebert Suconic commented on ARTEMIS-3848:
------------------------------------------

Some information on how to reproduce this issue, in case anyone intends to do it:

With the old version of the broker, create a broker (no need to auto-tune, the test will not send messages):

./artemis create my-broker --no-autotune

Start the broker:

cd my-broker/bin
./artemis run

Clone my github sandbox project (I just created it for this), build it, and run the stress test:

git clone https://github.com/clebertsuconic/sandbox
cd sandbox/session-stress
./build.sh
java -jar target/session-stress-0.2.SNAPSHOT-jar-with-dependencies.jar
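
If you cannot run the sandbox project, the kind of load it generates can be approximated with a plain JMS client that just opens and closes connections/sessions without ever sending a message. This is a rough sketch, not the actual sandbox code; the broker URL, class name and iteration count are placeholders:

import javax.jms.Connection;
import javax.jms.Session;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SessionChurn {

   public static void main(String[] args) throws Exception {
      // Default Artemis acceptor; adjust if the broker was created on a different port.
      ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");

      // Open and close connections/sessions in a loop; nothing is sent.
      // The churn alone is what grows the ThreadLocal maps on the broker threads.
      for (int i = 0; i < 100_000; i++) {
         try (Connection connection = factory.createConnection();
              Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)) {
            connection.start();
         }
      }
   }
}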

Find the broker process id with jps and take a heap dump with jmap:

jps
jmap -dump:format=b,file=dump.hprof <process-id>

and inspect the generated dump.hprof using the Eclipse Memory Analyzer tool (MAT).

Run the following OQL query:

select * from org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1

and check the retained heap sizes. For the large objects, inspect the threadLocals field on each one of these threads.

With the fix in place you should have just a handful of entries.

> High cpu usage on ReadWrite locks
> ---------------------------------
>
>                 Key: ARTEMIS-3848
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-3848
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>    Affects Versions: 2.22.0
>            Reporter: Clebert Suconic
>            Assignee: Clebert Suconic
>            Priority: Major
>             Fix For: 2.23.0
>
>          Time Spent: 2h
>  Remaining Estimate: 0h
>
> Our HandlerBase sets a boolean flag in a ThreadLocal to indicate whether it is inside the handler.
> I thought that by setting it to null I was clearing the ThreadLocal variable; however, that just creates a null entry.
> What makes it worse is that the ThreadLocal was non-static. As a result we get a lot of entries in the thread's ThreadLocalMap, and as connections come and go that generates a leak on the thread (see the sketch after the stack trace below).
> That causes threads with the following stack trace to consume a lot of CPU:
>         at java.lang.ThreadLocal$ThreadLocalMap.expungeStaleEntry(ThreadLocal.java:638)
>         at java.lang.ThreadLocal$ThreadLocalMap.remove(ThreadLocal.java:520)
>         at java.lang.ThreadLocal$ThreadLocalMap.access$200(ThreadLocal.java:319)
>         at java.lang.ThreadLocal.remove(ThreadLocal.java:242)
>         at java.util.concurrent.locks.ReentrantReadWriteLock$Sync.tryReleaseShared(ReentrantReadWriteLock.java:426)
>         at java.util.concurrent.locks.AbstractQueuedSynchronizer.releaseShared(AbstractQueuedSynchronizer.java:1341)
>         at java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.unlock(ReentrantReadWriteLock.java:881)
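
For context (my summary, not part of the issue text above): ReentrantReadWriteLock keeps its own ThreadLocal to track per-thread read-hold counts and calls ThreadLocal.remove() on unlock; when a thread's ThreadLocalMap is already bloated with stale entries from the leak described above, that remove() ends up in expungeStaleEntry scanning the whole map, which is where the CPU goes.

A minimal sketch of the leaking pattern and of one way to avoid it (class and field names are illustrative, not the actual HandlerBase code):

public class HandlerExample {

   // Leaky variant: a non-static ThreadLocal, one per instance. Every instance
   // that runs on a thread leaves its own entry in that thread's ThreadLocalMap,
   // and set(null) does NOT remove the entry, it just stores a null value.
   private final ThreadLocal<Boolean> inHandlerLeaky = new ThreadLocal<>();

   void leakyEnter() {
      inHandlerLeaky.set(Boolean.TRUE);
   }

   void leakyLeave() {
      inHandlerLeaky.set(null); // the entry stays behind; as instances come and go, the map grows
   }

   // Alternative: one static ThreadLocal shared by all instances, cleared with
   // remove() so the entry really disappears from the map.
   private static final ThreadLocal<Boolean> IN_HANDLER = new ThreadLocal<>();

   void enter() {
      IN_HANDLER.set(Boolean.TRUE);
   }

   void leave() {
      IN_HANDLER.remove();
   }
}

Clearing the entry with remove() (and sharing a single static ThreadLocal) keeps the broker threads' ThreadLocalMaps small, which is what the OQL check above should show once the actual fix is in.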


