Posted to dev@activemq.apache.org by "Timothy Bish (JIRA)" <ji...@apache.org> on 2014/06/19 15:57:24 UTC

[jira] [Commented] (AMQ-5235) erroneous temp percent used

    [ https://issues.apache.org/jira/browse/AMQ-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037338#comment-14037338 ] 

Timothy Bish commented on AMQ-5235:
-----------------------------------

I'd advise testing on 5.10 given the number of fixes that have gone into LevelDB since 5.9.0

> erroneous temp percent used
> ---------------------------
>
>                 Key: AMQ-5235
>                 URL: https://issues.apache.org/jira/browse/AMQ-5235
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: activemq-leveldb-store
>    Affects Versions: 5.9.0
>         Environment: debian (quality testing and production)
>            Reporter: anselme dewavrin
>
> Dear all,
> We have an ActiveMQ 5.9 broker configured with 1 GB of tempUsage allowed, purely as a precaution, since we only use persistent messages (about 6000 messages per day). After several days of use, the temp usage increases, and it even shows values above the total amount of data on disk. Here it shows 45% of its 1 GB limit for the following files (our tempUsage configuration is sketched after the listing):
> find activemq-data -ls
> 76809801    4 drwxr-xr-x   5 anselme  anselme      4096 Jun 19 10:24 activemq-data
> 76809813    4 -rw-r--r--   1 anselme  anselme        24 Jun 16 16:13 activemq-data/store-version.txt
> 76809817    4 drwxr-xr-x   2 anselme  anselme      4096 Jun 16 16:13 activemq-data/dirty.index
> 76809811    4 -rw-r--r--   2 anselme  anselme      2437 Jun 16 12:06 activemq-data/dirty.index/000008.sst
> 76809820    4 -rw-r--r--   1 anselme  anselme        16 Jun 16 16:13 activemq-data/dirty.index/CURRENT
> 76809819   80 -rw-r--r--   1 anselme  anselme     80313 Jun 16 16:13 activemq-data/dirty.index/000011.sst
> 76809822    0 -rw-r--r--   1 anselme  anselme         0 Jun 16 16:13 activemq-data/dirty.index/LOCK
> 76809810  300 -rw-r--r--   2 anselme  anselme    305206 Jun 16 11:51 activemq-data/dirty.index/000005.sst
> 76809821 2048 -rw-r--r--   1 anselme  anselme   2097152 Jun 19 11:30 activemq-data/dirty.index/000012.log
> 76809818 1024 -rw-r--r--   1 anselme  anselme   1048576 Jun 16 16:13 activemq-data/dirty.index/MANIFEST-000010
> 76809816    0 -rw-r--r--   1 anselme  anselme         0 Jun 16 16:13 activemq-data/lock
> 76809815 102400 -rw-r--r--   1 anselme  anselme  104857600 Jun 19 11:30 activemq-data/0000000000f0faaf.log
> 76809823 102400 -rw-r--r--   1 anselme  anselme  104857600 Jun 16 11:50 activemq-data/0000000000385f46.log
> 76809807    4 drwxr-xr-x   2 anselme  anselme      4096 Jun 16 16:13 activemq-data/0000000000f0faaf.index
> 76809808  420 -rw-r--r--   1 anselme  anselme    429264 Jun 16 16:13 activemq-data/0000000000f0faaf.index/000009.log
> 76809811    4 -rw-r--r--   2 anselme  anselme      2437 Jun 16 12:06 activemq-data/0000000000f0faaf.index/000008.sst
> 76809812    4 -rw-r--r--   1 anselme  anselme       165 Jun 16 16:13 activemq-data/0000000000f0faaf.index/MANIFEST-000007
> 76809809    4 -rw-r--r--   1 anselme  anselme        16 Jun 16 16:13 activemq-data/0000000000f0faaf.index/CURRENT
> 76809810  300 -rw-r--r--   2 anselme  anselme    305206 Jun 16 11:51 activemq-data/0000000000f0faaf.index/000005.sst
> 76809814 102400 -rw-r--r--   1 anselme  anselme  104857600 Jun 12 21:06 activemq-data/0000000000000000.log
> 76809802    4 drwxr-xr-x   2 anselme  anselme      4096 Jun 16 16:13 activemq-data/plist.index
> 76809803    4 -rw-r--r--   1 anselme  anselme        16 Jun 16 16:13 activemq-data/plist.index/CURRENT
> 76809806    0 -rw-r--r--   1 anselme  anselme         0 Jun 16 16:13 activemq-data/plist.index/LOCK
> 76809805 1024 -rw-r--r--   1 anselme  anselme   1048576 Jun 16 16:13 activemq-data/plist.index/000003.log
> 76809804 1024 -rw-r--r--   1 anselme  anselme   1048576 Jun 16 16:13 activemq-data/plist.index/MANIFEST-000002
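> For reference, a tempUsage limit of this kind is set in activemq.xml roughly as follows. This is a trimmed, illustrative sketch rather than our exact file; only the 1 gb limit and the LevelDB persistence adapter reflect our setup, the rest is the usual default layout:
>
> <broker xmlns="http://activemq.apache.org/schema/core" dataDirectory="activemq-data">
>   <persistenceAdapter>
>     <levelDB directory="activemq-data"/>
>   </persistenceAdapter>
>   <systemUsage>
>     <systemUsage>
>       <tempUsage>
>         <tempUsage limit="1 gb"/>
>       </tempUsage>
>     </systemUsage>
>   </systemUsage>
> </broker>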
> The problem is that in our production system it once blocked producers with tempUsage at 122%, even though the disk was empty.
> So we investigated, ran the broker in a debugger, and found how the usage is calculated. It is in the Scala LevelDB store files: the value is not based on what is actually on disk, but on what the store thinks is on disk. It multiplies the size of one log file by the number of log files known to a certain hashmap.
> I think the entries of the hashmap are not removed when the log files are purged.
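> To make the suspicion concrete, here is a minimal Scala sketch of the pattern we think we saw in the debugger (the names and structure are illustrative only, not the actual activemq-leveldb-store code):
>
> import scala.collection.mutable
>
> // Illustrative sketch only: hypothetical names, not the real LevelDB store code.
> // Temp usage is estimated from an in-memory map of known journal logs,
> // not from the files actually left on disk.
> object TempUsageSketch {
>
>   val logSize   = 104857600L                        // fixed journal log size (100 MB)
>   val knownLogs = mutable.HashMap[Long, String]()   // log position -> file name (the suspect map)
>   val onDisk    = mutable.Set[String]()             // stand-in for the real filesystem
>
>   // Usage estimate: (number of logs the map knows about) * (size of one log).
>   def estimatedTempUsage: Long = knownLogs.size * logSize
>
>   // Suspected bug: purging deletes the file but never removes the map entry,
>   // so the estimate only ever grows and can pass 100% of the limit.
>   def purgeLog(position: Long): Unit = {
>     knownLogs.get(position).foreach(onDisk -= _)
>     // knownLogs.remove(position)   // <-- the missing cleanup
>   }
>
>   def main(args: Array[String]): Unit = {
>     for (p <- 0L until 5L) {
>       val name = "%016x.log".format(p)
>       knownLogs += p -> name
>       onDisk += name
>     }
>     purgeLog(0L)
>     purgeLog(1L)
>     println(s"estimated: ${estimatedTempUsage / 1048576} MB, on disk: ${onDisk.size * logSize / 1048576} MB")
>     // prints: estimated: 500 MB, on disk: 300 MB
>   }
> }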
> Could you confirm?
> Thanks in advance 
> Anselme



--
This message was sent by Atlassian JIRA
(v6.2#6252)