Posted to users@activemq.apache.org by "volker.seibt" <vo...@buenting.de> on 2013/12/11 11:03:10 UTC

KahaDB journals growing - no visible entries

We use ActiveMQ 5.7 as an embedded JMS provider with Camel 2.11.0.

It's a small application that uses seven or eight queues with at most about
1,000 small messages (tickets from a deposit machine) per queue per day.

We use persistent queues via KahaDB and did not change any KahaDB
parameters.
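
For reference, an embedded broker with an untouched KahaDB store amounts to
roughly the following; this is a sketch with placeholder names and paths, not
our exact setup:

import java.io.File;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class EmbeddedBrokerSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("ticketBroker");              // placeholder name
        KahaDBPersistenceAdapter kahadb = new KahaDBPersistenceAdapter();
        kahadb.setDirectory(new File("data/kahadb"));      // where the db-NNN.log files live
        // We rely on the defaults: journalMaxFileLength = 32 MB, cleanupInterval = 30 s.
        broker.setPersistenceAdapter(kahadb);
        broker.start();                                    // Camel talks to it over the vm:// transport
        broker.waitUntilStopped();
    }
}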

With this setup the number of KahaDB journals (db-xxx.log) is growing every day.
From 2013/07 until now we have accumulated 439 files with the default size of
32 MB, which adds up to about 12 GB.

I copied this database to a standalone ActiveMQ instance to analyze it:

On startup an automatic recovery is started which "recovers" 5,400,000 (!)
entries. All queues used by our application are shown with zero entries,
except ActiveMQ.DLQ (the dead letter queue) with 77 entries.

There are also three or four topics shown with at most 7 entries.

While running this database on an ActiveMQ instance for 10 minutes, it grows
by three new files (about 95 MB).

First questions: where do the 5.4 million entries hide? How can I get rid
of them without completely deleting the database? How can such a state
occur, and more importantly, how can we avoid it?




--
View this message in context: http://activemq.2283324.n4.nabble.com/KahaDB-journals-growing-no-visible-entries-tp4675358.html
Sent from the ActiveMQ - User mailing list archive at Nabble.com.

RE: KahaDB journals growing - no visible entries

Posted by "Seibt, Volker" <vo...@buenting.de>.
Until now we have not changed any defaults, so the cleanup every 30 seconds should be active by default. In most of our environments it works without problems.
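
The 30 seconds comes from the cleanupInterval on the KahaDB persistence
adapter. If we ever had to change it (we have not - this is only a sketch of
the knobs, with a placeholder directory), it would be roughly:

import java.io.File;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class KahaDbTuningSketch {
    public static KahaDBPersistenceAdapter tunedAdapter() {
        KahaDBPersistenceAdapter kahadb = new KahaDBPersistenceAdapter();
        kahadb.setDirectory(new File("data/kahadb"));        // placeholder
        kahadb.setCheckpointInterval(5 * 1000);              // index checkpoint, default 5 s
        kahadb.setCleanupInterval(30 * 1000);                // journal GC, default 30 s
        kahadb.setJournalMaxFileLength(32 * 1024 * 1024);    // default 32 MB per db-NNN.log
        return kahadb;
    }
}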


RE: KahaDB journals growing - no visible entries

Posted by ba...@wellsfargo.com.
Those messages must still be in the journal files. Is your checkpoint worker cleaning up the journals every 30 seconds?




RE: KahaDB journals growing - no visible entries

Posted by "Seibt, Volker" <vo...@buenting.de>.
In addition to the queues used by the application, there is one queue "ActiveMQ.DLQ" which holds 77 messages.

There is also one topic "ActiveMQ.Advisory.Queue" which holds 7 messages. That's all.

After purging "ActiveMQ.DLQ" and restarting the instance, ActiveMQ brings up 23 messages in "ActiveMQ.DLQ" again.

Apart from viewing the admin web page and purging the queue, there was no further activity.
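
We purged it via the admin web page; doing the same over JMX would look
roughly like this (the JMX port, the broker name and the 5.7-style object
name are assumptions about our setup):

import javax.management.MBeanServerConnection;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.QueueViewMBean;

public class PurgeDlqSketch {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");   // assumed JMX port
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            // 5.7-style object name; 5.8+ uses type=Broker,brokerName=...,destinationType=Queue,...
            ObjectName dlq = new ObjectName(
                "org.apache.activemq:BrokerName=localhost,Type=Queue,Destination=ActiveMQ.DLQ");
            QueueViewMBean queue = MBeanServerInvocationHandler
                .newProxyInstance(conn, dlq, QueueViewMBean.class, true);
            System.out.println("DLQ size before purge: " + queue.getQueueSize());
            queue.purge();
        } finally {
            jmxc.close();
        }
    }
}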

Regards,

Volker Seibt


RE: KahaDB journals growing - no visible entries

Posted by ba...@wellsfargo.com.
Is there a DLQ for each application queue?  Are those holding any messages?
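
By default every queue shares the single ActiveMQ.DLQ; per-queue dead letter
queues only exist if the broker's destination policy was set up with an
individual strategy, roughly like this (a sketch, not necessarily your config):

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.region.policy.IndividualDeadLetterStrategy;
import org.apache.activemq.broker.region.policy.PolicyEntry;
import org.apache.activemq.broker.region.policy.PolicyMap;

public class PerQueueDlqSketch {
    // Gives every queue its own DLQ.<queueName> instead of the shared ActiveMQ.DLQ.
    public static void configure(BrokerService broker) {
        IndividualDeadLetterStrategy dlqStrategy = new IndividualDeadLetterStrategy();
        dlqStrategy.setQueuePrefix("DLQ.");
        dlqStrategy.setUseQueueForQueueMessages(true);

        PolicyEntry policy = new PolicyEntry();
        policy.setQueue(">");                        // apply to all queues
        policy.setDeadLetterStrategy(dlqStrategy);

        PolicyMap policyMap = new PolicyMap();
        policyMap.setDefaultEntry(policy);
        broker.setDestinationPolicy(policyMap);
    }
}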

Regards,

Barry Barnett
WMQ Enterprise Services & Solutions
Wells Fargo
Cell: 704-564-5501


Re: KahaDB journals growing - no visible entries

Posted by "volker.seibt" <vo...@buenting.de>.
gtully wrote
> that tx should be associated with one of your clients. So if you cycle
> your clients, it should get rolled back.

There are no clients. The database was copied from a pilot installation to a
fresh ActiveMQ installation for analysis.

But upgrading to 5.9 seems to solve the problem - your hint at the JIRA issue
was helpful.

With version 5.8 most of the logs were removed after a bunch of "unfound
messages" had been discarded (the log shows e.g. "message not found in sequence
id index: ID:KS-110-006-PC01-1543-1379412564726-3:2:395:1:1").

But 4 logs (about 127 MB), the oldest from 2013-09-25, stayed.

After using version 5.9 another bunch of "recovered in-flight XA
transactions" were discarded and all useless logs disappeared.

Now we have to check if our application works with 5.9.

Thanks!





Re: KahaDB journals growing - no visible entries

Posted by Gary Tully <ga...@gmail.com>.
that tx should be associated with one of your clients. So if you cycle
your clients, it should get rolled back.
I know we now (5.8/5.9) track the range of data files associated with a
transaction so that gc is not blocked any more than it needs to be; in
5.7 any later log file needs to remain.
see: https://issues.apache.org/jira/browse/AMQ-4262
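
if you want to check from the client side whether the broker still holds any
prepared XA transactions, something along these lines would do it (the broker
url is an assumption, and only transactions that actually reached the prepared
state will show up):

import javax.jms.XAConnection;
import javax.jms.XASession;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;
import org.apache.activemq.ActiveMQXAConnectionFactory;

public class ListPreparedXaSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQXAConnectionFactory factory =
            new ActiveMQXAConnectionFactory("tcp://localhost:61616");   // assumed url
        XAConnection connection = factory.createXAConnection();
        connection.start();
        try {
            XASession session = connection.createXASession();
            XAResource resource = session.getXAResource();
            // recover() lists transactions that were prepared but never completed
            Xid[] prepared = resource.recover(XAResource.TMSTARTRSCAN | XAResource.TMENDRSCAN);
            for (Xid xid : prepared) {
                System.out.println("prepared but incomplete: " + xid);
                // resource.rollback(xid);   // uncomment to discard it
            }
        } finally {
            connection.close();
        }
    }
}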




-- 
http://redhat.com
http://blog.garytully.com

Re: KahaDB journals growing - no visible entries

Posted by "volker.seibt" <vo...@buenting.de>.
I did some analysis as described in
http://activemq.apache.org/why-do-kahadb-log-files-remain-after-cleanup.html
and looked at the checkpoint worker's log.
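
The log below is TRACE output from the KahaDB MessageDatabase logger; we
enabled it as described on that page. Done programmatically instead of in the
log4j configuration, it would be roughly:

import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class KahaDbTraceLogging {
    public static void enable() {
        // Same effect as the log4j.properties line
        //   log4j.logger.org.apache.activemq.store.kahadb.MessageDatabase=TRACE
        Logger.getLogger("org.apache.activemq.store.kahadb.MessageDatabase")
              .setLevel(Level.TRACE);
    }
}

The first checkpoint run then logged: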
2013-12-16 09:11:13,486 [eckpoint Worker] DEBUG MessageDatabase - Checkpoint started.
2013-12-16 09:11:13,493 [eckpoint Worker] TRACE
MessageDatabase                - Last update: 441:10482170, full gc
candidates set: [1, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,
56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74,
75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93,
94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109,
110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124,
125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139,
140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154,
155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169,
170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184,
185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199,
200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214,
215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229,
230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244,
245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259,
260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274,
275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289,
290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304,
305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319,
320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334,
335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349,
350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364,
365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379,
380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394,
395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409,
410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424,
425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439,
440, 441]
2013-12-16 09:11:13,495 [eckpoint Worker] TRACE MessageDatabase - gc candidates after first tx:43:20042312, [1, 42]
2013-12-16 09:11:13,495 [eckpoint Worker] TRACE MessageDatabase - gc candidates after dest:0:wincor-status, [1, 42]
2013-12-16 09:11:13,495 [eckpoint Worker] TRACE MessageDatabase - gc candidates after dest:0:ActiveMQ.DLQ, []
2013-12-16 09:11:13,495 [eckpoint Worker] TRACE MessageDatabase - gc candidates: []

I found two log files (1, 42) blocked by the dead letter queue.

After purging the DLQ these files were deleted, but most of the log files
stayed:
2013-12-16 09:12:13,608 [eckpoint Worker] TRACE MessageDatabase               
- Last update: 441:10612635, full gc candidates set: [43, 44, 45, 46, 47,
48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66,
67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85,
86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103,
104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118,
119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133,
134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148,
149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163,
164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178,
179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193,
194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208,
209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223,
224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238,
239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253,
254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268,
269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283,
284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298,
299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313,
314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328,
329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343,
344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358,
359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373,
374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388,
389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403,
404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418,
419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433,
434, 435, 436, 437, 438, 439, 440, 441]
2013-12-16 09:12:13,609 [eckpoint Worker] TRACE MessageDatabase - gc candidates after first tx:43:20042312, []

There seems to be an open transaction tx:43:20042312. Log file 43 is from
2013-09-25 and 441 is from 2013-12-10, so it is a very long-running transaction.
There is nothing to see in the ActiveMQ console.

How can I get rid of this?

Regards, Volker Seibt


