Posted to issues@activemq.apache.org by "Ashish (Jira)" <ji...@apache.org> on 2022/12/19 10:00:00 UTC

[jira] [Updated] (AMQ-9186) Documentation for how to recover mkahadb

     [ https://issues.apache.org/jira/browse/AMQ-9186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashish updated AMQ-9186:
------------------------
    Description: 
Hello,

I have migrated my ActiveMQ setup to the cloud (EKS) and am using EFS as the persistence layer. During testing I have been running with ignoreMissingJournalfiles="false", and for the past few days I have been seeing the error below (the restart policy is Always):

{code:java}
Caused by: java.io.IOException: Detected corrupt journal files. [60:29718101 >= key < 60:33554433]
{code}
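
For context, the persistence setup is roughly equivalent to the following programmatic sketch (the real configuration lives in activemq.xml; the directory path, broker name, and class layout here are placeholders, not my actual values):

{code:java}
import java.io.File;
import java.util.Arrays;

import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.store.kahadb.FilteredKahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
import org.apache.activemq.store.kahadb.MultiKahaDBPersistenceAdapter;

public class MKahaDbOnEfsExample {
    public static void main(String[] args) throws Exception {
        // Single KahaDB instance for all destinations, strict journal checking:
        // the broker refuses to start if it detects corrupt journal files.
        KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
        kahaDb.setIgnoreMissingJournalfiles(false);

        // No queue/topic filter, so this entry matches every destination.
        FilteredKahaDBPersistenceAdapter filtered = new FilteredKahaDBPersistenceAdapter();
        filtered.setPersistenceAdapter(kahaDb);

        MultiKahaDBPersistenceAdapter mKahaDb = new MultiKahaDBPersistenceAdapter();
        mKahaDb.setDirectory(new File("/efs/activemq/mkahadb")); // assumed EFS mount point
        mKahaDb.setFilteredPersistenceAdapters(Arrays.asList(filtered));

        BrokerService broker = new BrokerService();
        broker.setBrokerName("efs-broker"); // placeholder name
        broker.setPersistenceAdapter(mKahaDb);
        broker.start();
        broker.waitUntilStopped();
    }
}
{code}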
To fix this I first deleted the db.data file and restarted the broker. That resulted in the error below and left the broker in a restart loop (restart policy is Always):

{code:java}
Caused by: java.lang.IllegalStateException: PageFile is not loaded 
{code}
Later I set ignoreMissingJournalfiles="true", which allowed the broker to ignore the corrupt journal files and start.
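
For reference, the corruption-handling options involved look roughly like this in programmatic form (a sketch only; the same attributes exist on the kahaDB element in activemq.xml, and whether enabling all of them is the right recovery approach is part of my question):

{code:java}
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;

public class KahaDbRecoverySettings {
    // Sketch only: the equivalent attributes can be set on <kahaDB .../> in activemq.xml.
    static KahaDBPersistenceAdapter recoveringAdapter() {
        KahaDBPersistenceAdapter kahaDb = new KahaDBPersistenceAdapter();
        // Let the broker start even when journal files referenced by the index are
        // missing or corrupt; messages in the affected range may be lost.
        kahaDb.setIgnoreMissingJournalfiles(true);
        // Checksum journal records and actively scan for corruption during recovery,
        // so damaged entries are detected and skipped rather than failing startup.
        kahaDb.setChecksumJournalFiles(true);
        kahaDb.setCheckForCorruptJournalFiles(true);
        return kahaDb;
    }
}
{code}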

What would be your recommended steps to address this problem? Is there already any documentation for it?

Note: The broker ends up in this state due to OOM. I have already increased the broker's memory substantially (4 GB), but I don't want to increase it further without first understanding whether there is a way to recover the broker. I am also using EFS with Performance mode: Max I/O and Throughput mode: Bursting.
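
In case it is relevant, the broker-side memory limits (as opposed to the JVM heap) can also be capped via SystemUsage; a hedged sketch with placeholder limits, not my actual values:

{code:java}
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.usage.MemoryUsage;
import org.apache.activemq.usage.StoreUsage;
import org.apache.activemq.usage.SystemUsage;
import org.apache.activemq.usage.TempUsage;

public class BrokerMemoryLimits {
    // Placeholder limits: the idea is to cap message memory below the JVM heap
    // so producer flow control engages before the broker runs out of memory.
    static void applyLimits(BrokerService broker) {
        SystemUsage systemUsage = broker.getSystemUsage();

        MemoryUsage memoryUsage = new MemoryUsage();
        memoryUsage.setLimit(1024L * 1024 * 1024); // 1 GB for in-flight messages (assumed)
        systemUsage.setMemoryUsage(memoryUsage);

        StoreUsage storeUsage = new StoreUsage();
        storeUsage.setLimit(50L * 1024 * 1024 * 1024); // 50 GB persistent store (assumed)
        systemUsage.setStoreUsage(storeUsage);

        TempUsage tempUsage = new TempUsage();
        tempUsage.setLimit(10L * 1024 * 1024 * 1024); // 10 GB temp store (assumed)
        systemUsage.setTempUsage(tempUsage);
    }
}
{code}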

Regards
Ashish



> Documentation for how to recover mkahadb
> ----------------------------------------
>
>                 Key: AMQ-9186
>                 URL: https://issues.apache.org/jira/browse/AMQ-9186
>             Project: ActiveMQ
>          Issue Type: Wish
>          Components: Broker, mKahaDB, Performance Test
>    Affects Versions: 5.17.0
>            Reporter: Ashish
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)