Posted to dev@activemq.apache.org by "Maxim Fateev (JIRA)" <ji...@apache.org> on 2006/05/31 00:08:52 UTC

[jira] Created: (AMQ-732) Infinite recovery loop.

Infinite recovery loop.
-----------------------

         Key: AMQ-732
         URL: https://issues.apache.org/activemq/browse/AMQ-732
     Project: ActiveMQ
        Type: Bug

  Components: Broker  
    Versions: 4.0    
 Environment: Linux RHEL 3
    Reporter: Maxim Fateev


The simplest way to reproduce the problem:
1. Remove the storage directory.
2. Start the broker using the following code:

 import org.apache.activemq.broker.BrokerService;
 import org.apache.activemq.store.DefaultPersistenceAdapterFactory;

 public class BrokerMain { // wrapper class added so the snippet compiles standalone
     public static void main(String[] args) throws Exception {
         BrokerService broker = new BrokerService();
         broker.setPersistent(true);
         // One tiny (2 KB) journal log file, so the journal wraps around quickly.
         DefaultPersistenceAdapterFactory pFactory = new DefaultPersistenceAdapterFactory();
         pFactory.setJournalLogFiles(1);
         pFactory.setJournalLogFileSize(2048);
         broker.setPersistenceFactory(pFactory);
         broker.setUseJmx(false);
         broker.addConnector("tcp://localhost:61616");
         broker.start();
         Thread.sleep(1000000000000L); // keep the JVM alive
     }
 }
3. Shut down the broker.
4. Start the broker again.
It enters an infinite loop:

[                          main] BrokerService                  INFO  ActiveMQ null JMS Message Broker (localhost) is starting
[                          main] BrokerService                  INFO  For help or more information please see: http://incubator.apache.org/activemq/
[                          main] JDBCPersistenceAdapter         INFO  Database driver recognized: [apache_derby_embedded_jdbc_driver]
[                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE TABLE ACTIVEMQ_MSGS(ID INTEGER NOT NULL, CONTAINER VARCHAR(250), MSGID_PROD VARCHAR(250), MSGID_SEQ INTEGER, EXPIRATION BIGINT, MSG BLOB, PRIMARY KEY ( ID ) )
[                          main] DefaultJDBCAdapter             DEBUG Could not create JDBC tables; The message table already existed. Failure was: CREATE TABLE ACTIVEMQ_MSGS(ID INTEGER NOT NULL, CONTAINER VARCHAR(250), MSGID_PROD VARCHAR(250), MSGID_SEQ INTEGER, EXPIRATION BIGINT, MSG BLOB, PRIMARY KEY ( ID ) ) Message: Table/View 'ACTIVEMQ_MSGS' already exists in Schema 'APP'. SQLState: X0Y32 Vendor code: 20000
[                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE INDEX ACTIVEMQ_MSGS_MIDX ON ACTIVEMQ_MSGS (MSGID_PROD,MSGID_SEQ)
[                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE INDEX ACTIVEMQ_MSGS_CIDX ON ACTIVEMQ_MSGS (CONTAINER)
[                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE INDEX ACTIVEMQ_MSGS_EIDX ON ACTIVEMQ_MSGS (EXPIRATION)
[                          main] DefaultJDBCAdapter             DEBUG Executing SQL: CREATE TABLE ACTIVEMQ_ACKS(CONTAINER VARCHAR(250) NOT NULL, CLIENT_ID VARCHAR(250) NOT NULL, SUB_NAME VARCHAR(250) NOT NULL, SELECTOR VARCHAR(250), LAST_ACKED_ID INTEGER, PRIMARY KEY ( CONTAINER, CLIENT_ID, SUB_NAME))
[                          main] DefaultJDBCAdapter             DEBUG Could not create JDBC tables; The message table already existed. Failure was: CREATE TABLE ACTIVEMQ_ACKS(CONTAINER VARCHAR(250) NOT NULL, CLIENT_ID VARCHAR(250) NOT NULL, SUB_NAME VARCHAR(250) NOT NULL, SELECTOR VARCHAR(250), LAST_ACKED_ID INTEGER, PRIMARY KEY ( CONTAINER, CLIENT_ID, SUB_NAME)) Message: Table/View 'ACTIVEMQ_ACKS' already exists in Schema 'APP'. SQLState: X0Y32 Vendor code: 20000
[                          main] JDBCPersistenceAdapter         DEBUG Cleaning up old messages.
[                          main] DefaultJDBCAdapter             DEBUG Executing SQL: DELETE FROM ACTIVEMQ_MSGS WHERE ( EXPIRATION<>0 AND EXPIRATION<?) OR ID <= ( SELECT min(ACTIVEMQ_ACKS.LAST_ACKED_ID) FROM ACTIVEMQ_ACKS WHERE ACTIVEMQ_ACKS.CONTAINER=ACTIVEMQ_MSGS.CONTAINER)
[                          main] DefaultJDBCAdapter             DEBUG Deleted 0 old message(s).
[                          main] JDBCPersistenceAdapter         DEBUG Cleanup done.
[                          main] JournalPersistenceAdapter      INFO  Journal Recovery Started from: Active Journal: using 1 x 0.001953125 Megs at: /workplace/fateev/install/activemq-4.0-SNAPSHOT/activemq-core/activemq-data/journal
[                          main] JournalPersistenceAdapter      DEBUG TRACE Entry: RECOVERED
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=53
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=97
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=0
[                          main] JournalPersistenceAdapter      DEBUG TRACE Entry: RECOVERED
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=53
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=97
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=0
[                          main] JournalPersistenceAdapter      DEBUG TRACE Entry: RECOVERED
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=53
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation offset=97
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=0

The getNextDataRecordLocation entries come from a debug statement I added to LogFileManager.getNextDataRecordLocation:

   public Location getNextDataRecordLocation(Location lastLocation) throws IOException, InvalidRecordLocationException {
        RecordInfo ri = readRecordInfo(lastLocation);
        while (true) {

            int logFileId = ri.getLocation().getLogFileId();
            int offset = ri.getNextLocation();
            log.debug("getNextDataRecordLocation offset=" + offset);
            // Are we overflowing into next logFile?
            if (offset >= ri.getLogFileState().getAppendOffset()) {
                LogFileNode nextActive = ri.getLogFileState().getNextActive();
                log.debug("getNextDataRecordLocation overflowing into next logFile=" + (nextActive == null ? "null"  : String.valueOf(nextActive.getId())));
                if (nextActive == null) {
                    return null;
                }
                logFileId = nextActive.getId();
                offset = 0;
            }

            try {
                ri = readRecordInfo(new Location(logFileId, offset));
            } catch (InvalidRecordLocationException e) {
                return null;
            }

            // Is the next record the right record type?
            if (ri.getHeader().getRecordType() == DATA_RECORD_TYPE) {
                return ri.getLocation();
            }
            // No? go onto the next record.
        }
    }

It looks like recovery doesn't take into account that the journal storage file has to end at some point. 

A similar problem happens if multiple, larger files are used. It occurs every time one of the log files grows bigger than the size specified in the pFactory.setJournalLogFileSize(...) call.
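
Based on the log above, the loop seems to be: with a single log file, getNextActive() returns log file 0 itself, the offset resets to 0, and the scan restarts from the top of the same file without ever noticing it has come full circle. A minimal sketch of a termination guard, for illustration only (it assumes java.util.HashSet and that Location implements equals/hashCode; this is not a proposed patch):

    public Location getNextDataRecordLocation(Location lastLocation) throws IOException, InvalidRecordLocationException {
        // Remember every (logFileId, offset) visited; the scan state is fully
        // determined by that pair, so revisiting one means we have cycled
        // through all active log files without finding a data record.
        Set<Location> visited = new HashSet<Location>();
        RecordInfo ri = readRecordInfo(lastLocation);
        while (true) {
            int logFileId = ri.getLocation().getLogFileId();
            int offset = ri.getNextLocation();
            // Are we overflowing into the next logFile?
            if (offset >= ri.getLogFileState().getAppendOffset()) {
                LogFileNode nextActive = ri.getLogFileState().getNextActive();
                if (nextActive == null) {
                    return null;
                }
                logFileId = nextActive.getId();
                offset = 0;
            }
            Location next = new Location(logFileId, offset);
            if (!visited.add(next)) {
                return null; // came full circle: no more data records to recover
            }
            try {
                ri = readRecordInfo(next);
            } catch (InvalidRecordLocationException e) {
                return null;
            }
            // Is the next record the right record type?
            if (ri.getHeader().getRecordType() == DATA_RECORD_TYPE) {
                return ri.getLocation();
            }
        }
    }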



-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   https://issues.apache.org/activemq/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira


[jira] Updated: (AMQ-732) Infinite recovery loop.

Posted by "Hiram Chirino (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]

Hiram Chirino updated AMQ-732:
------------------------------

    Fix Version: 4.0.2
                     (was: 4.0.1)

> Infinite recovery loop.
> -----------------------
>
>          Key: AMQ-732
>          URL: https://issues.apache.org/activemq/browse/AMQ-732
>      Project: ActiveMQ
>         Type: Bug

>   Components: Broker
>     Versions: 4.0
>  Environment: Linux RHEL 3
>     Reporter: Maxim Fateev
>     Assignee: Hiram Chirino
>      Fix For: 4.1, 4.0.2
>  Attachments: activemq-data-1-journal.tar.gz, activemq-data-2-journal.tar.gz


[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "Hiram Chirino (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36235 ] 

Hiram Chirino commented on AMQ-732:
-----------------------------------

How big were the messages? Can the shipped example producer and consumer in the examples directory reproduce the issue?



[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "Maxim Fateev (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36244 ] 

Maxim Fateev commented on AMQ-732:
----------------------------------

I don't know the message size, as I used the shipped producer and consumer. Here are the commands used to enqueue and dequeue the messages:

activemq/assembly/target/activemq-4.0-SNAPSHOT/bin/activemq-4.0-SNAPSHOT/example> ant producer -Ddurable=true -Dmax=10000
activemq/assembly/target/activemq-4.0-SNAPSHOT/bin/activemq-4.0-SNAPSHOT/example> ant consumer -Ddurable=true -Dmax=10000
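
For reference, this is roughly what the producer target does with those flags, sketched against the plain JMS API (not the actual example source; the queue name here is made up):

    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ExampleProducer {
        public static void main(String[] args) throws Exception {
            Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(session.createQueue("TEST.QUEUE"));
            producer.setDeliveryMode(DeliveryMode.PERSISTENT); // -Ddurable=true
            for (int i = 0; i < 10000; i++) {                  // -Dmax=10000
                producer.send(session.createTextMessage("message " + i));
            }
            connection.close();
        }
    }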




[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "Maxim Fateev (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36252 ] 

Maxim Fateev commented on AMQ-732:
----------------------------------

One possible clue is that I'm running the broker from Eclipse, and Eclipse kills processes without running their shutdown hooks. So to reproduce this you may need to kill the broker with kill -9.
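
If it's easier than killing the process externally, the same condition can be simulated in the reproduction code itself: Runtime.getRuntime().halt() terminates the JVM immediately without running shutdown hooks (sketch only; broker is the BrokerService from the snippet above, and the sleep duration is arbitrary):

    // Simulate a hard kill (kill -9 / Eclipse's process terminate) from inside
    // the broker JVM: halt() ends the process without running shutdown hooks,
    // leaving the journal exactly as it was mid-write.
    broker.start();
    Thread.sleep(5000);           // arbitrary: give the journal some activity first
    Runtime.getRuntime().halt(9); // never returns; no shutdown hooks run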



[jira] Updated: (AMQ-732) Infinite recovery loop.

Posted by "Hiram Chirino (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]

Hiram Chirino updated AMQ-732:
------------------------------

    Fix Version/s: 4.2
                       (was: 4.1.0)

> Infinite recovery loop.
> -----------------------
>
>                 Key: AMQ-732
>                 URL: https://issues.apache.org/activemq/browse/AMQ-732
>             Project: ActiveMQ
>          Issue Type: Bug
>          Components: Broker
>    Affects Versions: 4.0
>         Environment: Linux RHEL 3
>            Reporter: Maxim Fateev
>         Assigned To: Hiram Chirino
>             Fix For: 4.2.0, 4.0.3
>
>         Attachments: activemq-data-1-journal.tar.gz, activemq-data-2-journal.tar.gz

[jira] Updated: (AMQ-732) Infinite recovery loop.

Posted by "Maxim Fateev (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]

Maxim Fateev updated AMQ-732:
-----------------------------

    Attachment: activemq-data-1-journal.tar.gz

Storage from the first failure case, where a single log file is used and no messages were published.



[jira] Updated: (AMQ-732) Infinite recovery loop.

Posted by "Maxim Fateev (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]

Maxim Fateev updated AMQ-732:
-----------------------------

    Attachment: activemq-data-2-journal.tar.gz

Storage that reproduces the infinite recovery loop with two log files, configured through the following code:

            DefaultPersistenceAdapterFactory pFactory = new DefaultPersistenceAdapterFactory();
            pFactory.setJournalLogFiles(2);
            pFactory.setJournalLogFileSize(10240);
            broker.setPersistenceFactory(pFactory);
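
As a side note, a watchdog around broker.start() makes this kind of hang fail fast instead of blocking forever. A rough sketch, using only the APIs already shown in this issue plus java.util.concurrent (import paths assumed from ActiveMQ 4.x; the 60-second deadline is an arbitrary assumption):

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.TimeoutException;

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.store.DefaultPersistenceAdapterFactory;

    public class RecoveryWatchdog {
        public static void main(String[] args) throws Exception {
            final BrokerService broker = new BrokerService();
            broker.setPersistent(true);
            DefaultPersistenceAdapterFactory pFactory = new DefaultPersistenceAdapterFactory();
            pFactory.setJournalLogFiles(2);
            pFactory.setJournalLogFileSize(10240);
            broker.setPersistenceFactory(pFactory);
            broker.setUseJmx(false);

            // Run start() on a worker thread; recovery that loops forever
            // shows up as a timeout instead of a silent hang.
            ExecutorService pool = Executors.newSingleThreadExecutor();
            Future<Object> start = pool.submit(new Callable<Object>() {
                public Object call() throws Exception {
                    broker.start();
                    return null;
                }
            });
            try {
                start.get(60, TimeUnit.SECONDS);
                System.out.println("Broker recovered and started.");
            } catch (TimeoutException e) {
                System.err.println("start() did not return; recovery is likely looping.");
                System.exit(1);
            } finally {
                pool.shutdownNow();
            }
        }
    }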

> Infinite recovery loop.
> -----------------------
>
>          Key: AMQ-732
>          URL: https://issues.apache.org/activemq/browse/AMQ-732
>      Project: ActiveMQ
>         Type: Bug

>   Components: Broker
>     Versions: 4.0
>  Environment: Linux RHEL 3
>     Reporter: Maxim Fateev
>     Assignee: Hiram Chirino
>  Attachments: activemq-data-1-journal.tar.gz, activemq-data-2-journal.tar.gz
>



[jira] Assigned: (AMQ-732) Infinite recovery loop.

Posted by "james strachan (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]

james strachan reassigned AMQ-732:
----------------------------------

    Assign To: Hiram Chirino

> Infinite recovery loop.
> -----------------------
>
>          Key: AMQ-732
>          URL: https://issues.apache.org/activemq/browse/AMQ-732
>      Project: ActiveMQ
>         Type: Bug

>   Components: Broker
>     Versions: 4.0
>  Environment: Linux RHEL 3
>     Reporter: Maxim Fateev
>     Assignee: Hiram Chirino




[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "Maxim Fateev (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36270 ] 

Maxim Fateev commented on AMQ-732:
----------------------------------

I've attached the storage files. 

> Infinite recovery loop.
> -----------------------
>
>          Key: AMQ-732
>          URL: https://issues.apache.org/activemq/browse/AMQ-732
>      Project: ActiveMQ
>         Type: Bug

>   Components: Broker
>     Versions: 4.0
>  Environment: Linux RHEL 3
>     Reporter: Maxim Fateev
>     Assignee: Hiram Chirino
>  Attachments: activemq-data-1-journal.tar.gz, activemq-data-2-journal.tar.gz
>



[jira] Reopened: (AMQ-732) Infinite recovery loop.

Posted by "Hiram Chirino (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]
     
Hiram Chirino reopened AMQ-732:
-------------------------------


Ah, just noticed that 4.0.1 did not build a new activeio and ship with the new version. I guess this will make it into 4.0.2 then.

> Infinite recovery loop.
> -----------------------
>
>          Key: AMQ-732
>          URL: https://issues.apache.org/activemq/browse/AMQ-732
>      Project: ActiveMQ
>         Type: Bug

>   Components: Broker
>     Versions: 4.0
>  Environment: Linux RHEL 3
>     Reporter: Maxim Fateev
>     Assignee: Hiram Chirino
>      Fix For: 4.1, 4.0.2
>  Attachments: activemq-data-1-journal.tar.gz, activemq-data-2-journal.tar.gz



[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "Hiram Chirino (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36556 ] 

Hiram Chirino commented on AMQ-732:
-----------------------------------

We need to build activeio beta-4 and ship with that.

> Infinite recovery loop.
> -----------------------
>
>          Key: AMQ-732
>          URL: https://issues.apache.org/activemq/browse/AMQ-732
>      Project: ActiveMQ
>         Type: Bug

>   Components: Broker
>     Versions: 4.0
>  Environment: Linux RHEL 3
>     Reporter: Maxim Fateev
>     Assignee: Hiram Chirino
>      Fix For: 4.1, 4.0.2
>  Attachments: activemq-data-1-journal.tar.gz, activemq-data-2-journal.tar.gz



[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "james strachan (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36258 ] 

james strachan commented on AMQ-732:
------------------------------------

BTW, you should be able to attach a zipped-up tarball of the activemq-data directory for a broker which can't recover, and we can get it working.

> Infinite recovery loop.
> -----------------------
>
>          Key: AMQ-732
>          URL: https://issues.apache.org/activemq/browse/AMQ-732
>      Project: ActiveMQ
>         Type: Bug

>   Components: Broker
>     Versions: 4.0
>  Environment: Linux RHEL 3
>     Reporter: Maxim Fateev
>     Assignee: Hiram Chirino




[jira] Resolved: (AMQ-732) Infinite recovery loop.

Posted by "Hiram Chirino (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]
     
Hiram Chirino resolved AMQ-732:
-------------------------------

    Resolution: Fixed

Here's the diff:

http://svn.apache.org/viewvc/incubator/activemq/trunk/activeio/activeio-core/src/main/java/org/apache/activeio/journal/active/LogFileManager.java?view=diff&r1=413520&r2=413521&pathrev=413521

We were iterating through all the records in the journal; when we got to the last record and asked for the subsequent record, instead of getting null we would get the first record in the journal again.  This only happened when all the files in the journal were 'active', i.e. when they all held records recorded after the mark.  Fixed the logic so that we don't go past the last record.
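
In the spirit of that change (a sketch in the same hypothetical model as earlier, not the committed code; the linked diff is the authoritative fix), the scan terminates once the next location would pass the append point of the last active file, instead of following the circular "next active" link back to the first file:

    // Sketch of the corrected termination check -- hypothetical model only.
    public class FixedScanDemo {
        // Next record offset, or null once the scan reaches the end of the
        // journal. The offset equal to appendOffset is where the next write
        // would go, not an existing record.
        static Integer next(int offset, int[] records,
                            int appendOffset, boolean isLastActiveFile) {
            for (int r : records) {
                if (r > offset && r < appendOffset) {
                    return r;
                }
            }
            // Past the append point: overflow only when a different active
            // file follows; at the last active file, stop instead of
            // wrapping around to the first file again.
            return isLastActiveFile ? null : Integer.valueOf(0);
        }

        public static void main(String[] args) {
            int[] records = {53, 97};
            Integer offset = Integer.valueOf(0);
            while ((offset = next(offset, records, 97, true)) != null) {
                System.out.println("offset=" + offset);   // prints 53, then stops
            }
        }
    }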

> Infinite recovery loop.
> -----------------------
>
>          Key: AMQ-732
>          URL: https://issues.apache.org/activemq/browse/AMQ-732
>      Project: ActiveMQ
>         Type: Bug

>   Components: Broker
>     Versions: 4.0
>  Environment: Linux RHEL 3
>     Reporter: Maxim Fateev
>     Assignee: Hiram Chirino
>      Fix For: 4.1, 4.0.1
>  Attachments: activemq-data-1-journal.tar.gz, activemq-data-2-journal.tar.gz
>



[jira] Updated: (AMQ-732) Infinite recovery loop.

Posted by "Hiram Chirino (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]

Hiram Chirino updated AMQ-732:
------------------------------

    Fix Version/s: 4.0.3
                       (was: 4.0.2)


[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "Maxim Fateev (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36254 ] 

Maxim Fateev commented on AMQ-732:
----------------------------------

After more testing it looks like the root cause is skipping the broker's usual shutdown sequence. When I stop the broker normally I cannot reproduce the problem; when I kill it with "kill -9" or through Eclipse it fails in the way described above. 

To me this looks like a journal design issue: it has to tolerate a catastrophic shutdown, yet by design the shutdown code must run to leave the journal in a consistent state. Is there any way to force the journal into a consistent state after every modification?
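
A clean-shutdown path at least covers the plain-kill case, since kill (SIGTERM) still runs JVM shutdown hooks while kill -9 (SIGKILL) never does. A minimal sketch reusing the BrokerService API from the reproduction code; the class name is made up here:

    import org.apache.activemq.broker.BrokerService;

    public class BrokerWithShutdownHook {
        public static void main(String[] args) throws Exception {
            final BrokerService broker = new BrokerService();
            broker.setPersistent(true);
            broker.addConnector("tcp://localhost:61616");
            broker.start();
            // Runs on normal exit and on a plain kill (SIGTERM),
            // but never on kill -9 (SIGKILL).
            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    try {
                        broker.stop();  // closes the journal in a consistent state
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            Thread.sleep(Long.MAX_VALUE);
        }
    }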



[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "james strachan (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36257 ] 

james strachan commented on AMQ-732:
------------------------------------

Great investigation Maxim; sounds like you are on to something.

In an ideal world we'd have an automated system test that did a kill -9 on a running broker and verified that it recovers cleanly.

In the meantime, I wonder if you could give us a tarball of the journal files created after a kill -9, so we can use recovery of those files as an automated test case.
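
As a rough illustration of such a test (a sketch only: BrokerMain is a hypothetical class that starts a persistent broker and exits 0 once startup, and therefore journal recovery, has completed; Process.destroyForcibly() delivers SIGKILL on Unix and needs a modern JVM):

    import java.util.concurrent.TimeUnit;

    public class KillNineRecoveryTest {
        public static void main(String[] args) throws Exception {
            String cp = System.getProperty("java.class.path");
            // 1. Run the broker in its own JVM so it can be killed abruptly.
            Process broker = new ProcessBuilder("java", "-cp", cp, "BrokerMain")
                    .inheritIO().start();
            TimeUnit.SECONDS.sleep(10);   // let it write some journal records
            broker.destroyForcibly();     // SIGKILL on Unix, i.e. kill -9
            broker.waitFor();
            // 2. Restart; recovery must complete within a bounded time.
            Process restarted = new ProcessBuilder("java", "-cp", cp, "BrokerMain")
                    .inheritIO().start();
            if (!restarted.waitFor(60, TimeUnit.SECONDS)) {
                restarted.destroyForcibly();
                throw new AssertionError(
                    "broker did not recover within 60s - possible infinite recovery loop");
            }
        }
    }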



[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "Maxim Fateev (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36222 ] 

Maxim Fateev commented on AMQ-732:
----------------------------------

I was able to reproduce the problem with the default journal configuration (a minimal driver for the enqueue/dequeue steps is sketched after the grep output below):
1. Remove the contents of the storage directory.
2. Start the broker.
3. Enqueue 30000 messages.
4. Stop the broker.
5. Start the broker.
6. Dequeue 30000 messages.
7. Stop the broker.
8. Start the broker.
9. Enqueue 30000 messages.
10. Stop the broker.
11. Recovery never ends, looping through the log files:

fateev@fateev:/workplace/fateev/install/activemq-4.0-SNAPSHOT/activemq-core/activemq-data> grep overflowing server.log
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=4
57352 [Journal Writer] DEBUG org.apache.activeio.journal.active.LogFileManager  - getNextDataRecordLocation overflowing into next logFile=4
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=3
73287 [Journal Writer] DEBUG org.apache.activeio.journal.active.LogFileManager  - getNextDataRecordLocation overflowing into next logFile=3
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=4
118821 [Journal Writer] DEBUG org.apache.activeio.journal.active.LogFileManager  - getNextDataRecordLocation overflowing into next logFile=4
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=3
122112 [Journal Writer] DEBUG org.apache.activeio.journal.active.LogFileManager  - getNextDataRecordLocation overflowing into next logFile=3
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=4
164157 [Journal Writer] DEBUG org.apache.activeio.journal.active.LogFileManager  - getNextDataRecordLocation overflowing into next logFile=4
[                Journal Writer] LogFileManager                 DEBUG getNextDataRecordLocation overflowing into next logFile=3
166746 [Journal Writer] DEBUG org.apache.activeio.journal.active.LogFileManager  - getNextDataRecordLocation overflowing into next logFile=3
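
For reference, the enqueue/dequeue steps above can be driven by code along these lines (a sketch; the queue name, payload, and broker URL are arbitrary choices, not taken from the original test):

    import javax.jms.Connection;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class EnqueueDequeue {
        public static void main(String[] args) throws Exception {
            boolean enqueue = args.length > 0 && "enqueue".equals(args[0]);
            Connection c = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
            c.start();
            Session s = c.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue q = s.createQueue("TEST.QUEUE");
            if (enqueue) {
                MessageProducer p = s.createProducer(q);
                p.setDeliveryMode(DeliveryMode.PERSISTENT);  // persistent => journaled
                for (int i = 0; i < 30000; i++) {
                    p.send(s.createTextMessage("message-" + i));
                }
            } else {
                MessageConsumer mc = s.createConsumer(q);
                for (int i = 0; i < 30000; i++) {
                    mc.receive();
                }
            }
            c.close();
        }
    }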


[jira] Commented: (AMQ-732) Infinite recovery loop.

Posted by "Maxim Fateev (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-732?page=comments#action_36245 ] 

Maxim Fateev commented on AMQ-732:
----------------------------------

Also note that my first entry shows how to reproduce the problem without ANY consumers or producers.



[jira] Resolved: (AMQ-732) Infinite recovery loop.

Posted by "Hiram Chirino (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]
     
Hiram Chirino resolved AMQ-732:
-------------------------------

    Fix Version: 4.0.1
                 4.1
     Resolution: Fixed

Thanks Maxim,

Those journal files helped me reproduce and fix the problem!  I've committed the fix against the 4.1 and 4.0 branches.  A subsequent nightly snapshot should include the fix.



[jira] Reopened: (AMQ-732) Infinite recovery loop.

Posted by "Maxim Fateev (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-732?page=all ]
     
Maxim Fateev reopened AMQ-732:
------------------------------


Thanks for the quick fix. 
Could you provide more info on the root cause? It would be nice to include that information, as well as the check-in number, every time you close an issue.
