Posted to dev@activemq.apache.org by "Gary Tully (JIRA)" <ji...@apache.org> on 2010/08/13 13:03:47 UTC

[jira] Created: (AMQ-2868) NegativeQueueTest and JDBC variant - intermittent failure - missing message when cache exhausted

NegativeQueueTest and JDBC variant - intermittent failure - missing message when cache exhausted
------------------------------------------------------------------------------------------------

                 Key: AMQ-2868
                 URL: https://issues.apache.org/activemq/browse/AMQ-2868
             Project: ActiveMQ
          Issue Type: Bug
    Affects Versions: 5.4.0
            Reporter: Gary Tully
            Assignee: Gary Tully
             Fix For: 5.4.1


The test fails to consume all messages; on occasion one message is missed.
Problem: concurrent transaction completion leaves messages out of order in the cursor w.r.t. the store. Once the cache memory limit is reached, subsequent messages are not cached, so when the cursor is exhausted the next message needs to be recovered from the store, and the point at which we start reading from the store is important. If the cursor is out of order at the moment the cache fills, it can skip a message in the store.
Previously the entire store was replayed, as if nothing had been cached, and the already-seen messages were suppressed by the cursor as duplicates. However, the duplicate suppression audit has a size limit and a producers-spread limit, so messages can evade duplicate detection. Also, in the case of consumer transactions that roll back, duplicate delivery is required, so out-of-order messages may arrive on a subsequent dispatch.
setBatch, which ensures that messages are replayed from the correct point in the store, is important to give ordering guarantees with failover and memory limits, so synchronization of the store and cursor w.r.t. concurrent transactions is also needed in support of setBatch.
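To illustrate the interaction (a rough sketch only; the class and field names below are made up, this is not the actual cursor code): when the cache fills, the cursor records the last message it cached and asks the store, via setBatch, to resume recovery just after that point.

{code:java}
// Minimal sketch of the failure mode. Names are illustrative only.
import org.apache.activemq.command.Message;
import org.apache.activemq.command.MessageId;
import org.apache.activemq.store.MessageStore;

class CursorSketch {
    private final MessageStore store;
    private MessageId lastCachedId;   // last message added to the cursor cache
    private boolean cacheEnabled = true;

    CursorSketch(MessageStore store) {
        this.store = store;
    }

    void onMessageAdded(Message message, long memoryUsage, long memoryLimit) throws Exception {
        if (cacheEnabled && memoryUsage > memoryLimit) {
            cacheEnabled = false;
            // Tell the store where replay should start once the cache is exhausted.
            // If a concurrently completing transaction has already written an
            // *earlier* message to the store that the cursor has not yet seen,
            // this batch point is too far ahead and that message is skipped.
            store.setBatch(lastCachedId);
        }
        if (cacheEnabled) {
            lastCachedId = message.getMessageId();
            // ... add the message to the in-memory cache ...
        }
    }
}
{code}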

Store commit and the after-completions that update the cursor need to be serialized per destination to make this totally deterministic.
Recap of the ingredients: memory limits low enough that the cache fills; concurrent send transaction completion, so that store updates and cursor updates can overlap with each other and with cache invalidation; and setBatch trying to reduce the replay of messages.
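A minimal sketch of that serialization, using a plain lock and stand-in Runnables rather than the real broker types:

{code:java}
// Sketch only: one lock shared by the transactional store commit and the
// after-completion that updates the cursor, so the two steps never interleave
// across concurrent transactions for the same destination.
import java.util.concurrent.locks.ReentrantLock;

class SerializedCompletionSketch {
    private final ReentrantLock storeAndCursorLock = new ReentrantLock();

    void completeTransaction(Runnable storeCommit, Runnable cursorAfterCompletion) {
        storeAndCursorLock.lock();
        try {
            storeCommit.run();            // messages become visible in the store
            cursorAfterCompletion.run();  // cursor sees them in the same order
        } finally {
            storeAndCursorLock.unlock();
        }
    }
}
{code}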

Outstanding question:
- do we make the use of setBatch, and the transaction sync between store and cursor, configurable? If setBatch is off, don't sync; we are then at the mercy of consumer transactions and duplicate suppression in the event of failover. An alternative is to make the sync conditional on the use of the cache for a destination: very reliable but slow (though "slow" is a very relative determination!). A rough sketch of this alternative follows below.
Also, the sync may need to be enabled or disabled for all destinations at once, as a transaction can span many destinations.
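A rough sketch of the conditional alternative; the class, flag, and method names are invented for illustration and are not an existing ActiveMQ option:

{code:java}
// Hypothetical sketch of "sync only while the cursor cache is in use".
class ConditionalSyncSketch {
    private volatile boolean cacheEnabled = true;  // per-destination cursor cache state

    void completeTransaction(Runnable storeCommit, Runnable cursorAfterCompletion, Object lock) {
        if (cacheEnabled) {
            // Cache in use: ordering between store and cursor matters for setBatch,
            // so serialize the completion.
            synchronized (lock) {
                storeCommit.run();
                cursorAfterCompletion.run();
            }
        } else {
            // No cache: everything is recovered from the store in store order,
            // so skip the sync and rely on duplicate suppression and redelivery.
            storeCommit.run();
            cursorAfterCompletion.run();
        }
    }
}
{code}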

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


[jira] Resolved: (AMQ-2868) NegativeQueueTest and JDBC variant - intermittent failure - missing message when cache exhausted

Posted by "Gary Tully (JIRA)" <ji...@apache.org>.
     [ https://issues.apache.org/activemq/browse/AMQ-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Gary Tully resolved AMQ-2868.
-----------------------------

    Resolution: Fixed

sticking with the shared lock for the time being.



[jira] Commented: (AMQ-2868) NegativeQueueTest and JDBC variant - intermittent failure - missing message when cache exhausted

Posted by "Gary Tully (JIRA)" <ji...@apache.org>.
    [ https://issues.apache.org/activemq/browse/AMQ-2868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=61199#action_61199 ] 

Gary Tully commented on AMQ-2868:
---------------------------------

Added the sync that serialises transaction updates to the store and cursor, such that the cursor is always in order w.r.t. the store and setBatch can position replay exactly where it needs to, so that no duplicates are replayed to the cursor.
r985155
This is the bullet-proof approach.
A variant could acquire per-destination locks rather than the transaction store lock that is currently used. This would allow more per-destination concurrency (sketched below).
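A rough sketch of that per-destination variant; illustrative only, not the committed change:

{code:java}
// Sketch: one lock per destination instead of the single shared lock.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

class PerDestinationLockSketch {
    private final ConcurrentMap<String, ReentrantLock> locks =
            new ConcurrentHashMap<String, ReentrantLock>();

    private ReentrantLock lockFor(String destinationName) {
        ReentrantLock lock = locks.get(destinationName);
        if (lock == null) {
            ReentrantLock candidate = new ReentrantLock();
            ReentrantLock existing = locks.putIfAbsent(destinationName, candidate);
            lock = existing != null ? existing : candidate;
        }
        return lock;
    }

    void completeForDestination(String destinationName,
                                Runnable storeCommit,
                                Runnable cursorAfterCompletion) {
        ReentrantLock lock = lockFor(destinationName);
        lock.lock();
        try {
            storeCommit.run();
            cursorAfterCompletion.run();
        } finally {
            lock.unlock();
        }
    }
}
{code}

The catch, as noted in the description, is that a transaction can span many destinations, so a per-destination scheme would need a consistent lock acquisition order to avoid deadlock.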

