Posted to issues@activemq.apache.org by "ASF subversion and git services (JIRA)" <ji...@apache.org> on 2017/10/10 16:25:00 UTC
[jira] [Commented] (AMQ-6771) Improve performance of KahaDB recovery check checkForCorruptJournalFiles=true
[ https://issues.apache.org/jira/browse/AMQ-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16198908#comment-16198908 ]
ASF subversion and git services commented on AMQ-6771:
------------------------------------------------------
Commit f9899922783e0e94de030f4c867e5d48a3d869a9 in activemq's branch refs/heads/master from [~gtully]
[ https://git-wip-us.apache.org/repos/asf?p=activemq.git;h=f989992 ]
[AMQ-6831, AMQ-6771] fix up recovery check to ensure full batch is available in memory, regression from AMQ-6771
> Improve performance of KahaDB recovery check checkForCorruptJournalFiles=true
> -----------------------------------------------------------------------------
>
> Key: AMQ-6771
> URL: https://issues.apache.org/jira/browse/AMQ-6771
> Project: ActiveMQ
> Issue Type: Improvement
> Components: KahaDB
> Affects Versions: 5.15.0
> Reporter: Gary Tully
> Assignee: Gary Tully
> Fix For: 5.15.1, 5.16.0
>
>
> The KahaDB checkForCorruptJournalFiles option validates the checksum of every journal batch record on startup. If a single producer writes many small messages, the batch records in the journal will be small. The current check implementation reads one batch at a time with a seek/read sequence, which can be very slow over shared disks.
> The check can instead be a fast buffered sequential read using maxBatchSize, which should already be tuned to match the disk transfer rate.
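The buffered sequential approach described above can be sketched as follows. This is a simplified illustration, not ActiveMQ's actual KahaDB code: the batch layout used here ([int length][long CRC32][payload]) and the class/method names are assumptions for the sake of the example. The key point is the single BufferedInputStream pass sized by maxBatchSize, replacing a seek/read pair per batch.

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.CRC32;

public class SequentialRecoveryCheckSketch {

    // Validate every batch checksum in one buffered sequential pass over the
    // journal, instead of issuing a seek/read per batch. The buffer is sized
    // to maxBatchSize so reads amortize well over slow (e.g. shared) disks.
    static boolean validate(File journal, int maxBatchSize) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(journal), maxBatchSize))) {
            while (true) {
                int len;
                try {
                    len = in.readInt();          // hypothetical record header: payload length
                } catch (EOFException endOfJournal) {
                    return true;                 // clean end of file, no corruption found
                }
                long storedChecksum = in.readLong();
                byte[] payload = new byte[len];
                in.readFully(payload);
                CRC32 crc = new CRC32();
                crc.update(payload);
                if (crc.getValue() != storedChecksum) {
                    return false;                // corrupt batch detected
                }
            }
        }
    }

    // Helper for the demo below: append one batch record in the same layout.
    static void writeBatch(DataOutputStream out, byte[] payload) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(payload);
        out.writeInt(payload.length);
        out.writeLong(crc.getValue());
        out.write(payload);
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("journal-sketch", ".log");
        f.deleteOnExit();
        try (DataOutputStream out = new DataOutputStream(new FileOutputStream(f))) {
            for (int i = 0; i < 3; i++) {
                writeBatch(out, ("batch-" + i).getBytes("UTF-8"));
            }
        }
        System.out.println(validate(f, 4 * 1024 * 1024)); // prints true for an intact file
    }
}
```

With many small batches, the per-batch cost drops from one round trip to the disk per record to a fraction of one large sequential read, which is the gain the issue describes.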
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)