Posted to dev@jackrabbit.apache.org by "Thomas Mueller (JIRA)" <ji...@apache.org> on 2010/02/16 14:57:27 UTC
[jira] Commented: (JCR-2063) FileDataStore: garbage collection can delete files that are still needed
[ https://issues.apache.org/jira/browse/JCR-2063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12834216#action_12834216 ]
Thomas Mueller commented on JCR-2063:
-------------------------------------
A workaround for implementations where this is not fixed is:

gc.mark();
try {
    // sleep to ensure the last modified time is updated,
    // even on file systems with a low timestamp resolution
    Thread.sleep(5000);
} catch (InterruptedException e) {
    // cannot be ignored, otherwise data that is in use may be deleted
    throw new RepositoryException("Garbage collection interrupted", e);
}
gc.mark();
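For context, the two-pass pattern above can be sketched against a hypothetical collector interface. MarkSweepCollector and SafeGc are illustrative names only, not Jackrabbit API; the sketch just captures the calling pattern (mark, sleep past the timestamp resolution, mark again, then sweep):

```java
// Hypothetical stand-in for a data store garbage collector; only the
// calling pattern mirrors the workaround in the comment above.
interface MarkSweepCollector {
    void mark() throws Exception;
    int sweep() throws Exception; // returns the number of deleted files
}

public class SafeGc {
    /**
     * Runs mark twice, separated by a pause longer than the coarsest
     * file-system timestamp resolution (2 seconds for FAT per the issue
     * text), so every in-use file carries a last modified time later
     * than the scan start before sweep runs.
     */
    public static int collect(MarkSweepCollector gc, long marginMillis)
            throws Exception {
        gc.mark();
        try {
            Thread.sleep(marginMillis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            // Must not swallow this: sweeping after an incomplete second
            // mark pass could delete data that is still in use.
            throw new Exception("garbage collection interrupted", e);
        }
        gc.mark();
        return gc.sweep();
    }
}
```

The margin is passed in rather than hard-coded so callers can pick a value larger than the resolution of whatever file system backs the data store.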
> FileDataStore: garbage collection can delete files that are still needed
> ------------------------------------------------------------------------
>
> Key: JCR-2063
> URL: https://issues.apache.org/jira/browse/JCR-2063
> Project: Jackrabbit Content Repository
> Issue Type: Bug
> Components: jackrabbit-core
> Reporter: Thomas Mueller
> Assignee: Thomas Mueller
> Fix For: 1.5.5
>
>
> It looks like the FileDataStore garbage collection (both regular scan and persistence manager scan) can delete files that are still needed.
> Currently it looks like the reason is the last modified time resolution of the file system: 2 seconds for FAT and Mac OS X, 100 ns for NTFS, and 1 second for other file systems. That means files that are scanned at the very beginning are sometimes deleted, because they have a later last modified time than when the scan was started.
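The failure mode described in the issue can be illustrated with a small self-contained demo. TimestampResolutionDemo is an illustrative name, not Jackrabbit code; it shows why a naive "delete everything with lastModified < scanStart" check is unsafe when the file system truncates modification times to a coarse slot:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class TimestampResolutionDemo {

    /**
     * A naive GC classifies a file as garbage when its last modified
     * time is strictly before the scan start. On a file system with a
     * 1-2 second timestamp resolution, lastModified is truncated down,
     * so a file written "now" can still satisfy this check and be
     * deleted even though it is in use.
     */
    public static boolean looksLikeGarbage(long lastModified, long scanStart) {
        return lastModified < scanStart;
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("datastore", ".bin");
        long scanStart = System.currentTimeMillis();
        long lastModified = Files.getLastModifiedTime(f).toMillis();
        // Whether the freshly created file already "looks like garbage"
        // depends on the timestamp resolution of the local file system.
        System.out.println("scanStart    = " + scanStart);
        System.out.println("lastModified = " + lastModified);
        System.out.println("naive GC would delete it: "
                + looksLikeGarbage(lastModified, scanStart));
        Files.deleteIfExists(f);
    }
}
```

On FAT, for example, a file created up to 2 seconds before the scan start can report a truncated lastModified that falls before scanStart, which is exactly the window the sleep in the workaround is meant to cover.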
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.