Posted to oak-issues@jackrabbit.apache.org by "Amit Jain (JIRA)" <ji...@apache.org> on 2017/03/02 11:41:45 UTC
[jira] [Assigned] (OAK-5874) Duplicate uploads might happen with AbstractSharedCachingDataStore
[ https://issues.apache.org/jira/browse/OAK-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Amit Jain reassigned OAK-5874:
------------------------------
Assignee: Amit Jain
> Duplicate uploads might happen with AbstractSharedCachingDataStore
> ------------------------------------------------------------------
>
> Key: OAK-5874
> URL: https://issues.apache.org/jira/browse/OAK-5874
> Project: Jackrabbit Oak
> Issue Type: Bug
> Reporter: Raul Hudea
> Assignee: Amit Jain
> Priority: Minor
> Attachments: OAK-5874.patch
>
>
> If a file is staged for async upload in UploadStagingCache and another call to AbstractSharedCachingDataStore.addRecord is then made for a file with the same SHA1, the new call goes directly to the backend to write the file, because the cache does not take pending uploads into account. As a result, the same blob is uploaded twice: once asynchronously (from UploadStagingCache) and once synchronously (from AbstractSharedCachingDataStore.addRecord).
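> The fix amounts to consulting the staging cache before the synchronous backend write. A minimal sketch of that idea (class and method names below are illustrative, not the actual Oak API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the deduplication check; StagingCacheSketch,
// stage(), addRecord() and backendWrites are assumptions for illustration,
// not the real Jackrabbit Oak classes.
public class StagingCacheSketch {
    // Blobs staged for async upload but not yet flushed to the backend,
    // keyed by content hash (e.g. SHA-1).
    private final Map<String, byte[]> pendingUploads = new ConcurrentHashMap<>();

    // Counts synchronous writes to the (simulated) backend.
    public int backendWrites = 0;

    // Stage a blob for asynchronous upload.
    public void stage(String sha1, byte[] data) {
        pendingUploads.put(sha1, data);
    }

    // addRecord with the fix: if the blob is already staged for async
    // upload, skip the synchronous write instead of uploading it twice.
    public void addRecord(String sha1, byte[] data) {
        if (pendingUploads.containsKey(sha1)) {
            return; // pending async upload covers this blob
        }
        backendWrites++; // simulate the synchronous backend upload
    }
}
```

> With this check in place, a second addRecord for an already-staged SHA1 becomes a no-op, so only the async upload touches the backend.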
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)