Posted to oak-issues@jackrabbit.apache.org by "Davide Giannella (JIRA)" <ji...@apache.org> on 2017/05/30 08:39:25 UTC
[jira] [Closed] (OAK-5874) Duplicate uploads might happen with AbstractSharedCachingDataStore
[ https://issues.apache.org/jira/browse/OAK-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Davide Giannella closed OAK-5874.
---------------------------------
Bulk close for 1.7.0
> Duplicate uploads might happen with AbstractSharedCachingDataStore
> ------------------------------------------------------------------
>
> Key: OAK-5874
> URL: https://issues.apache.org/jira/browse/OAK-5874
> Project: Jackrabbit Oak
> Issue Type: Bug
> Components: blob
> Reporter: Raul Hudea
> Assignee: Amit Jain
> Priority: Minor
> Labels: candidate_oak_1_6, performance
> Fix For: 1.7.0, 1.8
>
> Attachments: OAK-5874.patch
>
>
> If a file is staged for async upload in UploadStagingCache and another call to AbstractSharedCachingDataStore.addRecord is then made for a file with the same SHA1, the new call writes the file directly to the backend, because the cache does not take pending uploads into account. As a result, two uploads happen for the same blob: one async (from UploadStagingCache) and one sync (from AbstractSharedCachingDataStore.addRecord).
> (cc [~amitjain])
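The race described above can be modeled with a minimal Java sketch. All class and method names below are illustrative stand-ins, not the actual Oak API: a staging map holds blobs pending async upload, and the fixed addRecord path consults it before writing to the backend, while the buggy path does not.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of the duplicate-upload race: a staging cache holds
// blobs pending async upload; addRecord must consult it before writing
// to the backend, or the same SHA1 is uploaded twice.
class StagingCacheModel {
    // SHA1 -> staged contents awaiting async upload
    final Map<String, byte[]> staged = new ConcurrentHashMap<>();
    // counts how many times each SHA1 was written to the backend
    final Map<String, Integer> backendUploads = new ConcurrentHashMap<>();

    // Buggy path: always writes to the backend, ignoring staged uploads.
    void addRecordBuggy(String sha1, byte[] data) {
        backendUploads.merge(sha1, 1, Integer::sum);
    }

    // Fixed path: if the blob is already staged for async upload, skip
    // the synchronous backend write; the async uploader will handle it.
    void addRecordFixed(String sha1, byte[] data) {
        if (staged.containsKey(sha1)) {
            return; // pending async upload already covers this blob
        }
        backendUploads.merge(sha1, 1, Integer::sum);
    }

    // The async uploader eventually flushes a staged blob to the backend.
    void flushStaged(String sha1) {
        if (staged.remove(sha1) != null) {
            backendUploads.merge(sha1, 1, Integer::sum);
        }
    }
}

public class Demo {
    public static void main(String[] args) {
        // Buggy: the blob is uploaded twice (sync + async).
        StagingCacheModel buggy = new StagingCacheModel();
        buggy.staged.put("abc", new byte[]{1});
        buggy.addRecordBuggy("abc", new byte[]{1});
        buggy.flushStaged("abc");
        System.out.println("buggy uploads: " + buggy.backendUploads.get("abc"));

        // Fixed: the pending async upload is detected, so only one write.
        StagingCacheModel fixed = new StagingCacheModel();
        fixed.staged.put("abc", new byte[]{1});
        fixed.addRecordFixed("abc", new byte[]{1});
        fixed.flushStaged("abc");
        System.out.println("fixed uploads: " + fixed.backendUploads.get("abc"));
    }
}
```

With the buggy path the counter reaches 2 for the same SHA1; the fix in OAK-5874 makes addRecord aware of pending staged uploads so the blob is written once.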
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)