Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2018/10/11 17:52:00 UTC

[jira] [Resolved] (HADOOP-13843) S3Guard, MetadataStore to support atomic create(path, overwrite=false)

     [ https://issues.apache.org/jira/browse/HADOOP-13843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Steve Loughran resolved HADOOP-13843.
-------------------------------------
    Resolution: Won't Fix

You'd have to add some lease marker into the store for this, worry about having it expire, etc. It's best to move to algorithms which don't require an atomic create-no-overwrite, so that they work across all cloud infrastructures.
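
For context, a minimal sketch of what such a lease marker would require, as a conditional write against a hypothetical DynamoDB table (the table name "s3guard-leases", the "path" key attribute and the TTL value are all assumptions here, not the S3Guard schema), using the AWS SDK's conditional PutItem. The expiry/reclaim handling it leaves open is exactly the complexity being declined:

{code:java}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ConditionalCheckFailedException;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;

/**
 * Sketch only: write a lease row for the path if and only if none exists,
 * stamped with an expiry so abandoned leases could eventually be reaped.
 * Table name, key attribute and TTL value are hypothetical.
 */
public class CreateLeaseSketch {

  private static final String LEASE_TABLE = "s3guard-leases";  // hypothetical table
  private static final long LEASE_TTL_SECONDS = 600;           // hypothetical policy

  private final AmazonDynamoDB ddb = AmazonDynamoDBClientBuilder.defaultClient();

  /** @return true if this caller won the right to create the path. */
  public boolean tryAcquireCreateLease(String path) {
    long expires = System.currentTimeMillis() / 1000 + LEASE_TTL_SECONDS;

    Map<String, AttributeValue> item = new HashMap<>();
    item.put("path", new AttributeValue().withS(path));
    item.put("expires", new AttributeValue().withN(Long.toString(expires)));

    PutItemRequest put = new PutItemRequest()
        .withTableName(LEASE_TABLE)
        .withItem(item)
        // conditional write: fail if a lease row for this path already exists
        .withConditionExpression("attribute_not_exists(#p)")
        .withExpressionAttributeNames(Collections.singletonMap("#p", "path"));
    try {
      ddb.putItem(put);
      return true;
    } catch (ConditionalCheckFailedException e) {
      // another caller already holds (or held) the lease for this path;
      // deciding when a stale lease may safely be reclaimed is the hard part
      return false;
    }
  }
}
{code}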

> S3Guard, MetadataStore to support atomic create(path, overwrite=false)
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-13843
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13843
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 3.0.0-beta1
>            Reporter: Steve Loughran
>            Priority: Major
>
> Support atomically enforced file creation. The current S3A client can do a check in create() and fail if something is already there, but the new entry is only created at the end of the PUT; during the entire interval between that check and the close() of the stream, there is nothing to stop other callers from creating an object.
> Proposed: S3A FS can do a check and then create a 0-byte file at the path; that would need some {{putNoOverwrite(DirListingMetadata)}} call in MetadataStore, followed by a PUT of a 0-byte file to S3. That will increase the cost of file creation, though at least with the metadata store, the cost of the initial getFileStatus() check is lower.
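
To make the race window described above concrete, here is a minimal two-writer sketch against a hypothetical {{s3a://}} bucket (the bucket name and the interleaving are illustrative, not a test from the codebase). Both create() calls with overwrite=false succeed, because neither object exists in S3 until its stream is closed:

{code:java}
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/**
 * Illustration of the create/close race: two independent clients both pass
 * the overwrite=false existence check, and the last close() silently wins.
 */
public class CreateRaceDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path path = new Path("s3a://example-bucket/race/demo.txt");  // hypothetical bucket

    FileSystem fs1 = FileSystem.newInstance(path.toUri(), conf);
    FileSystem fs2 = FileSystem.newInstance(path.toUri(), conf);

    // Both existence checks pass: nothing is at the path yet.
    FSDataOutputStream a = fs1.create(path, false);
    FSDataOutputStream b = fs2.create(path, false);  // no failure despite overwrite=false

    a.write("writer A".getBytes(StandardCharsets.UTF_8));
    b.write("writer B".getBytes(StandardCharsets.UTF_8));

    a.close();  // PUT of writer A's object
    b.close();  // PUT of writer B's object; writer A's data is silently replaced

    fs1.close();
    fs2.close();
  }
}
{code}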



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
