Posted to common-issues@hadoop.apache.org by "Chris Nauroth (JIRA)" <ji...@apache.org> on 2016/05/20 17:54:12 UTC

[jira] [Moved] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories

     [ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth moved HDFS-10442 to HADOOP-13188:
-----------------------------------------------

        Key: HADOOP-13188  (was: HDFS-10442)
    Project: Hadoop Common  (was: Hadoop HDFS)

> S3A file-create should throw error rather than overwrite directories
> --------------------------------------------------------------------
>
>                 Key: HADOOP-13188
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13188
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Raymie Stata
>
> S3A.create(Path,FsPermission,boolean,int,short,long,Progressable) does not check whether it is being asked to overwrite a directory.  It could easily do so, and it should throw an error in that case.
> There is a test case for this in AbstractFSContractTestBase, but it is skipped because S3A is a blobstore.  However, both the Azure and Swift file systems run this test, and the new S3A one should as well.
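
The reporter's point is that create() can determine, before opening the output stream, whether the target path already resolves to a directory, and fail fast if it does. A minimal sketch of such a guard follows; it uses only the public Hadoop FileSystem API, and the class and method names (CreateGuard, checkNotDirectory) are illustrative, not the actual S3AFileSystem change:

    // Illustrative sketch only: a pre-create guard that refuses to shadow a
    // directory, instead of silently overwriting it as described in the report.
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import org.apache.hadoop.fs.FileAlreadyExistsException;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    final class CreateGuard {
      // Hypothetical helper: throw if the path being created is an existing directory.
      static void checkNotDirectory(FileSystem fs, Path path) throws IOException {
        try {
          FileStatus status = fs.getFileStatus(path);
          if (status.isDirectory()) {
            throw new FileAlreadyExistsException(path + " is a directory");
          }
        } catch (FileNotFoundException notFound) {
          // Nothing exists at the path, so the create can proceed.
        }
      }
    }

On a blobstore such as S3 this check costs an extra metadata lookup per create, which is presumably part of why the contract test was skipped for blobstores; the report argues the check belongs there anyway, since the Azure and Swift file systems already run the same test.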



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org