Posted to common-issues@hadoop.apache.org by "Steve Loughran (JIRA)" <ji...@apache.org> on 2016/06/01 21:30:59 UTC

[jira] [Commented] (HADOOP-13230) s3a's use of fake empty directory blobs does not interoperate with other s3 tools

    [ https://issues.apache.org/jira/browse/HADOOP-13230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15311154#comment-15311154 ] 

Steve Loughran commented on HADOOP-13230:
-----------------------------------------

# Have you tested this against branch-2 yet? HADOOP-11694 covers the changes there.
# This should be testable: after an s3a mkdir, bypass the s3a code and PUT a file directly into the store; listing the directory through s3a probably won't find the path. There's a sketch of what I mean below.
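
Something like the following is the test I have in mind; a rough sketch only, with imports, bucket name, configuration and credential wiring elided, and the raw AmazonS3 client standing in for however the test actually bypasses s3a:

{code}
// inside a JUnit test with hadoop-aws and the AWS Java SDK on the classpath
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(new URI("s3a://test-bucket/"), conf);
Path dir = new Path("/test/fakedir");
assertTrue(fs.mkdirs(dir));                 // s3a PUTs the fake empty-dir object "test/fakedir/"

// bypass s3a entirely and PUT a file with a plain SDK client
AmazonS3 s3 = new AmazonS3Client();         // however the test wires up raw S3 access
byte[] data = "hello".getBytes(StandardCharsets.UTF_8);
ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(data.length);
s3.putObject("test-bucket", "test/fakedir/child.txt",
    new ByteArrayInputStream(data), metadata);

// the interesting assertion: does s3a see the child, or does the fake marker hide it?
FileStatus[] listing = fs.listStatus(dir);
assertEquals("child not visible through s3a", 1, listing.length);
{code}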

I don't see us changing the policy of creating those empty paths; it's how we emulate empty dirs, and there is a core assumption in the Hadoop FS APIs that after a call to fs.mkdirs(path), exists(path) holds.
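
That is the invariant any change here has to keep; roughly, as a sketch of the contract rather than new code:

{code}
// the FS contract invariant the fake empty-directory objects exist to preserve
Path dir = new Path("s3a://bucket/newdir");
assertTrue(fs.mkdirs(dir));
assertTrue("directory must exist immediately after mkdirs()", fs.exists(dir));
// with nothing PUT under "newdir/", an empty S3 "directory" would have no object
// to HEAD or LIST, and exists() would come back false
{code}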


But, 

* HADOOP-13208 proposes making listFiles(recursive) do a bulk list call; that would bypass the directory walk. We'll take a patch there, with tests; there's a sketch of the bulk listing after this list.
* We sort of do that in rename already. Does playing with that make any difference? Maybe rename() is copying the fake empty-dir entries too, even when there are children, so propagating the problem. Again, we'll take a patch there.
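
For the bulk listing, something like this is what HADOOP-13208 is after; a rough sketch against the AWS SDK, not the actual patch, with the bucket/client setup and the process() callback purely illustrative:

{code}
// one flat LIST over the whole subtree instead of a treewalk of HEAD + LIST per directory
ListObjectsRequest request = new ListObjectsRequest()
    .withBucketName(bucket)
    .withPrefix("users/alice/dataset/");    // no delimiter, so every descendant key comes back
ObjectListing listing = s3.listObjects(request);
while (true) {
  for (S3ObjectSummary summary : listing.getObjectSummaries()) {
    if (!summary.getKey().endsWith("/")) {  // skip the fake directory marker objects
      process(summary.getKey(), summary.getSize());
    }
  }
  if (!listing.isTruncated()) {
    break;
  }
  listing = s3.listNextBatchOfObjects(listing);
}
{code}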

Finally, there is always the possibility of bypassing that HEAD for the empty dir and going straight to a listing.
# That listing will need to distinguish between the fake empty-dir entry and real children.
# You have to consider that a LIST operation costs much more than a HEAD, due to the need to parse the XML response, so it may get bounced for cost reasons. On the other hand, if you can show that things come out cheaper overall on a populated directory path (and we are now counting individual operations, so you can add those measurements to your tests), then it will get in. There's a sketch of that listing-based check below.
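
The listing-based check would be something along these lines; a sketch only, and it assumes the fake marker key is the directory key with a trailing "/", which whoever writes the patch would need to confirm against what s3a actually stores:

{code}
// one LIST with the directory prefix, instead of a HEAD on the fake "dir/" object:
//   only the marker key present -> genuinely empty directory
//   any other key present       -> the directory has children, whatever the marker says
ListObjectsRequest request = new ListObjectsRequest()
    .withBucketName(bucket)
    .withPrefix(dirKey + "/")
    .withMaxKeys(2);                        // two keys are enough to answer "empty or not?"
ObjectListing listing = s3.listObjects(request);
boolean hasChildren = false;
for (S3ObjectSummary summary : listing.getObjectSummaries()) {
  if (!summary.getKey().equals(dirKey + "/")) {
    hasChildren = true;                     // a real child, not the fake marker
    break;
  }
}
{code}

Still one round trip, but a more expensive one than the HEAD it replaces, hence the cost argument above.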

Note that any patches against S3 will need to be tested by you before anyone will look at them:

https://wiki.apache.org/hadoop/HowToContribute#Submitting_patches_against_object_stores_such_as_Amazon_S3.2C_OpenStack_Swift_and_Microsoft_Azure

That's a policy which we ourselves have to abide by.




> s3a's use of fake empty directory blobs does not interoperate with other s3 tools
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-13230
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13230
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 2.9.0
>            Reporter: Aaron Fabbri
>
> Users of s3a may not realize that, in some cases, it does not interoperate well with other s3 tools, such as the AWS CLI.  (See HIVE-13778, IMPALA-3558).
> Specifically, if a user:
> - Creates an empty directory with hadoop fs -mkdir s3a://bucket/path
> - Copies data into that directory via another tool, e.g. the AWS CLI.
> - Tries to access the data in that directory with any Hadoop software.
> Then the last step fails, because the fake empty directory blob that s3a wrote in the first step causes s3a (listStatus() etc.) to continue treating that directory as empty, even though the second step was supposed to populate the directory with data.
> I wanted to document this fact for users. We may mark this as won't-fix, "by design". It may also be interesting to brainstorm solutions and/or a config option to change the behavior if folks care.


