Posted to jira@arrow.apache.org by "Antoine Pitrou (Jira)" <ji...@apache.org> on 2020/09/08 09:49:00 UTC

[jira] [Commented] (ARROW-9935) New filesystem API unable to read empty S3 folders

    [ https://issues.apache.org/jira/browse/ARROW-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17192088#comment-17192088 ] 

Antoine Pitrou commented on ARROW-9935:
---------------------------------------

Have you tried using Arrow's own S3 filesystem implementation?

{code:python}
>>> from pyarrow.fs import S3FileSystem
>>> fs = S3FileSystem()
>>> fs.get_file_info("pyarrow-s3-empty-folder-file/mydataset")
{code}

(there may be more S3 configuration to do, because this doesn't seem to work here as-is: bad region, perhaps?)
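
For reference, a minimal sketch of passing explicit S3 options; the region value and anonymous access below are assumptions for this public bucket, not something confirmed in the issue:

{code:python}
>>> from pyarrow.fs import S3FileSystem, FileSelector
>>> # region and anonymous access are assumptions; adjust to match the bucket
>>> fs = S3FileSystem(region="us-west-2", anonymous=True)
>>> fs.get_file_info(FileSelector("pyarrow-s3-empty-folder-file/mydataset", recursive=True))
{code}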

> New filesystem API unable to read empty S3 folders
> --------------------------------------------------
>
>                 Key: ARROW-9935
>                 URL: https://issues.apache.org/jira/browse/ARROW-9935
>             Project: Apache Arrow
>          Issue Type: Bug
>    Affects Versions: 1.0.0
>            Reporter: Weston Pace
>            Priority: Minor
>         Attachments: arrow_9935.py
>
>
> When an empty "folder" is created in S3 using the online bucket explorer tool in the management console, it creates a special empty file with the same name as the folder.
> (Some more details here: [https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html])
> If parquet files are later loaded into one of these directories (with or without partitioning subdirectories), then this dataset cannot be read by the new dataset API.  The underlying s3fs `find` method returns a "file" object with size 0 that pyarrow then attempts to read.  Since this file doesn't truly exist, a FileNotFoundError is thrown.
> Would it be safe to simply ignore all files with size 0?
> As a workaround, I can wrap s3fs' find method and strip out these objects with size 0 myself (a sketch of this idea follows below the quoted description).
> I've attached a script showing the issue and a workaround.  It uses a public bucket that I'll leave up for a few months.
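
A minimal sketch of the size-0 filtering workaround described above (this is not the attached arrow_9935.py; the anonymous-access flag and the dataset path are assumptions):

{code:python}
import s3fs
import pyarrow.dataset as ds

fs = s3fs.S3FileSystem(anon=True)  # anonymous access is an assumption here

# Wrap s3fs' find() so the zero-size placeholder objects created by the S3
# console's "Create folder" action are dropped before pyarrow sees them.
original_find = fs.find

def find_without_placeholders(path, *args, **kwargs):
    out = original_find(path, *args, **kwargs)
    if isinstance(out, dict):  # detail=True returns {path: info}
        return {k: v for k, v in out.items()
                if not (v.get("type") == "file" and v.get("size") == 0)}
    return out

fs.find = find_without_placeholders

# Illustrative path only; the real bucket/prefix is in the attached script.
dataset = ds.dataset("pyarrow-s3-empty-folder-file/mydataset",
                     filesystem=fs, format="parquet")
print(dataset.to_table().num_rows)
{code}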



--
This message was sent by Atlassian Jira
(v8.3.4#803005)