Posted to jira@arrow.apache.org by "Weston Pace (Jira)" <ji...@apache.org> on 2020/09/08 03:00:00 UTC

[jira] [Created] (ARROW-9935) New filesystem API unable to read empty S3 folders

Weston Pace created ARROW-9935:
----------------------------------

             Summary: New filesystem API unable to read empty S3 folders
                 Key: ARROW-9935
                 URL: https://issues.apache.org/jira/browse/ARROW-9935
             Project: Apache Arrow
          Issue Type: Bug
    Affects Versions: 1.0.0
            Reporter: Weston Pace
         Attachments: arrow_453.py, arrow_9935.py

When an empty "folder" is created in S3 using the bucket explorer in the AWS Management Console, S3 creates a special zero-byte object with the same name as the folder.

(Some more details here: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/using-folders.html)
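
For illustration, this is roughly what the console's "Create folder" does under the hood (a hypothetical sketch using boto3; the bucket and folder names are made up):

    import boto3

    s3 = boto3.client("s3")
    # The console stores a zero-byte object whose key is the folder
    # name with a trailing slash; there is no real file behind it.
    s3.put_object(Bucket="some-bucket", Key="some-folder/", Body=b"")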

If Parquet files are later loaded into one of these folders (with or without partitioning subdirectories), the resulting dataset cannot be read by the new dataset API.  The underlying s3fs `find` method returns a "file" entry with size 0 that pyarrow then attempts to read.  Since this file doesn't truly exist, a FileNotFoundError is thrown.
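
A minimal sketch of the failure mode (hypothetical bucket and prefix; assumes fsspec's `find(..., detail=True)` keyword):

    import s3fs
    import pyarrow.dataset as ds

    fs = s3fs.S3FileSystem(anon=True)

    # find() lists the zero-byte folder marker alongside the real files.
    for path, info in fs.find("some-bucket/some-folder", detail=True).items():
        print(path, info["size"])  # the marker shows up with size 0

    # This raises FileNotFoundError, because pyarrow tries to open the
    # zero-byte marker as if it were a Parquet file.
    dataset = ds.dataset("some-bucket/some-folder", filesystem=fs,
                         format="parquet")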

Would it be safe to simply ignore all files with size 0?

As a workaround, I can wrap s3fs's `find` method and strip out these size-0 objects myself.
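
Something along these lines (a sketch assuming fsspec's `find` signature, not the attached script verbatim):

    import s3fs

    class S3FileSystemPatched(s3fs.S3FileSystem):
        # Same as S3FileSystem.find, but drops the zero-byte folder
        # markers created by the S3 console so pyarrow never tries to
        # open them as data files.
        def find(self, path, maxdepth=None, withdirs=False, detail=False,
                 **kwargs):
            results = super().find(path, maxdepth=maxdepth,
                                   withdirs=withdirs, detail=True, **kwargs)
            results = {p: info for p, info in results.items()
                       if info["size"] > 0}
            return results if detail else sorted(results)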

I've attached a script showing the issue and a workaround.  It uses a public bucket that I'll leave up for a few months.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)