Posted to issues@arrow.apache.org by "Wes McKinney (JIRA)" <ji...@apache.org> on 2017/09/19 14:02:00 UTC

[jira] [Commented] (ARROW-1555) PyArrow write_to_dataset on s3

    [ https://issues.apache.org/jira/browse/ARROW-1555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16171755#comment-16171755 ] 

Wes McKinney commented on ARROW-1555:
-------------------------------------

cc [~fjetter]

This may not be too hard to fix -- I don't think that {{parquet.write_to_dataset}} has been tested with S3, so a patch to make this S3-friendly would be welcome. 
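
For reference, the call that a fix should make work looks roughly like the following. This is only a minimal sketch: the bucket path and table contents are placeholders, and it assumes the {{filesystem}} argument of {{write_to_dataset}} is the intended way to pass an s3fs instance through.

{code}
import pyarrow as pa
import pyarrow.parquet as pq
import s3fs

# A small placeholder table to write.
table = pa.Table.from_arrays([pa.array([1, 2, 3])], names=['x'])

# s3fs exposes S3 through a Python filesystem interface.
fs = s3fs.S3FileSystem()

# Today this path fails: _ensure_filesystem wraps fs in S3FSWrapper,
# whose missing exists() falls back to a NotImplementedError.
pq.write_to_dataset(table, root_path='my-bucket/dataset', filesystem=fs)
{code}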

> PyArrow write_to_dataset on s3
> ------------------------------
>
>                 Key: ARROW-1555
>                 URL: https://issues.apache.org/jira/browse/ARROW-1555
>             Project: Apache Arrow
>          Issue Type: Bug
>    Affects Versions: 0.7.0
>            Reporter: Young-Jun Ko
>            Priority: Trivial
>             Fix For: 0.8.0
>
>
> When writing an Arrow table to S3, I get a NotImplementedError.
> The root cause is in _ensure_filesystem and can be reproduced as follows:
> import pyarrow
> import pyarrow.parquet as pqa
> import s3fs
> s3 = s3fs.S3FileSystem()
> pqa._ensure_filesystem(s3).exists("anything")
> It appears that the S3FSWrapper instantiated in _ensure_filesystem does not expose the exists method of s3.
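
The root cause described above suggests one possible direction for a patch. The following is a hedged sketch only, not the actual fix; it assumes the wrapper keeps the underlying s3fs instance on {{self.fs}}, in line with how the DaskFileSystem-based wrappers store the wrapped filesystem:

{code}
# Sketch: forward exists() from the wrapper to the wrapped s3fs object,
# shown here as a monkey-patch workaround. The attribute name self.fs
# is an assumption based on the report above.
from pyarrow.filesystem import S3FSWrapper

def exists(self, path):
    # Delegate to the underlying s3fs.S3FileSystem instead of falling
    # through to the base class, which raises NotImplementedError.
    return self.fs.exists(path)

S3FSWrapper.exists = exists
{code}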


