Posted to jira@arrow.apache.org by "David Li (Jira)" <ji...@apache.org> on 2021/06/22 13:17:00 UTC

[jira] [Resolved] (ARROW-13034) [Python][Docs] Update outdated examples for hdfs/azure on the Parquet doc page

     [ https://issues.apache.org/jira/browse/ARROW-13034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Li resolved ARROW-13034.
------------------------------
    Fix Version/s: 5.0.0
       Resolution: Fixed

Issue resolved by pull request 10548
[https://github.com/apache/arrow/pull/10548]

> [Python][Docs] Update outdated examples for hdfs/azure on the Parquet doc page
> ------------------------------------------------------------------------------
>
>                 Key: ARROW-13034
>                 URL: https://issues.apache.org/jira/browse/ARROW-13034
>             Project: Apache Arrow
>          Issue Type: Improvement
>          Components: Python
>            Reporter: Joris Van den Bossche
>            Assignee: Joris Van den Bossche
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 5.0.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> From https://github.com/apache/arrow/issues/10492
> - The chapter "Writing to Partitioned Datasets" still presents a "solution" with "hdfs.connect" but since it's mentioned as deprecated no more a good idea to mention it.
> - The chapter "Reading a Parquet File from Azure Blob storage" is based on the package "azure.storage.blob" ... but an old one and the actual "azure-sdk-for-python" doesn't have any-more methods like get_blob_to_stream(). Possible to update this part with new blob storage possibilities, and also another mentioning the same concept with Delta Lake (similar principle but since there are differences ...)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)