Posted to github@arrow.apache.org by GitBox <gi...@apache.org> on 2020/08/27 14:43:57 UTC

[GitHub] [arrow] jorisvandenbossche opened a new pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

jorisvandenbossche opened a new pull request #8065:
URL: https://github.com/apache/arrow/pull/8065


   





[GitHub] [arrow] pitrou commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
pitrou commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r481256449



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,87 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI
+   pq.read_table("s3://my-bucket")
+   # using a path and filesystem
+   s3 = fs.S3FileSystem(..)
+   pq.read_table("my-bucket", filesystem=s3)
+
+The filesystem interface further allows opening files for reading (input) or
+writing (output) directly, which can be combined with functions that work with
+file-like objects. For example::
+
+   with local.open_output_stream("test.arrow") as file:
+      with pa.RecordBatchFileWriter(file, table.schema) as writer:
+         writer.write_table(table)
+
+
+Listing files
+~~~~~~~~~~~~~
+
+Inspecting the directories and files on a filesystem can be done with the
+:meth:`FileSystem.get_file_info` method. To list the contents of a
+directory, use the :class:`FileSelector` object to specify the selection::
+
+   >>> local.get_file_info(fs.FileSelector("dataset/", recursive=True))
+   [<FileInfo for 'dataset/part=B': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=B/data0.parquet': type=FileType.File, size=1564>,
+    <FileInfo for 'dataset/part=A': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=A/data0.parquet': type=FileType.File, size=1564>]
+
+This returns a list of :class:`FileInfo` objects, containing information about
+the type (file or directory), the size, the date last modified, etc.
+
+You can also get this information for a single explicit path (or list of
+paths)::
+
+   >>> local.get_file_info(['test.arrow'])[0]
+   <FileInfo for 'test.arrow': type=FileType.File, size=3250>
+
+   >>> local.get_file_info(['non_existent'])
+   [<FileInfo for 'non_existent': type=FileType.NotFound>]
 
-Example
--------
+S3
+--
 
-Assuming your S3 credentials are correctly configured (for example by setting
-the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables),
-here is how you can read contents from a S3 bucket::
+The :class:`S3FileSystem` constructor has several options to configure the S3
+connection. In addition, it will also read configured S3 credentials (for
+example by setting the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``

Review comment:
       Perhaps add a `.. seealso::` with those links?







[GitHub] [arrow] pitrou commented on pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
pitrou commented on pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#issuecomment-685503840


   Thank you! This is a great improvement!





[GitHub] [arrow] jorisvandenbossche commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
jorisvandenbossche commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r480948273



##########
File path: docs/source/python/filesystems.rst
##########
@@ -69,3 +135,66 @@ here is how you can read contents from a S3 bucket::
    >>> f = s3.open_input_stream('my-test-bucket/Dir1/File2')
    >>> f.readall()
    b'some data'
+
+
+Hadoop File System (HDFS)
+-------------------------
+
+PyArrow comes with bindings to the Hadoop File System (based on C++ bindings
+using ``libhdfs``, a JNI-based interface to the Java Hadoop client). You connect
+using the `class`:HadoopFileSystem: constructor::

Review comment:
       Ah, yes, I switched the `` ` `` and `` : `` order .. :)







[GitHub] [arrow] jorisvandenbossche commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
jorisvandenbossche commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r481394184



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,90 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI -> filesystem is inferred
+   pq.read_table("s3://my-bucket")

Review comment:
       `read_table` also works for "directories" (and reads the full directory into a table), but indeed for the example reading a file is fine
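
    For illustration, a minimal sketch of both cases (bucket, prefix and file
    names are hypothetical; credentials are assumed to be configured):

        import pyarrow.parquet as pq
        from pyarrow import fs

        s3 = fs.S3FileSystem(region="us-east-1")

        # reading a single Parquet file
        table = pq.read_table("my-bucket/data.parquet", filesystem=s3)

        # reading a "directory" (prefix): all Parquet files underneath
        # are read and combined into a single table
        table = pq.read_table("my-bucket/dataset/", filesystem=s3)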







[GitHub] [arrow] pitrou closed pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
pitrou closed pull request #8065:
URL: https://github.com/apache/arrow/pull/8065


   





[GitHub] [arrow] jorisvandenbossche commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
jorisvandenbossche commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r478678307



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,87 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI
+   pq.read_table("s3://my-bucket")
+   # using a path and filesystem
+   s3 = fs.S3FileSystem(..)
+   pq.read_table("my-bucket", filesystem=s3)
+
+The filesystem interface further allows opening files for reading (input) or
+writing (output) directly, which can be combined with functions that work with
+file-like objects. For example::
+
+   with local.open_output_stream("test.arrow") as file:
+      with pa.RecordBatchFileWriter(file, table.schema) as writer:
+         writer.write_table(table)
+
+
+Listing files
+~~~~~~~~~~~~~
+
+Inspecting the directories and files on a filesystem can be done with the
+:meth:`FileSystem.get_file_info` method. To list the contents of a
+directory, use the :class:`FileSelector` object to specify the selection::
+
+   >>> local.get_file_info(fs.FileSelector("dataset/", recursive=True))
+   [<FileInfo for 'dataset/part=B': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=B/data0.parquet': type=FileType.File, size=1564>,
+    <FileInfo for 'dataset/part=A': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=A/data0.parquet': type=FileType.File, size=1564>]
+
+This returns a list of :class:`FileInfo` objects, containing information about
+the type (file or directory), the size, the date last modified, etc.
+
+You can also get this information for a single explicit path (or list of
+paths)::
+
+   >>> local.get_file_info(['test.arrow'])[0]
+   <FileInfo for 'test.arrow': type=FileType.File, size=3250>
+
+   >>> local.get_file_info(['non_existent'])
+   [<FileInfo for 'non_existent': type=FileType.NotFound>]
 
-Example
--------
+S3
+--
 
-Assuming your S3 credentials are correctly configured (for example by setting
-the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables),
-here is how you can read contents from a S3 bucket::
+The :class:`S3FileSystem` constructor has several options to configure the S3
+connection. In addition, it will also read configured S3 credentials (for
+example by setting the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``

Review comment:
       It might make sense to link to one of those pages, but I don't know how stable those links are ..







[GitHub] [arrow] nealrichardson commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
nealrichardson commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r478656124



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,87 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI
+   pq.read_table("s3://my-bucket")
+   # using a path and filesystem
+   s3 = fs.S3FileSystem(..)
+   pq.read_table("my-bucket", filesystem=s3)
+
+The filesystem interface further allows opening files for reading (input) or
+writing (output) directly, which can be combined with functions that work with
+file-like objects. For example::
+
+   with local.open_output_stream("test.arrow") as file:
+      with pa.RecordBatchFileWriter(file, table.schema) as writer:
+         writer.write_table(table)
+
+
+Listing files
+~~~~~~~~~~~~~
+
+Inspecting the directories and files on a filesystem can be done with the
+:meth:`FileSystem.get_file_info` method. To list the contents of a
+directory, use the :class:`FileSelector` object to specify the selection::
+
+   >>> local.get_file_info(fs.FileSelector("dataset/", recursive=True))
+   [<FileInfo for 'dataset/part=B': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=B/data0.parquet': type=FileType.File, size=1564>,
+    <FileInfo for 'dataset/part=A': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=A/data0.parquet': type=FileType.File, size=1564>]
+
+This returns a list of :class:`FileInfo` objects, containing information about
+the type (file or directory), the size, the date last modified, etc.
+
+You can also get this information for a single explicit path (or list of
+paths)::
+
+   >>> local.get_file_info(['test.arrow'])[0]
+   <FileInfo for 'test.arrow': type=FileType.File, size=3250>
+
+   >>> local.get_file_info(['non_existent'])
+   [<FileInfo for 'non_existent': type=FileType.NotFound>]
 
-Example
--------
+S3
+--
 
-Assuming your S3 credentials are correctly configured (for example by setting
-the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables),
-here is how you can read contents from a S3 bucket::
+The :class:`S3FileSystem` constructor has several options to configure the S3
+connection. In addition, it will also read configured S3 credentials (for
+example by setting the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``

Review comment:
       How does this work? I don't see these env vars in our code; is this an aws-sdk-cpp feature?







[GitHub] [arrow] pitrou commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
pitrou commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r481253760



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,90 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI -> filesystem is inferred
+   pq.read_table("s3://my-bucket")

Review comment:
       I think the example may look strange, since I don't think a bucket can be a file... Perhaps "s3://my-bucket/my-file" or something?
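
    For instance, a sketch of the suggested form (URI and file name are
    hypothetical; the S3 filesystem is inferred from the URI scheme):

        import pyarrow.parquet as pq

        table = pq.read_table("s3://my-bucket/my-file.parquet")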







[GitHub] [arrow] jorisvandenbossche commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
jorisvandenbossche commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r478678088



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,87 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI
+   pq.read_table("s3://my-bucket")
+   # using a path and filesystem
+   s3 = fs.S3FileSystem(..)
+   pq.read_table("my-bucket", filesystem=s3)
+
+The filesystem interface further allows opening files for reading (input) or
+writing (output) directly, which can be combined with functions that work with
+file-like objects. For example::
+
+   with local.open_output_stream("test.arrow") as file:
+      with pa.RecordBatchFileWriter(file, table.schema) as writer:
+         writer.write_table(table)
+
+
+Listing files
+~~~~~~~~~~~~~
+
+Inspecting the directories and files on a filesystem can be done with the
+:meth:`FileSystem.get_file_info` method. To list the contents of a
+directory, use the :class:`FileSelector` object to specify the selection::
+
+   >>> local.get_file_info(fs.FileSelector("dataset/", recursive=True))
+   [<FileInfo for 'dataset/part=B': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=B/data0.parquet': type=FileType.File, size=1564>,
+    <FileInfo for 'dataset/part=A': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=A/data0.parquet': type=FileType.File, size=1564>]
+
+This returns a list of :class:`FileInfo` objects, containing information about
+the type (file or directory), the size, the date last modified, etc.
+
+You can also get this information for a single explicit path (or list of
+paths)::
+
+   >>> local.get_file_info(['test.arrow'])[0]
+   <FileInfo for 'test.arrow': type=FileType.File, size=3250>
+
+   >>> local.get_file_info(['non_existent'])
+   [<FileInfo for 'non_existent': type=FileType.NotFound>]
 
-Example
--------
+S3
+--
 
-Assuming your S3 credentials are correctly configured (for example by setting
-the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables),
-here is how you can read contents from a S3 bucket::
+The :class:`S3FileSystem` constructor has several options to configure the S3
+connection. In addition, it will also read configured S3 credentials (for
+example by setting the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``

Review comment:
       AWS provides several ways to configure credentials (and those are picked up by the SDK or CLI): https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html, https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
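
    As a sketch of one of those mechanisms (the environment variables; values
    are placeholders, and this assumes the SDK reads the process environment
    when the client is constructed):

        import os
        from pyarrow import fs

        # set before creating the filesystem so the SDK's default
        # credentials provider chain can pick them up
        os.environ["AWS_ACCESS_KEY_ID"] = "<access-key-id>"
        os.environ["AWS_SECRET_ACCESS_KEY"] = "<secret-access-key>"

        s3 = fs.S3FileSystem(region="us-east-1")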







[GitHub] [arrow] pitrou commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
pitrou commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r481256952



##########
File path: docs/source/python/filesystems.rst
##########
@@ -69,3 +138,66 @@ here is how you can read contents from a S3 bucket::
    >>> f = s3.open_input_stream('my-test-bucket/Dir1/File2')
    >>> f.readall()
    b'some data'
+
+
+Hadoop File System (HDFS)
+-------------------------
+
+PyArrow comes with bindings to the Hadoop File System (based on C++ bindings
+using ``libhdfs``, a JNI-based interface to the Java Hadoop client). You connect
+using the :class:`HadoopFileSystem` constructor::
+
+.. code-block:: python
+
+   from pyarrow import fs
+   hdfs = fs.HadoopFileSystem(host, port, user=user, kerb_ticket=ticket_cache_path)
+
+The ``libhdfs`` library is loaded **at runtime** (rather than at link / library
+load time, since the library may not be in your LD_LIBRARY_PATH), and relies on
+some environment variables.
+
+* ``HADOOP_HOME``: the root of your installed Hadoop distribution. Often has
+  `lib/native/libhdfs.so`.
+
+* ``JAVA_HOME``: the location of your Java SDK installation.
+
+* ``ARROW_LIBHDFS_DIR`` (optional): explicit location of ``libhdfs.so`` if it is
+  installed somewhere other than ``$HADOOP_HOME/lib/native``.
+
+* ``CLASSPATH``: must contain the Hadoop jars. You can set these using:
+
+  .. code-block:: shell
+
+      export CLASSPATH=`$HADOOP_HOME/bin/hdfs classpath --glob`
+
+  If ``CLASSPATH`` is not set, then it will be set automatically if the
+  ``hadoop`` executable is in your system path, or if ``HADOOP_HOME`` is set.
+
+
+Using fsspec-compatible filesystems
+-----------------------------------
+
+The filesystems mentioned above are natively supported by Arrow C++ / PyArrow.
+The Python ecosystem, however, also has several filesystem packages. Those
+packages following the
+`fsspec <https://filesystem-spec.readthedocs.io/en/latest/>`__ interface can be
+used in PyArrow as well.
+
+Functions accepting a filesystem object will also accept an fsspec subclass.
+For example::
+
+   # creating an ffspec-based filesystem object for Google Cloud Storage

Review comment:
       "fsspec" :-)







[GitHub] [arrow] jorisvandenbossche commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
jorisvandenbossche commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r478665527



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,87 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI
+   pq.read_table("s3://my-bucket")
+   # using a path and filesystem
+   s3 = fs.S3FileSystem(..)
+   pq.read_table("my-bucket", filesystem=s3)
+
+The filesystem interface further allows opening files for reading (input) or
+writing (output) directly, which can be combined with functions that work with
+file-like objects. For example::
+
+   with local.open_output_stream("test.arrow") as file:
+      with pa.RecordBatchFileWriter(file, table.schema) as writer:
+         writer.write_table(table)
+
+
+Listing files
+~~~~~~~~~~~~~
+
+Inspecting the directories and files on a filesystem can be done with the
+:meth:`FileSystem.get_file_info` method. To list the contents of a
+directory, use the :class:`FileSelector` object to specify the selection::
+
+   >>> local.get_file_info(fs.FileSelector("dataset/", recursive=True))
+   [<FileInfo for 'dataset/part=B': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=B/data0.parquet': type=FileType.File, size=1564>,
+    <FileInfo for 'dataset/part=A': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=A/data0.parquet': type=FileType.File, size=1564>]
+
+This returns a list of :class:`FileInfo` objects, containing information about
+the type (file or directory), the size, the date last modified, etc.
+
+You can also get this information for a single explicit path (or list of
+paths)::
+
+   >>> local.get_file_info(['test.arrow'])[0]
+   <FileInfo for 'test.arrow': type=FileType.File, size=3250>
+
+   >>> local.get_file_info(['non_existent'])
+   [<FileInfo for 'non_existent': type=FileType.NotFound>]
 
-Example
--------
+S3
+--
 
-Assuming your S3 credentials are correctly configured (for example by setting
-the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables),
-here is how you can read contents from a S3 bucket::
+The :class:`S3FileSystem` constructor has several options to configure the S3
+connection. In addition, it will also read configured S3 credentials (for
+example by setting the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``

Review comment:
       > is this an aws-sdk-cpp feature?
   
   I suppose so? (note that I didn't add this, this was already in the docs ;))







[GitHub] [arrow] pitrou commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
pitrou commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r478548600



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,87 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI
+   pq.read_table("s3://my-bucket")
+   # using a path and filesystem
+   s3 = fs.S3FileSystem(..)
+   pq.read_table("my-bucket", filesystem=s3)
+
+The filesystem interface further allows opening files for reading (input) or
+writing (output) directly, which can be combined with functions that work with
+file-like objects. For example::
+
+   with local.open_output_stream("test.arrow") as file:
+      with pa.RecordBatchFileWriter(file, table.schema) as writer:
+         writer.write_table(table)
+
+
+Listing files
+~~~~~~~~~~~~~
+
+Inspecting the directories and files on a filesystem can be done with the
+:meth:`FileSystem.get_file_info` method. To list the contents of a
+directory, use the :class:`FileSelector` object to specify the selection::
+
+   >>> local.get_file_info(fs.FileSelector("dataset/", recursive=True))
+   [<FileInfo for 'dataset/part=B': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=B/data0.parquet': type=FileType.File, size=1564>,
+    <FileInfo for 'dataset/part=A': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=A/data0.parquet': type=FileType.File, size=1564>]
+
+This returns a list of :class:`FileInfo` objects, containing information about
+the type (file or directory), the size, the date last modified, etc.
+
+You can also get this information for a single explicit path (or list of
+paths)::
+
+   >>> local.get_file_info(['test.arrow'])[0]
+   <FileInfo for 'test.arrow': type=FileType.File, size=3250>
+
+   >>> local.get_file_info(['non_existent'])
+   [<FileInfo for 'non_existent': type=FileType.NotFound>]
 
-Example
--------
+S3
+--
 
-Assuming your S3 credentials are correctly configured (for example by setting
-the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables),
-here is how you can read contents from a S3 bucket::
+The :class:`S3FileSystem` constructor has several options to configure the S3
+connection. In addition, it will also read configured S3 credentials (for
+example by setting the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``

Review comment:
       "by reading" or "by inspecting", no?

##########
File path: docs/source/python/filesystems.rst
##########
@@ -69,3 +135,66 @@ here is how you can read contents from a S3 bucket::
    >>> f = s3.open_input_stream('my-test-bucket/Dir1/File2')
    >>> f.readall()
    b'some data'
+
+
+Hadoop File System (HDFS)
+-------------------------
+
+PyArrow comes with bindings to the Hadoop File System (based on C++ bindings
+using ``libhdfs``, a JNI-based interface to the Java Hadoop client). You connect
+using the `class`:HadoopFileSystem: constructor::

Review comment:
       Looks like a markup typo in `class`.

##########
File path: docs/source/python/api/filesystems.rst
##########
@@ -41,3 +41,13 @@ Concrete Subclasses
    LocalFileSystem
    S3FileSystem
    HadoopFileSystem
+   SubTreeFileSystem
+
+To define filesystems with behavior implemented in Python.

Review comment:
       Either put a ":" at the end or make a full sentence?

##########
File path: docs/source/python/filesystems.rst
##########
@@ -69,3 +135,66 @@ here is how you can read contents from a S3 bucket::
    >>> f = s3.open_input_stream('my-test-bucket/Dir1/File2')
    >>> f.readall()
    b'some data'
+
+
+Hadoop File System (HDFS)
+-------------------------
+
+PyArrow comes with bindings to the Hadoop File System (based on C++ bindings
+using ``libhdfs``, a JNI-based interface to the Java Hadoop client). You connect
+using the `class`:HadoopFileSystem: constructor::
+
+.. code-block:: python
+
+   from pyarrow import fs
+   hdfs = fs.HadoopFileSystem(host, port, user=user, kerb_ticket=ticket_cache_path)
+
+The ``libhdfs`` library is loaded **at runtime**
+(rather than at link / library load time, since the library may not be in your
+LD_LIBRARY_PATH), and relies on some environment variables.
+
+* ``HADOOP_HOME``: the root of your installed Hadoop distribution. Often has
+  `lib/native/libhdfs.so`.
+
+* ``JAVA_HOME``: the location of your Java SDK installation.
+
+* ``ARROW_LIBHDFS_DIR`` (optional): explicit location of ``libhdfs.so`` if it is
+  installed somewhere other than ``$HADOOP_HOME/lib/native``.
+
+* ``CLASSPATH``: must contain the Hadoop jars. You can set these using:
+
+  .. code-block:: shell
+
+      export CLASSPATH=`$HADOOP_HOME/bin/hdfs classpath --glob`
+
+  If ``CLASSPATH`` is not set, then it will be set automatically if the
+  ``hadoop`` executable is in your system path, or if ``HADOOP_HOME`` is set.
+
+
+Using fsspec-compatible filesystems
+-----------------------------------
+
+The filesystems mentioned above are natively supported by Arrow C++ / PyArrow.
+The Python ecosystem, however, also has several filesystem packages. Those
+packages following the
+`fsspec <https://filesystem-spec.readthedocs.io/en/latest/>`__ interface can be

Review comment:
       Need only a single "_" at the end of the hyperlink markup, I believe?







[GitHub] [arrow] pitrou commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
pitrou commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r481251515



##########
File path: docs/source/python/filesystems.rst
##########
@@ -69,3 +135,66 @@ here is how you can read contents from a S3 bucket::
    >>> f = s3.open_input_stream('my-test-bucket/Dir1/File2')
    >>> f.readall()
    b'some data'
+
+
+Hadoop File System (HDFS)
+-------------------------
+
+PyArrow comes with bindings to the Hadoop File System (based on C++ bindings
+using ``libhdfs``, a JNI-based interface to the Java Hadoop client). You connect
+using the `class`:HadoopFileSystem: constructor::
+
+.. code-block:: python
+
+   from pyarrow import fs
+   hdfs = fs.HadoopFileSystem(host, port, user=user, kerb_ticket=ticket_cache_path)
+
+The ``libhdfs`` library is loaded **at runtime**
+(rather than at link / library load time, since the library may not be in your
+LD_LIBRARY_PATH), and relies on some environment variables.
+
+* ``HADOOP_HOME``: the root of your installed Hadoop distribution. Often has
+  `lib/native/libhdfs.so`.
+
+* ``JAVA_HOME``: the location of your Java SDK installation.
+
+* ``ARROW_LIBHDFS_DIR`` (optional): explicit location of ``libhdfs.so`` if it is
+  installed somewhere other than ``$HADOOP_HOME/lib/native``.
+
+* ``CLASSPATH``: must contain the Hadoop jars. You can set these using:
+
+  .. code-block:: shell
+
+      export CLASSPATH=`$HADOOP_HOME/bin/hdfs classpath --glob`
+
+  If ``CLASSPATH`` is not set, then it will be set automatically if the
+  ``hadoop`` executable is in your system path, or if ``HADOOP_HOME`` is set.
+
+
+Using fsspec-compatible filesystems
+-----------------------------------
+
+The filesystems mentioned above are natively supported by Arrow C++ / PyArrow.
+The Python ecosystem, however, also has several filesystem packages. Those
+packages following the
+`fsspec <https://filesystem-spec.readthedocs.io/en/latest/>`__ interface can be

Review comment:
       Ah, thank you, I've just learned something.







[GitHub] [arrow] pitrou commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
pitrou commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r481254797



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,90 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI -> filesystem is inferred
+   pq.read_table("s3://my-bucket")
+   # using a path and filesystem
+   s3 = fs.S3FileSystem(..)
+   pq.read_table("my-bucket", filesystem=s3)
+
+The filesystem interface further allows opening files for reading (input) or
+writing (output) directly, which can be combined with functions that work with
+file-like objects. For example::
+
+   local = fs.LocalFileSystem()
+
+   with local.open_output_stream("test.arrow") as file:
+      with pa.RecordBatchFileWriter(file, table.schema) as writer:
+         writer.write_table(table)
+
+
+Listing files
+~~~~~~~~~~~~~
+
+Inspecting the directories and files on a filesystem can be done with the
+:meth:`FileSystem.get_file_info` method. To list the contents of a directory,
+use the :class:`FileSelector` object to specify the selection::
+
+   >>> local.get_file_info(fs.FileSelector("dataset/", recursive=True))
+   [<FileInfo for 'dataset/part=B': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=B/data0.parquet': type=FileType.File, size=1564>,
+    <FileInfo for 'dataset/part=A': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=A/data0.parquet': type=FileType.File, size=1564>]
+
+This returns a list of :class:`FileInfo` objects, containing information about
+the type (file or directory), the size, the date last modified, etc.
+
+You can also get this information for a single explicit path (or list of
+paths)::
+
+   >>> local.get_file_info('test.arrow')
+   <FileInfo for 'test.arrow': type=FileType.File, size=3250>
+
+   >>> local.get_file_info('non_existent')
+   [<FileInfo for 'non_existent': type=FileType.NotFound>]

Review comment:
       Probably doesn't return a list here?
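
    A sketch of the distinction (file names are hypothetical): a single path
    yields a single `FileInfo` object, while a list of paths yields a list:

        from pyarrow import fs

        local = fs.LocalFileSystem()

        info = local.get_file_info("test.arrow")      # one FileInfo object
        infos = local.get_file_info(["test.arrow"])   # list with one FileInfo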







[GitHub] [arrow] nealrichardson commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
nealrichardson commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r478671300



##########
File path: docs/source/python/filesystems.rst
##########
@@ -34,21 +34,87 @@ underlying storage, are automatically dereferenced.  Only basic
 :class:`metadata <FileInfo>` about file entries, such as the file size
 and modification time, is made available.
 
-Types
+The core interface is represented by the base class :class:`FileSystem`.
+Concrete subclasses are available for various kinds of storage, such as local
+filesystem access (:class:`LocalFileSystem`), HDFS (:class:`HadoopFileSystem`)
+and Amazon S3-compatible storage (:class:`S3FileSystem`).
+
+
+Usage
 -----
 
-The core interface is represented by the base class :class:`FileSystem`.
-Concrete subclasses are available for various kinds of storage:
-:class:`local filesystem access <LocalFileSystem>`,
-:class:`HDFS <HadoopFileSystem>` and
-:class:`Amazon S3-compatible storage <S3FileSystem>`.
+A FileSystem object can be created with one of the constructors (check the
+respective constructor for its options)::
+
+   >>> from pyarrow import fs
+   >>> local = fs.LocalFileSystem()
+
+or alternatively inferred from a URI::
+
+   >>> s3, path = fs.FileSystem.from_uri("s3://my-bucket")
+   >>> s3
+   <pyarrow._s3fs.S3FileSystem at 0x7f6760cbf4f0>
+   >>> path
+   'my-bucket'
+
+
+Reading and writing files
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Several of the IO-related functions in PyArrow accept either a URI (and infer
+the filesystem) or an explicit ``filesystem`` argument to specify the filesystem
+to read or write from. For example, the :meth:`pyarrow.parquet.read_table`
+function can be used in the following ways::
+
+   # using a URI
+   pq.read_table("s3://my-bucket")
+   # using a path and filesystem
+   s3 = fs.S3FileSystem(..)
+   pq.read_table("my-bucket", filesystem=s3)
+
+The filesystem interface further allows opening files for reading (input) or
+writing (output) directly, which can be combined with functions that work with
+file-like objects. For example::
+
+   with local.open_output_stream("test.arrow") as file:
+      with pa.RecordBatchFileWriter(file, table.schema) as writer:
+         writer.write_table(table)
+
+
+Listing files
+~~~~~~~~~~~~~
+
+Inspecting the directories and files on a filesystem can be done with the
+:meth:`FileSystem.get_file_info` method. To list the contents of a
+directory, use the :class:`FileSelector` object to specify the selection::
+
+   >>> local.get_file_info(fs.FileSelector("dataset/", recursive=True))
+   [<FileInfo for 'dataset/part=B': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=B/data0.parquet': type=FileType.File, size=1564>,
+    <FileInfo for 'dataset/part=A': type=FileType.Directory>,
+    <FileInfo for 'dataset/part=A/data0.parquet': type=FileType.File, size=1564>]
+
+This returns a list of :class:`FileInfo` objects, containing information about
+the type (file or directory), the size, the date last modified, etc.
+
+You can also get this information for a single explicit path (or list of
+paths)::
+
+   >>> local.get_file_info(['test.arrow'])[0]
+   <FileInfo for 'test.arrow': type=FileType.File, size=3250>
+
+   >>> local.get_file_info(['non_existent'])
+   [<FileInfo for 'non_existent': type=FileType.NotFound>]
 
-Example
--------
+S3
+--
 
-Assuming your S3 credentials are correctly configured (for example by setting
-the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables),
-here is how you can read contents from a S3 bucket::
+The :class:`S3FileSystem` constructor has several options to configure the S3
+connection. In addition, it will also read configured S3 credentials (for
+example by setting the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY``

Review comment:
       Yeah I know, this was a more general question. It seems that the only place "AWS_ACCESS_KEY_ID" appears in our code is on this line, meaning we didn't implement it and we don't test it, so 🤷 how (or even if) it works.







[GitHub] [arrow] jorisvandenbossche commented on a change in pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
jorisvandenbossche commented on a change in pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#discussion_r480953625



##########
File path: docs/source/python/filesystems.rst
##########
@@ -69,3 +135,66 @@ here is how you can read contents from a S3 bucket::
    >>> f = s3.open_input_stream('my-test-bucket/Dir1/File2')
    >>> f.readall()
    b'some data'
+
+
+Hadoop File System (HDFS)
+-------------------------
+
+PyArrow comes with bindings to the Hadoop File System (based on C++ bindings
+using ``libhdfs``, a JNI-based interface to the Java Hadoop client). You connect
+using the `class`:HadoopFileSystem: constructor::
+
+.. code-block:: python
+
+   from pyarrow import fs
+   hdfs = fs.HadoopFileSystem(host, port, user=user, kerb_ticket=ticket_cache_path)
+
+The ``libhdfs`` library is loaded **at runtime**
+(rather than at link / library load time, since the library may not be in your
+LD_LIBRARY_PATH), and relies on some environment variables.
+
+* ``HADOOP_HOME``: the root of your installed Hadoop distribution. Often has
+  `lib/native/libhdfs.so`.
+
+* ``JAVA_HOME``: the location of your Java SDK installation.
+
+* ``ARROW_LIBHDFS_DIR`` (optional): explicit location of ``libhdfs.so`` if it is
+  installed somewhere other than ``$HADOOP_HOME/lib/native``.
+
+* ``CLASSPATH``: must contain the Hadoop jars. You can set these using:
+
+  .. code-block:: shell
+
+      export CLASSPATH=`$HADOOP_HOME/bin/hdfs classpath --glob`
+
+  If ``CLASSPATH`` is not set, then it will be set automatically if the
+  ``hadoop`` executable is in your system path, or if ``HADOOP_HOME`` is set.
+
+
+Using fsspec-compatible filesystems
+-----------------------------------
+
+The filesystems mentioned above are natively supported by Arrow C++ / PyArrow.
+The Python ecosystem, however, also has several filesystem packages. Those
+packages following the
+`fsspec <https://filesystem-spec.readthedocs.io/en/latest/>`__ interface can be

Review comment:
       The double underscore also works, and is called "anonymous hyperlink". IIRC, the above syntax is fine for one-off inline links (if you would use the same "fsspec" text somewhere else in a hyperlink in the docs, then sphinx would warn about a duplicate target name if using a single underscore)







[GitHub] [arrow] github-actions[bot] commented on pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
github-actions[bot] commented on pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#issuecomment-681996219


   https://issues.apache.org/jira/browse/ARROW-9858





[GitHub] [arrow] jorisvandenbossche commented on pull request #8065: ARROW-9858: [Python][Docs] Add user guide for filesystems interface

Posted by GitBox <gi...@apache.org>.
jorisvandenbossche commented on pull request #8065:
URL: https://github.com/apache/arrow/pull/8065#issuecomment-681995199


   There is probably still plenty to improve in wording and actual examples, but started putting some content on the page.
   
   @nealrichardson if you have specific aspects / use cases that you encountered while experimenting with it that you would add, all ears.

