Posted to commits@arrow.apache.org by am...@apache.org on 2022/05/26 09:11:50 UTC
[arrow] branch master updated: ARROW-16018: [Doc][Python] Run doctests on Python docstring examples (CI job)
This is an automated email from the ASF dual-hosted git repository.
amolina pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/arrow.git
The following commit(s) were added to refs/heads/master by this push:
new 7fda23c377 ARROW-16018: [Doc][Python] Run doctests on Python docstring examples (CI job)
7fda23c377 is described below
commit 7fda23c377657b9004317ebdf2524ee13f783fd3
Author: Alenka Frim <fr...@gmail.com>
AuthorDate: Thu May 26 11:11:38 2022 +0200
ARROW-16018: [Doc][Python] Run doctests on Python docstring examples (CI job)
This PR adds a CI job to test Python docstrings with `doctest`.
It can be tested with `archery docker run conda-python-docs`.
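As a minimal sketch of what pytest's `--doctest-modules` flag (passed via `PYTEST_ARGS` below) collects: any `>>>` examples in a module's docstrings are executed and their output compared against the expected text. The module and function here are illustrative, not part of pyarrow:

```python
def n_legs(animal):
    """Return the number of legs for a few known animals.

    Examples
    --------
    >>> n_legs("flamingo")
    2
    >>> n_legs("horse")
    4
    """
    return {"flamingo": 2, "horse": 4, "centipede": 100}[animal]


if __name__ == "__main__":
    import doctest
    # Runs every docstring example in this module and reports failures;
    # pytest --doctest-modules does the same across a whole package.
    print(doctest.testmod())
```

Running `pytest --doctest-modules example.py` on such a file would report two passing doctests.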
Closes #13216 from AlenkaF/ARROW-16018-CI
Lead-authored-by: Alenka Frim <fr...@gmail.com>
Co-authored-by: Alenka Frim <Al...@users.noreply.github.com>
Co-authored-by: Raúl Cumplido <ra...@gmail.com>
Signed-off-by: Alessandro Molina <am...@turbogears.org>
---
ci/conda_env_sphinx.txt | 2 ++
docker-compose.yml | 4 +++-
python/pyarrow/fs.py | 7 ++++---
python/pyarrow/parquet/__init__.py | 4 ++--
4 files changed, 11 insertions(+), 6 deletions(-)
diff --git a/ci/conda_env_sphinx.txt b/ci/conda_env_sphinx.txt
index 1d5b5ab7db..a776808510 100644
--- a/ci/conda_env_sphinx.txt
+++ b/ci/conda_env_sphinx.txt
@@ -24,3 +24,5 @@ pydata-sphinx-theme
sphinx-design
sphinx>=4.2
sphinx-copybutton
+# Requirement for doctest-cython
+pytest-cython
diff --git a/docker-compose.yml b/docker-compose.yml
index 406d310cbb..cfa70708bd 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -1055,12 +1055,14 @@ services:
LANG: "C.UTF-8"
BUILD_DOCS_CPP: "ON"
BUILD_DOCS_PYTHON: "ON"
+ PYTEST_ARGS: "--doctest-modules"
volumes: *conda-volumes
command:
["/arrow/ci/scripts/cpp_build.sh /arrow /build &&
/arrow/ci/scripts/python_build.sh /arrow /build &&
pip install -e /arrow/dev/archery[numpydoc] &&
- archery numpydoc --allow-rule PR01,PR10"]
+ archery numpydoc --allow-rule PR01,PR10 &&
+ /arrow/ci/scripts/python_test.sh /arrow"]
conda-python-dask:
# Possible $DASK parameters:
diff --git a/python/pyarrow/fs.py b/python/pyarrow/fs.py
index e391e7d1ee..f22eaf0304 100644
--- a/python/pyarrow/fs.py
+++ b/python/pyarrow/fs.py
@@ -224,12 +224,13 @@ def copy_files(source, destination,
--------
Copy an S3 bucket's files to a local directory:
- >>> copy_files("s3://your-bucket-name", "local-directory")
+ >>> copy_files("s3://your-bucket-name",
+ ... "local-directory") # doctest: +SKIP
Using a FileSystem object:
>>> copy_files("your-bucket-name", "local-directory",
- ... source_filesystem=S3FileSystem(...))
+ ... source_filesystem=S3FileSystem(...)) # doctest: +SKIP
"""
source_fs, source_path = _resolve_filesystem_and_path(
@@ -263,7 +264,7 @@ class FSSpecHandler(FileSystemHandler):
Examples
--------
- >>> PyFileSystem(FSSpecHandler(fsspec_fs))
+ >>> PyFileSystem(FSSpecHandler(fsspec_fs)) # doctest: +SKIP
"""
def __init__(self, fs):
diff --git a/python/pyarrow/parquet/__init__.py b/python/pyarrow/parquet/__init__.py
index 049f955f80..b4713a717c 100644
--- a/python/pyarrow/parquet/__init__.py
+++ b/python/pyarrow/parquet/__init__.py
@@ -1926,7 +1926,7 @@ Examples
Select pandas metadata:
>>> dataset.read_pandas(columns=["n_legs"]).schema.pandas_metadata
- {'index_columns': [{'kind': 'range', ... 'pandas_version': '1.4.1'}
+ {'index_columns': [{'kind': 'range', 'name': None, 'start': 0, ...}
"""
return self.read(use_pandas_metadata=True, **kwargs)
@@ -2486,7 +2486,7 @@ class _ParquetDatasetV2:
n_legs: [[2,2,4,4,5,100]]
>>> dataset.read_pandas(columns=["n_legs"]).schema.pandas_metadata
- {'index_columns': [{'kind': 'range', ... 'pandas_version': '1.4.1'}
+ {'index_columns': [{'kind': 'range', 'name': None, 'start': 0, ...}
"""
return self.read(use_pandas_metadata=True, **kwargs)
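The docstring changes above rely on two standard doctest mechanisms: `# doctest: +SKIP` marks examples that need external resources (an S3 bucket, an fsspec filesystem) so they are not executed, and `...` ellipsis matching truncates long expected outputs such as the `pandas_metadata` dict. A self-contained sketch of both, with an illustrative function rather than a pyarrow API:

```python
import doctest


def fetch_metadata():
    """Illustrative function demonstrating doctest directives.

    An example that would need network access is skipped entirely:

    >>> fetch_metadata()  # doctest: +SKIP

    With the ELLIPSIS option enabled, '...' in the expected output
    matches any trailing text, so long reprs can be truncated:

    >>> {'kind': 'range', 'name': None, 'start': 0, 'stop': 6, 'step': 1}
    {'kind': 'range', 'name': None, ...}
    """
    raise RuntimeError("would hit the network")


if __name__ == "__main__":
    # ELLIPSIS must be enabled for the '...' pattern to match;
    # pytest can enable it project-wide via doctest_optionflags.
    print(doctest.testmod(optionflags=doctest.ELLIPSIS))
```

Without `+SKIP`, the first example would raise and fail the build; without ELLIPSIS, the truncated dict would be compared literally and fail.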