Posted to commits@spark.apache.org by gu...@apache.org on 2020/02/14 02:01:47 UTC
[spark] branch branch-3.0 updated: [PYSPARK][DOCS][MINOR] Changed `:func:` to `:attr:` Sphinx roles, fixed links in documentation of `Data{Frame,Stream}{Reader,Writer}`
This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/branch-3.0 by this push:
new 35539ca [PYSPARK][DOCS][MINOR] Changed `:func:` to `:attr:` Sphinx roles, fixed links in documentation of `Data{Frame,Stream}{Reader,Writer}`
35539ca is described below
commit 35539cad17fd2b425ba8f7a7e298e9805541aa73
Author: David Toneian <da...@toneian.com>
AuthorDate: Fri Feb 14 11:00:35 2020 +0900
[PYSPARK][DOCS][MINOR] Changed `:func:` to `:attr:` Sphinx roles, fixed links in documentation of `Data{Frame,Stream}{Reader,Writer}`
This commit is published into the public domain.
### What changes were proposed in this pull request?
This PR fixes the documentation of `DataFrameReader`, `DataFrameWriter`, `DataStreamReader`, and `DataStreamWriter`, where attributes of other classes were misrepresented as functions. Additionally, creation of hyperlinks across modules was fixed in these instances.
### Why are the changes needed?
The old state produced documentation that suggested invalid usage of PySpark objects, presenting attributes as though they were callable.
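To illustrate the distinction the fix addresses: `SparkSession.read` is a property, so it must be documented with the `:attr:` role and accessed without parentheses. The sketch below is a minimal stand-in (not PySpark itself; the class bodies are hypothetical) showing why the `:func:`-style usage the old docs implied would fail:

```python
class DataFrameReader:
    """Stand-in for pyspark.sql.DataFrameReader."""
    def csv(self, path):
        return f"DataFrame loaded from {path}"


class SparkSession:
    """Stand-in for pyspark.sql.SparkSession."""
    @property
    def read(self):
        # Documented with :attr:`SparkSession.read`, not :func:`spark.read`,
        # because it is accessed as an attribute, without parentheses.
        return DataFrameReader()


spark = SparkSession()

# Correct: attribute access, as the fixed docs suggest.
df = spark.read.csv("data.csv")
print(df)  # prints "DataFrame loaded from data.csv"

# Incorrect: treating the attribute as callable, as the old docs implied.
try:
    spark.read()
except TypeError as e:
    print("TypeError:", e)
```

The same pattern applies to `DataFrame.write`, `SparkSession.readStream`, and `DataFrame.writeStream` touched by this commit.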
### Does this PR introduce any user-facing change?
No, except for improved documentation.
### How was this patch tested?
No tests added; the documentation build completes successfully.
Closes #27553 from DavidToneian/docfix-DataFrameReader-DataFrameWriter-DataStreamReader-DataStreamWriter.
Authored-by: David Toneian <da...@toneian.com>
Signed-off-by: HyukjinKwon <gu...@apache.org>
(cherry picked from commit 25db8c71a2100c167b8c2d7a6c540ebc61db9b73)
Signed-off-by: HyukjinKwon <gu...@apache.org>
---
python/pyspark/sql/readwriter.py | 4 ++--
python/pyspark/sql/streaming.py | 11 ++++++-----
2 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/python/pyspark/sql/readwriter.py b/python/pyspark/sql/readwriter.py
index 3d3280d..6966039 100644
--- a/python/pyspark/sql/readwriter.py
+++ b/python/pyspark/sql/readwriter.py
@@ -48,7 +48,7 @@ class OptionUtils(object):
class DataFrameReader(OptionUtils):
"""
Interface used to load a :class:`DataFrame` from external storage systems
- (e.g. file systems, key-value stores, etc). Use :func:`spark.read`
+ (e.g. file systems, key-value stores, etc). Use :attr:`SparkSession.read`
to access this.
.. versionadded:: 1.4
@@ -616,7 +616,7 @@ class DataFrameReader(OptionUtils):
class DataFrameWriter(OptionUtils):
"""
Interface used to write a :class:`DataFrame` to external storage systems
- (e.g. file systems, key-value stores, etc). Use :func:`DataFrame.write`
+ (e.g. file systems, key-value stores, etc). Use :attr:`DataFrame.write`
to access this.
.. versionadded:: 1.4
diff --git a/python/pyspark/sql/streaming.py b/python/pyspark/sql/streaming.py
index f17a52f..5fced8a 100644
--- a/python/pyspark/sql/streaming.py
+++ b/python/pyspark/sql/streaming.py
@@ -276,9 +276,9 @@ class StreamingQueryManager(object):
class DataStreamReader(OptionUtils):
"""
- Interface used to load a streaming :class:`DataFrame` from external storage systems
- (e.g. file systems, key-value stores, etc). Use :func:`spark.readStream`
- to access this.
+ Interface used to load a streaming :class:`DataFrame <pyspark.sql.DataFrame>` from external
+ storage systems (e.g. file systems, key-value stores, etc).
+ Use :attr:`SparkSession.readStream <pyspark.sql.SparkSession.readStream>` to access this.
.. note:: Evolving.
@@ -750,8 +750,9 @@ class DataStreamReader(OptionUtils):
class DataStreamWriter(object):
"""
- Interface used to write a streaming :class:`DataFrame` to external storage systems
- (e.g. file systems, key-value stores, etc). Use :func:`DataFrame.writeStream`
+ Interface used to write a streaming :class:`DataFrame <pyspark.sql.DataFrame>` to external
+ storage systems (e.g. file systems, key-value stores, etc).
+ Use :attr:`DataFrame.writeStream <pyspark.sql.DataFrame.writeStream>`
to access this.
.. note:: Evolving.
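For context, the link fixes in the hunks above use Sphinx's explicit-target form of a cross-reference role, in which the displayed text and the fully qualified target are given separately so links resolve across modules. A minimal reStructuredText sketch of the two forms (the identifiers mirror those in the diff):

```rst
:attr:`read`
    Short form: link text and target are both "read", resolved in the
    current module's scope.

:attr:`SparkSession.read <pyspark.sql.SparkSession.read>`
    Explicit-target form: displays "SparkSession.read" but links to the
    fully qualified object, working from any module's documentation.
```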