Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/08/15 10:29:43 UTC
[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945552408
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
A name for your job, to display on the cluster web UI.
- sparkHome : str, optional
+ sparkHome : str, optional, default None
Location where Spark is installed on cluster nodes.
- pyFiles : list, optional
+ pyFiles : list, optional, default None
Collection of .zip or .py files to send to the cluster
and add to PYTHONPATH. These can be paths on the local file
system or HDFS, HTTP, HTTPS, or FTP URLs.
- environment : dict, optional
+ environment : dict, optional, default None
A dictionary of environment variables to set on
worker nodes.
- batchSize : int, optional
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. Set 1 to disable batching, 0 to automatically choose
the batch size based on object sizes, or -1 to use an unlimited
batch size
- serializer : :class:`pyspark.serializers.Serializer`, optional
+ serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
Review Comment:
```suggestion
serializer : :class:`Serializer`, optional, default :class:`CPickleSerializer`
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
A name for your job, to display on the cluster web UI.
- sparkHome : str, optional
+ sparkHome : str, optional, default None
Location where Spark is installed on cluster nodes.
- pyFiles : list, optional
+ pyFiles : list, optional, default None
Collection of .zip or .py files to send to the cluster
and add to PYTHONPATH. These can be paths on the local file
system or HDFS, HTTP, HTTPS, or FTP URLs.
- environment : dict, optional
+ environment : dict, optional, default None
A dictionary of environment variables to set on
worker nodes.
- batchSize : int, optional
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. Set 1 to disable batching, 0 to automatically choose
the batch size based on object sizes, or -1 to use an unlimited
batch size
- serializer : :class:`pyspark.serializers.Serializer`, optional
+ serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
The serializer for RDDs.
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
An object setting Spark properties.
- gateway : :py:class:`py4j.java_gateway.JavaGateway`, optional
+ gateway : :py:class:`py4j.java_gateway.JavaGateway`, optional, default None
Use an existing gateway and JVM, otherwise a new JVM
will be instantiated. This is only used internally.
- jsc : :py:class:`py4j.java_gateway.JavaObject`, optional
+ jsc : :py:class:`py4j.java_gateway.JavaObject`, optional, default None
The JavaSparkContext instance. This is only used internally.
- profiler_cls : type, optional
+ profiler_cls : type, optional, default :class:`pyspark.profiler.BasicProfiler`
Review Comment:
```suggestion
profiler_cls : type, optional, default :class:`BasicProfiler`
```
##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
@classmethod
def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
"""
- Get or instantiate a SparkContext and register it as a singleton object.
+ Get or instantiate a `SparkContext` and register it as a singleton object.
+
+ .. versionadded:: 1.4.0
Parameters
----------
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
+ `SparkConf` that will be used for initialisation of the `SparkContext`.
Review Comment:
```suggestion
:class:`SparkConf` that will be used for initialization of the :class:`SparkContext`.
```
##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
@classmethod
def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
"""
- Get or instantiate a SparkContext and register it as a singleton object.
+ Get or instantiate a `SparkContext` and register it as a singleton object.
+
+ .. versionadded:: 1.4.0
Parameters
----------
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
Review Comment:
```suggestion
conf : :class:`SparkConf`, optional, default None
```
##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
@classmethod
def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
"""
- Get or instantiate a SparkContext and register it as a singleton object.
+ Get or instantiate a `SparkContext` and register it as a singleton object.
Review Comment:
```suggestion
Get or instantiate a :class:`SparkContext` and register it as a singleton object.
```
##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
@classmethod
def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
"""
- Get or instantiate a SparkContext and register it as a singleton object.
+ Get or instantiate a `SparkContext` and register it as a singleton object.
+
+ .. versionadded:: 1.4.0
Parameters
----------
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
+ `SparkConf` that will be used for initialisation of the `SparkContext`.
+
+ Returns
+ -------
+ :class:`pyspark.context.SparkContext`
Review Comment:
```suggestion
:class:`SparkContext`
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
A name for your job, to display on the cluster web UI.
- sparkHome : str, optional
+ sparkHome : str, optional, default None
Location where Spark is installed on cluster nodes.
- pyFiles : list, optional
+ pyFiles : list, optional, default None
Collection of .zip or .py files to send to the cluster
and add to PYTHONPATH. These can be paths on the local file
system or HDFS, HTTP, HTTPS, or FTP URLs.
- environment : dict, optional
+ environment : dict, optional, default None
A dictionary of environment variables to set on
worker nodes.
- batchSize : int, optional
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. Set 1 to disable batching, 0 to automatically choose
the batch size based on object sizes, or -1 to use an unlimited
batch size
- serializer : :class:`pyspark.serializers.Serializer`, optional
+ serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
The serializer for RDDs.
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
An object setting Spark properties.
- gateway : :py:class:`py4j.java_gateway.JavaGateway`, optional
+ gateway : :py:class:`py4j.java_gateway.JavaGateway`, optional, default None
Use an existing gateway and JVM, otherwise a new JVM
will be instantiated. This is only used internally.
- jsc : :py:class:`py4j.java_gateway.JavaObject`, optional
+ jsc : :py:class:`py4j.java_gateway.JavaObject`, optional, default None
The JavaSparkContext instance. This is only used internally.
- profiler_cls : type, optional
+ profiler_cls : type, optional, default :class:`pyspark.profiler.BasicProfiler`
A class of custom Profiler used to do profiling
- (default is :class:`pyspark.profiler.BasicProfiler`).
- udf_profiler_cls : type, optional
+ udf_profiler_cls : type, optional, default :class:`pyspark.profiler.UDFBasicProfiler`
Review Comment:
```suggestion
udf_profiler_cls : type, optional, default :class:`UDFBasicProfiler`
```
##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
@classmethod
def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
"""
- Get or instantiate a SparkContext and register it as a singleton object.
+ Get or instantiate a `SparkContext` and register it as a singleton object.
+
+ .. versionadded:: 1.4.0
Parameters
----------
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
+ `SparkConf` that will be used for initialisation of the `SparkContext`.
+
+ Returns
+ -------
+ :class:`pyspark.context.SparkContext`
+ current `SparkContext`, or a new one if it wasn't created before the function call.
Review Comment:
```suggestion
current :class:`SparkContext`, or a new one if it wasn't created before the function
call.
```
##########
python/pyspark/context.py:
##########
@@ -510,6 +535,12 @@ def setSystemProperty(cls, key: str, value: str) -> None:
def version(self) -> str:
"""
The version of Spark on which this application is running.
+
+ .. versionadded:: 1.1.0
+
+ Examples
+ --------
+ >>> version = sc.version
Review Comment:
```suggestion
>>> _ = sc.version
```
##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
@property
def uiWebUrl(self) -> str:
- """Return the URL of the SparkUI instance started by this SparkContext"""
+ """Return the URL of the SparkUI instance started by this `SparkContext`
Review Comment:
```suggestion
"""Return the URL of the SparkUI instance started by this :class:`SparkContext`
```
##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
way as python's built-in range() function. If called with a single argument,
the argument is interpreted as `end`, and `start` is set to 0.
+ .. versionadded:: 1.5.0
+
Parameters
----------
start : int
the start value
- end : int, optional
+ end : int, optional, default None
Review Comment:
```suggestion
end : int, optional
```
Let's not document the default value in this case (we should actually describe the default behaviour too, but I would prefer to leave that out of the scope of this PR).
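For reference, a minimal sketch of the behaviour being referred to, using the `sc` fixture that the PySpark doctests already provide; when only one argument is passed it is taken as `end`, with `start=0` and `step=1`:
```python
>>> sc.range(3).collect()        # single argument is interpreted as `end`
[0, 1, 2]
>>> sc.range(1, 7, 2).collect()  # explicit start, exclusive end, and step
[1, 3, 5]
```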
##########
python/pyspark/context.py:
##########
@@ -493,14 +505,27 @@ def setLogLevel(self, logLevel: str) -> None:
"""
Control our logLevel. This overrides any user-defined log settings.
Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
+
+ .. versionadded:: 1.4.0
+
+ Parameters
+ ----------
+ logLevel : str
+ The desired log level as a string.
+
+ Examples
+ --------
+ >>> sc.setLogLevel("WARN")
"""
self._jsc.setLogLevel(logLevel)
@classmethod
def setSystemProperty(cls, key: str, value: str) -> None:
"""
- Set a Java system property, such as spark.executor.memory. This must
- must be invoked before instantiating SparkContext.
+ Set a Java system property, such as `spark.executor.memory`. This must
+ be invoked before instantiating SparkContext.
+
+ .. versionadded:: 0.9.0
Review Comment:
```suggestion
.. versionadded:: 0.9.0
Parameters
----------
key : str
The key of a new Java system property.
value : str
The value of a new Java system property.
```
##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
@property
def uiWebUrl(self) -> str:
- """Return the URL of the SparkUI instance started by this SparkContext"""
+ """Return the URL of the SparkUI instance started by this `SparkContext`
+
+ .. versionadded:: 2.1.0
+ """
return self._jsc.sc().uiWebUrl().get()
@property
def startTime(self) -> int:
- """Return the epoch time when the Spark Context was started."""
+ """Return the epoch time when the `SparkContext` was started.
+
+ .. versionadded:: 1.5.0
+
+ Examples
+ --------
+ >>> start = sc.startTime
+ """
return self._jsc.startTime()
@property
def defaultParallelism(self) -> int:
"""
- Default level of parallelism to use when not given by user (e.g. for
- reduce tasks)
+ Default level of parallelism to use when not given by user (e.g. for reduce tasks)
+
+ .. versionadded:: 0.7.0
+
+ Examples
+ --------
+ >>> sc.defaultParallelism > 0
+ True
"""
return self._jsc.sc().defaultParallelism()
@property
def defaultMinPartitions(self) -> int:
"""
Default min number of partitions for Hadoop RDDs when not given by user
+
+ .. versionadded:: 1.1.0
+
+ Examples
+ --------
+ >>> sc.defaultMinPartitions > 0
+ True
"""
return self._jsc.sc().defaultMinPartitions()
def stop(self) -> None:
"""
- Shut down the SparkContext.
+ Shut down the `SparkContext`.
Review Comment:
```suggestion
Shut down the :class:`SparkContext`.
```
##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
@property
def uiWebUrl(self) -> str:
- """Return the URL of the SparkUI instance started by this SparkContext"""
+ """Return the URL of the SparkUI instance started by this `SparkContext`
+
+ .. versionadded:: 2.1.0
+ """
return self._jsc.sc().uiWebUrl().get()
@property
def startTime(self) -> int:
- """Return the epoch time when the Spark Context was started."""
+ """Return the epoch time when the `SparkContext` was started.
+
+ .. versionadded:: 1.5.0
+
+ Examples
+ --------
+ >>> start = sc.startTime
Review Comment:
```suggestion
>>> _ = sc.startTime
```
##########
python/pyspark/context.py:
##########
@@ -579,7 +637,21 @@ def stop(self) -> None:
def emptyRDD(self) -> RDD[Any]:
"""
- Create an RDD that has no partitions or elements.
+ Create an `RDD` that has no partitions or elements.
Review Comment:
```suggestion
Create an :class:`RDD` that has no partitions or elements.
```
##########
python/pyspark/context.py:
##########
@@ -616,6 +694,18 @@ def range(
[2, 3]
>>> sc.range(1, 7, 2).collect()
[1, 3, 5]
+
+ Generate RDD with a negative step
Review Comment:
```suggestion
Generate RDD with a negative step
```
##########
python/pyspark/context.py:
##########
@@ -616,6 +694,18 @@ def range(
[2, 3]
>>> sc.range(1, 7, 2).collect()
[1, 3, 5]
+
+ Generate RDD with a negative step
+ >>> sc.range(5, 0, -1).collect()
+ [5, 4, 3, 2, 1]
+ >>> sc.range(0, 5, -1).collect()
+ []
+
+ Control the number of partitions
Review Comment:
```suggestion
Control the number of partitions
```
##########
python/pyspark/context.py:
##########
@@ -628,12 +718,32 @@ def parallelize(self, c: Iterable[T], numSlices: Optional[int] = None) -> RDD[T]
Distribute a local Python collection to form an RDD. Using range
is recommended if the input represents a range for performance.
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ c : :py:class:`collections.abc.Iterable`
+ iterable collection to distribute
+ numSlices : int, optional, default None
Review Comment:
```suggestion
numSlices : int, optional
```
##########
python/pyspark/context.py:
##########
@@ -579,7 +637,21 @@ def stop(self) -> None:
def emptyRDD(self) -> RDD[Any]:
"""
- Create an RDD that has no partitions or elements.
+ Create an `RDD` that has no partitions or elements.
+
+ .. versionadded:: 1.5.0
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
way as python's built-in range() function. If called with a single argument,
the argument is interpreted as `end`, and `start` is set to 0.
+ .. versionadded:: 1.5.0
+
Parameters
----------
start : int
the start value
- end : int, optional
+ end : int, optional, default None
the end value (exclusive)
- step : int, optional
- the incremental step (default: 1)
- numSlices : int, optional
+ step : int, optional, default 1
+ the incremental step
+ numSlices : int, optional, default None
Review Comment:
```suggestion
numSlices : int, optional
```
##########
python/pyspark/context.py:
##########
@@ -731,13 +841,51 @@ def pickleFile(self, name: str, minPartitions: Optional[int] = None) -> RDD[Any]
"""
Load an RDD previously saved using :meth:`RDD.saveAsPickleFile` method.
+ .. versionadded:: 1.1.0
+
+ Parameters
+ ----------
+ name : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
+ suggested minimum number of partitions for the resulting RDD
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -628,12 +718,32 @@ def parallelize(self, c: Iterable[T], numSlices: Optional[int] = None) -> RDD[T]
Distribute a local Python collection to form an RDD. Using range
is recommended if the input represents a range for performance.
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ c : :py:class:`collections.abc.Iterable`
Review Comment:
```suggestion
c : :class:`collections.abc.Iterable`
```
##########
python/pyspark/context.py:
##########
@@ -628,12 +718,32 @@ def parallelize(self, c: Iterable[T], numSlices: Optional[int] = None) -> RDD[T]
Distribute a local Python collection to form an RDD. Using range
is recommended if the input represents a range for performance.
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ c : :py:class:`collections.abc.Iterable`
+ iterable collection to distribute
+ numSlices : int, optional, default None
+ the number of partitions of the new RDD
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -731,13 +841,51 @@ def pickleFile(self, name: str, minPartitions: Optional[int] = None) -> RDD[Any]
"""
Load an RDD previously saved using :meth:`RDD.saveAsPickleFile` method.
+ .. versionadded:: 1.1.0
+
+ Parameters
+ ----------
+ name : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
Review Comment:
```suggestion
minPartitions : int, optional
```
##########
python/pyspark/context.py:
##########
@@ -849,12 +1100,43 @@ def binaryRecords(self, path: str, recordLength: int) -> RDD[bytes]:
with the specified numerical format (see ByteBuffer), and the number of
bytes per record is constant.
+ .. versionadded:: 1.3.0
+
Parameters
----------
path : str
Directory to the input data files
recordLength : int
The length at which to split the records
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -748,21 +896,60 @@ def textFile(
"""
Read a text file from HDFS, a local file system (available on all
nodes), or any Hadoop-supported file system URI, and return it as an
- RDD of Strings.
- The text files must be encoded as UTF-8.
+ RDD of Strings. The text files must be encoded as UTF-8.
+
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ name : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
+ suggested minimum number of partitions for the resulting RDD
+ use_unicode : bool, default True
+ If use_unicode is False, the strings will be kept as `str` (encoding
+ as `utf-8`), which is faster and smaller than unicode.
- If use_unicode is False, the strings will be kept as `str` (encoding
- as `utf-8`), which is faster and smaller than unicode. (Added in
- Spark 1.2)
+ .. versionadded:: 1.2.0
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -832,9 +1046,46 @@ def binaryFiles(self, path: str, minPartitions: Optional[int] = None) -> RDD[Tup
in a key-value pair, where the key is the path of each file, the
value is the content of each file.
+ .. versionadded:: 1.3.0
+
+ Parameters
+ ----------
+ path : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
Review Comment:
```suggestion
minPartitions : int, optional
```
##########
python/pyspark/context.py:
##########
@@ -748,21 +896,60 @@ def textFile(
"""
Read a text file from HDFS, a local file system (available on all
nodes), or any Hadoop-supported file system URI, and return it as an
- RDD of Strings.
- The text files must be encoded as UTF-8.
+ RDD of Strings. The text files must be encoded as UTF-8.
+
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ name : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
Review Comment:
```suggestion
minPartitions : int, optional
```
##########
python/pyspark/context.py:
##########
@@ -801,21 +986,50 @@ def wholeTextFiles(
...
(a-hdfs-path/part-nnnnn, its content)
+ Parameters
+ ----------
+ path : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
+ suggested minimum number of partitions for the resulting RDD
+ use_unicode : bool, default True
+ If use_unicode is False, the strings will be kept as `str` (encoding
+ as `utf-8`), which is faster and smaller than unicode.
+
+ .. versionadded:: 1.2.0
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
+ RDD representing path-content pairs from the file(s).
+
Notes
-----
Small files are preferred, as each file will be loaded fully in memory.
+ See Also
+ --------
+ :meth:`RDD.saveAsTextFile`
+ :meth:`SparkContext.textFile`
+
Examples
--------
- >>> dirPath = os.path.join(tempdir, "files")
- >>> os.mkdir(dirPath)
- >>> with open(os.path.join(dirPath, "1.txt"), "w") as file1:
- ... _ = file1.write("1")
- >>> with open(os.path.join(dirPath, "2.txt"), "w") as file2:
- ... _ = file2.write("2")
- >>> textFiles = sc.wholeTextFiles(dirPath)
- >>> sorted(textFiles.collect())
- [('.../1.txt', '1'), ('.../2.txt', '2')]
+ >>> import os
+ >>> import tempfile
+ >>> with tempfile.TemporaryDirectory() as d:
+ ... # Write a temporary text file
+ ... with open(os.path.join(d, "1.txt"), "w") as f:
+ ... _ = f.write("123")
+ ...
+ ... # Write another temporary text file
+ ... with open(os.path.join(d, "2.txt"), "w") as f:
+ ... _ = f.write("xyz")
+ ...
+ ... collected = sorted(sc.wholeTextFiles(d).collect())
+
Review Comment:
```suggestion
```
##########
python/pyspark/context.py:
##########
@@ -801,21 +986,50 @@ def wholeTextFiles(
...
(a-hdfs-path/part-nnnnn, its content)
+ Parameters
+ ----------
+ path : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
+ suggested minimum number of partitions for the resulting RDD
+ use_unicode : bool, default True
+ If use_unicode is False, the strings will be kept as `str` (encoding
+ as `utf-8`), which is faster and smaller than unicode.
+
+ .. versionadded:: 1.2.0
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -832,9 +1046,46 @@ def binaryFiles(self, path: str, minPartitions: Optional[int] = None) -> RDD[Tup
in a key-value pair, where the key is the path of each file, the
value is the content of each file.
+ .. versionadded:: 1.3.0
+
+ Parameters
+ ----------
+ path : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
+ suggested minimum number of partitions for the resulting RDD
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
3. If this fails, the fallback is to call 'toString' on each key and value
4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
+ .. versionadded:: 1.3.0
+
Parameters
----------
path : str
path to sequencefile
- keyClass: str, optional
+ keyClass: str, optional, default None
fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
- valueClass : str, optional
+ valueClass : str, optional, default None
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- valueConverter : str, optional
+ valueConverter : str, optional, default None
Review Comment:
```suggestion
valueConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
3. If this fails, the fallback is to call 'toString' on each key and value
4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
+ .. versionadded:: 1.3.0
+
Parameters
----------
path : str
path to sequencefile
- keyClass: str, optional
+ keyClass: str, optional, default None
fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
- valueClass : str, optional
+ valueClass : str, optional, default None
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
Review Comment:
```suggestion
keyConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
3. If this fails, the fallback is to call 'toString' on each key and value
4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
+ .. versionadded:: 1.3.0
+
Parameters
----------
path : str
path to sequencefile
- keyClass: str, optional
+ keyClass: str, optional, default None
fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
- valueClass : str, optional
+ valueClass : str, optional, default None
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualifiedname of a function returning value WritableConverter
- minSplits : int, optional
+ minSplits : int, optional, default None
Review Comment:
```suggestion
minSplits : int, optional
```
##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
3. If this fails, the fallback is to call 'toString' on each key and value
4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
+ .. versionadded:: 1.3.0
+
Parameters
----------
path : str
path to sequencefile
- keyClass: str, optional
+ keyClass: str, optional, default None
fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
- valueClass : str, optional
+ valueClass : str, optional, default None
Review Comment:
```suggestion
valueClass : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
3. If this fails, the fallback is to call 'toString' on each key and value
4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
+ .. versionadded:: 1.3.0
+
Parameters
----------
path : str
path to sequencefile
- keyClass: str, optional
+ keyClass: str, optional, default None
fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
- valueClass : str, optional
+ valueClass : str, optional, default None
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualifiedname of a function returning value WritableConverter
- minSplits : int, optional
+ minSplits : int, optional, default None
minimum splits in dataset (default min(2, sc.defaultParallelism))
- batchSize : int, optional
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. (default 0, choose batchSize automatically)
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
3. If this fails, the fallback is to call 'toString' on each key and value
4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
+ .. versionadded:: 1.3.0
+
Parameters
----------
path : str
path to sequencefile
- keyClass: str, optional
+ keyClass: str, optional, default None
Review Comment:
```suggestion
keyClass: str, optional
```
##########
python/pyspark/context.py:
##########
@@ -1007,17 +1367,66 @@ def newAPIHadoopRDD(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
Review Comment:
```suggestion
keyConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -1007,17 +1367,66 @@ def newAPIHadoopRDD(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
(None by default)
- valueConverter : str, optional
+ valueConverter : str, optional, default None
Review Comment:
```suggestion
valueConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -1062,17 +1475,53 @@ def hadoopFile(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
Review Comment:
```suggestion
keyConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -1007,17 +1367,66 @@ def newAPIHadoopRDD(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
(None by default)
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualified name of a function returning value WritableConverter
(None by default)
- conf : dict, optional
+ conf : dict, optional, default None
Review Comment:
```suggestion
conf : dict, optional
```
##########
python/pyspark/context.py:
##########
@@ -953,18 +1273,56 @@ def newAPIHadoopFile(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
None by default
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualified name of a function returning value WritableConverter
None by default
- conf : dict, optional
+ conf : dict, optional, default None
Hadoop configuration, passed in as a dict
None by default
- batchSize : int, optional
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. (default 0, choose batchSize automatically)
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -1007,17 +1367,66 @@ def newAPIHadoopRDD(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
(None by default)
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualified name of a function returning value WritableConverter
(None by default)
- conf : dict, optional
+ conf : dict, optional, default None
Hadoop configuration, passed in as a dict (None by default)
- batchSize : int, optional
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. (default 0, choose batchSize automatically)
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -1062,17 +1475,53 @@ def hadoopFile(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- (None by default)
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualified name of a function returning value WritableConverter
- (None by default)
- conf : dict, optional
- Hadoop configuration, passed in as a dict (None by default)
- batchSize : int, optional
+ conf : dict, optional, default None
+ Hadoop configuration, passed in as a dict
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. (default 0, choose batchSize automatically)
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -1062,17 +1475,53 @@ def hadoopFile(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- (None by default)
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualified name of a function returning value WritableConverter
- (None by default)
- conf : dict, optional
- Hadoop configuration, passed in as a dict (None by default)
- batchSize : int, optional
+ conf : dict, optional, default None
Review Comment:
```suggestion
conf : dict, optional
```
##########
python/pyspark/context.py:
##########
@@ -1062,17 +1475,53 @@ def hadoopFile(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- (None by default)
- valueConverter : str, optional
+ valueConverter : str, optional, default None
Review Comment:
```suggestion
valueConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -1115,17 +1566,63 @@ def hadoopRDD(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- (None by default)
- valueConverter : str, optional
+ valueConverter : str, optional, default None
Review Comment:
```suggestion
valueConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -1115,17 +1566,63 @@ def hadoopRDD(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
Review Comment:
```suggestion
keyConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -1194,6 +1703,30 @@ def broadcast(self, value: T) -> "Broadcast[T]":
Broadcast a read-only variable to the cluster, returning a :class:`Broadcast`
object for reading it in distributed functions. The variable will
be sent to each cluster only once.
+
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ value : T
+ value to broadcast to the Spark nodes
+
+ Returns
+ -------
+ :py:class:`pyspark.Broadcast`
+ `Broadcast` object, a read-only variable cached on each machine
Review Comment:
```suggestion
:class:`Broadcast` object, a read-only variable cached on each machine
```
##########
python/pyspark/context.py:
##########
@@ -1206,6 +1739,39 @@ def accumulator(
data type if provided. Default AccumulatorParams are used for integers
and floating-point numbers if you do not provide one. For other types,
a custom AccumulatorParam can be used.
+
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ value : T
+ initialized value
+ accum_param : :py:class:`pyspark.AccumulatorParam`, optional, default None
+ helper object to define how to add values
+
+ Returns
+ -------
+ :py:class:`pyspark.Accumulator`
Review Comment:
```suggestion
:class:`Accumulator`
```
##########
python/pyspark/context.py:
##########
@@ -1115,17 +1566,63 @@ def hadoopRDD(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- (None by default)
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualified name of a function returning value WritableConverter
- (None by default)
- conf : dict, optional
- Hadoop configuration, passed in as a dict (None by default)
- batchSize : int, optional
+ conf : dict, optional, default None
+ Hadoop configuration, passed in as a dict
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. (default 0, choose batchSize automatically)
+
+ Returns
+ -------
+ :py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -1115,17 +1566,63 @@ def hadoopRDD(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- (None by default)
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualified name of a function returning value WritableConverter
- (None by default)
- conf : dict, optional
- Hadoop configuration, passed in as a dict (None by default)
- batchSize : int, optional
+ conf : dict, optional, default None
Review Comment:
```suggestion
conf : dict, optional
```
##########
python/pyspark/context.py:
##########
@@ -1206,6 +1739,39 @@ def accumulator(
data type if provided. Default AccumulatorParams are used for integers
and floating-point numbers if you do not provide one. For other types,
a custom AccumulatorParam can be used.
+
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ value : T
+ initialized value
+ accum_param : :py:class:`pyspark.AccumulatorParam`, optional, default None
Review Comment:
```suggestion
accum_param : :class:`pyspark.AccumulatorParam`, optional
```
##########
python/pyspark/context.py:
##########
@@ -1194,6 +1703,30 @@ def broadcast(self, value: T) -> "Broadcast[T]":
Broadcast a read-only variable to the cluster, returning a :class:`Broadcast`
object for reading it in distributed functions. The variable will
be sent to each cluster only once.
+
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ value : T
+ value to broadcast to the Spark nodes
+
+ Returns
+ -------
+ :py:class:`pyspark.Broadcast`
Review Comment:
```suggestion
:class:`Broadcast`
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
Review Comment:
```suggestion
appName : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
A name for your job, to display on the cluster web UI.
- sparkHome : str, optional
+ sparkHome : str, optional, default None
Review Comment:
```suggestion
sparkHome : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Review Comment:
```suggestion
master : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -1562,12 +2293,22 @@ def dump_profiles(self, path: str) -> None:
)
def getConf(self) -> SparkConf:
+ """Return a copy of this SparkContext's configuration :py:class:`pyspark.SparkConf`.
+
+ .. versionadded:: 2.1.0
+ """
conf = SparkConf()
conf.setAll(self._conf.getAll())
return conf
@property
def resources(self) -> Dict[str, ResourceInformation]:
+ """
+ Return the resource information of this SparkContext.
Review Comment:
```suggestion
Return the resource information of this :class:`SparkContext`.
```
##########
python/pyspark/context.py:
##########
@@ -1562,12 +2293,22 @@ def dump_profiles(self, path: str) -> None:
)
def getConf(self) -> SparkConf:
+ """Return a copy of this SparkContext's configuration :py:class:`pyspark.SparkConf`.
Review Comment:
```suggestion
"""Return a copy of this SparkContext's configuration :class:`SparkConf`.
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
A name for your job, to display on the cluster web UI.
- sparkHome : str, optional
+ sparkHome : str, optional, default None
Location where Spark is installed on cluster nodes.
- pyFiles : list, optional
+ pyFiles : list, optional, default None
Review Comment:
```suggestion
pyFiles : list, optional
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
A name for your job, to display on the cluster web UI.
- sparkHome : str, optional
+ sparkHome : str, optional, default None
Location where Spark is installed on cluster nodes.
- pyFiles : list, optional
+ pyFiles : list, optional, default None
Collection of .zip or .py files to send to the cluster
and add to PYTHONPATH. These can be paths on the local file
system or HDFS, HTTP, HTTPS, or FTP URLs.
- environment : dict, optional
+ environment : dict, optional, default None
A dictionary of environment variables to set on
worker nodes.
- batchSize : int, optional
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. Set 1 to disable batching, 0 to automatically choose
the batch size based on object sizes, or -1 to use an unlimited
batch size
- serializer : :class:`pyspark.serializers.Serializer`, optional
+ serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
The serializer for RDDs.
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
Review Comment:
```suggestion
conf : :class:`SparkConf`, optional
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
A name for your job, to display on the cluster web UI.
- sparkHome : str, optional
+ sparkHome : str, optional, default None
Location where Spark is installed on cluster nodes.
- pyFiles : list, optional
+ pyFiles : list, optional, default None
Collection of .zip or .py files to send to the cluster
and add to PYTHONPATH. These can be paths on the local file
system or HDFS, HTTP, HTTPS, or FTP URLs.
- environment : dict, optional
+ environment : dict, optional, default None
Review Comment:
```suggestion
environment : dict, optional
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
A name for your job, to display on the cluster web UI.
- sparkHome : str, optional
+ sparkHome : str, optional, default None
Location where Spark is installed on cluster nodes.
- pyFiles : list, optional
+ pyFiles : list, optional, default None
Collection of .zip or .py files to send to the cluster
and add to PYTHONPATH. These can be paths on the local file
system or HDFS, HTTP, HTTPS, or FTP URLs.
- environment : dict, optional
+ environment : dict, optional, default None
A dictionary of environment variables to set on
worker nodes.
- batchSize : int, optional
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. Set 1 to disable batching, 0 to automatically choose
the batch size based on object sizes, or -1 to use an unlimited
batch size
- serializer : :class:`pyspark.serializers.Serializer`, optional
+ serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
The serializer for RDDs.
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
An object setting Spark properties.
- gateway : :py:class:`py4j.java_gateway.JavaGateway`, optional
+ gateway : :py:class:`py4j.java_gateway.JavaGateway`, optional, default None
Review Comment:
```suggestion
gateway : :class:`py4j.java_gateway.JavaGateway`, optional
```
##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
@classmethod
def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
"""
- Get or instantiate a SparkContext and register it as a singleton object.
+ Get or instantiate a `SparkContext` and register it as a singleton object.
+
+ .. versionadded:: 1.4.0
Parameters
----------
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
+ `SparkConf` that will be used for initialisation of the `SparkContext`.
+
+ Returns
+ -------
+ :class:`pyspark.context.SparkContext`
+ current `SparkContext`, or a new one if it wasn't created before the function call.
+
+ Examples
+ --------
+ >>> from pyspark.context import SparkContext
Review Comment:
```suggestion
```
This is automatically imported in the PySpark shell, so let's remove it.
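For context, a hedged sketch of how the example reads once the import line is dropped; it assumes `SparkContext` and a running `sc` are already present in the doctest globals, as the PySpark shell provides them:
```python
>>> SparkContext.getOrCreate() is sc  # the already-registered singleton is returned
True
```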
##########
python/pyspark/context.py:
##########
@@ -493,14 +505,27 @@ def setLogLevel(self, logLevel: str) -> None:
"""
Control our logLevel. This overrides any user-defined log settings.
Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
+
+ .. versionadded:: 1.4.0
+
+ Parameters
+ ----------
+ logLevel : str
+ The desired log level as a string.
+
+ Examples
+ --------
+ >>> sc.setLogLevel("WARN")
"""
self._jsc.setLogLevel(logLevel)
@classmethod
def setSystemProperty(cls, key: str, value: str) -> None:
"""
- Set a Java system property, such as spark.executor.memory. This must
- must be invoked before instantiating SparkContext.
+ Set a Java system property, such as `spark.executor.memory`. This must
+ be invoked before instantiating SparkContext.
Review Comment:
```suggestion
be invoked before instantiating :class:`SparkContext`.
```
##########
python/pyspark/context.py:
##########
@@ -1468,13 +2136,21 @@ def getLocalProperty(self, key: str) -> Optional[str]:
"""
Get a local property set in this thread, or null if it is missing. See
:meth:`setLocalProperty`.
+
+ .. versionadded:: 1.0.0
+
+ See Also
+ --------
+ :meth:`SparkContext.setLocalProperty`
"""
return self._jsc.getLocalProperty(key)
def setJobDescription(self, value: str) -> None:
"""
Set a human readable description of the current job.
+ .. versionadded:: 2.3.0
+
Review Comment:
```suggestion
Parameters
----------
value : str
The job description to set.
```
##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
Parameters
----------
- master : str, optional
+ master : str, optional, default None
Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
- appName : str, optional
+ appName : str, optional, default None
A name for your job, to display on the cluster web UI.
- sparkHome : str, optional
+ sparkHome : str, optional, default None
Location where Spark is installed on cluster nodes.
- pyFiles : list, optional
+ pyFiles : list, optional, default None
Collection of .zip or .py files to send to the cluster
and add to PYTHONPATH. These can be paths on the local file
system or HDFS, HTTP, HTTPS, or FTP URLs.
- environment : dict, optional
+ environment : dict, optional, default None
A dictionary of environment variables to set on
worker nodes.
- batchSize : int, optional
+ batchSize : int, optional, default 0
The number of Python objects represented as a single
Java object. Set 1 to disable batching, 0 to automatically choose
the batch size based on object sizes, or -1 to use an unlimited
batch size
- serializer : :class:`pyspark.serializers.Serializer`, optional
+ serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
The serializer for RDDs.
- conf : :py:class:`pyspark.SparkConf`, optional
+ conf : :py:class:`pyspark.SparkConf`, optional, default None
An object setting Spark properties.
- gateway : :py:class:`py4j.java_gateway.JavaGateway`, optional
+ gateway : :py:class:`py4j.java_gateway.JavaGateway`, optional, default None
Use an existing gateway and JVM, otherwise a new JVM
will be instantiated. This is only used internally.
- jsc : :py:class:`py4j.java_gateway.JavaObject`, optional
+ jsc : :py:class:`py4j.java_gateway.JavaObject`, optional, default None
Review Comment:
```suggestion
jsc : :py:class:`py4j.java_gateway.JavaObject`, optional
```
##########
python/pyspark/context.py:
##########
@@ -801,21 +986,50 @@ def wholeTextFiles(
...
(a-hdfs-path/part-nnnnn, its content)
+ Parameters
+ ----------
+ path : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
Review Comment:
```suggestion
minPartitions : int, optional
```
##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
@property
def uiWebUrl(self) -> str:
- """Return the URL of the SparkUI instance started by this SparkContext"""
+ """Return the URL of the SparkUI instance started by this `SparkContext`
+
+ .. versionadded:: 2.1.0
+ """
return self._jsc.sc().uiWebUrl().get()
@property
def startTime(self) -> int:
- """Return the epoch time when the Spark Context was started."""
+ """Return the epoch time when the `SparkContext` was started.
Review Comment:
```suggestion
"""Return the epoch time when the :class:`SparkContext` was started.
```
##########
python/pyspark/context.py:
##########
@@ -1457,6 +2119,12 @@ def setLocalProperty(self, key: str, value: str) -> None:
Set a local property that affects jobs submitted from this thread, such as the
Spark fair scheduler pool.
+ .. versionadded:: 1.0.0
+
Review Comment:
```suggestion
Parameters
----------
key : str
The key of the local property to set.
value : str
The value of the local property to set.
```
##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
way as python's built-in range() function. If called with a single argument,
the argument is interpreted as `end`, and `start` is set to 0.
+ .. versionadded:: 1.5.0
+
Parameters
----------
start : int
the start value
- end : int, optional
+ end : int, optional, default None
the end value (exclusive)
- step : int, optional
- the incremental step (default: 1)
- numSlices : int, optional
+ step : int, optional, default 1
+ the incremental step
+ numSlices : int, optional, default None
the number of partitions of the new RDD
Returns
-------
:py:class:`pyspark.RDD`
An RDD of int
+ See Also
+ --------
+ :meth:`SparkSession.range`
Review Comment:
```suggestion
:meth:`pyspark.sql.SparkSession.range`
```
I haven't built the docs against this PR, but you would probably need the fully qualified path because this class is not imported within this file.
##########
python/pyspark/context.py:
##########
@@ -1485,25 +2161,43 @@ def setJobDescription(self, value: str) -> None:
def sparkUser(self) -> str:
"""
Get SPARK_USER for user who is running SparkContext.
+
+ .. versionadded:: 1.0.0
"""
return self._jsc.sc().sparkUser()
def cancelJobGroup(self, groupId: str) -> None:
"""
Cancel active jobs for the specified group. See :meth:`SparkContext.setJobGroup`.
for more information.
+
+ .. versionadded:: 1.1.0
+
Review Comment:
```suggestion
Parameters
----------
groupId : str
The group ID to cancel the job.
```
##########
python/pyspark/context.py:
##########
@@ -1412,6 +2068,8 @@ def setJobGroup(self, groupId: str, description: str, interruptOnCancel: bool =
The application can use :meth:`SparkContext.cancelJobGroup` to cancel all
running jobs in this group.
+ .. versionadded:: 1.0.0
+
Review Comment:
```suggestion
Parameters
----------
groupId : str
The group ID to assign.
description : str
The description to set for the job group.
interruptOnCancel : bool, optional, default False
whether to interrupt jobs on job cancellation.
```
##########
python/pyspark/context.py:
##########
@@ -801,21 +986,50 @@ def wholeTextFiles(
...
(a-hdfs-path/part-nnnnn, its content)
+ Parameters
+ ----------
+ path : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
+ suggested minimum number of partitions for the resulting RDD
+ use_unicode : bool, default True
+ If use_unicode is False, the strings will be kept as `str` (encoding
Review Comment:
```suggestion
If `use_unicode` is False, the strings will be kept as `str` (encoding
```
##########
python/pyspark/context.py:
##########
@@ -953,18 +1273,56 @@ def newAPIHadoopFile(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
None by default
- valueConverter : str, optional
+ valueConverter : str, optional, default None
Review Comment:
```suggestion
valueConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -628,12 +718,32 @@ def parallelize(self, c: Iterable[T], numSlices: Optional[int] = None) -> RDD[T]
Distribute a local Python collection to form an RDD. Using range
is recommended if the input represents a range for performance.
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ c : :py:class:`collections.abc.Iterable`
+ iterable collection to distribute
+ numSlices : int, optional, default None
Review Comment:
ditto
##########
python/pyspark/context.py:
##########
@@ -748,21 +896,60 @@ def textFile(
"""
Read a text file from HDFS, a local file system (available on all
nodes), or any Hadoop-supported file system URI, and return it as an
- RDD of Strings.
- The text files must be encoded as UTF-8.
+ RDD of Strings. The text files must be encoded as UTF-8.
+
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ name : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
+ suggested minimum number of partitions for the resulting RDD
+ use_unicode : bool, default True
+ If use_unicode is False, the strings will be kept as `str` (encoding
Review Comment:
```suggestion
If `use_unicode` is False, the strings will be kept as `str` (encoding
```
##########
python/pyspark/context.py:
##########
@@ -953,18 +1273,56 @@ def newAPIHadoopFile(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
None by default
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualified name of a function returning value WritableConverter
None by default
- conf : dict, optional
+ conf : dict, optional, default None
Review Comment:
```suggestion
conf : dict, optional
```
##########
python/pyspark/context.py:
##########
@@ -953,18 +1273,56 @@ def newAPIHadoopFile(
valueClass : str
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
Review Comment:
```suggestion
keyConverter : str, optional
```
##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
@property
def uiWebUrl(self) -> str:
- """Return the URL of the SparkUI instance started by this SparkContext"""
+ """Return the URL of the SparkUI instance started by this `SparkContext`
+
+ .. versionadded:: 2.1.0
Review Comment:
```suggestion
.. versionadded:: 2.1.0
Examples
--------
>>> sc.uiWebUrl
'http://...
```
##########
python/pyspark/context.py:
##########
@@ -731,13 +841,51 @@ def pickleFile(self, name: str, minPartitions: Optional[int] = None) -> RDD[Any]
"""
Load an RDD previously saved using :meth:`RDD.saveAsPickleFile` method.
+ .. versionadded:: 1.1.0
+
+ Parameters
+ ----------
+ name : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
Review Comment:
ditto
##########
python/pyspark/context.py:
##########
@@ -493,14 +505,27 @@ def setLogLevel(self, logLevel: str) -> None:
"""
Control our logLevel. This overrides any user-defined log settings.
Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
+
+ .. versionadded:: 1.4.0
+
+ Parameters
+ ----------
+ logLevel : str
+ The desired log level as a string.
+
+ Examples
+ --------
+ >>> sc.setLogLevel("WARN")
Review Comment:
```suggestion
        >>> sc.setLogLevel("WARN")  # doctest: +SKIP
```
##########
python/pyspark/context.py:
##########
@@ -748,21 +896,60 @@ def textFile(
"""
Read a text file from HDFS, a local file system (available on all
nodes), or any Hadoop-supported file system URI, and return it as an
- RDD of Strings.
- The text files must be encoded as UTF-8.
+ RDD of Strings. The text files must be encoded as UTF-8.
+
+ .. versionadded:: 0.7.0
+
+ Parameters
+ ----------
+ name : str
+ directory to the input data files, the path can be comma separated
+ paths as a list of inputs
+ minPartitions : int, optional, default None
Review Comment:
ditto
##########
python/pyspark/context.py:
##########
@@ -1520,6 +2214,29 @@ def runJob(
If 'partitions' is not specified, this will run over all partitions.
+ .. versionadded:: 1.1.0
+
+ Parameters
+ ----------
+ rdd : :py:class:`pyspark.RDD`
Review Comment:
```suggestion
rdd : :class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
way as python's built-in range() function. If called with a single argument,
the argument is interpreted as `end`, and `start` is set to 0.
+ .. versionadded:: 1.5.0
+
Parameters
----------
start : int
the start value
- end : int, optional
+ end : int, optional, default None
the end value (exclusive)
- step : int, optional
- the incremental step (default: 1)
- numSlices : int, optional
+ step : int, optional, default 1
+ the incremental step
+ numSlices : int, optional, default None
the number of partitions of the new RDD
Returns
-------
:py:class:`pyspark.RDD`
Review Comment:
```suggestion
:class:`RDD`
```
##########
python/pyspark/context.py:
##########
@@ -493,14 +505,27 @@ def setLogLevel(self, logLevel: str) -> None:
"""
Control our logLevel. This overrides any user-defined log settings.
Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
+
+ .. versionadded:: 1.4.0
+
+ Parameters
+ ----------
+ logLevel : str
+ The desired log level as a string.
+
+ Examples
+ --------
+ >>> sc.setLogLevel("WARN")
Review Comment:
Let's probably add `# doctest: +SKIP` because it will affect the logging in other unit tests.
##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
way as python's built-in range() function. If called with a single argument,
the argument is interpreted as `end`, and `start` is set to 0.
+ .. versionadded:: 1.5.0
+
Parameters
----------
start : int
the start value
- end : int, optional
+ end : int, optional, default None
the end value (exclusive)
- step : int, optional
- the incremental step (default: 1)
- numSlices : int, optional
+ step : int, optional, default 1
+ the incremental step
+ numSlices : int, optional, default None
Review Comment:
ditto
##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
3. If this fails, the fallback is to call 'toString' on each key and value
4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
+ .. versionadded:: 1.3.0
+
Parameters
----------
path : str
path to sequencefile
- keyClass: str, optional
+ keyClass: str, optional, default None
fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
- valueClass : str, optional
+ valueClass : str, optional, default None
fully qualified classname of value Writable class
(e.g. "org.apache.hadoop.io.LongWritable")
- keyConverter : str, optional
+ keyConverter : str, optional, default None
fully qualified name of a function returning key WritableConverter
- valueConverter : str, optional
+ valueConverter : str, optional, default None
fully qualifiedname of a function returning value WritableConverter
- minSplits : int, optional
+ minSplits : int, optional, default None
Review Comment:
Let's just remove the `default None` case for now. `optional` already implies it's `None` by default.
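To illustrate the convention, a hypothetical stub (not the real method) showing how `optional` on its own covers the `None` default, while concrete defaults stay in the description text:
```python
def sequence_file_stub(path, minSplits=None, batchSize=0):
    """
    Hypothetical docstring sketch following the requested numpydoc style.

    Parameters
    ----------
    path : str
        path to sequencefile
    minSplits : int, optional
        minimum splits in dataset (default min(2, sc.defaultParallelism))
    batchSize : int, optional
        The number of Python objects represented as a single
        Java object. (default 0, choose batchSize automatically)
    """
```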