Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/08/15 05:05:18 UTC

[GitHub] [spark] zhengruifeng opened a new pull request, #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

zhengruifeng opened a new pull request, #37517:
URL: https://github.com/apache/spark/pull/37517

   ### What changes were proposed in this pull request?
   Make pyspark.context examples self-contained
   
   
   ### Why are the changes needed?
   To make the documentation more readable, so that the examples can be copied and pasted directly into the PySpark shell (a sketch of the intended style follows the testing notes below).
   
   
   ### Does this PR introduce _any_ user-facing change?
   Yes, the documentation was changed.
   
   
   ### How was this patch tested?
   
   - added doctests.
   - manually copy-pasted and ran the examples in the PySpark shell.
   - built the documentation and manually checked the rendered pages.
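
   For illustration, a minimal sketch of the self-contained doctest style this change moves toward: each example carries its own imports and setup so it can be pasted into a fresh PySpark shell as-is. The helper below is hypothetical and not part of pyspark.context; only `sc` is assumed to be available, as in PySpark's doctest globals.

   ```python
   # A hypothetical helper, not part of pyspark.context; it only illustrates
   # the self-contained Examples style (imports and setup inside the doctest).
   def count_lines(sc, path):
       """Count the lines of a text file using an existing SparkContext ``sc``.

       Examples
       --------
       >>> import os
       >>> import tempfile
       >>> with tempfile.TemporaryDirectory() as d:
       ...     p = os.path.join(d, "sample.txt")
       ...     with open(p, "w") as f:
       ...         _ = f.write("hello spark")
       ...     n = count_lines(sc, p)
       >>> n
       1
       """
       return sc.textFile(path).count()
   ```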
   




[GitHub] [spark] HyukjinKwon commented on pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on PR #37517:
URL: https://github.com/apache/spark/pull/37517#issuecomment-1214778411

   Awesome!




[GitHub] [spark] zhengruifeng commented on pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
zhengruifeng commented on PR #37517:
URL: https://github.com/apache/spark/pull/37517#issuecomment-1216022150

   thank you so much for your patient review @HyukjinKwon




[GitHub] [spark] HyukjinKwon closed pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon closed pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained
URL: https://github.com/apache/spark/pull/37517




[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945714038


##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0

Review Comment:
   Just don't bother, yeah. Let's ignore that in this PR.





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945724474


##########
python/pyspark/context.py:
##########
@@ -1412,6 +2081,16 @@ def setJobGroup(self, groupId: str, description: str, interruptOnCancel: bool =
         The application can use :meth:`SparkContext.cancelJobGroup` to cancel all
         running jobs in this group.
 
+        .. versionadded:: 1.0.0

Review Comment:
   ```suggestion
           .. versionadded:: 1.0.0
   
   ```





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945723512


##########
python/pyspark/context.py:
##########
@@ -1468,13 +2163,25 @@ def getLocalProperty(self, key: str) -> Optional[str]:
         """
         Get a local property set in this thread, or null if it is missing. See
         :meth:`setLocalProperty`.
+
+        .. versionadded:: 1.0.0
+
+        See Also
+        --------
+        :meth:`SparkContext.setLocalProperty`
         """
         return self._jsc.getLocalProperty(key)
 
     def setJobDescription(self, value: str) -> None:
         """
         Set a human readable description of the current job.
 
+        .. versionadded:: 2.3.0

Review Comment:
   ```suggestion
           .. versionadded:: 2.3.0
   
   ```



##########
python/pyspark/context.py:
##########
@@ -1457,6 +2140,18 @@ def setLocalProperty(self, key: str, value: str) -> None:
         Set a local property that affects jobs submitted from this thread, such as the
         Spark fair scheduler pool.
 
+        .. versionadded:: 1.0.0

Review Comment:
   ```suggestion
           .. versionadded:: 1.0.0
   
   ```





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945727057


##########
python/pyspark/context.py:
##########
@@ -1520,6 +2214,29 @@ def runJob(
 
         If 'partitions' is not specified, this will run over all partitions.
 
+        .. versionadded:: 1.1.0
+
+        Parameters
+        ----------
+        rdd : :py:class:`pyspark.RDD`
+            target RDD to run tasks on
+        partitionFunc : function
+            a function to run on each partition of the RDD
+        partitions : list, optional, default None

Review Comment:
   ```suggestion
           partitions : list, optional
   ```





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945590476


##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None
         Location where Spark is installed on cluster nodes.
-    pyFiles : list, optional
+    pyFiles : list, optional, default None
         Collection of .zip or .py files to send to the cluster
         and add to PYTHONPATH.  These can be paths on the local file
         system or HDFS, HTTP, HTTPS, or FTP URLs.
-    environment : dict, optional
+    environment : dict, optional, default None
         A dictionary of environment variables to set on
         worker nodes.
-    batchSize : int, optional
+    batchSize : int, optional, default 0
         The number of Python objects represented as a single
         Java object. Set 1 to disable batching, 0 to automatically choose
         the batch size based on object sizes, or -1 to use an unlimited
         batch size
-    serializer : :class:`pyspark.serializers.Serializer`, optional
+    serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
         The serializer for RDDs.
-    conf : :py:class:`pyspark.SparkConf`, optional
+    conf : :py:class:`pyspark.SparkConf`, optional, default None
         An object setting Spark properties.
-    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional
+    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional, default None

Review Comment:
   ```suggestion
       gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional
   ```





[GitHub] [spark] zhengruifeng commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
zhengruifeng commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945679284


##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0

Review Comment:
   ```
   File "/Users/ruifeng.zheng/Dev/spark/python/pyspark/context.py", line ?, in __main__.SparkContext.uiWebUrl
   Failed example:
       sc.uiWebUrl
   Exception raised:
       Traceback (most recent call last):
         File "/Users/ruifeng.zheng/.dev/miniconda3/lib/python3.9/doctest.py", line 1334, in __run
           exec(compile(example.source, filename, "single",
         File "<doctest __main__.SparkContext.uiWebUrl[0]>", line 1, in <module>
           sc.uiWebUrl
         File "/Users/ruifeng.zheng/Dev/spark/python/pyspark/context.py", line 583, in uiWebUrl
           return self._jsc.sc().uiWebUrl().get()
         File "/Users/ruifeng.zheng/Dev/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py", line 1321, in __call__
           return_value = get_return_value(
         File "/Users/ruifeng.zheng/Dev/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py", line 326, in get_return_value
           raise Py4JJavaError(
       py4j.protocol.Py4JJavaError: An error occurred while calling o408.get.
       : java.util.NoSuchElementException: None.get
           at scala.None$.get(Option.scala:529)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   ```





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945722190


##########
python/pyspark/context.py:
##########
@@ -1485,25 +2192,47 @@ def setJobDescription(self, value: str) -> None:
     def sparkUser(self) -> str:
         """
         Get SPARK_USER for user who is running SparkContext.
+
+        .. versionadded:: 1.0.0
         """
         return self._jsc.sc().sparkUser()
 
     def cancelJobGroup(self, groupId: str) -> None:
         """
         Cancel active jobs for the specified group. See :meth:`SparkContext.setJobGroup`.
         for more information.
+
+        .. versionadded:: 1.1.0

Review Comment:
   ```suggestion
           .. versionadded:: 1.1.0
   
   ```





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945678254


##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0

Review Comment:
   ah, nvm. let's remove this then.





[GitHub] [spark] zhengruifeng commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
zhengruifeng commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945689438


##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0

Review Comment:
   spark-shell works well with `spark.ui.enabled=False`
   
   ```scala
   (base) ➜  spark git:(py_doc_sc_self_contained) bin/spark-shell --conf spark.ui.enabled=False 
   Setting default log level to "WARN".
   To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
   22/08/15 20:46:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   Spark context available as 'sc' (master = local[*], app id = local-1660567574597).
   Spark session available as 'spark'.
   Welcome to
         ____              __
        / __/__  ___ _____/ /__
       _\ \/ _ \/ _ `/ __/  '_/
      /___/ .__/\_,_/_/ /_/\_\   version 3.4.0-SNAPSHOT
         /_/
            
   Using Scala version 2.12.16 (OpenJDK 64-Bit Server VM, Java 1.8.0_342)
   Type in expressions to have them evaluated.
   Type :help for more information.
   
   scala> sc.uiWebUrl
   res0: Option[String] = None
   ```
   
   while pyspark will throw an exception during initialization:
   ```python
   (base) ➜  spark git:(py_doc_sc_self_contained) bin/pyspark --conf spark.ui.enabled=False
   Python 3.9.12 (main, Apr  5 2022, 01:52:34) 
   Type 'copyright', 'credits' or 'license' for more information
   IPython 8.4.0 -- An enhanced Interactive Python. Type '?' for help.
   Setting default log level to "WARN".
   To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
   22/08/15 20:45:48 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
   Welcome to
         ____              __
        / __/__  ___ _____/ /__
       _\ \/ _ \/ _ `/ __/  '_/
      /__ / .__/\_,_/_/ /_/\_\   version 3.4.0-SNAPSHOT
         /_/
   
   Using Python version 3.9.12 (main, Apr  5 2022 01:52:34)
   [TerminalIPythonApp] WARNING | Unknown error in handling PYTHONSTARTUP file /Users/ruifeng.zheng/Dev/spark//python/pyspark/shell.py:
   ---------------------------------------------------------------------------
   Py4JJavaError                             Traceback (most recent call last)
   File ~/.dev/miniconda3/lib/python3.9/site-packages/IPython/core/shellapp.py:360, in InteractiveShellApp._exec_file(self, fname, shell_futures)
       356                 self.shell.safe_execfile_ipy(full_filename,
       357                                              shell_futures=shell_futures)
       358             else:
       359                 # default to python, even without extension
   --> 360                 self.shell.safe_execfile(full_filename,
       361                                          self.shell.user_ns,
       362                                          shell_futures=shell_futures,
       363                                          raise_exceptions=True)
       364 finally:
       365     sys.argv = save_argv
   
   File ~/.dev/miniconda3/lib/python3.9/site-packages/IPython/core/interactiveshell.py:2738, in InteractiveShell.safe_execfile(self, fname, exit_ignore, raise_exceptions, shell_futures, *where)
      2736 try:
      2737     glob, loc = (where + (None, ))[:2]
   -> 2738     py3compat.execfile(
      2739         fname, glob, loc,
      2740         self.compile if shell_futures else None)
      2741 except SystemExit as status:
      2742     # If the call was made with 0 or None exit status (sys.exit(0)
      2743     # or sys.exit() ), don't bother showing a traceback, as both of
      (...)
      2749     # For other exit status, we show the exception unless
      2750     # explicitly silenced, but only in short form.
      2751     if status.code:
   
   File ~/.dev/miniconda3/lib/python3.9/site-packages/IPython/utils/py3compat.py:55, in execfile(fname, glob, loc, compiler)
        53 with open(fname, "rb") as f:
        54     compiler = compiler or compile
   ---> 55     exec(compiler(f.read(), fname, "exec"), glob, loc)
   
   File ~/Dev/spark/python/pyspark/shell.py:70, in <module>
        56 print(
        57     r"""Welcome to
        58       ____              __
      (...)
        64     % sc.version
        65 )
        66 print(
        67     "Using Python version %s (%s, %s)"
        68     % (platform.python_version(), platform.python_build()[0], platform.python_build()[1])
        69 )
   ---> 70 print("Spark context Web UI available at %s" % (sc.uiWebUrl))
        71 print("Spark context available as 'sc' (master = %s, app id = %s)." % (sc.master, sc.applicationId))
        72 print("SparkSession available as 'spark'.")
   
   File ~/Dev/spark/python/pyspark/context.py:583, in SparkContext.uiWebUrl(self)
       572 @property
       573 def uiWebUrl(self) -> str:
       574     """Return the URL of the SparkUI instance started by this :class:`SparkContext`
       575 
       576     .. versionadded:: 2.1.0
      (...)
       581     'http://...'
       582     """
   --> 583     return self._jsc.sc().uiWebUrl().get()
   
   File ~/Dev/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args)
      1315 command = proto.CALL_COMMAND_NAME +\
      1316     self.command_header +\
      1317     args_command +\
      1318     proto.END_COMMAND_PART
      1320 answer = self.gateway_client.send_command(command)
   -> 1321 return_value = get_return_value(
      1322     answer, self.gateway_client, self.target_id, self.name)
      1324 for temp_arg in temp_args:
      1325     temp_arg._detach()
   
   File ~/Dev/spark/python/pyspark/sql/utils.py:190, in capture_sql_exception.<locals>.deco(*a, **kw)
       188 def deco(*a: Any, **kw: Any) -> Any:
       189     try:
   --> 190         return f(*a, **kw)
       191     except Py4JJavaError as e:
       192         converted = convert_exception(e.java_exception)
   
   File ~/Dev/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
       324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
       325 if answer[1] == REFERENCE_TYPE:
   --> 326     raise Py4JJavaError(
       327         "An error occurred while calling {0}{1}{2}.\n".
       328         format(target_id, ".", name), value)
       329 else:
       330     raise Py4JError(
       331         "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
       332         format(target_id, ".", name, value))
   
   Py4JJavaError: An error occurred while calling o33.get.
   : java.util.NoSuchElementException: None.get
           at scala.None$.get(Option.scala:529)
           at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
           at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
           at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
           at java.lang.reflect.Method.invoke(Method.java:498)
           at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
           at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
           at py4j.Gateway.invoke(Gateway.java:282)
           at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
           at py4j.commands.CallCommand.execute(CallCommand.java:79)
           at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
           at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
           at java.lang.Thread.run(Thread.java:750)
   ```
   
   let me fix it in a separate PR.
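
   For illustration only, a minimal sketch (not the actual follow-up fix, which is deferred to that separate PR) of how the URL could be probed without hitting `None.get`. It assumes access to the private `_jsc` handle and py4j's proxying of `scala.Option`; the helper name is made up.

   ```python
   from typing import Optional

   from pyspark import SparkContext


   def ui_web_url_or_none(sc: SparkContext) -> Optional[str]:
       """Return the Spark UI URL of ``sc``, or None when the UI is disabled
       (e.g. spark.ui.enabled=False), instead of calling .get() on an empty
       scala.Option as the current property does."""
       jurl = sc._jsc.sc().uiWebUrl()  # a py4j proxy of scala.Option[String]
       return jurl.get() if jurl.isDefined() else None
   ```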
   





[GitHub] [spark] HyukjinKwon commented on pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on PR #37517:
URL: https://github.com/apache/spark/pull/37517#issuecomment-1216017529

   Merged to master.
   
   Thanks @zhengruifeng 




[GitHub] [spark] zhengruifeng commented on pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
zhengruifeng commented on PR #37517:
URL: https://github.com/apache/spark/pull/37517#issuecomment-1214713012

   cc @HyukjinKwon @Yikun 




[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945552408


##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None
         Location where Spark is installed on cluster nodes.
-    pyFiles : list, optional
+    pyFiles : list, optional, default None
         Collection of .zip or .py files to send to the cluster
         and add to PYTHONPATH.  These can be paths on the local file
         system or HDFS, HTTP, HTTPS, or FTP URLs.
-    environment : dict, optional
+    environment : dict, optional, default None
         A dictionary of environment variables to set on
         worker nodes.
-    batchSize : int, optional
+    batchSize : int, optional, default 0
         The number of Python objects represented as a single
         Java object. Set 1 to disable batching, 0 to automatically choose
         the batch size based on object sizes, or -1 to use an unlimited
         batch size
-    serializer : :class:`pyspark.serializers.Serializer`, optional
+    serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`

Review Comment:
   ```suggestion
       serializer : :class:`Serializer`, optional, default :class:`CPickleSerializer`
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None
         Location where Spark is installed on cluster nodes.
-    pyFiles : list, optional
+    pyFiles : list, optional, default None
         Collection of .zip or .py files to send to the cluster
         and add to PYTHONPATH.  These can be paths on the local file
         system or HDFS, HTTP, HTTPS, or FTP URLs.
-    environment : dict, optional
+    environment : dict, optional, default None
         A dictionary of environment variables to set on
         worker nodes.
-    batchSize : int, optional
+    batchSize : int, optional, default 0
         The number of Python objects represented as a single
         Java object. Set 1 to disable batching, 0 to automatically choose
         the batch size based on object sizes, or -1 to use an unlimited
         batch size
-    serializer : :class:`pyspark.serializers.Serializer`, optional
+    serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
         The serializer for RDDs.
-    conf : :py:class:`pyspark.SparkConf`, optional
+    conf : :py:class:`pyspark.SparkConf`, optional, default None
         An object setting Spark properties.
-    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional
+    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional, default None
         Use an existing gateway and JVM, otherwise a new JVM
         will be instantiated. This is only used internally.
-    jsc : :py:class:`py4j.java_gateway.JavaObject`, optional
+    jsc : :py:class:`py4j.java_gateway.JavaObject`, optional, default None
         The JavaSparkContext instance. This is only used internally.
-    profiler_cls : type, optional
+    profiler_cls : type, optional, default :class:`pyspark.profiler.BasicProfiler`

Review Comment:
   ```suggestion
       profiler_cls : type, optional, default :class:`BasicProfiler`
   ```



##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
     @classmethod
     def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
         """
-        Get or instantiate a SparkContext and register it as a singleton object.
+        Get or instantiate a `SparkContext` and register it as a singleton object.
+
+        .. versionadded:: 1.4.0
 
         Parameters
         ----------
-        conf : :py:class:`pyspark.SparkConf`, optional
+        conf : :py:class:`pyspark.SparkConf`, optional, default None
+            `SparkConf` that will be used for initialisation of the `SparkContext`.

Review Comment:
   ```suggestion
               :class:`SparkConf` that will be used for initialization of the :class:`SparkContext`.
   ```



##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
     @classmethod
     def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
         """
-        Get or instantiate a SparkContext and register it as a singleton object.
+        Get or instantiate a `SparkContext` and register it as a singleton object.
+
+        .. versionadded:: 1.4.0
 
         Parameters
         ----------
-        conf : :py:class:`pyspark.SparkConf`, optional
+        conf : :py:class:`pyspark.SparkConf`, optional, default None

Review Comment:
   ```suggestion
           conf : :class:`SparkConf`, optional, default None
   ```



##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
     @classmethod
     def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
         """
-        Get or instantiate a SparkContext and register it as a singleton object.
+        Get or instantiate a `SparkContext` and register it as a singleton object.

Review Comment:
   ```suggestion
           Get or instantiate a :class:`SparkContext` and register it as a singleton object.
   ```



##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
     @classmethod
     def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
         """
-        Get or instantiate a SparkContext and register it as a singleton object.
+        Get or instantiate a `SparkContext` and register it as a singleton object.
+
+        .. versionadded:: 1.4.0
 
         Parameters
         ----------
-        conf : :py:class:`pyspark.SparkConf`, optional
+        conf : :py:class:`pyspark.SparkConf`, optional, default None
+            `SparkConf` that will be used for initialisation of the `SparkContext`.
+
+        Returns
+        -------
+        :class:`pyspark.context.SparkContext`

Review Comment:
   ```suggestion
           :class:`SparkContext`
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None
         Location where Spark is installed on cluster nodes.
-    pyFiles : list, optional
+    pyFiles : list, optional, default None
         Collection of .zip or .py files to send to the cluster
         and add to PYTHONPATH.  These can be paths on the local file
         system or HDFS, HTTP, HTTPS, or FTP URLs.
-    environment : dict, optional
+    environment : dict, optional, default None
         A dictionary of environment variables to set on
         worker nodes.
-    batchSize : int, optional
+    batchSize : int, optional, default 0
         The number of Python objects represented as a single
         Java object. Set 1 to disable batching, 0 to automatically choose
         the batch size based on object sizes, or -1 to use an unlimited
         batch size
-    serializer : :class:`pyspark.serializers.Serializer`, optional
+    serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
         The serializer for RDDs.
-    conf : :py:class:`pyspark.SparkConf`, optional
+    conf : :py:class:`pyspark.SparkConf`, optional, default None
         An object setting Spark properties.
-    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional
+    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional, default None
         Use an existing gateway and JVM, otherwise a new JVM
         will be instantiated. This is only used internally.
-    jsc : :py:class:`py4j.java_gateway.JavaObject`, optional
+    jsc : :py:class:`py4j.java_gateway.JavaObject`, optional, default None
         The JavaSparkContext instance. This is only used internally.
-    profiler_cls : type, optional
+    profiler_cls : type, optional, default :class:`pyspark.profiler.BasicProfiler`
         A class of custom Profiler used to do profiling
-        (default is :class:`pyspark.profiler.BasicProfiler`).
-    udf_profiler_cls : type, optional
+    udf_profiler_cls : type, optional, default :class:`pyspark.profiler.UDFBasicProfiler`

Review Comment:
   ```suggestion
       udf_profiler_cls : type, optional, default :class:`UDFBasicProfiler`
   ```



##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
     @classmethod
     def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
         """
-        Get or instantiate a SparkContext and register it as a singleton object.
+        Get or instantiate a `SparkContext` and register it as a singleton object.
+
+        .. versionadded:: 1.4.0
 
         Parameters
         ----------
-        conf : :py:class:`pyspark.SparkConf`, optional
+        conf : :py:class:`pyspark.SparkConf`, optional, default None
+            `SparkConf` that will be used for initialisation of the `SparkContext`.
+
+        Returns
+        -------
+        :class:`pyspark.context.SparkContext`
+            current `SparkContext`, or a new one if it wasn't created before the function call.

Review Comment:
   ```suggestion
               current :class:`SparkContext`, or a new one if it wasn't created before the function
               call.
   ```



##########
python/pyspark/context.py:
##########
@@ -510,6 +535,12 @@ def setSystemProperty(cls, key: str, value: str) -> None:
     def version(self) -> str:
         """
         The version of Spark on which this application is running.
+
+        .. versionadded:: 1.1.0
+
+        Examples
+        --------
+        >>> version = sc.version

Review Comment:
   ```suggestion
           >>> _ = sc.version
   ```



##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`

Review Comment:
   ```suggestion
           """Return the URL of the SparkUI instance started by this :class:`SparkContext`
   ```



##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
         way as python's built-in range() function. If called with a single argument,
         the argument is interpreted as `end`, and `start` is set to 0.
 
+        .. versionadded:: 1.5.0
+
         Parameters
         ----------
         start : int
             the start value
-        end : int, optional
+        end : int, optional, default None

Review Comment:
   ```suggestion
           end : int, optional
   ```
   
   Let's not document the default value in this case (we should actually describe the default behaviour too, but I would prefer to leave that out of the scope of this PR).
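
   For illustration, a hedged sketch of the convention being asked for here: drop `default None` from the type line and describe the fallback behaviour in the parameter description instead. The function below is invented and not part of PySpark.

   ```python
   # Invented helper mirroring the range()-style calling convention, used only
   # to show the docstring style: no "default None" on the type line.
   def crange(start, end=None, step=1):
       """Return a list covering [start, end) with the given step.

       Parameters
       ----------
       start : int
           the start value, or the end value if `end` is not given
       end : int, optional
           the end value (exclusive); when omitted, the range runs from 0 to `start`
       step : int, optional, default 1
           the incremental step
       """
       if end is None:
           start, end = 0, start
       return list(range(start, end, step))
   ```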



##########
python/pyspark/context.py:
##########
@@ -493,14 +505,27 @@ def setLogLevel(self, logLevel: str) -> None:
         """
         Control our logLevel. This overrides any user-defined log settings.
         Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
+
+        .. versionadded:: 1.4.0
+
+        Parameters
+        ----------
+        logLevel : str
+            The desired log level as a string.
+
+        Examples
+        --------
+        >>> sc.setLogLevel("WARN")
         """
         self._jsc.setLogLevel(logLevel)
 
     @classmethod
     def setSystemProperty(cls, key: str, value: str) -> None:
         """
-        Set a Java system property, such as spark.executor.memory. This must
-        must be invoked before instantiating SparkContext.
+        Set a Java system property, such as `spark.executor.memory`. This must
+        be invoked before instantiating SparkContext.
+
+        .. versionadded:: 0.9.0

Review Comment:
   ```suggestion
           .. versionadded:: 0.9.0
           
           Parameters
           ----------
           key : str
               The key of a new Java system property.
           value : str
               The value of a new Java system property.
   ```



##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0
+        """
         return self._jsc.sc().uiWebUrl().get()
 
     @property
     def startTime(self) -> int:
-        """Return the epoch time when the Spark Context was started."""
+        """Return the epoch time when the `SparkContext` was started.
+
+        .. versionadded:: 1.5.0
+
+        Examples
+        --------
+        >>> start = sc.startTime
+        """
         return self._jsc.startTime()
 
     @property
     def defaultParallelism(self) -> int:
         """
-        Default level of parallelism to use when not given by user (e.g. for
-        reduce tasks)
+        Default level of parallelism to use when not given by user (e.g. for reduce tasks)
+
+        .. versionadded:: 0.7.0
+
+        Examples
+        --------
+        >>> sc.defaultParallelism > 0
+        True
         """
         return self._jsc.sc().defaultParallelism()
 
     @property
     def defaultMinPartitions(self) -> int:
         """
         Default min number of partitions for Hadoop RDDs when not given by user
+
+        .. versionadded:: 1.1.0
+
+        Examples
+        --------
+        >>> sc.defaultMinPartitions > 0
+        True
         """
         return self._jsc.sc().defaultMinPartitions()
 
     def stop(self) -> None:
         """
-        Shut down the SparkContext.
+        Shut down the `SparkContext`.

Review Comment:
   ```suggestion
           Shut down the :class:`SparkContext`.
   ```



##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0
+        """
         return self._jsc.sc().uiWebUrl().get()
 
     @property
     def startTime(self) -> int:
-        """Return the epoch time when the Spark Context was started."""
+        """Return the epoch time when the `SparkContext` was started.
+
+        .. versionadded:: 1.5.0
+
+        Examples
+        --------
+        >>> start = sc.startTime

Review Comment:
   ```suggestion
           >>> _ = sc.startTime
   ```



##########
python/pyspark/context.py:
##########
@@ -579,7 +637,21 @@ def stop(self) -> None:
 
     def emptyRDD(self) -> RDD[Any]:
         """
-        Create an RDD that has no partitions or elements.
+        Create an `RDD` that has no partitions or elements.

Review Comment:
   ```suggestion
           Create an :class:`RDD` that has no partitions or elements.
   ```



##########
python/pyspark/context.py:
##########
@@ -616,6 +694,18 @@ def range(
         [2, 3]
         >>> sc.range(1, 7, 2).collect()
         [1, 3, 5]
+
+        Generate RDD with a negative step

Review Comment:
   ```suggestion
           Generate RDD with a negative step
   
   ```



##########
python/pyspark/context.py:
##########
@@ -616,6 +694,18 @@ def range(
         [2, 3]
         >>> sc.range(1, 7, 2).collect()
         [1, 3, 5]
+
+        Generate RDD with a negative step
+        >>> sc.range(5, 0, -1).collect()
+        [5, 4, 3, 2, 1]
+        >>> sc.range(0, 5, -1).collect()
+        []
+
+        Control the number of partitions

Review Comment:
   ```suggestion
           Control the number of partitions
   
   ```



##########
python/pyspark/context.py:
##########
@@ -628,12 +718,32 @@ def parallelize(self, c: Iterable[T], numSlices: Optional[int] = None) -> RDD[T]
         Distribute a local Python collection to form an RDD. Using range
         is recommended if the input represents a range for performance.
 
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        c : :py:class:`collections.abc.Iterable`
+            iterable collection to distribute
+        numSlices : int, optional, default None

Review Comment:
   ```suggestion
           numSlices : int, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -579,7 +637,21 @@ def stop(self) -> None:
 
     def emptyRDD(self) -> RDD[Any]:
         """
-        Create an RDD that has no partitions or elements.
+        Create an `RDD` that has no partitions or elements.
+
+        .. versionadded:: 1.5.0
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
         way as python's built-in range() function. If called with a single argument,
         the argument is interpreted as `end`, and `start` is set to 0.
 
+        .. versionadded:: 1.5.0
+
         Parameters
         ----------
         start : int
             the start value
-        end : int, optional
+        end : int, optional, default None
             the end value (exclusive)
-        step : int, optional
-            the incremental step (default: 1)
-        numSlices : int, optional
+        step : int, optional, default 1
+            the incremental step
+        numSlices : int, optional, default None

Review Comment:
   ```suggestion
           numSlices : int, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -731,13 +841,51 @@ def pickleFile(self, name: str, minPartitions: Optional[int] = None) -> RDD[Any]
         """
         Load an RDD previously saved using :meth:`RDD.saveAsPickleFile` method.
 
+        .. versionadded:: 1.1.0
+
+        Parameters
+        ----------
+        name : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None
+            suggested minimum number of partitions for the resulting RDD
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -628,12 +718,32 @@ def parallelize(self, c: Iterable[T], numSlices: Optional[int] = None) -> RDD[T]
         Distribute a local Python collection to form an RDD. Using range
         is recommended if the input represents a range for performance.
 
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        c : :py:class:`collections.abc.Iterable`

Review Comment:
   ```suggestion
           c : :class:`collections.abc.Iterable`
   ```



##########
python/pyspark/context.py:
##########
@@ -628,12 +718,32 @@ def parallelize(self, c: Iterable[T], numSlices: Optional[int] = None) -> RDD[T]
         Distribute a local Python collection to form an RDD. Using range
         is recommended if the input represents a range for performance.
 
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        c : :py:class:`collections.abc.Iterable`
+            iterable collection to distribute
+        numSlices : int, optional, default None
+            the number of partitions of the new RDD
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -731,13 +841,51 @@ def pickleFile(self, name: str, minPartitions: Optional[int] = None) -> RDD[Any]
         """
         Load an RDD previously saved using :meth:`RDD.saveAsPickleFile` method.
 
+        .. versionadded:: 1.1.0
+
+        Parameters
+        ----------
+        name : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None

Review Comment:
   ```suggestion
           minPartitions : int, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -849,12 +1100,43 @@ def binaryRecords(self, path: str, recordLength: int) -> RDD[bytes]:
         with the specified numerical format (see ByteBuffer), and the number of
         bytes per record is constant.
 
+        .. versionadded:: 1.3.0
+
         Parameters
         ----------
         path : str
             Directory to the input data files
         recordLength : int
             The length at which to split the records
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -748,21 +896,60 @@ def textFile(
         """
         Read a text file from HDFS, a local file system (available on all
         nodes), or any Hadoop-supported file system URI, and return it as an
-        RDD of Strings.
-        The text files must be encoded as UTF-8.
+        RDD of Strings. The text files must be encoded as UTF-8.
+
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        name : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None
+            suggested minimum number of partitions for the resulting RDD
+        use_unicode : bool, default True
+            If use_unicode is False, the strings will be kept as `str` (encoding
+            as `utf-8`), which is faster and smaller than unicode.
 
-        If use_unicode is False, the strings will be kept as `str` (encoding
-        as `utf-8`), which is faster and smaller than unicode. (Added in
-        Spark 1.2)
+            .. versionadded:: 1.2.0
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -832,9 +1046,46 @@ def binaryFiles(self, path: str, minPartitions: Optional[int] = None) -> RDD[Tup
         in a key-value pair, where the key is the path of each file, the
         value is the content of each file.
 
+        .. versionadded:: 1.3.0
+
+        Parameters
+        ----------
+        path : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None

Review Comment:
   ```suggestion
           minPartitions : int, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -748,21 +896,60 @@ def textFile(
         """
         Read a text file from HDFS, a local file system (available on all
         nodes), or any Hadoop-supported file system URI, and return it as an
-        RDD of Strings.
-        The text files must be encoded as UTF-8.
+        RDD of Strings. The text files must be encoded as UTF-8.
+
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        name : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None

Review Comment:
   ```suggestion
           minPartitions : int, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -801,21 +986,50 @@ def wholeTextFiles(
             ...
             (a-hdfs-path/part-nnnnn, its content)
 
+        Parameters
+        ----------
+        path : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None
+            suggested minimum number of partitions for the resulting RDD
+        use_unicode : bool, default True
+            If use_unicode is False, the strings will be kept as `str` (encoding
+            as `utf-8`), which is faster and smaller than unicode.
+
+            .. versionadded:: 1.2.0
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`
+            RDD representing path-content pairs from the file(s).
+
         Notes
         -----
         Small files are preferred, as each file will be loaded fully in memory.
 
+        See Also
+        --------
+        :meth:`RDD.saveAsTextFile`
+        :meth:`SparkContext.textFile`
+
         Examples
         --------
-        >>> dirPath = os.path.join(tempdir, "files")
-        >>> os.mkdir(dirPath)
-        >>> with open(os.path.join(dirPath, "1.txt"), "w") as file1:
-        ...    _ = file1.write("1")
-        >>> with open(os.path.join(dirPath, "2.txt"), "w") as file2:
-        ...    _ = file2.write("2")
-        >>> textFiles = sc.wholeTextFiles(dirPath)
-        >>> sorted(textFiles.collect())
-        [('.../1.txt', '1'), ('.../2.txt', '2')]
+        >>> import os
+        >>> import tempfile
+        >>> with tempfile.TemporaryDirectory() as d:
+        ...     # Write a temporary text file
+        ...     with open(os.path.join(d, "1.txt"), "w") as f:
+        ...         _ = f.write("123")
+        ...
+        ...     # Write another temporary text file
+        ...     with open(os.path.join(d, "2.txt"), "w") as f:
+        ...         _ = f.write("xyz")
+        ...
+        ...     collected = sorted(sc.wholeTextFiles(d).collect())
+

Review Comment:
   ```suggestion
   ```



##########
python/pyspark/context.py:
##########
@@ -801,21 +986,50 @@ def wholeTextFiles(
             ...
             (a-hdfs-path/part-nnnnn, its content)
 
+        Parameters
+        ----------
+        path : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None
+            suggested minimum number of partitions for the resulting RDD
+        use_unicode : bool, default True
+            If use_unicode is False, the strings will be kept as `str` (encoding
+            as `utf-8`), which is faster and smaller than unicode.
+
+            .. versionadded:: 1.2.0
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -832,9 +1046,46 @@ def binaryFiles(self, path: str, minPartitions: Optional[int] = None) -> RDD[Tup
         in a key-value pair, where the key is the path of each file, the
         value is the content of each file.
 
+        .. versionadded:: 1.3.0
+
+        Parameters
+        ----------
+        path : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None
+            suggested minimum number of partitions for the resulting RDD
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
             3. If this fails, the fallback is to call 'toString' on each key and value
             4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
 
+        .. versionadded:: 1.3.0
+
         Parameters
         ----------
         path : str
             path to sequencefile
-        keyClass: str, optional
+        keyClass: str, optional, default None
             fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
-        valueClass : str, optional
+        valueClass : str, optional, default None
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-        valueConverter : str, optional
+        valueConverter : str, optional, default None

Review Comment:
   ```suggestion
           valueConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
             3. If this fails, the fallback is to call 'toString' on each key and value
             4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
 
+        .. versionadded:: 1.3.0
+
         Parameters
         ----------
         path : str
             path to sequencefile
-        keyClass: str, optional
+        keyClass: str, optional, default None
             fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
-        valueClass : str, optional
+        valueClass : str, optional, default None
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None

Review Comment:
   ```suggestion
           keyConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
             3. If this fails, the fallback is to call 'toString' on each key and value
             4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
 
+        .. versionadded:: 1.3.0
+
         Parameters
         ----------
         path : str
             path to sequencefile
-        keyClass: str, optional
+        keyClass: str, optional, default None
             fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
-        valueClass : str, optional
+        valueClass : str, optional, default None
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualifiedname of a function returning value WritableConverter
-        minSplits : int, optional
+        minSplits : int, optional, default None

Review Comment:
   ```suggestion
           minSplits : int, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
             3. If this fails, the fallback is to call 'toString' on each key and value
             4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
 
+        .. versionadded:: 1.3.0
+
         Parameters
         ----------
         path : str
             path to sequencefile
-        keyClass: str, optional
+        keyClass: str, optional, default None
             fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
-        valueClass : str, optional
+        valueClass : str, optional, default None

Review Comment:
   ```suggestion
           valueClass : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
             3. If this fails, the fallback is to call 'toString' on each key and value
             4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
 
+        .. versionadded:: 1.3.0
+
         Parameters
         ----------
         path : str
             path to sequencefile
-        keyClass: str, optional
+        keyClass: str, optional, default None
             fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
-        valueClass : str, optional
+        valueClass : str, optional, default None
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualifiedname of a function returning value WritableConverter
-        minSplits : int, optional
+        minSplits : int, optional, default None
             minimum splits in dataset (default min(2, sc.defaultParallelism))
-        batchSize : int, optional
+        batchSize : int, optional, default 0
             The number of Python objects represented as a single
             Java object. (default 0, choose batchSize automatically)
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
             3. If this fails, the fallback is to call 'toString' on each key and value
             4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
 
+        .. versionadded:: 1.3.0
+
         Parameters
         ----------
         path : str
             path to sequencefile
-        keyClass: str, optional
+        keyClass: str, optional, default None

Review Comment:
   ```suggestion
           keyClass: str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1007,17 +1367,66 @@ def newAPIHadoopRDD(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None

Review Comment:
   ```suggestion
           keyConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1007,17 +1367,66 @@ def newAPIHadoopRDD(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
             (None by default)
-        valueConverter : str, optional
+        valueConverter : str, optional, default None

Review Comment:
   ```suggestion
           valueConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1062,17 +1475,53 @@ def hadoopFile(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None

Review Comment:
   ```suggestion
           keyConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1007,17 +1367,66 @@ def newAPIHadoopRDD(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
             (None by default)
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualified name of a function returning value WritableConverter
             (None by default)
-        conf : dict, optional
+        conf : dict, optional, default None

Review Comment:
   ```suggestion
           conf : dict, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -953,18 +1273,56 @@ def newAPIHadoopFile(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
             None by default
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualified name of a function returning value WritableConverter
             None by default
-        conf : dict, optional
+        conf : dict, optional, default None
             Hadoop configuration, passed in as a dict
             None by default
-        batchSize : int, optional
+        batchSize : int, optional, default 0
             The number of Python objects represented as a single
             Java object. (default 0, choose batchSize automatically)
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -1007,17 +1367,66 @@ def newAPIHadoopRDD(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
             (None by default)
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualified name of a function returning value WritableConverter
             (None by default)
-        conf : dict, optional
+        conf : dict, optional, default None
             Hadoop configuration, passed in as a dict (None by default)
-        batchSize : int, optional
+        batchSize : int, optional, default 0
             The number of Python objects represented as a single
             Java object. (default 0, choose batchSize automatically)
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -1062,17 +1475,53 @@ def hadoopFile(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-            (None by default)
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualified name of a function returning value WritableConverter
-            (None by default)
-        conf : dict, optional
-            Hadoop configuration, passed in as a dict (None by default)
-        batchSize : int, optional
+        conf : dict, optional, default None
+            Hadoop configuration, passed in as a dict
+        batchSize : int, optional, default 0
             The number of Python objects represented as a single
             Java object. (default 0, choose batchSize automatically)
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -1062,17 +1475,53 @@ def hadoopFile(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-            (None by default)
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualified name of a function returning value WritableConverter
-            (None by default)
-        conf : dict, optional
-            Hadoop configuration, passed in as a dict (None by default)
-        batchSize : int, optional
+        conf : dict, optional, default None

Review Comment:
   ```suggestion
           conf : dict, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1062,17 +1475,53 @@ def hadoopFile(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-            (None by default)
-        valueConverter : str, optional
+        valueConverter : str, optional, default None

Review Comment:
   ```suggestion
           valueConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1115,17 +1566,63 @@ def hadoopRDD(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-            (None by default)
-        valueConverter : str, optional
+        valueConverter : str, optional, default None

Review Comment:
   ```suggestion
           valueConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1115,17 +1566,63 @@ def hadoopRDD(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None

Review Comment:
   ```suggestion
           keyConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1194,6 +1703,30 @@ def broadcast(self, value: T) -> "Broadcast[T]":
         Broadcast a read-only variable to the cluster, returning a :class:`Broadcast`
         object for reading it in distributed functions. The variable will
         be sent to each cluster only once.
+
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        value : T
+            value to broadcast to the Spark nodes
+
+        Returns
+        -------
+        :py:class:`pyspark.Broadcast`
+            `Broadcast` object, a read-only variable cached on each machine

Review Comment:
   ```suggestion
               :class:`Broadcast` object, a read-only variable cached on each machine
   ```



##########
python/pyspark/context.py:
##########
@@ -1206,6 +1739,39 @@ def accumulator(
         data type if provided. Default AccumulatorParams are used for integers
         and floating-point numbers if you do not provide one. For other types,
         a custom AccumulatorParam can be used.
+
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        value : T
+            initialized value
+        accum_param : :py:class:`pyspark.AccumulatorParam`, optional, default None
+            helper object to define how to add values
+
+        Returns
+        -------
+        :py:class:`pyspark.Accumulator`

Review Comment:
   ```suggestion
           :class:`Accumulator`
   ```



##########
python/pyspark/context.py:
##########
@@ -1115,17 +1566,63 @@ def hadoopRDD(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-            (None by default)
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualified name of a function returning value WritableConverter
-            (None by default)
-        conf : dict, optional
-            Hadoop configuration, passed in as a dict (None by default)
-        batchSize : int, optional
+        conf : dict, optional, default None
+            Hadoop configuration, passed in as a dict
+        batchSize : int, optional, default 0
             The number of Python objects represented as a single
             Java object. (default 0, choose batchSize automatically)
+
+        Returns
+        -------
+        :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -1115,17 +1566,63 @@ def hadoopRDD(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-            (None by default)
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualified name of a function returning value WritableConverter
-            (None by default)
-        conf : dict, optional
-            Hadoop configuration, passed in as a dict (None by default)
-        batchSize : int, optional
+        conf : dict, optional, default None

Review Comment:
   ```suggestion
           conf : dict, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1206,6 +1739,39 @@ def accumulator(
         data type if provided. Default AccumulatorParams are used for integers
         and floating-point numbers if you do not provide one. For other types,
         a custom AccumulatorParam can be used.
+
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        value : T
+            initialized value
+        accum_param : :py:class:`pyspark.AccumulatorParam`, optional, default None

Review Comment:
   ```suggestion
           accum_param : :class:`pyspark.AccumulatorParam`, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1194,6 +1703,30 @@ def broadcast(self, value: T) -> "Broadcast[T]":
         Broadcast a read-only variable to the cluster, returning a :class:`Broadcast`
         object for reading it in distributed functions. The variable will
         be sent to each cluster only once.
+
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        value : T
+            value to broadcast to the Spark nodes
+
+        Returns
+        -------
+        :py:class:`pyspark.Broadcast`

Review Comment:
   ```suggestion
           :class:`Broadcast`
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None

Review Comment:
   ```suggestion
       appName : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None

Review Comment:
   ```suggestion
       sparkHome : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None

Review Comment:
   ```suggestion
       master : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -1562,12 +2293,22 @@ def dump_profiles(self, path: str) -> None:
             )
 
     def getConf(self) -> SparkConf:
+        """Return a copy of this SparkContext's configuration :py:class:`pyspark.SparkConf`.
+
+        .. versionadded:: 2.1.0
+        """
         conf = SparkConf()
         conf.setAll(self._conf.getAll())
         return conf
 
     @property
     def resources(self) -> Dict[str, ResourceInformation]:
+        """
+        Return the resource information of this SparkContext.

Review Comment:
   ```suggestion
           Return the resource information of this :class:`SparkContext`.
   ```



##########
python/pyspark/context.py:
##########
@@ -1562,12 +2293,22 @@ def dump_profiles(self, path: str) -> None:
             )
 
     def getConf(self) -> SparkConf:
+        """Return a copy of this SparkContext's configuration :py:class:`pyspark.SparkConf`.

Review Comment:
   ```suggestion
           """Return a copy of this SparkContext's configuration :class:`SparkConf`.
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None
         Location where Spark is installed on cluster nodes.
-    pyFiles : list, optional
+    pyFiles : list, optional, default None

Review Comment:
   ```suggestion
       pyFiles : list, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None
         Location where Spark is installed on cluster nodes.
-    pyFiles : list, optional
+    pyFiles : list, optional, default None
         Collection of .zip or .py files to send to the cluster
         and add to PYTHONPATH.  These can be paths on the local file
         system or HDFS, HTTP, HTTPS, or FTP URLs.
-    environment : dict, optional
+    environment : dict, optional, default None
         A dictionary of environment variables to set on
         worker nodes.
-    batchSize : int, optional
+    batchSize : int, optional, default 0
         The number of Python objects represented as a single
         Java object. Set 1 to disable batching, 0 to automatically choose
         the batch size based on object sizes, or -1 to use an unlimited
         batch size
-    serializer : :class:`pyspark.serializers.Serializer`, optional
+    serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
         The serializer for RDDs.
-    conf : :py:class:`pyspark.SparkConf`, optional
+    conf : :py:class:`pyspark.SparkConf`, optional, default None

Review Comment:
   ```suggestion
       conf : :class:`SparkConf`, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None
         Location where Spark is installed on cluster nodes.
-    pyFiles : list, optional
+    pyFiles : list, optional, default None
         Collection of .zip or .py files to send to the cluster
         and add to PYTHONPATH.  These can be paths on the local file
         system or HDFS, HTTP, HTTPS, or FTP URLs.
-    environment : dict, optional
+    environment : dict, optional, default None

Review Comment:
   ```suggestion
       environment : dict, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None
         Location where Spark is installed on cluster nodes.
-    pyFiles : list, optional
+    pyFiles : list, optional, default None
         Collection of .zip or .py files to send to the cluster
         and add to PYTHONPATH.  These can be paths on the local file
         system or HDFS, HTTP, HTTPS, or FTP URLs.
-    environment : dict, optional
+    environment : dict, optional, default None
         A dictionary of environment variables to set on
         worker nodes.
-    batchSize : int, optional
+    batchSize : int, optional, default 0
         The number of Python objects represented as a single
         Java object. Set 1 to disable batching, 0 to automatically choose
         the batch size based on object sizes, or -1 to use an unlimited
         batch size
-    serializer : :class:`pyspark.serializers.Serializer`, optional
+    serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
         The serializer for RDDs.
-    conf : :py:class:`pyspark.SparkConf`, optional
+    conf : :py:class:`pyspark.SparkConf`, optional, default None
         An object setting Spark properties.
-    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional
+    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional, default None

Review Comment:
   ```suggestion
       gateway : :class:`py4j.java_gateway.JavaGateway`,  optional
   ```



##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
     @classmethod
     def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
         """
-        Get or instantiate a SparkContext and register it as a singleton object.
+        Get or instantiate a `SparkContext` and register it as a singleton object.
+
+        .. versionadded:: 1.4.0
 
         Parameters
         ----------
-        conf : :py:class:`pyspark.SparkConf`, optional
+        conf : :py:class:`pyspark.SparkConf`, optional, default None
+            `SparkConf` that will be used for initialisation of the `SparkContext`.
+
+        Returns
+        -------
+        :class:`pyspark.context.SparkContext`
+            current `SparkContext`, or a new one if it wasn't created before the function call.
+
+        Examples
+        --------
+        >>> from pyspark.context import SparkContext

Review Comment:
   ```suggestion
   ```
   
   This is automatically imported in the PySpark shell, so let's remove it.
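   
   For example, in a plain PySpark shell session (a quick sketch; the values shown assume the default shell app name):
   
   ```python
   >>> sc.appName             # `sc` already exists; no import or construction needed
   'PySparkShell'
   >>> sc.range(3).collect()  # and it is immediately usable
   [0, 1, 2]
   ```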



##########
python/pyspark/context.py:
##########
@@ -493,14 +505,27 @@ def setLogLevel(self, logLevel: str) -> None:
         """
         Control our logLevel. This overrides any user-defined log settings.
         Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
+
+        .. versionadded:: 1.4.0
+
+        Parameters
+        ----------
+        logLevel : str
+            The desired log level as a string.
+
+        Examples
+        --------
+        >>> sc.setLogLevel("WARN")
         """
         self._jsc.setLogLevel(logLevel)
 
     @classmethod
     def setSystemProperty(cls, key: str, value: str) -> None:
         """
-        Set a Java system property, such as spark.executor.memory. This must
-        must be invoked before instantiating SparkContext.
+        Set a Java system property, such as `spark.executor.memory`. This must
+        be invoked before instantiating SparkContext.

Review Comment:
   ```suggestion
           be invoked before instantiating :class:`SparkContext`.
   ```



##########
python/pyspark/context.py:
##########
@@ -1468,13 +2136,21 @@ def getLocalProperty(self, key: str) -> Optional[str]:
         """
         Get a local property set in this thread, or null if it is missing. See
         :meth:`setLocalProperty`.
+
+        .. versionadded:: 1.0.0
+
+        See Also
+        --------
+        :meth:`SparkContext.setLocalProperty`
         """
         return self._jsc.getLocalProperty(key)
 
     def setJobDescription(self, value: str) -> None:
         """
         Set a human readable description of the current job.
 
+        .. versionadded:: 2.3.0
+

Review Comment:
   ```suggestion
           Parameters
           ----------
           value : str
               The job description to set.
   
   ```



##########
python/pyspark/context.py:
##########
@@ -99,39 +99,37 @@ class SparkContext:
 
     Parameters
     ----------
-    master : str, optional
+    master : str, optional, default None
         Cluster URL to connect to (e.g. mesos://host:port, spark://host:port, local[4]).
-    appName : str, optional
+    appName : str, optional, default None
         A name for your job, to display on the cluster web UI.
-    sparkHome : str, optional
+    sparkHome : str, optional, default None
         Location where Spark is installed on cluster nodes.
-    pyFiles : list, optional
+    pyFiles : list, optional, default None
         Collection of .zip or .py files to send to the cluster
         and add to PYTHONPATH.  These can be paths on the local file
         system or HDFS, HTTP, HTTPS, or FTP URLs.
-    environment : dict, optional
+    environment : dict, optional, default None
         A dictionary of environment variables to set on
         worker nodes.
-    batchSize : int, optional
+    batchSize : int, optional, default 0
         The number of Python objects represented as a single
         Java object. Set 1 to disable batching, 0 to automatically choose
         the batch size based on object sizes, or -1 to use an unlimited
         batch size
-    serializer : :class:`pyspark.serializers.Serializer`, optional
+    serializer : :class:`pyspark.serializers.Serializer`, optional, default `CPickleSerializer`
         The serializer for RDDs.
-    conf : :py:class:`pyspark.SparkConf`, optional
+    conf : :py:class:`pyspark.SparkConf`, optional, default None
         An object setting Spark properties.
-    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional
+    gateway : :py:class:`py4j.java_gateway.JavaGateway`,  optional, default None
         Use an existing gateway and JVM, otherwise a new JVM
         will be instantiated. This is only used internally.
-    jsc : :py:class:`py4j.java_gateway.JavaObject`, optional
+    jsc : :py:class:`py4j.java_gateway.JavaObject`, optional, default None

Review Comment:
   ```suggestion
       jsc : :py:class:`py4j.java_gateway.JavaObject`, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -801,21 +986,50 @@ def wholeTextFiles(
             ...
             (a-hdfs-path/part-nnnnn, its content)
 
+        Parameters
+        ----------
+        path : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None

Review Comment:
   ```suggestion
           minPartitions : int, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0
+        """
         return self._jsc.sc().uiWebUrl().get()
 
     @property
     def startTime(self) -> int:
-        """Return the epoch time when the Spark Context was started."""
+        """Return the epoch time when the `SparkContext` was started.

Review Comment:
   ```suggestion
           """Return the epoch time when the :class:`SparkContext` was started.
   ```



##########
python/pyspark/context.py:
##########
@@ -1457,6 +2119,12 @@ def setLocalProperty(self, key: str, value: str) -> None:
         Set a local property that affects jobs submitted from this thread, such as the
         Spark fair scheduler pool.
 
+        .. versionadded:: 1.0.0
+

Review Comment:
   ```suggestion
           Parameters
           ----------
           key : str
               The key of the local property to set.
           value : str
               The value of the local property to set.
   
   ```



##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
         way as python's built-in range() function. If called with a single argument,
         the argument is interpreted as `end`, and `start` is set to 0.
 
+        .. versionadded:: 1.5.0
+
         Parameters
         ----------
         start : int
             the start value
-        end : int, optional
+        end : int, optional, default None
             the end value (exclusive)
-        step : int, optional
-            the incremental step (default: 1)
-        numSlices : int, optional
+        step : int, optional, default 1
+            the incremental step
+        numSlices : int, optional, default None
             the number of partitions of the new RDD
 
         Returns
         -------
         :py:class:`pyspark.RDD`
             An RDD of int
 
+        See Also
+        --------
+        :meth:`SparkSession.range`

Review Comment:
   ```suggestion
           :meth:`pyspark.sql.SparkSession.range`
   ```
   
   I haven't built the docs against this PR, but you would probably need the fully qualified path because this class is not imported within this file.
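   
   As a quick sanity check (a sketch only; it just confirms the fully qualified target is importable and real):
   
   ```python
   >>> from pyspark.sql import SparkSession  # the path the cross-reference points at
   >>> callable(SparkSession.range)
   True
   ```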



##########
python/pyspark/context.py:
##########
@@ -1485,25 +2161,43 @@ def setJobDescription(self, value: str) -> None:
     def sparkUser(self) -> str:
         """
         Get SPARK_USER for user who is running SparkContext.
+
+        .. versionadded:: 1.0.0
         """
         return self._jsc.sc().sparkUser()
 
     def cancelJobGroup(self, groupId: str) -> None:
         """
         Cancel active jobs for the specified group. See :meth:`SparkContext.setJobGroup`.
         for more information.
+
+        .. versionadded:: 1.1.0
+

Review Comment:
   ```suggestion
           Parameters
           ----------
           groupId : str
               The group ID to cancel the job.
   
   ```



##########
python/pyspark/context.py:
##########
@@ -1412,6 +2068,8 @@ def setJobGroup(self, groupId: str, description: str, interruptOnCancel: bool =
         The application can use :meth:`SparkContext.cancelJobGroup` to cancel all
         running jobs in this group.
 
+        .. versionadded:: 1.0.0
+

Review Comment:
   ```suggestion
           Parameters
           ----------
           groupId : str
               The group ID to assign.
           description : str
               The description to set for the job group.
           interruptOnCancel : bool, optional, default False
               whether to interrupt jobs on job cancellation.
   
   ```



##########
python/pyspark/context.py:
##########
@@ -801,21 +986,50 @@ def wholeTextFiles(
             ...
             (a-hdfs-path/part-nnnnn, its content)
 
+        Parameters
+        ----------
+        path : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None
+            suggested minimum number of partitions for the resulting RDD
+        use_unicode : bool, default True
+            If use_unicode is False, the strings will be kept as `str` (encoding

Review Comment:
   ```suggestion
               If `use_unicode` is False, the strings will be kept as `str` (encoding
   ```



##########
python/pyspark/context.py:
##########
@@ -953,18 +1273,56 @@ def newAPIHadoopFile(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
             None by default
-        valueConverter : str, optional
+        valueConverter : str, optional, default None

Review Comment:
   ```suggestion
           valueConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -628,12 +718,32 @@ def parallelize(self, c: Iterable[T], numSlices: Optional[int] = None) -> RDD[T]
         Distribute a local Python collection to form an RDD. Using range
         is recommended if the input represents a range for performance.
 
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        c : :py:class:`collections.abc.Iterable`
+            iterable collection to distribute
+        numSlices : int, optional, default None

Review Comment:
   ditto



##########
python/pyspark/context.py:
##########
@@ -748,21 +896,60 @@ def textFile(
         """
         Read a text file from HDFS, a local file system (available on all
         nodes), or any Hadoop-supported file system URI, and return it as an
-        RDD of Strings.
-        The text files must be encoded as UTF-8.
+        RDD of Strings. The text files must be encoded as UTF-8.
+
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        name : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None
+            suggested minimum number of partitions for the resulting RDD
+        use_unicode : bool, default True
+            If use_unicode is False, the strings will be kept as `str` (encoding

Review Comment:
   ```suggestion
               If `use_unicode` is False, the strings will be kept as `str` (encoding
   ```



##########
python/pyspark/context.py:
##########
@@ -953,18 +1273,56 @@ def newAPIHadoopFile(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
             None by default
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualified name of a function returning value WritableConverter
             None by default
-        conf : dict, optional
+        conf : dict, optional, default None

Review Comment:
   ```suggestion
           conf : dict, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -953,18 +1273,56 @@ def newAPIHadoopFile(
         valueClass : str
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None

Review Comment:
   ```suggestion
           keyConverter : str, optional
   ```



##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0

Review Comment:
   ```suggestion
           .. versionadded:: 2.1.0
   
           Examples
           --------
           >>> sc.uiWebUrl
           'http://...
   ```



##########
python/pyspark/context.py:
##########
@@ -731,13 +841,51 @@ def pickleFile(self, name: str, minPartitions: Optional[int] = None) -> RDD[Any]
         """
         Load an RDD previously saved using :meth:`RDD.saveAsPickleFile` method.
 
+        .. versionadded:: 1.1.0
+
+        Parameters
+        ----------
+        name : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None

Review Comment:
   ditto



##########
python/pyspark/context.py:
##########
@@ -493,14 +505,27 @@ def setLogLevel(self, logLevel: str) -> None:
         """
         Control our logLevel. This overrides any user-defined log settings.
         Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
+
+        .. versionadded:: 1.4.0
+
+        Parameters
+        ----------
+        logLevel : str
+            The desired log level as a string.
+
+        Examples
+        --------
+        >>> sc.setLogLevel("WARN")

Review Comment:
   ```suggestion
           >>> sc.setLogLevel("WARN")  # doctest: +SKIP
   ```



##########
python/pyspark/context.py:
##########
@@ -748,21 +896,60 @@ def textFile(
         """
         Read a text file from HDFS, a local file system (available on all
         nodes), or any Hadoop-supported file system URI, and return it as an
-        RDD of Strings.
-        The text files must be encoded as UTF-8.
+        RDD of Strings. The text files must be encoded as UTF-8.
+
+        .. versionadded:: 0.7.0
+
+        Parameters
+        ----------
+        name : str
+            directory to the input data files, the path can be comma separated
+            paths as a list of inputs
+        minPartitions : int, optional, default None

Review Comment:
   ditto



##########
python/pyspark/context.py:
##########
@@ -1520,6 +2214,29 @@ def runJob(
 
         If 'partitions' is not specified, this will run over all partitions.
 
+        .. versionadded:: 1.1.0
+
+        Parameters
+        ----------
+        rdd : :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           rdd : :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
         way as python's built-in range() function. If called with a single argument,
         the argument is interpreted as `end`, and `start` is set to 0.
 
+        .. versionadded:: 1.5.0
+
         Parameters
         ----------
         start : int
             the start value
-        end : int, optional
+        end : int, optional, default None
             the end value (exclusive)
-        step : int, optional
-            the incremental step (default: 1)
-        numSlices : int, optional
+        step : int, optional, default 1
+            the incremental step
+        numSlices : int, optional, default None
             the number of partitions of the new RDD
 
         Returns
         -------
         :py:class:`pyspark.RDD`

Review Comment:
   ```suggestion
           :class:`RDD`
   ```



##########
python/pyspark/context.py:
##########
@@ -493,14 +505,27 @@ def setLogLevel(self, logLevel: str) -> None:
         """
         Control our logLevel. This overrides any user-defined log settings.
         Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
+
+        .. versionadded:: 1.4.0
+
+        Parameters
+        ----------
+        logLevel : str
+            The desired log level as a string.
+
+        Examples
+        --------
+        >>> sc.setLogLevel("WARN")

Review Comment:
   Let's probably add `# doctest: +SKIP` because it will affect the logging in other unit tests.



##########
python/pyspark/context.py:
##########
@@ -592,22 +664,28 @@ def range(
         way as python's built-in range() function. If called with a single argument,
         the argument is interpreted as `end`, and `start` is set to 0.
 
+        .. versionadded:: 1.5.0
+
         Parameters
         ----------
         start : int
             the start value
-        end : int, optional
+        end : int, optional, default None
             the end value (exclusive)
-        step : int, optional
-            the incremental step (default: 1)
-        numSlices : int, optional
+        step : int, optional, default 1
+            the incremental step
+        numSlices : int, optional, default None

Review Comment:
   ditto



##########
python/pyspark/context.py:
##########
@@ -888,24 +1170,60 @@ def sequenceFile(
             3. If this fails, the fallback is to call 'toString' on each key and value
             4. :class:`CPickleSerializer` is used to deserialize pickled objects on the Python side
 
+        .. versionadded:: 1.3.0
+
         Parameters
         ----------
         path : str
             path to sequencefile
-        keyClass: str, optional
+        keyClass: str, optional, default None
             fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
-        valueClass : str, optional
+        valueClass : str, optional, default None
             fully qualified classname of value Writable class
             (e.g. "org.apache.hadoop.io.LongWritable")
-        keyConverter : str, optional
+        keyConverter : str, optional, default None
             fully qualified name of a function returning key WritableConverter
-        valueConverter : str, optional
+        valueConverter : str, optional, default None
             fully qualifiedname of a function returning value WritableConverter
-        minSplits : int, optional
+        minSplits : int, optional, default None

Review Comment:
   Let's just remove the `default None` case for now; `optional` implies it's `None` by default anyway.
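   
   So an entry would simply read, for example (following the numpydoc style the file already uses):
   
   ```
           minSplits : int, optional
               minimum splits in dataset (default min(2, sc.defaultParallelism))
   ```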





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945602083


##########
python/pyspark/context.py:
##########
@@ -1520,6 +2214,29 @@ def runJob(
 
         If 'partitions' is not specified, this will run over all partitions.
 
+        .. versionadded:: 1.1.0
+
+        Parameters
+        ----------
+        rdd : :py:class:`pyspark.RDD`
+            target RDD to run tasks on
+        partitionFunc : function
+            a function to run on each partition of the RDD
+        partitions : list, optional, default None

Review Comment:
   ```suggestion
           partitions : list, optional
   ```





[GitHub] [spark] HyukjinKwon commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
HyukjinKwon commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945727496


##########
python/pyspark/context.py:
##########
@@ -477,11 +475,25 @@ def __exit__(
     @classmethod
     def getOrCreate(cls, conf: Optional[SparkConf] = None) -> "SparkContext":
         """
-        Get or instantiate a SparkContext and register it as a singleton object.
+        Get or instantiate a :class:`SparkContext` and register it as a singleton object.
+
+        .. versionadded:: 1.4.0
 
         Parameters
         ----------
-        conf : :py:class:`pyspark.SparkConf`, optional
+        conf : :class:`SparkConf`, optional, default None

Review Comment:
   ```suggestion
           conf : :class:`SparkConf`, optional
   ```





[GitHub] [spark] zhengruifeng commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
zhengruifeng commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945683416


##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0

Review Comment:
   Enabling `spark.ui.enabled` works locally for me.
   I think it is fine if this `sc` is only used in `context.py`.
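   
   For reference, a minimal sketch of how the doctest `sc` could be created with the UI on (the master and app name here are illustrative, not necessarily what `_test()` uses):
   
   ```python
   from pyspark import SparkConf, SparkContext
   
   # Turn the web UI on so that sc.uiWebUrl has a value the doctest can print.
   conf = SparkConf().set("spark.ui.enabled", "true")
   sc = SparkContext("local[4]", "context-doctests", conf=conf)
   print(sc.uiWebUrl)  # e.g. 'http://<driver-host>:4040' on a default setup
   ```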





[GitHub] [spark] zhengruifeng commented on a diff in pull request #37517: [SPARK-40077][PYTHON][DOCS] Make pyspark.context examples self-contained

Posted by GitBox <gi...@apache.org>.
zhengruifeng commented on code in PR #37517:
URL: https://github.com/apache/spark/pull/37517#discussion_r945662857


##########
python/pyspark/context.py:
##########
@@ -531,32 +564,57 @@ def applicationId(self) -> str:
 
     @property
     def uiWebUrl(self) -> str:
-        """Return the URL of the SparkUI instance started by this SparkContext"""
+        """Return the URL of the SparkUI instance started by this `SparkContext`
+
+        .. versionadded:: 2.1.0

Review Comment:
   Haha, I have tried this example; it throws an exception since `self._jsc.sc().uiWebUrl()` is None in this case.
   Let me try to enable the Spark UI.
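   
   A sketch of the failure mode being described (assuming the UI is disabled, so the underlying Scala `Option` is empty; nothing below is exact output):
   
   ```python
   opt = sc._jsc.sc().uiWebUrl()  # py4j handle to a Scala Option[String]
   opt.isDefined()                # False when spark.ui.enabled=false
   opt.get()                      # raises Py4JJavaError ("None.get"), which is what the property hits
   ```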


