Posted to commits@spark.apache.org by ya...@apache.org on 2023/06/28 03:16:56 UTC

[spark] branch master updated: [SPARK-44182][DOCS] Use Spark version variables in Python and Spark Connect installation docs

This is an automated email from the ASF dual-hosted git repository.

yao pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new f00de6f77a8 [SPARK-44182][DOCS] Use Spark version variables in Python and Spark Connect installation docs
f00de6f77a8 is described below

commit f00de6f77a80182215d0f3c07441849f2654b210
Author: Dongjoon Hyun <do...@apache.org>
AuthorDate: Wed Jun 28 11:16:42 2023 +0800

    [SPARK-44182][DOCS] Use Spark version variables in Python and Spark Connect installation docs
    
    ### What changes were proposed in this pull request?
    
    This PR aims to use Spark version placeholders in the Python and Spark Connect installation docs:
    - `site.SPARK_VERSION_SHORT` in `md` files
    - `|release|` in `rst` files
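
    As a quick sanity check (hypothetical, not part of this patch), the touched files can be grepped for the previously hardcoded version string; with the placeholders in place this should print nothing:

        # Hypothetical check: the old docs pinned 3.4.0, so no matches are expected now
        grep -n "3\.4\.0" docs/spark-connect-overview.md python/docs/source/getting_started/install.rst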
    
    ### Why are the changes needed?
    
    To keep the Apache Spark documentation up to date with each release, without manual version edits to these pages.
    
    ### Does this PR introduce _any_ user-facing change?
    
    No.
    
    ### How was this patch tested?
    
    Manual review.
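
    For reference, a rough local-build sketch for this kind of manual review; the commands assume the usual Spark docs toolchain (Jekyll for `docs/`, Sphinx for `python/docs/`), and the `SKIP_API` flag and `make html` target are taken from the standard docs build instructions rather than from this patch:

        # Build the Jekyll site (skipping API docs for speed) and the PySpark Sphinx docs,
        # then inspect the rendered installation pages for the substituted version.
        cd docs && SKIP_API=1 bundle exec jekyll build && cd ..
        cd python/docs && make html && cd ../..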
    
    ![Screenshot 2023-06-26 at 1 51 42 PM](https://github.com/apache/spark/assets/9700541/d4bc8166-e5cf-4c61-a1ab-0aa65810dc51)
    
    ![Screenshot 2023-06-27 at 9 21 23 AM](https://github.com/apache/spark/assets/9700541/a5a5ed98-c37e-47c4-ba14-69923c50dfd7)
    
    Closes #41728 from dongjoon-hyun/SPARK-44182.
    
    Authored-by: Dongjoon Hyun <do...@apache.org>
    Signed-off-by: Kent Yao <ya...@apache.org>
---
 docs/spark-connect-overview.md                 | 10 +++++-----
 python/docs/source/getting_started/install.rst |  8 ++++----
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/spark-connect-overview.md b/docs/spark-connect-overview.md
index 55cc825a148..1e1464cfba0 100644
--- a/docs/spark-connect-overview.md
+++ b/docs/spark-connect-overview.md
@@ -93,7 +93,7 @@ the release drop down at the top of the page. Then choose your package type, typ
 Now extract the Spark package you just downloaded on your computer, for example:
 
 {% highlight bash %}
-tar -xvf spark-3.4.0-bin-hadoop3.tgz
+tar -xvf spark-{{site.SPARK_VERSION_SHORT}}-bin-hadoop3.tgz
 {% endhighlight %}
 
 In a terminal window, go to the `spark` folder in the location where you extracted
@@ -101,13 +101,13 @@ Spark before and run the `start-connect-server.sh` script to start Spark server
 Spark Connect, like in this example:
 
 {% highlight bash %}
-./sbin/start-connect-server.sh --packages org.apache.spark:spark-connect_2.12:3.4.0
+./sbin/start-connect-server.sh --packages org.apache.spark:spark-connect_2.12:{{site.SPARK_VERSION_SHORT}}
 {% endhighlight %}
 
-Note that we include a Spark Connect package (`spark-connect_2.12:3.4.0`), when starting
+Note that we include a Spark Connect package (`spark-connect_2.12:{{site.SPARK_VERSION_SHORT}}`), when starting
 Spark server. This is required to use Spark Connect. Make sure to use the same version
 of the package as the Spark version you downloaded previously. In this example,
-Spark 3.4.0 with Scala 2.12.
+Spark {{site.SPARK_VERSION_SHORT}} with Scala 2.12.
 
 Now Spark server is running and ready to accept Spark Connect sessions from client
 applications. In the next section we will walk through how to use Spark Connect
@@ -270,4 +270,4 @@ APIs you are using are available before migrating existing code to Spark Connect
 [functions](api/scala/org/apache/spark/sql/functions$.html), and
 [Column](api/scala/org/apache/spark/sql/Column.html).
 
-Support for more APIs is planned for upcoming Spark releases.
\ No newline at end of file
+Support for more APIs is planned for upcoming Spark releases.
diff --git a/python/docs/source/getting_started/install.rst b/python/docs/source/getting_started/install.rst
index b5256f2f2cb..eb296dc16d6 100644
--- a/python/docs/source/getting_started/install.rst
+++ b/python/docs/source/getting_started/install.rst
@@ -129,17 +129,17 @@ PySpark is included in the distributions available at the `Apache Spark website
 You can download a distribution you want from the site. After that, uncompress the tar file into the directory where you want
 to install Spark, for example, as below:
 
-.. code-block:: bash
+.. parsed-literal::
 
-    tar xzvf spark-3.4.0-bin-hadoop3.tgz
+    tar xzvf spark-\ |release|\-bin-hadoop3.tgz
 
 Ensure the ``SPARK_HOME`` environment variable points to the directory where the tar file has been extracted.
 Update ``PYTHONPATH`` environment variable such that it can find the PySpark and Py4J under ``SPARK_HOME/python/lib``.
 One example of doing this is shown below:
 
-.. code-block:: bash
+.. parsed-literal::
 
-    cd spark-3.4.0-bin-hadoop3
+    cd spark-\ |release|\-bin-hadoop3
     export SPARK_HOME=`pwd`
     export PYTHONPATH=$(ZIPS=("$SPARK_HOME"/python/lib/*.zip); IFS=:; echo "${ZIPS[*]}"):$PYTHONPATH
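
Note on the rst hunk above: Sphinx does not expand substitutions such as `|release|` inside a plain `code-block`, which is presumably why these snippets were switched to `parsed-literal` (the `\ ` escapes let the substitution sit flush against the surrounding text). A hypothetical sanity check, not part of this commit, is to confirm that `release` is defined in the Sphinx configuration so the substitution resolves at build time:

    # Hypothetical check: |release| is supplied by the Sphinx `release` setting
    grep -n "release" python/docs/source/conf.py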
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org