Posted to commits@spark.apache.org by do...@apache.org on 2023/02/22 23:02:33 UTC

[spark] branch master updated: [SPARK-42532][K8S][DOCS] Update YuniKorn docs with v1.2

This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 5b1c45eedae [SPARK-42532][K8S][DOCS] Update YuniKorn docs with v1.2
5b1c45eedae is described below

commit 5b1c45eedaed0138afb260019db800b637c3b135
Author: Dongjoon Hyun <do...@apache.org>
AuthorDate: Wed Feb 22 15:02:21 2023 -0800

    [SPARK-42532][K8S][DOCS] Update YuniKorn docs with v1.2
    
    ### What changes were proposed in this pull request?
    
    This PR aims to update the `YuniKorn` documentation to the latest v1.2.0 and to fix `{{APP_ID}}` placeholder rendering issues in the doc.
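    
    The rendering fix wraps the `{{APP_ID}}` placeholders in Liquid `raw` tags; without them, Jekyll's Liquid templating treats `{{APP_ID}}` as an (undefined, hence empty) variable and drops it from the rendered page. A minimal before/after of the doc source:
    
    ```
    # Before: Liquid evaluates {{APP_ID}} to an empty string in the rendered docs.
    --conf spark.kubernetes.driver.annotation.yunikorn.apache.org/app-id={{APP_ID}}
    
    # After: raw tags preserve the literal braces on the rendered page.
    --conf spark.kubernetes.driver.annotation.yunikorn.apache.org/app-id={% raw %}{{APP_ID}}{% endraw %}
    ```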
    
    ### Why are the changes needed?
    
    - https://yunikorn.apache.org/release-announce/1.2.0
    
    ### Does this PR introduce _any_ user-facing change?
    
    This is a documentation-only change.
    
    **BEFORE**
    - https://dist.apache.org/repos/dist/dev/spark/v3.4.0-rc1-docs/_site/running-on-kubernetes.html#using-apache-yunikorn-as-customized-scheduler-for-spark-on-kubernetes
    
    **AFTER**
    <img width="927" alt="Screenshot 2023-02-22 at 2 27 50 PM" src="https://user-images.githubusercontent.com/9700541/220775386-90268ecb-facf-4701-bcb7-4f6b3e847e70.png">
    
    ### How was this patch tested?
    
    Manually tested with YuniKorn v1.2.0.
    
    ```
    $ helm list -n yunikorn
    NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
    yunikorn        yunikorn        1               2023-02-22 14:01:11.728926 -0800 PST    deployed        yunikorn-1.2.0
    ```
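    
    The deployment above comes from the install steps in the updated docs (see the diff below), assuming an existing Kubernetes cluster reachable from the current kubectl context:
    
    ```
    helm repo add yunikorn https://apache.github.io/yunikorn-release
    helm repo update
    helm install yunikorn yunikorn/yunikorn --namespace yunikorn --version 1.2.0 --create-namespace --set embedAdmissionController=false
    ```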
    
    ```
    $ build/sbt -Psparkr -Pkubernetes -Pkubernetes-integration-tests -Dspark.kubernetes.test.deployMode=docker-desktop "kubernetes-integration-tests/test" -Dtest.exclude.tags=minikube,local,decom -Dtest.default.exclude.tags=''
    [info] KubernetesSuite:
    [info] - SPARK-42190: Run SparkPi with local[*] (10 seconds, 832 milliseconds)
    [info] - Run SparkPi with no resources (12 seconds, 421 milliseconds)
    [info] - Run SparkPi with no resources & statefulset allocation (17 seconds, 861 milliseconds)
    [info] - Run SparkPi with a very long application name. (12 seconds, 531 milliseconds)
    [info] - Use SparkLauncher.NO_RESOURCE (17 seconds, 697 milliseconds)
    [info] - Run SparkPi with a master URL without a scheme. (12 seconds, 499 milliseconds)
    [info] - Run SparkPi with an argument. (18 seconds, 734 milliseconds)
    [info] - Run SparkPi with custom labels, annotations, and environment variables. (12 seconds, 520 milliseconds)
    [info] - All pods have the same service account by default (17 seconds, 504 milliseconds)
    [info] - Run extraJVMOptions check on driver (9 seconds, 402 milliseconds)
    [info] - SPARK-42474: Run extraJVMOptions JVM GC option check - G1GC (9 seconds, 389 milliseconds)
    [info] - SPARK-42474: Run extraJVMOptions JVM GC option check - Other GC (9 seconds, 330 milliseconds)
    [info] - Verify logging configuration is picked from the provided SPARK_CONF_DIR/log4j2.properties (17 seconds, 710 milliseconds)
    [info] - Run SparkPi with env and mount secrets. (19 seconds, 797 milliseconds)
    [info] - Run PySpark on simple pi.py example (18 seconds, 568 milliseconds)
    [info] - Run PySpark to test a pyfiles example (15 seconds, 622 milliseconds)
    [info] - Run PySpark with memory customization (18 seconds, 507 milliseconds)
    [info] - Run in client mode. (6 seconds, 185 milliseconds)
    [info] - Start pod creation from template (17 seconds, 696 milliseconds)
    [info] - SPARK-38398: Schedule pod creation from template (12 seconds, 585 milliseconds)
    [info] - Run SparkR on simple dataframe.R example (19 seconds, 639 milliseconds)
    [info] YuniKornSuite:
    [info] - SPARK-42190: Run SparkPi with local[*] (12 seconds, 421 milliseconds)
    [info] - Run SparkPi with no resources (20 seconds, 465 milliseconds)
    [info] - Run SparkPi with no resources & statefulset allocation (15 seconds, 516 milliseconds)
    [info] - Run SparkPi with a very long application name. (20 seconds, 532 milliseconds)
    [info] - Use SparkLauncher.NO_RESOURCE (15 seconds, 545 milliseconds)
    [info] - Run SparkPi with a master URL without a scheme. (20 seconds, 575 milliseconds)
    [info] - Run SparkPi with an argument. (16 seconds, 462 milliseconds)
    [info] - Run SparkPi with custom labels, annotations, and environment variables. (20 seconds, 568 milliseconds)
    [info] - All pods have the same service account by default (15 seconds, 630 milliseconds)
    [info] - Run extraJVMOptions check on driver (12 seconds, 483 milliseconds)
    [info] - SPARK-42474: Run extraJVMOptions JVM GC option check - G1GC (12 seconds, 665 milliseconds)
    [info] - SPARK-42474: Run extraJVMOptions JVM GC option check - Other GC (11 seconds, 615 milliseconds)
    [info] - Verify logging configuration is picked from the provided SPARK_CONF_DIR/log4j2.properties (20 seconds, 810 milliseconds)
    [info] - Run SparkPi with env and mount secrets. (24 seconds, 622 milliseconds)
    [info] - Run PySpark on simple pi.py example (16 seconds, 650 milliseconds)
    [info] - Run PySpark to test a pyfiles example (23 seconds, 662 milliseconds)
    [info] - Run PySpark with memory customization (15 seconds, 450 milliseconds)
    [info] - Run in client mode. (5 seconds, 121 milliseconds)
    [info] - Start pod creation from template (20 seconds, 552 milliseconds)
    [info] - SPARK-38398: Schedule pod creation from template (15 seconds, 847 milliseconds)
    [info] - Run SparkR on simple dataframe.R example (22 seconds, 739 milliseconds)
    [info] Run completed in 15 minutes, 41 seconds.
    [info] Total number of tests run: 42
    [info] Suites: completed 2, aborted 0
    [info] Tests: succeeded 42, failed 0, canceled 0, ignored 0, pending 0
    [info] All tests passed.
    [success] Total time: 1306 s (21:46), completed Feb 22, 2023, 2:28:18 PM
    ```
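    
    To re-run only the YuniKorn-backed suite while iterating, sbt's standard `testOnly` task should narrow the same invocation to a single suite (a sketch, not part of the original test run; the glob matches the `YuniKornSuite` name in the log above):
    
    ```
    $ build/sbt -Psparkr -Pkubernetes -Pkubernetes-integration-tests \
        -Dspark.kubernetes.test.deployMode=docker-desktop \
        -Dtest.exclude.tags=minikube,local,decom -Dtest.default.exclude.tags='' \
        "kubernetes-integration-tests/testOnly *YuniKornSuite"
    ```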
    
    Closes #40132 from dongjoon-hyun/SPARK-42532.
    
    Authored-by: Dongjoon Hyun <do...@apache.org>
    Signed-off-by: Dongjoon Hyun <do...@apache.org>
---
 docs/running-on-kubernetes.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/running-on-kubernetes.md b/docs/running-on-kubernetes.md
index 4f2647d3e06..56129d38d15 100644
--- a/docs/running-on-kubernetes.md
+++ b/docs/running-on-kubernetes.md
@@ -1919,10 +1919,10 @@ Install Apache YuniKorn:
 ```bash
 helm repo add yunikorn https://apache.github.io/yunikorn-release
 helm repo update
-helm install yunikorn yunikorn/yunikorn --namespace yunikorn --version 1.1.0 --create-namespace --set embedAdmissionController=false
+helm install yunikorn yunikorn/yunikorn --namespace yunikorn --version 1.2.0 --create-namespace --set embedAdmissionController=false
 ```
 
-The above steps will install YuniKorn v1.1.0 on an existing Kubernetes cluster.
+The above steps will install YuniKorn v1.2.0 on an existing Kubernetes cluster.
 
 ##### Get started
 
@@ -1932,11 +1932,11 @@ Submit Spark jobs with the following extra options:
 --conf spark.kubernetes.scheduler.name=yunikorn
 --conf spark.kubernetes.driver.label.queue=root.default
 --conf spark.kubernetes.executor.label.queue=root.default
---conf spark.kubernetes.driver.annotation.yunikorn.apache.org/app-id={{APP_ID}}
---conf spark.kubernetes.executor.annotation.yunikorn.apache.org/app-id={{APP_ID}}
+--conf spark.kubernetes.driver.annotation.yunikorn.apache.org/app-id={% raw %}{{APP_ID}}{% endraw %}
+--conf spark.kubernetes.executor.annotation.yunikorn.apache.org/app-id={% raw %}{{APP_ID}}{% endraw %}
 ```
 
-Note that `{{APP_ID}}` is the built-in variable that will be substituted with Spark job ID automatically.
+Note that {% raw %}{{APP_ID}}{% endraw %} is the built-in variable that will be substituted with Spark job ID automatically.
 With the above configuration, the job will be scheduled by YuniKorn scheduler instead of the default Kubernetes scheduler.
 
 ### Stage Level Scheduling Overview
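
For readers applying the updated section end-to-end, here is a sketch of a full submission built around these options; `<k8s-apiserver>`, `<spark-image>`, and the example jar path are illustrative placeholders rather than values from the commit:

```
# The five --conf options below are taken verbatim from the updated docs.
# {{APP_ID}} is passed through literally by the shell; Spark substitutes it
# with the Spark job ID automatically.
spark-submit \
  --master k8s://<k8s-apiserver> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=<spark-image> \
  --conf spark.kubernetes.scheduler.name=yunikorn \
  --conf spark.kubernetes.driver.label.queue=root.default \
  --conf spark.kubernetes.executor.label.queue=root.default \
  --conf spark.kubernetes.driver.annotation.yunikorn.apache.org/app-id={{APP_ID}} \
  --conf spark.kubernetes.executor.annotation.yunikorn.apache.org/app-id={{APP_ID}} \
  local:///opt/spark/examples/jars/spark-examples.jar
```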

