Posted to reviews@yunikorn.apache.org by yu...@apache.org on 2022/10/09 00:47:40 UTC

[yunikorn-site] branch master updated: [YUNIKORN-1345] Updating the Spark tutorial i18n (#192)

This is an automated email from the ASF dual-hosted git repository.

yuteng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/yunikorn-site.git


The following commit(s) were added to refs/heads/master by this push:
     new 41f8f6fba [YUNIKORN-1345] Updating the Spark tutorial i18n (#192)
41f8f6fba is described below

commit 41f8f6fba71cd504973439389e8b538666628caf
Author: Wilfred Spiegelenburg <wi...@apache.org>
AuthorDate: Sun Oct 9 11:47:36 2022 +1100

    [YUNIKORN-1345] Updating the Spark tutorial i18n (#192)
    
    * [YUNIKORN-1345] Updating the Spark tutorial i18n
    
    * review updates
---
 docs/user_guide/workloads/run_spark.md             | 10 ++---
 .../current/user_guide/workloads/run_spark.md      | 50 ++++++++++++++--------
 .../version-1.1.0/assets                           |  2 +-
 package.json                                       |  6 +--
 4 files changed, 41 insertions(+), 27 deletions(-)

diff --git a/docs/user_guide/workloads/run_spark.md b/docs/user_guide/workloads/run_spark.md
index 749b18170..1eed09c7d 100644
--- a/docs/user_guide/workloads/run_spark.md
+++ b/docs/user_guide/workloads/run_spark.md
@@ -36,7 +36,7 @@ To run Spark on Kubernetes, you'll need the Spark docker images. You can 1) use
 team, or 2) build one from scratch.
 If you want to build your own Spark docker image, you can find the [full instructions](https://spark.apache.org/docs/latest/building-spark.html)
 in the Spark documentation. Simplified steps:
-* Download a Spark version that has Kubernetes support, URL: https://github.com/apache/spark 
+* Download a Spark version that has Kubernetes support, URL: https://github.com/apache/spark
 * Build spark with Kubernetes support:
 ```shell script
 ./build/mvn -Pkubernetes -DskipTests clean package
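# Not part of the original steps: a sketch of packaging that build into a
# docker image with Spark's bundled helper script. The registry (-r) and
# tag (-t) values here are placeholders, not names used anywhere in this guide.
./bin/docker-image-tool.sh -r docker.io/myrepo -t my-spark build
./bin/docker-image-tool.sh -r docker.io/myrepo -t my-spark push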
@@ -72,7 +72,7 @@ metadata:
 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
-  name: spark-cluster-role
+  name: spark-role
   namespace: spark-test
 rules:
 - apiGroups: [""]
@@ -85,7 +85,7 @@ rules:
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
-  name: spark-cluster-role-binding
+  name: spark-role-binding
   namespace: spark-test
 subjects:
 - kind: ServiceAccount
@@ -93,7 +93,7 @@ subjects:
   namespace: spark-test
 roleRef:
   kind: Role
-  name: spark-cluster-role
+  name: spark-role
   apiGroup: rbac.authorization.k8s.io
 EOF
 ```
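A quick sanity check, sketched on the assumption that the objects above were applied and that the Role grants pod access as in the full file, is to impersonate the `spark` service account:
```shell script
# Expect "yes" if the spark-role Role lets the driver manage pods in spark-test.
kubectl auth can-i create pods -n spark-test \
  --as=system:serviceaccount:spark-test:spark
```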
@@ -130,7 +130,7 @@ There are more options for setting the driver and executor in the [spark](https:
 Assigning the applicationId and the queue path is possible.
 ```
 --conf spark.kubernetes.executor.label.applicationId=application-spark-0001
---conf spark.kubernetes.driver.label.applicationId=application-spark-0001  
+--conf spark.kubernetes.driver.label.applicationId=application-spark-0001
 --conf spark.kubernetes.executor.label.queue=default.root.sandbox
 --conf spark.kubernetes.driver.label.queue=default.root.sandbox
 ```
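For illustration, a sketch of the guide's SparkPi submit with those labels attached; the applicationId and queue values are the sample ones above, not required names, and the remaining flags are unchanged from the full example.
```shell script
# Illustrative only: the SparkPi example with YuniKorn applicationId and queue
# labels on both driver and executor pods (authentication flags as in the guide).
${SPARK_HOME}/bin/spark-submit --master k8s://http://localhost:8001 \
   --deploy-mode cluster --name spark-pi \
   --class org.apache.spark.examples.SparkPi \
   --conf spark.executor.instances=1 \
   --conf spark.kubernetes.namespace=spark-test \
   --conf spark.kubernetes.container.image=docker.io/apache/spark:v3.3.0 \
   --conf spark.kubernetes.driver.label.applicationId=application-spark-0001 \
   --conf spark.kubernetes.executor.label.applicationId=application-spark-0001 \
   --conf spark.kubernetes.driver.label.queue=default.root.sandbox \
   --conf spark.kubernetes.executor.label.queue=default.root.sandbox \
   local:///opt/spark/examples/jars/spark-examples_2.12-3.3.0.jar
```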
diff --git a/i18n/zh-cn/docusaurus-plugin-content-docs/current/user_guide/workloads/run_spark.md b/i18n/zh-cn/docusaurus-plugin-content-docs/current/user_guide/workloads/run_spark.md
index b7f4f3ded..37ec86a70 100644
--- a/i18n/zh-cn/docusaurus-plugin-content-docs/current/user_guide/workloads/run_spark.md
+++ b/i18n/zh-cn/docusaurus-plugin-content-docs/current/user_guide/workloads/run_spark.md
@@ -32,13 +32,15 @@ under the License.
 ## Prepare the docker image for Spark
 
 To run Spark on Kubernetes, you'll need the Spark docker images. You can
-1) use the docker images provided by the YuniKorn team
-2) build one from scratch. If you want to build your own Spark docker image, you can
+1) use the docker images provided by the Spark team
+2) build one from scratch. If you want to build your own Spark docker image, you can find the [full instructions](https://spark.apache.org/docs/latest/building-spark.html)
+in the Spark documentation. Simplified steps:
 * Download a Spark version that has Kubernetes support, URL: https://github.com/apache/spark
 * Build Spark with Kubernetes support:
 ```shell script
-mvn -Pyarn -Phadoop-2.7 -Dhadoop.version=2.7.4 -Phive -Pkubernetes -Phive-thriftserver -DskipTests package
+./build/mvn -Pkubernetes -DskipTests clean package
 ```
+It is recommended to use the official images for the different Spark versions from [dockerhub](https://hub.docker.com/r/apache/spark/tags)
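For reference, not part of the original text: pulling one of those official images ahead of time; the tag matches the one used in the spark-submit example later in this guide.
```shell script
# Pre-pull the official Spark image used by the example further down.
docker pull docker.io/apache/spark:v3.3.0
```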
 
 ## Create a namespace for Spark jobs
 
@@ -53,7 +55,9 @@ metadata:
 EOF
 ```
 
-Create the service account and cluster role bindings under the `spark-test` namespace:
+## Create the service account and role binding
+
+Create the service account and role bindings under the `spark-test` namespace:
 
 ```shell script
 cat <<EOF | kubectl apply -n spark-test -f -
@@ -64,9 +68,9 @@ metadata:
   namespace: spark-test
 ---
 apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
+kind: Role
 metadata:
-  name: spark-cluster-role
+  name: spark-role
   namespace: spark-test
 rules:
 - apiGroups: [""]
@@ -77,17 +81,17 @@ rules:
   verbs: ["get", "create", "delete"]
 ---
 apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
+kind: RoleBinding
 metadata:
-  name: spark-cluster-role-binding
+  name: spark-role-binding
   namespace: spark-test
 subjects:
 - kind: ServiceAccount
   name: spark
   namespace: spark-test
 roleRef:
-  kind: ClusterRole
-  name: spark-cluster-role
+  kind: Role
+  name: spark-role
   apiGroup: rbac.authorization.k8s.io
 EOF
 ```
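A quick check, sketched on the assumption that the manifest above was applied, that the namespace-scoped objects exist under their new names:
```shell script
# List the service account, Role and RoleBinding created above.
kubectl -n spark-test get serviceaccount/spark role/spark-role rolebinding/spark-role-binding
```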
@@ -104,28 +108,38 @@ EOF
 kubectl proxy
 ```
 
-Run a simple SparkPi job (this assumes that the Spark binaries are installed in the `/usr/local` directory).
+The official images for the different Spark versions are available on [dockerhub](https://hub.docker.com/r/apache/spark/tags).
+Run a simple SparkPi job, assuming the Spark binaries are installed locally in the `/usr/local` directory.
 ```shell script
-export SPARK_HOME=/usr/local/spark-2.4.4-bin-hadoop2.7/
+export SPARK_HOME=/usr/local/spark/
 ${SPARK_HOME}/bin/spark-submit --master k8s://http://localhost:8001 --deploy-mode cluster --name spark-pi \
    --master k8s://http://localhost:8001 --deploy-mode cluster --name spark-pi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.executor.instances=1 \
    --conf spark.kubernetes.namespace=spark-test \
    --conf spark.kubernetes.executor.request.cores=1 \
-   --conf spark.kubernetes.container.image=apache/yunikorn:spark-2.4.4 \
+   --conf spark.kubernetes.container.image=docker.io/apache/spark:v3.3.0 \
    --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-test:spark \
-   local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar
+   local:///opt/spark/examples/jars/spark-examples_2.12-3.3.0.jar
+```
+:::note
+There are more options for setting the driver and executor in the [spark](https://spark.apache.org/docs/latest/running-on-kubernetes.html#configuration) configuration.
+Assigning the applicationId and the queue path is possible.
 ```
+--conf spark.kubernetes.executor.label.applicationId=application-spark-0001
+--conf spark.kubernetes.driver.label.applicationId=application-spark-0001
+--conf spark.kubernetes.executor.label.queue=default.root.sandbox
+--conf spark.kubernetes.driver.label.queue=default.root.sandbox
+```
+:::
 
 You can see the Spark driver and executors being created on Kubernetes:
 
-![spark-pods](./../../assets/spark-pods.png)
+![spark-pods](./../../assets/RunningSparkOnK8s.png)
 
-You can also view the job info from the YuniKorn UI. If you do not know how to access the YuniKorn UI, please read the documentation
-[link](../../get_started/get_started.md#访问-web-ui).
+The spark-pi result can be found in the driver pod.
 
-![spark-jobs-on-ui](./../../assets/spark-jobs-on-ui.png)
+![spark-pods](./../../assets/sparkResult.png)
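A minimal sketch of reading that result back, assuming Spark attached its usual `spark-role=driver` label to the driver pod:
```shell script
# Locate the driver pod via the spark-role label and grep the SparkPi output.
DRIVER_POD=$(kubectl get pods -n spark-test -l spark-role=driver -o name | head -n 1)
kubectl logs -n spark-test "${DRIVER_POD}" | grep "Pi is roughly"
```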
 
 ## What happens behind the scenes?
 
diff --git a/i18n/zh-cn/docusaurus-plugin-content-docs/version-1.1.0/assets b/i18n/zh-cn/docusaurus-plugin-content-docs/version-1.1.0/assets
index 778d0f8e4..271e5348b 120000
--- a/i18n/zh-cn/docusaurus-plugin-content-docs/version-1.1.0/assets
+++ b/i18n/zh-cn/docusaurus-plugin-content-docs/version-1.1.0/assets
@@ -1 +1 @@
-../../../../docs/assets
\ No newline at end of file
+../../../../versioned_docs/version-1.1.0/assets
\ No newline at end of file
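The assets entry is a symlink (file mode 120000), so the hunk above only changes its target string; recreated by hand it would look roughly like this:
```shell script
# Re-point the zh-cn version-1.1.0 assets symlink at the versioned docs assets.
cd i18n/zh-cn/docusaurus-plugin-content-docs/version-1.1.0
ln -sfn ../../../../versioned_docs/version-1.1.0/assets assets
```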
diff --git a/package.json b/package.json
index ba0000903..86ccc146d 100644
--- a/package.json
+++ b/package.json
@@ -10,9 +10,9 @@
     "release": "docusaurus docs:version"
   },
   "dependencies": {
-    "@docusaurus/core": "2.0.1",
-    "@docusaurus/preset-classic": "2.0.1",
-    "@docusaurus/theme-search-algolia": "^2.0.1",
+    "@docusaurus/core": "2.1.0",
+    "@docusaurus/preset-classic": "2.1.0",
+    "@docusaurus/theme-search-algolia": "2.1.0",
     "@mdx-js/react": "^1.5.8",
     "clsx": "^1.1.1",
     "node": "^18.8.0",