Posted to reviews@yunikorn.apache.org by ww...@apache.org on 2021/06/24 03:23:55 UTC

[incubator-yunikorn-site] branch master updated: [YUNIKORN-727] Fix Spark RBAC in workspace examples (#60)

This is an automated email from the ASF dual-hosted git repository.

wwei pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-yunikorn-site.git


The following commit(s) were added to refs/heads/master by this push:
     new 65a3d36  [YUNIKORN-727] Fix Spark RBAC in workspace examples (#60)
65a3d36 is described below

commit 65a3d36979f77e73e83c29e54f4c3d8f10cf9d99
Author: Holden Karau <ho...@pigscanfly.ca>
AuthorDate: Wed Jun 23 20:23:48 2021 -0700

    [YUNIKORN-727] Fix Spark RBAC in workspace examples (#60)
---
 docs/user_guide/workloads/run_spark.md | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/docs/user_guide/workloads/run_spark.md b/docs/user_guide/workloads/run_spark.md
index cee95df..ca75526 100644
--- a/docs/user_guide/workloads/run_spark.md
+++ b/docs/user_guide/workloads/run_spark.md
@@ -61,11 +61,13 @@ apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: spark
+  namespace: spark-test
 ---
 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: spark-cluster-role
+  namespace: spark-test
 rules:
 - apiGroups: [""]
   resources: ["pods"]
@@ -78,9 +80,11 @@ apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRoleBinding
 metadata:
   name: spark-cluster-role-binding
+  namespace: spark-test
 subjects:
 - kind: ServiceAccount
   name: spark
+  namespace: spark-test
 roleRef:
   kind: ClusterRole
   name: spark-cluster-role
@@ -110,7 +114,7 @@ ${SPARK_HOME}/bin/spark-submit --master k8s://http://localhost:8001 --deploy-mod
    --conf spark.kubernetes.namespace=spark-test \
    --conf spark.kubernetes.executor.request.cores=1 \
    --conf spark.kubernetes.container.image=apache/yunikorn:spark-2.4.4 \
-   --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
+   --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-test:spark \
    local:///opt/spark/examples/jars/spark-examples_2.11-2.4.4.jar
 ```
 
@@ -142,4 +146,4 @@ YuniKorn reuses the Spark application ID set in label `spark-app-selector`, and
 to YuniKorn and being considered as a job. The job is scheduled and running as there is sufficient resources in the cluster.
 YuniKorn allocates the driver pod to a node, binds the pod and starts all the containers. Once the driver pod gets started,
 it requests for a bunch of executor pods to run its tasks. Those pods will be created in the same namespace as well and
-scheduled by YuniKorn as well.
\ No newline at end of file
+scheduled by YuniKorn as well.
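
A usage note on this change: below is a minimal sketch of applying the updated RBAC
objects before running the spark-submit command shown in the doc. The spark-rbac.yaml
filename is an assumption for illustration and is not part of this commit.

    # Create the target namespace if it does not already exist.
    kubectl create namespace spark-test

    # Apply the ServiceAccount, ClusterRole and ClusterRoleBinding from the doc,
    # saved locally as spark-rbac.yaml (hypothetical filename).
    kubectl apply -f spark-rbac.yaml

    # Confirm the service account now exists in the spark-test namespace.
    kubectl get serviceaccount spark -n spark-test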
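
For the scheduling behaviour described in the last hunk, one way to observe the driver
and executor pods as YuniKorn places them (a sketch; assumes the job was submitted to
the spark-test namespace as above):

    # List pods in spark-test and show the spark-app-selector label that groups
    # the driver and its executors under one application ID; -w watches for changes.
    kubectl get pods -n spark-test -L spark-app-selector -w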