Posted to issues@yunikorn.apache.org by "ted (Jira)" <ji...@apache.org> on 2022/04/16 08:40:00 UTC

[jira] [Commented] (YUNIKORN-966) Retrieve the username from the SparkApp CRD

    [ https://issues.apache.org/jira/browse/YUNIKORN-966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17523044#comment-17523044 ] 

ted commented on YUNIKORN-966:
------------------------------

Hi [Chaoran Yu|https://issues.apache.org/jira/secure/ViewProfile.jspa?name=yuchaoran2011],

I would like some clarification.

The original method already seems to retrieve the username for every pod, and Spark pods run smoothly under it, as the log line below shows:

 
{code:java}
utils/utils.go:225    Found user name from pod labels.    {"userLabel": "yunikorn.apache.org/username", "user": "ted"} {code}
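
For context, this is roughly what that pod-label lookup amounts to. A minimal sketch, assuming the standard k8s.io client types; the helper name and the fallback value are illustrative assumptions, not the shim's actual code:
{code:go}
// Minimal sketch: read yunikorn.apache.org/username from a pod's labels and
// fall back to a default when the label is missing. The helper name and the
// fallback value are illustrative, not the shim's implementation.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

const userLabelKey = "yunikorn.apache.org/username"

func userFromPod(pod *v1.Pod, defaultUser string) string {
	if user, ok := pod.Labels[userLabelKey]; ok && user != "" {
		return user
	}
	return defaultUser
}

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Labels: map[string]string{userLabelKey: "ted"},
		},
	}
	fmt.Println(userFromPod(pod, "nobody")) // prints: ted
}
{code}
The setup used to verify this: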
 * Install spark-on-k8s-operator: in values.yaml set batchScheduler to true and webhook enable to true, then install with helm.
 * Install yunikorn: 

 
        1.  In deployments/image/configmap/Dockerfile, set:
{code:java}
ENV OPERATOR_PLUGINS "general,spark-k8s-operator"{code}

        2.  In the yunikorn configmap:

 
{code:java}
partitions:
  - name: default
    placementrules:
      - name: tag
        value: namespace
        create: true
    queues:
      - name: root
        submitacl: 'ted'{code}
 
 *   SparkApplication:

{code:java}
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
  labels:
    yunikorn.apache.org/username: "ted"
    queue: "root.parent"
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v3.1.1"
  imagePullPolicy: Always
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.1.1.jar"
  batchScheduler: "yunikorn"
  sparkVersion: "3.1.1"
  restartPolicy:
    type: Never
  volumes:
    - name: "test-volume"
      hostPath:
        path: "/tmp"
        type: Directory
  driver:
    cores: 1
    coreLimit: "1200m"
    memory: "512m"
    labels:
      version: 3.1.1
    serviceAccount: chart-1650093659-spark
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 3.1.1
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"{code}

> Retrieve the username from the SparkApp CRD
> -------------------------------------------
>
>                 Key: YUNIKORN-966
>                 URL: https://issues.apache.org/jira/browse/YUNIKORN-966
>             Project: Apache YuniKorn
>          Issue Type: Sub-task
>          Components: shim - kubernetes
>            Reporter: Chaoran Yu
>            Assignee: ted
>            Priority: Minor
>
> Currently the shim only looks at the pods to get the value of the label yunikorn.apache.org/username. When the Spark operator plugin is enabled, we should look at the SparkApp CRD for the label.


