Posted to issues@spark.apache.org by "Marcelo Vanzin (JIRA)" <ji...@apache.org> on 2018/12/05 23:08:00 UTC

[jira] [Commented] (SPARK-25148) Executors launched with Spark on K8s client mode should prefix name with spark.app.name

    [ https://issues.apache.org/jira/browse/SPARK-25148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710726#comment-16710726 ] 

Marcelo Vanzin commented on SPARK-25148:
----------------------------------------

Actually there was a separate bug for the same issue. Duping...

> Executors launched with Spark on K8s client mode should prefix name with spark.app.name
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-25148
>                 URL: https://issues.apache.org/jira/browse/SPARK-25148
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 2.4.0
>            Reporter: Timothy Chen
>            Priority: Major
>
> With the recently added client mode for Spark on K8s, executors launched by default are all named "spark-exec-#". This means that when multiple jobs run in the same cluster, they often have to retry to find unused pod names, and it is hard to correlate which executors were launched for which Spark app. The workaround is to manually set the executor pod name prefix configuration for each job launched.
> Ideally the experience should match cluster mode, where each executor name is prefixed with spark.app.name by default.
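As a sketch of the workaround described above: the `spark.kubernetes.executor.podNamePrefix` setting lets each submission pick its own executor pod name prefix instead of the shared default. The application and prefix names below are illustrative, not from the issue.

```shell
# Hypothetical client-mode submission; "etl-nightly" is an example prefix
# chosen per job to avoid pod-name collisions and to make executors
# attributable to this app when listing pods.
spark-submit \
  --master k8s://https://kubernetes.example.com:6443 \
  --deploy-mode client \
  --conf spark.app.name=etl-nightly \
  --conf spark.kubernetes.executor.podNamePrefix=etl-nightly \
  ...

# Executors then show up as etl-nightly-exec-1, etl-nightly-exec-2, ...
# rather than the ambiguous spark-exec-1, spark-exec-2, ...
```

Setting the prefix per job restores the correlation between pods and apps that cluster mode provides automatically, at the cost of one extra line of configuration per submission.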



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org