Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2018/10/31 12:11:00 UTC

[jira] [Assigned] (SPARK-25887) Allow specifying Kubernetes context to use

     [ https://issues.apache.org/jira/browse/SPARK-25887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-25887:
------------------------------------

    Assignee: Apache Spark

> Allow specifying Kubernetes context to use
> ------------------------------------------
>
>                 Key: SPARK-25887
>                 URL: https://issues.apache.org/jira/browse/SPARK-25887
>             Project: Spark
>          Issue Type: Improvement
>          Components: Kubernetes
>    Affects Versions: 2.3.0, 2.3.1, 2.3.2, 2.4.0
>            Reporter: Rob Vesse
>            Assignee: Apache Spark
>            Priority: Major
>
> In working on SPARK-25809, support was added to the integration testing machinery for Spark on K8S to use an arbitrary context from the user's K8S config file.  However, this can fail or produce false positives because, regardless of what the integration test harness does, the K8S submission client uses the Fabric 8 client library in such a way that it only ever configures itself from the current context.
> For users who work with multiple K8S clusters, or who have multiple K8S "users" for interacting with their cluster, being able to target an arbitrary context without first having to run {{kubectl config use-context <context>}} would be an important improvement.
> This would be a fairly small fix to {{SparkKubernetesClientFactory}}, along with an associated configuration key, likely {{spark.kubernetes.context}} (see the sketch below).
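
A minimal sketch of the kind of change the description suggests, assuming the proposed {{spark.kubernetes.context}} key and the Fabric 8 client's {{Config.autoConfigure}} entry point; the surrounding object and method names are illustrative only, not the actual {{SparkKubernetesClientFactory}} implementation:

{code:scala}
import io.fabric8.kubernetes.client.{Config, DefaultKubernetesClient}
import org.apache.spark.SparkConf

object ContextSelectionSketch {
  // Build a Kubernetes client from an explicitly chosen kubeconfig context,
  // falling back to the current context when the key is unset.
  def buildClient(sparkConf: SparkConf): DefaultKubernetesClient = {
    // Hypothetical lookup of the proposed configuration key.
    val context: Option[String] = sparkConf.getOption("spark.kubernetes.context")
    // Config.autoConfigure(null) configures from the current context in the
    // user's kubeconfig; passing a context name selects that context instead.
    val config = Config.autoConfigure(context.orNull)
    new DefaultKubernetesClient(config)
  }
}
{code}

With something like this in place, a user could select a context per submission via {{--conf spark.kubernetes.context=<context>}} rather than switching their global kubeconfig context first.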



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org