Posted to issues@spark.apache.org by "Björn (Jira)" <ji...@apache.org> on 2019/08/30 09:08:00 UTC

[jira] [Comment Edited] (SPARK-28444) Bump Kubernetes Client Version to 4.3.0

    [ https://issues.apache.org/jira/browse/SPARK-28444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919346#comment-16919346 ] 

Björn edited comment on SPARK-28444 at 8/30/19 9:07 AM:
--------------------------------------------------------

We're running into the same issue. As I'm developing with a local Ansible/Vagrant setup (Kubernetes deployed through kubeadm, 3 nodes), I did some testing of different versions with the SparkPi example (Spark 2.4.3); a sketch of the submission follows the list below. My results were:
 * spark-submit in cluster and client mode works fine on local Docker Desktop running Kubernetes 1.14.3
 * spark-submit in client mode works fine for Kubernetes 1.15.3 in the Vagrant multi-node cluster
 * spark-submit in cluster mode does not work for Kubernetes 1.15.3, 1.14.3 or 1.13.10 in the Vagrant multi-node cluster
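For reference, here is a sketch of the submission, written against the SparkLauncher API rather than as the exact spark-submit command I used; the API server address, image name, jar path and service account name are placeholders, not the real values from my setup:

{code:scala}
import org.apache.spark.launcher.SparkLauncher

object SubmitSparkPi {
  def main(args: Array[String]): Unit = {
    // All values below are placeholders: API server address, image name,
    // jar path and service account name differ in my actual setup.
    val spark = new SparkLauncher()
      .setMaster("k8s://https://192.168.26.10:6443")
      .setDeployMode("cluster")   // this mode fails; "client" works on 1.15.3
      .setAppName("spark-pi")
      .setMainClass("org.apache.spark.examples.SparkPi")
      .setAppResource("local:///opt/spark/examples/jars/spark-examples_2.11-2.4.3.jar")
      .setConf("spark.executor.instances", "2")
      .setConf("spark.kubernetes.container.image", "spark:2.4.3")
      .setConf("spark.kubernetes.authenticate.driver.serviceAccountName", "spark")
      .launch()
    spark.waitFor()
  }
}
{code}

The equivalent spark-submit invocation carries the same settings as --conf flags.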

spark-submit in cluster mode starts the driver and spawns the executors, but then fails when trying to watch the pod with an HTTP 403 exception carrying an empty message (in particular, it does not complain about missing permissions). The log is more or less the same as the one posted above.

I think neither compatibility nor permissions (executor pods can be created with the service account) are the cause of this.
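One way to narrow it down might be to reproduce the watch outside of Spark, from a pod running under the spark service account and with the same fabric8 kubernetes-client version (4.1.2 according to this ticket). A minimal sketch; the namespace and pod name are placeholders:

{code:scala}
import io.fabric8.kubernetes.api.model.Pod
import io.fabric8.kubernetes.client.{DefaultKubernetesClient, KubernetesClientException, Watcher}

object WatchCheck {
  def main(args: Array[String]): Unit = {
    // Inside a pod this picks up the mounted service account token;
    // on a workstation it falls back to the local kubeconfig.
    val client = new DefaultKubernetesClient()
    try {
      // "default" and "spark-pi-driver" are placeholders for the namespace
      // and driver pod name of the failing submission.
      client.pods()
        .inNamespace("default")
        .withName("spark-pi-driver")
        .watch(new Watcher[Pod] {
          override def eventReceived(action: Watcher.Action, pod: Pod): Unit =
            println(s"$action ${pod.getMetadata.getName} ${pod.getStatus.getPhase}")
          override def onClose(cause: KubernetesClientException): Unit =
            if (cause != null) println(s"Watch closed: ${cause.getMessage}")
        })
      Thread.sleep(60000) // keep the watch open for a minute
    } finally {
      client.close()
    }
  }
}
{code}

If this plain watch also returns 403, the problem would sit in the client/API-server interaction rather than in Spark itself.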

Does anyone have ideas on how to debug this further?

 


> Bump Kubernetes Client Version to 4.3.0
> ---------------------------------------
>
>                 Key: SPARK-28444
>                 URL: https://issues.apache.org/jira/browse/SPARK-28444
>             Project: Spark
>          Issue Type: Dependency upgrade
>          Components: Kubernetes
>    Affects Versions: 3.0.0, 2.4.3
>            Reporter: Patrick Winter
>            Priority: Major
>
> Spark is currently using the Kubernetes client version 4.1.2. This client does not support the current Kubernetes version 1.14, as can be seen on the [compatibility matrix|https://github.com/fabric8io/kubernetes-client#compatibility-matrix]. Therefore the Kubernetes client should be bumped up to version 4.3.0.
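For reference, the coordinate in question, written as an sbt dependency purely for illustration (Spark itself pins this version in its Maven build):

{code:scala}
// Illustrative only: the fabric8 client coordinate at the proposed version.
libraryDependencies += "io.fabric8" % "kubernetes-client" % "4.3.0"
{code}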


