Posted to issues@spark.apache.org by "Matt Cheah (JIRA)" <ji...@apache.org> on 2018/08/01 21:00:00 UTC

[jira] [Resolved] (SPARK-24960) k8s: explicitly expose ports on driver container

     [ https://issues.apache.org/jira/browse/SPARK-24960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Cheah resolved SPARK-24960.
--------------------------------
       Resolution: Fixed
    Fix Version/s: 2.4.0

> k8s: explicitly expose ports on driver container
> ------------------------------------------------
>
>                 Key: SPARK-24960
>                 URL: https://issues.apache.org/jira/browse/SPARK-24960
>             Project: Spark
>          Issue Type: Improvement
>          Components: Deploy, Kubernetes, Scheduler
>    Affects Versions: 2.2.0
>            Reporter: Adelbert Chang
>            Priority: Minor
>             Fix For: 2.4.0
>
>
> For the Kubernetes scheduler, the Driver Pod does not explicitly expose its ports. It is possible for a Kubernetes environment to be set up such that Pod ports are closed by default and must be opened explicitly in the Pod spec. In such an environment, without this improvement, the Driver Service will be unable to route requests (e.g. from the Executors) to the corresponding Driver Pod, which can be observed on the Executor side with this error message:
> {noformat}
> Caused by: java.io.IOException: Failed to connect to org-apache-spark-examples-sparkpi-1519271450264-driver-svc.dev.svc.cluster.local:7078{noformat}
>  
> For posterity, this is a copy of the [original issue|https://github.com/apache-spark-on-k8s/spark/issues/617] filed in the now-deprecated {{apache-spark-on-k8s}} repository.
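
For reference, the fix amounts to declaring the driver's ports in the driver container spec, so that environments which close Pod ports by default can still route Service traffic to the driver. A minimal sketch of what such a Pod spec fragment looks like (container name and layout are illustrative; the port values shown assume Spark's defaults for the driver RPC, block manager, and UI ports, which are governed by {{spark.driver.port}}, {{spark.driver.blockManager.port}}, and {{spark.ui.port}}):

{noformat}
# Hypothetical driver Pod spec fragment with the ports declared
# explicitly (values assume Spark's default port configuration)
spec:
  containers:
    - name: spark-kubernetes-driver
      ports:
        - containerPort: 7078   # driver RPC port (spark.driver.port)
        - containerPort: 7079   # block manager port (spark.driver.blockManager.port)
        - containerPort: 4040   # Spark UI (spark.ui.port)
{noformat}

With these entries present, the headless Driver Service can route Executor connections such as the one in the error message above to port 7078 on the driver container.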



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org