Posted to issues@spark.apache.org by "Patrick Clay (JIRA)" <ji...@apache.org> on 2019/08/14 18:27:00 UTC

[jira] [Commented] (SPARK-28721) Failing to stop SparkSession in K8S cluster mode PySpark leaks Driver and Executors

    [ https://issues.apache.org/jira/browse/SPARK-28721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16907522#comment-16907522 ] 

Patrick Clay commented on SPARK-28721:
--------------------------------------

I confirmed this affects 2.4.1, and re-confirmed that it does not affect 2.4.0.

> Failing to stop SparkSession in K8S cluster mode PySpark leaks Driver and Executors
> -----------------------------------------------------------------------------------
>
>                 Key: SPARK-28721
>                 URL: https://issues.apache.org/jira/browse/SPARK-28721
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes, PySpark
>    Affects Versions: 2.4.1, 2.4.3
>            Reporter: Patrick Clay
>            Priority: Minor
>
> This does not seem to affect 2.4.0.
> To repro:
>  # Download pristine Spark 2.4.3 binary
>  # Edit examples/src/main/python/pi.py so that it does not call spark.stop() (a sketch of this edit follows below)
>  # ./bin/docker-image-tool.sh -r MY_IMAGE -t MY_TAG build push
>  # spark-submit --master k8s://IP --deploy-mode cluster --conf spark.kubernetes.driver.pod.name=spark-driver --conf spark.kubernetes.container.image=MY_IMAGE:MY_TAG file:/opt/spark/examples/src/main/python/pi.py
> The driver runs successfully and Python exits, but the Driver and Executor JVMs and Pods remain up.
>  
> I realize that explicitly calling spark.stop() is always best practice, but since this does not repro in 2.4.0, it seems like a regression.
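
For reference, a minimal pi.py along the lines of step 2 would look like the sketch below. This follows the stock Spark example with only the final spark.stop() call removed; it is not the reporter's exact file.

from operator import add
from random import random

from pyspark.sql import SparkSession

if __name__ == "__main__":
    # Build the session exactly as the stock pi.py example does.
    spark = SparkSession.builder.appName("PythonPi").getOrCreate()

    n = 100000 * 2

    def f(_):
        # Sample a point in the unit square and test whether it falls in the unit circle.
        x = random() * 2 - 1
        y = random() * 2 - 1
        return 1 if x ** 2 + y ** 2 <= 1 else 0

    count = spark.sparkContext.parallelize(range(1, n + 1), 2).map(f).reduce(add)
    print("Pi is roughly %f" % (4.0 * count / n))

    # spark.stop() deliberately omitted -- this is the edit that triggers the leak.

After the spark-submit above finishes and the Python process exits, a command such as "kubectl get pods" would be expected to still list the spark-driver pod and its executor pods, which is the leak described in the report.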


