Posted to issues@spark.apache.org by "Sergey (Jira)" <ji...@apache.org> on 2021/03/09 13:59:00 UTC

[jira] [Updated] (SPARK-34674) Spark app on k8s doesn't terminate without call to sparkContext.stop() method

     [ https://issues.apache.org/jira/browse/SPARK-34674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sergey updated SPARK-34674:
---------------------------
    Description: 
Hello!
I have run into a problem: if I don't call sparkContext.stop() explicitly, the Spark driver process doesn't terminate even after its main method has completed. This behaviour differs from Spark on YARN, where stopping the SparkContext manually is not required.
 It looks like the problem is caused by non-daemon threads, which prevent the driver JVM process from terminating.
 If I don't call sparkContext.stop(), I see at least these two non-daemon threads:
{code:java}
Thread[OkHttp kubernetes.default.svc,5,main]
Thread[OkHttp kubernetes.default.svc Writer,5,main]
{code}
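These threads can be listed from inside the driver; here is a minimal diagnostic sketch of my own (not code from Spark), printing every live non-daemon thread:
{code:java}
// Minimal sketch: list all live non-daemon threads in the driver JVM.
// Any thread printed here will keep the process alive after main() returns.
for (Thread t : Thread.getAllStackTraces().keySet()) {
    if (!t.isDaemon()) {
        System.out.println(t);
    }
}
{code}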
Could you please tell me whether it is possible to solve this problem?

The Docker image from the official spark-3.1.1 hadoop3.2 release is used.
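For now I work around it by stopping the context in a finally block, which makes the driver exit reliably. A minimal sketch (the class name and job body are illustrative, not my real application):
{code:java}
import org.apache.spark.sql.SparkSession;

// Minimal sketch of a driver that always releases SparkContext resources.
// "ExampleApp" and the job body are placeholders, not the real application.
public final class ExampleApp {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("example-app")
                .getOrCreate();
        try {
            spark.range(100).count(); // placeholder for the real job
        } finally {
            // Stops the SparkContext and its non-daemon threads,
            // allowing the JVM to terminate on k8s.
            spark.stop();
        }
    }
}
{code}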


> Spark app on k8s doesn't terminate without call to sparkContext.stop() method
> -----------------------------------------------------------------------------
>
>                 Key: SPARK-34674
>                 URL: https://issues.apache.org/jira/browse/SPARK-34674
>             Project: Spark
>          Issue Type: Bug
>          Components: Kubernetes
>    Affects Versions: 3.1.1
>            Reporter: Sergey
>            Priority: Major
>
> Hello!
>  I have run into a problem: if I don't call sparkContext.stop() explicitly, the Spark driver process doesn't terminate even after its main method has completed. This behaviour differs from Spark on YARN, where stopping the SparkContext manually is not required.
>  It looks like the problem is caused by non-daemon threads, which prevent the driver JVM process from terminating.
>  If I don't call sparkContext.stop(), I see at least these two non-daemon threads:
> {code:java}
> Thread[OkHttp kubernetes.default.svc,5,main]
> Thread[OkHttp kubernetes.default.svc Writer,5,main]
> {code}
> Could you please tell me whether it is possible to solve this problem?
> The Docker image from the official spark-3.1.1 hadoop3.2 release is used.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
