Posted to issues@spark.apache.org by "Hyukjin Kwon (Jira)" <ji...@apache.org> on 2020/05/01 01:10:00 UTC

[jira] [Resolved] (SPARK-31549) Pyspark SparkContext.cancelJobGroup does not work correctly

     [ https://issues.apache.org/jira/browse/SPARK-31549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon resolved SPARK-31549.
----------------------------------
    Fix Version/s: 3.0.0
       Resolution: Fixed

Issue resolved by pull request 28395
[https://github.com/apache/spark/pull/28395]

> Pyspark SparkContext.cancelJobGroup does not work correctly
> -----------------------------------------------------------
>
>                 Key: SPARK-31549
>                 URL: https://issues.apache.org/jira/browse/SPARK-31549
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.4.5, 3.0.0
>            Reporter: Weichen Xu
>            Assignee: Weichen Xu
>            Priority: Critical
>             Fix For: 3.0.0
>
>
> PySpark's SparkContext.cancelJobGroup does not work correctly. This issue has existed for a long time. The cause is that PySpark threads are not pinned to JVM threads when invoking Java-side methods, so every PySpark API that relies on Java thread-local variables misbehaves (including `sc.setLocalProperty`, `sc.cancelJobGroup`, `sc.setJobDescription`, and so on).
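The thread-mapping problem described above can be sketched in plain Python. This is a simplified, hypothetical model (not real Spark or py4j code): a value stored in a thread-local on one "JVM" thread is invisible to a later call that happens to run on a different "JVM" thread.

```python
import threading

# Simplified stand-in for the JVM side's thread-local job-group storage.
# (Hypothetical model for illustration only; not real Spark or py4j code.)
jvm_local = threading.local()

def jvm_call(fn, *args):
    """Run fn on a fresh 'JVM' thread, mimicking a Java-side call that is
    not pinned to the calling Python thread."""
    box = {}
    t = threading.Thread(target=lambda: box.setdefault("v", fn(*args)))
    t.start()
    t.join()
    return box.get("v")

def set_job_group(group):   # stands in for sc.setJobGroup(...)
    jvm_local.group = group

def get_job_group():        # what a cancelJobGroup-style call would consult
    return getattr(jvm_local, "group", None)

jvm_call(set_job_group, "my-group")  # stored on one JVM thread...
print(jvm_call(get_job_group))       # ...unset on another: prints None
```

Because each call lands on a different thread, the group set by the first call never reaches the second, which is why a subsequent cancellation would have nothing to act on.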
> This is a serious issue. Spark 3.0 adds an experimental PySpark 'PIN_THREAD' mode that addresses it, but that mode has two problems:
> * It is disabled by default; an additional environment variable must be set to enable it.
> * It has a memory leak that has not yet been addressed.
> A number of projects, such as hyperopt-spark and spark-joblib, rely on the `sc.cancelJobGroup` API to stop running jobs, so it is critical to fix this issue and have it work in the default PySpark mode. An alternative approach would be to implement methods like `rdd.setGroupAndCollect`.
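For reference, the experimental mode mentioned above is enabled through an environment variable that must be set before the SparkContext (and its JVM) is launched. A minimal sketch, assuming the `PYSPARK_PIN_THREAD` variable from the Spark 3.0 pinned-thread work; SparkContext creation itself is omitted:

```python
import os

# Must be set before the SparkContext (and its JVM) is created.
os.environ["PYSPARK_PIN_THREAD"] = "true"

# With pinned-thread mode on, each Python thread maps to its own JVM
# thread, so thread-local APIs such as sc.setJobGroup / sc.cancelJobGroup
# behave as expected (creating the SparkContext is omitted here).
print(os.environ["PYSPARK_PIN_THREAD"])  # prints: true
```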



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org