Posted to issues@spark.apache.org by "Matthew Farrellee (JIRA)" <ji...@apache.org> on 2014/09/07 13:47:28 UTC

[jira] [Commented] (SPARK-2972) APPLICATION_COMPLETE not created in Python unless context explicitly stopped

    [ https://issues.apache.org/jira/browse/SPARK-2972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14124872#comment-14124872 ] 

Matthew Farrellee commented on SPARK-2972:
------------------------------------------

[~roji] This was addressed for the pyspark shell in https://issues.apache.org/jira/browse/SPARK-2435. As for applications, it is the programmer's responsibility to stop the context before exiting; this can be seen in all the example code provided with Spark. Are you looking for the SparkContext to stop itself?
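
For illustration, a minimal sketch of such an application in the style of the examples bundled with Spark (the app name and the word-count job are placeholders, assuming the PySpark 1.x SparkContext API):

    from pyspark import SparkContext

    # Build the context up front; everything below runs against it.
    sc = SparkContext(appName="ExampleApp")

    counts = (sc.parallelize(["a", "b", "a"])
                .map(lambda w: (w, 1))
                .reduceByKey(lambda a, b: a + b))
    print(counts.collect())

    # Stopping the context flushes the event log and writes the
    # APPLICATION_COMPLETE marker that the history server looks for.
    sc.stop()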

> APPLICATION_COMPLETE not created in Python unless context explicitly stopped
> ----------------------------------------------------------------------------
>
>                 Key: SPARK-2972
>                 URL: https://issues.apache.org/jira/browse/SPARK-2972
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.0.2
>         Environment: Cloudera 5.1, yarn master on ubuntu precise
>            Reporter: Shay Rojansky
>
> If you don't explicitly stop a SparkContext at the end of a Python application with sc.stop(), an APPLICATION_COMPLETE file isn't created and the job doesn't get picked up by the history server.
> This can be easily reproduced with pyspark (but affects scripts as well).
> The current workaround is to wrap the entire script in a try/finally block and call sc.stop() in the finally clause.
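
A sketch of that workaround, assuming the PySpark 1.x SparkContext API (the application body is a placeholder):

    from pyspark import SparkContext

    sc = SparkContext(appName="ExampleApp")
    try:
        # ... application logic using sc ...
        pass
    finally:
        # Runs even if the job raises, so the event log is closed and
        # APPLICATION_COMPLETE is still written for the history server.
        sc.stop()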


