Posted to issues@spark.apache.org by "Patrick Wendell (JIRA)" <ji...@apache.org> on 2014/12/08 02:11:12 UTC

[jira] [Commented] (SPARK-4783) Remove all System.exit calls from sparkcontext

    [ https://issues.apache.org/jira/browse/SPARK-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14237341#comment-14237341 ] 

Patrick Wendell commented on SPARK-4783:
----------------------------------------

For code cleanliness, we should go through every place we call System.exit() and see which calls can safely be converted to exceptions. Most of our uses of System.exit are on the executor side, but there may be a few on the driver/SparkContext side.
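
For illustration, here is a minimal sketch of the kind of conversion this would involve. The object and method names below are hypothetical and not taken from the Spark source; SparkException is the existing org.apache.spark.SparkException class.

    import org.apache.spark.SparkException

    object ExitToException {
      // Before (illustrative): a hard exit tears down the whole JVM,
      // including any embedding gateway server.
      def failHard(msg: String): Unit = {
        System.err.println(msg)
        System.exit(1)
      }

      // After (illustrative): surface the problem as an exception that the
      // caller can catch, log, and decide how to recover from.
      def failWithException(msg: String): Unit = {
        throw new SparkException(msg)
      }
    }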

That said, if there is a fatal exception in the SparkContext, I don't think your app can safely just catch the exception, log it, and create a new SparkContext. Is that what you are trying to do? In that case there could be static state around that is not properly cleaned up and will cause the new context to be buggy.
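
If the goal really is recovery inside the same JVM, the gateway should at least stop the failed context before creating a new one. A rough sketch (not from the Spark source) follows, where runJobs and shouldRetry are hypothetical placeholders for the application's own logic:

    import org.apache.spark.{SparkConf, SparkContext}

    object GatewaySketch {
      def serveForever(conf: SparkConf,
                       runJobs: SparkContext => Unit,
                       shouldRetry: Throwable => Boolean): Unit = {
        var keepRunning = true
        while (keepRunning) {
          val sc = new SparkContext(conf)
          try {
            runJobs(sc)
            keepRunning = false
          } catch {
            case e: Exception => keepRunning = shouldRetry(e)
          } finally {
            // Stop the context so a later `new SparkContext` does not collide
            // with the failed one; note this still cannot guarantee that all
            // static state inside Spark has been cleaned up.
            sc.stop()
          }
        }
      }
    }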



> Remove all System.exit calls from sparkcontext
> ----------------------------------------------
>
>                 Key: SPARK-4783
>                 URL: https://issues.apache.org/jira/browse/SPARK-4783
>             Project: Spark
>          Issue Type: Bug
>            Reporter: David Semeria
>
> A common architectural choice for integrating Spark into a larger application is to employ a gateway to handle Spark jobs. The gateway is a server that contains one or more long-running SparkContexts.
> A typical server is built around a loop like the following pseudocode:
> var keepRunning = true
> while (keepRunning) {
>   try {
>     server.run()
>   } catch {
>     case e: Exception =>
>       keepRunning = logAndExamineError(e)
>   }
> }
> The problem is that SparkContext frequently calls System.exit when it encounters a problem, which means the server can only be re-spawned at the process level. That is much messier than the simple loop above.
> Therefore, I believe it makes sense to replace all System.exit calls in SparkContext with the throwing of a fatal error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
