Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/04/16 11:45:58 UTC

[jira] [Resolved] (SPARK-4783) System.exit() calls in SparkContext disrupt applications embedding Spark

     [ https://issues.apache.org/jira/browse/SPARK-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-4783.
------------------------------
       Resolution: Fixed
    Fix Version/s: 1.4.0

Issue resolved by pull request 5492
[https://github.com/apache/spark/pull/5492]

> System.exit() calls in SparkContext disrupt applications embedding Spark
> ------------------------------------------------------------------------
>
>                 Key: SPARK-4783
>                 URL: https://issues.apache.org/jira/browse/SPARK-4783
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>            Reporter: David Semeria
>             Fix For: 1.4.0
>
>
> A common architectural choice for integrating Spark into a larger application is to employ a gateway to handle Spark jobs. The gateway is a server containing one or more long-running SparkContexts.
> A typical server loop looks like the following pseudocode:
> var keepRunning = true
> while (keepRunning) {
>   try {
>     server.run()
>   } catch {
>     case e: Exception =>
>       keepRunning = logAndExamineError(e)
>   }
> }
> The problem is that SparkContext frequently calls System.exit() when it encounters a problem, which means the server can only be re-spawned at the process level. That is far messier than the simple loop above.
> Therefore, I believe it makes sense to replace all System.exit() calls in SparkContext with throwing a fatal error that the embedding application can catch.
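
As a rough sketch of what the proposal implies (illustrative only, not the exact change merged in pull request 5492), a fatal code path inside SparkContext would raise an exception instead of terminating the JVM, so the gateway loop above can recover in-process:

  import org.apache.spark.SparkException

  // Hypothetical helper standing in for a System.exit() call site:
  // surface the fatal condition to the embedding application rather
  // than killing the whole JVM.
  def abortContext(msg: String, cause: Throwable = null): Nothing =
    throw new SparkException(msg, cause)

  // The gateway loop can then decide whether to restart the context:
  //   try { server.run() }
  //   catch { case e: SparkException => keepRunning = logAndExamineError(e) }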



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org