Posted to issues@spark.apache.org by "Aseem Bansal (JIRA)" <ji...@apache.org> on 2016/11/03 07:15:59 UTC

[jira] [Commented] (SPARK-18241) If Spark Launcher fails to startApplication then handle's state does not change

    [ https://issues.apache.org/jira/browse/SPARK-18241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15631898#comment-15631898 ] 

Aseem Bansal commented on SPARK-18241:
--------------------------------------

Looking at the source code around mainClass = Utils.classForName(childMainClass) at https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/SparkSubmit.scala#L695, I see that the exceptions are caught and printed rather than rethrown or forwarded to the launcher's listeners.
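
Here is a minimal, self-contained Scala sketch of that pattern (paraphrased for illustration, not the verbatim SparkSubmit code; the class name is made up):

    object ClassLoadSketch {
      def main(args: Array[String]): Unit = {
        val childMainClass = "com.example.DoesNotExist" // hypothetical class name
        try {
          val mainClass = Class.forName(childMainClass)
          println(s"loaded ${mainClass.getName}")
        } catch {
          case e: ClassNotFoundException =>
            // The failure ends here: it is printed to the error stream and
            // never rethrown, so a parent SparkLauncher handle sees no state
            // change and the submitting thread sees no exception.
            e.printStackTrace(System.err)
        }
      }
    }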

The API docs say that startApplication is preferred, but failures need to be reported through the handle and its listeners; otherwise the listener API is not useful. Another case where failures are not surfaced through the Launcher API is https://issues.apache.org/jira/browse/SPARK-17742
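
To make the expectation concrete, here is a hedged Scala sketch of the listener usage this issue is about (the Spark home, JAR path, and main class below are hypothetical). With the behavior described here, if the child process dies before the application registers, neither callback fires and the handle simply stays in UNKNOWN:

    import org.apache.spark.launcher.{SparkAppHandle, SparkLauncher}

    object LauncherListenerSketch {
      def main(args: Array[String]): Unit = {
        val handle = new SparkLauncher()
          .setSparkHome("/path/that/does/not/exist") // hypothetical bad Spark home
          .setAppResource("/tmp/missing-app.jar")    // hypothetical deleted JAR
          .setMainClass("com.example.Main")          // hypothetical main class
          .setMaster("local[*]")
          .startApplication(new SparkAppHandle.Listener {
            // With the behavior described in this issue, neither callback is
            // invoked when the child process fails before the app registers.
            override def stateChanged(h: SparkAppHandle): Unit =
              println(s"state changed: ${h.getState}")
            override def infoChanged(h: SparkAppHandle): Unit =
              println(s"info changed: ${h.getAppId}")
          })
        // There is no exception to catch here; the handle just stays UNKNOWN.
        println(s"state right after submit: ${handle.getState}")
      }
    }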

> If Spark Launcher fails to startApplication then handle's state does not change
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-18241
>                 URL: https://issues.apache.org/jira/browse/SPARK-18241
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Submit
>    Affects Versions: 2.0.0
>            Reporter: Aseem Bansal
>
> I am using Spark 2.0.0 and submitting my job via https://spark.apache.org/docs/2.0.0/api/java/org/apache/spark/launcher/SparkLauncher.html. 
> If there is a failure after the launcher's startApplication has been called but before the Spark job has actually started (i.e. while starting the spark-submit process itself), there is 
> * no exception in the main thread that is submitting the job 
> * no exception in the job, as it has not started
> * no state change of the launcher's handle
> * only an entry in the error stream, logged under the default logger name that Spark derives from the job's main class
> Basically, it is not possible to catch an exception that happens during that window. The easiest way to reproduce it is to delete the application JAR or use an invalid Spark home while launching the job with SparkLauncher. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org