Posted to issues@spark.apache.org by "Ji Hao (JIRA)" <ji...@apache.org> on 2016/02/02 09:45:40 UTC

[jira] [Issue Comment Deleted] (SPARK-13133) When the --master option of the spark-submit script is inconsistent with SparkConf.setMaster in the Spark application code, the behavior of the Spark application is difficult to understand

     [ https://issues.apache.org/jira/browse/SPARK-13133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ji Hao updated SPARK-13133:
---------------------------
    Comment: was deleted

(was: Sean Owen, I think you should consider this issue; the error message could be clearer!)

> When the --master option of the spark-submit script is inconsistent with SparkConf.setMaster in the Spark application code, the behavior of the Spark application is difficult to understand
> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-13133
>                 URL: https://issues.apache.org/jira/browse/SPARK-13133
>             Project: Spark
>          Issue Type: Bug
>    Affects Versions: 1.3.0, 1.4.0, 1.5.0, 1.6.0
>            Reporter: Li Ye
>            Priority: Minor
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> When the --master option of the spark-submit script is inconsistent with SparkConf.setMaster in the Spark application code, the behavior is difficult to understand. For example, if the --master option of spark-submit is yarn-cluster while the application code calls SparkConf.setMaster("local"), the application exits abnormally after about 2 minutes. In the driver's log there is an error whose content is "SparkContext did not initialize after waiting for 100000 ms. Please check earlier log output for errors. Failing the application".
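> A minimal sketch that reproduces the conflict (the class and jar names are illustrative):
>
>     import org.apache.spark.{SparkConf, SparkContext}
>
>     object MasterConflictExample {
>       def main(args: Array[String]): Unit = {
>         val conf = new SparkConf()
>           .setAppName("MasterConflictExample")
>           // Hard-coded master; conflicts with --master yarn-cluster at submit time
>           .setMaster("local")
>         val sc = new SparkContext(conf)
>         sc.parallelize(1 to 100).count()
>         sc.stop()
>       }
>     }
>
> Submitting it with a different master triggers the failure described above:
>
>     spark-submit --class MasterConflictExample --master yarn-cluster example.jar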
> When SparkContext is launched, Spark should check whether the --master option passed to the spark-submit script and the master set via SparkConf.setMaster in the application code differ. If they do, a clear hint should be written to the driver's log so the developer can troubleshoot.
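> A rough sketch of such a check (the helper name, its placement, and the log text are illustrative, not actual Spark internals; it assumes spark-submit propagates --master as the spark.master system property, which SparkConf.setMaster then overwrites):
>
>     import org.apache.spark.SparkConf
>
>     object MasterConsistencyCheck {
>       // Warn when the submit-time master differs from the one set in code.
>       def warnIfInconsistent(conf: SparkConf): Unit = {
>         val submitted  = sys.props.get("spark.master")   // set by spark-submit
>         val configured = conf.getOption("spark.master")  // set by setMaster
>         for (s <- submitted; c <- configured if s != c) {
>           System.err.println(s"WARNING: spark-submit received --master $s but " +
>             s"the application set '$c' via SparkConf.setMaster; the code value wins.")
>         }
>       }
>     }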
> I found the same question on Stack Overflow:
> http://stackoverflow.com/questions/30670933/submit-spark-job-on-yarn-cluster



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org