Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2017/10/11 03:40:00 UTC

[jira] [Assigned] (SPARK-22243) job failed to restart from checkpoint

     [ https://issues.apache.org/jira/browse/SPARK-22243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-22243:
------------------------------------

    Assignee: Apache Spark

> job failed to restart from checkpoint
> -------------------------------------
>
>                 Key: SPARK-22243
>                 URL: https://issues.apache.org/jira/browse/SPARK-22243
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.0, 2.2.0
>            Reporter: StephenZou
>            Assignee: Apache Spark
>
> My spark-defaults.conf contains an entry related to the issue: I uploaded all jars from Spark's jars folder to an HDFS path:
> spark.yarn.jars  hdfs:///spark/cache/spark2.2/* 
> The streaming job fails to restart from a checkpoint; the ApplicationMaster throws "Error: Could not find or load main class org.apache.spark.deploy.yarn.ExecutorLauncher". The problem is always reproducible.
> I examined the SparkConf object recovered from the checkpoint and found that spark.yarn.jars is set to empty, so none of the jars are available on the AM side. The fix is that spark.yarn.jars should be reloaded from the properties file when recovering from a checkpoint. 
> Attached is a demo that reproduces the issue.
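A minimal sketch of the proposed fix, as plain Scala (hypothetical helper, not Spark's actual recovery code): when the conf is rebuilt from checkpoint data, fall back to the value parsed from spark-defaults.conf if the checkpointed spark.yarn.jars is missing or empty.

```scala
// Hypothetical illustration of the proposed fix: restore spark.yarn.jars
// from the local properties file when the checkpointed value is empty.
object CheckpointConfRepair {
  // recovered: conf key/values rebuilt from the checkpoint
  // propsFromFile: key/values parsed from spark-defaults.conf on the submitting host
  def restoreYarnJars(recovered: Map[String, String],
                      propsFromFile: Map[String, String]): Map[String, String] = {
    val key = "spark.yarn.jars"
    recovered.get(key).filter(_.nonEmpty) match {
      case Some(_) => recovered                    // checkpointed value is usable, keep it
      case None =>                                 // missing or empty: reload from properties file
        propsFromFile.get(key) match {
          case Some(v) => recovered + (key -> v)
          case None    => recovered                // nothing to restore from
        }
    }
  }
}
```

With this fallback the AM sees the same hdfs:///spark/cache/... jar list on restart as on the initial submit, so ExecutorLauncher can be resolved.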



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org