Posted to issues@spark.apache.org by "Nico Pappagianis (JIRA)" <ji...@apache.org> on 2017/06/08 22:32:18 UTC

[jira] [Commented] (SPARK-10795) FileNotFoundException while deploying pyspark job on cluster

    [ https://issues.apache.org/jira/browse/SPARK-10795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16043550#comment-16043550 ] 

Nico Pappagianis commented on SPARK-10795:
------------------------------------------

[~HackerWilson] Were you able to resolve this? I'm hitting the same thing running Spark 2.0.1 and Hadoop 2.7.2.

My Python code is just creating a SparkContext and then calling sc.stop().

In the YARN logs I see:

INFO: 2017-06-08 22:16:24,462 INFO  [main] yarn.Client - Uploading resource file:/home/.../python/lib/py4j-0.10.1-src.zip -> hdfs://.../.sparkStaging/application_1494012577752_1403/py4j-0.10.1-src.zip

When I do an fs -ls on the HDFS directory above, it shows the py4j file, yet the job fails with a FileNotFoundException for that same file:

File does not exist: hdfs://.../.sparkStaging/application_1494012577752_1403/py4j-0.10.1-src.zip
(stack trace here: https://gist.github.com/anonymous/5506654b88e19e6f51ffbd85cd3f25ee)
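
For what it's worth, the fs -ls check can also be reproduced from Java via the Hadoop FileSystem API. This is a minimal sketch (the staging path is a placeholder; <user> and <appId> stand for the real user and application ID), and the useful part is seeing which filesystem URI the configuration on the classpath actually resolves to:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StagingFileCheck {
    public static void main(String[] args) throws Exception {
        // Reads core-site.xml / hdfs-site.xml from the classpath (HADOOP_CONF_DIR).
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Placeholder path; substitute the .sparkStaging path from the YARN logs.
        Path staged = new Path("/user/<user>/.sparkStaging/<appId>/py4j-0.10.1-src.zip");
        System.out.println("Filesystem resolved to: " + fs.getUri());
        System.out.println(staged + " exists? " + fs.exists(staged));
    }
}

If this resolves a different default filesystem when run under the launcher job than under spark-submit, that could explain the upload landing in one place and the lookup failing in another.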

One thing to note is that I am launching a map-only job that in turn launches the Spark application on the cluster. The launcher job uses SparkLauncher (Java), with master and deploy mode set to "yarn" and "cluster", respectively; a sketch of the launcher side follows below.
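
For reference, the launcher side is essentially the following (a minimal sketch, not our actual code; the app resource path and the HADOOP_CONF_DIR value are placeholders):

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class PySparkYarnLauncher {
    public static void main(String[] args) throws Exception {
        // Placeholder values; the real launcher job sets these from its own config.
        Map<String, String> env = new HashMap<>();
        env.put("HADOOP_CONF_DIR", "/etc/hadoop/conf");

        SparkAppHandle handle = new SparkLauncher(env)
                .setAppResource("/path/to/job.py") // the Python job (SparkContext + sc.stop())
                .setMaster("yarn")
                .setDeployMode("cluster")
                .startApplication();

        // Wait for the YARN application to reach a terminal state.
        while (!handle.getState().isFinal()) {
            Thread.sleep(1000);
        }
        System.out.println("Final state: " + handle.getState());
    }
}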

When I submit the Python job directly via spark-submit, it runs successfully (with HADOOP_CONF_DIR and HADOOP_JAVA_HOME set to the same values I set in the launcher job).

> FileNotFoundException while deploying pyspark job on cluster
> ------------------------------------------------------------
>
>                 Key: SPARK-10795
>                 URL: https://issues.apache.org/jira/browse/SPARK-10795
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>         Environment: EMR 
>            Reporter: Harshit
>
> I am trying to run a simple Spark job using PySpark. It works standalone, but it fails when I deploy it to the cluster.
> Events:
> 2015-09-24 10:38:49,602 INFO  [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource file:/usr/lib/spark/python/lib/pyspark.zip -> hdfs://ip-xxxx.ap-southeast-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1439967440341_0461/pyspark.zip
> The resource upload above succeeds; I manually checked that the file is present at the specified path, but after a while I get the following error:
> Diagnostics: File does not exist: hdfs://ip-xxx.ap-southeast-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1439967440341_0461/pyspark.zip
> java.io.FileNotFoundException: File does not exist: hdfs://ip-1xxx.ap-southeast-1.compute.internal:8020/user/hadoop/.sparkStaging/application_1439967440341_0461/pyspark.zip


