Posted to issues@spark.apache.org by "Kousuke Saruta (JIRA)" <ji...@apache.org> on 2014/11/12 21:08:33 UTC
[jira] [Comment Edited] (SPARK-4267) Failing to launch jobs on Spark on YARN with Hadoop 2.5.0 or later
[ https://issues.apache.org/jira/browse/SPARK-4267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14208595#comment-14208595 ]
Kousuke Saruta edited comment on SPARK-4267 at 11/12/14 8:07 PM:
-----------------------------------------------------------------
Hi [~ozawa], on my YARN 2.5.1 (JDK 1.7.0_60) cluster, the Spark Shell works well.
I built Spark with the following command:
{code}
sbt/sbt -Dhadoop.version=2.5.1 -Pyarn assembly
{code}
I launched the Spark Shell with the following command:
{code}
bin/spark-shell --master yarn --deploy-mode client --executor-cores 1 --driver-memory 512M --executor-memory 512M --num-executors 1
{code}
Then I ran a job with the following script:
{code}
sc.textFile("hdfs:///user/kou/LICENSE.txt").flatMap(line => line.split(" ")).map(word => (word, 1)).persist().reduceByKey((a, b) => a + b, 16).saveAsTextFile("hdfs:///user/kou/LICENSE.txt.count")
{code}
So I don't think the problem is caused by the Hadoop version.
One possible cause is that SparkContext#stop is accidentally called somewhere between instantiating the SparkContext and running the job.
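That would also match the NPE in the stack trace below: SparkContext#defaultParallelism simply delegates to the TaskScheduler, which stop() tears down. A paraphrased sketch of that code path (from the 1.1.x source as I remember it, not verbatim):
{code}
// Paraphrased sketch of the SparkContext internals involved (not verbatim).
// stop() nulls out the TaskScheduler, so any later action that needs
// defaultParallelism dereferences null and throws the NPE reported below.
def defaultParallelism: Int = taskScheduler.defaultParallelism
{code}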
Did you see any ERROR logs in the shell?
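One quick way to narrow it down is to run a trivial action right after the shell comes up (a minimal check; nothing Hadoop-specific is assumed here):
{code}
// Minimal sanity check: run a trivial action immediately after startup.
// If SparkContext#stop has already been called, even this fails with the
// same NPE, which would confirm the context (not the job) is the problem.
scala> sc.parallelize(1 to 10).count()
{code}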
> Failing to launch jobs on Spark on YARN with Hadoop 2.5.0 or later
> ------------------------------------------------------------------
>
> Key: SPARK-4267
> URL: https://issues.apache.org/jira/browse/SPARK-4267
> Project: Spark
> Issue Type: Bug
> Reporter: Tsuyoshi OZAWA
>
> Currently we're trying Spark on YARN included in Hadoop 2.5.1. Hadoop 2.5 uses protobuf 2.5.0, so I compiled Spark with protobuf 2.5.0 like this:
> {code}
> ./make-distribution.sh --name spark-1.1.1 --tgz -Pyarn -Dhadoop.version=2.5.1 -Dprotobuf.version=2.5.0
> {code}
> Then Spark on YARN fails to launch jobs with an NPE:
> {code}
> $ bin/spark-shell --master yarn-client
> scala> sc.textFile("hdfs:///user/ozawa/wordcountInput20G").flatMap(line => line.split(" ")).map(word => (word, 1)).persist().reduceByKey((a, b) => a + b, 16).saveAsTextFile("hdfs:///user/ozawa/sparkWordcountOutNew2");
> java.lang.NullPointerException
> at org.apache.spark.SparkContext.defaultParallelism(SparkContext.scala:1284)
> at org.apache.spark.SparkContext.defaultMinPartitions(SparkContext.scala:1291)
> at org.apache.spark.SparkContext.textFile$default$2(SparkContext.scala:480)
> at $iwC$$iwC$$iwC$$iwC.<init>(<console>:13)
> at $iwC$$iwC$$iwC.<init>(<console>:18)
> at $iwC$$iwC.<init>(<console>:20)
> at $iwC.<init>(<console>:22)
> at <init>(<console>:24)
> at .<init>(<console>:28)
> at .<clinit>(<console>)
> at .<init>(<console>:7)
> at .<clinit>(<console>)
> at $print(<console>)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:789)
> at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1062)
> at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:615)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:646)
> at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:610)
> at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:823)
> at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:868)
> at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:780)
> at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:625)
> at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:633)
> at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:638)
> at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:963)
> at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:911)
> at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:911)
> at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:911)
> at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1006)
> at org.apache.spark.repl.Main$.main(Main.scala:31)
> at org.apache.spark.repl.Main.main(Main.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:329)
> at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
> at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> {code}