Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/01/24 15:46:34 UTC

[jira] [Resolved] (SPARK-4289) Creating an instance of Hadoop Job fails in the Spark shell when toString() is called on the instance.

     [ https://issues.apache.org/jira/browse/SPARK-4289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-4289.
------------------------------
    Resolution: Not a Problem

I suggest this is Not a Problem, or at least not something I can see Spark addressing. The {{toString()}} failure is really a minor Hadoop bug: {{Job.toString()}} throws an {{IllegalStateException}} unless the job is in the RUNNING state, and the shell calls {{toString()}} on every result. There's the {{:silent}} workaround.
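For reference, a minimal sketch of the workaround in the Spark shell (this assumes a running shell session with {{sc}} bound; it cannot run standalone):

```scala
// :silent toggles the REPL's automatic printing of results.
// With printing off, toString() is never invoked on the Job,
// so the assignment completes normally.
scala> :silent
scala> import org.apache.hadoop.mapreduce.Job
scala> val job = new Job(sc.hadoopConfiguration)
scala> :silent   // toggle result printing back on
```

The trade-off is that all result echoing is suppressed while {{:silent}} is active, not just for the {{Job}} value.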

> Creating an instance of Hadoop Job fails in the Spark shell when toString() is called on the instance.
> ------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-4289
>                 URL: https://issues.apache.org/jira/browse/SPARK-4289
>             Project: Spark
>          Issue Type: Bug
>            Reporter: Corey J. Nolet
>
> This one is easy to reproduce.
> {code}val job = new Job(sc.hadoopConfiguration){code}
> I'm not sure what the solution would be offhand, since the failure happens when the shell calls toString() on the Job instance. The problem is that, because of the failure, the instance is never actually assigned to the job val.
> java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
> 	at org.apache.hadoop.mapreduce.Job.ensureState(Job.java:283)
> 	at org.apache.hadoop.mapreduce.Job.toString(Job.java:452)
> 	at scala.runtime.ScalaRunTime$.scala$runtime$ScalaRunTime$$inner$1(ScalaRunTime.scala:324)
> 	at scala.runtime.ScalaRunTime$.stringOf(ScalaRunTime.scala:329)
> 	at scala.runtime.ScalaRunTime$.replStringOf(ScalaRunTime.scala:337)
> 	at .<init>(<console>:10)
> 	at .<clinit>(<console>)
> 	at $print(<console>)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:789)
> 	at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1062)
> 	at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:615)
> 	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:646)
> 	at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:610)
> 	at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:814)
> 	at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:859)
> 	at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:771)
> 	at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:616)
> 	at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:624)
> 	at org.apache.spark.repl.SparkILoop.loop(SparkILoop.scala:629)
> 	at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply$mcZ$sp(SparkILoop.scala:954)
> 	at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:902)
> 	at org.apache.spark.repl.SparkILoop$$anonfun$process$1.apply(SparkILoop.scala:902)
> 	at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
> 	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:902)
> 	at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:997)
> 	at org.apache.spark.repl.Main$.main(Main.scala:31)
> 	at org.apache.spark.repl.Main.main(Main.scala)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at org.apache.spark.deploy.SparkSubmit$.launch(SparkSubmit.scala:328)
> 	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:75)
> 	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
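For anyone trying to understand the failure mode outside of Spark, here is a hypothetical stand-in (not from the issue) that mimics Hadoop's {{Job.ensureState}} behavior; pasting it into any Scala REPL reproduces the same pattern, since the REPL calls toString() on every bound result:

```scala
// FakeJob is a made-up class mirroring org.apache.hadoop.mapreduce.Job:
// toString() throws unless the job has reached the RUNNING state.
class FakeJob {
  private var state = "DEFINE"  // new jobs start in DEFINE, like Hadoop's Job

  override def toString: String = {
    if (state != "RUNNING")
      throw new IllegalStateException(s"Job in state $state instead of RUNNING")
    s"FakeJob($state)"
  }
}

// In a REPL, `val job = new FakeJob` constructs the object successfully,
// but the REPL's result-printing step calls toString() and the thrown
// exception prevents the binding, just as in the report above.
```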



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org