Posted to issues@spark.apache.org by "Arsen Vladimirskiy (JIRA)" <ji...@apache.org> on 2016/08/03 22:38:20 UTC

[jira] [Comment Edited] (SPARK-13710) Spark shell shows ERROR when launching on Windows

    [ https://issues.apache.org/jira/browse/SPARK-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15406744#comment-15406744 ] 

Arsen Vladimirskiy edited comment on SPARK-13710 at 8/3/16 10:38 PM:
---------------------------------------------------------------------

I noticed that when using the binary package "spark-2.0.0-bin-without-hadoop.tgz" (i.e. with user-provided Hadoop pointed to via "export SPARK_DIST_CLASSPATH=$(hadoop classpath)"), the same error still happens.

java.lang.NoClassDefFoundError: Could not initialize class scala.tools.fusesource_embedded.jansi.internal.Kernel32
        at scala.tools.fusesource_embedded.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
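
For reference, the setup that hits this is roughly the following (a minimal sketch only; it assumes the "hadoop" command is on the PATH, and paths are illustrative):

{noformat}
# sketch of the reproduction: "without-hadoop" package plus user-provided Hadoop
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
bin/spark-shell          # on Windows: bin\spark-shell.cmd
{noformat}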

I compared the jars provided with spark-2.0.0-bin-with-hadoop-2.7 to the ones provided with spark-2.0.0-bin-without-hadoop and noticed that jline-2.12.jar is present in the "with-hadoop" package but missing from the "without-hadoop" binary package.

When I copy jline-2.12.jar to the jars folder of the "without-hadoop" package, bin\spark-shell starts without encountering this error.
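
The workaround is roughly just a file copy (a sketch only; the package folder names are illustrative):

{noformat}
REM copy jline from the "with-hadoop" package into the "without-hadoop" jars folder
copy spark-2.0.0-bin-with-hadoop-2.7\jars\jline-2.12.jar spark-2.0.0-bin-without-hadoop\jars\
spark-2.0.0-bin-without-hadoop\bin\spark-shell.cmd
{noformat}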

Is there a reason jline-2.12.jar is not part of the "without-hadoop" package?



> Spark shell shows ERROR when launching on Windows
> -------------------------------------------------
>
>                 Key: SPARK-13710
>                 URL: https://issues.apache.org/jira/browse/SPARK-13710
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell, Windows
>            Reporter: Masayoshi TSUZUKI
>            Assignee: Michel Lemay
>            Priority: Minor
>             Fix For: 2.0.0
>
>
> On Windows, when we launch {{bin\spark-shell.cmd}}, it shows an ERROR message and a stack trace.
> {noformat}
> C:\Users\tsudukim\Documents\workspace\spark-dev3>bin\spark-shell
> [ERROR] Terminal initialization failed; falling back to unsupported
> java.lang.NoClassDefFoundError: Could not initialize class scala.tools.fusesource_embedded.jansi.internal.Kernel32
>         at scala.tools.fusesource_embedded.jansi.internal.WindowsSupport.getConsoleMode(WindowsSupport.java:50)
>         at scala.tools.jline_embedded.WindowsTerminal.getConsoleMode(WindowsTerminal.java:204)
>         at scala.tools.jline_embedded.WindowsTerminal.init(WindowsTerminal.java:82)
>         at scala.tools.jline_embedded.TerminalFactory.create(TerminalFactory.java:101)
>         at scala.tools.jline_embedded.TerminalFactory.get(TerminalFactory.java:158)
>         at scala.tools.jline_embedded.console.ConsoleReader.<init>(ConsoleReader.java:229)
>         at scala.tools.jline_embedded.console.ConsoleReader.<init>(ConsoleReader.java:221)
>         at scala.tools.jline_embedded.console.ConsoleReader.<init>(ConsoleReader.java:209)
>         at scala.tools.nsc.interpreter.jline_embedded.JLineConsoleReader.<init>(JLineReader.scala:61)
>         at scala.tools.nsc.interpreter.jline_embedded.InteractiveReader.<init>(JLineReader.scala:33)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiate$1$1.apply(ILoop.scala:865)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$scala$tools$nsc$interpreter$ILoop$$instantiate$1$1.apply(ILoop.scala:862)
>         at scala.tools.nsc.interpreter.ILoop.scala$tools$nsc$interpreter$ILoop$$mkReader$1(ILoop.scala:871)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$15$$anonfun$apply$8.apply(ILoop.scala:875)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$15$$anonfun$apply$8.apply(ILoop.scala:875)
>         at scala.util.Try$.apply(Try.scala:192)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$15.apply(ILoop.scala:875)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$15.apply(ILoop.scala:875)
>         at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
>         at scala.collection.immutable.Stream$$anonfun$map$1.apply(Stream.scala:418)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1233)
>         at scala.collection.immutable.Stream$Cons.tail(Stream.scala:1223)
>         at scala.collection.immutable.Stream.collect(Stream.scala:435)
>         at scala.tools.nsc.interpreter.ILoop.chooseReader(ILoop.scala:877)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1$$anonfun$apply$mcZ$sp$2.apply(ILoop.scala:916)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:916)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
>         at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:911)
>         at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:97)
>         at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:911)
>         at org.apache.spark.repl.Main$.doMain(Main.scala:64)
>         at org.apache.spark.repl.Main$.main(Main.scala:47)
>         at org.apache.spark.repl.Main.main(Main.scala)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:497)
>         at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:737)
>         at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:183)
>         at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:208)
>         at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:122)
>         at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel).
> 16/03/07 13:05:32 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Spark context available as sc (master = local[*], app id = local-1457323533704).
> SQL context available as sqlContext.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 2.0.0-SNAPSHOT
>       /_/
> Using Scala version 2.11.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_40)
> Type in expressions to have them evaluated.
> Type :help for more information.
> scala> sc.textFile("README.md")
> res0: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1] at textFile at <console>:25
> scala> sc.textFile("README.md").count()
> res1: Long = 97
> {noformat}
> Spark-shell itself seems to work fine during my simple operation check.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org