Posted to dev@zeppelin.apache.org by "Nelson Costa (JIRA)" <ji...@apache.org> on 2018/03/13 16:05:00 UTC

[jira] [Created] (ZEPPELIN-3324) SparkILoop - ambiguous reference to overloaded definition

Nelson Costa created ZEPPELIN-3324:
--------------------------------------

             Summary: SparkILoop - ambiguous reference to overloaded definition
                 Key: ZEPPELIN-3324
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-3324
             Project: Zeppelin
          Issue Type: Bug
    Affects Versions: 0.9.0
            Reporter: Nelson Costa
             Fix For: 0.9.0



{noformat}
[WARNING] OpenJDK 64-Bit Server VM warning: ignoring option PermSize=64m; support was removed in 8.0
[WARNING] OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[ERROR] /opt/src/zeppelin/spark/scala-2.10/src/main/scala/org/apache/zeppelin/spark/SparkScala210Interpreter.scala:70: error: ambiguous reference to overloaded definition,
[ERROR] both constructor SparkILoop in class SparkILoop of type (in0: java.io.BufferedReader, out: tools.nsc.interpreter.JPrintWriter)org.apache.spark.repl.SparkILoop
[ERROR] and  constructor SparkILoop in class SparkILoop of type (in0: Option[java.io.BufferedReader], out: tools.nsc.interpreter.JPrintWriter)org.apache.spark.repl.SparkILoop
[ERROR] match argument types (Null,java.io.PrintWriter) and expected result type org.apache.spark.repl.SparkILoop
[ERROR]     sparkILoop = new SparkILoop(null, new JPrintWriter(Console.out, true))
[ERROR]                  ^
[ERROR] one error found
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Zeppelin ........................................... SUCCESS [  8.840 s]
[INFO] Zeppelin: Interpreter Parent ....................... SUCCESS [  1.466 s]
[INFO] Zeppelin: Interpreter .............................. SUCCESS [ 12.182 s]
[INFO] Zeppelin: Display system apis ...................... SUCCESS [ 16.248 s]
[INFO] Zeppelin: Spark Parent ............................. SUCCESS [ 18.949 s]
[INFO] Zeppelin: Spark Scala Parent ....................... SUCCESS [ 24.648 s]
[INFO] Zeppelin: Spark Interpreter Scala_2.11 ............. SUCCESS [  6.341 s]
[INFO] Zeppelin: Spark Interpreter Scala_2.10 ............. FAILURE [  3.879 s]
[INFO] Zeppelin: Spark Shims .............................. SKIPPED
[INFO] Zeppelin: Spark1 Shims ............................. SKIPPED
[INFO] Zeppelin: Spark2 Shims ............................. SKIPPED
[INFO] Zeppelin: Python interpreter ....................... SKIPPED
[INFO] Zeppelin: Spark Interpreter ........................ SKIPPED
[INFO] Zeppelin: Zengine .................................. SKIPPED
[INFO] Zeppelin: Spark dependencies ....................... SKIPPED
[INFO] Zeppelin: Markdown interpreter ..................... SKIPPED
[INFO] Zeppelin: Angular interpreter ...................... SKIPPED
[INFO] Zeppelin: Shell interpreter ........................ SKIPPED
[INFO] Zeppelin: JDBC interpreter ......................... SKIPPED
[INFO] Zeppelin: Apache Cassandra interpreter ............. SKIPPED
[INFO] Zeppelin: Sap ...................................... SKIPPED
[INFO] Zeppelin: web Application .......................... SKIPPED
[INFO] Zeppelin: Server ................................... SKIPPED
[INFO] Zeppelin: Packaging distribution ................... SKIPPED
[INFO] Zeppelin: R Interpreter ............................ SKIPPED
[INFO] Zeppelin: Helium development interpreter ........... SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:33 min
[INFO] Finished at: 2018-03-07T14:17:34Z
[INFO] Final Memory: 83M/907M
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "cassandra-spark-1.5" could not be activated because it does not exist.
[WARNING] The requested profile "sparkr" could not be activated because it does not exist.
[WARNING] The requested profile "pyspark" could not be activated because it does not exist.
[WARNING] The requested profile "hadoop-2.7" could not be activated because it does not exist.
[ERROR] Failed to execute goal net.alchim31.maven:scala-maven-plugin:3.2.2:compile (scala-compile-first) on project spark-scala-2.10: wrap: org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1) -> [Help 1]
{noformat}
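A possible way to resolve the ambiguity at SparkScala210Interpreter.scala:70 (a sketch only, not the change actually committed for ZEPPELIN-3324; the constructor signatures are taken from the compiler output above) is to give the first argument an explicit type, so that only one SparkILoop constructor applies instead of the bare {{null}} matching both overloads:

{noformat}
// Hypothetical sketch: disambiguate the SparkILoop call by pinning the type
// of the first argument. Signatures per the error log above:
//   SparkILoop(in0: java.io.BufferedReader, out: JPrintWriter)
//   SparkILoop(in0: Option[java.io.BufferedReader], out: JPrintWriter)
import java.io.BufferedReader
import scala.tools.nsc.interpreter.JPrintWriter
import org.apache.spark.repl.SparkILoop

// Option 1: target the Option[BufferedReader] overload explicitly.
val sparkILoop = new SparkILoop(None: Option[BufferedReader],
  new JPrintWriter(Console.out, true))

// Option 2: keep the null but pin it to the BufferedReader overload.
// val sparkILoop = new SparkILoop(null.asInstanceOf[BufferedReader],
//   new JPrintWriter(Console.out, true))
{noformat}

Either form should compile against the signatures reported above; which overload the Scala 2.10 interpreter ought to use is for the actual fix to decide.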




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)