Posted to dev@zeppelin.apache.org by "Gerald G Probst (Jira)" <ji...@apache.org> on 2020/11/04 20:52:00 UTC

[jira] [Created] (ZEPPELIN-5120) Unable to open Spark Interpreter after upgrading from 0.8.1 to 0.9.0-preview2

Gerald G Probst created ZEPPELIN-5120:
-----------------------------------------

             Summary: Unable to open Spark Interpreter after upgrading from 0.8.1 to 0.9.0-preview2
                 Key: ZEPPELIN-5120
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-5120
             Project: Zeppelin
          Issue Type: Bug
          Components: Interpreters, spark
    Affects Versions: 0.9.0
         Environment: zeppelin-env.sh:
{code:bash}
export MASTER="yarn"
export SPARK_HOME="/usr/lib/spark"
export SPARK_SUBMIT_OPTIONS="--driver-memory 12G --executor-memory 8G"
export AWS_ACCESS_KEY_ID=<key>
export AWS_SECRET_ACCESS_KEY=<secret>
export HADOOP_HOME="/usr/lib/hadoop"
export USE_HADOOP=true
export HADOOP_CONF_DIR="/etc/hadoop/conf"
{code}
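Our understanding (an assumption on our part, not verified against the Zeppelin source) is that with USE_HADOOP=true the interpreter process picks up the cluster's Hadoop jars from the output of hadoop classpath; a quick way to see what that adds:
{code:bash}
# Lists the cluster Hadoop jars that, we believe, end up on the
# Zeppelin interpreter classpath when USE_HADOOP=true.
${HADOOP_HOME}/bin/hadoop classpath
{code}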
 

spark: 2.4.0

scala: 2.11.12

java: 1.8.0_232

OS: Debian 4.9.144-3.1

 
            Reporter: Gerald G Probst


We were successfully running Zeppelin 0.8.1 on a Google Cloud Dataproc cluster and needed to upgrade because we were encountering a bug that was resolved in newer versions. After upgrading to 0.9.0-preview2 we can no longer connect to the Spark Interpreter.

 

Any code we run returns the following:

 
{noformat}
org.apache.zeppelin.interpreter.InterpreterException: org.apache.zeppelin.interpreter.InterpreterException: Fail to open SparkInterpreter
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:76)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:760)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:668)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:172)
    at org.apache.zeppelin.scheduler.AbstractScheduler.runJob(AbstractScheduler.java:130)
    at org.apache.zeppelin.scheduler.FIFOScheduler.lambda$runJobInScheduler$0(FIFOScheduler.java:39)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.zeppelin.interpreter.InterpreterException: Fail to open SparkInterpreter
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:122)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.open(LazyOpenInterpreter.java:70)
    ... 8 more
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.spark2CreateContext(BaseSparkScalaInterpreter.scala:292)
    at org.apache.zeppelin.spark.BaseSparkScalaInterpreter.createSparkContext(BaseSparkScalaInterpreter.scala:230)
    at org.apache.zeppelin.spark.SparkScala211Interpreter.open(SparkScala211Interpreter.scala:98)
    at org.apache.zeppelin.spark.SparkInterpreter.open(SparkInterpreter.java:106)
    ... 9 more
Caused by: java.lang.VerifyError: Bad return type
Exception Details:
  Location:
    org/apache/hadoop/hdfs/DFSClient.getQuotaUsage(Ljava/lang/String;)Lorg/apache/hadoop/fs/QuotaUsage; @157: areturn
  Reason:
    Type 'org/apache/hadoop/fs/ContentSummary' (current frame, stack[0]) is not assignable to 'org/apache/hadoop/fs/QuotaUsage' (from method signature)
  Current Frame:
    bci: @157
    flags: { }
    locals: { 'org/apache/hadoop/hdfs/DFSClient', 'java/lang/String', 'org/apache/hadoop/ipc/RemoteException', 'java/io/IOException' }
    stack: { 'org/apache/hadoop/fs/ContentSummary' }
  Bytecode:
    0x0000000: 2ab6 00df 2a13 01ff 2bb6 00b4 4d01 4e2a
    0x0000010: b400 422b b902 0002 003a 042c c600 1d2d
    0x0000020: c600 152c b600 b6a7 0012 3a05 2d19 05b6
    0x0000030: 00b8 a700 072c b600 b619 04b0 3a04 1904
    0x0000040: 4e19 04bf 3a06 2cc6 001d 2dc6 0015 2cb6
    0x0000050: 00b6 a700 123a 072d 1907 b600 b8a7 0007
    0x0000060: 2cb6 00b6 1906 bf4d 2c07 bd00 d159 0312
    0x0000070: d353 5904 12dd 5359 0512 de53 5906 1302
    0x0000080: 0153 b600 d44e 2dc1 0201 9900 14b2 0023
    0x0000090: 1302 02b9 002b 0200 2a2b b602 03b0 2dbf
    0x00000a0:
  Exception Handler Table:
    bci [35, 39] => handler: 42
    bci [15, 27] => handler: 60
    bci [15, 27] => handler: 68
    bci [78, 82] => handler: 85
    bci [60, 70] => handler: 68
    bci [4, 57] => handler: 103
    bci [60, 103] => handler: 103
  Stackmap Table:
    full_frame(@42,{Object[#736],Object[#759],Object[#814],Object[#784],Object[#1217]},{Object[#784]})
    same_frame(@53)
    same_frame(@57)
    full_frame(@60,{Object[#736],Object[#759],Object[#814],Object[#784]},{Object[#784]})
    same_locals_1_stack_item_frame(@68,Object[#784])
    full_frame(@85,{Object[#736],Object[#759],Object[#814],Object[#784],Top,Top,Object[#784]},{Object[#784]})
    same_frame(@96)
    same_frame(@100)
    full_frame(@103,{Object[#736],Object[#759]},{Object[#839]})
    append_frame(@158,Object[#839],Object[#799])
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:164)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1866)
    at org.apache.spark.scheduler.EventLoggingListener.<init>(EventLoggingListener.scala:71)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:528)
    at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2530)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
    at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
    at scala.Option.getOrElse(Option.scala:121)
    at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
    ... 17 more
{noformat}
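The root cause at the bottom is a java.lang.VerifyError: DFSClient.getQuotaUsage returning a ContentSummary where a QuotaUsage is expected. To us this looks like two different Hadoop versions mixed on the interpreter classpath (as far as we can tell, QuotaUsage only exists in newer Hadoop releases). A check of the kind we have been trying, where the Zeppelin paths are assumptions for our layout:
{code:bash}
# Hypothetical check: compare the cluster's Hadoop version against any
# hadoop-* jars shipped inside the Zeppelin installation itself.
${HADOOP_HOME}/bin/hadoop version
find /usr/lib/zeppelin -name 'hadoop-*.jar' 2>/dev/null
{code}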
 

We can connect to Spark via the spark-shell and run commands successfully.
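Illustrative, a minimal check of the kind we ran from the cluster master node (the exact commands may have differed):
{code:bash}
# spark-shell against YARN works fine outside of Zeppelin:
echo 'spark.range(10).count()' | ${SPARK_HOME}/bin/spark-shell --master yarn
{code}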

This is blocking our current development, and we have been unable to find a way to resolve the issue.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)