Posted to dev@zeppelin.apache.org by "Igor Berman (JIRA)" <ji...@apache.org> on 2019/06/05 14:33:00 UTC

[jira] [Created] (ZEPPELIN-4179) SparkScala211Interpreter missing synchronization

Igor Berman created ZEPPELIN-4179:
-------------------------------------

             Summary: SparkScala211Interpreter missing synchronization
                 Key: ZEPPELIN-4179
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-4179
             Project: Zeppelin
          Issue Type: Bug
            Reporter: Igor Berman


Hi,

using the new Spark interpreter (zeppelin.spark.useNew: true) with Scala 2.11 and Spark 2.2 sometimes produces very cryptic errors, while disabling it eliminates the problem.

I believe this is connected to the fact that Scala interpretation is not synchronized, whereas OldSparkInterpreter wraps the body of its interpret method in a synchronized (this) {} block.
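To illustrate the point, here is a simplified sketch of the kind of guard OldSparkInterpreter has and SparkScala211Interpreter appears to lack. The class name and wiring are illustrative, not the actual Zeppelin signatures; only the synchronized wrapper is the point:
{code:java}
// Simplified sketch -- not the actual Zeppelin code.
// The only point is that the call into the Scala REPL is wrapped
// in a lock, so two paragraphs cannot compile code in the same
// REPL concurrently.
class InterpreterSketch(underlying: scala.tools.nsc.interpreter.IMain) {

  def interpret(code: String): Unit = synchronized {
    // Without this synchronized block, concurrent paragraphs can
    // interleave the REPL's wrapper-object generation (the $iw
    // classes seen in the stack traces below), so the driver and
    // executors can end up with incompatible class definitions.
    underlying.interpret(code)
  }
}
{code}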

The errors we get vary from:
{code:java}
java.io.InvalidClassException: $line187494590728.$read$$iw$$iw$$iw$$iw$$iw$$iw$$iw$$iw;
local class incompatible:
stream classdesc serialVersionUID = 6658695490598467942,
local class serialVersionUID = 5448801901630725440{code}

to 
{code:java}
java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133){code}

To reproduce, define several notebooks whose only difference is the numeric suffix of the function name (1, 2, 3, ...), then run them sequentially:

a Spark paragraph that defines a simple UDF:
{code:java}
import spark.implicits._

def num_to_ip3(num: Long) = {
  require(num >= 0 && num < (2L << 31), s"Invalid number to convert: ${num}")
  val buf: StringBuilder = new StringBuilder()
  var i = 24
  var ip = num
  while (i >= 0) {
    val a = ip >> i   // extract the top remaining octet
    ip = a << i ^ ip  // clear that octet from ip
    buf.append(a)
    if (i > 0) {
      buf.append(".")
    }
    i = i - 8
  }
  buf.toString()
}

spark.udf.register("num_to_ip3", num_to_ip3 _)
{code}
and in a second paragraph, a usage of the above UDF:
{code:java}
%sql
select num_to_ip3(1342342){code}

[~mixermt] [~lior.c@taboola.com] fyi



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)