Posted to users@zeppelin.apache.org by Alexander Bezzubov <bz...@apache.org> on 2016/11/16 11:40:25 UTC

Re: Two different errors while executing Spark SQL queries against cached temp tables

Hi Florian,

sorry for the slow response. I guess the main reason there has not been much
feedback here is that the error you describe is hard to reproduce, as it does
not happen reliably even in your local environment.

java.lang.NoSuchMethodException: org.apache.spark.io.LZ4CompressionCodec

This can be a sign of a Hadoop FS codec misconfiguration.

Could you share a bit more details on Zeppelin/Spark/Hadoop configuration
that you use?

What is your SPARK_HOME? What is in your zeppelin-env.sh? Do you use an
external Spark cluster? Is it a Spark standalone or yarn-client cluster
configuration? You have shared the Spark interpreter logs, but just in case,
is there anything strange in the Zeppelin server .log/.out?
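
For reference, the codec Spark uses on this code path is controlled by the
spark.io.compression.codec property (lz4 is the default in Spark 2.0). As a
sketch of how to rule out a codec mix-up, you could pin it explicitly, either
in conf/spark-defaults.conf or in the Zeppelin Spark interpreter settings:

# conf/spark-defaults.conf -- pin the codec explicitly.
# lz4 is the Spark 2.0 default; snappy is an alternative worth trying
# if the LZ4 classes appear to be shadowed by a conflicting jar.
spark.io.compression.codec  lz4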

Details like this would enable more people to chime in and help.

--

Alex

On Wed, Nov 16, 2016, 12:31 Florian Schulz <Fl...@web.de> wrote:

Hi,

can anyone help me with this? It is very annoying, because I get this error
very often (on my local machine and also on a second VM). I use Zeppelin
0.6.2 with Spark 2.0 and Scala 2.11.


Best regards
Florian

*Sent:* Monday, 14 November 2016 at 20:45
*From:* "Florian Schulz" <Fl...@web.de>
*To:* users@zeppelin.apache.org
*Subject:* Two different errors while executing Spark SQL queries against
cached temp tables
Hi everyone,

I am having trouble executing Spark SQL queries against cached temp tables.
I query different temp tables, and while computing aggregates etc. I often
get one of these two errors back:

java.lang.NoSuchMethodException: org.apache.spark.io.LZ4CompressionCodec.<init>(org.apache.spark.SparkConf)
    at java.lang.Class.getConstructor0(Class.java:3082)
    at java.lang.Class.getConstructor(Class.java:1825)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
    at org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:66)
    at org.apache.spark.sql.execution.SparkPlan.org$apache$spark$sql$execution$SparkPlan$$decodeUnsafeRows(SparkPlan.scala:265)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeTake$1.apply(SparkPlan.scala:351)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeTake$1.apply(SparkPlan.scala:350)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:350)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:39)
    at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2183)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2182)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2189)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1925)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1924)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2562)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
    at sun.reflect.GeneratedMethodAccessor322.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:216)
    at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:129)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
  org.codehaus.janino.JaninoRuntimeException: Class 'org.apache.spark.sql.catalyst.expressions.codegen.GeneratedClass' was loaded through a different loader
    at org.codehaus.janino.SimpleCompiler$1.getDelegate(SimpleCompiler.java:337)
    at org.codehaus.janino.SimpleCompiler$1.accept(SimpleCompiler.java:291)
    at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:5159)
    at org.codehaus.janino.UnitCompiler.access$16700(UnitCompiler.java:185)
    at org.codehaus.janino.UnitCompiler$29.getSuperclass2(UnitCompiler.java:8154)
    at org.codehaus.janino.IClass.getSuperclass(IClass.java:406)
    at org.codehaus.janino.IClass.findMemberType(IClass.java:766)
    at org.codehaus.janino.IClass.findMemberType(IClass.java:733)
    at org.codehaus.janino.UnitCompiler.findMemberType(UnitCompiler.java:10116)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:5300)
    at org.codehaus.janino.UnitCompiler.getReferenceType(UnitCompiler.java:5207)
    at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:5188)
    at org.codehaus.janino.UnitCompiler.access$12600(UnitCompiler.java:185)
    at org.codehaus.janino.UnitCompiler$16.visitReferenceType(UnitCompiler.java:5119)
    at org.codehaus.janino.Java$ReferenceType.accept(Java.java:2880)
    at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:5159)
    at org.codehaus.janino.UnitCompiler.getType2(UnitCompiler.java:5414)
    at org.codehaus.janino.UnitCompiler.access$12400(UnitCompiler.java:185)
    at org.codehaus.janino.UnitCompiler$16.visitArrayType(UnitCompiler.java:5117)
    at org.codehaus.janino.Java$ArrayType.accept(Java.java:2954)
    at org.codehaus.janino.UnitCompiler.getType(UnitCompiler.java:5159)
    at org.codehaus.janino.UnitCompiler.access$16700(UnitCompiler.java:185)
    at org.codehaus.janino.UnitCompiler$31.getParameterTypes2(UnitCompiler.java:8533)
    at org.codehaus.janino.IClass$IInvocable.getParameterTypes(IClass.java:835)
    at org.codehaus.janino.IClass$IMethod.getDescriptor2(IClass.java:1063)
    at org.codehaus.janino.IClass$IInvocable.getDescriptor(IClass.java:849)
    at org.codehaus.janino.IClass.getIMethods(IClass.java:211)
    at org.codehaus.janino.IClass.getIMethods(IClass.java:199)
    at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:409)
    at org.codehaus.janino.UnitCompiler.compile2(UnitCompiler.java:393)
    at org.codehaus.janino.UnitCompiler.access$400(UnitCompiler.java:185)
    at org.codehaus.janino.UnitCompiler$2.visitPackageMemberClassDeclaration(UnitCompiler.java:347)
    at org.codehaus.janino.Java$PackageMemberClassDeclaration.accept(Java.java:1139)
    at org.codehaus.janino.UnitCompiler.compile(UnitCompiler.java:354)
    at org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:322)
    at org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:383)
    at org.codehaus.janino.ClassBodyEvaluator.compileToClass(ClassBodyEvaluator.java:315)
    at org.codehaus.janino.ClassBodyEvaluator.cook(ClassBodyEvaluator.java:233)
    at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:192)
    at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:84)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:883)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:941)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:938)
    at org.spark_project.guava.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
    at org.spark_project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
    at org.spark_project.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
    at org.spark_project.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
    at org.spark_project.guava.cache.LocalCache.get(LocalCache.java:4000)
    at org.spark_project.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
    at org.spark_project.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.compile(CodeGenerator.scala:837)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:146)
    at org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:43)
    at org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:821)
    at org.apache.spark.sql.catalyst.expressions.codegen.LazilyGeneratedOrdering.<init>(GenerateOrdering.scala:160)
    at org.apache.spark.sql.catalyst.expressions.codegen.LazilyGeneratedOrdering.<init>(GenerateOrdering.scala:157)
    at org.apache.spark.sql.execution.TakeOrderedAndProjectExec.executeCollect(limit.scala:127)
    at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2183)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2532)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2182)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2189)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1925)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:1924)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2562)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:1924)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2139)
    at sun.reflect.GeneratedMethodAccessor322.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:216)
    at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:129)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:341)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:176)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:139)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)


I use Apache Zeppelin 0.6.2 with Apache Spark 2.0. The errors appear at
random: when I fire the exact same Spark SQL statement a second time (only a
few seconds after the first try), it works like a charm. Roughly every fourth
statement fails. Do you have any idea about this?
I only query something like:

SELECT columnA, COUNT(*) FROM temp_table GROUP BY columnA


Best regards
Florian

Re: Aw: Re: Two different errors while executing Spark SQL queries against cached temp tables

Posted by josephpconley <jo...@gmail.com>.
I recently ran into this issue running 0.7.1. Specifically, I had cached a
dataframe before registering it as a temp table. When I removed the caching,
the error stopped.
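
For anyone hitting the same thing, the workaround above amounts to registering
the DataFrame as a temp view without calling cache() first. A minimal Scala
sketch (df, the view name, and the spark session variable are placeholders,
not names from the original notebook):

// Problematic pattern: caching before registering the temp view
//   df.cache()
//   df.createOrReplaceTempView("temp_table")

// Workaround: register the view without caching. Spark then re-reads the
// source on each query instead of going through the cached column blocks,
// which is where the codec/codegen errors above were triggered.
df.createOrReplaceTempView("temp_table")
spark.sql("SELECT columnA, COUNT(*) FROM temp_table GROUP BY columnA").show()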



--
View this message in context: http://apache-zeppelin-users-incubating-mailing-list.75479.x6.nabble.com/Two-different-errors-while-executing-Spark-SQL-queries-against-cached-temp-tables-tp4517p6038.html
Sent from the Apache Zeppelin Users (incubating) mailing list archive at Nabble.com.