Posted to dev@zeppelin.apache.org by "Brian Lockwood (JIRA)" <ji...@apache.org> on 2015/07/09 17:32:06 UTC
[jira] [Created] (ZEPPELIN-162) java.lang.NoSuchMethodError:
org.json4s.JsonDSL$.string2jvalue querying hive table
Brian Lockwood created ZEPPELIN-162:
---------------------------------------
Summary: java.lang.NoSuchMethodError: org.json4s.JsonDSL$.string2jvalue querying hive table
Key: ZEPPELIN-162
URL: https://issues.apache.org/jira/browse/ZEPPELIN-162
Project: Zeppelin
Issue Type: Bug
Environment: spark-1.4.0
Hive 0.13.0.2.1.5.0-695
Reporter: Brian Lockwood
When using a notebook to query a Hive table, e.g.:
%sql
select * from prod.location_updates limit 10
the query errors out on the backend with the exception below, and the paragraph displays nothing.
INFO [2015-07-09 15:15:13,618] ({WebSocketWorker-11} NotebookServer.java[onMessage]:100) - RECEIVE << RUN_PARAGRAPH
INFO [2015-07-09 15:15:13,622] ({WebSocketWorker-11} NotebookServer.java[broadcast]:251) - SEND >> NOTE
INFO [2015-07-09 15:15:13,623] ({WebSocketWorker-11} NotebookServer.java[broadcast]:251) - SEND >> NOTE
INFO [2015-07-09 15:15:13,628] ({pool-1-thread-10} SchedulerFactory.java[jobStarted]:132) - Job paragraph_1435020063738_-1659528375 started by scheduler remoteinterpreter_1855267388
INFO [2015-07-09 15:15:13,630] ({pool-1-thread-10} Paragraph.java[jobRun]:194) - run paragraph 20150623-004103_894658927 using sql org.apache.zeppelin.interpreter.LazyOpenInterpreter@663b1f0b
INFO [2015-07-09 15:15:13,630] ({pool-1-thread-10} Paragraph.java[jobRun]:211) - RUN : select * from prod.location_updates limit 10
==> zeppelin-interpreter-spark-master-02.log <==
INFO [2015-07-09 15:15:13,642] ({pool-2-thread-19} SchedulerFactory.java[jobStarted]:132) - Job remoteInterpretJob_1436454913641 started by scheduler org.apache.zeppelin.spark.SparkInterpreter1680144510
==> zeppelin-master-02.log <==
INFO [2015-07-09 15:15:13,735] ({Thread-169} NotebookServer.java[broadcast]:251) - SEND >> NOTE
==> zeppelin-interpreter-spark-master-02.log <==
INFO [2015-07-09 15:15:13,794] ({pool-2-thread-19} ParseDriver.java[parse]:185) - Parsing command: select * from prod.location_updates limit 10
INFO [2015-07-09 15:15:13,795] ({pool-2-thread-19} ParseDriver.java[parse]:206) - Parse Completed
ERROR [2015-07-09 15:15:13,912] ({pool-2-thread-19} Job.java[run]:183) - Job failed
java.lang.NoSuchMethodError: org.json4s.JsonDSL$.string2jvalue(Ljava/lang/String;)Lorg/json4s/JsonAST$JValue;
at org.apache.spark.sql.types.DataType.jsonValue(DataType.scala:60)
at org.apache.spark.sql.types.StructField.jsonValue(StructField.scala:50)
at org.apache.spark.sql.types.StructType$$anonfun$jsonValue$2.apply(StructType.scala:161)
at org.apache.spark.sql.types.StructType$$anonfun$jsonValue$2.apply(StructType.scala:161)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at org.apache.spark.sql.types.StructType.foreach(StructType.scala:94)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at org.apache.spark.sql.types.StructType.map(StructType.scala:94)
at org.apache.spark.sql.types.StructType.jsonValue(StructType.scala:161)
at org.apache.spark.sql.types.StructType.jsonValue(StructType.scala:94)
at org.apache.spark.sql.types.DataType.json(DataType.scala:63)
at org.apache.spark.sql.hive.HiveMetastoreCatalog.org$apache$spark$sql$hive$HiveMetastoreCatalog$$convertToParquetRelation(HiveMetastoreCatalog.scala:260)
at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$$anonfun$1.applyOrElse(HiveMetastoreCatalog.scala:406)
at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$$anonfun$1.applyOrElse(HiveMetastoreCatalog.scala:378)
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:218)
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:214)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collect$1.apply(TreeNode.scala:129)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collect$1.apply(TreeNode.scala:129)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:88)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreach$1.apply(TreeNode.scala:89)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreach$1.apply(TreeNode.scala:89)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:89)
at org.apache.spark.sql.catalyst.trees.TreeNode.collect(TreeNode.scala:129)
at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$.apply(HiveMetastoreCatalog.scala:378)
at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$.apply(HiveMetastoreCatalog.scala:371)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:61)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:59)
at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
at scala.collection.immutable.List.foldLeft(List.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:59)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:51)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:51)
at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:922)
at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:922)
at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:920)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:744)
at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:132)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:277)
at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
INFO [2015-07-09 15:15:13,913] ({pool-2-thread-19} SchedulerFactory.java[jobFinished]:138) - Job remoteInterpretJob_1436454913641 finished by scheduler org.apache.zeppelin.spark.SparkInterpreter1680144510
ERROR [2015-07-09 15:15:13,913] ({pool-1-thread-4} ProcessFunction.java[process]:41) - Internal error processing interpret
org.apache.thrift.TException: java.lang.NoSuchMethodError: org.json4s.JsonDSL$.string2jvalue(Ljava/lang/String;)Lorg/json4s/JsonAST$JValue;
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer.interpret(RemoteInterpreterServer.java:214)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$interpret.getResult(RemoteInterpreterService.java:898)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Processor$interpret.getResult(RemoteInterpreterService.java:883)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoSuchMethodError: org.json4s.JsonDSL$.string2jvalue(Ljava/lang/String;)Lorg/json4s/JsonAST$JValue;
at org.apache.spark.sql.types.DataType.jsonValue(DataType.scala:60)
at org.apache.spark.sql.types.StructField.jsonValue(StructField.scala:50)
at org.apache.spark.sql.types.StructType$$anonfun$jsonValue$2.apply(StructType.scala:161)
at org.apache.spark.sql.types.StructType$$anonfun$jsonValue$2.apply(StructType.scala:161)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at org.apache.spark.sql.types.StructType.foreach(StructType.scala:94)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at org.apache.spark.sql.types.StructType.map(StructType.scala:94)
at org.apache.spark.sql.types.StructType.jsonValue(StructType.scala:161)
at org.apache.spark.sql.types.StructType.jsonValue(StructType.scala:94)
at org.apache.spark.sql.types.DataType.json(DataType.scala:63)
at org.apache.spark.sql.hive.HiveMetastoreCatalog.org$apache$spark$sql$hive$HiveMetastoreCatalog$$convertToParquetRelation(HiveMetastoreCatalog.scala:260)
at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$$anonfun$1.applyOrElse(HiveMetastoreCatalog.scala:406)
at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$$anonfun$1.applyOrElse(HiveMetastoreCatalog.scala:378)
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:218)
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:214)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collect$1.apply(TreeNode.scala:129)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$collect$1.apply(TreeNode.scala:129)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:88)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreach$1.apply(TreeNode.scala:89)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$foreach$1.apply(TreeNode.scala:89)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.trees.TreeNode.foreach(TreeNode.scala:89)
at org.apache.spark.sql.catalyst.trees.TreeNode.collect(TreeNode.scala:129)
at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$.apply(HiveMetastoreCatalog.scala:378)
at org.apache.spark.sql.hive.HiveMetastoreCatalog$ParquetConversions$.apply(HiveMetastoreCatalog.scala:371)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:61)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:59)
at scala.collection.LinearSeqOptimized$class.foldLeft(LinearSeqOptimized.scala:111)
at scala.collection.immutable.List.foldLeft(List.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:59)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:51)
at scala.collection.immutable.List.foreach(List.scala:318)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:51)
at org.apache.spark.sql.SQLContext$QueryExecution.analyzed$lzycompute(SQLContext.scala:922)
at org.apache.spark.sql.SQLContext$QueryExecution.analyzed(SQLContext.scala:922)
at org.apache.spark.sql.SQLContext$QueryExecution.assertAnalyzed(SQLContext.scala:920)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:131)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:744)
at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:132)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:277)
at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
... 3 more
==> zeppelin-master-02.log <==
ERROR [2015-07-09 15:15:13,914] ({pool-1-thread-10} Job.java[run]:183) - Job failed
org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.TApplicationException: Internal error processing interpret
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:221)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:212)
at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:296)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.TApplicationException: Internal error processing interpret
at org.apache.thrift.TApplicationException.read(TApplicationException.java:108)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_interpret(RemoteInterpreterService.java:190)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.interpret(RemoteInterpreterService.java:175)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.interpret(RemoteInterpreter.java:204)
... 11 more
INFO [2015-07-09 15:15:13,915] ({Thread-169} NotebookServer.java[afterStatusChange]:571) - Job 20150623-004103_894658927 is finished
INFO [2015-07-09 15:15:13,918] ({Thread-169} NotebookServer.java[broadcast]:251) - SEND >> NOTE
INFO [2015-07-09 15:15:13,919] ({pool-1-thread-10} SchedulerFactory.java[jobFinished]:138) - Job paragraph_1435020063738_-1659528375 finished by scheduler remoteinterpreter_1855267388
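A NoSuchMethodError on org.json4s.JsonDSL$.string2jvalue typically indicates two incompatible json4s versions on the classpath (one compiled against, a different one loaded at runtime). As a first diagnostic, one can list every json4s jar visible to Zeppelin and Spark and compare versions. This is only a sketch: the install paths are assumptions, with environment-variable fallbacks.

```shell
# Hypothetical diagnostic: list json4s jars on the Zeppelin and Spark
# classpaths to spot a version mismatch (e.g. 3.2.10 vs 3.2.11).
# ZEPPELIN_HOME/SPARK_HOME and the /opt fallbacks are assumptions;
# adjust to your installation.
find "${ZEPPELIN_HOME:-/opt/zeppelin}" "${SPARK_HOME:-/opt/spark}" \
  -name 'json4s-*.jar' 2>/dev/null | sort
```

If the listed jars carry different json4s versions, rebuilding Zeppelin against the same Spark version that is deployed (so both resolve the same json4s artifact) would be the usual direction to investigate.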
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)