Posted to users@zeppelin.apache.org by michele crudele <mi...@gmail.com> on 2015/10/17 00:18:04 UTC

java.lang.reflect.InvocationTargetException in a %sql paragraph

I've created a supersimple notebook:

-- 1 --
%dep

z.reset
z.load("com.databricks:spark-csv_2.10:1.2.0")

--2--
%spark

val smsFile = "/home/barabba/data/SMSSpamCollection.csv"
sqlContext.load("com.databricks.spark.csv", Map("path" -> smsFile, "header"
-> "true", "delimiter" -> "|")).registerTempTable("sms")

--3--
%sql
select * from sms

Paragraphs 1 and 2 run fine, while 3 only displays
java.lang.reflect.InvocationTargetException. I cannot find anything useful
in the logs.
If I add the paragraph

--4--
%spark
sqlContext.sql("select * from sms").show

it works correctly, showing the top rows of the table.
+--+----+--------------------+
|id|type|                text|
+--+----+--------------------+
| 0| ham|Go until jurong p...|
| 1| ham|Ok lar... Joking ...|
| 2|spam|Free entry in 2 a...|
| 3| ham|U dun say so earl...|
| 4| ham|Nah I don't think...|
| 5|spam|FreeMsg Hey there...|
| 6| ham|Even my brother i...|
| 7| ham|As per your reque...|
...
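
Since the same query works from a %spark paragraph, one possible way to still get a table rendering without going through %sql might be Zeppelin's %table display output. An untested sketch, assuming tab-separated values and the three columns shown above:

--5--
%spark
// print in Zeppelin's %table format so the paragraph renders the result as a table;
// take() keeps the output small
val rows = sqlContext.sql("select * from sms").take(20)
print("%table id\ttype\ttext\n")
rows.foreach(r => println(r.mkString("\t")))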

I'm using zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.


Any idea of what's going on? Thanks
- michele

Re: java.lang.reflect.InvocationTargetException in a %sql paragraph

Posted by michele crudele <mi...@gmail.com>.
Any news on this problem? I did some more research and found a similar
issue logged in the Zeppelin issue tracker:
https://issues.apache.org/jira/browse/ZEPPELIN-194

The exception is not exactly the same, but the difference may come down to
the com.databricks:spark-csv_2.10 version: that issue uses 1.1.0, while I'm
using 1.2.0. Thanks in advance.
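
If the version really matters, one quick check could be pinning the spark-csv version mentioned in ZEPPELIN-194, using the same %dep call as in my notebook (just a sketch, not tried yet):

%dep
z.reset
// load the 1.1.0 artifact from ZEPPELIN-194 instead of 1.2.0, then re-run paragraphs 2 and 3
z.load("com.databricks:spark-csv_2.10:1.1.0")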

- michele


On Mon, Oct 19, 2015 at 4:18 PM, michele crudele <mi...@gmail.com>
wrote:

> Thanks, I just did it. Some progresses, but not yet there.
> Now in the paragraph:
>
> --3--
> %sql
> select * from sms
>
> I'm getting the following exception. I googled other people getting
> similar exception but did not find root cause. Any idea of what's going on?
> Note that my env is very simple, I just built zeppelin as per your
> suggestion, started it, and tried my supersimple notebook.
>
> java.lang.ClassNotFoundException:
> com.databricks.spark.csv.CsvRelation$$anonfun$tokenRdd$1$$anonfun$1 at
> java.net.URLClassLoader$1.run(URLClassLoader.java:366) at
> java.net.URLClassLoader$1.run(URLClassLoader.java:355) at
> java.security.AccessController.doPrivileged(Native Method) at
> java.net.URLClassLoader.findClass(URLClassLoader.java:354) at
> java.lang.ClassLoader.loadClass(ClassLoader.java:425) at
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) at
> java.lang.ClassLoader.loadClass(ClassLoader.java:358) at
> java.lang.Class.forName0(Native Method) at
> java.lang.Class.forName(Class.java:274) at
> org.apache.spark.util.InnerClosureFinder$$anon$4.visitMethodInsn(ClosureCleaner.scala:455)
> at
> com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.accept(Unknown
> Source) at
> com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.accept(Unknown
> Source) at
> org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:101)
> at
> org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:197)
> at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132) at
> org.apache.spark.SparkContext.clean(SparkContext.scala:1893) at
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:683) at
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:682) at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:286) at
> org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:682) at
> com.databricks.spark.csv.CsvRelation.tokenRdd(CsvRelation.scala:90) at
> com.databricks.spark.csv.CsvRelation.buildScan(CsvRelation.scala:105) at
> org.apache.spark.sql.sources.DataSourceStrategy$.apply(DataSourceStrategy.scala:101)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
> at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at
> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
> at
> org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:314)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
> at
> org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
> at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371) at
> org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
> at
> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:943)
> at
> org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:941)
> at
> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:947)
> at
> org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:947)
> at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1269) at
> org.apache.spark.sql.DataFrame.head(DataFrame.scala:1203) at
> org.apache.spark.sql.DataFrame.take(DataFrame.scala:1262) at
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606) at
> org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:300)
> at
> org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:142)
> at
> org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
> at
> org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
> at
> org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
> at org.apache.zeppelin.scheduler.Job.run(Job.java:170) at
> org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
> at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>
>
> On Mon, Oct 19, 2015 at 1:59 PM, moon soo Lee <mo...@apache.org> wrote:
>
>> You'll need to build your self.
>> Please let me know if you have any problem on building Zeppelin.
>>
>> Thanks,
>> moon
>>
>>
>> On Mon, Oct 19, 2015 at 8:32 PM michele crudele <mi...@gmail.com>
>> wrote:
>>
>>> Thanks moon,
>>>
>>> is there a repo where I can download the 0.6.0-SNAPHOT or do I have to
>>> build zeppelin myself ? Thanks again for your help.
>>>
>>> - michele
>>>
>>>
>>> On Mon, Oct 19, 2015 at 7:41 AM, moon soo Lee <mo...@apache.org> wrote:
>>>
>>>> Hi Michele,
>>>>
>>>> Thanks for sharing the problem.
>>>> I have tested your code on both 0.5.0 and 0.6.0-SNAPSHOT.
>>>> I have the same problem on 0.5.0 but 0.6.0-SNAPSHOT runs smoothly.
>>>>
>>>> So, you can try 0.6.0-SNAPSHOT until next release is out.
>>>> Or if you want to see what's going on with 0.5.0, you'll need to apply
>>>> this commit
>>>> https://github.com/apache/incubator-zeppelin/commit/d0a30435414726e7fa6d8b8e106e4b6ddb46da67 to
>>>> see exception in your notebook.
>>>>
>>>> Best,
>>>> moon
>>>>
>>>> On Sat, Oct 17, 2015 at 7:18 AM michele crudele <mi...@gmail.com>
>>>> wrote:
>>>>
>>>>> I've created a supersimple notebook:
>>>>>
>>>>> -- 1 --
>>>>> %dep
>>>>>
>>>>> z.reset
>>>>> z.load("com.databricks:spark-csv_2.10:1.2.0")
>>>>>
>>>>> --2--
>>>>> %spark
>>>>>
>>>>> val smsFile = "/home/barabba/data/SMSSpamCollection.csv"
>>>>> sqlContext.load("com.databricks.spark.csv", Map("path" -> smsFile,
>>>>> "header" -> "true", "delimiter" -> "|")).registerTempTable("sms")
>>>>>
>>>>> --3--
>>>>> %sql
>>>>> select * from sms
>>>>>
>>>>> 1 and 2 runs fine, while 3 displays the
>>>>> java.lang.reflect.InvocationTargetException. I cannot find anything useful
>>>>> in the logs.
>>>>> If I add the paragraph
>>>>>
>>>>> --4--
>>>>> %spark
>>>>> sqlContext.sql("select * from sms").show
>>>>>
>>>>> it works correctly, showing the top rows of the table.
>>>>> +--+----+--------------------+
>>>>> |id|type|                text|
>>>>> +--+----+--------------------+
>>>>> | 0| ham|Go until jurong p...|
>>>>> | 1| ham|Ok lar... Joking ...|
>>>>> | 2|spam|Free entry in 2 a...|
>>>>> | 3| ham|U dun say so earl...|
>>>>> | 4| ham|Nah I don't think...|
>>>>> | 5|spam|FreeMsg Hey there...|
>>>>> | 6| ham|Even my brother i...|
>>>>> | 7| ham|As per your reque...|
>>>>> ...
>>>>>
>>>>> I'm using zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.
>>>>>
>>>>>
>>>>> Any idea of what's going on? Thanks
>>>>> - michele
>>>>>
>>>>>
>>>
>

Re: java.lang.reflect.InvocationTargetException in a %sql paragraph

Posted by michele crudele <mi...@gmail.com>.
Thanks, I just did it. Some progress, but not there yet.
Now in the paragraph:

--3--
%sql
select * from sms

I'm getting the following exception. I found other people reporting a
similar exception, but no root cause. Any idea of what's going on?
Note that my environment is very simple: I just built Zeppelin as per your
suggestion, started it, and ran my supersimple notebook.

java.lang.ClassNotFoundException: com.databricks.spark.csv.CsvRelation$$anonfun$tokenRdd$1$$anonfun$1
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:274)
    at org.apache.spark.util.InnerClosureFinder$$anon$4.visitMethodInsn(ClosureCleaner.scala:455)
    at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.accept(Unknown Source)
    at com.esotericsoftware.reflectasm.shaded.org.objectweb.asm.ClassReader.accept(Unknown Source)
    at org.apache.spark.util.ClosureCleaner$.getInnerClosureClasses(ClosureCleaner.scala:101)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:197)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:132)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1893)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:683)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1.apply(RDD.scala:682)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:286)
    at org.apache.spark.rdd.RDD.mapPartitions(RDD.scala:682)
    at com.databricks.spark.csv.CsvRelation.tokenRdd(CsvRelation.scala:90)
    at com.databricks.spark.csv.CsvRelation.buildScan(CsvRelation.scala:105)
    at org.apache.spark.sql.sources.DataSourceStrategy$.apply(DataSourceStrategy.scala:101)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.planLater(QueryPlanner.scala:54)
    at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:314)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:58)
    at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:59)
    at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan$lzycompute(SQLContext.scala:943)
    at org.apache.spark.sql.SQLContext$QueryExecution.sparkPlan(SQLContext.scala:941)
    at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan$lzycompute(SQLContext.scala:947)
    at org.apache.spark.sql.SQLContext$QueryExecution.executedPlan(SQLContext.scala:947)
    at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1269)
    at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1203)
    at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1262)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.zeppelin.spark.ZeppelinContext.showDF(ZeppelinContext.java:300)
    at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:142)
    at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
    at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
    at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
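
For what it's worth, the class it cannot find is an anonymous closure inside spark-csv's CsvRelation, and the lookup happens in Spark's ClosureCleaner through the thread context classloader, so my guess is that the %dep jar is visible to the Scala REPL but not to the context classloader of the thread running the %sql paragraph. A sketch of the kind of check I mean, runnable from %spark with plain JVM classloading (CsvRelation itself resolves fine there, since paragraph 2 works):

%spark
// the first lookup goes through the REPL/interpreter classloader,
// the second through the thread context classloader that ClosureCleaner uses
println(Class.forName("com.databricks.spark.csv.CsvRelation"))
println(Class.forName("com.databricks.spark.csv.CsvRelation",
  false, Thread.currentThread.getContextClassLoader))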


On Mon, Oct 19, 2015 at 1:59 PM, moon soo Lee <mo...@apache.org> wrote:

> You'll need to build your self.
> Please let me know if you have any problem on building Zeppelin.
>
> Thanks,
> moon
>
>
> On Mon, Oct 19, 2015 at 8:32 PM michele crudele <mi...@gmail.com>
> wrote:
>
>> Thanks moon,
>>
>> is there a repo where I can download the 0.6.0-SNAPHOT or do I have to
>> build zeppelin myself ? Thanks again for your help.
>>
>> - michele
>>
>>
>> On Mon, Oct 19, 2015 at 7:41 AM, moon soo Lee <mo...@apache.org> wrote:
>>
>>> Hi Michele,
>>>
>>> Thanks for sharing the problem.
>>> I have tested your code on both 0.5.0 and 0.6.0-SNAPSHOT.
>>> I have the same problem on 0.5.0 but 0.6.0-SNAPSHOT runs smoothly.
>>>
>>> So, you can try 0.6.0-SNAPSHOT until next release is out.
>>> Or if you want to see what's going on with 0.5.0, you'll need to apply
>>> this commit
>>> https://github.com/apache/incubator-zeppelin/commit/d0a30435414726e7fa6d8b8e106e4b6ddb46da67 to
>>> see exception in your notebook.
>>>
>>> Best,
>>> moon
>>>
>>> On Sat, Oct 17, 2015 at 7:18 AM michele crudele <mi...@gmail.com>
>>> wrote:
>>>
>>>> I've created a supersimple notebook:
>>>>
>>>> -- 1 --
>>>> %dep
>>>>
>>>> z.reset
>>>> z.load("com.databricks:spark-csv_2.10:1.2.0")
>>>>
>>>> --2--
>>>> %spark
>>>>
>>>> val smsFile = "/home/barabba/data/SMSSpamCollection.csv"
>>>> sqlContext.load("com.databricks.spark.csv", Map("path" -> smsFile,
>>>> "header" -> "true", "delimiter" -> "|")).registerTempTable("sms")
>>>>
>>>> --3--
>>>> %sql
>>>> select * from sms
>>>>
>>>> 1 and 2 runs fine, while 3 displays the
>>>> java.lang.reflect.InvocationTargetException. I cannot find anything useful
>>>> in the logs.
>>>> If I add the paragraph
>>>>
>>>> --4--
>>>> %spark
>>>> sqlContext.sql("select * from sms").show
>>>>
>>>> it works correctly, showing the top rows of the table.
>>>> +--+----+--------------------+
>>>> |id|type|                text|
>>>> +--+----+--------------------+
>>>> | 0| ham|Go until jurong p...|
>>>> | 1| ham|Ok lar... Joking ...|
>>>> | 2|spam|Free entry in 2 a...|
>>>> | 3| ham|U dun say so earl...|
>>>> | 4| ham|Nah I don't think...|
>>>> | 5|spam|FreeMsg Hey there...|
>>>> | 6| ham|Even my brother i...|
>>>> | 7| ham|As per your reque...|
>>>> ...
>>>>
>>>> I'm using zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.
>>>>
>>>>
>>>> Any idea of what's going on? Thanks
>>>> - michele
>>>>
>>>>
>>

Re: java.lang.reflect.InvocationTargetException in a %sql paragraph

Posted by moon soo Lee <mo...@apache.org>.
You'll need to build it yourself.
Please let me know if you have any problems building Zeppelin.
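
Roughly, the steps are (a sketch; the profile names are from memory, so check the README and match them to your Spark/Hadoop versions):

# clone the source and build the distribution with the matching profiles
git clone https://github.com/apache/incubator-zeppelin.git
cd incubator-zeppelin
mvn clean package -DskipTests -Pspark-1.3 -Phadoop-2.3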

Thanks,
moon

On Mon, Oct 19, 2015 at 8:32 PM michele crudele <mi...@gmail.com>
wrote:

> Thanks moon,
>
> is there a repo where I can download the 0.6.0-SNAPHOT or do I have to
> build zeppelin myself ? Thanks again for your help.
>
> - michele
>
>
> On Mon, Oct 19, 2015 at 7:41 AM, moon soo Lee <mo...@apache.org> wrote:
>
>> Hi Michele,
>>
>> Thanks for sharing the problem.
>> I have tested your code on both 0.5.0 and 0.6.0-SNAPSHOT.
>> I have the same problem on 0.5.0 but 0.6.0-SNAPSHOT runs smoothly.
>>
>> So, you can try 0.6.0-SNAPSHOT until next release is out.
>> Or if you want to see what's going on with 0.5.0, you'll need to apply
>> this commit
>> https://github.com/apache/incubator-zeppelin/commit/d0a30435414726e7fa6d8b8e106e4b6ddb46da67 to
>> see exception in your notebook.
>>
>> Best,
>> moon
>>
>> On Sat, Oct 17, 2015 at 7:18 AM michele crudele <mi...@gmail.com>
>> wrote:
>>
>>> I've created a supersimple notebook:
>>>
>>> -- 1 --
>>> %dep
>>>
>>> z.reset
>>> z.load("com.databricks:spark-csv_2.10:1.2.0")
>>>
>>> --2--
>>> %spark
>>>
>>> val smsFile = "/home/barabba/data/SMSSpamCollection.csv"
>>> sqlContext.load("com.databricks.spark.csv", Map("path" -> smsFile,
>>> "header" -> "true", "delimiter" -> "|")).registerTempTable("sms")
>>>
>>> --3--
>>> %sql
>>> select * from sms
>>>
>>> 1 and 2 runs fine, while 3 displays the
>>> java.lang.reflect.InvocationTargetException. I cannot find anything useful
>>> in the logs.
>>> If I add the paragraph
>>>
>>> --4--
>>> %spark
>>> sqlContext.sql("select * from sms").show
>>>
>>> it works correctly, showing the top rows of the table.
>>> +--+----+--------------------+
>>> |id|type|                text|
>>> +--+----+--------------------+
>>> | 0| ham|Go until jurong p...|
>>> | 1| ham|Ok lar... Joking ...|
>>> | 2|spam|Free entry in 2 a...|
>>> | 3| ham|U dun say so earl...|
>>> | 4| ham|Nah I don't think...|
>>> | 5|spam|FreeMsg Hey there...|
>>> | 6| ham|Even my brother i...|
>>> | 7| ham|As per your reque...|
>>> ...
>>>
>>> I'm using zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.
>>>
>>>
>>> Any idea of what's going on? Thanks
>>> - michele
>>>
>>>
>

Re: java.lang.reflect.InvocationTargetException in a %sql paragraph

Posted by michele crudele <mi...@gmail.com>.
Thanks moon,

is there a repo where I can download the 0.6.0-SNAPSHOT, or do I have to
build Zeppelin myself? Thanks again for your help.

- michele


On Mon, Oct 19, 2015 at 7:41 AM, moon soo Lee <mo...@apache.org> wrote:

> Hi Michele,
>
> Thanks for sharing the problem.
> I have tested your code on both 0.5.0 and 0.6.0-SNAPSHOT.
> I have the same problem on 0.5.0 but 0.6.0-SNAPSHOT runs smoothly.
>
> So, you can try 0.6.0-SNAPSHOT until next release is out.
> Or if you want to see what's going on with 0.5.0, you'll need to apply
> this commit
> https://github.com/apache/incubator-zeppelin/commit/d0a30435414726e7fa6d8b8e106e4b6ddb46da67 to
> see exception in your notebook.
>
> Best,
> moon
>
> On Sat, Oct 17, 2015 at 7:18 AM michele crudele <mi...@gmail.com>
> wrote:
>
>> I've created a supersimple notebook:
>>
>> -- 1 --
>> %dep
>>
>> z.reset
>> z.load("com.databricks:spark-csv_2.10:1.2.0")
>>
>> --2--
>> %spark
>>
>> val smsFile = "/home/barabba/data/SMSSpamCollection.csv"
>> sqlContext.load("com.databricks.spark.csv", Map("path" -> smsFile,
>> "header" -> "true", "delimiter" -> "|")).registerTempTable("sms")
>>
>> --3--
>> %sql
>> select * from sms
>>
>> 1 and 2 runs fine, while 3 displays the
>> java.lang.reflect.InvocationTargetException. I cannot find anything useful
>> in the logs.
>> If I add the paragraph
>>
>> --4--
>> %spark
>> sqlContext.sql("select * from sms").show
>>
>> it works correctly, showing the top rows of the table.
>> +--+----+--------------------+
>> |id|type|                text|
>> +--+----+--------------------+
>> | 0| ham|Go until jurong p...|
>> | 1| ham|Ok lar... Joking ...|
>> | 2|spam|Free entry in 2 a...|
>> | 3| ham|U dun say so earl...|
>> | 4| ham|Nah I don't think...|
>> | 5|spam|FreeMsg Hey there...|
>> | 6| ham|Even my brother i...|
>> | 7| ham|As per your reque...|
>> ...
>>
>> I'm using zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.
>>
>>
>> Any idea of what's going on? Thanks
>> - michele
>>
>>

Re: java.lang.reflect.InvocationTargetException in a %sql paragraph

Posted by moon soo Lee <mo...@apache.org>.
Hi Michele,

Thanks for sharing the problem.
I have tested your code on both 0.5.0 and 0.6.0-SNAPSHOT.
I have the same problem on 0.5.0 but 0.6.0-SNAPSHOT runs smoothly.

So, you can try 0.6.0-SNAPSHOT until the next release is out.
Or, if you want to see what's going on with 0.5.0, you'll need to apply this commit
https://github.com/apache/incubator-zeppelin/commit/d0a30435414726e7fa6d8b8e106e4b6ddb46da67
to see the exception in your notebook.
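
Applying it on top of the 0.5.0 sources would look roughly like this (a sketch; the tag name is a guess, and the cherry-pick may need manual conflict resolution):

# fetch the sources, check out the 0.5.0 release, and pull in the single commit
git clone https://github.com/apache/incubator-zeppelin.git
cd incubator-zeppelin
git checkout v0.5.0-incubating   # hypothetical tag name for the 0.5.0 release
git cherry-pick d0a30435414726e7fa6d8b8e106e4b6ddb46da67
mvn clean package -DskipTests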

Best,
moon

On Sat, Oct 17, 2015 at 7:18 AM michele crudele <mi...@gmail.com>
wrote:

> I've created a supersimple notebook:
>
> -- 1 --
> %dep
>
> z.reset
> z.load("com.databricks:spark-csv_2.10:1.2.0")
>
> --2--
> %spark
>
> val smsFile = "/home/barabba/data/SMSSpamCollection.csv"
> sqlContext.load("com.databricks.spark.csv", Map("path" -> smsFile,
> "header" -> "true", "delimiter" -> "|")).registerTempTable("sms")
>
> --3--
> %sql
> select * from sms
>
> 1 and 2 runs fine, while 3 displays the
> java.lang.reflect.InvocationTargetException. I cannot find anything useful
> in the logs.
> If I add the paragraph
>
> --4--
> %spark
> sqlContext.sql("select * from sms").show
>
> it works correctly, showing the top rows of the table.
> +--+----+--------------------+
> |id|type|                text|
> +--+----+--------------------+
> | 0| ham|Go until jurong p...|
> | 1| ham|Ok lar... Joking ...|
> | 2|spam|Free entry in 2 a...|
> | 3| ham|U dun say so earl...|
> | 4| ham|Nah I don't think...|
> | 5|spam|FreeMsg Hey there...|
> | 6| ham|Even my brother i...|
> | 7| ham|As per your reque...|
> ...
>
> I'm using zeppelin-0.5.0-incubating-bin-spark-1.3.1_hadoop-2.3.
>
>
> Any idea of what's going on? Thanks
> - michele
>
>