Posted to users@zeppelin.apache.org by Armand Naude <ar...@gmail.com> on 2015/10/16 16:37:52 UTC

Zeppelin spark RDD commands fail yet work in spark-shell

I have set up a standalone single-node "cluster" running the following:

   - Cassandra 2.2.2
   - Spark 1.5.1
   - Compiled fat jar for Spark-Cassandra-Connector 1.5.0-M2
   - Zeppelin 0.6 snapshot, compiled with: mvn -Pspark-1.5 -Dspark.version=1.5.1 -Dhadoop.version=2.6.0 -Phadoop-2.4 -DskipTests clean package

I can work perfectly fine with the Spark shell, retrieving data from Cassandra.

I have altered zeppelin-env.sh as follows:

export MASTER=spark://localhost:7077
export SPARK_HOME=/root/spark-1.5.1-bin-hadoop2.6/
export ZEPPELIN_PORT=8880
export ZEPPELIN_JAVA_OPTS="-Dspark.jars=/opt/sparkconnector/spark-cassandra-connector-assembly-1.5.0-M2-SNAPSHOT.jar -Dspark.cassandra.connection.host=localhost"
export ZEPPELIN_NOTEBOOK_DIR="/root/gowalla-spark-demo/notebooks/zeppelin"
export SPARK_SUBMIT_OPTIONS="--jars /opt/sparkconnector/spark-cassandra-connector-assembly-1.5.0-M2-SNAPSHOT.jar --deploy-mode cluster"
export ZEPPELIN_INTP_JAVA_OPTS=$ZEPPELIN_JAVA_OPTS
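
For what it's worth, a minimal way to sanity-check that these settings actually reach the
Spark interpreter is to print them back from a notebook paragraph. This is only a sketch;
the property names are the standard Spark and connector ones, nothing Zeppelin-specific:

// Quick sanity check from a notebook paragraph: confirm the connector jar and the
// Cassandra host made it into the interpreter's SparkConf.
println(sc.getConf.getOption("spark.jars").getOrElse("spark.jars not set"))
println(sc.getConf.getOption("spark.cassandra.connection.host").getOrElse("spark.cassandra.connection.host not set"))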

I then start adding paragraphs to a notebook and import the following first:

import com.datastax.spark.connector._
import com.datastax.spark.connector.cql._
import com.datastax.spark.connector.rdd.CassandraRDD
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

Not sure if all of these are necessary. This paragraph runs fine.

Then I do the following:

val checkins = sc.cassandraTable("lbsn", "checkins")

This runs fine and returns:

checkins: com.datastax.spark.connector.rdd.CassandraTableScanRDD[com.datastax.spark.connector.CassandraRow] = CassandraTableScanRDD[0] at RDD at CassandraRDD.scala:15

Then in the next paragraph the following two statements are run; the first succeeds and the
second fails:

checkins.count
checkins.first

Result:

res13: Long = 138449
com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class org.apache.spark.rdd.RDDOperationScope)
at [Source: {"id":"4","name":"first"}; line: 1, column: 1]
at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
at com.fasterxml.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:843)
at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.addBeanProps(BeanDeserializerFactory.java:533)
at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.buildBeanDeserializer(BeanDeserializerFactory.java:220)
at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.createBeanDeserializer(BeanDeserializerFactory.java:143)
at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer2(DeserializerCache.java:409)
at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer(DeserializerCache.java:358)
at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:265)
at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCacheValueDeserializer(DeserializerCache.java:245)
at com.fasterxml.jackson.databind.deser.DeserializerCache.findValueDeserializer(DeserializerCache.java:143)
at com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:439)
at com.fasterxml.jackson.databind.ObjectMapper._findRootDeserializer(ObjectMapper.java:3666)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3558)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2578)
at org.apache.spark.rdd.RDDOperationScope$.fromJson(RDDOperationScope.scala:82)
at org.apache.spark.rdd.RDD$$anonfun$34.apply(RDD.scala:1582)
at org.apache.spark.rdd.RDD$$anonfun$34.apply(RDD.scala:1582)
at scala.Option.map(Option.scala:145)
at org.apache.spark.rdd.RDD.<init>(RDD.scala:1582)
at com.datastax.spark.connector.rdd.CassandraRDD.<init>(CassandraRDD.scala:15)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.<init>(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.copy(CassandraTableScanRDD.scala:92)
at com.datastax.spark.connector.rdd.CassandraTableScanRDD.copy(CassandraTableScanRDD.scala:59)
at com.datastax.spark.connector.rdd.CassandraRDD.limit(CassandraRDD.scala:103)
at com.datastax.spark.connector.rdd.CassandraRDD.take(CassandraRDD.scala:122)
at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1312)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.first(RDD.scala:1311)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:43)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:45)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:47)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:49)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:51)
at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:53)
at $iwC$$iwC$$iwC$$iwC.<init>(<console>:55)
at $iwC$$iwC$$iwC.<init>(<console>:57)
at $iwC$$iwC.<init>(<console>:59)
at $iwC.<init>(<console>:61)
at <init>(<console>:63)
at .<init>(<console>:67)
at .<clinit>(<console>)
at .<init>(<console>:7)
at .<clinit>(<console>)
at $print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1340)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
at org.apache.zeppelin.spark.SparkInterpreter.interpretInput(SparkInterpreter.java:655)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:620)
at org.apache.zeppelin.spark.SparkInterpreter.interpret(SparkInterpreter.java:613)
at org.apache.zeppelin.interpreter.ClassloaderInterpreter.interpret(ClassloaderInterpreter.java:57)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:93)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:276)
at org.apache.zeppelin.scheduler.Job.run(Job.java:170)
at org.apache.zeppelin.scheduler.FIFOScheduler$1.run(FIFOScheduler.java:118)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Why does the call to first fail? Calls such as sc.fromTextFile also fail.

The following also works:

checkins.where("year = 2010 and month=2 and day>12 and day<15").count()

But this does not:

checkins.where("year = 2010 and month=2 and day>12 and day<15").first()

Please assist, as this is driving me insane, especially since the Spark shell works but
Zeppelin does not, or at least seems partially broken.

Thanks
-- 
Armand Naudé
Software Engineer

Cell:     +27 082 552 0421

------------------------------------------------------------
To see a world in a grain of sand
And a heaven in a wild flower,
Hold infinity in the palm of your hand
And eternity in an hour.
                       -- William Blake

Re: Zeppelin spark RDD commands fail yet work in spark-shell

Posted by moon soo Lee <mo...@apache.org>.
"mvn dependency:tree" will show all transitive dependency that Zeppelin
uses. so, you can try

mvn -Pspark-1.5 -Dspark.version=1.5.1 -Dhadoop.version=2.6.0 -Phadoop-2.4
-DskipTests clean package dependency:tree

to see what the downloaded spark-1.5.1-hadoop2.6.

Best,
moon

Re: Zeppelin spark RDD commands fail yet work in spark-shell

Posted by Armand Naude <ar...@gmail.com>.
Thanks for responding, Jonathan.

I was inclined to think that it is a Jackson issue. I can see that the lib folder in the
Zeppelin distribution build has references to Jackson 2.5.3. How do I confirm what the
downloaded spark-1.5.1-hadoop2.6 uses?
Also, I have zero issues in the Spark shell, so how do I fix Zeppelin?
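
One check I could run myself (just a sketch, not verified) is to print the Jackson version
that each environment actually loads, once in spark-shell and once in a Zeppelin notebook
paragraph, and compare the two:

// The PackageVersion classes ship inside jackson-core and jackson-databind themselves.
println(com.fasterxml.jackson.core.json.PackageVersion.VERSION)
println(com.fasterxml.jackson.databind.cfg.PackageVersion.VERSION)
// And which jar the databind classes were actually loaded from:
println(classOf[com.fasterxml.jackson.databind.ObjectMapper].getProtectionDomain.getCodeSource.getLocation)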

Thank you

Re: Zeppelin spark RDD commands fail yet work in spark-shell

Posted by Jonathan Kelly <jo...@gmail.com>.
That exception looks like one that I see when using too new a version of Jackson. See
https://issues.apache.org/jira/browse/SPARK-8332. The short answer is that you probably
need to make sure you are using Jackson 2.4.x rather than 2.5.x.
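
That would also explain why count succeeds while first fails. Judging from the stack trace,
first goes through take and CassandraRDD.limit, which copies the RDD, and it is that new
RDD's constructor that deserializes the RDDOperationScope JSON with Jackson; count just runs
a job on the RDD you already have and never touches that path. Roughly (a sketch based only
on the trace above, not on the connector source):

checkins.count   // no new RDD is built on the driver, so the Jackson code path is never hit
checkins.first   // first -> take -> CassandraRDD.limit -> new RDD -> RDDOperationScope.fromJson -> error

Presumably the same applies to anything else that constructs a new RDD while an operation
scope is set, which would cover the text-file calls you mention as well.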

~ Jonathan
