Posted to issues@spark.apache.org by "Frens Jan Rumph (JIRA)" <ji...@apache.org> on 2014/09/19 11:26:34 UTC

[jira] [Commented] (SPARK-3602) Can't run cassandra_inputformat.py

    [ https://issues.apache.org/jira/browse/SPARK-3602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14140236#comment-14140236 ] 

Frens Jan Rumph commented on SPARK-3602:
----------------------------------------

When running this against the spark-1.1.0-bin-hadoop1 build I get the following output:

{noformat}
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/09/19 11:24:31 WARN Utils: Your hostname, laptop-xxxxx resolves to a loopback address: 127.0.0.1; using 192.168.2.2 instead (on interface wlan0)
14/09/19 11:24:31 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
14/09/19 11:24:31 INFO SecurityManager: Changing view acls to: frens-jan,
14/09/19 11:24:31 INFO SecurityManager: Changing modify acls to: frens-jan,
14/09/19 11:24:31 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(frens-jan, ); users with modify permissions: Set(frens-jan, )
14/09/19 11:24:31 INFO Slf4jLogger: Slf4jLogger started
14/09/19 11:24:31 INFO Remoting: Starting remoting
14/09/19 11:24:32 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@laptop-xxxxx.local:44417]
14/09/19 11:24:32 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@laptop-xxxxx.local:44417]
14/09/19 11:24:32 INFO Utils: Successfully started service 'sparkDriver' on port 44417.
14/09/19 11:24:32 INFO SparkEnv: Registering MapOutputTracker
14/09/19 11:24:32 INFO SparkEnv: Registering BlockManagerMaster
14/09/19 11:24:32 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140919112432-527c
14/09/19 11:24:32 INFO Utils: Successfully started service 'Connection manager for block manager' on port 44978.
14/09/19 11:24:32 INFO ConnectionManager: Bound socket to port 44978 with id = ConnectionManagerId(laptop-xxxxx.local,44978)
14/09/19 11:24:32 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
14/09/19 11:24:32 INFO BlockManagerMaster: Trying to register BlockManager
14/09/19 11:24:32 INFO BlockManagerMasterActor: Registering block manager laptop-xxxxx.local:44978 with 265.4 MB RAM
14/09/19 11:24:32 INFO BlockManagerMaster: Registered BlockManager
14/09/19 11:24:32 INFO HttpFileServer: HTTP File server directory is /tmp/spark-4168e04d-508f-4f3b-92b4-050ecb47dfc7
14/09/19 11:24:32 INFO HttpServer: Starting HTTP Server
14/09/19 11:24:32 INFO Utils: Successfully started service 'HTTP file server' on port 54892.
14/09/19 11:24:32 INFO Utils: Successfully started service 'SparkUI' on port 4040.
14/09/19 11:24:32 INFO SparkUI: Started SparkUI at http://laptop-xxxxx.local:4040
14/09/19 11:24:33 INFO SparkContext: Added JAR file:/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop1/lib/spark-examples-1.1.0-hadoop1.0.4.jar at http://192.168.2.2:54892/jars/spark-examples-1.1.0-hadoop1.0.4.jar with timestamp 1411118673018
14/09/19 11:24:33 INFO Utils: Copying /home/frens-jan/Desktop/spark-1.1.0-bin-hadoop1/examples/src/main/python/cassandra_inputformat.py to /tmp/spark-be9320ce-82f7-437d-af36-a31b6f7375be/cassandra_inputformat.py
14/09/19 11:24:33 INFO SparkContext: Added file file:/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop1/examples/src/main/python/cassandra_inputformat.py at http://192.168.2.2:54892/files/cassandra_inputformat.py with timestamp 1411118673019
14/09/19 11:24:33 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@laptop-xxxxx.local:44417/user/HeartbeatReceiver
14/09/19 11:24:33 INFO MemoryStore: ensureFreeSpace(34980) called with curMem=0, maxMem=278302556
14/09/19 11:24:33 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 34.2 KB, free 265.4 MB)
14/09/19 11:24:33 INFO MemoryStore: ensureFreeSpace(34980) called with curMem=34980, maxMem=278302556
14/09/19 11:24:33 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 34.2 KB, free 265.3 MB)
14/09/19 11:24:33 INFO Converter: Loaded converter: org.apache.spark.examples.pythonconverters.CassandraCQLKeyConverter
14/09/19 11:24:33 INFO Converter: Loaded converter: org.apache.spark.examples.pythonconverters.CassandraCQLValueConverter
14/09/19 11:24:33 INFO SparkContext: Starting job: first at SerDeUtil.scala:70
14/09/19 11:24:33 INFO DAGScheduler: Got job 0 (first at SerDeUtil.scala:70) with 1 output partitions (allowLocal=true)
14/09/19 11:24:33 INFO DAGScheduler: Final stage: Stage 0(first at SerDeUtil.scala:70)
14/09/19 11:24:33 INFO DAGScheduler: Parents of final stage: List()
14/09/19 11:24:33 INFO DAGScheduler: Missing parents: List()
14/09/19 11:24:33 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map at PythonHadoopUtil.scala:185), which has no missing parents
14/09/19 11:24:33 INFO MemoryStore: ensureFreeSpace(2440) called with curMem=69960, maxMem=278302556
14/09/19 11:24:33 INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 2.4 KB, free 265.3 MB)
14/09/19 11:24:33 INFO DAGScheduler: Submitting 1 missing tasks from Stage 0 (MappedRDD[1] at map at PythonHadoopUtil.scala:185)
14/09/19 11:24:33 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
14/09/19 11:24:33 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1365 bytes)
14/09/19 11:24:33 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
14/09/19 11:24:33 INFO Executor: Fetching http://192.168.2.2:54892/files/cassandra_inputformat.py with timestamp 1411118673019
14/09/19 11:24:33 INFO Utils: Fetching http://192.168.2.2:54892/files/cassandra_inputformat.py to /tmp/fetchFileTemp2573104058304705570.tmp
14/09/19 11:24:33 INFO Executor: Fetching http://192.168.2.2:54892/jars/spark-examples-1.1.0-hadoop1.0.4.jar with timestamp 1411118673018
14/09/19 11:24:33 INFO Utils: Fetching http://192.168.2.2:54892/jars/spark-examples-1.1.0-hadoop1.0.4.jar to /tmp/fetchFileTemp4263044381025913656.tmp
14/09/19 11:24:33 INFO Executor: Adding file:/tmp/spark-be9320ce-82f7-437d-af36-a31b6f7375be/spark-examples-1.1.0-hadoop1.0.4.jar to class loader
14/09/19 11:24:33 INFO NewHadoopRDD: Input split: ColumnFamilySplit((8352330471461189390, '8471518727309379778] @[localhost])
14/09/19 11:24:33 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
java.lang.NoClassDefFoundError: org/codehaus/jackson/annotate/JsonClass
	at org.codehaus.jackson.map.introspect.JacksonAnnotationIntrospector.findDeserializationType(JacksonAnnotationIntrospector.java:524)
	at org.codehaus.jackson.map.deser.BasicDeserializerFactory.modifyTypeByAnnotation(BasicDeserializerFactory.java:732)
	at org.codehaus.jackson.map.deser.BasicDeserializerFactory.createCollectionDeserializer(BasicDeserializerFactory.java:229)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider._createDeserializer(StdDeserializerProvider.java:386)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCache2(StdDeserializerProvider.java:307)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCacheValueDeserializer(StdDeserializerProvider.java:287)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider.findValueDeserializer(StdDeserializerProvider.java:136)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider.findTypedValueDeserializer(StdDeserializerProvider.java:157)
	at org.codehaus.jackson.map.ObjectMapper._findRootDeserializer(ObjectMapper.java:2468)
	at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
	at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
	at org.apache.cassandra.utils.FBUtilities.fromJsonList(FBUtilities.java:530)
	at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.retrieveKeys(CqlPagingRecordReader.java:666)
	at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:140)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:117)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:103)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:65)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
	at org.apache.spark.scheduler.Task.run(Task.scala:54)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.annotate.JsonClass
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	... 28 more
14/09/19 11:24:33 ERROR ExecutorUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.NoClassDefFoundError: org/codehaus/jackson/annotate/JsonClass
	at org.codehaus.jackson.map.introspect.JacksonAnnotationIntrospector.findDeserializationType(JacksonAnnotationIntrospector.java:524)
	at org.codehaus.jackson.map.deser.BasicDeserializerFactory.modifyTypeByAnnotation(BasicDeserializerFactory.java:732)
	at org.codehaus.jackson.map.deser.BasicDeserializerFactory.createCollectionDeserializer(BasicDeserializerFactory.java:229)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider._createDeserializer(StdDeserializerProvider.java:386)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCache2(StdDeserializerProvider.java:307)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCacheValueDeserializer(StdDeserializerProvider.java:287)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider.findValueDeserializer(StdDeserializerProvider.java:136)
	at org.codehaus.jackson.map.deser.StdDeserializerProvider.findTypedValueDeserializer(StdDeserializerProvider.java:157)
	at org.codehaus.jackson.map.ObjectMapper._findRootDeserializer(ObjectMapper.java:2468)
	at org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
	at org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
	at org.apache.cassandra.utils.FBUtilities.fromJsonList(FBUtilities.java:530)
	at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.retrieveKeys(CqlPagingRecordReader.java:666)
	at org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:140)
	at org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:117)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:103)
	at org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:65)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
	at org.apache.spark.scheduler.Task.run(Task.scala:54)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.annotate.JsonClass
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	... 28 more
14/09/19 11:24:33 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NoClassDefFoundError: org/codehaus/jackson/annotate/JsonClass
        org.codehaus.jackson.map.introspect.JacksonAnnotationIntrospector.findDeserializationType(JacksonAnnotationIntrospector.java:524)
        org.codehaus.jackson.map.deser.BasicDeserializerFactory.modifyTypeByAnnotation(BasicDeserializerFactory.java:732)
        org.codehaus.jackson.map.deser.BasicDeserializerFactory.createCollectionDeserializer(BasicDeserializerFactory.java:229)
        org.codehaus.jackson.map.deser.StdDeserializerProvider._createDeserializer(StdDeserializerProvider.java:386)
        org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCache2(StdDeserializerProvider.java:307)
        org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCacheValueDeserializer(StdDeserializerProvider.java:287)
        org.codehaus.jackson.map.deser.StdDeserializerProvider.findValueDeserializer(StdDeserializerProvider.java:136)
        org.codehaus.jackson.map.deser.StdDeserializerProvider.findTypedValueDeserializer(StdDeserializerProvider.java:157)
        org.codehaus.jackson.map.ObjectMapper._findRootDeserializer(ObjectMapper.java:2468)
        org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
        org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
        org.apache.cassandra.utils.FBUtilities.fromJsonList(FBUtilities.java:530)
        org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.retrieveKeys(CqlPagingRecordReader.java:666)
        org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:140)
        org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:117)
        org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:103)
        org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:65)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
14/09/19 11:24:33 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
14/09/19 11:24:33 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool 
14/09/19 11:24:33 INFO TaskSchedulerImpl: Cancelling stage 0
14/09/19 11:24:33 INFO DAGScheduler: Failed to run first at SerDeUtil.scala:70
Traceback (most recent call last):
  File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop1/examples/src/main/python/cassandra_inputformat.py", line 76, in <module>
    conf=conf)
  File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop1/python/pyspark/context.py", line 471, in newAPIHadoopRDD
    jconf, batchSize)
  File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop1/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop1/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.lang.NoClassDefFoundError: org/codehaus/jackson/annotate/JsonClass
        org.codehaus.jackson.map.introspect.JacksonAnnotationIntrospector.findDeserializationType(JacksonAnnotationIntrospector.java:524)
        org.codehaus.jackson.map.deser.BasicDeserializerFactory.modifyTypeByAnnotation(BasicDeserializerFactory.java:732)
        org.codehaus.jackson.map.deser.BasicDeserializerFactory.createCollectionDeserializer(BasicDeserializerFactory.java:229)
        org.codehaus.jackson.map.deser.StdDeserializerProvider._createDeserializer(StdDeserializerProvider.java:386)
        org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCache2(StdDeserializerProvider.java:307)
        org.codehaus.jackson.map.deser.StdDeserializerProvider._createAndCacheValueDeserializer(StdDeserializerProvider.java:287)
        org.codehaus.jackson.map.deser.StdDeserializerProvider.findValueDeserializer(StdDeserializerProvider.java:136)
        org.codehaus.jackson.map.deser.StdDeserializerProvider.findTypedValueDeserializer(StdDeserializerProvider.java:157)
        org.codehaus.jackson.map.ObjectMapper._findRootDeserializer(ObjectMapper.java:2468)
        org.codehaus.jackson.map.ObjectMapper._readMapAndClose(ObjectMapper.java:2402)
        org.codehaus.jackson.map.ObjectMapper.readValue(ObjectMapper.java:1602)
        org.apache.cassandra.utils.FBUtilities.fromJsonList(FBUtilities.java:530)
        org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.retrieveKeys(CqlPagingRecordReader.java:666)
        org.apache.cassandra.hadoop.cql3.CqlPagingRecordReader.initialize(CqlPagingRecordReader.java:140)
        org.apache.spark.rdd.NewHadoopRDD$$anon$1.<init>(NewHadoopRDD.scala:117)
        org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:103)
        org.apache.spark.rdd.NewHadoopRDD.compute(NewHadoopRDD.scala:65)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
	at akka.actor.ActorCell.invoke(ActorCell.scala:456)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
	at akka.dispatch.Mailbox.run(Mailbox.scala:219)
	at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
{noformat}
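
The exact invocation is not shown above; judging from the jar and file paths in the log, it was presumably the hadoop1 analogue of the reproduction steps from the description, along the lines of:

{noformat}
cd spark-1.1.0-bin-hadoop1/
./bin/spark-submit --jars lib/spark-examples-1.1.0-hadoop1.0.4.jar \
    examples/src/main/python/cassandra_inputformat.py localhost keyspace cf
{noformat}

with localhost, keyspace and cf standing in for the actual Cassandra contact point, keyspace and column family.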

> Can't run cassandra_inputformat.py
> ----------------------------------
>
>                 Key: SPARK-3602
>                 URL: https://issues.apache.org/jira/browse/SPARK-3602
>             Project: Spark
>          Issue Type: Bug
>          Components: Examples, PySpark
>    Affects Versions: 1.1.0
>         Environment: Ubuntu 14.04
>            Reporter: Frens Jan Rumph
>
> When I execute:
> {noformat}
> wget http://apache.cs.uu.nl/dist/spark/spark-1.1.0/spark-1.1.0-bin-hadoop2.4.tgz
> tar xzf spark-1.1.0-bin-hadoop2.4.tgz
> cd spark-1.1.0-bin-hadoop2.4/
> ./bin/spark-submit --jars lib/spark-examples-1.1.0-hadoop2.4.0.jar examples/src/main/python/cassandra_inputformat.py localhost keyspace cf
> {noformat}
> The output is:
> {noformat}
> Spark assembly has been built with Hive, including Datanucleus jars on classpath
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> 14/09/19 10:41:10 WARN Utils: Your hostname, laptop-xxxxx resolves to a loopback address: 127.0.0.1; using 192.168.2.2 instead (on interface wlan0)
> 14/09/19 10:41:10 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
> 14/09/19 10:41:10 INFO SecurityManager: Changing view acls to: frens-jan,
> 14/09/19 10:41:10 INFO SecurityManager: Changing modify acls to: frens-jan,
> 14/09/19 10:41:10 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(frens-jan, ); users with modify permissions: Set(frens-jan, )
> 14/09/19 10:41:11 INFO Slf4jLogger: Slf4jLogger started
> 14/09/19 10:41:11 INFO Remoting: Starting remoting
> 14/09/19 10:41:11 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@laptop-xxxxx.local:43790]
> 14/09/19 10:41:11 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@laptop-xxxxx.local:43790]
> 14/09/19 10:41:11 INFO Utils: Successfully started service 'sparkDriver' on port 43790.
> 14/09/19 10:41:11 INFO SparkEnv: Registering MapOutputTracker
> 14/09/19 10:41:11 INFO SparkEnv: Registering BlockManagerMaster
> 14/09/19 10:41:11 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140919104111-145e
> 14/09/19 10:41:11 INFO Utils: Successfully started service 'Connection manager for block manager' on port 45408.
> 14/09/19 10:41:11 INFO ConnectionManager: Bound socket to port 45408 with id = ConnectionManagerId(laptop-xxxxx.local,45408)
> 14/09/19 10:41:11 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
> 14/09/19 10:41:11 INFO BlockManagerMaster: Trying to register BlockManager
> 14/09/19 10:41:11 INFO BlockManagerMasterActor: Registering block manager laptop-xxxxx.local:45408 with 265.4 MB RAM
> 14/09/19 10:41:11 INFO BlockManagerMaster: Registered BlockManager
> 14/09/19 10:41:11 INFO HttpFileServer: HTTP File server directory is /tmp/spark-5f0289d7-9b20-4bd7-a713-db84c38c4eac
> 14/09/19 10:41:11 INFO HttpServer: Starting HTTP Server
> 14/09/19 10:41:11 INFO Utils: Successfully started service 'HTTP file server' on port 36556.
> 14/09/19 10:41:11 INFO Utils: Successfully started service 'SparkUI' on port 4040.
> 14/09/19 10:41:11 INFO SparkUI: Started SparkUI at http://laptop-frens-jan.local:4040
> 14/09/19 10:41:12 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 14/09/19 10:41:12 INFO SparkContext: Added JAR file:/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/lib/spark-examples-1.1.0-hadoop2.4.0.jar at http://192.168.2.2:36556/jars/spark-examples-1.1.0-hadoop2.4.0.jar with timestamp 1411116072417
> 14/09/19 10:41:12 INFO Utils: Copying /home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/examples/src/main/python/cassandra_inputformat.py to /tmp/spark-7dbb1b4d-016c-4f8b-858d-f79c9297f58f/cassandra_inputformat.py
> 14/09/19 10:41:12 INFO SparkContext: Added file file:/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/examples/src/main/python/cassandra_inputformat.py at http://192.168.2.2:36556/files/cassandra_inputformat.py with timestamp 1411116072419
> 14/09/19 10:41:12 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@laptop-frens-jan.local:43790/user/HeartbeatReceiver
> 14/09/19 10:41:12 INFO MemoryStore: ensureFreeSpace(167659) called with curMem=0, maxMem=278302556
> 14/09/19 10:41:12 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 163.7 KB, free 265.3 MB)
> 14/09/19 10:41:12 INFO MemoryStore: ensureFreeSpace(167659) called with curMem=167659, maxMem=278302556
> 14/09/19 10:41:12 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 163.7 KB, free 265.1 MB)
> 14/09/19 10:41:12 INFO Converter: Loaded converter: org.apache.spark.examples.pythonconverters.CassandraCQLKeyConverter
> 14/09/19 10:41:12 INFO Converter: Loaded converter: org.apache.spark.examples.pythonconverters.CassandraCQLValueConverter
> Traceback (most recent call last):
>   File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/examples/src/main/python/cassandra_inputformat.py", line 76, in <module>
>     conf=conf)
>   File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/python/pyspark/context.py", line 471, in newAPIHadoopRDD
>     jconf, batchSize)
>   File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
>   File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
> py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
> : java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
> 	at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:113)
> 	at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:94)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
> 	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
> 	at org.apache.spark.rdd.RDD.take(RDD.scala:1060)
> 	at org.apache.spark.rdd.RDD.first(RDD.scala:1092)
> 	at org.apache.spark.api.python.SerDeUtil$.pairRDDToPython(SerDeUtil.scala:70)
> 	at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:441)
> 	at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
> 	at py4j.Gateway.invoke(Gateway.java:259)
> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
> 	at py4j.GatewayConnection.run(GatewayConnection.java:207)
> 	at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I am able to run Scala-based jobs, though. I've tried various alternative sets of classpaths with the --jars option, but without success. It would be nice if the example ran out of the box :)
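> For reference, the failing call (line 76 of cassandra_inputformat.py in the traceback above) is the newAPIHadoopRDD invocation in the example. A rough sketch of that part of the file follows, reconstructed from memory rather than copied verbatim; the converter class names at least match the ones reported as loaded just before the error:
> {noformat}
> # Sketch of the relevant part of examples/src/main/python/cassandra_inputformat.py,
> # reconstructed from memory; the class names match the converters and the
> # CqlPagingRecordReader seen in the traces, the config keys may differ slightly.
> import sys
> from pyspark import SparkContext
>
> host, keyspace, cf = sys.argv[1:4]
> sc = SparkContext(appName="CassandraInputFormat")
>
> conf = {"cassandra.input.thrift.address": host,
>         "cassandra.input.thrift.port": "9160",
>         "cassandra.input.keyspace": keyspace,
>         "cassandra.input.columnfamily": cf,
>         "cassandra.input.partitioner.class": "Murmur3Partitioner"}
>
> cass_rdd = sc.newAPIHadoopRDD(
>     "org.apache.cassandra.hadoop.cql3.CqlPagingInputFormat",
>     "java.util.Map",
>     "java.util.Map",
>     keyConverter="org.apache.spark.examples.pythonconverters.CassandraCQLKeyConverter",
>     valueConverter="org.apache.spark.examples.pythonconverters.CassandraCQLValueConverter",
>     conf=conf)   # line 76 in the traceback
> {noformat}
> The error above is raised while computing the input splits for the Cassandra column family (AbstractColumnFamilyInputFormat.getSplits), before any task is launched.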


