Posted to issues@spark.apache.org by "Sean Owen (JIRA)" <ji...@apache.org> on 2015/05/15 15:35:00 UTC

[jira] [Resolved] (SPARK-3602) Can't run cassandra_inputformat.py

     [ https://issues.apache.org/jira/browse/SPARK-3602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sean Owen resolved SPARK-3602.
------------------------------
    Resolution: Not A Problem

I think this is due to mismatched Hadoop libs; in any case, the report is stale enough at this point that it should be closed.
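
For the record, the IncompatibleClassChangeError in the trace below is the classic symptom of that mismatch: org.apache.hadoop.mapreduce.JobContext was a class in Hadoop 1.x and became an interface in Hadoop 2.x, so code compiled against one fails at load time on the other. A quick way to see which flavour the driver's classpath carries, from PySpark (a diagnostic sketch only; sc._jvm is a private Py4J handle, used here just to interrogate the JVM):

{noformat}
# Sketch: ask the driver JVM whether JobContext is a class or an interface.
from pyspark import SparkContext

sc = SparkContext("local", "jobcontext-check")
# sc._jvm is a private Py4J gateway into the driver JVM.
cls = sc._jvm.java.lang.Class.forName("org.apache.hadoop.mapreduce.JobContext")
print("interface?", cls.isInterface())  # True on Hadoop 2.x, False on 1.x
{noformat}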

> Can't run cassandra_inputformat.py
> ----------------------------------
>
>                 Key: SPARK-3602
>                 URL: https://issues.apache.org/jira/browse/SPARK-3602
>             Project: Spark
>          Issue Type: Bug
>          Components: Examples, PySpark
>    Affects Versions: 1.1.0
>         Environment: Ubuntu 14.04
>            Reporter: Frens Jan Rumph
>
> When I execute:
> {noformat}
> wget http://apache.cs.uu.nl/dist/spark/spark-1.1.0/spark-1.1.0-bin-hadoop2.4.tgz
> tar xzf spark-1.1.0-bin-hadoop2.4.tgz
> cd spark-1.1.0-bin-hadoop2.4/
> ./bin/spark-submit --jars lib/spark-examples-1.1.0-hadoop2.4.0.jar examples/src/main/python/cassandra_inputformat.py localhost keyspace cf
> {noformat}
> The output is:
> {noformat}
> Spark assembly has been built with Hive, including Datanucleus jars on classpath
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> 14/09/19 10:41:10 WARN Utils: Your hostname, laptop-xxxxx resolves to a loopback address: 127.0.0.1; using 192.168.2.2 instead (on interface wlan0)
> 14/09/19 10:41:10 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
> 14/09/19 10:41:10 INFO SecurityManager: Changing view acls to: frens-jan,
> 14/09/19 10:41:10 INFO SecurityManager: Changing modify acls to: frens-jan,
> 14/09/19 10:41:10 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(frens-jan, ); users with modify permissions: Set(frens-jan, )
> 14/09/19 10:41:11 INFO Slf4jLogger: Slf4jLogger started
> 14/09/19 10:41:11 INFO Remoting: Starting remoting
> 14/09/19 10:41:11 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@laptop-xxxxx.local:43790]
> 14/09/19 10:41:11 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@laptop-xxxxx.local:43790]
> 14/09/19 10:41:11 INFO Utils: Successfully started service 'sparkDriver' on port 43790.
> 14/09/19 10:41:11 INFO SparkEnv: Registering MapOutputTracker
> 14/09/19 10:41:11 INFO SparkEnv: Registering BlockManagerMaster
> 14/09/19 10:41:11 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140919104111-145e
> 14/09/19 10:41:11 INFO Utils: Successfully started service 'Connection manager for block manager' on port 45408.
> 14/09/19 10:41:11 INFO ConnectionManager: Bound socket to port 45408 with id = ConnectionManagerId(laptop-xxxxx.local,45408)
> 14/09/19 10:41:11 INFO MemoryStore: MemoryStore started with capacity 265.4 MB
> 14/09/19 10:41:11 INFO BlockManagerMaster: Trying to register BlockManager
> 14/09/19 10:41:11 INFO BlockManagerMasterActor: Registering block manager laptop-xxxxx.local:45408 with 265.4 MB RAM
> 14/09/19 10:41:11 INFO BlockManagerMaster: Registered BlockManager
> 14/09/19 10:41:11 INFO HttpFileServer: HTTP File server directory is /tmp/spark-5f0289d7-9b20-4bd7-a713-db84c38c4eac
> 14/09/19 10:41:11 INFO HttpServer: Starting HTTP Server
> 14/09/19 10:41:11 INFO Utils: Successfully started service 'HTTP file server' on port 36556.
> 14/09/19 10:41:11 INFO Utils: Successfully started service 'SparkUI' on port 4040.
> 14/09/19 10:41:11 INFO SparkUI: Started SparkUI at http://laptop-frens-jan.local:4040
> 14/09/19 10:41:12 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 14/09/19 10:41:12 INFO SparkContext: Added JAR file:/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/lib/spark-examples-1.1.0-hadoop2.4.0.jar at http://192.168.2.2:36556/jars/spark-examples-1.1.0-hadoop2.4.0.jar with timestamp 1411116072417
> 14/09/19 10:41:12 INFO Utils: Copying /home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/examples/src/main/python/cassandra_inputformat.py to /tmp/spark-7dbb1b4d-016c-4f8b-858d-f79c9297f58f/cassandra_inputformat.py
> 14/09/19 10:41:12 INFO SparkContext: Added file file:/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/examples/src/main/python/cassandra_inputformat.py at http://192.168.2.2:36556/files/cassandra_inputformat.py with timestamp 1411116072419
> 14/09/19 10:41:12 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@laptop-frens-jan.local:43790/user/HeartbeatReceiver
> 14/09/19 10:41:12 INFO MemoryStore: ensureFreeSpace(167659) called with curMem=0, maxMem=278302556
> 14/09/19 10:41:12 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 163.7 KB, free 265.3 MB)
> 14/09/19 10:41:12 INFO MemoryStore: ensureFreeSpace(167659) called with curMem=167659, maxMem=278302556
> 14/09/19 10:41:12 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 163.7 KB, free 265.1 MB)
> 14/09/19 10:41:12 INFO Converter: Loaded converter: org.apache.spark.examples.pythonconverters.CassandraCQLKeyConverter
> 14/09/19 10:41:12 INFO Converter: Loaded converter: org.apache.spark.examples.pythonconverters.CassandraCQLValueConverter
> Traceback (most recent call last):
>   File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/examples/src/main/python/cassandra_inputformat.py", line 76, in <module>
>     conf=conf)
>   File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/python/pyspark/context.py", line 471, in newAPIHadoopRDD
>     jconf, batchSize)
>   File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
>   File "/home/frens-jan/Desktop/spark-1.1.0-bin-hadoop2.4/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
> py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
> : java.lang.IncompatibleClassChangeError: Found interface org.apache.hadoop.mapreduce.JobContext, but class was expected
> 	at org.apache.cassandra.hadoop.AbstractColumnFamilyInputFormat.getSplits(AbstractColumnFamilyInputFormat.java:113)
> 	at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:94)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
> 	at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
> 	at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
> 	at scala.Option.getOrElse(Option.scala:120)
> 	at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
> 	at org.apache.spark.rdd.RDD.take(RDD.scala:1060)
> 	at org.apache.spark.rdd.RDD.first(RDD.scala:1092)
> 	at org.apache.spark.api.python.SerDeUtil$.pairRDDToPython(SerDeUtil.scala:70)
> 	at org.apache.spark.api.python.PythonRDD$.newAPIHadoopRDD(PythonRDD.scala:441)
> 	at org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD(PythonRDD.scala)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 	at java.lang.reflect.Method.invoke(Method.java:606)
> 	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
> 	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
> 	at py4j.Gateway.invoke(Gateway.java:259)
> 	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
> 	at py4j.commands.CallCommand.execute(CallCommand.java:79)
> 	at py4j.GatewayConnection.run(GatewayConnection.java:207)
> 	at java.lang.Thread.run(Thread.java:745)
> {noformat}
> I am able to run Scala-based jobs, though. I've tried various alternative sets of classpaths using the --jars option, but without success. It would be nice if the example ran out of the box :)
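
For anyone hitting the same trace: the failure comes from the example's newAPIHadoopRDD call. Roughly, the call looks like the following (a sketch following the Spark 1.1 example; the conf values are illustrative, not the exact example source):

{noformat}
# Sketch of the call that fails above, per cassandra_inputformat.py (Spark 1.1).
# The conf values below are illustrative placeholders.
from pyspark import SparkContext

sc = SparkContext("local", "CassandraInputFormat")
conf = {"cassandra.input.thrift.address": "localhost",
        "cassandra.input.thrift.port": "9160",
        "cassandra.input.keyspace": "keyspace",
        "cassandra.input.columnfamily": "cf",
        "cassandra.input.partitioner.class": "Murmur3Partitioner",
        "cassandra.input.page.row.size": "3"}
cass_rdd = sc.newAPIHadoopRDD(
    "org.apache.cassandra.hadoop.cql3.CqlPagingInputFormat",
    "java.util.Map",   # key class
    "java.util.Map",   # value class
    keyConverter="org.apache.spark.examples.pythonconverters.CassandraCQLKeyConverter",
    valueConverter="org.apache.spark.examples.pythonconverters.CassandraCQLValueConverter",
    conf=conf)
# The error is raised from getSplits() inside the Cassandra input format,
# i.e. before any rows are read.
{noformat}

Because the error occurs in getSplits(), before any data is touched, no combination of --jars helps while the Cassandra classes bundled in the examples jar and the Hadoop classes on the runtime classpath come from different Hadoop major versions; consistent with the resolution above, the Hadoop versions need to agree.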



