Posted to user@spark.apache.org by aazout <al...@velos.io> on 2014/04/10 16:39:16 UTC

Spark 0.9.1 PySpark ImportError

I am getting a Python ImportError on a Spark standalone cluster. I have set
PYTHONPATH on both the master and the slave, and the package imports properly
when I run the PySpark command line on both machines. The failure only occurs
with master-to-slave communication. Here is the error:

14/04/10 13:40:19 INFO scheduler.TaskSetManager: Loss was due to
org.apache.spark.api.python.PythonException: Traceback (most recent call
last): 
  File "/root/spark/python/pyspark/worker.py", line 73, in main 
    command = pickleSer._read_with_length(infile) 
  File "/root/spark/python/pyspark/serializers.py", line 137, in
_read_with_length 
    return self.loads(obj) 
  File "/root/spark/python/pyspark/cloudpickle.py", line 810, in subimport 
    __import__(name) 
ImportError: ('No module named volatility.atm_impl_vol', <function subimport
at 0xa36050>, ('volatility.atm_impl_vol',)) 

Any ideas?
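
(As a point of reference, one way to make a local package visible to the
executors without relying on PYTHONPATH is to ship it with the job. A minimal
sketch; the .zip path and master URL are hypothetical, and the module name is
taken from the traceback above.)

from pyspark import SparkContext

# Ship a zip/egg containing the 'volatility' package to every executor.
sc = SparkContext("spark://master:7077", "atm-impl-vol",
                  pyFiles=["/root/volatility.zip"])

# Or, on an already-running context:
sc.addPyFile("/root/volatility.zip")

# Tasks running on the executors can then do:
#   from volatility import atm_impl_vol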



-----
CEO / Velos (velos.io)
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-0-9-1-PySpark-ImportError-tp4068.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

Re: Spark 0.9.1 PySpark ImportError

Posted by aazout <al...@velos.io>.
Matei, thanks. Including the PYTHONPATH in spark-env.sh seemed to work. I am
now faced with a new issue: I am doing a large groupBy in PySpark and the
process fails (at the driver, it seems). There is not much of a stack trace
here to show where the problem is. The same process works locally.

----

14/04/11 12:59:11 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0,
whose tasks have all completed, from pool
14/04/11 12:59:11 INFO scheduler.DAGScheduler: Failed to run foreach at
load/load_etl.py:150
Traceback (most recent call last):
  File "load/load_etl.py", line 164, in <module>
    generateImplVolSeries(dirName="vodimo/data/month/", symbols=symbols,
outputFilePath="vodimo/data/series/output")
  File "load/load_etl.py", line 150, in generateImplVolSeries
    rdd = rdd.foreach(generateATMImplVols)
  File "/root/spark/python/pyspark/rdd.py", line 462, in foreach
    self.mapPartitions(processPartition).collect()  # Force evaluation
  File "/root/spark/python/pyspark/rdd.py", line 469, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "/root/spark/python/lib/py4j-0.8.1-src.zip/py4j/java_gateway.py",
line 537, in __call__
  File "/root/spark/python/lib/py4j-0.8.1-src.zip/py4j/protocol.py", line
300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o55.collect.
: org.apache.spark.SparkException: Job aborted: Task 0.0:0 failed 4 times
(most recent failure: unknown)
	at
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
	at
org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
	at
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at
org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
	at
org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
	at
org.apache.spark.scheduler.DAGScheduler$$anonfun$processEvent$10.apply(DAGScheduler.scala:604)
	at scala.Option.foreach(Option.scala:236)
	at
org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:604)
	at
org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
	at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
	at akka.actor.ActorCell.invoke(ActorCell.scala:456)
	at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
	at akka.dispatch.Mailbox.run(Mailbox.scala:219)
	at
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
	at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
	at
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
	at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
	at
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

14/04/11 12:59:11 INFO scheduler.DAGScheduler: Executor lost: 3 (epoch 4)
14/04/11 12:59:11 INFO storage.BlockManagerMasterActor: Trying to remove
executor 3 from BlockManagerMaster.
14/04/11 12:59:11 INFO storage.BlockManagerMaster: Removed 3 successfully in
removeExecutor
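
(For anyone hitting a similar wall: when a wide groupBy dies like this, two
common mitigations are to spread the shuffle over more partitions and, where
possible, to aggregate per key rather than materialize every value. A minimal,
hypothetical PySpark sketch; the RDD contents and partition counts are
placeholders, not taken from the job above.)

from pyspark import SparkContext

sc = SparkContext("local", "groupby-sketch")  # hypothetical context
pairs = sc.parallelize([("AAPL", 1.0), ("AAPL", 2.0), ("MSFT", 3.0)])

# Spread the shuffle over more partitions so each reduce task holds less data.
grouped = pairs.groupByKey(200)

# If only an aggregate per key is needed, combine values instead of
# materializing every value for a key in memory.
totals = pairs.reduceByKey(lambda a, b: a + b, 200)
print(totals.collect())

# Note: foreach() runs purely for side effects and returns None, so assigning
# its result back to a variable discards the data.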



-----
CEO / Velos (velos.io)
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-0-9-1-PySpark-ImportError-tp4068p4125.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

Re: Spark 0.9.1 PySpark ImportError

Posted by Matei Zaharia <ma...@gmail.com>.
Kind of strange because we haven’t updated CloudPickle AFAIK. Is this a package you added on the PYTHONPATH? How did you set the path? Was it in conf/spark-env.sh?

Matei
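
(A quick way to confirm whether the executors themselves can import the
package is to attempt the import inside a task. A minimal sketch; the master
URL is hypothetical and the module name is taken from the traceback.)

from pyspark import SparkContext

sc = SparkContext("spark://master:7077", "import-check")

def try_import(_):
    try:
        import volatility.atm_impl_vol  # module name from the traceback
        return "ok"
    except ImportError as e:
        return "failed: %s" % e

# One task per partition reports whether that executor can see the package.
print(sc.parallelize(range(8), 8).map(try_import).collect())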

On Apr 10, 2014, at 7:39 AM, aazout <al...@velos.io> wrote:

> I am getting a Python ImportError on a Spark standalone cluster. I have set
> PYTHONPATH on both the master and the slave, and the package imports properly
> when I run the PySpark command line on both machines. The failure only occurs
> with master-to-slave communication. Here is the error:
> 
> 14/04/10 13:40:19 INFO scheduler.TaskSetManager: Loss was due to
> org.apache.spark.api.python.PythonException: Traceback (most recent call
> last): 
>  File "/root/spark/python/pyspark/worker.py", line 73, in main 
>    command = pickleSer._read_with_length(infile) 
>  File "/root/spark/python/pyspark/serializers.py", line 137, in
> _read_with_length 
>    return self.loads(obj) 
>  File "/root/spark/python/pyspark/cloudpickle.py", line 810, in subimport 
>    __import__(name) 
> ImportError: ('No module named volatility.atm_impl_vol', <function subimport
> at 0xa36050>, ('volatility.atm_impl_vol',)) 
> 
> Any ideas?
> 
> 
> 
> -----
> CEO / Velos (velos.io)
> --
> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Spark-0-9-1-PySpark-ImportError-tp4068.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.