Posted to users@zeppelin.apache.org by Paulo Cheadi Haddad Filho <pa...@gmail.com> on 2015/09/16 18:20:24 UTC

Fwd: Zeppelin error when trying to run pyspark using python3

Hello,

Yesterday I installed a Spark server with Zeppelin and, while testing in a
new notebook, I realized that pyspark is using Python 2.7.9. I have Python
3.4.3 installed as well, and I can share more details about the setup later if needed.

Looking for how to use python3, I found this post [1]. I tried setting
these env variables in both .bashrc and zeppelin-env.sh:

> export PYSPARK_PYTHON="python3"
> export PYSPARK_DRIVER_PYTHON="ipython3"
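
For reference, the relevant part of conf/zeppelin-env.sh ends up looking roughly
like this (the SPARK_HOME value is inferred from the worker.py path in the stack
trace below; changes here only take effect after Zeppelin, or at least its Spark
interpreter, is restarted):

> export SPARK_HOME="/usr/local/spark-1.5.0-bin-hadoop2.6"  # inferred from the stack trace below
> export PYSPARK_PYTHON="python3"                           # Python used by the Spark executors
> export PYSPARK_DRIVER_PYTHON="ipython3"                   # Python used by the pyspark driver shell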


When I run ./bin/pyspark I get

> paulo_filho@spark:~$ $SPARK_HOME/bin/pyspark
> Python 3.4.3 (default, Mar 26 2015, 22:03:40)
> Type "copyright", "credits" or "license" for more information.
> IPython 4.0.0 -- An enhanced Interactive Python.
> ...
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /__ / .__/\_,_/_/ /_/\_\   version 1.5.0
>       /_/
> Using Python version 3.4.3 (default, Mar 26 2015 22:03:40)


but Zeppelin didn't work. Instead, I got the error below:

%pyspark
import sys
print(sys.version_info)

> sys.version_info(major=2, minor=7, micro=9, releaselevel='final', serial=0)



%pyspark
bankText = sc.textFile("/home/paulo_filho/data/bank.csv")
print(bankText.take(2))

> Py4JJavaError: An error occurred while calling
> z:org.apache.spark.api.python.PythonRDD.runJob.
> : org.apache.spark.SparkException: Job aborted due to stage failure: Task
> 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage
> 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException:
> Traceback (most recent call last):
>   File
> "/usr/local/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",
> line 64, in main
>     ("%d.%d" % sys.version_info[:2], version))
> Exception: Python in worker has different version 3.4 than that in driver
> 2.7, PySpark cannot run with different minor versions
> at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
> at
> org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
> at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> at org.apache.spark.scheduler.Task.run(Task.scala:88)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
> at org.apache.spark.scheduler.DAGScheduler.org
> $apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1280)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1268)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1267)
> at
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at
> org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1267)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
> at
> org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
> at scala.Option.foreach(Option.scala:236)
> at
> org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1493)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1455)
> at
> org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1444)
> at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
> at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
> at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
> at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:361)
> at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:497)
> at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
> at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
> at py4j.Gateway.invoke(Gateway.java:259)
> at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
> at py4j.commands.CallCommand.execute(CallCommand.java:79)
> at py4j.GatewayConnection.run(GatewayConnection.java:207)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.spark.api.python.PythonException: Traceback (most
> recent call last):
>   File
> "/usr/local/spark-1.5.0-bin-hadoop2.6/python/lib/pyspark.zip/pyspark/worker.py",
> line 64, in main
>     ("%d.%d" % sys.version_info[:2], version))
> Exception: Python in worker has different version 3.4 than that in driver
> 2.7, PySpark cannot run with different minor versions
> at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:138)
> at
> org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:179)
> at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:97)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
> at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
> at org.apache.spark.scheduler.Task.run(Task.scala:88)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> ... 1 more
> (<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred
> while calling z:org.apache.spark.api.python.PythonRDD.runJob.\n',
> JavaObject id=o53), <traceback object at 0x7f814b4596c8>)


I noticed that in Zeppelin's "Interpreter" section there's a config

> zeppelin.pyspark.python    python

I've already tried changing that, but I always got the same error.
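
In case it helps with the diagnosis, here is a minimal %pyspark sketch (assuming
only the sc SparkContext that Zeppelin injects) that prints the driver's Python
version and then asks the executors for theirs; with the setup above, the worker
check fails with the same version-mismatch error:

%pyspark
import sys

# Driver side: the Python that Zeppelin's pyspark interpreter actually started
print("driver Python : %d.%d.%d" % sys.version_info[:3])

def worker_python(_):
    import sys
    return "%d.%d.%d" % sys.version_info[:3]

# Worker side: run a tiny job so each executor reports its own interpreter.
# With mismatched versions this is exactly where PySpark raises the
# "different minor versions" error shown above.
try:
    versions = sc.parallelize([0, 1], 2).map(worker_python).collect()
    print("worker Python : %s" % ", ".join(sorted(set(versions))))
except Exception as e:
    print("worker check failed: %s" % e)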


So, I'm here asking for your help. =)

Thanks!


[1]
http://stackoverflow.com/questions/30518362/how-do-i-set-the-drivers-python-version-in-spark

Re: Fwd: Zeppelin error when trying to run pyspark using python3

Posted by Paulo Cheadi Haddad Filho <pa...@gmail.com>.
No problem, I will do it in the next few hours. Thanks!

On Fri, Sep 18, 2015, 1:26 PM Felix Cheung <fe...@hotmail.com>
wrote:

> It will be best if you could open a bug. I plan to set this up to test it
> out; it might take me a while.
>
>
>
>
>
> On Fri, Sep 18, 2015 at 7:54 AM -0700, "Paulo Cheadi Haddad Filho" <
> paulochf@gmail.com> wrote:
>
> Is this a bug? I can open an issue if it is.
>
> On Wed, Sep 16, 2015, 4:51 PM Paulo Cheadi Haddad Filho <
> paulochf@gmail.com> wrote:
>
> Actually, I've done this already, but I'd forgotten the outcome.
>
> I get "pyspark is not responding" message.
>
>
> On Wed, Sep 16, 2015 at 4:01 PM Felix Cheung <fe...@hotmail.com>
> wrote:
>
> Could you try setting zeppelin.pyspark.python in the interpreter setting
> to the matching Python 3? "python3" in your example below.

Re: Fwd: Zeppelin error when trying to run pyspark using python3

Posted by Felix Cheung <fe...@hotmail.com>.
It will be best if you could open a bug. I plan to set this up to test it out; it might take me a while.





On Fri, Sep 18, 2015 at 7:54 AM -0700, "Paulo Cheadi Haddad Filho" <pa...@gmail.com> wrote:
Is this a bug? I can open an issue if it is.

On Wed, Sep 16, 2015, 4:51 PM Paulo Cheadi Haddad Filho <pa...@gmail.com>
wrote:

> Actually, I've done this already, but I'd forgotten the outcome.
>
> I get "pyspark is not responding" message.
>
>
> On Wed, Sep 16, 2015 at 4:01 PM Felix Cheung <fe...@hotmail.com>
> wrote:
>
>> Could you try setting zeppelin.pyspark.python in the interpreter setting
>> to the matching Python 3? "python3" in your example below.

Re: Fwd: Zeppelin error when trying to run pyspark using python3

Posted by Paulo Cheadi Haddad Filho <pa...@gmail.com>.
Is this a bug? I can open an issue if it is.

On Wed, Sep 16, 2015, 4:51 PM Paulo Cheadi Haddad Filho <pa...@gmail.com>
wrote:

> Actually, I've done this already, but I'd forgotten the outcome.
>
> I get "pyspark is not responding" message.
>
>
> On Wed, Sep 16, 2015 at 4:01 PM Felix Cheung <fe...@hotmail.com>
> wrote:
>
>> Could you try setting zeppelin.pyspark.python in the interpreter setting
>> to the matching Python 3? "python3" in your example below.

Re: Fwd: Zeppelin error when trying to run pyspark using python3

Posted by Paulo Cheadi Haddad Filho <pa...@gmail.com>.
Actually, I've done this already, but I'd forgotten the outcome.

I get "pyspark is not responding" message.

On Wed, Sep 16, 2015 at 4:01 PM Felix Cheung <fe...@hotmail.com>
wrote:

> Could you try setting zeppelin.pyspark.python in the interpreter setting
> to the matching Python 3? "python3" in your example below.

Re: Fwd: Zeppelin error when trying to run pyspark using python3

Posted by Felix Cheung <fe...@hotmail.com>.
Could you try setting zeppelin.pyspark.python in the interpreter setting to the matching Python 3? "python3" in your example below.
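
Concretely, that would mean editing the spark interpreter's properties on the
Interpreter page so the entry reads roughly

> zeppelin.pyspark.python    python3

(or the full path to your python3 binary) and then restarting the interpreter so
the running pyspark process picks up the change; this is a sketch, and the exact
UI wording varies between Zeppelin versions.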


