Posted to users@zeppelin.apache.org by mingda li <li...@gmail.com> on 2017/02/01 02:48:47 UTC

Error about PySpark

Dear all,

We are using Zeppelin, and I have added
export PYTHONPATH=/home/clash/sparks/spark-1.6.1-bin-hadoop12/python
to zeppelin-env.sh.
But each time I want to use pyspark, for example with this program:

%pyspark
from pyspark import SparkContext
logFile = "hiv.data"
logData = sc.textFile(logFile).cache()
numAs = logData.filter(lambda s: 'a' in s).count()
numBs = logData.filter(lambda s: 'b' in s).count()
print "Lines with a: %i, lines with b: %i" % (numAs, numBs)

It runs fine the first time, but when I run it again I get this error:
Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-4018989172273347075.py", line 238, in <module>
    sc.setJobGroup(jobGroup, "Zeppelin")
  File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/pyspark/context.py", line 876, in setJobGroup
    self._jsc.setJobGroup(groupId, description, interruptOnCancel)
AttributeError: 'NoneType' object has no attribute 'setJobGroup'

I need to rm /tmp/zeppelin_pyspark-4018989172273347075.py and restart Zeppelin to make it work again.
Does anyone have an idea why?

Thanks
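
The attribute that comes back as None here is sc._jsc, the JVM-side handle inside the SparkContext, so the error means the context behind Zeppelin's sc was stopped or lost between the first and the second run. A minimal diagnostic sketch, assuming it is run as its own %pyspark paragraph:

%pyspark
# Diagnostic sketch: if the second print shows None, the next sc.setJobGroup(...)
# call will fail with exactly the AttributeError shown above.
print sc        # the Python-side SparkContext wrapper that Zeppelin provides
print sc._jsc   # the JVM-side context handle; None once the context has been stopped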

Re: Error about PySpark

Posted by mingda li <li...@gmail.com>.
Oh, yeah. I have two versions of Python installed on UCLA's cluster: Python 2.7.6 and Python 3.4.
Do I need to specify which Python Spark should use by setting "PYSPARK_PYTHON" in its environment?
Or do I need to remove one of the Python versions?

Thanks
Mingda
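
For reference, a minimal sketch of what that setting could look like in conf/zeppelin-env.sh; the interpreter path below is only a placeholder for whichever Python actually has numpy installed:

export SPARK_HOME=/home/clash/sparks/spark-1.6.1-bin-hadoop12
export PYSPARK_PYTHON=/usr/bin/python2.7         # placeholder path: the Python that has numpy
export PYSPARK_DRIVER_PYTHON=/usr/bin/python2.7  # placeholder path: usually the same interpreter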

On Thu, Feb 2, 2017 at 5:29 PM, Jeff Zhang <zj...@gmail.com> wrote:

> Do you have multiple Pythons installed? From the error message, it is clear
> that it is complaining that numpy is not installed.

Re: Error about PySpark

Posted by Jeff Zhang <zj...@gmail.com>.
Do you have multiple Pythons installed? From the error message, it is clear that it is complaining that numpy is not installed.



Re: Error about PySpark

Posted by mingda li <li...@gmail.com>.
I have numpy on the cluster; otherwise pySpark wouldn't work in the terminal either.

I also tried using numpy in a Python terminal, and it works.

Could there be any other reason?
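
One quick way to narrow this down is to print, from a %pyspark paragraph, which interpreter the Zeppelin driver and the executors are actually using, since numpy being importable in a login terminal does not guarantee it is importable for that interpreter. A sketch, assuming it is run as its own paragraph:

%pyspark
import sys
# Interpreter running the Zeppelin pyspark driver.
print sys.executable
# Interpreter running on an executor; this may differ from the one used in the terminal.
print sc.parallelize([0], 1).map(lambda _: __import__("sys").executable).collect()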

On Thu, Feb 2, 2017 at 4:52 PM, Jianfeng (Jeff) Zhang <
jzhang@hortonworks.com> wrote:

>
> Please try to install numpy

Re: Error about PySpark

Posted by "Jianfeng (Jeff) Zhang" <jz...@hortonworks.com>.
Please try to install numpy


Best Regards,
Jeff Zhang



Re: Error about PySpark

Posted by mingda li <li...@gmail.com>.
I also tried ./bin/pyspark to run the same program with the mllib package, and that works fine with Spark directly.

So do I need to set something for Zeppelin, like PYSPARK_PYTHON or PYTHONPATH?

Bests,
Mingda
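
Depending on the Zeppelin version, there is also a zeppelin.pyspark.python property on the Spark interpreter (Interpreter menu) that selects which Python executable %pyspark launches; a sketch of the kind of value it would take, with a placeholder path:

zeppelin.pyspark.python = /usr/bin/python2.7   (placeholder: the Python that has numpy)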


Re: Error about PySpark

Posted by mingda li <li...@gmail.com>.
Thanks. But after I changed the Zeppelin env as follows:

export JAVA_HOME=/home/clash/asterixdb/jdk1.8.0_101

export ZEPPELIN_PORT=19037

export SPARK_HOME=/home/clash/sparks/spark-1.6.1-bin-hadoop12
each time I want to use mllib in Zeppelin I hit this problem:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 (TID 13, SCAI05.CS.UCLA.EDU): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 25, in <module>
ImportError: No module named numpy
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.api.python.PythonRDD$.runJob(PythonRDD.scala:393)
at org.apache.spark.api.python.PythonRDD.runJob(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
command = pickleSer._read_with_length(infile)
File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
return self.loads(obj)
File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
return pickle.loads(obj)
File "/home/clash/sparks/spark-1.6.1-bin-hadoop12/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 25, in <module>
ImportError: No module named numpy
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
(<class 'py4j.protocol.Py4JJavaError'>, Py4JJavaError(u'An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.\n', JavaObject id=o108), <traceback object at 0x7f3054e56f80>)

Do you know why? Do I need to set the python path?
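
The ImportError above is raised on the executor side (inside pyspark.zip/pyspark/mllib/__init__.py on a worker), so checking numpy on the driver or in a terminal does not cover it. A hedged check of whether the workers' Python can import numpy, run as its own %pyspark paragraph (the check_numpy helper is only illustrative):

%pyspark
# Sketch: runs a one-element job so that the import attempt happens on an executor.
def check_numpy(_):
    try:
        import numpy
        return "numpy " + numpy.__version__
    except ImportError:
        import sys
        return "no numpy for " + sys.executable
print sc.parallelize([0], 1).map(check_numpy).collect()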


Re: Error about PySpark

Posted by Hyung Sung Shim <hs...@nflabs.com>.
Hello.
You don't need to remove /tmp/zeppelin_pyspark-4018989172273347075.py, because it is generated automatically when you run a pyspark paragraph. And I don't think you need to set PYTHONPATH if Python is installed on your system.

I recommend using SPARK_HOME instead, like the following:
export SPARK_HOME=/home/clash/sparks/spark-1.6.1-bin-hadoop12

Now restart Zeppelin and run your Python command again.

Also, could you give an absolute path for logFile, like the following?
logFile = "/Users/user/hiv.data"

