Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/12/16 09:38:46 UTC

[jira] [Assigned] (SPARK-12361) Should set PYSPARK_DRIVER_PYTHON before python test

     [ https://issues.apache.org/jira/browse/SPARK-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-12361:
------------------------------------

    Assignee:     (was: Apache Spark)

> Should set PYSPARK_DRIVER_PYTHON before python test
> ---------------------------------------------------
>
>                 Key: SPARK-12361
>                 URL: https://issues.apache.org/jira/browse/SPARK-12361
>             Project: Spark
>          Issue Type: Improvement
>          Components: PySpark, Tests
>    Affects Versions: 1.6.0
>            Reporter: Jeff Zhang
>            Priority: Minor
>
> If PYSPARK_DRIVER_PYTHON is not set before the Python tests run, a Python version mismatch exception may occur (in my case, PYSPARK_DRIVER_PYTHON is set in my .profile). The weird part is that this exception does not make the unit test fail: the return code is still 0, which hides the failure. Yet if I invoke the test command directly, the return code is non-zero. This is very strange. A sketch of the proposed safeguard follows the stack trace below.
> * invoke unit test command directly
> {code}
> export SPARK_TESTING=1
> export PYSPARK_PYTHON=python2.6
> bin/pyspark pyspark.ml.clustering
> {code}
> * return code from python unit test
> {code}
> # run-tests.py launches each test module via bin/pyspark and waits for its
> # exit code; env is built from os.environ without pinning
> # PYSPARK_DRIVER_PYTHON, so a value from the user's .profile leaks through.
> retcode = subprocess.Popen(
>             [os.path.join(SPARK_HOME, "bin/pyspark"), test_name],
>             stderr=per_test_output, stdout=per_test_output, env=env).wait()
> {code}
> * exception of python version mismatch
> {code}
>  File "/Users/jzhang/github/spark/python/lib/pyspark.zip/pyspark/worker.py", line 64, in main
>     ("%d.%d" % sys.version_info[:2], version))
> Exception: Python in worker has different version 2.6 than that in driver 2.7, PySpark cannot run with different minor versions
>     at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
>     at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
>     at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
>     at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>     at org.apache.spark.scheduler.Task.run(Task.scala:88)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
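>
> A minimal sketch of the proposed safeguard, assuming the runner builds the child environment from os.environ as in the snippet above (the python_exec name and the test module here are illustrative, not the actual patch):
> {code}
> import os
> import subprocess
> import sys
>
> python_exec = "python2.6"  # illustrative: the interpreter the tests target
>
> env = dict(os.environ)
> env.update({
>     "SPARK_TESTING": "1",
>     "PYSPARK_PYTHON": python_exec,
>     # Pin the driver to the same interpreter so a PYSPARK_DRIVER_PYTHON
>     # inherited from the user's .profile cannot cause a version mismatch.
>     "PYSPARK_DRIVER_PYTHON": python_exec,
> })
>
> retcode = subprocess.Popen(
>     [os.path.join(os.environ["SPARK_HOME"], "bin/pyspark"),
>      "pyspark.ml.clustering"],
>     env=env).wait()
> sys.exit(retcode)  # surface a non-zero exit code instead of swallowing it
> {code}
> Pinning both variables to the same interpreter keeps whatever is exported in the user's shell profile from leaking into the test run.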



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org