Posted to issues@spark.apache.org by "Peter Taylor (JIRA)" <ji...@apache.org> on 2015/06/15 13:35:00 UTC

[jira] [Commented] (SPARK-2898) Failed to connect to daemon

    [ https://issues.apache.org/jira/browse/SPARK-2898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14585809#comment-14585809 ] 

Peter Taylor commented on SPARK-2898:
-------------------------------------

FYI 

java.io.IOException: Cannot run program "python": error=316, Unknown error: 316

I have seen this error occur on Mac because lib/jspawnhelper is missing execute permissions in your JRE.
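
A quick way to check for (and restore) the missing execute bit, as a hypothetical helper script: it assumes JAVA_HOME points at the affected JRE and probes two common locations for jspawnhelper, since the path varies across JDK layouts.

    import os
    import stat

    java_home = os.environ.get("JAVA_HOME", "")
    # Newer JDKs ship lib/jspawnhelper; older bundled JREs use jre/lib/.
    for rel in ("lib/jspawnhelper", "jre/lib/jspawnhelper"):
        helper = os.path.join(java_home, rel)
        if os.path.isfile(helper):
            mode = os.stat(helper).st_mode
            if mode & stat.S_IXUSR:
                print("%s: execute bit already set" % helper)
            else:
                # Equivalent to `chmod +x`; needs write access to the JRE.
                os.chmod(helper, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
                print("%s: restored execute bit" % helper)
            break
    else:
        print("jspawnhelper not found under %s" % (java_home or "$JAVA_HOME"))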

> Failed to connect to daemon
> ---------------------------
>
>                 Key: SPARK-2898
>                 URL: https://issues.apache.org/jira/browse/SPARK-2898
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 1.1.0
>            Reporter: Davies Liu
>            Assignee: Davies Liu
>             Fix For: 1.1.0
>
>
> There is a deadlock in handle_sigchld() because of logging (the mechanism is sketched below, after the quoted logs)
> --------------------------------------------------------------------
> Java options: -Dspark.storage.memoryFraction=0.66 -Dspark.serializer=org.apache.spark.serializer.JavaSerializer -Dspark.executor.memory=3g -Dspark.locality.wait=60000000
> Options: SchedulerThroughputTest --num-tasks=10000 --num-trials=4 --inter-trial-wait=1
> --------------------------------------------------------------------
> 14/08/06 22:09:41 WARN JettyUtils: Failed to create UI on port 4040. Trying again on port 4041. - Failure(java.net.BindException: Address already in use)
> worker 50114 crashed abruptly with exit status 1
> 14/08/06 22:10:37 ERROR Executor: Exception in task 1476.0 in stage 1.0 (TID 11476)
> org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
> 	at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:150)
> 	at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
> 	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.EOFException
> 	at java.io.DataInputStream.readInt(DataInputStream.java:392)
> 	at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:101)
> 	... 10 more
> 14/08/06 22:10:37 WARN PythonWorkerFactory: Failed to open socket to Python daemon:
> java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at java.net.Socket.<init>(Socket.java:425)
> 	at java.net.Socket.<init>(Socket.java:241)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:68)
> 	at org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:83)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:82)
> 	at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
> 	at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
> 	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> 14/08/06 22:10:37 ERROR Executor: Exception in task 1478.0 in stage 1.0 (TID 11478)
> java.io.EOFException
> 	at java.io.DataInputStream.readInt(DataInputStream.java:392)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:69)
> 	at org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:83)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:82)
> 	at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
> 	at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
> 	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> 14/08/06 22:10:37 WARN PythonWorkerFactory: Assuming that daemon unexpectedly quit, attempting to restart
> 14/08/06 22:10:37 WARN TaskSetManager: Lost task 1476.0 in stage 1.0 (TID 11476, localhost): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
>         org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:150)
>         org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
>         org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>         org.apache.spark.scheduler.Task.run(Task.scala:54)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:745)
> 14/08/06 22:10:37 ERROR TaskSetManager: Task 1476 in stage 1.0 failed 1 times; aborting job
> 14/08/06 22:10:37 WARN TaskSetManager: Lost task 1478.0 in stage 1.0 (TID 11478, localhost): java.io.EOFException: 
>         java.io.DataInputStream.readInt(DataInputStream.java:392)
>         org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:69)
>         org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:83)
>         org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:82)
>         org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
>         org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
>         org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>         org.apache.spark.scheduler.Task.run(Task.scala:54)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:745)
> Another one:
> Daemon failed to fork PySpark worker: [Errno 35] Resource temporarily unavailable
> 14/08/07 12:04:37 ERROR Executor: Exception in task 15579.0 in stage 0.0 (TID 15579)
> java.lang.IllegalStateException: Python daemon failed to launch worker
> 	at org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:71)
> 	at org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:83)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:82)
> 	at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
> 	at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
> 	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> 14/08/07 12:04:37 WARN TaskSetManager: Lost task 15579.0 in stage 0.0 (TID 15579, localhost): java.lang.IllegalStateException: Python daemon failed to launch worker
>         org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:71)
>         org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:83)
>         org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:82)
>         org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
>         org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
>         org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>         org.apache.spark.scheduler.Task.run(Task.scala:54)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:745)
> 14/08/07 12:04:37 ERROR TaskSetManager: Task 15579 in stage 0.0 failed 1 times; aborting job
> worker 17037 crashed abruptly with exit status 1
> 14/08/07 12:06:34 ERROR Executor: Exception in task 19607.0 in stage 0.0 (TID 19607)
> java.io.EOFException
> 	at java.io.DataInputStream.readInt(DataInputStream.java:392)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:69)
> 	at org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:83)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:82)
> 	at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
> 	at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
> 	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> 14/08/07 12:06:34 WARN PythonWorkerFactory: Failed to open socket to Python daemon:
> java.net.ConnectException: Connection refused
> 	at java.net.PlainSocketImpl.socketConnect(Native Method)
> 	at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> 	at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> 	at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> 	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> 	at java.net.Socket.connect(Socket.java:579)
> 	at java.net.Socket.connect(Socket.java:528)
> 	at java.net.Socket.<init>(Socket.java:425)
> 	at java.net.Socket.<init>(Socket.java:241)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:68)
> 	at org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:83)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:82)
> 	at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
> 	at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
> 	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> 14/08/07 12:06:34 ERROR Executor: Exception in task 19604.0 in stage 0.0 (TID 19604)
> org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
> 	at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:150)
> 	at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
> 	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.EOFException
> 	at java.io.DataInputStream.readInt(DataInputStream.java:392)
> 	at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:101)
> 	... 10 more
> 14/08/07 12:06:34 WARN PythonWorkerFactory: Assuming that daemon unexpectedly quit, attempting to restart
> 14/08/07 12:06:34 WARN TaskSetManager: Lost task 19604.0 in stage 0.0 (TID 19604, localhost): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
>         org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:150)
>         org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:154)
>         org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:87)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>         org.apache.spark.scheduler.Task.run(Task.scala:54)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:745)
> 14/08/07 12:06:34 ERROR TaskSetManager: Task 19604 in stage 0.0 failed 1 times; aborting job
> 14/08/07 12:06:34 WARN TaskSetManager: Lost task 19607.0 in stage 0.0 (TID 19607, localhost): java.io.EOFException:
>         java.io.DataInputStream.readInt(DataInputStream.java:392)
>         org.apache.spark.api.python.PythonWorkerFactory.createSocket$1(PythonWorkerFactory.scala:69)
>         org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:83)
>         org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:82)
>         org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
>         org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
>         org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
>         org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>         org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
>         org.apache.spark.scheduler.Task.run(Task.scala:54)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:745)
> 14/08/07 13:29:01 WARN PythonWorkerFactory: Assuming that daemon unexpectedly quit, attempting to restart
> 14/08/07 13:29:01 ERROR Executor: Exception in task 9085.0 in stage 0.0 (TID 9085)
> java.io.IOException: Cannot run program "python": error=2, No such file or directory
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
> 	at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:149)
> 	at org.apache.spark.api.python.PythonWorkerFactory.liftedTree1$1(PythonWorkerFactory.scala:89)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:82)
> 	at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
> 	at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
> 	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: error=2, No such file or directory
> 	at java.lang.UNIXProcess.forkAndExec(Native Method)
> 	at java.lang.UNIXProcess.<init>(UNIXProcess.java:184)
> 	at java.lang.ProcessImpl.start(ProcessImpl.java:130)
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
> 	... 14 more
> 14/08/07 13:29:01 ERROR Executor: Exception in task 9084.0 in stage 0.0 (TID 9084)
> java.io.IOException: Cannot run program "python": error=316, Unknown error: 316
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
> 	at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:149)
> 	at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:79)
> 	at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:55)
> 	at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:101)
> 	at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:66)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:54)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:199)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> 	at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: error=316, Unknown error: 316
> 	at java.lang.UNIXProcess.forkAndExec(Native Method)
> 	at java.lang.UNIXProcess.<init>(UNIXProcess.java:184)
> 	at java.lang.ProcessImpl.start(ProcessImpl.java:130)
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
> 	... 13 more
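
The one-line summary in the quoted description ("a deadlock in handle_sigchld() because of logging") is worth unpacking: CPython runs signal handlers on the main thread at the next bytecode boundary, so a handler that tries to take a non-reentrant lock hangs forever if the signal lands while the interrupted code already holds that lock. Below is a minimal illustrative sketch of that failure mode, not the actual daemon.py code: it substitutes a plain threading.Lock for whatever lock the logging path held, and adds a timeout so the sketch reports the deadlock instead of hanging.

    import os
    import signal
    import threading
    import time

    lock = threading.Lock()  # non-reentrant, standing in for logging's internal lock

    def handle_sigchld(signum, frame):
        # In the real bug this acquire blocks forever; the timeout is added
        # here only so the sketch terminates and reports what happened.
        if not lock.acquire(timeout=1):
            os.write(2, b"deadlock: handler wants a lock the interrupted code holds\n")
            return
        try:
            os.write(2, b"child reaped\n")
        finally:
            lock.release()

    signal.signal(signal.SIGCHLD, handle_sigchld)

    with lock:               # main thread holds the lock...
        pid = os.fork()      # POSIX only, like the PySpark daemon itself
        if pid == 0:
            os._exit(0)      # child exits at once, raising SIGCHLD
        time.sleep(1)        # handler runs here, while the lock is still held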



