Posted to user@spark.apache.org by esamanas <ev...@gmail.com> on 2014/09/29 22:52:51 UTC

Using addFile with pipe on a yarn cluster

Hi,

I've been using PySpark on my YARN cluster with success.  The work I'm
doing involves using the RDD pipe command to send data through a binary
I've made.  I can do this easily in PySpark like so (assuming 'sc' is
already defined):

sc.addFile("./dumb_prog")
t = sc.parallelize(range(10))
u = t.pipe("dumb_prog")
u.take(10) # Gives expected result

However, when I do the same thing in Scala, the pipe command fails with
a 'Cannot run program "dumb_prog": error=2, No such file or directory'
error.  Here's the code in the Scala shell:

sc.addFile("./dumb_prog")
val t = sc.parallelize(0 until 10)
val u = t.pipe("dumb_prog")
u.take(10)

Why does this work in Python but not in Scala?  Is there a way I can
get it to work in Scala?  As far as I can tell, there is no way to call
'SparkFiles.get' from within pipe.
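
In case it's relevant, here's the workaround I've been sketching
(untested; it assumes that SparkFiles.get, when called inside the task
closure, resolves to the executor-local copy of the shipped file).  It
skips pipe entirely and spawns the binary once per partition:

import org.apache.spark.SparkFiles

sc.addFile("./dumb_prog")
val t = sc.parallelize(0 until 10)
// Spawn the binary manually in each task; SparkFiles.get runs on the
// executor here because the call sits inside the closure.
val u = t.mapPartitions { iter =>
  val prog = SparkFiles.get("dumb_prog")
  val proc = new java.lang.ProcessBuilder(prog).start()
  val writer = new java.io.PrintWriter(proc.getOutputStream)
  iter.foreach(writer.println)  // feed the partition to the child's stdin
  writer.close()
  // Buffers the child's entire output; fine for small results, though a
  // real pipe would overlap reads and writes on separate threads.
  scala.io.Source.fromInputStream(proc.getInputStream).getLines().toList.iterator
}
u.take(10)

If that path resolution does work on the executors, maybe the same
trick would make pipe itself usable, but I haven't been able to confirm
it.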

Thanks,

Evan

P.S. Here is the full error message from the Scala side:

scala> u.take(3)
14/09/29 13:07:47 INFO SparkContext: Starting job: take at <console>:17
14/09/29 13:07:47 INFO DAGScheduler: Got job 3 (take at <console>:17) with 1 output partitions (allowLocal=true)
14/09/29 13:07:47 INFO DAGScheduler: Final stage: Stage 3(take at <console>:17)
14/09/29 13:07:47 INFO DAGScheduler: Parents of final stage: List()
14/09/29 13:07:47 INFO DAGScheduler: Missing parents: List()
14/09/29 13:07:47 INFO DAGScheduler: Submitting Stage 3 (PipedRDD[3] at pipe at <console>:14), which has no missing parents
14/09/29 13:07:47 INFO MemoryStore: ensureFreeSpace(2136) called with curMem=7453, maxMem=278302556
14/09/29 13:07:47 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 2.1 KB, free 265.4 MB)
14/09/29 13:07:47 INFO MemoryStore: ensureFreeSpace(1389) called with curMem=9589, maxMem=278302556
14/09/29 13:07:47 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 1389.0 B, free 265.4 MB)
14/09/29 13:07:47 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on 10.10.0.20:37574 (size: 1389.0 B, free: 265.4 MB)
14/09/29 13:07:47 INFO BlockManagerMaster: Updated info of block broadcast_3_piece0
14/09/29 13:07:47 INFO DAGScheduler: Submitting 1 missing tasks from Stage 3 (PipedRDD[3] at pipe at <console>:14)
14/09/29 13:07:47 INFO YarnClientClusterScheduler: Adding task set 3.0 with 1 tasks
14/09/29 13:07:47 INFO TaskSetManager: Starting task 0.0 in stage 3.0 (TID 6, SERVERNAME, PROCESS_LOCAL, 1201 bytes)
14/09/29 13:07:47 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on SERVERNAME:57118 (size: 1389.0 B, free: 530.3 MB)
14/09/29 13:07:47 WARN TaskSetManager: Lost task 0.0 in stage 3.0 (TID 6, SERVERNAME): java.io.IOException: Cannot run program "dumb_prog": error=2, No such file or directory
        java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
        org.apache.spark.rdd.PipedRDD.compute(PipedRDD.scala:119)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
14/09/29 13:07:47 INFO TaskSetManager: Starting task 0.1 in stage 3.0 (TID 7, SERVERNAME, PROCESS_LOCAL, 1201 bytes)
14/09/29 13:07:47 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on SERVERNAME:44994 (size: 1389.0 B, free: 530.3 MB)
14/09/29 13:07:47 INFO TaskSetManager: Lost task 0.1 in stage 3.0 (TID 7) on executor SERVERNAME: java.io.IOException (Cannot run program "dumb_prog": error=2, No such file or directory) [duplicate 1]
14/09/29 13:07:47 INFO TaskSetManager: Starting task 0.2 in stage 3.0 (TID 8, SERVERNAME, PROCESS_LOCAL, 1201 bytes)
14/09/29 13:07:47 INFO TaskSetManager: Lost task 0.2 in stage 3.0 (TID 8) on executor SERVERNAME: java.io.IOException (Cannot run program "dumb_prog": error=2, No such file or directory) [duplicate 2]
14/09/29 13:07:47 INFO TaskSetManager: Starting task 0.3 in stage 3.0 (TID 9, SERVERNAME, PROCESS_LOCAL, 1201 bytes)
14/09/29 13:07:47 INFO TaskSetManager: Lost task 0.3 in stage 3.0 (TID 9) on executor SERVERNAME: java.io.IOException (Cannot run program "dumb_prog": error=2, No such file or directory) [duplicate 3]
14/09/29 13:07:47 ERROR TaskSetManager: Task 0 in stage 3.0 failed 4 times; aborting job
14/09/29 13:07:47 INFO YarnClientClusterScheduler: Removed TaskSet 3.0, whose tasks have all completed, from pool
14/09/29 13:07:47 INFO YarnClientClusterScheduler: Cancelling stage 3
14/09/29 13:07:47 INFO DAGScheduler: Failed to run take at <console>:17
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 9, SERVERNAME): java.io.IOException: Cannot run program "dumb_prog": error=2, No such file or directory
        java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
        org.apache.spark.rdd.PipedRDD.compute(PipedRDD.scala:119)
        org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
        org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:62)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:688)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:688)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1391)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)



