Posted to user@spark.apache.org by subacini Arunkumar <su...@gmail.com> on 2014/04/06 20:44:36 UTC

Spark worker on a different machine doesn't work

Hi All,

I am using spark-0.9.0 and am able to run my program successfully if the Spark
master and worker are on the same machine.

If I run the same program with the Spark master on Machine A and the worker on
Machine B, I get the exception below.

I am running the program with java -cp "..." instead of the scala command
(a rough sketch of the driver setup is below).
In the master UI, I see the worker node listed properly.
From the worker node, ping "MasterNode", netstat -at | grep 7077, and
host "masterNode" all look fine.

Can someone help me here? Thanks in advance.

*Master Logs:*

INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@masterNode.XXXXX.com:45478/user/Executor#730301511] with ID 0
INFO org.apache.spark.deploy.client.AppClient$ClientActor: Executor updated: app-20140406142235-0001/3 is now FAILED (Command exited with code 1)
INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend: Executor app-20140406142235-0001/3 removed: Command exited with code 1
INFO org.apache.spark.deploy.client.AppClient$ClientActor: Executor added: app-20140406142235-0001/4 on worker-20140406005142-workerNode.XXXXX.com-42556 (workerNode.XXXXX.com:42556) with 4 cores
INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend: Granted executor ID app-20140406142235-0001/4 on hostPort workerNode.XXXXX.com:42556 with 4 cores, 4.0 GB RAM
INFO org.apache.spark.deploy.client.AppClient$ClientActor: Executor updated: app-20140406142235-0001/4 is now RUNNING
INFO org.apache.spark.storage.BlockManagerMasterActor$BlockManagerInfo: Registering block manager masterNode.XXXXX.com:60418 with 2.3 GB RAM
INFO org.apache.spark.deploy.client.AppClient$ClientActor: Executor updated: app-20140406142235-0001/4 is now FAILED (Command exited with code 1)
INFO org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend: Executor app-20140406142235-0001/4 removed: Command exited with code 1
INFO org.apache.spark.deploy.client.AppClient$ClientActor: Executor added: app-20140406142235-0001/5 on worker-20140406005142-workerNode.XXXXX.com-42556 (workerNode.XXXXX.com:42556) with 4 cores
14/04

*Worker Logs:*

Exception in thread "main" java.lang.NoSuchMethodException: akka.remote.RemoteActorRefProvider.<init>(java.lang.String, akka.actor.ActorSystem$Settings, akka.event.EventStream, akka.actor.Scheduler, akka.actor.DynamicAccess)
	at java.lang.Class.getConstructor0(Class.java:2706)
	at java.lang.Class.getDeclaredConstructor(Class.java:1985)
	at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$2.apply(DynamicAccess.scala:77)
	at scala.util.Try$.apply(Try.scala:161)
	at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:74)
	at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:85)
	at akka.actor.ReflectiveDynamicAccess$$anonfun$createInstanceFor$3.apply(DynamicAccess.scala:85)
	at scala.util.Success.flatMap(Try.scala:200)
	at akka.actor.ReflectiveDynamicAccess.createInstanceFor(DynamicAccess.scala:85)
	at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:546)
	at akka.actor.IndestructibleActorSystemImpl.<init>(IndestructibleActorSystem.scala:38)
	at akka.actor.IndestructibleActorSystem$.apply(IndestructibleActorSystem.scala:35)
	at akka.actor.IndestructibleActorSystem$.apply(IndestructibleActorSystem.scala:32)
	at org.apache.spark.util.AkkaUtils$.createActorSystem(AkkaUtils.scala:94)
	at org.apache.spark.executor.CoarseGrainedExecutorBackend$.run(CoarseGrainedExecutorBackend.scala:102)
	at org.apache.spark.executor.CoarseGrainedExecutorBackend$.main(CoarseGrainedExecutorBackend.scala:126)
	at org.apache.spark.executor.CoarseGrainedExecutorBackend.main(CoarseGrainedExecutorBackend.scala)