Posted to user@spark.apache.org by Muhammad Haseeb Javed <11...@seecs.edu.pk> on 2015/10/12 14:12:22 UTC

What is the abstraction for a Worker process in Spark code

I understand that each executor processing a Spark job is represented
in Spark code by the Executor class in Executor.scala, and that
CoarseGrainedExecutorBackend is the abstraction which facilitates
communication between an Executor and the Driver. But what is the
abstraction for a Worker process in Spark code, which would hold a
reference to all the Executors running in it?

Re: What is the abstraction for a Worker process in Spark code

Posted by Shixiong Zhu <zs...@gmail.com>.
Which mode are you using? For standalone, it's
org.apache.spark.deploy.worker.Worker. For YARN and Mesos, Spark just
submits its requests to them, and they schedule the processes for Spark.
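To make the relationship concrete, here is a minimal illustrative sketch (not the actual Spark source) of how the standalone Worker keeps track of the executors it has launched. The class and field names mirror Spark's org.apache.spark.deploy.worker.Worker, but the simplified ExecutorRunner and the launchExecutor signature are assumptions for illustration:

```scala
import scala.collection.mutable

// Hypothetical stand-in for Spark's ExecutorRunner, which wraps the
// OS process of a single executor launched by this worker.
case class ExecutorRunner(appId: String, execId: Int, cores: Int)

class Worker {
  // The worker tracks every executor it launched, keyed by "appId/execId",
  // so it can report status to the Master and kill executors on demand.
  val executors = mutable.HashMap[String, ExecutorRunner]()

  def launchExecutor(appId: String, execId: Int, cores: Int): Unit =
    executors(s"$appId/$execId") = ExecutorRunner(appId, execId, cores)
}

object Demo extends App {
  val w = new Worker
  w.launchExecutor("app-20151012", 0, 4)
  w.launchExecutor("app-20151012", 1, 4)
  println(w.executors.keys.toList.sorted)
}
```

In YARN and Mesos modes there is no such Spark-side Worker object; the resource manager owns the container/process bookkeeping instead.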

Best Regards,
Shixiong Zhu

2015-10-12 20:12 GMT+08:00 Muhammad Haseeb Javed <11...@seecs.edu.pk>:

> I understand that each executor processing a Spark job is represented
> in Spark code by the Executor class in Executor.scala, and that
> CoarseGrainedExecutorBackend is the abstraction which facilitates
> communication between an Executor and the Driver. But what is the
> abstraction for a Worker process in Spark code, which would hold a
> reference to all the Executors running in it?
>