Posted to dev@spark.apache.org by lin <ku...@gmail.com> on 2015/02/12 08:24:31 UTC

driver fail-over in Spark streaming 1.2.0

Hi, all

In Spark Streaming 1.2.0, when the driver fails and a new driver starts
from the most recent checkpointed data, will the former Executors
connect to the new driver, or will the new driver start its own set
of new Executors? In which part of the code is that done?

Any reply will be appreciated :)

regards,

lin

Re: driver fail-over in Spark streaming 1.2.0

Posted by Patrick Wendell <pw...@gmail.com>.
It will create and connect to new executors. The executors are mostly
stateless, so the program can resume with new executors.
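For context, the user-facing side of this recovery is the standard checkpoint pattern built around `StreamingContext.getOrCreate`. A minimal sketch follows; the checkpoint path, app name, and batch interval are placeholders, and the DStream definitions are elided:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object CheckpointRecovery {
  // Assumed checkpoint location; any fault-tolerant store (HDFS, S3) works.
  val checkpointDir = "hdfs:///checkpoints/recoverable-app"

  // Invoked only when no checkpoint exists yet. On recovery, the DStream
  // graph, batch interval, and pending batches are rebuilt from the
  // checkpoint instead of calling this function.
  def createContext(): StreamingContext = {
    val conf = new SparkConf().setAppName("recoverable-app")
    val ssc = new StreamingContext(conf, Seconds(10))
    ssc.checkpoint(checkpointDir)
    // ... define input DStreams and transformations here ...
    ssc
  }

  def main(args: Array[String]): Unit = {
    // When the new driver starts, getOrCreate restores the checkpointed
    // context; its fresh SparkContext then requests new executors from the
    // cluster manager rather than reattaching to the old ones.
    val ssc = StreamingContext.getOrCreate(checkpointDir, createContext _)
    ssc.start()
    ssc.awaitTermination()
  }
}
```

Note that recovery of application state is driven entirely from the checkpoint; the old executors are simply abandoned, which is why the pattern above never references them.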

On Wed, Feb 11, 2015 at 11:24 PM, lin <ku...@gmail.com> wrote:
> Hi, all
>
> In Spark Streaming 1.2.0, when the driver fails and a new driver starts
> from the most recent checkpointed data, will the former Executors
> connect to the new driver, or will the new driver start its own set
> of new Executors? In which part of the code is that done?
>
> Any reply will be appreciated :)
>
> regards,
>
> lin

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@spark.apache.org
For additional commands, e-mail: dev-help@spark.apache.org