Posted to user@spark.apache.org by Kenichi Maehashi <we...@kenichimaehashi.com> on 2014/12/17 07:40:55 UTC

Rolling upgrade Spark cluster

Hi,

I have a Spark cluster running in standalone mode, with the Spark Master
configured in High Availability mode.
I am now going to upgrade Spark from 1.0 to 1.1, but I don't want to
interrupt the currently running jobs.
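
For context, the driver side of our HA setup looks roughly like the
following. This is only a minimal sketch: the master host names, ports,
and the toy job are placeholders, not our actual application.

    import org.apache.spark.{SparkConf, SparkContext}

    object HaSmokeTest {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("ha-smoke-test")
          // Listing both masters lets the driver register with whichever
          // one is active and fail over if that master goes down.
          .setMaster("spark://master1:7077,master2:7077")
        val sc = new SparkContext(conf)
        // Trivial job just to confirm the cluster is reachable.
        println(sc.parallelize(1 to 100).count())
        sc.stop()
      }
    }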

(1) Is there any way to perform a rolling upgrade (while a job is running)?
(2) If not, can I perform a rolling upgrade when using YARN as the
cluster manager?

Thanks,

Kenichi

-- 
Kenichi Maehashi <we...@kenichimaehashi.com>


Re: Rolling upgrade Spark cluster

Posted by Bhaskar Dutta <bh...@gmail.com>.
HDFS supports rolling upgrades (available since Hadoop 2.4); see the
Hadoop 2.6.0 documentation:
http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html

Some parts of work-preserving NodeManager (NM) and ResourceManager (RM)
restart were released in Hadoop 2.6.0:
YARN-1367 After restart NM should resync with the RM without killing
containers
YARN-1337 Recover containers upon nodemanager restart

The umbrella tickets YARN-556 and YARN-1336 are still open.
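
For reference, the work-preserving NM restart covered by YARN-1337 is
driven by a handful of yarn-site.xml settings. Below is a minimal sketch
of the property names involved, written against Hadoop's Configuration
API purely for illustration; the recovery directory and port values are
placeholders.

    import org.apache.hadoop.conf.Configuration

    object NmRestartSettingsSketch {
      def main(args: Array[String]): Unit = {
        val conf = new Configuration()
        // Keep container and application state across NodeManager restarts.
        conf.set("yarn.nodemanager.recovery.enabled", "true")
        // Local directory where the NM persists its recovery state (placeholder path).
        conf.set("yarn.nodemanager.recovery.dir", "/var/hadoop/yarn/nm-recovery")
        // Pin the NM port so a restarted NM keeps the same address (placeholder port).
        conf.set("yarn.nodemanager.address", "0.0.0.0:45454")

        Seq("yarn.nodemanager.recovery.enabled",
            "yarn.nodemanager.recovery.dir",
            "yarn.nodemanager.address").foreach(k => println(s"$k=${conf.get(k)}"))
      }
    }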

Thanks,
Bhaskar
