Posted to common-user@hadoop.apache.org by Camilo Gonzalez <ca...@gmail.com> on 2008/09/02 17:33:56 UTC

Slaves "Hot-Swaping"

Hi!

I was wondering if there is a way to "Hot-Swap" Slave machines. For example,
in case a Slave machine fails while the Cluster is running and I want to
mount a new Slave machine to replace the old one, is there a way to tell the
Master that a new Slave machine is online without having to stop and start
the Cluster again? I would appreciate knowing the name of this; I don't think
it is called "Hot-Swapping", and I don't even know if this exists. lol

BTW, when I try to access http://wiki.apache.org/hadoop/NameNodeFailover, the
site tells me that the page doesn't exist. Is it a broken link?

Any information is appreciated.

Thanks in advance

-- 
Camilo A. Gonzalez

Re: Slaves "Hot-Swapping"

Posted by Camilo Gonzalez <ca...@gmail.com>.
Great! I will give it a try.

Thanks for your email.

On Tue, Sep 2, 2008 at 10:58 AM, Mikhail Yakshin
<gr...@gmail.com> wrote:

> On Tue, Sep 2, 2008 at 7:33 PM, Camilo Gonzalez wrote:
> > I was wondering if there is a way to "Hot-Swap" Slave machines. For
> > example, in case a Slave machine fails while the Cluster is running and
> > I want to mount a new Slave machine to replace the old one, is there a
> > way to tell the Master that a new Slave machine is online without having
> > to stop and start the Cluster again?
>
> You don't have to restart the entire cluster; you just have to run the
> datanode (DFS support) and/or tasktracker processes on the fresh node. You
> can do it using hadoop-daemon.sh; the commands are "start datanode" and
> "start tasktracker" respectively. There's no need for "hot swapping"
> or "replacing" old slave machines with new ones pretending to be the old
> ones. You just plug the new one in with a new IP/hostname and it will
> eventually start doing tasks like all the other nodes.
>
> You don't really need a "hot standby" or any other high-availability
> scheme. You just plug in all the slaves you have available and the cluster
> will balance everything out.
>
> --
> WBR, Mikhail Yakshin
>



-- 
Camilo A. Gonzalez
Ing. de Sistemas
Tel: 300 657 96 96

Re: Slaves "Hot-Swapping"

Posted by Mikhail Yakshin <gr...@gmail.com>.
On Tue, Sep 2, 2008 at 7:33 PM, Camilo Gonzalez wrote:
> I was wondering if there is a way to "Hot-Swap" Slave machines. For example,
> in case a Slave machine fails while the Cluster is running and I want to
> mount a new Slave machine to replace the old one, is there a way to tell the
> Master that a new Slave machine is online without having to stop and start
> the Cluster again?

You don't have to restart the entire cluster; you just have to run the
datanode (DFS support) and/or tasktracker processes on the fresh node. You
can do it using hadoop-daemon.sh; the commands are "start datanode" and
"start tasktracker" respectively. There's no need for "hot swapping"
or "replacing" old slave machines with new ones pretending to be the old
ones. You just plug the new one in with a new IP/hostname and it will
eventually start doing tasks like all the other nodes.
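
As a rough sketch (paths and config values here are just examples), assuming
the new node already has Hadoop installed and its configuration pointing at
the existing master (fs.default.name and mapred.job.tracker), you would run
on the new slave:

  # start the HDFS datanode daemon on the new slave
  bin/hadoop-daemon.sh start datanode

  # start the MapReduce tasktracker daemon on the new slave
  bin/hadoop-daemon.sh start tasktracker

You can also add the new hostname to the master's conf/slaves file so that
the start-all.sh/stop-all.sh scripts will manage it the next time they run.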

You don't really need a "hot standby" or any other high-availability
scheme. You just plug in all the slaves you have available and the cluster
will balance everything out.

-- 
WBR, Mikhail Yakshin

Re: Slaves "Hot-Swapping"

Posted by Allen Wittenauer <aw...@yahoo-inc.com>.


On 9/2/08 8:33 AM, "Camilo Gonzalez" <ca...@gmail.com> wrote:

> I was wondering if there is a way to "Hot-Swap" Slave machines. For example,
> in case a Slave machine fails while the Cluster is running and I want to
> mount a new Slave machine to replace the old one, is there a way to tell the
> Master that a new Slave machine is online without having to stop and start
> the Cluster again? I would appreciate knowing the name of this; I don't think
> it is called "Hot-Swapping", and I don't even know if this exists. lol


    :)

    Using hadoop dfsadmin -refreshNodes, you can have the name node reload
the include and exclude files.
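
    As a rough sketch (the file paths here are just examples), the include
and exclude lists are plain text files of hostnames that the name node reads
via its configuration, e.g. in hadoop-site.xml:

  <!-- hosts allowed to connect as datanodes (include list) -->
  <property>
    <name>dfs.hosts</name>
    <value>/path/to/conf/include-hosts</value>
  </property>

  <!-- hosts to be excluded / decommissioned -->
  <property>
    <name>dfs.hosts.exclude</name>
    <value>/path/to/conf/exclude-hosts</value>
  </property>

    After editing those files (for example, adding the replacement slave to
the include list), run on the name node:

  bin/hadoop dfsadmin -refreshNodes

and it will re-read them without a cluster restart.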