Posted to hdfs-user@hadoop.apache.org by Terry Healy <th...@bnl.gov> on 2012/08/16 21:11:45 UTC

Stopping a single Datanode

Sorry - this seems pretty basic, but I could not find a reference online
or in my books. Is there a graceful way to stop a single datanode (for
example, to move the system to a new rack where it will be put back
online), or do you just whack the process ID and let HDFS clean up the
mess?

Thanks


Re: Stopping a single Datanode

Posted by Nitin Pawar <ni...@gmail.com>.
You can just kill the process ID.

Alternatively, there is a script in the bin directory, hadoop-daemon.sh;
run it with "stop datanode" and it should stop the DataNode cleanly.
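
For example, something like this should work (a sketch only; the install
path here is an assumption, so adjust it to wherever Hadoop lives on the
node):

$ cd /usr/local/hadoop-1.0.3        # hypothetical install path
$ bin/hadoop-daemon.sh stop datanode

Or, taking the blunt route, find the DataNode JVM with jps (it ships
with the JDK) and kill it:

$ jps | grep DataNode               # prints "<pid> DataNode"
$ kill <pid>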

On Fri, Aug 17, 2012 at 12:41 AM, Terry Healy <th...@bnl.gov> wrote:
> Sorry - this seems pretty basic, but I could not find a reference online
> or in my books. Is there a graceful way to stop a single datanode (for
> example, to move the system to a new rack where it will be put back
> online), or do you just whack the process ID and let HDFS clean up the
> mess?
>
> Thanks
>



-- 
Nitin Pawar

Re: Stopping a single Datanode

Posted by Mohammad Tariq <do...@gmail.com>.
Hello Terry,

    You can ssh to the node where you want to stop the DN and run the
command there. Something like this:

cluster@ubuntu:~/hadoop-1.0.3$ bin/hadoop-daemon.sh --config /home/cluster/hadoop-1.0.3/conf stop datanode
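
If you would rather fire it off from another machine, the same command
can be pushed over ssh in one shot (the hostname below is just a
placeholder):

$ ssh cluster@datanode-host '/home/cluster/hadoop-1.0.3/bin/hadoop-daemon.sh --config /home/cluster/hadoop-1.0.3/conf stop datanode'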

Regards,
    Mohammad Tariq



On Fri, Aug 17, 2012 at 2:26 AM, Terry Healy <th...@bnl.gov> wrote:

> Thanks guys. I will need the decommission in a few weeks, but for now
> it's just a simple system move. I found out the hard way not to keep
> masters and slaves files in the conf directory of a slave: when I tried
> bin/stop-all.sh, it stopped processes everywhere.
>
> That gave me the idea to list its own name as the only entry in slaves,
> which might then work as expected... but if I can just kill the
> process, that is even easier.
>
>
> On 08/16/2012 03:49 PM, Harsh J wrote:
> > Perhaps what you're looking for is the decommission feature of HDFS,
> > which lets you safely remove a DN without incurring replica loss? It
> > is detailed in Hadoop: The Definitive Guide (2nd Edition), page 315,
> > Chapter 10: Administering Hadoop, in the Maintenance section under
> > "Decommissioning old nodes", or at
> > http://developer.yahoo.com/hadoop/tutorial/module2.html#decommission
> >
> > On Fri, Aug 17, 2012 at 12:41 AM, Terry Healy <th...@bnl.gov> wrote:
> >> Sorry - this seems pretty basic, but I could not find a reference online
> >> or in my books. Is there a graceful way to stop a single datanode (for
> >> example, to move the system to a new rack where it will be put back
> >> online), or do you just whack the process ID and let HDFS clean up the
> >> mess?
> >>
> >> Thanks
> >>
> >
> >
> >
>

Re: Stopping a single Datanode

Posted by Terry Healy <th...@bnl.gov>.
Thanks guys. I will need the decommission in a few weeks, but for now
it's just a simple system move. I found out the hard way not to keep
masters and slaves files in the conf directory of a slave: when I tried
bin/stop-all.sh, it stopped processes everywhere.

That gave me the idea to list its own name as the only entry in slaves,
which might then work as expected... but if I can just kill the
process, that is even easier.
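
If the slaves-file trick is ever needed, the idea would be for that
node's conf/slaves to contain only its own hostname, so the stop scripts
run there would not reach out to any other machine (a sketch; the
hostname is hypothetical):

$ cat conf/slaves
datanode3.example.com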


On 08/16/2012 03:49 PM, Harsh J wrote:
> Perhaps what you're looking for is the decommission feature of HDFS,
> which lets you safely remove a DN without incurring replica loss? It
> is detailed in Hadoop: The Definitive Guide (2nd Edition), page 315,
> Chapter 10: Administering Hadoop, in the Maintenance section under
> "Decommissioning old nodes", or at
> http://developer.yahoo.com/hadoop/tutorial/module2.html#decommission
> 
> On Fri, Aug 17, 2012 at 12:41 AM, Terry Healy <th...@bnl.gov> wrote:
>> Sorry - this seems pretty basic, but I could not find a reference online
>> or in my books. Is there a graceful way to stop a single datanode (for
>> example, to move the system to a new rack where it will be put back
>> online), or do you just whack the process ID and let HDFS clean up the
>> mess?
>>
>> Thanks
>>
> 
> 
> 

Re: Stopping a single Datanode

Posted by Harsh J <ha...@cloudera.com>.
Perhaps what you're looking for is the decommission feature of HDFS,
which lets you safely remove a DN without incurring replica loss? It
is detailed in Hadoop: The Definitive Guide (2nd Edition), page 315,
Chapter 10: Administering Hadoop, in the Maintenance section under
"Decommissioning old nodes", or at
http://developer.yahoo.com/hadoop/tutorial/module2.html#decommission
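
Roughly, the flow is: point the NameNode at an exclude file, list the
node there, and refresh. A minimal sketch (the file path and hostname
are assumptions for illustration):

In hdfs-site.xml on the NameNode:

  <property>
    <name>dfs.hosts.exclude</name>
    <!-- path below is hypothetical -->
    <value>/home/hadoop/conf/excludes</value>
  </property>

Then add the DataNode and tell the NameNode to re-read the file:

$ echo datanode3.example.com >> /home/hadoop/conf/excludes
$ bin/hadoop dfsadmin -refreshNodes

The NameNode re-replicates the node's blocks elsewhere; once the web UI
reports the node as "Decommissioned", it is safe to stop.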

On Fri, Aug 17, 2012 at 12:41 AM, Terry Healy <th...@bnl.gov> wrote:
> Sorry - this seems pretty basic, but I could not find a reference online
> or in my books. Is there a graceful way to stop a single datanode (for
> example, to move the system to a new rack where it will be put back
> online), or do you just whack the process ID and let HDFS clean up the
> mess?
>
> Thanks
>



-- 
Harsh J
