Posted to common-user@hadoop.apache.org by Chris Grier <gr...@imchris.org> on 2012/06/08 20:46:47 UTC

decommissioning datanodes

Hello,

I'm trying to figure out how to decommission datanodes. Here's what
I do:

In hdfs-site.xml I have:

<property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
</property>

Add to exclude file:

host1
host2

Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but there's
nothing in the 'Decommissioning Nodes' list). If I look at the logs of the
datanodes running on host1 or host2, I still see blocks being copied in, and
it does not appear that any additional replication is happening.

What am I missing during the decommission process?

-Chris

Re: decommissioning datanodes

Posted by Chris Grier <gr...@ICSI.berkeley.edu>.
Thanks, this seems to work now.

Note that the parameter is 'dfs.hosts', not 'dfs.hosts.include'.
(Also, the usual caveats apply, e.g. hostnames are case sensitive.)
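
For reference, a minimal sketch of what the working hdfs-site.xml entries end
up looking like (paths as used earlier in this thread; 'dfs.hosts' points at
the include file listing every datanode allowed to connect, and
'dfs.hosts.exclude' at the file listing the nodes to decommission):

<property>
    <name>dfs.hosts</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/include</value>
</property>

<property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
</property>

After editing either file, 'hadoop dfsadmin -refreshNodes' needs to be run
again so the namenode picks up the change.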

-Chris

On Fri, Jun 8, 2012 at 12:19 PM, Serge Blazhiyevskyy <
Serge.Blazhiyevskyy@nice.com> wrote:

> Your config should be something like this:
>
> ><property>
> >    <name>dfs.hosts.exclude</name>
> >    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
> ></property>
>
> ><property>
> >    <name>dfs.hosts.include</name>
> >    <value>/opt/hadoop/hadoop-1.0.0/conf/include</value>
> ></property>
>
>
>
> >
> >Add to exclude file:
> >
> >host1
> >host2
> >
>
>
>
> Add to include file
> >host1
> >host2
> Plus the rest of the nodes
>
>
>
>
> On 6/8/12 12:15 PM, "Chris Grier" <gr...@imchris.org> wrote:
>
> >Do you mean the file specified by the 'dfs.hosts' parameter? That is not
> >currently set in my configuration (the hosts are only specified in the
> >slaves file).
> >
> >-Chris
> >
> >On Fri, Jun 8, 2012 at 11:56 AM, Serge Blazhiyevskyy <
> >Serge.Blazhiyevskyy@nice.com> wrote:
> >
> >> Your nodes need to be in both the include and the exclude file at the same time.
> >>
> >>
> >> Do you use both files?
> >>
> >> On 6/8/12 11:46 AM, "Chris Grier" <gr...@imchris.org> wrote:
> >>
> >> >Hello,
> >> >
> >> >I'm trying to figure out how to decommission datanodes. Here's what
> >> >I do:
> >> >
> >> >In hdfs-site.xml I have:
> >> >
> >> ><property>
> >> >    <name>dfs.hosts.exclude</name>
> >> >    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
> >> ></property>
> >> >
> >> >Add to exclude file:
> >> >
> >> >host1
> >> >host2
> >> >
> >> >Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
> >> >nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but
> >> >there's nothing in the 'Decommissioning Nodes' list). If I look at the
> >> >logs of the datanodes running on host1 or host2, I still see blocks being
> >> >copied in, and it does not appear that any additional replication is
> >> >happening.
> >> >
> >> >What am I missing during the decommission process?
> >> >
> >> >-Chris
> >>
> >>
>
>

Re: decommissioning datanodes

Posted by Serge Blazhiyevskyy <Se...@nice.com>.
Your config should be something like this:

<property>
    <name>dfs.hosts.exclude</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
</property>

<property>
    <name>dfs.hosts.include</name>
    <value>/opt/hadoop/hadoop-1.0.0/conf/include</value>
</property>

Add to exclude file:

host1
host2

Add to include file:

host1
host2
plus the rest of the nodes in the cluster
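
To make this concrete, here is a rough sketch of the whole sequence, assuming
the conf paths above ('host3' and the trailing dots stand in for the rest of
your slaves):

$ cat /opt/hadoop/hadoop-1.0.0/conf/include
host1
host2
host3
...

$ cat /opt/hadoop/hadoop-1.0.0/conf/exclude
host1
host2

$ hadoop dfsadmin -refreshNodes
$ hadoop dfsadmin -report

While blocks are being re-replicated, the report (and the namenode web UI)
should show host1 and host2 with a decommission status of 'Decommission in
progress'; once they show up as 'Decommissioned', the datanode processes on
those hosts can be stopped.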




On 6/8/12 12:15 PM, "Chris Grier" <gr...@imchris.org> wrote:

>Do you mean the file specified by the 'dfs.hosts' parameter? That is not
>currently set in my configuration (the hosts are only specified in the
>slaves file).
>
>-Chris
>
>On Fri, Jun 8, 2012 at 11:56 AM, Serge Blazhiyevskyy <
>Serge.Blazhiyevskyy@nice.com> wrote:
>
>> Your nodes need to be in both the include and the exclude file at the same time.
>>
>>
>> Do you use both files?
>>
>> On 6/8/12 11:46 AM, "Chris Grier" <gr...@imchris.org> wrote:
>>
>> >Hello,
>> >
>> >I'm trying to figure out how to decommission datanodes. Here's what
>> >I do:
>> >
>> >In hdfs-site.xml I have:
>> >
>> ><property>
>> >    <name>dfs.hosts.exclude</name>
>> >    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
>> ></property>
>> >
>> >Add to exclude file:
>> >
>> >host1
>> >host2
>> >
>> >Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
>> >nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but
>> >there's nothing in the 'Decommissioning Nodes' list). If I look at the
>> >logs of the datanodes running on host1 or host2, I still see blocks being
>> >copied in, and it does not appear that any additional replication is
>> >happening.
>> >
>> >What am I missing during the decommission process?
>> >
>> >-Chris
>>
>>


Re: decommissioning datanodes

Posted by Chris Grier <gr...@imchris.org>.
Do you mean the file specified by the 'dfs.hosts' parameter? That is not
currently set in my configuration (the hosts are only specified in the
slaves file).

-Chris

On Fri, Jun 8, 2012 at 11:56 AM, Serge Blazhiyevskyy <
Serge.Blazhiyevskyy@nice.com> wrote:

> Your nodes need to be in both the include and the exclude file at the same time.
>
>
> Do you use both files?
>
> On 6/8/12 11:46 AM, "Chris Grier" <gr...@imchris.org> wrote:
>
> >Hello,
> >
> >I'm trying to figure out how to decommission datanodes. Here's what
> >I do:
> >
> >In hdfs-site.xml I have:
> >
> ><property>
> >    <name>dfs.hosts.exclude</name>
> >    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
> ></property>
> >
> >Add to exclude file:
> >
> >host1
> >host2
> >
> >Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
> >nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but
> >there's nothing in the 'Decommissioning Nodes' list). If I look at the logs
> >of the datanodes running on host1 or host2, I still see blocks being copied
> >in, and it does not appear that any additional replication is happening.
> >
> >What am I missing during the decommission process?
> >
> >-Chris
>
>

Re: decommissioning datanodes

Posted by Serge Blazhiyevskyy <Se...@nice.com>.
Your nodes need to be in both the include and the exclude file at the same time.


Do you use both files?

On 6/8/12 11:46 AM, "Chris Grier" <gr...@imchris.org> wrote:

>Hello,
>
>I'm trying to figure out how to decommission datanodes. Here's what
>I do:
>
>In hdfs-site.xml I have:
>
><property>
>    <name>dfs.hosts.exclude</name>
>    <value>/opt/hadoop/hadoop-1.0.0/conf/exclude</value>
></property>
>
>Add to exclude file:
>
>host1
>host2
>
>Then I run 'hadoop dfsadmin -refreshNodes'. On the web interface the two
>nodes now appear in both the 'Live Nodes' and 'Dead Nodes' lists (but there's
>nothing in the 'Decommissioning Nodes' list). If I look at the logs of the
>datanodes running on host1 or host2, I still see blocks being copied in, and
>it does not appear that any additional replication is happening.
>
>What am I missing during the decommission process?
>
>-Chris