Posted to common-user@hadoop.apache.org by ahn <ah...@gmail.com> on 2011/02/07 12:46:56 UTC

Could not add a new data node without rebooting Hadoop system

Hello everybody

I have a problem with adding a new data node to the currently running system
without rebooting.

I've found the following solution on the web:

1. configure conf/slaves and *.xml files on master machine

2. configure conf/master and *.xml files on slave machine

3. run ${HADOOP}/bin/hadoop datanode
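In practice, step 1 amounts to listing the new host in the master's conf/slaves file. As a small sketch, the file can be updated idempotently like this (the conf path and the datanodeN hostnames are illustrative placeholders, not values from this thread):

```shell
# Append a new slave host to conf/slaves, but only if it isn't listed yet.
# HADOOP_CONF and the datanodeN hostnames are illustrative placeholders.
HADOOP_CONF=${HADOOP_CONF:-/tmp/hadoop-conf-demo}
mkdir -p "$HADOOP_CONF"
printf 'datanode1\ndatanode2\n' > "$HADOOP_CONF/slaves"

NEW_HOST=datanode3
# -q: quiet, -x: match the whole line, so partial hostname matches don't count
grep -qx "$NEW_HOST" "$HADOOP_CONF/slaves" || echo "$NEW_HOST" >> "$HADOOP_CONF/slaves"
cat "$HADOOP_CONF/slaves"
```

The grep -qx guard makes the edit safe to re-run without duplicating entries.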

But when I ran the commands on the master node, the master node was
recognized as a data node.

When I ran the commands on the data node which I wanted to add, the data
node was not properly added. (The total number of data nodes didn't
change.)

Does anybody know what I could do to solve this problem?

I'm using Hadoop version 0.20.2.

Kind regards,

Henny Ahn (Ahneuigun@gmail.com)


Re: Could not add a new data node without rebooting Hadoop system

Posted by Jun Young Kim <ju...@gmail.com>.
How about using the following command to make the NameNode re-read its node list for your new network topology?

$> hadoop dfsadmin -refreshNodes
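-refreshNodes makes the NameNode re-read its include/exclude host files without a restart. For the include file to be consulted at all, it has to be configured in hdfs-site.xml, roughly like this (the file path below is a hypothetical example, not from this thread):

```xml
<property>
  <name>dfs.hosts</name>
  <value>/path/to/conf/include-hosts</value>
</property>
```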

Junyoung Kim (juneng603@gmail.com)


On 02/07/2011 09:16 PM, Harsh J wrote:
> On Mon, Feb 7, 2011 at 5:16 PM, ahn<ah...@gmail.com>  wrote:
>> Hello everybody
>> 1. configure conf/slaves and *.xml files on master machine
>>
>> 2. configure conf/master and *.xml files on slave machine
> 'slaves' and 'masters' files are generally only required on the master
> machine, and only if you are using the start-* scripts supplied with
> Hadoop for use with SSH (the FAQ has an entry on this) from the master.
>
>> 3. run ${HADOOP}/bin/hadoop datanode
>> But when I ran the commands on the master node, the master node was
>> recognized as a data node.
> Step 3 wasn't a valid command in this case; use start-dfs.sh instead.
>
>> When I ran the commands on the data node which I want to add, the data node
>> was not properly added.(The number of total data node didn't show any
>> change)
> What do the logs say for the DataNode on the slave? Does it start
> successfully? If fs.default.name is set properly in the slave's
> core-site.xml, it should be able to communicate properly once started
> (and if the versions are not mismatched).
>

Re: Could not add a new data node without rebooting Hadoop system

Posted by 안의건 <ah...@gmail.com>.
Dear Harsh,

Your advice gave me insight, and I finally solved my problem.

I'm not sure this is the correct way, but it worked in my situation.

I hope it will be helpful to anyone else who has a similar problem.

------------------------------------------------------------

hadoop/conf
  - update slaves
  - update *.xml files

hadoop/bin> start-dfs.sh
hadoop/bin> start-mapred.sh

--------------------------------------------------------------
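To confirm the new node actually joined after restarting, the live-node count reported by `hadoop dfsadmin -report` can be checked. The sketch below parses such a report line; the sample line is illustrative only (the exact report format varies by Hadoop version), not output captured from a real cluster:

```shell
# Parse the live-node count from a `hadoop dfsadmin -report` style line.
# The report text is an illustrative sample, not real cluster output.
report='Datanodes available: 3 (3 total, 0 dead)'
count=$(echo "$report" | sed -n 's/^Datanodes available: \([0-9][0-9]*\).*/\1/p')
echo "$count"
```

If the count doesn't go up after the restart, the DataNode log on the new slave is the next place to look, as Harsh suggests below.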


Regards,
Henny (ahneuigun@gmail.com)

2011/2/7 Harsh J <qw...@gmail.com>

> On Mon, Feb 7, 2011 at 5:16 PM, ahn <ah...@gmail.com> wrote:
> > Hello everybody
> > 1. configure conf/slaves and *.xml files on master machine
> >
> > 2. configure conf/master and *.xml files on slave machine
>
> 'slaves' and 'masters' files are generally only required on the master
> machine, and only if you are using the start-* scripts supplied with
> Hadoop for use with SSH (the FAQ has an entry on this) from the master.
>
> > 3. run ${HADOOP}/bin/hadoop datanode
> > But when I ran the commands on the master node, the master node was
> > recognized as a data node.
>
> Step 3 wasn't a valid command in this case; use start-dfs.sh instead.
>
> > When I ran the commands on the data node which I want to add, the data
> node
> > was not properly added.(The number of total data node didn't show any
> > change)
>
> What do the logs say for the DataNode on the slave? Does it start
> successfully? If fs.default.name is set properly in the slave's
> core-site.xml, it should be able to communicate properly once started
> (and if the versions are not mismatched).
>
> --
> Harsh J
> www.harshj.com
>

Re: Could not add a new data node without rebooting Hadoop system

Posted by Harsh J <qw...@gmail.com>.
On Mon, Feb 7, 2011 at 5:16 PM, ahn <ah...@gmail.com> wrote:
> Hello everybody
> 1. configure conf/slaves and *.xml files on master machine
>
> 2. configure conf/master and *.xml files on slave machine

'slaves' and 'masters' files are generally only required on the master
machine, and only if you are using the start-* scripts supplied with
Hadoop for use with SSH (the FAQ has an entry on this) from the master.

> 3. run ${HADOOP}/bin/hadoop datanode
> But when I ran the commands on the master node, the master node was
> recognized as a data node.

Step 3 wasn't a valid command in this case; use start-dfs.sh instead.

> When I ran the commands on the data node which I want to add, the data node
> was not properly added.(The number of total data node didn't show any
> change)

What do the logs say for the DataNode on the slave? Does it start
successfully? If fs.default.name is set properly in the slave's
core-site.xml, it should be able to communicate properly once started
(and if the versions are not mismatched).
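
As a sketch, the fs.default.name check amounts to a core-site.xml fragment like this on the slave; the hostname and port below are typical examples, not values from this thread:

```xml
<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
```

The value must match the NameNode's RPC address exactly as the master advertises it; a mismatch (or a stale default of file:///) leaves the DataNode unable to register, which fits the "node count doesn't change" symptom.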

-- 
Harsh J
www.harshj.com