Posted to hdfs-user@hadoop.apache.org by Sindhu Hosamane <si...@gmail.com> on 2014/07/30 09:54:46 UTC

Master /slave file configuration for multiple datanodes on same machine

Hello friends,

I have set up multiple datanodes on the same machine following this link:
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201009.mbox/<A3...@mse16be2.mse16.exchange.ms>
So now I have both conf and conf2 in my Hadoop directory.
How should the master and slave files of conf and conf2 look if I want conf to be the master and conf2 to be the slave?
Also, what should the /etc/hosts file look like?
Please help me; I am really stuck.


Regards,
Sindhu

Re: Master /slave file configuration for multiple datanodes on same machine

Posted by Harsh J <ha...@cloudera.com>.
The SSH-driven slaves-file approach will not work for the goal of
running multiple slave daemons per host; each daemon is instead
expected to use its own configuration directory.

You can instead use "hadoop --config custom-dir datanode" to launch
them directly.
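A minimal sketch of the above, assuming two configuration directories (conf and conf2) under $HADOOP_HOME; the directory name conf2, the port numbers, and the data directory are illustrative and must be adapted to your setup:

```shell
# Start the first DataNode using the default configuration directory.
$HADOOP_HOME/bin/hadoop --config $HADOOP_HOME/conf datanode &

# Start a second DataNode from a separate configuration directory.
# To avoid clashes with the first daemon, conf2/hdfs-site.xml should
# override at least (example values):
#   dfs.datanode.address        e.g. 0.0.0.0:50011
#   dfs.datanode.http.address   e.g. 0.0.0.0:50081
#   dfs.datanode.ipc.address    e.g. 0.0.0.0:50021
#   dfs.data.dir                a distinct local directory
$HADOOP_HOME/bin/hadoop --config $HADOOP_HOME/conf2 datanode &
```

Both daemons register with the same NameNode (as configured in each directory's core-site.xml), so no masters/slaves file entries are needed for the extra DataNode.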

On Wed, Jul 30, 2014 at 1:24 PM, Sindhu Hosamane <si...@gmail.com> wrote:
> Hello friends ,
>
> I have set up multiple datanodes on same machine following the link
> http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201009.mbox/<A3...@mse16be2.mse16.exchange.ms>
> So now i have conf and conf2  both in my hadoop directory.
> How should master and slave files of conf and conf2 look like if i want conf
> to be  master and conf2 to be slave .?
> Also how should /etc/hosts file look like ?
> Please help me. I am really stuck
>
>
> Regards,
> Sindhu



-- 
Harsh J
