Posted to hdfs-user@hadoop.apache.org by Vitaliy Semochkin <vi...@gmail.com> on 2010/06/24 11:56:04 UTC

default masters/slaves content question

Hi,

In the default installation, the Hadoop masters and slaves files contain localhost.
Am I correct that the masters file contains the list of SECONDARY namenodes?

If so, will the localhost node try to start a secondary namenode even if it
already has one?
Moreover, will datanodes try to contact themselves in order to reach the
secondary namenode on localhost?

Should I keep localhost in slaves if I want to run a datanode on the same
server where I start the cluster and run the namenode?

This is my first hadoop experience,
Thanks in advance.

Vitaliy S

Re: default masters/slaves content question

Posted by Khaled BEN BAHRI <Kh...@it-sudparis.eu>.
Hi :)

The masters file in the default Hadoop installation contains localhost
because the default installation runs Hadoop in local mode on a single
node.

localhost in the masters file means that the namenode and the jobtracker
will run on the local node.

I hope this answers your question.

Khaled

Quoting Vitaliy Semochkin <vi...@gmail.com>:

> Thank you very much for the reply.
>
> Is there any reason the masters file contains localhost in the default
> installation of Hadoop?
>
> On Thu, Jun 24, 2010 at 10:56 PM, Jitendra Nath Pandey <
> jitendra@yahoo-inc.com> wrote:
>
>>  The start-dfs.sh script will try to start the secondary namenode on the node
>> listed in the masters file.
>>
>> Datanodes don't connect to the secondary namenode, and the slaves file should
>> contain the hostnames where you want to start a datanode. It can be
>> localhost.
>>
>>
>> On 6/24/10 2:56 AM, "Vitaliy Semochkin" <vi...@gmail.com> wrote:
>>
>> Hi,
>>
>> In the default installation, the Hadoop masters and slaves files contain localhost.
>> Am I correct that the masters file contains the list of SECONDARY namenodes?
>>
>> If so, will the localhost node try to start a secondary namenode even if it
>> already has one?
>> Moreover, will datanodes try to contact themselves in order to reach the
>> secondary namenode on localhost?
>>
>> Should I keep localhost in slaves if I want to run a datanode on the same
>> server where I start the cluster and run the namenode?
>>
>> This is my first hadoop experience,
>> Thanks in advance.
>>
>> Vitaliy S
>>
>>
>




Re: default masters/slaves content question

Posted by Allen Wittenauer <aw...@linkedin.com>.
On Jun 25, 2010, at 2:19 AM, Vitaliy Semochkin wrote:

> Thank you very much for the reply.
> 
> Is there any reason the masters file contains localhost in the default installation of Hadoop?


Because it would take a long time if we put everyone's hostname in there. :)

Pretty much everything in conf should be looked at during install, including the masters and slaves files.
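As a concrete illustration of that advice, one quick way to spot conf files still carrying the single-node default is to grep for localhost. This is only a sketch: the conf/ files are recreated inline so it is self-contained, whereas in a real install you would check the files under your Hadoop conf/ directory.

```shell
# Flag conf files that still carry the single-node default 'localhost'.
# The conf/ files are recreated here so the sketch is self-contained.
mkdir -p conf
printf 'localhost\n' > conf/masters
printf 'localhost\n' > conf/slaves

# -l prints only the names of matching files; -x requires a whole-line match
grep -lx 'localhost' conf/masters conf/slaves
# prints:
# conf/masters
# conf/slaves
```

Any file this prints should be edited with real hostnames before a multi-node deployment.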

Re: default masters/slaves content question

Posted by Vitaliy Semochkin <vi...@gmail.com>.
Thank you very much for the reply.

Is there any reason the masters file contains localhost in the default
installation of Hadoop?

On Thu, Jun 24, 2010 at 10:56 PM, Jitendra Nath Pandey <
jitendra@yahoo-inc.com> wrote:

>  The start-dfs.sh script will try to start the secondary namenode on the node
> listed in the masters file.
>
> Datanodes don't connect to the secondary namenode, and the slaves file should
> contain the hostnames where you want to start a datanode. It can be
> localhost.
>
>
> On 6/24/10 2:56 AM, "Vitaliy Semochkin" <vi...@gmail.com> wrote:
>
> Hi,
>
> In the default installation, the Hadoop masters and slaves files contain localhost.
> Am I correct that the masters file contains the list of SECONDARY namenodes?
>
> If so, will the localhost node try to start a secondary namenode even if it
> already has one?
> Moreover, will datanodes try to contact themselves in order to reach the
> secondary namenode on localhost?
>
> Should I keep localhost in slaves if I want to run a datanode on the same
> server where I start the cluster and run the namenode?
>
> This is my first hadoop experience,
> Thanks in advance.
>
> Vitaliy S
>
>

Re: default masters/slaves content question

Posted by Jitendra Nath Pandey <ji...@yahoo-inc.com>.
The start-dfs.sh script will try to start the secondary namenode on the node listed in the masters file.

Datanodes don't connect to the secondary namenode, and the slaves file should contain the hostnames where you want to start a datanode. It can be localhost.
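To make that concrete, here is a sketch of what the two conf/ files might look like for a small three-node cluster. The hostnames are hypothetical, and the layout assumes the 0.20-era start scripts this thread describes.

```shell
# Sketch of conf/masters and conf/slaves for a three-node cluster.
# Hostnames are hypothetical; adjust them to your environment.

# masters: host where start-dfs.sh launches the SecondaryNameNode
cat > masters <<'EOF'
node1.example.com
EOF

# slaves: one host per line; start-dfs.sh launches a DataNode on each
cat > slaves <<'EOF'
node1.example.com
node2.example.com
node3.example.com
EOF

wc -l < masters   # 1 secondary-namenode host
wc -l < slaves    # 3 datanode hosts
```

Note that node1.example.com appears in both files, which matches the scenario in the question: a datanode can run on the same server as the master daemons.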

On 6/24/10 2:56 AM, "Vitaliy Semochkin" <vi...@gmail.com> wrote:

Hi,

In the default installation, the Hadoop masters and slaves files contain localhost.
Am I correct that the masters file contains the list of SECONDARY namenodes?

If so, will the localhost node try to start a secondary namenode even if it already has one?
Moreover, will datanodes try to contact themselves in order to reach the secondary namenode on localhost?

Should I keep localhost in slaves if I want to run a datanode on the same server where I start the cluster and run the namenode?

This is my first hadoop experience,
Thanks in advance.

Vitaliy S