Posted to common-user@hadoop.apache.org by Richard Tang <tr...@gmail.com> on 2012/09/11 19:57:58 UTC

how to specify the root directory of hadoop on slave node?

Hi, all,
I need to set up a Hadoop/HDFS cluster with one namenode on a machine and
two datanodes on two other machines. After listing the datanode machines
in the conf/slaves file, running bin/start-dfs.sh does not start HDFS
normally.
I am aware that I have not specified the root directory where Hadoop is
installed on the slave nodes, nor the OS user account that should run
Hadoop there.
How do I specify where hadoop/hdfs is locally installed on each slave
node? And how do I specify the user account used to start HDFS there?

Regards,
Richard

Re: how to specify the root directory of hadoop on slave node?

Posted by Richard Tang <tr...@gmail.com>.
Hi Hemanth, thanks for your responses. I have now restructured my HDFS
cluster to follow that norm: the two conditions are met, and there is no
need to explicitly configure the Hadoop home directory for HDFS anymore.

For the record, previously in my cluster different nodes had Hadoop
installed in different directories, and HADOOP_HOME can be used to
configure the home directory where Hadoop is installed (though its use is
now deprecated).
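In case it helps anyone else, a rough sketch of what I mean is below, with
hypothetical paths; on a node with a non-standard layout the variable can
be exported in the hadoop user's shell profile or in conf/hadoop-env.sh:

    # ~/.bashrc (or conf/hadoop-env.sh) on a slave whose install path differs
    # Recent 1.x releases warn that $HADOOP_HOME is deprecated when it is set;
    # I believe HADOOP_HOME_WARN_SUPPRESS=1 silences that warning.
    export HADOOP_HOME=/opt/hadoop-1.0.3
    export PATH=$PATH:$HADOOP_HOME/bin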

Regards,
Richard

On Wed, Sep 12, 2012 at 12:06 AM, Hemanth Yamijala <
yhemanth@thoughtworks.com> wrote:

> Hi Richard,
>
> If you have installed the Hadoop software in the same location on all
> machines, and if you have a common user on all the machines, then there
> should be no explicit need to specify anything more on the slaves.
>
> Can you tell us whether the above two conditions are true? If yes, some
> more details on what is failing when you run start-dfs.sh will help.
>
> Thanks
> Hemanth
>
>
> On Tue, Sep 11, 2012 at 11:27 PM, Richard Tang <tr...@gmail.com> wrote:
>
>> Hi, All
>> I need to set up a Hadoop/HDFS cluster with one namenode on a machine and
>> two datanodes on two other machines. After listing the datanode
>> machines in the conf/slaves file, running bin/start-dfs.sh does not
>> start HDFS normally.
>> I am aware that I have not specified the root directory where Hadoop is
>> installed on the slave nodes, nor the OS user account that should run
>> Hadoop there.
>> How do I specify where hadoop/hdfs is locally installed on each slave
>> node? And how do I specify the user account used to start HDFS there?
>>
>> Regards,
>> Richard
>>
>
>

Re: how to specify the root directory of hadoop on slave node?

Posted by Hemanth Yamijala <yh...@thoughtworks.com>.
Hi Richard,

If you have installed the Hadoop software in the same location on all
machines, and if you have a common user on all the machines, then there
should be no explicit need to specify anything more on the slaves.
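If it helps, both conditions can be checked quickly from the namenode with
something like the rough sketch below; the worker host names and the
/usr/local/hadoop path are placeholders for whatever you actually use:

    # run as the common hadoop user on the namenode
    for host in datanode1 datanode2; do
      # start-dfs.sh relies on passwordless ssh to each slave as this same user
      ssh "$host" 'whoami; ls -d /usr/local/hadoop/bin/hadoop-daemon.sh'
    done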

Can you tell us whether the above two conditions are true? If yes, some
more details on what is failing when you run start-dfs.sh will help.

Thanks
Hemanth

On Tue, Sep 11, 2012 at 11:27 PM, Richard Tang <tr...@gmail.com> wrote:

> Hi, All
> I need to set up a Hadoop/HDFS cluster with one namenode on a machine and
> two datanodes on two other machines. After listing the datanode machines
> in the conf/slaves file, running bin/start-dfs.sh does not start HDFS
> normally.
> I am aware that I have not specified the root directory where Hadoop is
> installed on the slave nodes, nor the OS user account that should run
> Hadoop there.
> How do I specify where hadoop/hdfs is locally installed on each slave
> node? And how do I specify the user account used to start HDFS there?
>
> Regards,
> Richard
>
