Posted to common-user@hadoop.apache.org by "Henjarappa, Savitha" <sa...@hp.com> on 2013/02/14 19:23:33 UTC
How to identify datanodes in a cluster
All,
My questions:
- If I have multiple Hadoop clusters, how do I find out which DataNode is configured against which NameNode? Similarly, which TaskTracker against which JobTracker?
- Can I install Map and Reduce on separate nodes? Is there any use case to support this configuration?
Thanks,
Savitha
Re: How to identify datanodes in a cluster
Posted by Robert Molina <rm...@hortonworks.com>.
Hi Savitha,
On your nodes running TaskTrackers and DataNodes, there is a core-site.xml
file that specifies which NameNode to contact via the property fs.default.name.
In addition, there is a mapred-site.xml that specifies the address of the
JobTracker via the property mapred.job.tracker.
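For example, a minimal pair of config files on a slave node might look like this (the hostnames and ports below are placeholders, not values from any real cluster):

```xml
<!-- core-site.xml: points this DataNode at its NameNode -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.example.com:8020</value>
  </property>
</configuration>

<!-- mapred-site.xml: points this TaskTracker at its JobTracker -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jobtracker.example.com:8021</value>
  </property>
</configuration>
```

So to map out which slave belongs to which cluster, check those two properties on each node. The NameNode's web UI also lists all live DataNodes registered with it, which gives you the same answer from the other direction.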
I hope that helps.
Regards,
Robert
Re: How to identify datanodes in a cluster
Posted by Mohammad Tariq <do...@gmail.com>.
Hello Savitha,
You specify all of that yourself in the configuration files.
And what do you mean by installing Map and Reduce on separate nodes? You
just have TaskTrackers, which run continuously on each slave machine; map
tasks are started on nodes chosen according to the location of the data
blocks you are going to process. Once the map phase is finished, all the
values associated with a particular key are sent to the same machine for
the reduce phase. That machine may be the one that ran the map, or a
different one.
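If you really did want a node to run only map tasks or only reduce tasks, the closest mechanism (a sketch, not a recommended layout) is the per-TaskTracker slot counts in that node's mapred-site.xml; a TaskTracker with zero reduce slots will never be assigned a reduce task:

```xml
<!-- mapred-site.xml on a hypothetical "map-only" slave -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>0</value>
  </property>
</configuration>
```

Note this wastes data locality during the shuffle, which is why mixed map+reduce slaves are the normal configuration.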
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com