Posted to hdfs-user@hadoop.apache.org by Rita <rm...@gmail.com> on 2011/03/10 04:07:37 UTC

how does hdfs determine what node to use?

I have a 2-rack cluster. All of my files have a replication factor of 2. How
does hdfs determine what node to use when serving the data? Does it always
use the first rack, or is there an algorithm for this?


-- 
--- Get your facts first, then you can distort them as you please.--

Re: how does hdfs determine what node to use?

Posted by Marcos Ortiz <ml...@uci.cu>.
On 3/10/2011 8:37 AM, Rita wrote:
> Thanks Stu. I too was sure there was an algorithm. Is there a place 
> where I can read more about it? I want to know: does it pick a block 
> according to load average, or does it always pick "rack0" first?
The best source I found for this is Tom White's book, Hadoop: The 
Definitive Guide, 2nd Edition (Chapter 3: The Hadoop Distributed 
Filesystem), and the Hadoop wiki: http://wiki.apache.org/hadoop/HDFS.

Regards

-- 
Marcos Luís Ortíz Valmaseda
  Software Engineer
  Universidad de las Ciencias Informáticas
  Linux User # 418229

http://uncubanitolinuxero.blogspot.com
http://www.linkedin.com/in/marcosluis2186


Re: how does hdfs determine what node to use?

Posted by Harsh J <qw...@gmail.com>.
On Thu, Mar 10, 2011 at 9:11 PM, Ayon Sinha <ay...@yahoo.com> wrote:
> So I am guessing if you have a rep factor of 2, both replicas will be on
> the same rack.

The default block placement strategy of HDFS is to place at least one
replica on another rack, for every replication factor greater than one.
This is implemented in the BlockPlacementPolicyDefault class (an
alternate implementation may be specified to the NameNode, I suppose),
and the test cases cover the 2-replica case, asserting that the two
replicas are not on the same rack :-)
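
If you want to verify what the policy did to your own files, the client
API exposes the placement. A minimal sketch (the path is hypothetical,
and it assumes a client whose BlockLocation exposes topology paths):

    import java.util.Arrays;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ShowReplicaPlacement {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Hypothetical path: point this at one of your own files.
        FileStatus stat = fs.getFileStatus(new Path("/user/rita/somefile"));
        // One BlockLocation per block, covering every replica of that block.
        BlockLocation[] blocks =
            fs.getFileBlockLocations(stat, 0, stat.getLen());
        for (BlockLocation b : blocks) {
          // Topology paths look like /rack0/host:port, so two replicas
          // sharing a rack prefix sit on the same rack.
          System.out.println("offset " + b.getOffset() + " -> "
              + Arrays.toString(b.getTopologyPaths()));
        }
      }
    }

fsck prints the same information without any code:
hadoop fsck /user/rita/somefile -files -blocks -racks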

-- 
Harsh J
www.harshj.com

Re: how does hdfs determine what node to use?

Posted by Ayon Sinha <ay...@yahoo.com>.
AFAIK, it does help in the map phase (and reduce/shuffle) because it looks 
for the block on the same node first, then the same rack, and then outside 
the current rack. It doesn't have network awareness beyond the rack level 
(e.g., colo). I think colo awareness is being considered as an enhancement.
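
For any of that to work, the cluster has to be told which host sits in
which rack. That usually means pointing topology.script.file.name at an
executable, or plugging a Java mapping class into
topology.node.switch.mapping.impl. A rough sketch of the Java route,
against the 0.20-era interface and with made-up hostnames:

    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.net.DNSToSwitchMapping;

    // Resolves each datanode hostname to a rack path such as /rack0.
    public class TwoRackMapping implements DNSToSwitchMapping {
      public List<String> resolve(List<String> names) {
        List<String> racks = new ArrayList<String>();
        for (String name : names) {
          // Made-up convention: hosts a-* sit in rack 0, b-* in rack 1.
          if (name.startsWith("a-")) {
            racks.add("/rack0");
          } else if (name.startsWith("b-")) {
            racks.add("/rack1");
          } else {
            racks.add("/default-rack");
          }
        }
        return racks;
      }
    }

Without one of these, every node reports /default-rack and the
same-rack/off-rack distinctions above are meaningless.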
 -Ayon





________________________________
From: Jeffrey Buell <jb...@vmware.com>
To: "hdfs-user@hadoop.apache.org" <hd...@hadoop.apache.org>
Sent: Thu, March 10, 2011 10:34:31 AM
Subject: RE: how does hdfs determine what node to use?


Rita said that she has 2 racks (not 2 nodes).  Rita, how many nodes per rack do 
you have?
 
To continue the thread, could there be a performance advantage to having greater 
replication in the shuffle or reduce phases?  That is, is hadoop smart enough 
that when it needs data that are not on the local node, it finds out which copy 
of that data is on the closest (in the network sense) node and gets it from 
there?  Or (if the copies are on the same rack) from the node with the least 
traffic on it currently?  As opposed to always getting it from the node with the 
“primary” copy.
 
Jeff
 

Re: how does hdfs determine what node to use?

Posted by Allen Wittenauer <aw...@apache.org>.
On Mar 10, 2011, at 10:34 AM, Jeffrey Buell wrote:

> Rita said that she has 2 racks (not 2 nodes).  Rita, how many nodes per rack do you have?
> 
> To continue the thread, could there be a performance advantage to having greater replication in the shuffle or reduce phases?  That is, is hadoop smart enough that when it needs data that are not on the local node, it finds out which copy of that data is on the closest (in the network sense) node and gets it from there?  

	The reduce phase doesn't read from HDFS. It does the equivalent of an HTTP GET from the tasktracker that holds the map's intermediate output. The speed-up here is that the reduce should get scheduled on the same node where one of the job's map tasks ran, especially a host with significant map output. This could potentially reduce network usage, but in the end it is likely to be insignificant.

RE: how does hdfs determine what node to use?

Posted by Jeffrey Buell <jb...@vmware.com>.
Rita said that she has 2 racks (not 2 nodes).  Rita, how many nodes per rack do you have?

To continue the thread, could there be a performance advantage to having greater replication in the shuffle or reduce phases?  That is, is hadoop smart enough that when it needs data that are not on the local node, it finds out which copy of that data is on the closest (in the network sense) node and gets it from there?  Or (if the copies are on the same rack) from the node with the least traffic on it currently?  As opposed to always getting it from the node with the "primary" copy.

Jeff

From: stu24mail@yahoo.com [mailto:stu24mail@yahoo.com]
Sent: Thursday, March 10, 2011 10:19 AM
To: hdfs-user@hadoop.apache.org
Subject: Re: how does hdfs determine what node to use?

Actually I just meant to point out that however many copies you have, the copies are placed on different nodes. Although if you only have two nodes, there aren't a whole lot of options.. :)

I thought Rita was mainly worried if they all went to the same node - which would be bad.

Take care,
-stu


Re: how does hdfs determine what node to use?

Posted by st...@yahoo.com.
Actually I just meant to point out that however many copies you have, the copies 
are placed on different nodes. Although if you only have two nodes, there aren't 
a whole lot of options.. :)

Take care,
 -stu

-----Original Message-----
From: Ayon Sinha <ay...@yahoo.com>
Date: Thu, 10 Mar 2011 07:41:17 
To: <hd...@hadoop.apache.org>
Reply-To: hdfs-user@hadoop.apache.org
Subject: Re: how does hdfs determine what node to use?

I think Stu meant that each block will have a copy on at most 2 nodes. 
Before Hadoop 0.20, rack awareness was not built into the algorithm that picks 
the replication nodes. With 0.20 and later, rack awareness does the following:
1. The first copy of the block is placed at "random" on one of the least 
loaded nodes. Then the next copy is placed on another node on the same rack 
(to save network hops). 
2. Then, if the rep factor is 3, it will pick another node from another rack. 
This is done to provide redundancy in case an entire rack is unavailable due 
to switch failure.

So I am guessing if you have a rep factor of 2, both replicas will be on the 
same rack. It's quite possible that Hadoop has a switch somewhere to change 
this policy, because Hadoop has a switch for everything.
 -Ayon





Re: how does hdfs determine what node to use?

Posted by Ayon Sinha <ay...@yahoo.com>.
I think Stu meant that each block will have a copy on at most 2 nodes. 
Before Hadoop 0.20, rack awareness was not built into the algorithm that picks 
the replication nodes. With 0.20 and later, rack awareness does the following:
1. The first copy of the block is placed at "random" on one of the least 
loaded nodes. Then the next copy is placed on another node on the same rack 
(to save network hops). 
2. Then, if the rep factor is 3, it will pick another node from another rack. 
This is done to provide redundancy in case an entire rack is unavailable due 
to switch failure.

So I am guessing if you have a rep factor of 2, both replicas will be on the 
same rack. It's quite possible that Hadoop has a switch somewhere to change 
this policy, because Hadoop has a switch for everything.
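
Roughly, in code, the order I described would look something like this
toy sketch (made-up types, not the real BlockPlacementPolicyDefault
logic, which also weighs things like node load and free space):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    public class ToyPlacement {
      static class Node {
        final String host;
        final String rack;
        Node(String host, String rack) { this.host = host; this.rack = rack; }
      }

      static final Random RANDOM = new Random();

      // Steps 1 and 2 from above: a "random" first node, a second node
      // on the same rack, then a different rack from the third copy on.
      static List<Node> chooseTargets(List<Node> alive, int repFactor) {
        List<Node> targets = new ArrayList<Node>();
        Node first = alive.get(RANDOM.nextInt(alive.size()));
        targets.add(first);
        for (Node n : alive) {
          if (targets.size() < 2 && targets.size() < repFactor
              && n != first && n.rack.equals(first.rack)) {
            targets.add(n);  // second copy: same rack, different node
          }
        }
        for (Node n : alive) {
          if (targets.size() < repFactor && !n.rack.equals(first.rack)) {
            targets.add(n);  // later copies: a different rack
          }
        }
        return targets;
      }
    }

Note the consequence: under this order a rep factor of 2 never leaves
the first rack, which is exactly the guess above.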
 -Ayon





Re: how does hdfs determine what node to use?

Posted by Rita <rm...@gmail.com>.
Thanks Stu. I too was sure there was an algorithm. Is there a place where I
can read more about it? I want to know: does it pick a block according to
load average, or does it always pick "rack0" first?



On Wed, Mar 9, 2011 at 10:24 PM, <st...@yahoo.com> wrote:

> There is an algorithm. Each block should have its copies on different nodes.
> In your case, each block will have a copy on each of the nodes.
>
> Take care,
> -stu



-- 
--- Get your facts first, then you can distort them as you please.--

Re: how does hdfs determine what node to use?

Posted by st...@yahoo.com.
There is an algorithm. Each block should have its copies on different nodes. In your case, each block will have a copy on each of the nodes.

Take care,
 -stu
-----Original Message-----
From: Rita <rm...@gmail.com>
Date: Wed, 9 Mar 2011 22:07:37 
To: <hd...@hadoop.apache.org>
Reply-To: hdfs-user@hadoop.apache.org
Subject: how does hdfs determine what node to use?

I have a 2-rack cluster. All of my files have a replication factor of 2. How
does hdfs determine what node to use when serving the data? Does it always
use the first rack, or is there an algorithm for this?


-- 
--- Get your facts first, then you can distort them as you please.--