Posted to common-user@hadoop.apache.org by "prasana.iyengar" <pr...@gmail.com> on 2008/05/29 17:32:03 UTC

how does one rebalance data nodes

1. After adding new datanodes, is there a way to force a rebalancing of the
data blocks across the new nodes (see the balancer sketch below)?
We recently added 6 nodes to the cluster - the original 4 nodes seem to have
80+% HDFS usage.
2. In 0.16.0 I also have the following settings in hadoop-site.xml:
 dfs.datanode.du.reserved - 10G [default = 0]
 dfs.datanode.du.pct - 0.9f [default = 0.98f]

Q: will this stop the datanode filling up at 90% usage and/or when only 10G
remains free on the partition [whichever comes first]?

thanks,
-prasana


Re: how does one rebalance data nodes

Posted by Hairong Kuang <ha...@yahoo-inc.com>.
If you set dfs.datanode.du.reserved to 10G, this guarantees that DFS won't
use more than (the total partition space - 10G).

In my opinion, dfs.datanode.du.pct is not of much use. So you can ignore it
for now.
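
As a concrete sketch of those settings in hadoop-site.xml - note that in
0.16.x dfs.datanode.du.reserved is a byte count per volume, so "10G" has to
be written out; the values below just mirror the ones quoted above:

  <property>
    <name>dfs.datanode.du.reserved</name>
    <!-- reserved space in bytes per volume: 10 GB -->
    <value>10737418240</value>
  </property>

  <property>
    <name>dfs.datanode.du.pct</name>
    <!-- count only this fraction of the real free space as usable -->
    <value>0.9f</value>
  </property>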

Hairong 

On 5/29/08 8:32 AM, "prasana.iyengar" <pr...@gmail.com> wrote:

> 2. In 0.16.0 I also have the following settings in hadoop-site.xml:
>  dfs.datanode.du.reserved - 10G [default = 0]
>  dfs.datanode.du.pct - 0.9f [default = 0.98f]
> 
> Q: will this stop the datanode filling up at 90% usage and/or when only
> 10G remains free on the partition [whichever comes first]?