Posted to common-user@hadoop.apache.org by Paul Smith <ps...@aconex.com> on 2009/10/02 00:02:30 UTC

Re: local node Quotas (for an R&D cluster)

On 23/09/2009, at 10:47 AM, Ravi Phulari wrote:

> Hello Paul, here is a quick answer to your question -
> You can use the dfs.datanode.du.pct and dfs.datanode.du.reserved
> properties in the hdfs-site.xml config file to configure the
> maximum local disk space used by HDFS (leaving the rest free for
> MapReduce and other non-DFS use).
>
>  <property>
>    <name>dfs.datanode.du.pct</name>
>    <value>0.85f</value>
>    <description>When calculating remaining space, only use this
>    percentage of the real available space.
>    </description>
>  </property>
>
>  <property>
>    <name>dfs.datanode.du.reserved</name>
>    <value>1070000</value>
>    <description>Reserved space in bytes per volume. Always leave
>    this much space free for non dfs use.
>    </description>
>  </property>
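(A rough sketch of how these two settings might combine to limit the space a DataNode reports for a single volume. The order in which the DataNode applies them is an assumption here, not taken from the Hadoop source, so treat this as illustration only.)

```python
# Hedged sketch: how dfs.datanode.du.pct and dfs.datanode.du.reserved
# might combine to limit the space a DataNode reports for one volume.
# The order of application below is an assumption, not taken from the
# FSDataset source code.

def reported_remaining(usable_bytes, du_pct=0.85, du_reserved=1_070_000):
    # Scale by the usable percentage, then keep the reserved bytes free.
    return max(0, round(usable_bytes * du_pct) - du_reserved)

# Example: a volume with 100 GB genuinely free on disk.
free = 100 * 1024**3
print(reported_remaining(free))
```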


Sorry for taking so long to reply and actually try this suggestion,  
but I'm not sure whether it is working.  I have placed the above  
snippet in hdfs-site.xml and replicated that config around the  
cluster.  When I start up the cluster and go to the management  
console page, though, I don't see the overall DFS available size  
decrease, nor does any node report a difference in available space.

I was working on the assumption that, by adding these properties,  
these free-space metrics would change (go down) after a restart, but  
I don't see that.  Should I?
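(One possible reason the numbers look unchanged: 1070000 bytes is only about 1 MB per volume, which could easily vanish in the console's rounding. A back-of-the-envelope check; the node and volume counts below are made up for illustration.)

```python
# dfs.datanode.du.reserved from the snippet above, in bytes.
reserved_per_volume = 1_070_000

# That is only ~1.07 MB per volume.
print(reserved_per_volume / 10**6, "MB per volume")

# Hypothetical cluster: 10 datanodes, 2 volumes each (made-up numbers).
nodes, volumes = 10, 2
total_mb = reserved_per_volume * nodes * volumes / 10**6
print(total_mb, "MB reserved cluster-wide")
```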

I'm using Hadoop 0.20.1, r810220.

cheers,

Paul