Posted to common-user@hadoop.apache.org by Ravi Phulari <rp...@yahoo-inc.com> on 2010/03/31 05:29:40 UTC

Re: is there any way we can limit Hadoop Data node's disk usage?

Hello Steven,
You can use the dfs.datanode.du.reserved configuration value in $HADOOP_HOME/conf/hdfs-site.xml to limit disk usage:

<property>
  <name>dfs.datanode.du.reserved</name>
  <!-- cluster variant -->
  <value>182400</value>
  <description>Reserved space in bytes per volume. Always leave this much
  space free for non dfs use.</description>
</property>

Ravi
Hadoop @ Yahoo!

On 3/30/10 8:12 PM, "steven zhuang" <st...@gmail.com> wrote:

hi, guys,
we have some machines with 1TB disks and some with 100GB disks. Is there
any way we can limit the disk usage of the datanodes on the machines with
the smaller disks?
thanks!




JobTracker website data - can it be increased?

Posted by Raymond Jennings III <ra...@yahoo.com>.
I am running an application that has many iterations, and I find that the JobTracker's website drops many of the earlier runs. Is there any way to increase the number of previous jobs retained so that they are still available on the JobTracker's website? Thank you.
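
On Hadoop 0.20-era clusters, one knob that typically controls this is mapred.jobtracker.completeuserjobs.maximum in $HADOOP_HOME/conf/mapred-site.xml, which caps how many completed jobs per user the JobTracker keeps in memory before delegating them to the job history (default 100). A minimal sketch of raising it, assuming that release line and an illustrative value of 500:

<property>
  <name>mapred.jobtracker.completeuserjobs.maximum</name>
  <!-- illustrative: keep the last 500 completed jobs per user visible
       in the JobTracker web UI before they move to the job history -->
  <value>500</value>
</property>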