Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2010/03/31 06:23:08 UTC

[Hadoop Wiki] Trivial Update of "FAQ" by RaviPhulari

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change notification.

The "FAQ" page has been changed by RaviPhulari.
http://wiki.apache.org/hadoop/FAQ?action=diff&rev1=68&rev2=69

--------------------------------------------------

  
  So, the simple answer is that 4-6Gbps is most likely just fine for most practical jobs. If you want to be extra safe, many inexpensive switches can operate in a "stacked" configuration where the bandwidth between them is essentially backplane speed. That should scale you to 96 nodes with plenty of headroom. Many inexpensive gigabit switches also have one or two 10GigE ports which can be used effectively to connect to each other or to a 10GE core.
  
+ '''29.[[#29|How do I limit a DataNode's disk usage?]]'''
+ 
+ Set the dfs.datanode.du.reserved property in $HADOOP_HOME/conf/hdfs-site.xml to limit how much disk space HDFS may use on each volume.
+ 
+ {{{
+   <property>
+     <name>dfs.datanode.du.reserved</name>
+     <!-- cluster variant -->
+     <value>182400</value>
+     <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.
+     </description>
+   </property>
+ }}} 
+
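The reserved value is given in raw bytes per volume, so larger reservations are easier to compute than to type by hand. As a hedged aside (plain Python, not part of Hadoop), a helper like this can produce the byte count for a binary-gigabyte reservation:

```python
def reserved_bytes(gib):
    """Convert a GiB figure to the raw byte count expected by
    dfs.datanode.du.reserved (1 GiB = 1024**3 bytes)."""
    return gib * 1024 ** 3

# Reserve 10 GiB per volume for non-DFS use:
print(reserved_bytes(10))  # 10737418240
```

The value in the property above (182400 bytes, under 200 KB) is only illustrative; a production cluster would typically reserve several gigabytes per volume for logs, intermediate map output, and other non-DFS files.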