Posted to commits@hbase.apache.org by st...@apache.org on 2009/07/20 18:55:17 UTC

svn commit: r795916 - in /hadoop/hbase: branches/0.19/src/java/overview.html trunk/src/java/overview.html

Author: stack
Date: Mon Jul 20 16:55:17 2009
New Revision: 795916

URL: http://svn.apache.org/viewvc?rev=795916&view=rev
Log:
HBASE-1573 Holes in master state change; updated startcode and server go into .META. but catalog scanner just got old values

Modified:
    hadoop/hbase/branches/0.19/src/java/overview.html
    hadoop/hbase/trunk/src/java/overview.html

Modified: hadoop/hbase/branches/0.19/src/java/overview.html
URL: http://svn.apache.org/viewvc/hadoop/hbase/branches/0.19/src/java/overview.html?rev=795916&r1=795915&r2=795916&view=diff
==============================================================================
--- hadoop/hbase/branches/0.19/src/java/overview.html (original)
+++ hadoop/hbase/branches/0.19/src/java/overview.html Mon Jul 20 16:55:17 2009
@@ -41,12 +41,17 @@
   <a href="http://wiki.apache.org/hadoop/Hbase/FAQ#6">FAQ: Why do I see "java.io.IOException...(Too many open files)" in my logs?</a>
   for how to up the limit.  Also, as of 0.18.x hadoop, datanodes have an upper-bound
       on the number of threads they will support (<code>dfs.datanode.max.xcievers</code>).
-      Default is 256.  If loading lots of data into hbase, up this limit on your
+      Default is 256.  Up this limit on your
       hadoop cluster.  Also consider upping the number of datanode handlers from
       the default of 3. See <code>dfs.datanode.handler.count</code>.</li>
      <li>The clocks on cluster members should be in basic alignment.  Some skew is tolerable but
       wild skew can generate odd behaviors.  Run <a href="http://en.wikipedia.org/wiki/Network_Time_Protocol">NTP</a>
       on your cluster, or an equivalent.</li>
+      <li>HBase servers put up 10 listeners for incoming connections by default.  If your
+      dataset is of any substance, up this number by setting <code>hbase.regionserver.handler.count</code>
+      in your <code>hbase-site.xml</code>.</li>
+      <li><a href="https://issues.apache.org/jira/browse/HADOOP-4681">HADOOP-4681 <i>"DFSClient block read failures cause open DFSInputStream to become unusable"</i></a>. This patch will help with the ever-popular "No live nodes contain current block".
+      </li>
 </ul>
 
 <h2><a name="getting_started" >Getting Started</a></h2>
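
A minimal sketch of how the datanode settings named in the overview text above
(dfs.datanode.max.xcievers and dfs.datanode.handler.count) might be raised.  The
property names and the defaults of 256 and 3 come from the text; the file name
(hadoop-site.xml on the 0.18/0.19 Hadoop line) and the values 2048 and 10 are
illustrative assumptions only, not figures from this commit.

  <!-- hadoop-site.xml (sketch; values below are illustrative assumptions) -->
  <configuration>
    <property>
      <!-- Upper bound on datanode threads; the overview notes the default is 256. -->
      <name>dfs.datanode.max.xcievers</name>
      <value>2048</value>
    </property>
    <property>
      <!-- Datanode handler threads; the overview notes the default is 3. -->
      <name>dfs.datanode.handler.count</name>
      <value>10</value>
    </property>
  </configuration>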

Modified: hadoop/hbase/trunk/src/java/overview.html
URL: http://svn.apache.org/viewvc/hadoop/hbase/trunk/src/java/overview.html?rev=795916&r1=795915&r2=795916&view=diff
==============================================================================
--- hadoop/hbase/trunk/src/java/overview.html (original)
+++ hadoop/hbase/trunk/src/java/overview.html Mon Jul 20 16:55:17 2009
@@ -50,11 +50,13 @@
   <a href="http://wiki.apache.org/hadoop/Hbase/FAQ#6">FAQ: Why do I see "java.io.IOException...(Too many open files)" in my logs?</a>
   for how to up the limit.  Also, as of 0.18.x hadoop, datanodes have an upper-bound
       on the number of threads they will support (<code>dfs.datanode.max.xcievers</code>).
-      Default is 256.  If loading lots of data into hbase, up this limit on your
-      hadoop cluster.
+      Default is 256.  Up this limit on your hadoop cluster.
      <li>The clocks on cluster members should be in basic alignment.  Some skew is tolerable but
       wild skew can generate odd behaviors.  Run <a href="http://en.wikipedia.org/wiki/Network_Time_Protocol">NTP</a>
       on your cluster, or an equivalent.</li>
+      <li>HBase servers put up 10 listeners for incoming connections by default.  If your
+      dataset is of any substance, up this number by setting <code>hbase.regionserver.handler.count</code>
+      in your <code>hbase-site.xml</code>.</li>
       <li>This is a list of patches we recommend you apply to your running Hadoop cluster:
       <ul>
      <li><a href="https://issues.apache.org/jira/browse/HADOOP-4681">HADOOP-4681 <i>"DFSClient block read failures cause open DFSInputStream to become unusable"</i></a>. This patch will help with the ever-popular "No live nodes contain current block".
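
Likewise, a minimal sketch of the hbase.regionserver.handler.count setting that
this commit documents.  The property name, its home in hbase-site.xml, and the
default of 10 come from the added text; the value of 25 is an arbitrary
illustration, not a recommendation from the commit.

  <!-- hbase-site.xml (sketch; the value below is an illustrative assumption) -->
  <configuration>
    <property>
      <!-- Listeners for incoming connections; default is 10 per the overview. -->
      <name>hbase.regionserver.handler.count</name>
      <value>25</value>
    </property>
  </configuration>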