Posted to commits@hbase.apache.org by st...@apache.org on 2011/01/23 21:28:26 UTC

svn commit: r1062516 - /hbase/branches/0.90/src/docbkx/book.xml

Author: stack
Date: Sun Jan 23 20:28:25 2011
New Revision: 1062516

URL: http://svn.apache.org/viewvc?rev=1062516&view=rev
Log:
Added more to the book index

Modified:
    hbase/branches/0.90/src/docbkx/book.xml

Modified: hbase/branches/0.90/src/docbkx/book.xml
URL: http://svn.apache.org/viewvc/hbase/branches/0.90/src/docbkx/book.xml?rev=1062516&r1=1062515&r2=1062516&view=diff
==============================================================================
--- hbase/branches/0.90/src/docbkx/book.xml (original)
+++ hbase/branches/0.90/src/docbkx/book.xml Sun Jan 23 20:28:25 2011
@@ -285,7 +285,7 @@ stopping hbase...............</programli
 Usually you'll want to use the latest version available, except the problematic u18 (u22 is the latest version as of this writing).</para>
 </section>
 
-  <section xml:id="hadoop"><title><link xlink:href="http://hadoop.apache.org">hadoop</link></title>
+  <section xml:id="hadoop"><title><link xlink:href="http://hadoop.apache.org">hadoop</link><indexterm><primary>Hadoop</primary></indexterm></title>
 <para>This version of HBase will only run on <link xlink:href="http://hadoop.apache.org/common/releases.html">Hadoop 0.20.x</link>.
     It will not run on hadoop 0.21.x (nor 0.22.x) as of this writing.
     HBase will lose data unless it is running on an HDFS that has a durable <code>sync</code>.
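
For illustration, a minimal sketch of what enabling a durable sync looked like in this era, assuming a branch-0.20-append style Hadoop build where the dfs.support.append property is honoured (the property placement and paths are examples, not part of this commit):

    # hdfs-site.xml (and hbase-site.xml) -- assumed property for append/sync support
    #   <property>
    #     <name>dfs.support.append</name>
    #     <value>true</value>
    #   </property>
    # restart HDFS, then HBase, so the setting takes effect
    $ ${HADOOP_HOME}/bin/stop-dfs.sh && ${HADOOP_HOME}/bin/start-dfs.sh
    $ ${HBASE_HOME}/bin/stop-hbase.sh && ${HBASE_HOME}/bin/start-hbase.sh
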
@@ -343,7 +343,7 @@ be running to use Hadoop's scripts to ma
 
 
       <section xml:id="ulimit">
-      <title><varname>ulimit</varname></title>
+      <title><varname>ulimit</varname><indexterm><primary>ulimit</primary></indexterm></title>
 <para>HBase is a database; it uses a lot of files at the same time.
       The default ulimit -n of 1024 on *nix systems is insufficient.
       Any significant amount of loading will lead you to 
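
As a hedged example only (the file locations and the user name 'hadoop' are assumptions that vary by platform), checking and raising the open-file limit for the user running the HBase daemons typically looks like:

    $ ulimit -n                  # current per-shell open-file limit
    1024
    # /etc/security/limits.conf -- add a nofile line for the user that runs HBase
    #   hadoop  -  nofile  32768
    $ ulimit -n                  # after logging out and back in again
    32768
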
@@ -390,7 +390,7 @@ be running to use Hadoop's scripts to ma
       </section>
 
       <section xml:id="dfs.datanode.max.xcievers">
-      <title><varname>dfs.datanode.max.xcievers</varname></title>
+      <title><varname>dfs.datanode.max.xcievers</varname><indexterm><primary>xcievers</primary></indexterm></title>
       <para>
 A Hadoop HDFS datanode has an upper bound on the number of files
       that it will serve at any one time.
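
A sketch of the usual remedy (the value 4096 is a commonly used starting point, not something prescribed here; tune it for your cluster):

    # hdfs-site.xml on every DataNode, then restart the DataNodes
    #   <property>
    #     <name>dfs.datanode.max.xcievers</name>
    #     <value>4096</value>
    #   </property>
    $ ${HADOOP_HOME}/bin/hadoop-daemon.sh stop datanode
    $ ${HADOOP_HOME}/bin/hadoop-daemon.sh start datanode
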
@@ -1067,7 +1067,7 @@ to ensure well-formedness of your docume
       </section>
 
       <section xml:id="lzo">
-      <title>LZO compression</title>
+      <title>LZO compression<indexterm><primary>LZO</primary></indexterm></title>
 <para>You should consider enabling LZO compression. It's
 near-frictionless and in almost all cases boosts performance.
       </para>
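
Once the LZO codec and its native libraries are installed, enabling it is a per-column-family setting; the table and family names below are made up for illustration:

    $ ${HBASE_HOME}/bin/hbase shell
    hbase> create 'testtable', {NAME => 'colfam', COMPRESSION => 'LZO'}
    hbase> describe 'testtable'
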
@@ -1894,7 +1894,7 @@ of all regions.
 
   <appendix xml:id="compression">
 
-    <title >Compression In HBase</title>
+    <title >Compression In HBase<indexterm><primary>Compression</primary></indexterm></title>
 
     <section id="compression.test">
     <title>CompressionTest Tool</title>
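
In this era the tool was run from the command line roughly as follows (the HDFS path is an example, and the exact arguments varied between releases; running the class with no arguments prints its usage):

    $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.util.CompressionTest
    $ ${HBASE_HOME}/bin/hbase org.apache.hadoop.hbase.util.CompressionTest hdfs:///tmp/compressiontest.out lzo
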
@@ -1952,7 +1952,7 @@ of all regions.
     available on the CLASSPATH; in this case it will use native
    compressors instead (if the native libs are NOT present,
     you will see lots of <emphasis>Got brand-new compressor</emphasis>
-    reports in your logs; TO BE FIXED).
+    reports in your logs; see <link linkend="brand.new.compressor">FAQ</link>).
     </para>
     </section>
   </appendix>
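
A quick, hedged way to check whether a region server has fallen back to the pure-Java codecs (the log file name pattern is the usual default and may differ in your install):

    $ grep -c "Got brand-new compressor" ${HBASE_HOME}/logs/hbase-*-regionserver-*.log
    # a large and growing count suggests the native Hadoop libraries are not on the
    # region server's java.library.path
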
@@ -1971,7 +1971,7 @@ of all regions.
                 </para>
             </answer>
         </qandaentry>
-        <qandaentry>
+        <qandaentry xml:id="brand.new.compressor">
             <question><para>Why are logs flooded with '2011-01-10 12:40:48,407 INFO org.apache.hadoop.io.compress.CodecPool: Got
             brand-new compressor' messages?</para></question>
             <answer>