Posted to common-commits@hadoop.apache.org by cu...@apache.org on 2006/02/08 22:22:45 UTC

svn commit: r376069 - in /lucene/hadoop/trunk: build.xml conf/configuration.xsl src/java/overview.html

Author: cutting
Date: Wed Feb  8 13:22:19 2006
New Revision: 376069

URL: http://svn.apache.org/viewcvs?rev=376069&view=rev
Log:
Create html documentation for configuration defaults and link to these from javadoc.

Modified:
    lucene/hadoop/trunk/build.xml
    lucene/hadoop/trunk/conf/configuration.xsl
    lucene/hadoop/trunk/src/java/overview.html

Modified: lucene/hadoop/trunk/build.xml
URL: http://svn.apache.org/viewcvs/lucene/hadoop/trunk/build.xml?rev=376069&r1=376068&r2=376069&view=diff
==============================================================================
--- lucene/hadoop/trunk/build.xml (original)
+++ lucene/hadoop/trunk/build.xml Wed Feb  8 13:22:19 2006
@@ -223,7 +223,7 @@
   <!-- ================================================================== -->
   <!-- Documentation                                                      -->
   <!-- ================================================================== -->
-  <target name="javadoc" depends="compile">
+  <target name="javadoc" depends="compile, default-doc">
     <mkdir dir="${build.javadoc}"/>
     <javadoc
       overview="${src.dir}/overview.html"
@@ -247,7 +247,7 @@
   </target>	
 	
   <target name="default-doc">
-    <style basedir="${conf.dir}" destdir="${docs.dir}"
+    <style basedir="${conf.dir}" destdir="${build.docs}"
            includes="hadoop-default.xml" style="conf/configuration.xsl"/>
   </target>
 
@@ -287,8 +287,8 @@
         <fileset dir="${dist.dir}/bin"/>
     </chmod>
 
-    <copy todir="${dist.dir}/docs/api">
-      <fileset dir="${build.javadoc}"/>
+    <copy todir="${dist.dir}/docs">
+      <fileset dir="${build.docs}"/>
     </copy>
 
     <copy todir="${dist.dir}">

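For context, the `<style>` task in the `default-doc` target above transforms `conf/hadoop-default.xml` through `conf/configuration.xsl` into the HTML page that the javadoc now links to. Each entry in that file follows the usual Hadoop configuration shape, roughly like this (the value and description shown here are illustrative, not copied from the actual file):

```xml
<?xml version="1.0"?>
<configuration>
  <!-- One entry per default setting; the stylesheet renders
       each property as one row of the generated HTML table. -->
  <property>
    <name>fs.default.name</name>
    <value>local</value>
    <description>The name of the default file system.</description>
  </property>
</configuration>
```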
Modified: lucene/hadoop/trunk/conf/configuration.xsl
URL: http://svn.apache.org/viewcvs/lucene/hadoop/trunk/conf/configuration.xsl?rev=376069&r1=376068&r2=376069&view=diff
==============================================================================
--- lucene/hadoop/trunk/conf/configuration.xsl (original)
+++ lucene/hadoop/trunk/conf/configuration.xsl Wed Feb  8 13:22:19 2006
@@ -12,7 +12,7 @@
 </tr>
 <xsl:for-each select="property">
 <tr>
-  <td><xsl:value-of select="name"/></td>
+  <td><a name="{name}"><xsl:value-of select="name"/></a></td>
   <td><xsl:value-of select="value"/></td>
   <td><xsl:value-of select="description"/></td>
 </tr>
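The effect of wrapping the property name in `<a name="{name}">` (an XSLT attribute value template) is that each row of the generated `hadoop-default.html` becomes a named link target. A transformed row would look roughly like this (value and description illustrative), which is what makes the `#fs.default.name`-style fragment links added to overview.html below resolve:

```xml
<tr>
  <!-- The anchor's name is the property name itself, so a URL
       ending in "#fs.default.name" jumps directly to this row. -->
  <td><a name="fs.default.name">fs.default.name</a></td>
  <td>local</td>
  <td>The name of the default file system.</td>
</tr>
```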

Modified: lucene/hadoop/trunk/src/java/overview.html
URL: http://svn.apache.org/viewcvs/lucene/hadoop/trunk/src/java/overview.html?rev=376069&r1=376068&r2=376069&view=diff
==============================================================================
--- lucene/hadoop/trunk/src/java/overview.html (original)
+++ lucene/hadoop/trunk/src/java/overview.html Wed Feb  8 13:22:19 2006
@@ -80,11 +80,15 @@
 
 <li>The {@link org.apache.hadoop.dfs.NameNode} (Distributed Filesystem
 master) host and port.  This is specified with the configuration
-property <tt>fs.default.name</tt>.</li>
+property <tt><a
+href="../hadoop-default.html#fs.default.name">fs.default.name</a></tt>.
+</li>
 
 <li>The {@link org.apache.hadoop.mapred.JobTracker} (MapReduce master)
 host and port.  This is specified with the configuration property
-<tt>mapred.job.tracker</tt>.</li>
+<tt><a
+href="../hadoop-default.html#mapred.job.tracker">mapred.job.tracker</a></tt>.
+</li>
 
 <li>A <em>slaves</em> file that lists the names of all the hosts in
 the cluster.  The default slaves file is <tt>~/.slaves</tt>.
@@ -115,11 +119,11 @@
 
 </configuration></xmp>
 
-<p>Note that we also set the DFS replication level to 1 in order to
-reduce the number of warnings.</p>
+<p>(We also set the DFS replication level to 1 in order to
+reduce the number of warnings.)</p>
 
-Now check that the command <br><tt>ssh localhost</tt><br> does not
-require a password.  If it does, execute the following commands:<p>
+<p>Now check that the command <br><tt>ssh localhost</tt><br> does not
+require a password.  If it does, execute the following commands:</p>
 
 <p><tt>ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa<br>
 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
@@ -163,23 +167,33 @@
 
 <ol>
 
-<li>Specify hostname or IP address of the master server in the values for
-<tt>fs.default.name</tt> and <tt>mapred.job.tracker</tt> in
-<tt>conf/hadoop-site.xml</tt>.  These are specified as
+<li>Specify hostname or IP address of the master server in the values
+for <tt><a
+href="../hadoop-default.html#fs.default.name">fs.default.name</a></tt>
+and <tt><a
+href="../hadoop-default.html#mapred.job.tracker">mapred.job.tracker</a></tt>
+in <tt>conf/hadoop-site.xml</tt>.  These are specified as
 <tt><em>host</em>:<em>port</em></tt> pairs.</li>
 
-<li>Specify directories for <tt>dfs.name.dir</tt> and
-<tt>dfs.data.dir</tt> in <tt>conf/hadoop-site.xml</tt>.  These are
-used to hold distributed filesystem data on the master node and slave nodes
-respectively.  Note that <tt>dfs.data.dir</tt> may contain a space- or
-comma-separated list of directory names, so that data may be stored on
-multiple devices.</li>
-
-<li>Specify <tt>mapred.local.dir</tt> in
-<tt>conf/hadoop-site.xml</tt>.  This determines where temporary
+<li>Specify directories for <tt><a
+href="../hadoop-default.html#dfs.name.dir">dfs.name.dir</a></tt> and
+<tt><a
+href="../hadoop-default.html#dfs.data.dir">dfs.data.dir</a></tt> in
+<tt>conf/hadoop-site.xml</tt>.  These are used to hold distributed
+filesystem data on the master node and slave nodes respectively.  Note
+that <tt>dfs.data.dir</tt> may contain a space- or comma-separated
+list of directory names, so that data may be stored on multiple
+devices.</li>
+
+<li>Specify <tt><a
+href="../hadoop-default.html#mapred.local.dir">mapred.local.dir</a></tt>
+in <tt>conf/hadoop-site.xml</tt>.  This determines where temporary
 MapReduce data is written.  It also may be a list of directories.</li>
 
-<li>Specify <tt>mapred.map.tasks</tt> and <tt>mapred.reduce.tasks</tt>
+<li>Specify <tt><a
+href="../hadoop-default.html#mapred.map.tasks">mapred.map.tasks</a></tt>
+and <tt><a
+href="../hadoop-default.html#mapred.reduce.tasks">mapred.reduce.tasks</a></tt>
 in <tt>conf/mapred-default.xml</tt>.  As a rule of thumb, use 10x the
 number of slave processors for <tt>mapred.map.tasks</tt>, and 2x the
 number of slave processors for <tt>mapred.reduce.tasks</tt>.</li>
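Taken together, a minimal multi-node `conf/hadoop-site.xml` following the steps above might look like the sketch below. The hostname, ports, and directory paths are placeholders for illustration, not values from this commit:

```xml
<?xml version="1.0"?>
<configuration>

  <!-- Master host:port for the NameNode and JobTracker
       (step 1); both values here are hypothetical. -->
  <property>
    <name>fs.default.name</name>
    <value>master.example.com:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>master.example.com:9001</value>
  </property>

  <!-- Filesystem data locations (step 2); dfs.data.dir may be a
       comma-separated list so data is spread over multiple devices. -->
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/disk1/dfs/data,/disk2/dfs/data</value>
  </property>

  <!-- Temporary MapReduce data (step 3); may also be a list. -->
  <property>
    <name>mapred.local.dir</name>
    <value>/disk1/mapred/local,/disk2/mapred/local</value>
  </property>

</configuration>
```

Note that `mapred.map.tasks` and `mapred.reduce.tasks` belong in `conf/mapred-default.xml` rather than here, per the last step above.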