Posted to hdfs-commits@hadoop.apache.org by st...@apache.org on 2009/11/28 21:06:08 UTC

svn commit: r885143 [6/18] - in /hadoop/hdfs/branches/HDFS-326: ./ .eclipse.templates/ .eclipse.templates/.launches/ conf/ ivy/ lib/ src/ant/org/apache/hadoop/ant/ src/ant/org/apache/hadoop/ant/condition/ src/c++/ src/c++/libhdfs/ src/c++/libhdfs/docs/...

Modified: hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml?rev=885143&r1=885142&r2=885143&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml (original)
+++ hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_permissions_guide.xml Sat Nov 28 20:05:56 2009
@@ -24,17 +24,33 @@
 
   <header>
     <title>
-      HDFS Permissions Guide
+      Permissions Guide
     </title>
   </header>
 
   <body>
     <section> <title>Overview</title>
       <p>
-		The Hadoop Distributed File System (HDFS) implements a permissions model for files and directories that shares much of the POSIX model. Each file and directory is associated with an <em>owner</em> and a <em>group</em>. The file or directory has separate permissions for the user that is the owner, for other users that are members of the group, and for all other users. For files, the <em>r</em> permission is required to read the file, and the <em>w</em> permission is required to write or append to the file. For directories, the <em>r</em> permission is required to list the contents of the directory, the <em>w</em> permission is required to create or delete files or directories, and the <em>x</em> permission is required to access a child of the directory. In contrast to the POSIX model, there are no <em>setuid</em> or <em>setgid</em> bits for files as there is no notion of executable files. For directories, there are no <em>setuid</em> or <em>setgid</em> bits directory as a s
 implification. The <em>Sticky bit</em> can be set on directories, preventing anyone except the superuser, directory owner or file owner from deleting or moving the files within the directory. Setting the sticky bit for a file has no effect. Collectively, the permissions of a file or directory are its <em>mode</em>. In general, Unix customs for representing and displaying modes will be used, including the use of octal numbers in this description. When a file or directory is created, its owner is the user identity of the client process, and its group is the group of the parent directory (the BSD rule).
+		The Hadoop Distributed File System (HDFS) implements a permissions model for files and directories that shares much of the POSIX model. 
+		Each file and directory is associated with an <em>owner</em> and a <em>group</em>. The file or directory has separate permissions for the 
+		user that is the owner, for other users that are members of the group, and for all other users. 
+		
+		For files, the <em>r</em> permission is required to read the file, and the <em>w</em> permission is required to write or append to the file. 
+		
+		For directories, the <em>r</em> permission is required to list the contents of the directory, the <em>w</em> permission is required to create 
+		or delete files or directories, and the <em>x</em> permission is required to access a child of the directory. 
+		</p>
+	 <p>	
+		In contrast to the POSIX model, there are no <em>setuid</em> or <em>setgid</em> bits for files as there is no notion of executable files. 
+		For directories, there are no <em>setuid</em> or <em>setgid</em> bits as a simplification. The <em>Sticky bit</em> can be set 
+		on directories, preventing anyone except the superuser, directory owner or file owner from deleting or moving the files within the directory. 
+		Setting the sticky bit for a file has no effect. Collectively, the permissions of a file or directory are its <em>mode</em>. In general, Unix 
+		customs for representing and displaying modes will be used, including the use of octal numbers in this description. When a file or directory 
+		is created, its owner is the user identity of the client process, and its group is the group of the parent directory (the BSD rule).
 	</p>
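	<p>
	For illustration, a minimal Java sketch of how the mode described above surfaces in the client API through
	<code>org.apache.hadoop.fs.permission.FsPermission</code>, the class used by the methods later in this guide;
	the octal values shown are arbitrary examples.
	</p>
<source>
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.fs.permission.FsPermission;

public class ModeExample {
  public static void main(String[] args) {
    // 0644: the owner may read and write; group members and others may only read.
    FsPermission mode = new FsPermission((short) 0644);
    System.out.println(mode);                 // prints rw-r--r--
    System.out.println(mode.getUserAction()); // prints READ_WRITE

    // Accessing a child of a directory requires the x bit for the relevant class of user.
    boolean othersMayTraverse = mode.getOtherAction().implies(FsAction.EXECUTE);
    System.out.println(othersMayTraverse);    // prints false for r--
  }
}
</source>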
 	<p>
-		Each client process that accesses HDFS has a two-part identity composed of the <em>user name</em>, and <em>groups list</em>. Whenever HDFS must do a permissions check for a file or directory <code>foo</code> accessed by a client process,
+		Each client process that accesses HDFS has a two-part identity composed of the <em>user name</em> and <em>groups list</em>. 
+		Whenever HDFS must do a permissions check for a file or directory <code>foo</code> accessed by a client process,
 	</p>
 	<ul>
 		<li>
@@ -67,22 +83,34 @@
 </ul>
 
 <p>
-In the future there will be other ways of establishing user identity (think Kerberos, LDAP, and others). There is no expectation that this first method is secure in protecting one user from impersonating another. This user identity mechanism combined with the permissions model allows a cooperative community to share file system resources in an organized fashion.
+In the future there will be other ways of establishing user identity (think Kerberos, LDAP, and others). There is no expectation that 
+this first method is secure in protecting one user from impersonating another. This user identity mechanism combined with the 
+permissions model allows a cooperative community to share file system resources in an organized fashion.
 </p>
 <p>
-In any case, the user identity mechanism is extrinsic to HDFS itself. There is no provision within HDFS for creating user identities, establishing groups, or processing user credentials.
+In any case, the user identity mechanism is extrinsic to HDFS itself. There is no provision within HDFS for creating user identities, 
+establishing groups, or processing user credentials.
 </p>
 </section>
 
 <section> <title>Understanding the Implementation</title>
 <p>
-Each file or directory operation passes the full path name to the name node, and the permissions checks are applied along the path for each operation. The client framework will implicitly associate the user identity with the connection to the name node, reducing the need for changes to the existing client API. It has always been the case that when one operation on a file succeeds, the operation might fail when repeated because the file, or some directory on the path, no longer exists. For instance, when the client first begins reading a file, it makes a first request to the name node to discover the location of the first blocks of the file. A second request made to find additional blocks may fail. On the other hand, deleting a file does not revoke access by a client that already knows the blocks of the file. With the addition of permissions, a client's access to a file may be withdrawn between requests. Again, changing permissions does not revoke the access of a client that 
 already knows the file's blocks.
+Each file or directory operation passes the full path name to the name node, and the permissions checks are applied along the 
+path for each operation. The client framework will implicitly associate the user identity with the connection to the name node, 
+reducing the need for changes to the existing client API. It has always been the case that when one operation on a file succeeds, 
+the operation might fail when repeated because the file, or some directory on the path, no longer exists. For instance, when the 
+client first begins reading a file, it makes a first request to the name node to discover the location of the first blocks of the file. 
+A second request made to find additional blocks may fail. On the other hand, deleting a file does not revoke access by a client 
+that already knows the blocks of the file. With the addition of permissions, a client's access to a file may be withdrawn between 
+requests. Again, changing permissions does not revoke the access of a client that already knows the file's blocks.
 </p>
 <p>
-The map-reduce framework delegates the user identity by passing strings without special concern for confidentiality. The owner and group of a file or directory are stored as strings; there is no conversion from user and group identity numbers as is conventional in Unix.
+The MapReduce framework delegates the user identity by passing strings without special concern for confidentiality. The owner 
+and group of a file or directory are stored as strings; there is no conversion from user and group identity numbers as is conventional in Unix.
 </p>
 <p>
-The permissions features of this release did not require any changes to the behavior of data nodes. Blocks on the data nodes do not have any of the <em>Hadoop</em> ownership or permissions attributes associated with them.
+The permissions features of this release did not require any changes to the behavior of data nodes. Blocks on the data nodes 
+do not have any of the <em>Hadoop</em> ownership or permissions attributes associated with them.
 </p>
 </section>
      
@@ -93,7 +121,8 @@
 <p>New methods:</p>
 <ul>
 	<li>
-		<code>public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, Progressable progress) throws IOException;</code>
+		<code>public FSDataOutputStream create(Path f, FsPermission permission, boolean overwrite, int bufferSize, short 
+		replication, long blockSize, Progressable progress) throws IOException;</code>
 	</li>
 	<li>
 		<code>public boolean mkdirs(Path f, FsPermission permission) throws IOException;</code>
@@ -105,84 +134,115 @@
 		<code>public void setOwner(Path p, String username, String groupname) throws IOException;</code>
 	</li>
 	<li>
-		<code>public FileStatus getFileStatus(Path f) throws IOException;</code> will additionally return the user, group and mode associated with the path.
+		<code>public FileStatus getFileStatus(Path f) throws IOException;</code> will additionally return the user, 
+		group and mode associated with the path.
 	</li>
 
 </ul>
 <p>
-The mode of a new file or directory is restricted my the <code>umask</code> set as a configuration parameter. When the existing <code>create(path, &hellip;)</code> method (<em>without</em> the permission parameter) is used, the mode of the new file is <code>666&thinsp;&amp;&thinsp;^umask</code>. When the new <code>create(path, </code><em>permission</em><code>, &hellip;)</code> method (<em>with</em> the permission parameter <em>P</em>) is used, the mode of the new file is <code>P&thinsp;&amp;&thinsp;^umask&thinsp;&amp;&thinsp;666</code>. When a new directory is created with the existing <code>mkdirs(path)</code> method (<em>without</em> the permission parameter), the mode of the new directory is <code>777&thinsp;&amp;&thinsp;^umask</code>. When the new <code>mkdirs(path, </code><em>permission</em> <code>)</code> method (<em>with</em> the permission parameter <em>P</em>) is used, the mode of new directory is <code>P&thinsp;&amp;&thinsp;^umask&thinsp;&amp;&thinsp;777</code>. 
+The mode of a new file or directory is restricted by the <code>umask</code> set as a configuration parameter. 
+When the existing <code>create(path, &hellip;)</code> method (<em>without</em> the permission parameter) 
+is used, the mode of the new file is <code>666&thinsp;&amp;&thinsp;^umask</code>. When the 
+new <code>create(path, </code><em>permission</em><code>, &hellip;)</code> method 
+(<em>with</em> the permission parameter <em>P</em>) is used, the mode of the new file is 
+<code>P&thinsp;&amp;&thinsp;^umask&thinsp;&amp;&thinsp;666</code>. When a new directory is 
+created with the existing <code>mkdirs(path)</code> method (<em>without</em> the permission parameter), 
+the mode of the new directory is <code>777&thinsp;&amp;&thinsp;^umask</code>. When the 
+new <code>mkdirs(path, </code><em>permission</em> <code>)</code> method (<em>with</em> the 
+permission parameter <em>P</em>) is used, the mode of the new directory is 
+<code>P&thinsp;&amp;&thinsp;^umask&thinsp;&amp;&thinsp;777</code>. 
 </p>
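<p>
A minimal sketch of the methods and umask arithmetic described above, assuming the default <code>umask</code> of 022;
the path names, replication factor and block size are hypothetical values chosen only for the example:
</p>
<source>
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class PermissionApiExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // create with permission P = 664: effective mode is P & ^umask & 666 = 644.
    Path file = new Path("/user/alice/report.txt");
    FSDataOutputStream out = fs.create(file, new FsPermission((short) 0664),
        true /* overwrite */, 4096, (short) 3, 64 * 1024 * 1024L, null);
    out.close();

    // mkdirs with permission P = 775: effective mode is P & ^umask & 777 = 755.
    fs.mkdirs(new Path("/user/alice/reports"), new FsPermission((short) 0775));

    // getFileStatus additionally reports the owner, group and mode.
    FileStatus stat = fs.getFileStatus(file);
    System.out.println(stat.getOwner() + ":" + stat.getGroup()
        + " " + stat.getPermission());
  }
}
</source>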
 </section>
 
      
 <section> <title>Changes to the Application Shell</title>
 <p>New operations:</p>
-<dl>
-	<dt><code>chmod [-R]</code> <em>mode file &hellip;</em></dt>
-	<dd>
-		Only the owner of a file or the super-user is permitted to change the mode of a file.
-	</dd>
-	<dt><code>chgrp [-R]</code> <em>group file &hellip;</em></dt>
-	<dd>
-		The user invoking <code>chgrp</code> must belong to the specified group and be the owner of the file, or be the super-user.
-	</dd>
-	<dt><code>chown [-R]</code> <em>[owner][:[group]] file &hellip;</em></dt>
-	<dd>
-		The owner of a file may only be altered by a super-user.
-	</dd>
-	<dt><code>ls </code> <em>file &hellip;</em></dt><dd></dd>
-	<dt><code>lsr </code> <em>file &hellip;</em></dt>
-	<dd>
-		The output is reformatted to display the owner, group and mode.
-	</dd>
-</dl></section>
+<ul>
+	<li><code>chmod [-R]</code> <em>mode file &hellip;</em>
+	<br />Only the owner of a file or the super-user is permitted to change the mode of a file.
+    </li>
+    
+	<li><code>chgrp [-R]</code> <em>group file &hellip;</em>
+	<br />The user invoking <code>chgrp</code> must belong to the specified group and be the owner of the file, or be the super-user.
+    </li>
+    
+	<li><code>chown [-R]</code> <em>[owner][:[group]] file &hellip;</em>
+    <br />The owner of a file may only be altered by a super-user.
+    </li>
+	
+	<li><code>ls </code> <em>file &hellip;</em>
+	</li>
+
+	<li><code>lsr </code> <em>file &hellip;</em>
+    <br />The output is reformatted to display the owner, group and mode.
+	</li>
+</ul>
+</section>
 
      
 <section> <title>The Super-User</title>
 <p>
-	The super-user is the user with the same identity as name node process itself. Loosely, if you started the name node, then you are the super-user. The super-user can do anything in that permissions checks never fail for the super-user. There is no persistent notion of who <em>was</em> the super-user; when the name node is started the process identity determines who is the super-user <em>for now</em>. The HDFS super-user does not have to be the super-user of the name node host, nor is it necessary that all clusters have the same super-user. Also, an experimenter running HDFS on a personal workstation, conveniently becomes that installation's super-user without any configuration.
+	The super-user is the user with the same identity as the name node process itself. Loosely, if you started the name 
+	node, then you are the super-user. The super-user can do anything in that permissions checks never fail for the 
+	super-user. There is no persistent notion of who <em>was</em> the super-user; when the name node is started 
+	the process identity determines who is the super-user <em>for now</em>. The HDFS super-user does not have 
+	to be the super-user of the name node host, nor is it necessary that all clusters have the same super-user. Also, 
+	an experimenter running HDFS on a personal workstation conveniently becomes that installation's super-user 
+	without any configuration.
 	</p>
 	<p>
-	In addition, the administrator my identify a distinguished group using a configuration parameter. If set, members of this group are also super-users.
+	In addition, the administrator may identify a distinguished group using a configuration parameter. If set, members 
+	of this group are also super-users.
 </p>
 </section>
 
 <section> <title>The Web Server</title>
 <p>
-The identity of the web server is a configuration parameter. That is, the name node has no notion of the identity of the <em>real</em> user, but the web server behaves as if it has the identity (user and groups) of a user chosen by the administrator. Unless the chosen identity matches the super-user, parts of the name space may be invisible to the web server.</p>
+The identity of the web server is a configuration parameter. That is, the name node has no notion of the identity of 
+the <em>real</em> user, but the web server behaves as if it has the identity (user and groups) of a user chosen 
+by the administrator. Unless the chosen identity matches the super-user, parts of the name space may be invisible 
+to the web server.</p>
 </section>
 
 <section> <title>On-line Upgrade</title>
 <p>
-If a cluster starts with a version 0.15 data set (<code>fsimage</code>), all files and directories will have owner <em>O</em>, group <em>G</em>, and mode <em>M</em>, where <em>O</em> and <em>G</em> are the user and group identity of the super-user, and <em>M</em> is a configuration parameter. </p>
+If a cluster starts with a version 0.15 data set (<code>fsimage</code>), all files and directories will have 
+owner <em>O</em>, group <em>G</em>, and mode <em>M</em>, where <em>O</em> and <em>G</em> 
+are the user and group identity of the super-user, and <em>M</em> is a configuration parameter. </p>
 </section>
 
 <section> <title>Configuration Parameters</title>
-<dl>
-	<dt><code>dfs.permissions = true </code></dt>
-	<dd>
-		If <code>yes</code> use the permissions system as described here. If <code>no</code>, permission <em>checking</em> is turned off, but all other behavior is unchanged. Switching from one parameter value to the other does not change the mode, owner or group of files or directories.
-		<p>
-		</p>
-		Regardless of whether permissions are on or off, <code>chmod</code>, <code>chgrp</code> and <code>chown</code> <em>always</em> check permissions. These functions are only useful in the permissions context, and so there is no backwards compatibility issue. Furthermore, this allows administrators to reliably set owners and permissions in advance of turning on regular permissions checking.
-	</dd>
-	<dt><code>dfs.web.ugi = webuser,webgroup</code></dt>
-	<dd>
-		The user name to be used by the web server. Setting this to the name of the super-user allows any web client to see everything. Changing this to an otherwise unused identity allows web clients to see only those things visible using "other" permissions. Additional groups may be added to the comma-separated list.
-	</dd>
-	<dt><code>dfs.permissions.supergroup = supergroup</code></dt>
-	<dd>
-		The name of the group of super-users.
-	</dd>
-	<dt><code>dfs.upgrade.permission = 0777</code></dt>
-	<dd>
-		The choice of initial mode during upgrade. The <em>x</em> permission is <em>never</em> set for files. For configuration files, the decimal value <em>511<sub>10</sub></em> may be used.
-	</dd>
-	<dt><code>dfs.umask = 022</code></dt>
-	<dd>
-		The <code>umask</code> used when creating files and directories. For configuration files, the decimal value <em>18<sub>10</sub></em> may be used.
-	</dd>
-</dl>
+<ul>
+	<li><code>dfs.permissions = true </code>
+		<br />If <code>yes</code>, use the permissions system as described here. If <code>no</code>, permission 
+		<em>checking</em> is turned off, but all other behavior is unchanged. Switching from one parameter 
+		value to the other does not change the mode, owner or group of files or directories.
+		<br />Regardless of whether permissions are on or off, <code>chmod</code>, <code>chgrp</code> and 
+		<code>chown</code> <em>always</em> check permissions. These functions are only useful in the 
+		permissions context, and so there is no backwards compatibility issue. Furthermore, this allows 
+		administrators to reliably set owners and permissions in advance of turning on regular permissions checking.
+    </li>
+
+	<li><code>dfs.web.ugi = webuser,webgroup</code>
+	<br />The user name to be used by the web server. Setting this to the name of the super-user allows any 
+		web client to see everything. Changing this to an otherwise unused identity allows web clients to see 
+		only those things visible using "other" permissions. Additional groups may be added to the comma-separated list.
+    </li>
+    
+	<li><code>dfs.permissions.supergroup = supergroup</code>
+	<br />The name of the group of super-users.
+	</li>
+
+	<li><code>dfs.upgrade.permission = 0777</code>
+	<br />The choice of initial mode during upgrade. The <em>x</em> permission is <em>never</em> set for files. 
+		For configuration files, the decimal value <em>511<sub>10</sub></em> may be used.
+    </li>
+    
+	<li><code>dfs.umask = 022</code>
+    <br />The <code>umask</code> used when creating files and directories. For configuration files, the decimal 
+		value <em>18<sub>10</sub></em> may be used.
+	</li>
+</ul>
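<p>
A minimal sketch of reading these parameters from a client-side <code>Configuration</code> object, assuming the key names
and default values listed above (note that <code>hdfs-default.xml</code> elsewhere in this commit renames some of these keys,
for example <code>dfs.permissions</code> to <code>dfs.permissions.enabled</code>):
</p>
<source>
import org.apache.hadoop.conf.Configuration;

public class PermissionConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();

    // Key names and defaults as listed in this section.
    boolean checking   = conf.getBoolean("dfs.permissions", true);
    String  webUgi     = conf.get("dfs.web.ugi", "webuser,webgroup");
    String  supergroup = conf.get("dfs.permissions.supergroup", "supergroup");
    String  upgrade    = conf.get("dfs.upgrade.permission", "0777");
    String  umask      = conf.get("dfs.umask", "022");

    System.out.println("permission checking: " + checking);
    System.out.println("web server identity: " + webUgi);
    System.out.println("super-user group:    " + supergroup);
    System.out.println("upgrade mode: " + upgrade + ", umask: " + umask);
  }
}
</source>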
 </section>
 
      

Modified: hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml?rev=885143&r1=885142&r2=885143&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml (original)
+++ hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_quota_admin_guide.xml Sat Nov 28 20:05:56 2009
@@ -20,13 +20,16 @@
 
 <document>
 
- <header> <title> HDFS Quotas Guide</title> </header>
+ <header> <title>Quotas Guide</title> </header>
 
  <body>
+ 
+ <section> <title>Overview</title>
 
- <p> The Hadoop Distributed File System (HDFS) allows the administrator to set quotas for the number of names used and the
+ <p> The Hadoop Distributed File System (HDFS) allows the <strong>administrator</strong> to set quotas for the number of names used and the
 amount of space used for individual directories. Name quotas and space quotas operate independently, but the administration and
 implementation of the two types of quotas are closely parallel. </p>
+</section>
 
 <section> <title>Name Quotas</title>
 

Modified: hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml?rev=885143&r1=885142&r2=885143&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml (original)
+++ hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/hdfs_user_guide.xml Sat Nov 28 20:05:56 2009
@@ -24,7 +24,7 @@
 
   <header>
     <title>
-      HDFS User Guide
+      HDFS Users Guide
     </title>
   </header>
 
@@ -32,9 +32,8 @@
     <section> <title>Purpose</title>
       <p>
  This document is a starting point for users working with
- Hadoop Distributed File System (HDFS) either as a part of a
- <a href="http://hadoop.apache.org/">Hadoop</a>
- cluster or as a stand-alone general purpose distributed file system.
+ Hadoop Distributed File System (HDFS) either as a part of a Hadoop cluster  
+ or as a stand-alone general purpose distributed file system.
  While HDFS is designed to "just work" in many environments, a working
  knowledge of HDFS helps greatly with configuration improvements and
  diagnostics on a specific cluster.
@@ -46,7 +45,7 @@
  HDFS is the primary distributed storage used by Hadoop applications. A
  HDFS cluster primarily consists of a NameNode that manages the
  file system metadata and DataNodes that store the actual data. The
- <a href="hdfs_design.html">HDFS Architecture</a> describes HDFS in detail. This user guide primarily deals with 
+ <a href="hdfs_design.html">HDFS Architecture Guide</a> describes HDFS in detail. This user guide primarily deals with 
  the interaction of users and administrators with HDFS clusters. 
  The <a href="images/hdfsarchitecture.gif">HDFS architecture diagram</a> depicts 
  basic interactions among NameNode, the DataNodes, and the clients. 
@@ -61,8 +60,7 @@
     <li>
     	Hadoop, including HDFS, is well suited for distributed storage
     	and distributed processing using commodity hardware. It is fault
-    	tolerant, scalable, and extremely simple to expand.
-    	<a href="mapred_tutorial.html">Map/Reduce</a>,
+    	tolerant, scalable, and extremely simple to expand. MapReduce, 
     	well known for its simplicity and applicability for large set of
     	distributed applications, is an integral part of Hadoop.
     </li>
@@ -134,18 +132,17 @@
     </li>
     </ul>
     
-    </section> <section> <title> Pre-requisites </title>
+    </section> <section> <title> Prerequisites </title>
     <p>
- 	The following documents describe installation and set up of a
- 	Hadoop cluster : 
+ 	The following documents describe how to install and set up a Hadoop cluster: 
     </p>
  	<ul>
  	<li>
- 		<a href="quickstart.html">Hadoop Quick Start</a>
+ 		<a href="http://hadoop.apache.org/common/docs/current/single_node_setup.html">Single Node Setup</a>
  		for first-time users.
  	</li>
  	<li>
- 		<a href="cluster_setup.html">Hadoop Cluster Setup</a>
+ 		<a href="http://hadoop.apache.org/common/docs/current/cluster_setup.html">Cluster Setup</a>
  		for large, distributed clusters.
  	</li>
     </ul>
@@ -173,14 +170,15 @@
       Hadoop includes various shell-like commands that directly
       interact with HDFS and other file systems that Hadoop supports.
       The command
-      <code>bin/hadoop fs -help</code>
+      <code>bin/hdfs dfs -help</code>
       lists the commands supported by Hadoop
       shell. Furthermore, the command
-      <code>bin/hadoop fs -help command-name</code>
+      <code>bin/hdfs dfs -help command-name</code>
       displays more detailed help for a command. These commands support
-      most of the normal files ystem operations like copying files,
+      most of the normal file system operations like copying files,
       changing file permissions, etc. It also supports a few HDFS
-      specific operations like changing replication of files.
+      specific operations like changing replication of files. 
+      For more information see <a href="http://hadoop.apache.org/common/docs/current/file_system_shell.html">File System Shell Guide</a>.
      </p>
 
    <section> <title> DFSAdmin Command </title>
@@ -223,17 +221,19 @@
     </li>
    	</ul>
    	<p>
-   	  For command usage, see <a href="commands_manual.html#dfsadmin">dfsadmin command</a>.
+   	  For command usage, see  
+   	  <a href="http://hadoop.apache.org/common/docs/current/commands_manual.html#dfsadmin">dfsadmin</a>.
    	</p>  
    </section>
    
    </section> 
 	<section> <title>Secondary NameNode</title>
-   <p>
-     The Secondary NameNode has been deprecated; considering using the 
-   <a href="hdfs_user_guide.html#Checkpoint+node">Checkpoint node</a> or 
-   <a href="hdfs_user_guide.html#Backup+node">Backup node</a> instead.
-   </p>
+   <note>
+   The Secondary NameNode has been deprecated. 
+   Instead, consider using the 
+   <a href="hdfs_user_guide.html#Checkpoint+node">Checkpoint Node</a> or 
+   <a href="hdfs_user_guide.html#Backup+node">Backup Node</a>.
+   </note>
    <p>	
      The NameNode stores modifications to the file system as a log
      appended to a native file system file, <code>edits</code>. 
@@ -277,10 +277,11 @@
      read by the primary NameNode if necessary.
    </p>
    <p>
-     For command usage, see <a href="commands_manual.html#secondarynamenode"><code>secondarynamenode</code> command</a>.
+     For command usage, see  
+     <a href="http://hadoop.apache.org/common/docs/current/commands_manual.html#secondarynamenode">secondarynamenode</a>.
    </p>
    
-   </section><section> <title> Checkpoint node </title>
+   </section><section> <title> Checkpoint Node </title>
    <p>NameNode persists its namespace using two files: <code>fsimage</code>,
       which is the latest checkpoint of the namespace and <code>edits</code>,
       a journal (log) of changes to the namespace since the checkpoint.
@@ -329,17 +330,17 @@
    </p>
    <p>Multiple checkpoint nodes may be specified in the cluster configuration file.</p>
    <p>
-     For command usage, see
-     <a href="commands_manual.html#namenode"><code>namenode</code> command</a>.
+     For command usage, see  
+     <a href="http://hadoop.apache.org/common/docs/current/commands_manual.html#namenode">namenode</a>.
    </p>
    </section>
 
-   <section> <title> Backup node </title>
+   <section> <title> Backup Node </title>
    <p>	
     The Backup node provides the same checkpointing functionality as the 
     Checkpoint node, as well as maintaining an in-memory, up-to-date copy of the
     file system namespace that is always synchronized with the active NameNode state.
-    Along with accepting a journal stream of filesystem edits from 
+    Along with accepting a journal stream of file system edits from 
     the NameNode and persisting this to disk, the Backup node also applies 
     those edits into its own copy of the namespace in memory, thus creating 
     a backup of the namespace.
@@ -384,12 +385,12 @@
     For a complete discussion of the motivation behind the creation of the 
     Backup node and Checkpoint node, see 
     <a href="https://issues.apache.org/jira/browse/HADOOP-4539">HADOOP-4539</a>.
-    For command usage, see 
-    <a href="commands_manual.html#namenode"><code>namenode</code> command</a>.
+    For command usage, see  
+     <a href="http://hadoop.apache.org/common/docs/current/commands_manual.html#namenode">namenode</a>.
    </p>
    </section>
 
-   <section> <title> Import checkpoint </title>
+   <section> <title> Import Checkpoint </title>
    <p>
      The latest checkpoint can be imported to the NameNode if
      all other copies of the image and the edits files are lost.
@@ -418,8 +419,8 @@
      consistent, but does not modify it in any way.
    </p>
    <p>
-     For command usage, see
-     <a href="commands_manual.html#namenode"><code>namenode</code> command</a>.
+     For command usage, see  
+      <a href="http://hadoop.apache.org/common/docs/current/commands_manual.html#namenode">namenode</a>.
    </p>
    </section>
 
@@ -461,7 +462,8 @@
       <a href="http://issues.apache.org/jira/browse/HADOOP-1652">HADOOP-1652</a>.
     </p>
     <p>
-     For command usage, see <a href="commands_manual.html#balancer">balancer command</a>.
+     For command usage, see  
+     <a href="http://hadoop.apache.org/common/docs/current/commands_manual.html#balancer">balancer</a>.
    </p>
     
    </section> <section> <title> Rack Awareness </title>
@@ -512,7 +514,8 @@
       <code>fsck</code> ignores open files but provides an option to select all files during reporting.
       The HDFS <code>fsck</code> command is not a
       Hadoop shell command. It can be run as '<code>bin/hadoop fsck</code>'.
-      For command usage, see <a href="commands_manual.html#fsck"><code>fsck</code> command</a>. 
+      For command usage, see  
+      <a href="http://hadoop.apache.org/common/docs/current/commands_manual.html#fsck">fsck</a>.
       <code>fsck</code> can be run on the whole file system or on a subset of files.
      </p>
      
@@ -527,7 +530,7 @@
       of Hadoop and rollback the cluster to the state it was in 
       before
       the upgrade. HDFS upgrade is described in more detail in 
-      <a href="http://wiki.apache.org/hadoop/Hadoop%20Upgrade">upgrade wiki</a>.
+      the <a href="http://wiki.apache.org/hadoop/Hadoop%20Upgrade">Hadoop Upgrade</a> Wiki page.
       HDFS can have one such backup at a time. Before upgrading,
       administrators need to remove existing backup using <code>bin/hadoop
       dfsadmin -finalizeUpgrade</code> command. The following
@@ -571,13 +574,13 @@
       treated as the superuser for HDFS. Future versions of HDFS will
       support network authentication protocols like Kerberos for user
       authentication and encryption of data transfers. The details are discussed in the 
-      <a href="hdfs_permissions_guide.html">HDFS Admin Guide: Permissions</a>.
+      <a href="hdfs_permissions_guide.html">Permissions Guide</a>.
      </p>
      
    </section> <section> <title> Scalability </title>
      <p>
-      Hadoop currently runs on clusters with thousands of nodes.
-      <a href="http://wiki.apache.org/hadoop/PoweredBy">Powered By Hadoop</a>
+      Hadoop currently runs on clusters with thousands of nodes. The  
+      <a href="http://wiki.apache.org/hadoop/PoweredBy">PoweredBy</a> Wiki page 
       lists some of the organizations that deploy Hadoop on large
       clusters. HDFS has one NameNode for each cluster. Currently
       the total memory available on NameNode is the primary scalability
@@ -585,8 +588,8 @@
       files stored in HDFS helps with increasing cluster size without
       increasing memory requirements on NameNode.
    
-      The default configuration may not suite very large clustes.
-      <a href="http://wiki.apache.org/hadoop/FAQ">Hadoop FAQ</a> page lists
+      The default configuration may not suit very large clusters. The 
+      <a href="http://wiki.apache.org/hadoop/FAQ">FAQ</a> Wiki page lists
       suggested configuration improvements for large Hadoop clusters.
      </p>
      
@@ -599,15 +602,16 @@
       </p>
       <ul>
       <li>
-        <a href="http://hadoop.apache.org/">Hadoop Home Page</a>: The start page for everything Hadoop.
+        <a href="http://hadoop.apache.org/">Hadoop Site</a>: The home page for the Apache Hadoop site.
       </li>
       <li>
-        <a href="http://wiki.apache.org/hadoop/FrontPage">Hadoop Wiki</a>
-        : Front page for Hadoop Wiki documentation. Unlike this
-        guide which is part of Hadoop source tree, Hadoop Wiki is
+        <a href="http://wiki.apache.org/hadoop/FrontPage">Hadoop Wiki</a>:
+        The home page (FrontPage) for the Hadoop Wiki. Unlike the released documentation, 
+        which is part of the Hadoop source tree, the Hadoop Wiki is
         regularly edited by Hadoop Community.
       </li>
-      <li> <a href="http://wiki.apache.org/hadoop/FAQ">FAQ</a> from Hadoop Wiki.
+      <li> <a href="http://wiki.apache.org/hadoop/FAQ">FAQ</a>: 
+      The FAQ Wiki page.
       </li>
       <li>
         Hadoop <a href="http://hadoop.apache.org/core/docs/current/api/">
@@ -623,7 +627,7 @@
          description of most of the configuration variables available.
       </li>
       <li>
-        <a href="commands_manual.html">Hadoop Command Guide</a>: commands usage.
+        <a href="http://hadoop.apache.org/common/docs/current/commands_manual.html">Hadoop Commands Guide</a>: Hadoop commands usage.
       </li>
       </ul>
      </section>

Modified: hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/index.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/index.xml?rev=885143&r1=885142&r2=885143&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/index.xml (original)
+++ hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/index.xml Sat Nov 28 20:05:56 2009
@@ -19,19 +19,28 @@
 <!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" "http://forrest.apache.org/dtd/document-v20.dtd">
 
 <document>
-  
   <header>
     <title>Overview</title>
   </header>
-  
   <body>
+  
   <p>
-  The Hadoop Documentation provides the information you need to get started using Hadoop, the Hadoop Distributed File System (HDFS), and Hadoop on Demand (HOD).
-  </p><p>
-Begin with the <a href="quickstart.html">Hadoop Quick Start</a> which shows you how to set up a single-node Hadoop installation. Then move on to the <a href="cluster_setup.html">Hadoop Cluster Setup</a> to learn how to set up a multi-node Hadoop installation. Once your Hadoop installation is in place, try out the <a href="mapred_tutorial.html">Hadoop Map/Reduce Tutorial</a>. 
-  </p><p>
-If you have more questions, you can ask on the <a href="ext:lists">Hadoop Core Mailing Lists</a> or browse the <a href="ext:archive">Mailing List Archives</a>.
-    </p>
-  </body>
+  The HDFS Documentation provides the information you need to get started using the Hadoop Distributed File System. 
+  Begin with the <a href="hdfs_user_guide.html">HDFS Users Guide</a> to obtain an overview of the system and then
+  move on to the <a href="hdfs_design.html">HDFS Architecture Guide</a> for more detailed information.
+  </p>
   
+  <p>
+   HDFS commonly works in tandem with a cluster environment and MapReduce applications. 
+   For information about Hadoop clusters (single or multi node) see the 
+ <a href="http://hadoop.apache.org/common/docs/current/index.html">Hadoop Common Documentation</a>.
+   For information about MapReduce see the 
+ <a href="http://hadoop.apache.org/mapreduce/docs/current/index.html">MapReduce Documentation</a>.
+  </p>   
+  
+<p>
+If you have more questions, you can ask on the <a href="ext:lists">HDFS Mailing Lists</a> or browse the <a href="ext:archive">Mailing List Archives</a>.
+</p>
+
+</body>
 </document>

Modified: hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/libhdfs.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/libhdfs.xml?rev=885143&r1=885142&r2=885143&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/libhdfs.xml (original)
+++ hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/libhdfs.xml Sat Nov 28 20:05:56 2009
@@ -19,20 +19,22 @@
 <!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN"
           "http://forrest.apache.org/dtd/document-v20.dtd">
 
-
 <document>
 <header>
-<title>C API to HDFS: libhdfs</title>
+<title>C API libhdfs</title>
 <meta name="http-equiv">Content-Type</meta>
 <meta name="content">text/html;</meta>
 <meta name="charset">utf-8</meta>
 </header>
 <body>
 <section>
-<title>C API to HDFS: libhdfs</title>
+<title>Overview</title>
 
 <p>
-libhdfs is a JNI based C api for Hadoop's DFS. It provides C apis to a subset of the HDFS APIs to manipulate DFS files and the filesystem. libhdfs is part of the hadoop distribution and comes pre-compiled in ${HADOOP_HOME}/libhdfs/libhdfs.so .
+libhdfs is a JNI based C API for Hadoop's Distributed File System (HDFS).
+It provides C APIs to a subset of the HDFS APIs to manipulate HDFS files and
+the filesystem. libhdfs is part of the Hadoop distribution and comes 
+pre-compiled in ${HADOOP_HOME}/libhdfs/libhdfs.so.
 </p>
 
 </section>
@@ -47,7 +49,7 @@
 </p>
 </section>
 <section>
-<title>A sample program</title>
+<title>A Sample Program</title>
 
 <source>
 #include "hdfs.h" 
@@ -69,29 +71,40 @@
     }
    hdfsCloseFile(fs, writeFile);
 }
-
 </source>
 </section>
 
 <section>
-<title>How to link with the library</title>
+<title>How To Link With The Library</title>
 <p>
-See the Makefile for hdfs_test.c in the libhdfs source directory (${HADOOP_HOME}/src/c++/libhdfs/Makefile) or something like:
+See the Makefile for hdfs_test.c in the libhdfs source directory (${HADOOP_HOME}/src/c++/libhdfs/Makefile) or something like:<br />
 gcc above_sample.c -I${HADOOP_HOME}/src/c++/libhdfs -L${HADOOP_HOME}/libhdfs -lhdfs -o above_sample
 </p>
 </section>
 <section>
-<title>Common problems</title>
+<title>Common Problems</title>
 <p>
-The most common problem is the CLASSPATH is not set properly when calling a program that uses libhdfs. Make sure you set it to all the hadoop jars needed to run Hadoop itself. Currently, there is no way to programmatically generate the classpath, but a good bet is to include all the jar files in ${HADOOP_HOME} and ${HADOOP_HOME}/lib as well as the right configuration directory containing hdfs-site.xml
+The most common problem is the CLASSPATH is not set properly when calling a program that uses libhdfs. 
+Make sure you set it to all the Hadoop jars needed to run Hadoop itself. Currently, there is no way to 
+programmatically generate the classpath, but a good bet is to include all the jar files in ${HADOOP_HOME} 
+and ${HADOOP_HOME}/lib as well as the right configuration directory containing hdfs-site.xml.
 </p>
 </section>
 <section>
-<title>libhdfs is thread safe</title>
-<p>Concurrency and Hadoop FS "handles" - the hadoop FS implementation includes a FS handle cache which caches based on the URI of the namenode along with the user connecting. So, all calls to hdfsConnect will return the same handle but calls to hdfsConnectAsUser with different users will return different handles.  But, since HDFS client handles are completely thread safe, this has no bearing on concurrency. 
-</p>
-<p>Concurrency and libhdfs/JNI - the libhdfs calls to JNI should always be creating thread local storage, so (in theory), libhdfs should be as thread safe as the underlying calls to the Hadoop FS.
-</p>
+<title>Thread Safe</title>
+<p>libhdfs is thread safe.</p>
+<ul>
+<li>Concurrency and Hadoop FS "handles" 
+<br />The Hadoop FS implementation includes a FS handle cache which caches based on the URI of the 
+namenode along with the user connecting. So, all calls to hdfsConnect will return the same handle but 
+calls to hdfsConnectAsUser with different users will return different handles.  But, since HDFS client 
+handles are completely thread safe, this has no bearing on concurrency. 
+</li>
+<li>Concurrency and libhdfs/JNI 
+<br />The libhdfs calls to JNI should always be creating thread local storage, so (in theory), libhdfs 
+should be as thread safe as the underlying calls to the Hadoop FS.
+</li>
+</ul>
 </section>
 </body>
 </document>

Modified: hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/site.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/site.xml?rev=885143&r1=885142&r2=885143&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/site.xml (original)
+++ hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/site.xml Sat Nov 28 20:05:56 2009
@@ -33,43 +33,20 @@
 <site label="Hadoop" href="" xmlns="http://apache.org/forrest/linkmap/1.0">
   
    <docs label="Getting Started"> 
-		<overview   				label="Overview" 					href="index.html" />
-		<quickstart 				label="Quick Start"        		href="quickstart.html" />
-		<setup     					label="Cluster Setup"      		href="cluster_setup.html" />
-		<mapred    				label="Map/Reduce Tutorial" 	href="mapred_tutorial.html" />
-  </docs>	
-		
- <docs label="Programming Guides">
-		<commands 				label="Commands"     					href="commands_manual.html" />
-		<distcp    					label="DistCp"       						href="distcp.html" />
-		<native_lib    				label="Native Libraries" 					href="native_libraries.html" />
-		<streaming 				label="Streaming"          				href="streaming.html" />
-		<fair_scheduler 			label="Fair Scheduler" 					href="fair_scheduler.html"/>
-		<cap_scheduler 		label="Capacity Scheduler" 			href="capacity_scheduler.html"/>
-		<SLA					 	label="Service Level Authorization" 	href="service_level_auth.html"/>
-		<vaidya    					label="Vaidya" 								href="vaidya.html"/>
-		<archives  				label="Archives"     						href="hadoop_archives.html"/>
+     <hdfsproxy 			label="HDFS Proxy" 					href="hdfsproxy.html"/>
+     <hdfs_user      				label="User Guide"    							href="hdfs_user_guide.html" />
+     <hdfs_arch     				label="Architecture"  								href="hdfs_design.html" />	
    </docs>
-   
-   <docs label="HDFS">
-		<hdfs_user      				label="User Guide"    							href="hdfs_user_guide.html" />
-		<hdfs_arch     				label="Architecture"  								href="hdfs_design.html" />	
-		<hdfs_fs       	 				label="File System Shell Guide"     		href="hdfs_shell.html" />
-		<hdfs_perm      				label="Permissions Guide"    					href="hdfs_permissions_guide.html" />
-		<hdfs_quotas     			label="Quotas Guide" 							href="hdfs_quota_admin_guide.html" />
-		<hdfs_SLG        			label="Synthetic Load Generator Guide"  href="SLG_user_guide.html" />
-		<hdfs_imageviewer						label="Offline Image Viewer Guide"	href="hdfs_imageviewer.html" />
-		<hdfs_libhdfs   				label="C API libhdfs"         						href="libhdfs.html" /> 
-                <docs label="Testing">
-                    <faultinject_framework              label="Fault Injection"                                                     href="faultinject_framework.html" />
-                </docs>
-   </docs> 
-   
-   <docs label="HOD">
-		<hod_user 	label="User Guide" 	href="hod_user_guide.html"/>
-		<hod_admin 	label="Admin Guide" 	href="hod_admin_guide.html"/>
-		<hod_config 	label="Config Guide" 	href="hod_config_guide.html"/> 
-   </docs> 
+   <docs label="Guides">
+      <hdfs_perm      				label="Permissions Guide"    					href="hdfs_permissions_guide.html" />
+      <hdfs_quotas     			label="Quotas Guide" 							href="hdfs_quota_admin_guide.html" />
+      <hdfs_SLG        			label="Synthetic Load Generator Guide"  href="SLG_user_guide.html" />
+      <hdfs_imageviewer						label="Offline Image Viewer Guide"	href="hdfs_imageviewer.html" />
+      <hdfs_libhdfs   				label="C API libhdfs"         						href="libhdfs.html" /> 
+    </docs>
+    <docs label="Testing">
+      <faultinject_framework              label="Fault Injection"                                                     href="faultinject_framework.html" />
+    </docs>
    
    <docs label="Miscellaneous"> 
 		<api       	label="API Docs"           href="ext:api/index" />
@@ -81,19 +58,20 @@
    </docs> 
    
   <external-refs>
-    <site      href="http://hadoop.apache.org/core/"/>
-    <lists     href="http://hadoop.apache.org/core/mailing_lists.html"/>
-    <archive   href="http://mail-archives.apache.org/mod_mbox/hadoop-core-commits/"/>
-    <releases  href="http://hadoop.apache.org/core/releases.html">
-      <download href="#Download" />
+    <site      href="http://hadoop.apache.org/hdfs/"/>
+    <lists     href="http://hadoop.apache.org/hdfs/mailing_lists.html"/>
+    <archive   href="http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-commits/"/>
+    <releases  href="http://hadoop.apache.org/hdfs/releases.html">
+              <download href="#Download" />
     </releases>
-    <jira      href="http://hadoop.apache.org/core/issue_tracking.html"/>
-    <wiki      href="http://wiki.apache.org/hadoop/" />
-    <faq       href="http://wiki.apache.org/hadoop/FAQ" />
-    <hadoop-default href="http://hadoop.apache.org/core/docs/current/hadoop-default.html" />
-    <core-default href="http://hadoop.apache.org/core/docs/current/core-default.html" />
-    <hdfs-default href="http://hadoop.apache.org/core/docs/current/hdfs-default.html" />
-    <mapred-default href="http://hadoop.apache.org/core/docs/current/mapred-default.html" />
+    <jira      href="http://hadoop.apache.org/hdfs/issue_tracking.html"/>
+    <wiki      href="http://wiki.apache.org/hadoop/HDFS" />
+    <faq       href="http://wiki.apache.org/hadoop/HDFS/FAQ" />
+    
+    <common-default href="http://hadoop.apache.org/common/docs/current/common-default.html" />
+    <hdfs-default href="http://hadoop.apache.org/hdfs/docs/current/hdfs-default.html" />
+    <mapred-default href="http://hadoop.apache.org/mapreduce/docs/current/mapred-default.html" />
+    
     <zlib      href="http://www.zlib.net/" />
     <gzip      href="http://www.gzip.org/" />
     <bzip      href="http://www.bzip.org/" />

Modified: hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/tabs.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/tabs.xml?rev=885143&r1=885142&r2=885143&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/tabs.xml (original)
+++ hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/content/xdocs/tabs.xml Sat Nov 28 20:05:56 2009
@@ -30,8 +30,8 @@
     directory (ends in '/'), in which case /index.html will be added
   -->
 
-  <tab label="Project" href="http://hadoop.apache.org/core/" />
-  <tab label="Wiki" href="http://wiki.apache.org/hadoop" />
-  <tab label="Hadoop 0.21 Documentation" dir="" />  
+  <tab label="Project" href="http://hadoop.apache.org/hdfs/" />
+  <tab label="Wiki" href="http://wiki.apache.org/hadoop/hdfs" />
+  <tab label="HDFS 0.21 Documentation" dir="" />  
   
 </tabs>

Modified: hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/skinconf.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/skinconf.xml?rev=885143&r1=885142&r2=885143&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/skinconf.xml (original)
+++ hadoop/hdfs/branches/HDFS-326/src/docs/src/documentation/skinconf.xml Sat Nov 28 20:05:56 2009
@@ -67,8 +67,8 @@
   <!-- project logo -->
   <project-name>Hadoop</project-name>
   <project-description>Scalable Computing Platform</project-description>
-  <project-url>http://hadoop.apache.org/core/</project-url>
-  <project-logo>images/core-logo.gif</project-logo>
+  <project-url>http://hadoop.apache.org/hdfs/</project-url>
+  <project-logo>images/hdfs-logo.jpg</project-logo>
 
   <!-- group logo -->
   <group-name>Hadoop</group-name>
@@ -146,13 +146,13 @@
     <!--Headers -->
 	#content h1 {
 	  margin-bottom: .5em;
-	  font-size: 200%; color: black;
+	  font-size: 185%; color: black;
 	  font-family: arial;
 	}  
-    h2, .h3 { font-size: 195%; color: black; font-family: arial; }
-	h3, .h4 { font-size: 140%; color: black; font-family: arial; margin-bottom: 0.5em; }
+    h2, .h3 { font-size: 175%; color: black; font-family: arial; }
+	h3, .h4 { font-size: 135%; color: black; font-family: arial; margin-bottom: 0.5em; }
 	h4, .h5 { font-size: 125%; color: black;  font-style: italic; font-weight: bold; font-family: arial; }
-	h5, h6 { font-size: 110%; color: #363636; font-weight: bold; } 
+	h5, h6 { font-size: 110%; color: #363636; font-weight: bold; }    
    
    <!--Code Background -->
     pre.code {

Propchange: hadoop/hdfs/branches/HDFS-326/src/java/
------------------------------------------------------------------------------
--- svn:mergeinfo (original)
+++ svn:mergeinfo Sat Nov 28 20:05:56 2009
@@ -1,3 +1,5 @@
 /hadoop/core/branches/branch-0.19/hdfs/src/java:713112
 /hadoop/core/trunk/src/hdfs:776175-785643,785929-786278
-/hadoop/hdfs/trunk/src/java:804973-807690
+/hadoop/hdfs/branches/HDFS-265/src/java:796829-820463
+/hadoop/hdfs/branches/branch-0.21/src/java:820487
+/hadoop/hdfs/trunk/src/java:804973-884907

Modified: hadoop/hdfs/branches/HDFS-326/src/java/hdfs-default.xml
URL: http://svn.apache.org/viewvc/hadoop/hdfs/branches/HDFS-326/src/java/hdfs-default.xml?rev=885143&r1=885142&r2=885143&view=diff
==============================================================================
--- hadoop/hdfs/branches/HDFS-326/src/java/hdfs-default.xml (original)
+++ hadoop/hdfs/branches/HDFS-326/src/java/hdfs-default.xml Sat Nov 28 20:05:56 2009
@@ -8,6 +8,12 @@
 <configuration>
 
 <property>
+  <name>hadoop.hdfs.configuration.version</name>
+  <value>1</value>
+  <description>version of this configuration file</description>
+</property>
+
+<property>
   <name>dfs.namenode.logging.level</name>
   <value>info</value>
   <description>The logging level for dfs namenode. Other values are "dir"(trac
@@ -16,7 +22,7 @@
 </property>
 
 <property>
-  <name>dfs.secondary.http.address</name>
+  <name>dfs.namenode.secondary.http-address</name>
   <value>0.0.0.0:50090</value>
   <description>
     The secondary namenode http server address and port.
@@ -58,7 +64,7 @@
 </property>
 
 <property>
-  <name>dfs.http.address</name>
+  <name>dfs.namenode.http-address</name>
   <value>0.0.0.0:50070</value>
   <description>
     The address and the base port where the dfs namenode web ui will listen on.
@@ -74,7 +80,7 @@
 </property>
 
 <property>
-  <name>dfs.https.need.client.auth</name>
+  <name>dfs.client.https.need-auth</name>
   <value>false</value>
   <description>Whether SSL client certificate authentication is required
   </description>
@@ -89,7 +95,7 @@
 </property>
 
 <property>
-  <name>dfs.https.client.keystore.resource</name>
+  <name>dfs.client.https.keystore.resource</name>
   <value>ssl-client.xml</value>
   <description>Resource file from which ssl client keystore
   information will be extracted
@@ -102,7 +108,7 @@
 </property>
 
 <property>
-  <name>dfs.https.address</name>
+  <name>dfs.namenode.https-address</name>
   <value>0.0.0.0:50470</value>
 </property>
 
@@ -124,7 +130,7 @@
  </property>
  
  <property>
-  <name>dfs.backup.address</name>
+  <name>dfs.namenode.backup.address</name>
   <value>0.0.0.0:50100</value>
   <description>
     The backup node server address and port.
@@ -133,7 +139,7 @@
 </property>
  
  <property>
-  <name>dfs.backup.http.address</name>
+  <name>dfs.namenode.backup.http-address</name>
   <value>0.0.0.0:50105</value>
   <description>
     The backup node http server address and port.
@@ -142,7 +148,7 @@
 </property>
 
 <property>
-  <name>dfs.replication.considerLoad</name>
+  <name>dfs.namenode.replication.considerLoad</name>
   <value>true</value>
   <description>Decide if chooseTarget considers the target's load or not
   </description>
@@ -162,7 +168,7 @@
 </property>
 
 <property>
-  <name>dfs.name.dir</name>
+  <name>dfs.namenode.name.dir</name>
   <value>${hadoop.tmp.dir}/dfs/name</value>
   <description>Determines where on the local filesystem the DFS name node
       should store the name table(fsimage).  If this is a comma-delimited list
@@ -171,8 +177,8 @@
 </property>
 
 <property>
-  <name>dfs.name.edits.dir</name>
-  <value>${dfs.name.dir}</value>
+  <name>dfs.namenode.edits.dir</name>
+  <value>${dfs.namenode.name.dir}</value>
   <description>Determines where on the local filesystem the DFS name node
       should store the transaction (edits) file. If this is a comma-delimited list
       of directories then the transaction file is replicated in all of the 
@@ -188,7 +194,7 @@
 </property>
 
 <property>
-  <name>dfs.permissions</name>
+  <name>dfs.permissions.enabled</name>
   <value>true</value>
   <description>
     If "true", enable permission checking in HDFS.
@@ -200,13 +206,13 @@
 </property>
 
 <property>
-  <name>dfs.permissions.supergroup</name>
+  <name>dfs.permissions.superusergroup</name>
   <value>supergroup</value>
   <description>The name of the group of super-users.</description>
 </property>
 
 <property>
-  <name>dfs.access.token.enable</name>
+  <name>dfs.block.access.token.enable</name>
   <value>false</value>
   <description>
     If "true", access tokens are used as capabilities for accessing datanodes.
@@ -215,7 +221,7 @@
 </property>
 
 <property>
-  <name>dfs.access.key.update.interval</name>
+  <name>dfs.block.access.key.update.interval</name>
   <value>600</value>
   <description>
     Interval in minutes at which namenode updates its access keys.
@@ -223,13 +229,13 @@
 </property>
 
 <property>
-  <name>dfs.access.token.lifetime</name>
+  <name>dfs.block.access.token.lifetime</name>
   <value>600</value>
   <description>The lifetime of access tokens in minutes.</description>
 </property>
 
 <property>
-  <name>dfs.data.dir</name>
+  <name>dfs.datanode.data.dir</name>
   <value>${hadoop.tmp.dir}/dfs/data</value>
   <description>Determines where on the local filesystem an DFS data node
   should store its blocks.  If this is a comma-delimited
@@ -256,25 +262,19 @@
 </property>
 
 <property>
-  <name>dfs.replication.min</name>
+  <name>dfs.namenode.replication.min</name>
   <value>1</value>
   <description>Minimal block replication. 
   </description>
 </property>
 
 <property>
-  <name>dfs.block.size</name>
+  <name>dfs.blocksize</name>
   <value>67108864</value>
   <description>The default block size for new files.</description>
 </property>
 
 <property>
-  <name>dfs.df.interval</name>
-  <value>60000</value>
-  <description>Disk usage statistics refresh interval in msec.</description>
-</property>
-
-<property>
   <name>dfs.client.block.write.retries</name>
   <value>3</value>
   <description>The number of retries for writing blocks to the data nodes, 
@@ -314,18 +314,18 @@
 </property>
 
 <property>
-  <name>dfs.safemode.threshold.pct</name>
+  <name>dfs.namenode.safemode.threshold-pct</name>
   <value>0.999f</value>
   <description>
     Specifies the percentage of blocks that should satisfy 
-    the minimal replication requirement defined by dfs.replication.min.
+    the minimal replication requirement defined by dfs.namenode.replication.min.
     Values less than or equal to 0 mean not to start in safe mode.
     Values greater than 1 will make safe mode permanent.
   </description>
 </property>
 
 <property>
-  <name>dfs.safemode.extension</name>
+  <name>dfs.namenode.safemode.extension</name>
   <value>30000</value>
   <description>
     Determines extension of safe mode in milliseconds 
@@ -334,7 +334,7 @@
 </property>
 
 <property>
-  <name>dfs.balance.bandwidthPerSec</name>
+  <name>dfs.datanode.balance.bandwidthPerSec</name>
   <value>1048576</value>
   <description>
         Specifies the maximum amount of bandwidth that each datanode
@@ -362,7 +362,7 @@
 </property> 
 
 <property>
-  <name>dfs.max.objects</name>
+  <name>dfs.namenode.max.objects</name>
   <value>0</value>
   <description>The maximum number of files, directories and blocks
   dfs supports. A value of zero indicates no limit to the number
@@ -385,14 +385,14 @@
 </property>
 
 <property>
-  <name>dfs.replication.interval</name>
+  <name>dfs.namenode.replication.interval</name>
   <value>3</value>
   <description>The periodicity in seconds with which the namenode computes 
 replication work for datanodes. </description>
 </property>
 
 <property>
-  <name>dfs.access.time.precision</name>
+  <name>dfs.namenode.accesstime.precision</name>
   <value>3600000</value>
   <description>The access time for HDFS file is precise up to this value. 
                The default value is 1 hour. Setting a value of 0 disables
@@ -423,4 +423,62 @@
   </description>
 </property>
 
+<property>
+  <name>dfs.stream-buffer-size</name>
+  <value>4096</value>
+  <description>The size of buffer to stream files.
+  The size of this buffer should probably be a multiple of hardware
+  page size (4096 on Intel x86), and it determines how much data is
+  buffered during read and write operations.</description>
+</property>
+
+<property>
+  <name>dfs.bytes-per-checksum</name>
+  <value>512</value>
+  <description>The number of bytes per checksum.  Must not be larger than
+  dfs.stream-buffer-size</description>
+</property>
+
+<property>
+  <name>dfs.client-write-packet-size</name>
+  <value>65536</value>
+  <description>Packet size for clients to write</description>
+</property>
+
+<property>
+  <name>dfs.namenode.checkpoint.dir</name>
+  <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
+  <description>Determines where on the local filesystem the DFS secondary
+      name node should store the temporary images to merge.
+      If this is a comma-delimited list of directories then the image is
+      replicated in all of the directories for redundancy.
+  </description>
+</property>
+
+<property>
+  <name>dfs.namenode.checkpoint.edits.dir</name>
+  <value>${dfs.namenode.checkpoint.dir}</value>
+  <description>Determines where on the local filesystem the DFS secondary
+      name node should store the temporary edits to merge.
+      If this is a comma-delimited list of directories then the edits are
+      replicated in all of the directories for redundancy.
+      Default value is the same as fs.checkpoint.dir
+  </description>
+</property>
+
+<property>
+  <name>dfs.namenode.checkpoint.period</name>
+  <value>3600</value>
+  <description>The number of seconds between two periodic checkpoints.
+  </description>
+</property>
+
+<property>
+  <name>dfs.namenode.checkpoint.size</name>
+  <value>67108864</value>
+  <description>The size of the current edit log (in bytes) that triggers
+       a periodic checkpoint even if the fs.checkpoint.period hasn't expired.
+  </description>
+</property>
+
 </configuration>

Propchange: hadoop/hdfs/branches/HDFS-326/src/java/org/apache/hadoop/hdfs/BlockMissingException.java
------------------------------------------------------------------------------
    svn:mime-type = text/plain