Posted to common-commits@hadoop.apache.org by aw...@apache.org on 2015/02/10 22:40:06 UTC

[2/7] hadoop git commit: HADOOP-11495. Convert site documentation from apt to markdown (Masatake Iwasaki via aw)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
new file mode 100644
index 0000000..ae3bea8
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md
@@ -0,0 +1,689 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+* [Overview](#Overview)
+    * [appendToFile](#appendToFile)
+    * [cat](#cat)
+    * [checksum](#checksum)
+    * [chgrp](#chgrp)
+    * [chmod](#chmod)
+    * [chown](#chown)
+    * [copyFromLocal](#copyFromLocal)
+    * [copyToLocal](#copyToLocal)
+    * [count](#count)
+    * [cp](#cp)
+    * [createSnapshot](#createSnapshot)
+    * [deleteSnapshot](#deleteSnapshot)
+    * [df](#df)
+    * [du](#du)
+    * [dus](#dus)
+    * [expunge](#expunge)
+    * [find](#find)
+    * [get](#get)
+    * [getfacl](#getfacl)
+    * [getfattr](#getfattr)
+    * [getmerge](#getmerge)
+    * [help](#help)
+    * [ls](#ls)
+    * [lsr](#lsr)
+    * [mkdir](#mkdir)
+    * [moveFromLocal](#moveFromLocal)
+    * [moveToLocal](#moveToLocal)
+    * [mv](#mv)
+    * [put](#put)
+    * [renameSnapshot](#renameSnapshot)
+    * [rm](#rm)
+    * [rmdir](#rmdir)
+    * [rmr](#rmr)
+    * [setfacl](#setfacl)
+    * [setfattr](#setfattr)
+    * [setrep](#setrep)
+    * [stat](#stat)
+    * [tail](#tail)
+    * [test](#test)
+    * [text](#text)
+    * [touchz](#touchz)
+    * [usage](#usage)
+
+Overview
+========
+
+The File System (FS) shell includes various shell-like commands that directly interact with the Hadoop Distributed File System (HDFS) as well as other file systems that Hadoop supports, such as Local FS, HFTP FS, S3 FS, and others. The FS shell is invoked by:
+
+    bin/hadoop fs <args>
+
+All FS shell commands take path URIs as arguments. The URI format is `scheme://authority/path`. For HDFS the scheme is `hdfs`, and for the Local FS the scheme is `file`. The scheme and authority are optional. If not specified, the default scheme specified in the configuration is used. An HDFS file or directory such as /parent/child can be specified as `hdfs://namenodehost/parent/child` or simply as `/parent/child` (given that your configuration is set to point to `hdfs://namenodehost`).
+
+Most of the commands in FS shell behave like corresponding Unix commands. Differences are described with each of the commands. Error information is sent to stderr and the output is sent to stdout.
+
+If HDFS is being used, `hdfs dfs` is a synonym.
+
+See the [Commands Manual](./CommandsManual.html) for generic shell options.
+
+appendToFile
+------------
+
+Usage: `hadoop fs -appendToFile <localsrc> ... <dst> `
+
+Append a single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and appends it to the destination file system.
+
+* `hadoop fs -appendToFile localfile /user/hadoop/hadoopfile`
+* `hadoop fs -appendToFile localfile1 localfile2 /user/hadoop/hadoopfile`
+* `hadoop fs -appendToFile localfile hdfs://nn.example.com/hadoop/hadoopfile`
+* `hadoop fs -appendToFile - hdfs://nn.example.com/hadoop/hadoopfile` Reads the input from stdin.
+
+Exit Code:
+
+Returns 0 on success and 1 on error.
+
+cat
+---
+
+Usage: `hadoop fs -cat URI [URI ...]`
+
+Copies source paths to stdout.
+
+Example:
+
+* `hadoop fs -cat hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2`
+* `hadoop fs -cat file:///file3 /user/hadoop/file4`
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+checksum
+--------
+
+Usage: `hadoop fs -checksum URI`
+
+Returns the checksum information of a file.
+
+Example:
+
+* `hadoop fs -checksum hdfs://nn1.example.com/file1`
+* `hadoop fs -checksum file:///etc/hosts`
+
+chgrp
+-----
+
+Usage: `hadoop fs -chgrp [-R] GROUP URI [URI ...]`
+
+Change group association of files. The user must be the owner of files, or else a super-user. Additional information is in the [Permissions Guide](../hadoop-hdfs/HdfsPermissionsGuide.html).
+
+Options
+
+* The -R option will make the change recursively through the directory structure.
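+
+For example, a recursive group change might look like this (the group name `hadoop` and the path are illustrative):
+
+* `hadoop fs -chgrp -R hadoop /user/hadoop/dir1`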
+
+chmod
+-----
+
+Usage: `hadoop fs -chmod [-R] <MODE[,MODE]... | OCTALMODE> URI [URI ...]`
+
+Change the permissions of files. With -R, make the change recursively through the directory structure. The user must be the owner of the file, or else a super-user. Additional information is in the [Permissions Guide](../hadoop-hdfs/HdfsPermissionsGuide.html).
+
+Options
+
+* The -R option will make the change recursively through the directory structure.
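+
+For example, an octal-mode change applied recursively (the mode and path are illustrative):
+
+* `hadoop fs -chmod -R 755 /user/hadoop/dir1`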
+
+chown
+-----
+
+Usage: `hadoop fs -chown [-R] [OWNER][:[GROUP]] URI [URI ]`
+
+Change the owner of files. The user must be a super-user. Additional information is in the [Permissions Guide](../hadoop-hdfs/HdfsPermissionsGuide.html).
+
+Options
+
+* The -R option will make the change recursively through the directory structure.
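+
+For example, changing the owner and group together (the owner and group names are illustrative):
+
+* `hadoop fs -chown -R hduser:hadoop /user/hadoop/dir1`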
+
+copyFromLocal
+-------------
+
+Usage: `hadoop fs -copyFromLocal <localsrc> URI`
+
+Similar to the put command, except that the source is restricted to a local file reference.
+
+Options:
+
+* The -f option will overwrite the destination if it already exists.
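+
+For example, overwriting an existing destination (the paths are illustrative):
+
+* `hadoop fs -copyFromLocal -f localfile /user/hadoop/hadoopfile`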
+
+copyToLocal
+-----------
+
+Usage: `hadoop fs -copyToLocal [-ignorecrc] [-crc] URI <localdst> `
+
+Similar to the get command, except that the destination is restricted to a local file reference.
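+
+For example (the paths are illustrative):
+
+* `hadoop fs -copyToLocal /user/hadoop/hadoopfile localfile`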
+
+count
+-----
+
+Usage: `hadoop fs -count [-q] [-h] [-v] <paths> `
+
+Count the number of directories, files and bytes under the paths that match the specified file pattern. The output columns with -count are: DIR\_COUNT, FILE\_COUNT, CONTENT\_SIZE, PATHNAME
+
+The output columns with -count -q are: QUOTA, REMAINING\_QUOTA, SPACE\_QUOTA, REMAINING\_SPACE\_QUOTA, DIR\_COUNT, FILE\_COUNT, CONTENT\_SIZE, PATHNAME
+
+The -h option shows sizes in human readable format.
+
+The -v option displays a header line.
+
+Example:
+
+* `hadoop fs -count hdfs://nn1.example.com/file1 hdfs://nn2.example.com/file2`
+* `hadoop fs -count -q hdfs://nn1.example.com/file1`
+* `hadoop fs -count -q -h hdfs://nn1.example.com/file1`
+* `hdfs dfs -count -q -h -v hdfs://nn1.example.com/file1`
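+
+As a rough sketch, plain `-count` output is one line per path with the columns in the order listed above (the values and spacing here are illustrative):
+
+           1            2                200 /user/hadoop/dir1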
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+cp
+----
+
+Usage: `hadoop fs -cp [-f] [-p | -p[topax]] URI [URI ...] <dest> `
+
+Copy files from source to destination. This command allows multiple sources as well in which case the destination must be a directory.
+
+'raw.\*' namespace extended attributes are preserved if (1) the source and destination filesystems support them (HDFS only), and (2) all source and destination pathnames are in the /.reserved/raw hierarchy. Determination of whether raw.\* namespace xattrs are preserved is independent of the -p (preserve) flag.
+
+Options:
+
+* The -f option will overwrite the destination if it already exists.
+* The -p option will preserve file attributes [topax] (timestamps, ownership, permission, ACL, XAttr). If -p is specified with no *arg*, then it preserves timestamps, ownership and permission. If -pa is specified, then it also preserves permission, because ACL is a super-set of permission. Determination of whether raw namespace extended attributes are preserved is independent of the -p flag.
+
+Example:
+
+* `hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2`
+* `hadoop fs -cp /user/hadoop/file1 /user/hadoop/file2 /user/hadoop/dir`
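+
+For example, to preserve all of the listed attributes while overwriting the destination (the flags follow the usage above; the paths are illustrative):
+
+* `hadoop fs -cp -f -ptopax /user/hadoop/file1 /user/hadoop/file2`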
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+createSnapshot
+--------------
+
+See [HDFS Snapshots Guide](../hadoop-hdfs/HdfsSnapshots.html).
+
+deleteSnapshot
+--------------
+
+See [HDFS Snapshots Guide](../hadoop-hdfs/HdfsSnapshots.html).
+
+df
+----
+
+Usage: `hadoop fs -df [-h] URI [URI ...]`
+
+Displays free space.
+
+Options:
+
+* The -h option will format file sizes in a "human-readable" fashion (e.g. 64.0m instead of 67108864)
+
+Example:
+
+* `hadoop fs -df /user/hadoop/dir1`
+
+du
+----
+
+Usage: `hadoop fs -du [-s] [-h] URI [URI ...]`
+
+Displays sizes of files and directories contained in the given directory, or the length of a file in case it is just a file.
+
+Options:
+
+* The -s option will result in an aggregate summary of file lengths being displayed, rather than the individual files.
+* The -h option will format file sizes in a "human-readable" fashion (e.g. 64.0m instead of 67108864)
+
+Example:
+
+* `hadoop fs -du /user/hadoop/dir1 /user/hadoop/file1 hdfs://nn.example.com/user/hadoop/dir1`
+
+Exit Code: Returns 0 on success and -1 on error.
+
+dus
+---
+
+Usage: `hadoop fs -dus <args> `
+
+Displays a summary of file lengths.
+
+**Note:** This command is deprecated. Instead use `hadoop fs -du -s`.
+
+expunge
+-------
+
+Usage: `hadoop fs -expunge`
+
+Empty the Trash. Refer to the [HDFS Architecture Guide](../hadoop-hdfs/HdfsDesign.html) for more information on the Trash feature.
+
+find
+----
+
+Usage: `hadoop fs -find <path> ... <expression> ... `
+
+Finds all files that match the specified expression and applies selected actions to them. If no *path* is specified, it defaults to the current working directory. If no expression is specified, it defaults to -print.
+
+The following primary expressions are recognised:
+
+*   -name pattern<br />-iname pattern
+
+    Evaluates as true if the basename of the file matches the pattern using standard file system globbing. If -iname is used then the match is case insensitive.
+
+*   -print<br />-print0
+
+    Always evaluates to true. Causes the current pathname to be written to standard output. If the -print0 expression is used then an ASCII NULL character is appended.
+
+The following operators are recognised:
+
+* expression -a expression<br />expression -and expression<br />expression expression
+
+    Logical AND operator for joining two expressions. Returns true if both child expressions return true. Implied by the juxtaposition of two expressions and so does not need to be explicitly specified. The second expression will not be applied if the first fails.
+
+Example:
+
+`hadoop fs -find / -name test -print`
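+
+A case-insensitive match with NULL-separated output, using the primaries described above (the pattern and path are illustrative):
+
+`hadoop fs -find /user/hadoop -iname "*.log" -print0`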
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+get
+---
+
+Usage: `hadoop fs -get [-ignorecrc] [-crc] <src> <localdst> `
+
+Copy files to the local file system. Files that fail the CRC check may be copied with the -ignorecrc option. Files and CRCs may be copied using the -crc option.
+
+Example:
+
+* `hadoop fs -get /user/hadoop/file localfile`
+* `hadoop fs -get hdfs://nn.example.com/user/hadoop/file localfile`
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+getfacl
+-------
+
+Usage: `hadoop fs -getfacl [-R] <path> `
+
+Displays the Access Control Lists (ACLs) of files and directories. If a directory has a default ACL, then getfacl also displays the default ACL.
+
+Options:
+
+* -R: List the ACLs of all files and directories recursively.
+* *path*: File or directory to list.
+
+Examples:
+
+* `hadoop fs -getfacl /file`
+* `hadoop fs -getfacl -R /dir`
+
+Exit Code:
+
+Returns 0 on success and non-zero on error.
+
+getfattr
+--------
+
+Usage: `hadoop fs -getfattr [-R] -n name | -d [-e en] <path> `
+
+Displays the extended attribute names and values (if any) for a file or directory.
+
+Options:
+
+* -R: Recursively list the attributes for all files and directories.
+* -n name: Dump the named extended attribute value.
+* -d: Dump all extended attribute values associated with pathname.
+* -e *encoding*: Encode values after retrieving them. Valid encodings are "text", "hex", and "base64". Values encoded as text strings are enclosed in double quotes ("), and values encoded as hexadecimal and base64 are prefixed with 0x and 0s, respectively.
+* *path*: The file or directory.
+
+Examples:
+
+* `hadoop fs -getfattr -d /file`
+* `hadoop fs -getfattr -R -n user.myAttr /dir`
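+
+For example, dumping a single attribute with hex-encoded output, per the -e option above (the attribute name is illustrative):
+
+* `hadoop fs -getfattr -e hex -n user.myAttr /file`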
+
+Exit Code:
+
+Returns 0 on success and non-zero on error.
+
+getmerge
+--------
+
+Usage: `hadoop fs -getmerge <src> <localdst> [addnl]`
+
+Takes a source directory and a destination file as input and concatenates files in src into the destination local file. Optionally addnl can be set to enable adding a newline character at the end of each file.
+
+help
+----
+
+Usage: `hadoop fs -help`
+
+Return usage output.
+
+ls
+----
+
+Usage: `hadoop fs -ls [-d] [-h] [-R] [-t] [-S] [-r] [-u] <args> `
+
+Options:
+
+* -d: Directories are listed as plain files.
+* -h: Format file sizes in a human-readable fashion (e.g. 64.0m instead of 67108864).
+* -R: Recursively list subdirectories encountered.
+* -t: Sort output by modification time (most recent first).
+* -S: Sort output by file size.
+* -r: Reverse the sort order.
+* -u: Use access time rather than modification time for display and sorting.  
+
+For a file ls returns stat on the file with the following format:
+
+    permissions number_of_replicas userid groupid filesize modification_date modification_time filename
+
+For a directory it returns the list of its direct children, as in Unix. A directory is listed as:
+
+    permissions userid groupid modification_date modification_time dirname
+
+Files within a directory are ordered by filename by default.
+
+Example:
+
+* `hadoop fs -ls /user/hadoop/file1`
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+lsr
+---
+
+Usage: `hadoop fs -lsr <args> `
+
+Recursive version of ls.
+
+**Note:** This command is deprecated. Instead use `hadoop fs -ls -R`
+
+mkdir
+-----
+
+Usage: `hadoop fs -mkdir [-p] <paths> `
+
+Takes path URIs as arguments and creates directories.
+
+Options:
+
+* The -p option behavior is much like Unix mkdir -p, creating parent directories along the path.
+
+Example:
+
+* `hadoop fs -mkdir /user/hadoop/dir1 /user/hadoop/dir2`
+* `hadoop fs -mkdir hdfs://nn1.example.com/user/hadoop/dir hdfs://nn2.example.com/user/hadoop/dir`
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+moveFromLocal
+-------------
+
+Usage: `hadoop fs -moveFromLocal <localsrc> <dst> `
+
+Similar to the put command, except that the source localsrc is deleted after it is copied.
+
+moveToLocal
+-----------
+
+Usage: `hadoop fs -moveToLocal [-crc] <src> <dst> `
+
+Displays a "Not implemented yet" message.
+
+mv
+----
+
+Usage: `hadoop fs -mv URI [URI ...] <dest> `
+
+Moves files from source to destination. This command allows multiple sources as well in which case the destination needs to be a directory. Moving files across file systems is not permitted.
+
+Example:
+
+* `hadoop fs -mv /user/hadoop/file1 /user/hadoop/file2`
+* `hadoop fs -mv hdfs://nn.example.com/file1 hdfs://nn.example.com/file2 hdfs://nn.example.com/file3 hdfs://nn.example.com/dir1`
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+put
+---
+
+Usage: `hadoop fs -put <localsrc> ... <dst> `
+
+Copy a single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and writes to the destination file system.
+
+* `hadoop fs -put localfile /user/hadoop/hadoopfile`
+* `hadoop fs -put localfile1 localfile2 /user/hadoop/hadoopdir`
+* `hadoop fs -put localfile hdfs://nn.example.com/hadoop/hadoopfile`
+* `hadoop fs -put - hdfs://nn.example.com/hadoop/hadoopfile` Reads the input from stdin.
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+renameSnapshot
+--------------
+
+See [HDFS Snapshots Guide](../hadoop-hdfs/HdfsSnapshots.html).
+
+rm
+----
+
+Usage: `hadoop fs -rm [-f] [-r |-R] [-skipTrash] URI [URI ...]`
+
+Delete files specified as args.
+
+Options:
+
+* The -f option will not display a diagnostic message or modify the exit status to reflect an error if the file does not exist.
+* The -R option deletes the directory and any content under it recursively.
+* The -r option is equivalent to -R.
+* The -skipTrash option will bypass trash, if enabled, and delete the specified file(s) immediately. This can be useful when it is necessary to delete files from an over-quota directory.
+
+Example:
+
+* `hadoop fs -rm hdfs://nn.example.com/file /user/hadoop/emptydir`
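+
+For example, to bypass the trash when cleaning up an over-quota directory (the path is illustrative):
+
+* `hadoop fs -rm -r -skipTrash /user/hadoop/over-quota-dir`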
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+rmdir
+-----
+
+Usage: `hadoop fs -rmdir [--ignore-fail-on-non-empty] URI [URI ...]`
+
+Delete a directory.
+
+Options:
+
+* `--ignore-fail-on-non-empty`: When using wildcards, do not fail if a directory still contains files.
+
+Example:
+
+* `hadoop fs -rmdir /user/hadoop/emptydir`
+
+rmr
+---
+
+Usage: `hadoop fs -rmr [-skipTrash] URI [URI ...]`
+
+Recursive version of delete.
+
+**Note:** This command is deprecated. Instead use `hadoop fs -rm -r`
+
+setfacl
+-------
+
+Usage: `hadoop fs -setfacl [-R] [-b |-k -m |-x <acl_spec> <path>] |[--set <acl_spec> <path>] `
+
+Sets Access Control Lists (ACLs) of files and directories.
+
+Options:
+
+* -b: Remove all but the base ACL entries. The entries for user, group and others are retained for compatibility with permission bits.
+* -k: Remove the default ACL.
+* -R: Apply operations to all files and directories recursively.
+* -m: Modify ACL. New entries are added to the ACL, and existing entries are retained.
+* -x: Remove specified ACL entries. Other ACL entries are retained.
+* ``--set``: Fully replace the ACL, discarding all existing entries. The *acl\_spec* must include entries for user, group, and others for compatibility with permission bits.
+* *acl\_spec*: Comma separated list of ACL entries.
+* *path*: File or directory to modify.
+
+Examples:
+
+* `hadoop fs -setfacl -m user:hadoop:rw- /file`
+* `hadoop fs -setfacl -x user:hadoop /file`
+* `hadoop fs -setfacl -b /file`
+* `hadoop fs -setfacl -k /dir`
+* `hadoop fs -setfacl --set user::rw-,user:hadoop:rw-,group::r--,other::r-- /file`
+* `hadoop fs -setfacl -R -m user:hadoop:r-x /dir`
+* `hadoop fs -setfacl -m default:user:hadoop:r-x /dir`
+
+Exit Code:
+
+Returns 0 on success and non-zero on error.
+
+setfattr
+--------
+
+Usage: `hadoop fs -setfattr -n name [-v value] | -x name <path> `
+
+Sets an extended attribute name and value for a file or directory.
+
+Options:
+
+* -n name: The extended attribute name.
+* -v value: The extended attribute value. There are three different encoding methods for the value. If the argument is enclosed in double quotes, then the value is the string inside the quotes. If the argument is prefixed with 0x or 0X, then it is taken as a hexadecimal number. If the argument begins with 0s or 0S, then it is taken as a base64 encoding.
+* -x name: Remove the extended attribute.
+* *path*: The file or directory.
+
+Examples:
+
+* `hadoop fs -setfattr -n user.myAttr -v myValue /file`
+* `hadoop fs -setfattr -n user.noValue /file`
+* `hadoop fs -setfattr -x user.myAttr /file`
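+
+For example, setting a value given as a hexadecimal literal, per the encoding rules above (the attribute name and value are illustrative):
+
+* `hadoop fs -setfattr -n user.myAttr -v 0x313233 /file`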
+
+Exit Code:
+
+Returns 0 on success and non-zero on error.
+
+setrep
+------
+
+Usage: `hadoop fs -setrep [-R] [-w] <numReplicas> <path> `
+
+Changes the replication factor of a file. If *path* is a directory then the command recursively changes the replication factor of all files under the directory tree rooted at *path*.
+
+Options:
+
+* The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time.
+* The -R flag is accepted for backwards compatibility. It has no effect.
+
+Example:
+
+* `hadoop fs -setrep -w 3 /user/hadoop/dir1`
+
+Exit Code:
+
+Returns 0 on success and -1 on error.
+
+stat
+----
+
+Usage: `hadoop fs -stat [format] <path> ...`
+
+Print statistics about the file/directory at \<path\> in the specified format. Format accepts filesize in blocks (%b), type (%F), group name of owner (%g), name (%n), block size (%o), replication (%r), user name of owner (%u), and modification date (%y, %Y). %y shows UTC date as "yyyy-MM-dd HH:mm:ss" and %Y shows milliseconds since January 1, 1970 UTC. If the format is not specified, %y is used by default.
+
+Example:
+
+* `hadoop fs -stat "%F %u:%g %b %y %n" /file`
+
+Exit Code: Returns 0 on success and -1 on error.
+
+tail
+----
+
+Usage: `hadoop fs -tail [-f] URI`
+
+Displays last kilobyte of the file to stdout.
+
+Options:
+
+* The -f option will output appended data as the file grows, as in Unix.
+
+Example:
+
+* `hadoop fs -tail pathname`
+
+Exit Code: Returns 0 on success and -1 on error.
+
+test
+----
+
+Usage: `hadoop fs -test -[defsz] URI`
+
+Options:
+
+* -d: if the path is a directory, return 0.
+* -e: if the path exists, return 0.
+* -f: if the path is a file, return 0.
+* -s: if the path is not empty, return 0.
+* -z: if the file is zero length, return 0.
+
+Example:
+
+* `hadoop fs -test -e filename`
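+
+Because the result is reported through the exit code, the command composes naturally with shell conditionals (the path is illustrative):
+
+    hadoop fs -test -d /user/hadoop/dir1 && echo "is a directory"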
+
+text
+----
+
+Usage: `hadoop fs -text <src> `
+
+Takes a source file and outputs the file in text format. The allowed formats are zip and TextRecordInputStream.
+
+touchz
+------
+
+Usage: `hadoop fs -touchz URI [URI ...]`
+
+Create a file of zero length.
+
+Example:
+
+* `hadoop fs -touchz pathname`
+
+Exit Code: Returns 0 on success and -1 on error.
+
+usage
+-----
+
+Usage: `hadoop fs -usage command`
+
+Return the help for an individual command.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/markdown/HttpAuthentication.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/HttpAuthentication.md b/hadoop-common-project/hadoop-common/src/site/markdown/HttpAuthentication.md
new file mode 100644
index 0000000..e0a2693
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/HttpAuthentication.md
@@ -0,0 +1,58 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Authentication for Hadoop HTTP web-consoles
+===========================================
+
+* [Authentication for Hadoop HTTP web-consoles](#Authentication_for_Hadoop_HTTP_web-consoles)
+    * [Introduction](#Introduction)
+    * [Configuration](#Configuration)
+
+Introduction
+------------
+
+This document describes how to configure Hadoop HTTP web-consoles to require user authentication.
+
+By default Hadoop HTTP web-consoles (JobTracker, NameNode, TaskTrackers and DataNodes) allow access without any form of authentication.
+
+Similarly to Hadoop RPC, Hadoop HTTP web-consoles can be configured to require Kerberos authentication using HTTP SPNEGO protocol (supported by browsers like Firefox and Internet Explorer).
+
+In addition, Hadoop HTTP web-consoles support the equivalent of Hadoop's Pseudo/Simple authentication. If this option is enabled, users must specify their user name in the first browser interaction using the user.name query string parameter. For example: `http://localhost:50030/jobtracker.jsp?user.name=babu`.
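+
+For instance, a command-line client could pass the same parameter (the host, port and page are those of the example above; the use of `curl` here is only an illustration):
+
+    curl "http://localhost:50030/jobtracker.jsp?user.name=babu"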
+
+If a custom authentication mechanism is required for the HTTP web-consoles, it is possible to implement a plugin to support the alternate authentication mechanism (refer to Hadoop hadoop-auth for details on writing an `AuthenticatorHandler`).
+
+The next section describes how to configure Hadoop HTTP web-consoles to require user authentication.
+
+Configuration
+-------------
+
+The following properties should be in the `core-site.xml` of all the nodes in the cluster.
+
+`hadoop.http.filter.initializers`: add to this property the `org.apache.hadoop.security.AuthenticationFilterInitializer` initializer class.
+
+`hadoop.http.authentication.type`: Defines authentication used for the HTTP web-consoles. The supported values are: `simple` | `kerberos` | `#AUTHENTICATION_HANDLER_CLASSNAME#`. The default value is `simple`.
+
+`hadoop.http.authentication.token.validity`: Indicates how long (in seconds) an authentication token is valid before it has to be renewed. The default value is `36000`.
+
+`hadoop.http.authentication.signature.secret.file`: The signature secret file for signing the authentication tokens. The same secret should be used for all nodes in the cluster: JobTracker, NameNode, DataNode and TaskTracker. The default value is `$user.home/hadoop-http-auth-signature-secret`. IMPORTANT: This file should be readable only by the Unix user running the daemons.
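+
+One possible way to create such a secret file and restrict its permissions (the path corresponds to the default above; the exact commands are illustrative):
+
+    dd if=/dev/urandom of=$HOME/hadoop-http-auth-signature-secret bs=1024 count=1
+    chmod 400 $HOME/hadoop-http-auth-signature-secret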
+
+`hadoop.http.authentication.cookie.domain`: The domain to use for the HTTP cookie that stores the authentication token. In order for authentication to work correctly across all nodes in the cluster, the domain must be correctly set. There is no default value; in that case the HTTP cookie will not have a domain and will work only with the hostname issuing the HTTP cookie.
+
+IMPORTANT: when using IP addresses, browsers ignore cookies with domain settings. For this setting to work properly, all nodes in the cluster must be configured to generate URLs with `hostname.domain` names in them.
+
+`hadoop.http.authentication.simple.anonymous.allowed`: Indicates if anonymous requests are allowed when using 'simple' authentication. The default value is `true`.
+
+`hadoop.http.authentication.kerberos.principal`: Indicates the Kerberos principal to be used for the HTTP endpoint when using 'kerberos' authentication. The principal short name must be `HTTP` per the Kerberos HTTP SPNEGO specification. The default value is `HTTP/_HOST@$LOCALHOST`, where `_HOST` -if present- is replaced with the bind address of the HTTP server.
+
+`hadoop.http.authentication.kerberos.keytab`: Location of the keytab file with the credentials for the Kerberos principal used for the HTTP endpoint. The default value is `$user.home/hadoop.keytab`.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md b/hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
new file mode 100644
index 0000000..0392610
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md
@@ -0,0 +1,105 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop Interface Taxonomy: Audience and Stability Classification
+================================================================
+
+* [Hadoop Interface Taxonomy: Audience and Stability Classification](#Hadoop_Interface_Taxonomy:_Audience_and_Stability_Classification)
+    * [Motivation](#Motivation)
+    * [Interface Classification](#Interface_Classification)
+        * [Audience](#Audience)
+        * [Stability](#Stability)
+    * [How are the Classifications Recorded?](#How_are_the_Classifications_Recorded)
+    * [FAQ](#FAQ)
+
+Motivation
+----------
+
+The interface taxonomy classification provided here is for guidance to developers and users of interfaces. The classification guides a developer to declare the targeted audience or users of an interface and also its stability.
+
+* Benefits to the user of an interface: Knows which interfaces to use or not use and their stability.
+* Benefits to the developer: to prevent accidental changes of interfaces and hence accidental impact on users or on other components or systems. This is particularly useful in large systems with many developers who may not all have a shared state/history of the project.
+
+Interface Classification
+------------------------
+
+Hadoop adopts the following interface classification. It was derived from the [OpenSolaris taxonomy](http://www.opensolaris.org/os/community/arc/policies/interface-taxonomy/#Advice) and, to some extent, from the taxonomy used inside Yahoo. Interfaces have two main attributes: Audience and Stability.
+
+### Audience
+
+Audience denotes the potential consumers of the interface. While many interfaces are internal/private to the implementation, others are public/external interfaces meant for wider consumption by applications and/or clients. For example, in POSIX, libc is an external or public interface, while large parts of the kernel are internal or private interfaces. Also, some interfaces are targeted towards other specific subsystems.
+
+Identifying the audience of an interface helps define the impact of breaking it. For instance, it might be okay to break the compatibility of an interface whose audience is a small number of specific subsystems. On the other hand, it is probably not okay to break a protocol interface that millions of Internet users depend on.
+
+Hadoop uses the following kinds of audience in order of increasing/wider visibility:
+
+* Private:
+    * The interface is for internal use within the project (such as HDFS or MapReduce) and should not be used by applications or by other projects. It is subject to change at anytime without notice. Most interfaces of a project are Private (also referred to as project-private).
+* Limited-Private:
+    * The interface is used by a specified set of projects or systems (typically closely related projects). Other projects or systems should not use the interface. Changes to the interface will be communicated/negotiated with the specified projects. For example, in the Hadoop project, some interfaces are LimitedPrivate{HDFS, MapReduce} in that they are private to the HDFS and MapReduce projects.
+* Public:
+    * The interface is for general use by any application.
+
+Hadoop doesn't have a Company-Private classification, which is meant for APIs that are intended to be used by other projects within the company, since it doesn't apply to open source projects. Also, certain APIs are annotated as @VisibleForTesting (from com.google.common.annotations.VisibleForTesting) - these are meant to be used strictly for unit tests and should be treated as "Private" APIs.
+
+### Stability
+
+Stability denotes how stable an interface is, as in when incompatible changes to the interface are allowed. Hadoop APIs have the following levels of stability.
+
+* Stable
+    * Can evolve while retaining compatibility for minor release boundaries; in other words, incompatible changes to APIs marked Stable are allowed only at major releases (i.e. at m.0).
+* Evolving
+    * Evolving, but incompatible changes are allowed at minor releases (i.e. m.x)
+* Unstable
+    * Incompatible changes to Unstable APIs are allowed any time. This usually makes sense for only private interfaces.
+    * However one may call this out for a supposedly public interface to highlight that it should not be used as an interface; for public interfaces, labeling it as Not-an-interface is probably more appropriate than "Unstable".
+        * Examples of publicly visible interfaces that are unstable (i.e. not-an-interface): GUI, CLIs whose output format will change
+* Deprecated
+    * APIs that could potentially be removed in the future and should not be used.
+
+How are the Classifications Recorded?
+-------------------------------------
+
+How will the classification be recorded for Hadoop APIs?
+
+* Each interface or class will have the audience and stability recorded using annotations in org.apache.hadoop.classification package.
+* The javadoc generated by the maven target javadoc:javadoc lists only the public API.
+* One can derive the audience of java classes and java interfaces by the audience of the package in which they are contained. Hence it is useful to declare the audience of each java package as public or private (along with the private audience variations).
+
+FAQ
+---
+
+* Why aren’t the java scopes (private, package private and public) good enough?
+    * Java’s scoping is not very complete. One is often forced to make a class public in order for other internal components to use it. It does not have friends or sub-package-private like C++.
+* But I can easily access a private implementation interface if it is Java public. Where is the protection and control?
+    * The purpose of this is not providing absolute access control. Its purpose is to communicate to users and developers. One can access private implementation functions in libc; however if they change the internal implementation details, your application will break and you will have little sympathy from the folks who are supplying libc. If you use a non-public interface you understand the risks.
+* Why bother declaring the stability of a private interface? Aren’t private interfaces always unstable?
+    * Private interfaces are not always unstable. In the cases where they are stable they capture internal properties of the system and can communicate these properties to its internal users and to developers of the interface.
+        * e.g. In HDFS, NN-DN protocol is private but stable and can help implement rolling upgrades. It communicates that this interface should not be changed in incompatible ways even though it is private.
+        * e.g. In HDFS, FSImage stability can help provide more flexible roll backs.
+* What is the harm in applications using a private interface that is stable? How is it different than a public stable interface?
+    * While a private interface marked as stable is targeted to change only at major releases, it may break at other times if the providers of that interface are willing to change the internal users of that interface. Further, a public stable interface is less likely to break even at major releases (even though it is allowed to break compatibility) because the impact of the change is larger. If you use a private interface (regardless of its stability) you run the risk of incompatibility.
+* Why bother with Limited-private? Isn’t it giving special treatment to some projects? That is not fair.
+    * First, most interfaces should be public or private; actually let us state it even stronger: make it private unless you really want to expose it to the public for general use.
+    * Limited-private is for interfaces that are not intended for general use. They are exposed to related projects that need special hooks. Such a classification has a cost to both the supplier and consumer of the limited interface. Both will have to work together if ever there is a need to break the interface in the future; for example the supplier and the consumers will have to work together to get coordinated releases of their respective projects. This should not be taken lightly – if you can get away with private then do so; if the interface is really for general use by all applications then make it public. But remember that making an interface public carries a huge responsibility. Sometimes Limited-private is just right.
+    * A good example of a limited-private interface is BlockLocations. This is a fairly low-level interface that we are willing to expose to MR and perhaps HBase. We are likely to change it down the road, and at that time we will have to get a coordinated effort with the MR team to release matching releases. While MR and HDFS are always released in sync today, they may change down the road.
+    * If you have a limited-private interface with many projects listed then you are fooling yourself. It is practically public.
+    * It might be worth declaring a special audience classification called Hadoop-Private for the Hadoop family.
+* Let's treat all private interfaces as Hadoop-private. What is the harm in projects in the Hadoop family having access to private classes?
+    * Do we want MR accessing class files that are implementation details inside HDFS? There used to be many such layer violations in the code that we have been cleaning up over the last few years. We don’t want such layer violations to creep back in by not separating between the major components like HDFS and MR.
+* Aren't all public interfaces stable?
+    * One may mark a public interface as evolving in its early days. Here one is promising to make an effort to make compatible changes but may need to break it at minor releases.
+    * One example of a public interface that is unstable is where one is providing an implementation of a standards-body based interface that is still under development. For example, many companies, in an attempt to be first to market, have provided implementations of a new NFS protocol even when the protocol was not fully completed by IETF. The implementor cannot evolve the interface in a fashion that causes the least disruption because the stability is controlled by the standards body. Hence it is appropriate to label the interface as unstable.
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
new file mode 100644
index 0000000..dbcf0d8
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -0,0 +1,456 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+* [Overview](#Overview)
+* [jvm context](#jvm_context)
+    * [JvmMetrics](#JvmMetrics)
+* [rpc context](#rpc_context)
+    * [rpc](#rpc)
+    * [RetryCache/NameNodeRetryCache](#RetryCacheNameNodeRetryCache)
+* [rpcdetailed context](#rpcdetailed_context)
+    * [rpcdetailed](#rpcdetailed)
+* [dfs context](#dfs_context)
+    * [namenode](#namenode)
+    * [FSNamesystem](#FSNamesystem)
+    * [JournalNode](#JournalNode)
+    * [datanode](#datanode)
+* [yarn context](#yarn_context)
+    * [ClusterMetrics](#ClusterMetrics)
+    * [QueueMetrics](#QueueMetrics)
+    * [NodeManagerMetrics](#NodeManagerMetrics)
+* [ugi context](#ugi_context)
+    * [UgiMetrics](#UgiMetrics)
+* [metricssystem context](#metricssystem_context)
+    * [MetricsSystem](#MetricsSystem)
+* [default context](#default_context)
+    * [StartupProgress](#StartupProgress)
+
+Overview
+========
+
+Metrics are statistical information exposed by Hadoop daemons, used for monitoring, performance tuning and debugging. There are many metrics available by default and they are very useful for troubleshooting. This page shows the details of the available metrics.
+
+Each section below describes a context into which metrics are grouped.
+
+The documentation of Metrics 2.0 framework is [here](../../api/org/apache/hadoop/metrics2/package-summary.html).
+
+jvm context
+===========
+
+JvmMetrics
+----------
+
+Each metrics record contains tags such as ProcessName, SessionID and Hostname as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `MemNonHeapUsedM` | Current non-heap memory used in MB |
+| `MemNonHeapCommittedM` | Current non-heap memory committed in MB |
+| `MemNonHeapMaxM` | Max non-heap memory size in MB |
+| `MemHeapUsedM` | Current heap memory used in MB |
+| `MemHeapCommittedM` | Current heap memory committed in MB |
+| `MemHeapMaxM` | Max heap memory size in MB |
+| `MemMaxM` | Max memory size in MB |
+| `ThreadsNew` | Current number of NEW threads |
+| `ThreadsRunnable` | Current number of RUNNABLE threads |
+| `ThreadsBlocked` | Current number of BLOCKED threads |
+| `ThreadsWaiting` | Current number of WAITING threads |
+| `ThreadsTimedWaiting` | Current number of TIMED\_WAITING threads |
+| `ThreadsTerminated` | Current number of TERMINATED threads |
+| `GcInfo` | Total GC count and GC time in msec, grouped by the kind of GC, e.g. GcCountPS Scavenge=6, GCTimeMillisPS Scavenge=40, GCCountPS MarkSweep=0, GCTimeMillisPS MarkSweep=0 |
+| `GcCount` | Total GC count |
+| `GcTimeMillis` | Total GC time in msec |
+| `LogFatal` | Total number of FATAL logs |
+| `LogError` | Total number of ERROR logs |
+| `LogWarn` | Total number of WARN logs |
+| `LogInfo` | Total number of INFO logs |
+| `GcNumWarnThresholdExceeded` | Number of times that the GC warn threshold is exceeded |
+| `GcNumInfoThresholdExceeded` | Number of times that the GC info threshold is exceeded |
+| `GcTotalExtraSleepTime` | Total GC extra sleep time in msec |
+
+rpc context
+===========
+
+rpc
+---
+
+Each metrics record contains tags such as Hostname and port (number to which server is bound) as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `ReceivedBytes` | Total number of received bytes |
+| `SentBytes` | Total number of sent bytes |
+| `RpcQueueTimeNumOps` | Total number of RPC calls |
+| `RpcQueueTimeAvgTime` | Average queue time in milliseconds |
+| `RpcProcessingTimeNumOps` | Total number of RPC calls (same as RpcQueueTimeNumOps) |
+| `RpcProcessingAvgTime` | Average processing time in milliseconds |
+| `RpcAuthenticationFailures` | Total number of authentication failures |
+| `RpcAuthenticationSuccesses` | Total number of authentication successes |
+| `RpcAuthorizationFailures` | Total number of authorization failures |
+| `RpcAuthorizationSuccesses` | Total number of authorization successes |
+| `NumOpenConnections` | Current number of open connections |
+| `CallQueueLength` | Current length of the call queue |
+| `rpcQueueTime`*num*`sNumOps` | Shows total number of RPC calls (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s50thPercentileLatency` | Shows the 50th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s75thPercentileLatency` | Shows the 75th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s90thPercentileLatency` | Shows the 90th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s95thPercentileLatency` | Shows the 95th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s99thPercentileLatency` | Shows the 99th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`sNumOps` | Shows total number of RPC calls (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s50thPercentileLatency` | Shows the 50th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s75thPercentileLatency` | Shows the 75th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s90thPercentileLatency` | Shows the 90th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s95thPercentileLatency` | Shows the 95th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s99thPercentileLatency` | Shows the 99th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+
+RetryCache/NameNodeRetryCache
+-----------------------------
+
+RetryCache metrics are useful for monitoring NameNode fail-over. Each metrics record contains the Hostname tag.
+
+| Name | Description |
+|:---- |:---- |
+| `CacheHit` | Total number of RetryCache hits |
+| `CacheCleared` | Total number of RetryCache entries cleared |
+| `CacheUpdated` | Total number of RetryCache updates |
+
+rpcdetailed context
+===================
+
+Metrics of the rpcdetailed context are exposed by the RPC layer in a unified manner. Two metrics are exposed for each RPC based on its name. Metrics named "(RPC method name)NumOps" indicate the total number of method calls, and metrics named "(RPC method name)AvgTime" show the average turnaround time for method calls in milliseconds.
+
+rpcdetailed
+-----------
+
+Each metrics record contains tags such as Hostname and port (number to which server is bound) as additional information along with metrics.
+
+Metrics for RPC methods that have not been called are not included in the metrics record.
+
+| Name | Description |
+|:---- |:---- |
+| *methodname*`NumOps` | Total number of the times the method is called |
+| *methodname*`AvgTime` | Average turnaround time of the method in milliseconds |
+
+dfs context
+===========
+
+namenode
+--------
+
+Each metrics record contains tags such as ProcessName, SessionId, and Hostname as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `CreateFileOps` | Total number of files created |
+| `FilesCreated` | Total number of files and directories created by create or mkdir operations |
+| `FilesAppended` | Total number of files appended |
+| `GetBlockLocations` | Total number of getBlockLocations operations |
+| `FilesRenamed` | Total number of rename **operations** (NOT number of files/dirs renamed) |
+| `GetListingOps` | Total number of directory listing operations |
+| `DeleteFileOps` | Total number of delete operations |
+| `FilesDeleted` | Total number of files and directories deleted by delete or rename operations |
+| `FileInfoOps` | Total number of getFileInfo and getLinkFileInfo operations |
+| `AddBlockOps` | Total number of addBlock operations succeeded |
+| `GetAdditionalDatanodeOps` | Total number of getAdditionalDatanode operations |
+| `CreateSymlinkOps` | Total number of createSymlink operations |
+| `GetLinkTargetOps` | Total number of getLinkTarget operations |
+| `FilesInGetListingOps` | Total number of files and directories listed by directory listing operations |
+| `AllowSnapshotOps` | Total number of allowSnapshot operations |
+| `DisallowSnapshotOps` | Total number of disallowSnapshot operations |
+| `CreateSnapshotOps` | Total number of createSnapshot operations |
+| `DeleteSnapshotOps` | Total number of deleteSnapshot operations |
+| `RenameSnapshotOps` | Total number of renameSnapshot operations |
+| `ListSnapshottableDirOps` | Total number of snapshottableDirectoryStatus operations |
+| `SnapshotDiffReportOps` | Total number of getSnapshotDiffReport operations |
+| `TransactionsNumOps` | Total number of Journal transactions |
+| `TransactionsAvgTime` | Average time of Journal transactions in milliseconds |
+| `SyncsNumOps` | Total number of Journal syncs |
+| `SyncsAvgTime` | Average time of Journal syncs in milliseconds |
+| `TransactionsBatchedInSync` | Total number of Journal transactions batched in sync |
+| `BlockReportNumOps` | Total number of processing block reports from DataNode |
+| `BlockReportAvgTime` | Average time of processing block reports in milliseconds |
+| `CacheReportNumOps` | Total number of processing cache reports from DataNode |
+| `CacheReportAvgTime` | Average time of processing cache reports in milliseconds |
+| `SafeModeTime` | The interval in milliseconds between FSNameSystem starting and the last time safemode was left (sometimes not equal to the time spent in SafeMode, see [HDFS-5156](https://issues.apache.org/jira/browse/HDFS-5156)) |
+| `FsImageLoadTime` | Time loading FS Image at startup in milliseconds |
+| `GetEditNumOps` | Total number of edits downloads from SecondaryNameNode |
+| `GetEditAvgTime` | Average edits download time in milliseconds |
+| `GetImageNumOps` | Total number of fsimage downloads from SecondaryNameNode |
+| `GetImageAvgTime` | Average fsimage download time in milliseconds |
+| `PutImageNumOps` | Total number of fsimage uploads to SecondaryNameNode |
+| `PutImageAvgTime` | Average fsimage upload time in milliseconds |
+
+FSNamesystem
+------------
+
+Each metrics record contains tags such as HAState and Hostname as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `MissingBlocks` | Current number of missing blocks |
+| `ExpiredHeartbeats` | Total number of expired heartbeats |
+| `TransactionsSinceLastCheckpoint` | Total number of transactions since last checkpoint |
+| `TransactionsSinceLastLogRoll` | Total number of transactions since last edit log roll |
+| `LastWrittenTransactionId` | Last transaction ID written to the edit log |
+| `LastCheckpointTime` | Time in milliseconds since epoch of last checkpoint |
+| `CapacityTotal` | Current raw capacity of DataNodes in bytes |
+| `CapacityTotalGB` | Current raw capacity of DataNodes in GB |
+| `CapacityUsed` | Current used capacity across all DataNodes in bytes |
+| `CapacityUsedGB` | Current used capacity across all DataNodes in GB |
+| `CapacityRemaining` | Current remaining capacity in bytes |
+| `CapacityRemainingGB` | Current remaining capacity in GB |
+| `CapacityUsedNonDFS` | Current space used by DataNodes for non DFS purposes in bytes |
+| `TotalLoad` | Current number of connections |
+| `SnapshottableDirectories` | Current number of snapshottable directories |
+| `Snapshots` | Current number of snapshots |
+| `BlocksTotal` | Current number of allocated blocks in the system |
+| `FilesTotal` | Current number of files and directories |
+| `PendingReplicationBlocks` | Current number of blocks pending to be replicated |
+| `UnderReplicatedBlocks` | Current number of blocks under replicated |
+| `CorruptBlocks` | Current number of blocks with corrupt replicas. |
+| `ScheduledReplicationBlocks` | Current number of blocks scheduled for replications |
+| `PendingDeletionBlocks` | Current number of blocks pending deletion |
+| `ExcessBlocks` | Current number of excess blocks |
+| `PostponedMisreplicatedBlocks` | (HA-only) Current number of blocks postponed to replicate |
+| `PendingDataNodeMessageCount` | (HA-only) Current number of pending block-related messages for later processing in the standby NameNode |
+| `MillisSinceLastLoadedEdits` | (HA-only) Time in milliseconds since the standby NameNode last loaded the edit log. Set to 0 in the active NameNode |
+| `BlockCapacity` | Current block capacity |
+| `StaleDataNodes` | Current number of DataNodes marked stale due to delayed heartbeat |
+| `TotalFiles` | Current number of files and directories (same as FilesTotal) |
+
+JournalNode
+-----------
+
+The server-side metrics for a journal from the JournalNode's perspective. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `Syncs60sNumOps` | Number of sync operations (1 minute granularity) |
+| `Syncs60s50thPercentileLatencyMicros` | The 50th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs60s75thPercentileLatencyMicros` | The 75th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs60s90thPercentileLatencyMicros` | The 90th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs60s95thPercentileLatencyMicros` | The 95th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs60s99thPercentileLatencyMicros` | The 99th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs300sNumOps` | Number of sync operations (5 minutes granularity) |
+| `Syncs300s50thPercentileLatencyMicros` | The 50th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs300s75thPercentileLatencyMicros` | The 75th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs300s90thPercentileLatencyMicros` | The 90th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs300s95thPercentileLatencyMicros` | The 95th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs300s99thPercentileLatencyMicros` | The 99th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs3600sNumOps` | Number of sync operations (1 hour granularity) |
+| `Syncs3600s50thPercentileLatencyMicros` | The 50th percentile of sync latency in microseconds (1 hour granularity) |
+| `Syncs3600s75thPercentileLatencyMicros` | The 75th percentile of sync latency in microseconds (1 hour granularity) |
+| `Syncs3600s90thPercentileLatencyMicros` | The 90th percentile of sync latency in microseconds (1 hour granularity) |
+| `Syncs3600s95thPercentileLatencyMicros` | The 95th percentile of sync latency in microseconds (1 hour granularity) |
+| `Syncs3600s99thPercentileLatencyMicros` | The 99th percentile of sync latency in microseconds (1 hour granularity) |
+| `BatchesWritten` | Total number of batches written since startup |
+| `TxnsWritten` | Total number of transactions written since startup |
+| `BytesWritten` | Total number of bytes written since startup |
+| `BatchesWrittenWhileLagging` | Total number of batches written where this node was lagging |
+| `LastWriterEpoch` | Current writer's epoch number |
+| `CurrentLagTxns` | The number of transactions that this JournalNode is lagging |
+| `LastWrittenTxId` | The highest transaction id stored on this JournalNode |
+| `LastPromisedEpoch` | The last epoch number which this node has promised not to accept any lower epoch, or 0 if no promises have been made |
+
+datanode
+--------
+
+Each metrics record contains tags such as SessionId and Hostname as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `BytesWritten` | Total number of bytes written to DataNode |
+| `BytesRead` | Total number of bytes read from DataNode |
+| `BlocksWritten` | Total number of blocks written to DataNode |
+| `BlocksRead` | Total number of blocks read from DataNode |
+| `BlocksReplicated` | Total number of blocks replicated |
+| `BlocksRemoved` | Total number of blocks removed |
+| `BlocksVerified` | Total number of blocks verified |
+| `BlockVerificationFailures` | Total number of verification failures |
+| `BlocksCached` | Total number of blocks cached |
+| `BlocksUncached` | Total number of blocks uncached |
+| `ReadsFromLocalClient` | Total number of read operations from local client |
+| `ReadsFromRemoteClient` | Total number of read operations from remote client |
+| `WritesFromLocalClient` | Total number of write operations from local client |
+| `WritesFromRemoteClient` | Total number of write operations from remote client |
+| `BlocksGetLocalPathInfo` | Total number of operations to get local path names of blocks |
+| `FsyncCount` | Total number of fsync operations |
+| `VolumeFailures` | Total number of volume failures |
+| `ReadBlockOpNumOps` | Total number of read operations |
+| `ReadBlockOpAvgTime` | Average time of read operations in milliseconds |
+| `WriteBlockOpNumOps` | Total number of write operations |
+| `WriteBlockOpAvgTime` | Average time of write operations in milliseconds |
+| `BlockChecksumOpNumOps` | Total number of blockChecksum operations |
+| `BlockChecksumOpAvgTime` | Average time of blockChecksum operations in milliseconds |
+| `CopyBlockOpNumOps` | Total number of block copy operations |
+| `CopyBlockOpAvgTime` | Average time of block copy operations in milliseconds |
+| `ReplaceBlockOpNumOps` | Total number of block replace operations |
+| `ReplaceBlockOpAvgTime` | Average time of block replace operations in milliseconds |
+| `HeartbeatsNumOps` | Total number of heartbeats |
+| `HeartbeatsAvgTime` | Average heartbeat time in milliseconds |
+| `BlockReportsNumOps` | Total number of block report operations |
+| `BlockReportsAvgTime` | Average time of block report operations in milliseconds |
+| `CacheReportsNumOps` | Total number of cache report operations |
+| `CacheReportsAvgTime` | Average time of cache report operations in milliseconds |
+| `PacketAckRoundTripTimeNanosNumOps` | Total number of ack round trips |
+| `PacketAckRoundTripTimeNanosAvgTime` | Average time from ack send to receive minus the downstream ack time in nanoseconds |
+| `FlushNanosNumOps` | Total number of flushes |
+| `FlushNanosAvgTime` | Average flush time in nanoseconds |
+| `FsyncNanosNumOps` | Total number of fsync operations |
+| `FsyncNanosAvgTime` | Average fsync time in nanoseconds |
+| `SendDataPacketBlockedOnNetworkNanosNumOps` | Total number of packets sent |
+| `SendDataPacketBlockedOnNetworkNanosAvgTime` | Average time spent waiting on the network while sending packets, in nanoseconds |
+| `SendDataPacketTransferNanosNumOps` | Total number of packets sent |
+| `SendDataPacketTransferNanosAvgTime` | Average transfer time of sent packets in nanoseconds |
+
+yarn context
+============
+
+ClusterMetrics
+--------------
+
+ClusterMetrics shows the metrics of the YARN cluster from the ResourceManager's perspective. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `NumActiveNMs` | Current number of active NodeManagers |
+| `NumDecommissionedNMs` | Current number of decommissioned NodeManagers |
+| `NumLostNMs` | Current number of NodeManagers lost because they stopped sending heartbeats |
+| `NumUnhealthyNMs` | Current number of unhealthy NodeManagers |
+| `NumRebootedNMs` | Current number of rebooted NodeManagers |
+
+QueueMetrics
+------------
+
+QueueMetrics shows an application queue from the ResourceManager's perspective. Each metrics record shows the statistics of each queue, and contains tags such as queue name and Hostname as additional information along with metrics.
+
+For the `running_`*num* metrics such as `running_0`, you can change the buckets by setting the property `yarn.resourcemanager.metrics.runtime.buckets` in yarn-site.xml. The default value is `60,300,1440`.
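+
+As a hedged illustration of those buckets (the ResourceManager host and its default web port 8088 are assumptions), the per-queue gauges can be read from the ResourceManager's `/jmx` servlet:
+
+```bash
+# With the default buckets the elapsed-time gauges appear as
+# running_0, running_60, running_300 and running_1440 for each queue.
+curl -s 'http://resourcemanager.example.com:8088/jmx' \
+  | grep -E '"running_(0|60|300|1440)"'
+```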
+
+| Name | Description |
+|:---- |:---- |
+| `running_0` | Current number of running applications whose elapsed time is less than 60 minutes |
+| `running_60` | Current number of running applications whose elapsed time is between 60 and 300 minutes |
+| `running_300` | Current number of running applications whose elapsed time is between 300 and 1440 minutes |
+| `running_1440` | Current number of running applications whose elapsed time is more than 1440 minutes |
+| `AppsSubmitted` | Total number of submitted applications |
+| `AppsRunning` | Current number of running applications |
+| `AppsPending` | Current number of applications that have not yet been assigned any containers |
+| `AppsCompleted` | Total number of completed applications |
+| `AppsKilled` | Total number of killed applications |
+| `AppsFailed` | Total number of failed applications |
+| `AllocatedMB` | Current allocated memory in MB |
+| `AllocatedVCores` | Current allocated CPU in virtual cores |
+| `AllocatedContainers` | Current number of allocated containers |
+| `AggregateContainersAllocated` | Total number of allocated containers |
+| `AggregateContainersReleased` | Total number of released containers |
+| `AvailableMB` | Current available memory in MB |
+| `AvailableVCores` | Current available CPU in virtual cores |
+| `PendingMB` | Current pending memory resource requests in MB that are not yet fulfilled by the scheduler |
+| `PendingVCores` | Current pending CPU allocation requests in virtual cores that are not yet fulfilled by the scheduler |
+| `PendingContainers` | Current pending resource requests that are not yet fulfilled by the scheduler |
+| `ReservedMB` | Current reserved memory in MB |
+| `ReservedVCores` | Current reserved CPU in virtual cores |
+| `ReservedContainers` | Current number of reserved containers |
+| `ActiveUsers` | Current number of active users |
+| `ActiveApplications` | Current number of active applications |
+| `FairShareMB` | (FairScheduler only) Current fair share of memory in MB |
+| `FairShareVCores` | (FairScheduler only) Current fair share of CPU in virtual cores |
+| `MinShareMB` | (FairScheduler only) Minimum share of memory in MB |
+| `MinShareVCores` | (FairScheduler only) Minimum share of CPU in virtual cores |
+| `MaxShareMB` | (FairScheduler only) Maximum share of memory in MB |
+| `MaxShareVCores` | (FairScheduler only) Maximum share of CPU in virtual cores |
+
+NodeManagerMetrics
+------------------
+
+NodeManagerMetrics shows the statistics of the containers in the node. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `containersLaunched` | Total number of launched containers |
+| `containersCompleted` | Total number of successfully completed containers |
+| `containersFailed` | Total number of failed containers |
+| `containersKilled` | Total number of killed containers |
+| `containersIniting` | Current number of initializing containers |
+| `containersRunning` | Current number of running containers |
+| `allocatedContainers` | Current number of allocated containers |
+| `allocatedGB` | Current allocated memory in GB |
+| `availableGB` | Current available memory in GB |
+
+ugi context
+===========
+
+UgiMetrics
+----------
+
+UgiMetrics is related to user and group information. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `LoginSuccessNumOps` | Total number of successful kerberos logins |
+| `LoginSuccessAvgTime` | Average time for successful kerberos logins in milliseconds |
+| `LoginFailureNumOps` | Total number of failed kerberos logins |
+| `LoginFailureAvgTime` | Average time for failed kerberos logins in milliseconds |
+| `getGroupsNumOps` | Total number of group resolutions |
+| `getGroupsAvgTime` | Average time for group resolution in milliseconds |
+| `getGroups`*num*`sNumOps` | Total number of group resolutions (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s50thPercentileLatency` | Shows the 50th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s75thPercentileLatency` | Shows the 75th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s90thPercentileLatency` | Shows the 90th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s95thPercentileLatency` | Shows the 95th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s99thPercentileLatency` | Shows the 99th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+
+metricssystem context
+=====================
+
+MetricsSystem
+-------------
+
+MetricsSystem shows the statistics for metrics snapshots and publishes. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `NumActiveSources` | Current number of active metrics sources |
+| `NumAllSources` | Total number of metrics sources |
+| `NumActiveSinks` | Current number of active sinks |
+| `NumAllSinks` | Total number of sinks  (BUT usually less than `NumActiveSinks`, see [HADOOP-9946](https://issues.apache.org/jira/browse/HADOOP-9946)) |
+| `SnapshotNumOps` | Total number of operations to snapshot statistics from a metrics source |
+| `SnapshotAvgTime` | Average time in milliseconds to snapshot statistics from a metrics source |
+| `PublishNumOps` | Total number of operations to publish statistics to a sink |
+| `PublishAvgTime` | Average time in milliseconds to publish statistics to a sink |
+| `DroppedPubAll` | Total number of dropped publishes |
+| `Sink_`*instance*`NumOps` | Total number of sink operations for the *instance* |
+| `Sink_`*instance*`AvgTime` | Average time in milliseconds of sink operations for the *instance* |
+| `Sink_`*instance*`Dropped` | Total number of dropped sink operations for the *instance* |
+| `Sink_`*instance*`Qsize` | Current queue length of sink operations  (BUT always set to 0 because nothing increments this metric, see [HADOOP-9941](https://issues.apache.org/jira/browse/HADOOP-9941)) |
+
+default context
+===============
+
+StartupProgress
+---------------
+
+StartupProgress metrics show the statistics of NameNode startup. Four metrics are exposed for each startup phase, keyed on the phase name. The startup *phase*s are `LoadingFsImage`, `LoadingEdits`, `SavingCheckpoint`, and `SafeMode`. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `ElapsedTime` | Total elapsed time in milliseconds |
+| `PercentComplete` | Fraction of NameNode startup progress completed so far (the maximum value is 1.0, not 100) |
+| *phase*`Count` | Total number of steps completed in the phase |
+| *phase*`ElapsedTime` | Total elapsed time in the phase in milliseconds |
+| *phase*`Total` | Total number of steps in the phase |
+| *phase*`PercentComplete` | Fraction of the phase completed so far (the maximum value is 1.0, not 100) |
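+
+A hedged sketch for watching these gauges while a NameNode starts up (the host name and the default NameNode HTTP port 50070 are assumptions about your deployment):
+
+```bash
+# Poll the NameNode's /jmx servlet during startup; the *PercentComplete
+# gauges range from 0.0 to 1.0 rather than 0 to 100.
+curl -s 'http://namenode.example.com:50070/jmx' \
+  | grep -E '"(PercentComplete|SafeModePercentComplete|LoadingFsImagePercentComplete)"'
+```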
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm b/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
new file mode 100644
index 0000000..5a2c70c
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
@@ -0,0 +1,145 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Native Libraries Guide
+======================
+
+* [Native Libraries Guide](#Native_Libraries_Guide)
+    * [Overview](#Overview)
+    * [Native Hadoop Library](#Native_Hadoop_Library)
+    * [Usage](#Usage)
+    * [Components](#Components)
+    * [Supported Platforms](#Supported_Platforms)
+    * [Download](#Download)
+    * [Build](#Build)
+    * [Runtime](#Runtime)
+    * [Check](#Check)
+    * [Native Shared Libraries](#Native_Shared_Libraries)
+
+Overview
+--------
+
+This guide describes the native hadoop library and includes a small discussion about native shared libraries.
+
+Note: Depending on your environment, the term "native libraries" could refer to all \*.so's you need to compile; and, the term "native compression" could refer to all \*.so's you need to compile that are specifically related to compression. Currently, however, this document only addresses the native hadoop library (`libhadoop.so`). The documentation for the libhdfs library (`libhdfs.so`) is [here](../hadoop-hdfs/LibHdfs.html).
+
+Native Hadoop Library
+---------------------
+
+Hadoop has native implementations of certain components, either for performance reasons or because Java implementations are not available. These components are available in a single, dynamically-linked native library called the native hadoop library. On the \*nix platforms the library is named `libhadoop.so`.
+
+Usage
+-----
+
+It is fairly easy to use the native hadoop library:
+
+1.  Review the components.
+2.  Review the supported platforms.
+3.  Either download a hadoop release, which will include a pre-built version of the native hadoop library, or build your own version of the native hadoop library. Whether you download or build, the name for the library is the same: libhadoop.so
+4.  Install the compression codec development packages (\>zlib-1.2, \>gzip-1.2):
+    * If you download the library, install one or more development packages - whichever compression codecs you want to use with your deployment.
+    * If you build the library, it is mandatory to install both development packages.
+5.  Check the runtime log files.
+
+Components
+----------
+
+The native hadoop library includes various components:
+
+* Compression Codecs (bzip2, lz4, snappy, zlib)
+* Native IO utilities for [HDFS Short-Circuit Local Reads](../hadoop-hdfs/ShortCircuitLocalReads.html) and [Centralized Cache Management in HDFS](../hadoop-hdfs/CentralizedCacheManagement.html)
+* CRC32 checksum implementation
+
+Supported Platforms
+-------------------
+
+The native hadoop library is supported on \*nix platforms only. The library does not work with Cygwin or the Mac OS X platform.
+
+The native hadoop library is mainly used on the GNU/Linux platform and has been tested on these distributions:
+
+* RHEL4/Fedora
+* Ubuntu
+* Gentoo
+
+On all the above distributions a 32/64 bit native hadoop library will work with a respective 32/64 bit jvm.
+
+Download
+--------
+
+The pre-built 32-bit i386-Linux native hadoop library is available as part of the hadoop distribution and is located in the `lib/native` directory. You can download the hadoop distribution from Hadoop Common Releases.
+
+Be sure to install the zlib and/or gzip development packages - whichever compression codecs you want to use with your deployment.
+
+Build
+-----
+
+The native hadoop library is written in ANSI C and is built using the GNU autotools-chain (autoconf, autoheader, automake, autoscan, libtool). This means it should be straightforward to build the library on any platform with a standards-compliant C compiler and the GNU autotools-chain (see the supported platforms).
+
+The packages you need to install on the target platform are:
+
+* C compiler (e.g. GNU C Compiler)
+* GNU Autotools Chain: autoconf, automake, libtool
+* zlib-development package (stable version \>= 1.2.0)
+* openssl development package (e.g. libssl-dev)
+
+Once you have installed the prerequisite packages, use the standard hadoop pom.xml file and pass along the native flag to build the native hadoop library:
+
+       $ mvn package -Pdist,native -DskipTests -Dtar
+
+You should see the newly-built library in:
+
+       $ hadoop-dist/target/hadoop-${project.version}/lib/native
+
+Please note the following:
+
+* It is mandatory to install both the zlib and gzip development packages on the target platform in order to build the native hadoop library; however, for deployment it is sufficient to install just one package if you wish to use only one codec.
+* It is necessary to have the correct 32/64 libraries for zlib, depending on the 32/64 bit jvm for the target platform, in order to build and deploy the native hadoop library.
+
+Runtime
+-------
+
+The bin/hadoop script ensures that the native hadoop library is on the library path via the system property: `-Djava.library.path=<path>`
+
+During runtime, check the hadoop log files for your MapReduce tasks.
+
+* If everything is all right, then: `DEBUG util.NativeCodeLoader - Trying to load the custom-built native-hadoop library...` `INFO util.NativeCodeLoader - Loaded the native-hadoop library`
+* If something goes wrong, then: `INFO util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable`
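+
+A hedged sketch for pulling those messages out of a task's logs (the log path below is an assumption; it depends on your log directory and the application/container ids):
+
+       $ grep -i NativeCodeLoader /path/to/yarn/logs/userlogs/application_*/container_*/syslog
+       INFO util.NativeCodeLoader - Loaded the native-hadoop library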
+
+Check
+-----
+
+NativeLibraryChecker is a tool to check whether native libraries are loaded correctly. You can launch NativeLibraryChecker as follows:
+
+       $ hadoop checknative -a
+       14/12/06 01:30:45 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
+       14/12/06 01:30:45 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
+       Native library checking:
+       hadoop: true /home/ozawa/hadoop/lib/native/libhadoop.so.1.0.0
+       zlib:   true /lib/x86_64-linux-gnu/libz.so.1
+       snappy: true /usr/lib/libsnappy.so.1
+       lz4:    true revision:99
+       bzip2:  false
+
+Native Shared Libraries
+-----------------------
+
+You can load any native shared library using DistributedCache for distributing and symlinking the library files.
+
+This example shows you how to distribute a shared library, mylib.so, and load it from a MapReduce task.
+
+1.  First copy the library to the HDFS: `bin/hadoop fs -copyFromLocal mylib.so.1 /libraries/mylib.so.1`
+2.  The job launching program should contain the following: `DistributedCache.createSymlink(conf);` `DistributedCache.addCacheFile("hdfs://host:port/libraries/mylib.so.1#mylib.so", conf);`
+3.  The MapReduce task can contain: `System.loadLibrary("mylib.so");`
+
+Note: If you downloaded or built the native hadoop library, you don’t need to use DistributedCache to make the library available to your MapReduce tasks.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/e9d26fe9/hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md b/hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
new file mode 100644
index 0000000..41fcb37
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
@@ -0,0 +1,104 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+* [Rack Awareness](#Rack_Awareness)
+    * [python Example](#python_Example)
+    * [bash Example](#bash_Example)
+
+Rack Awareness
+==============
+
+Hadoop components are rack-aware. For example, HDFS block placement will use rack awareness for fault tolerance by placing one block replica on a different rack. This provides data availability in the event of a network switch failure or partition within the cluster.
+
+Hadoop master daemons obtain the rack id of the cluster slaves by invoking either an external script or a java class, as specified by configuration files. Whether a java class or an external script is used for topology, the output must adhere to the java **org.apache.hadoop.net.DNSToSwitchMapping** interface. The interface expects a one-to-one correspondence to be maintained, with the topology information in the format '/myrack/myhost', where '/' is the topology delimiter, 'myrack' is the rack identifier, and 'myhost' is the individual host. Assuming a single /24 subnet per rack, one could use the format of '/192.168.100.0/192.168.100.5' as a unique rack-host topology mapping.
+
+To use the java class for topology mapping, the class name is specified by the **topology.node.switch.mapping.impl** parameter in the configuration file. An example, NetworkTopology.java, is included with the hadoop distribution and can be customized by the Hadoop administrator. Using a Java class instead of an external script has a performance benefit in that Hadoop doesn't need to fork an external process when a new slave node registers itself.
+
+If an external script is used, it is specified with the **topology.script.file.name** parameter in the configuration files. Unlike the java class, the external topology script is not included with the Hadoop distribution and is provided by the administrator. Hadoop will send multiple IP addresses to ARGV when forking the topology script. The number of IP addresses sent to the topology script is controlled with **net.topology.script.number.args** and defaults to 100. If **net.topology.script.number.args** were changed to 1, a topology script would get forked for each IP submitted by DataNodes and/or NodeManagers.
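+
+To make the contract concrete: the script receives IP addresses as arguments and must print one rack path per address, in the same order. A hedged, hypothetical invocation (the script path and addresses are made-up examples):
+
+```bash
+$ /etc/hadoop/conf/topology.sh 192.168.100.5 192.168.100.9 192.168.101.7
+/192.168.100.0
+/192.168.100.0
+/192.168.101.0
+```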
+
+If **topology.script.file.name** or **topology.node.switch.mapping.impl** is not set, the rack id '/default-rack' is returned for any passed IP address. While this behavior appears desirable, it can cause problems for HDFS block replication: the default policy is to write one replicated block off rack, which is impossible when there is only a single rack named '/default-rack'.
+
+An additional configuration setting is **mapreduce.jobtracker.taskcache.levels** which determines the number of levels (in the network topology) of caches MapReduce will use. So, for example, if it is the default value of 2, two levels of caches will be constructed - one for hosts (host -\> task mapping) and another for racks (rack -\> task mapping), giving us our one-to-one mapping of '/myrack/myhost'.
+
+python Example
+--------------
+```python
+#!/usr/bin/python
+# This script makes assumptions about the physical environment.
+#  1) each rack is its own layer 3 network with a /24 subnet, which could be
+#     typical where each rack has its own switch with uplinks to a central core router.
+#
+#             +-----------+
+#             |core router|
+#             +-----------+
+#            /             \
+#   +-----------+        +-----------+
+#   |rack switch|        |rack switch|
+#   +-----------+        +-----------+
+#   | data node |        | data node |
+#   +-----------+        +-----------+
+#   | data node |        | data node |
+#   +-----------+        +-----------+
+#
+# 2) topology script gets list of IP's as input, calculates network address, and prints '/network_address' as the rack id.
+
+import netaddr
+import sys
+sys.argv.pop(0)                                                  # discard name of topology script from argv list as we just want IP addresses
+
+netmask = '255.255.255.0'                                        # set netmask to what's being used in your environment.  The example uses a /24
+
+for ip in sys.argv:                                              # loop over list of datanode IP's
+    address = '{0}/{1}'.format(ip, netmask)                      # format address string so it looks like 'ip/netmask' to make netaddr work
+    try:
+        network_address = netaddr.IPNetwork(address).network     # calculate and print network address
+        print "/{0}".format(network_address)
+    except:
+        print "/rack-unknown"                                    # print catch-all value if unable to calculate network address
+```
+
+bash Example
+------------
+
+```bash
+#!/bin/bash
+# Here's a bash example to show just how simple these scripts can be
+# Assuming we have a flat network with everything on a single switch, we can fake a rack topology.
+# This could occur in a lab environment where we have limited nodes, like 2-8 physical machines on an unmanaged switch.
+# This may also apply to multiple virtual machines running on the same physical hardware.
+# The number of machines isn't important; what matters is that we are faking a network topology when there isn't one.
+#
+#       +----------+    +--------+
+#       |jobtracker|    |datanode|
+#       +----------+    +--------+
+#              \        /
+#  +--------+  +--------+  +--------+
+#  |datanode|--| switch |--|datanode|
+#  +--------+  +--------+  +--------+
+#              /        \
+#       +--------+    +--------+
+#       |datanode|    |namenode|
+#       +--------+    +--------+
+#
+# With this network topology, we are treating each host as a rack.  This is done by taking the last octet
+# of the datanode's IP and prefixing it with the string '/rack-'.  The advantage of doing this is that HDFS
+# can create its 'off-rack' block copy.
+# 1) 'echo $@' will echo all ARGV values to xargs.
+# 2) 'xargs' will enforce that we print a single argv value per line
+# 3) 'awk' will split fields on dots and append the last field to the string '/rack-'. If awk
+#    fails to split on four dots, it will still print '/rack-' followed by the last field value
+
+echo $@ | xargs -n 1 | awk -F '.' '{print "/rack-"$NF}'
+```
\ No newline at end of file