Posted to common-commits@hadoop.apache.org by wh...@apache.org on 2015/03/11 22:31:31 UTC

[12/12] hadoop git commit: HADOOP-11633. Convert remaining branch-2 .apt.vm files to markdown. Contributed by Masatake Iwasaki.

HADOOP-11633. Convert remaining branch-2 .apt.vm files to markdown. Contributed by Masatake Iwasaki.


Project: http://git-wip-us.apache.org/repos/asf/hadoop/repo
Commit: http://git-wip-us.apache.org/repos/asf/hadoop/commit/245f7b2a
Tree: http://git-wip-us.apache.org/repos/asf/hadoop/tree/245f7b2a
Diff: http://git-wip-us.apache.org/repos/asf/hadoop/diff/245f7b2a

Branch: refs/heads/branch-2.7
Commit: 245f7b2a77f2e0a6edee553e2621eabf2ead79da
Parents: 0fe5f5b
Author: Haohui Mai <wh...@apache.org>
Authored: Wed Mar 11 14:23:44 2015 -0700
Committer: Haohui Mai <wh...@apache.org>
Committed: Wed Mar 11 14:31:16 2015 -0700

----------------------------------------------------------------------
 .../hadoop-auth/src/site/apt/BuildingIt.apt.vm  |   70 -
 .../src/site/apt/Configuration.apt.vm           |  377 ---
 .../hadoop-auth/src/site/apt/Examples.apt.vm    |  133 -
 .../hadoop-auth/src/site/apt/index.apt.vm       |   59 -
 .../hadoop-auth/src/site/markdown/BuildingIt.md |   56 +
 .../src/site/markdown/Configuration.md          |  341 +++
 .../hadoop-auth/src/site/markdown/Examples.md   |  109 +
 .../hadoop-auth/src/site/markdown/index.md      |   43 +
 hadoop-common-project/hadoop-common/CHANGES.txt |    3 +
 .../src/site/apt/ClusterSetup.apt.vm            |  703 -----
 .../src/site/apt/CommandsManual.apt.vm          |  283 --
 .../src/site/markdown/ClusterSetup.md           |  337 +++
 .../src/site/markdown/CommandsManual.md         |  178 ++
 .../hadoop-kms/src/site/apt/index.apt.vm        | 1022 -------
 .../hadoop-kms/src/site/markdown/index.md.vm    |  864 ++++++
 .../src/site/apt/ServerSetup.apt.vm             |  159 -
 .../src/site/apt/UsingHttpTools.apt.vm          |   87 -
 .../src/site/apt/index.apt.vm                   |   83 -
 .../src/site/markdown/ServerSetup.md.vm         |  121 +
 .../src/site/markdown/UsingHttpTools.md         |   62 +
 .../src/site/markdown/index.md                  |   50 +
 .../src/site/apt/DistributedCacheDeploy.apt.vm  |  151 -
 .../src/site/apt/EncryptedShuffle.apt.vm        |  320 ---
 .../src/site/apt/MapReduceTutorial.apt.vm       | 1605 -----------
 ...pReduce_Compatibility_Hadoop1_Hadoop2.apt.vm |  114 -
 .../src/site/apt/MapredAppMasterRest.apt.vm     | 2709 ------------------
 .../src/site/apt/MapredCommands.apt.vm          |  227 --
 .../apt/PluggableShuffleAndPluggableSort.apt.vm |   98 -
 .../site/markdown/DistributedCacheDeploy.md.vm  |  119 +
 .../src/site/markdown/EncryptedShuffle.md       |  255 ++
 .../src/site/markdown/MapReduceTutorial.md      | 1156 ++++++++
 .../MapReduce_Compatibility_Hadoop1_Hadoop2.md  |   69 +
 .../src/site/markdown/MapredAppMasterRest.md    | 2397 ++++++++++++++++
 .../src/site/markdown/MapredCommands.md         |  151 +
 .../PluggableShuffleAndPluggableSort.md         |   73 +
 .../src/site/apt/HistoryServerRest.apt.vm       | 2672 -----------------
 .../src/site/markdown/HistoryServerRest.md      | 2361 +++++++++++++++
 hadoop-project/src/site/apt/index.apt.vm        |   73 -
 hadoop-project/src/site/markdown/index.md.vm    |   69 +
 .../hadoop-openstack/src/site/apt/index.apt.vm  |  686 -----
 .../hadoop-openstack/src/site/markdown/index.md |  544 ++++
 .../src/site/resources/css/site.css             |   29 +
 .../src/site/apt/SchedulerLoadSimulator.apt.vm  |  439 ---
 .../src/site/markdown/SchedulerLoadSimulator.md |  357 +++
 .../src/site/apt/HadoopStreaming.apt.vm         |  792 -----
 .../src/site/markdown/HadoopStreaming.md.vm     |  559 ++++
 46 files changed, 10303 insertions(+), 12862 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-auth/src/site/apt/BuildingIt.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/site/apt/BuildingIt.apt.vm b/hadoop-common-project/hadoop-auth/src/site/apt/BuildingIt.apt.vm
deleted file mode 100644
index 2ca2f0a..0000000
--- a/hadoop-common-project/hadoop-auth/src/site/apt/BuildingIt.apt.vm
+++ /dev/null
@@ -1,70 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Auth, Java HTTP SPNEGO ${project.version} - Building It
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop Auth, Java HTTP SPNEGO ${project.version} - Building It
-
-* Requirements
-
-  * Java 6+
-
-  * Maven 3+
-
-  * Kerberos KDC (for running Kerberos test cases)
-
-* Building
-
-  Use Maven goals: clean, test, compile, package, install
-
-  Available profiles: docs, testKerberos
-
-* Testing
-
-  By default Kerberos testcases are not run.
-
-  The requirements to run Kerberos testcases are a running KDC, a keytab
-  file with a client principal and a kerberos principal.
-
-  To run Kerberos tescases use the <<<testKerberos>>> Maven profile:
-
-+---+
-$ mvn test -PtestKerberos
-+---+
-
-  The following Maven <<<-D>>> options can be used to change the default
-  values:
-
-  * <<<hadoop-auth.test.kerberos.realm>>>: default value <<LOCALHOST>>
-
-  * <<<hadoop-auth.test.kerberos.client.principal>>>: default value <<client>>
-
-  * <<<hadoop-auth.test.kerberos.server.principal>>>: default value
-    <<HTTP/localhost>> (it must start 'HTTP/')
-
-  * <<<hadoop-auth.test.kerberos.keytab.file>>>: default value
-    <<${HOME}/${USER}.keytab>>
-
-** Generating Documentation
-
-  To create the documentation use the <<<docs>>> Maven profile:
-
-+---+
-$ mvn package -Pdocs
-+---+
-
-  The generated documentation is available at
-  <<<hadoop-auth/target/site/>>>.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm b/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
deleted file mode 100644
index 88248e5..0000000
--- a/hadoop-common-project/hadoop-auth/src/site/apt/Configuration.apt.vm
+++ /dev/null
@@ -1,377 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Auth, Java HTTP SPNEGO ${project.version} - Server Side
-  Configuration
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop Auth, Java HTTP SPNEGO ${project.version} - Server Side
-Configuration
-
-* Server Side Configuration Setup
-
-  The AuthenticationFilter filter is Hadoop Auth's server side component.
-
-  This filter must be configured in front of all the web application resources
-  that required authenticated requests. For example:
-
-  The Hadoop Auth and dependent JAR files must be in the web application
-  classpath (commonly the <<<WEB-INF/lib>>> directory).
-
-  Hadoop Auth uses SLF4J-API for logging. Auth Maven POM dependencies define
-  the SLF4J API dependency but it does not define the dependency on a concrete
-  logging implementation, this must be addded explicitly to the web
-  application. For example, if the web applicationan uses Log4j, the
-  SLF4J-LOG4J12 and LOG4J jar files must be part part of the web application
-  classpath as well as the Log4j configuration file.
-
-** Common Configuration parameters
-
-  * <<<config.prefix>>>: If specified, all other configuration parameter names
-    must start with the prefix. The default value is no prefix.
-
-  * <<<[PREFIX.]type>>>: the authentication type keyword (<<<simple>>> or
-    <<<kerberos>>>) or a Authentication handler implementation.
-
-  * <<<[PREFIX.]signature.secret>>>: When <<<signer.secret.provider>>> is set to
-    <<<string>>> or not specified, this is the value for the secret used to sign
-    the HTTP cookie.
-
-  * <<<[PREFIX.]token.validity>>>: The validity -in seconds- of the generated
-    authentication token. The default value is <<<3600>>> seconds. This is also
-    used for the rollover interval when <<<signer.secret.provider>>> is set to
-    <<<random>>> or <<<zookeeper>>>.
-
-  * <<<[PREFIX.]cookie.domain>>>: domain to use for the HTTP cookie that stores
-    the authentication token.
-
-  * <<<[PREFIX.]cookie.path>>>: path to use for the HTTP cookie that stores the
-    authentication token.
-
-  * <<<signer.secret.provider>>>: indicates the name of the SignerSecretProvider
-    class to use. Possible values are: <<<string>>>, <<<random>>>,
-    <<<zookeeper>>>, or a classname. If not specified, the <<<string>>>
-    implementation will be used; and failing that, the <<<random>>>
-    implementation will be used.
-
-** Kerberos Configuration
-
-  <<IMPORTANT>>: A KDC must be configured and running.
-
-  To use Kerberos SPNEGO as the authentication mechanism, the authentication
-  filter must be configured with the following init parameters:
-
-    * <<<[PREFIX.]type>>>: the keyword <<<kerberos>>>.
-
-    * <<<[PREFIX.]kerberos.principal>>>: The web-application Kerberos principal
-      name. The Kerberos principal name must start with <<<HTTP/...>>>. For
-      example: <<<...@LOCALHOST>>>.  There is no default value.
-
-    * <<<[PREFIX.]kerberos.keytab>>>: The path to the keytab file containing
-      the credentials for the kerberos principal. For example:
-      <<</Users/tucu/tucu.keytab>>>. There is no default value.
-
-  <<Example>>:
-
-+---+
-<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
-    ...
-
-    <filter>
-        <filter-name>kerberosFilter</filter-name>
-        <filter-class>org.apache.hadoop.security.auth.server.AuthenticationFilter</filter-class>
-        <init-param>
-            <param-name>type</param-name>
-            <param-value>kerberos</param-value>
-        </init-param>
-        <init-param>
-            <param-name>token.validity</param-name>
-            <param-value>30</param-value>
-        </init-param>
-        <init-param>
-            <param-name>cookie.domain</param-name>
-            <param-value>.foo.com</param-value>
-        </init-param>
-        <init-param>
-            <param-name>cookie.path</param-name>
-            <param-value>/</param-value>
-        </init-param>
-        <init-param>
-            <param-name>kerberos.principal</param-name>
-            <param-value>HTTP/localhost@LOCALHOST</param-value>
-        </init-param>
-        <init-param>
-            <param-name>kerberos.keytab</param-name>
-            <param-value>/tmp/auth.keytab</param-value>
-        </init-param>
-    </filter>
-
-    <filter-mapping>
-        <filter-name>kerberosFilter</filter-name>
-        <url-pattern>/kerberos/*</url-pattern>
-    </filter-mapping>
-
-    ...
-</web-app>
-+---+
-
-** Pseudo/Simple Configuration
-
-  To use Pseudo/Simple as the authentication mechanism (trusting the value of
-  the query string parameter 'user.name'), the authentication filter must be
-  configured with the following init parameters:
-
-    * <<<[PREFIX.]type>>>: the keyword <<<simple>>>.
-
-    * <<<[PREFIX.]simple.anonymous.allowed>>>: is a boolean parameter that
-      indicates if anonymous requests are allowed or not. The default value is
-      <<<false>>>.
-
-  <<Example>>:
-
-+---+
-<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
-    ...
-
-    <filter>
-        <filter-name>simpleFilter</filter-name>
-        <filter-class>org.apache.hadoop.security.auth.server.AuthenticationFilter</filter-class>
-        <init-param>
-            <param-name>type</param-name>
-            <param-value>simple</param-value>
-        </init-param>
-        <init-param>
-            <param-name>token.validity</param-name>
-            <param-value>30</param-value>
-        </init-param>
-        <init-param>
-            <param-name>cookie.domain</param-name>
-            <param-value>.foo.com</param-value>
-        </init-param>
-        <init-param>
-            <param-name>cookie.path</param-name>
-            <param-value>/</param-value>
-        </init-param>
-        <init-param>
-            <param-name>simple.anonymous.allowed</param-name>
-            <param-value>false</param-value>
-        </init-param>
-    </filter>
-
-    <filter-mapping>
-        <filter-name>simpleFilter</filter-name>
-        <url-pattern>/simple/*</url-pattern>
-    </filter-mapping>
-
-    ...
-</web-app>
-+---+
-
-** AltKerberos Configuration
-
-  <<IMPORTANT>>: A KDC must be configured and running.
-
-  The AltKerberos authentication mechanism is a partially implemented derivative
-  of the Kerberos SPNEGO authentication mechanism which allows a "mixed" form of
-  authentication where Kerberos SPNEGO is used by non-browsers while an
-  alternate form of authentication (to be implemented by the user) is used for
-  browsers.  To use AltKerberos as the authentication mechanism (besides
-  providing an implementation), the authentication filter must be configured
-  with the following init parameters, in addition to the previously mentioned
-  Kerberos SPNEGO ones:
-
-    * <<<[PREFIX.]type>>>: the full class name of the implementation of
-      AltKerberosAuthenticationHandler to use.
-
-    * <<<[PREFIX.]alt-kerberos.non-browser.user-agents>>>: a comma-separated
-      list of which user-agents should be considered non-browsers.
-
-  <<Example>>:
-
-+---+
-<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
-    ...
-
-    <filter>
-        <filter-name>kerberosFilter</filter-name>
-        <filter-class>org.apache.hadoop.security.auth.server.AuthenticationFilter</filter-class>
-        <init-param>
-            <param-name>type</param-name>
-            <param-value>org.my.subclass.of.AltKerberosAuthenticationHandler</param-value>
-        </init-param>
-        <init-param>
-            <param-name>alt-kerberos.non-browser.user-agents</param-name>
-            <param-value>java,curl,wget,perl</param-value>
-        </init-param>
-        <init-param>
-            <param-name>token.validity</param-name>
-            <param-value>30</param-value>
-        </init-param>
-        <init-param>
-            <param-name>cookie.domain</param-name>
-            <param-value>.foo.com</param-value>
-        </init-param>
-        <init-param>
-            <param-name>cookie.path</param-name>
-            <param-value>/</param-value>
-        </init-param>
-        <init-param>
-            <param-name>kerberos.principal</param-name>
-            <param-value>HTTP/localhost@LOCALHOST</param-value>
-        </init-param>
-        <init-param>
-            <param-name>kerberos.keytab</param-name>
-            <param-value>/tmp/auth.keytab</param-value>
-        </init-param>
-    </filter>
-
-    <filter-mapping>
-        <filter-name>kerberosFilter</filter-name>
-        <url-pattern>/kerberos/*</url-pattern>
-    </filter-mapping>
-
-    ...
-</web-app>
-+---+
-
-** SignerSecretProvider Configuration
-
-  The SignerSecretProvider is used to provide more advanced behaviors for the
-  secret used for signing the HTTP Cookies.
-
-  These are the relevant configuration properties:
-
-    * <<<signer.secret.provider>>>: indicates the name of the
-      SignerSecretProvider class to use. Possible values are: "string",
-      "random", "zookeeper", or a classname. If not specified, the "string"
-      implementation will be used; and failing that, the "random" implementation
-      will be used.
-
-    * <<<[PREFIX.]signature.secret>>>: When <<<signer.secret.provider>>> is set
-      to <<<string>>> or not specified, this is the value for the secret used to
-      sign the HTTP cookie.
-
-    * <<<[PREFIX.]token.validity>>>: The validity -in seconds- of the generated
-      authentication token. The default value is <<<3600>>> seconds. This is
-      also used for the rollover interval when <<<signer.secret.provider>>> is
-      set to <<<random>>> or <<<zookeeper>>>.
-
-  The following configuration properties are specific to the <<<zookeeper>>>
-  implementation:
-
-    * <<<signer.secret.provider.zookeeper.connection.string>>>: Indicates the
-      ZooKeeper connection string to connect with.
-
-    * <<<signer.secret.provider.zookeeper.path>>>: Indicates the ZooKeeper path
-      to use for storing and retrieving the secrets.  All servers
-      that need to coordinate their secret should point to the same path
-
-    * <<<signer.secret.provider.zookeeper.auth.type>>>: Indicates the auth type
-      to use.  Supported values are <<<none>>> and <<<sasl>>>.  The default
-      value is <<<none>>>.
-
-    * <<<signer.secret.provider.zookeeper.kerberos.keytab>>>: Set this to the
-      path with the Kerberos keytab file.  This is only required if using
-      Kerberos.
-
-    * <<<signer.secret.provider.zookeeper.kerberos.principal>>>: Set this to the
-      Kerberos principal to use.  This only required if using Kerberos.
-
-  <<Example>>:
-
-+---+
-<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
-    ...
-
-    <filter>
-        <!-- AuthenticationHandler configs not shown -->
-        <init-param>
-            <param-name>signer.secret.provider</param-name>
-            <param-value>string</param-value>
-        </init-param>
-        <init-param>
-            <param-name>signature.secret</param-name>
-            <param-value>my_secret</param-value>
-        </init-param>
-    </filter>
-
-    ...
-</web-app>
-+---+
-
-  <<Example>>:
-
-+---+
-<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
-    ...
-
-    <filter>
-        <!-- AuthenticationHandler configs not shown -->
-        <init-param>
-            <param-name>signer.secret.provider</param-name>
-            <param-value>random</param-value>
-        </init-param>
-        <init-param>
-            <param-name>token.validity</param-name>
-            <param-value>30</param-value>
-        </init-param>
-    </filter>
-
-    ...
-</web-app>
-+---+
-
-  <<Example>>:
-
-+---+
-<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
-    ...
-
-    <filter>
-        <!-- AuthenticationHandler configs not shown -->
-        <init-param>
-            <param-name>signer.secret.provider</param-name>
-            <param-value>zookeeper</param-value>
-        </init-param>
-        <init-param>
-            <param-name>token.validity</param-name>
-            <param-value>30</param-value>
-        </init-param>
-        <init-param>
-            <param-name>signer.secret.provider.zookeeper.connection.string</param-name>
-            <param-value>zoo1:2181,zoo2:2181,zoo3:2181</param-value>
-        </init-param>
-        <init-param>
-            <param-name>signer.secret.provider.zookeeper.path</param-name>
-            <param-value>/myapp/secrets</param-value>
-        </init-param>
-        <init-param>
-            <param-name>signer.secret.provider.zookeeper.use.kerberos.acls</param-name>
-            <param-value>true</param-value>
-        </init-param>
-        <init-param>
-            <param-name>signer.secret.provider.zookeeper.kerberos.keytab</param-name>
-            <param-value>/tmp/auth.keytab</param-value>
-        </init-param>
-        <init-param>
-            <param-name>signer.secret.provider.zookeeper.kerberos.principal</param-name>
-            <param-value>HTTP/localhost@LOCALHOST</param-value>
-        </init-param>
-    </filter>
-
-    ...
-</web-app>
-+---+
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-auth/src/site/apt/Examples.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/site/apt/Examples.apt.vm b/hadoop-common-project/hadoop-auth/src/site/apt/Examples.apt.vm
deleted file mode 100644
index 1b1afd5..0000000
--- a/hadoop-common-project/hadoop-auth/src/site/apt/Examples.apt.vm
+++ /dev/null
@@ -1,133 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Auth, Java HTTP SPNEGO ${project.version} - Examples
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop Auth, Java HTTP SPNEGO ${project.version} - Examples
-
-* Accessing a Hadoop Auth protected URL Using a browser
-
-  <<IMPORTANT:>> The browser must support HTTP Kerberos SPNEGO. For example,
-  Firefox or Internet Explorer.
-
-  For Firefox access the low level configuration page by loading the
-  <<<about:config>>> page. Then go to the
-  <<<network.negotiate-auth.trusted-uris>>> preference and add the hostname or
-  the domain of the web server that is HTTP Kerberos SPNEGO protected (if using
-  multiple domains and hostname use comma to separate them).
-  
-* Accessing a Hadoop Auth protected URL Using <<<curl>>>
-
-  <<IMPORTANT:>> The <<<curl>>> version must support GSS, run <<<curl -V>>>.
-
-+---+
-$ curl -V
-curl 7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3
-Protocols: tftp ftp telnet dict ldap http file https ftps
-Features: GSS-Negotiate IPv6 Largefile NTLM SSL libz
-+---+
-
-  Login to the KDC using <<kinit>> and then use <<<curl>>> to fetch protected
-  URL:
-
-+---+
-$ kinit
-Please enter the password for tucu@LOCALHOST:
-$ curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt http://localhost:8080/hadoop-auth-examples/kerberos/who
-Enter host password for user 'tucu':
-
-Hello Hadoop Auth Examples!
-+---+
-
-  * The <<<--negotiate>>> option enables SPNEGO in <<<curl>>>.
-
-  * The <<<-u foo>>> option is required but the user ignored (the principal
-    that has been kinit-ed is used).
-
-  * The <<<-b>>> and <<<-c>>> are use to store and send HTTP Cookies.
-
-* Using the Java Client
-
-  Use the <<<AuthenticatedURL>>> class to obtain an authenticated HTTP
-  connection:
-
-+---+
-...
-URL url = new URL("http://localhost:8080/hadoop-auth/kerberos/who");
-AuthenticatedURL.Token token = new AuthenticatedURL.Token();
-...
-HttpURLConnection conn = new AuthenticatedURL(url, token).openConnection();
-...
-conn = new AuthenticatedURL(url, token).openConnection();
-...
-+---+
-
-* Building and Running the Examples
-
-  Download Hadoop-Auth's source code, the examples are in the
-  <<<src/main/examples>>> directory.
-
-** Server Example:
-
-  Edit the <<<hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml>>> and set the
-  right configuration init parameters for the <<<AuthenticationFilter>>>
-  definition configured for Kerberos (the right Kerberos principal and keytab
-  file must be specified). Refer to the {{{./Configuration.html}Configuration
-  document}} for details.
-
-  Create the web application WAR file by running the <<<mvn package>>> command.
-
-  Deploy the WAR file in a servlet container. For example, if using Tomcat,
-  copy the WAR file to Tomcat's <<<webapps/>>> directory.
-
-  Start the servlet container.
-
-** Accessing the server using <<<curl>>>
-
-  Try accessing protected resources using <<<curl>>>. The protected resources
-  are:
-
-+---+
-$ kinit
-Please enter the password for tucu@LOCALHOST:
-
-$ curl http://localhost:8080/hadoop-auth-examples/anonymous/who
-
-$ curl http://localhost:8080/hadoop-auth-examples/simple/who?user.name=foo
-
-$ curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt http://localhost:8080/hadoop-auth-examples/kerberos/who
-+---+
-
-** Accessing the server using the Java client example
-
-+---+
-$ kinit
-Please enter the password for tucu@LOCALHOST:
-
-$ cd examples
-
-$ mvn exec:java -Durl=http://localhost:8080/hadoop-auth-examples/kerberos/who
-
-....
-
-Token value: "u=tucu,p=tucu@LOCALHOST,t=kerberos,e=1295305313146,s=sVZ1mpSnC5TKhZQE3QLN5p2DWBo="
-Status code: 200 OK
-
-You are: user[tucu] principal[tucu@LOCALHOST]
-
-....
-
-+---+
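
The token value printed in the output above is a flat `key=value` list (`u` user, `p` principal, `t` auth type, `e` expiry in milliseconds, `s` signature). As a quick illustration (a sketch only, not Hadoop Auth's own parsing code), the fields can be pulled apart in the shell:

```shell
# Sketch: split the example token from the output above into its fields.
# Field names (u, p, t, e, s) are as shown in the output; this is not
# Hadoop Auth's own code.
token='u=tucu,p=tucu@LOCALHOST,t=kerberos,e=1295305313146,s=sVZ1mpSnC5TKhZQE3QLN5p2DWBo='
user=$(printf '%s' "$token" | tr ',' '\n' | sed -n 's/^u=//p')
atype=$(printf '%s' "$token" | tr ',' '\n' | sed -n 's/^t=//p')
expiry=$(printf '%s' "$token" | tr ',' '\n' | sed -n 's/^e=//p')
echo "user=$user type=$atype expires=$expiry"
```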

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm b/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
deleted file mode 100644
index bf85f7f..0000000
--- a/hadoop-common-project/hadoop-auth/src/site/apt/index.apt.vm
+++ /dev/null
@@ -1,59 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Auth, Java HTTP SPNEGO ${project.version}
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop Auth, Java HTTP SPNEGO ${project.version}
-
-  Hadoop Auth is a Java library consisting of a client and a server
-  components to enable Kerberos SPNEGO authentication for HTTP.
-
-  Hadoop Auth also supports additional authentication mechanisms on the client
-  and the server side via 2 simple interfaces.
-
-  Additionally, it provides a partially implemented derivative of the Kerberos
-  SPNEGO authentication to allow a "mixed" form of authentication where Kerberos
-  SPNEGO is used by non-browsers while an alternate form of authentication
-  (to be implemented by the user) is used for browsers.
-
-* License
-
-  Hadoop Auth is distributed under {{{http://www.apache.org/licenses/}Apache
-  License 2.0}}.
-
-* How Does Auth Works?
-
-  Hadoop Auth enforces authentication on protected resources, once authentiation
-  has been established it sets a signed HTTP Cookie that contains an
-  authentication token with the user name, user principal, authentication type
-  and expiration time.
-
-  Subsequent HTTP client requests presenting the signed HTTP Cookie have access
-  to the protected resources until the HTTP Cookie expires.
-
-  The secret used to sign the HTTP Cookie has multiple implementations that
-  provide different behaviors, including a hardcoded secret string, a rolling
-  randomly generated secret, and a rolling randomly generated secret
-  synchronized between multiple servers using ZooKeeper.
-
-* User Documentation
-
-  * {{{./Examples.html}Examples}}
-
-  * {{{./Configuration.html}Configuration}}
-
-  * {{{./BuildingIt.html}Building It}}
-
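
The signed-cookie scheme described above can be sketched in the shell. This is only an illustration of signing a token payload with a shared secret, in the spirit of the `string` secret provider; it is not Hadoop Auth's actual signature algorithm, and the payload and secret values are made up:

```shell
# Illustrative only: append a signature ("s=" field) to a token payload
# using a shared secret. HMAC-SHA256 is an assumption for the sketch;
# this is not the real Hadoop Auth signing code.
payload='u=tucu,p=tucu@LOCALHOST,t=kerberos,e=1295305313146'
secret='my_secret'
sig=$(printf '%s' "$payload" | openssl dgst -sha256 -hmac "$secret" -binary | openssl base64 -A)
cookie="$payload,s=$sig"
echo "$cookie"
```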

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-auth/src/site/markdown/BuildingIt.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/site/markdown/BuildingIt.md b/hadoop-common-project/hadoop-auth/src/site/markdown/BuildingIt.md
new file mode 100644
index 0000000..53a49d4
--- /dev/null
+++ b/hadoop-common-project/hadoop-auth/src/site/markdown/BuildingIt.md
@@ -0,0 +1,56 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop Auth, Java HTTP SPNEGO - Building It
+===========================================
+
+Requirements
+------------
+
+* Java 6+
+* Maven 3+
+* Kerberos KDC (for running Kerberos test cases)
+
+Building
+--------
+
+Use Maven goals: clean, test, compile, package, install
+
+Available profiles: docs, testKerberos
+
+Testing
+-------
+
+By default Kerberos testcases are not run.
+
+The requirements to run the Kerberos testcases are a running KDC and a keytab file with a client principal and a Kerberos principal.
+
+To run the Kerberos testcases use the `testKerberos` Maven profile:
+
+    $ mvn test -PtestKerberos
+
+The following Maven `-D` options can be used to change the default values:
+
+* `hadoop-auth.test.kerberos.realm`: default value **LOCALHOST**
+* `hadoop-auth.test.kerberos.client.principal`: default value **client**
+* `hadoop-auth.test.kerberos.server.principal`: default value **HTTP/localhost** (it must start with 'HTTP/')
+* `hadoop-auth.test.kerberos.keytab.file`: default value **$HOME/$USER.keytab**
+
+### Generating Documentation
+
+To create the documentation use the `docs` Maven profile:
+
+    $ mvn package -Pdocs
+
+The generated documentation is available at `hadoop-auth/target/site/`.
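
Putting the testing options above together, a full test invocation overriding all four defaults might look like the following (the realm and keytab path are made-up example values; running it requires Maven and a KDC):

```shell
# Hypothetical invocation combining the -D overrides listed above.
mvn test -PtestKerberos \
  -Dhadoop-auth.test.kerberos.realm=EXAMPLE.COM \
  -Dhadoop-auth.test.kerberos.client.principal=client \
  -Dhadoop-auth.test.kerberos.server.principal=HTTP/localhost \
  -Dhadoop-auth.test.kerberos.keytab.file=$HOME/auth-test.keytab
```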

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-auth/src/site/markdown/Configuration.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/site/markdown/Configuration.md b/hadoop-common-project/hadoop-auth/src/site/markdown/Configuration.md
new file mode 100644
index 0000000..9d076bb
--- /dev/null
+++ b/hadoop-common-project/hadoop-auth/src/site/markdown/Configuration.md
@@ -0,0 +1,341 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop Auth, Java HTTP SPNEGO - Server Side Configuration
+=========================================================
+
+Server Side Configuration Setup
+-------------------------------
+
+The AuthenticationFilter filter is Hadoop Auth's server side component.
+
+This filter must be configured in front of all the web application resources that require authenticated requests.
+
+The Hadoop Auth JAR and its dependent JAR files must be in the web application classpath (commonly the `WEB-INF/lib` directory).
+
+Hadoop Auth uses SLF4J-API for logging. The Hadoop Auth Maven POM declares the SLF4J API dependency but does not declare a dependency on a concrete logging implementation; this must be added explicitly to the web application. For example, if the web application uses Log4j, the SLF4J-LOG4J12 and LOG4J jar files must be part of the web application classpath, as well as the Log4j configuration file.
+
+### Common Configuration parameters
+
+*   `config.prefix`: If specified, all other configuration parameter names
+    must start with the prefix. The default value is no prefix.
+
+*   `[PREFIX.]type`: the authentication type keyword (`simple` or
+    `kerberos`) or an Authentication handler implementation.
+
+*   `[PREFIX.]signature.secret`: When `signer.secret.provider` is set to
+    `string` or not specified, this is the value for the secret used to sign
+    the HTTP cookie.
+
+*   `[PREFIX.]token.validity`: The validity -in seconds- of the generated
+    authentication token. The default value is `3600` seconds. This is also
+    used for the rollover interval when `signer.secret.provider` is set to
+    `random` or `zookeeper`.
+
+*   `[PREFIX.]cookie.domain`: domain to use for the HTTP cookie that stores
+    the authentication token.
+
+*   `[PREFIX.]cookie.path`: path to use for the HTTP cookie that stores the
+    authentication token.
+
+*   `signer.secret.provider`: indicates the name of the SignerSecretProvider
+    class to use. Possible values are: `string`, `random`,
+    `zookeeper`, or a classname. If not specified, the `string`
+    implementation will be used; and failing that, the `random`
+    implementation will be used.
+
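For illustration, this is how a prefixed configuration might look. This is a sketch only: the `authFilter` name and the `myauth` prefix are hypothetical choices, not values from this document. The filter appends a `.` to the declared prefix, so every other parameter name must then start with `myauth.`:

```xml
<filter>
    <filter-name>authFilter</filter-name>
    <filter-class>org.apache.hadoop.security.authentication.server.AuthenticationFilter</filter-class>
    <!-- Declare the prefix itself without a prefix -->
    <init-param>
        <param-name>config.prefix</param-name>
        <param-value>myauth</param-value>
    </init-param>
    <!-- All remaining parameters carry the declared prefix -->
    <init-param>
        <param-name>myauth.type</param-name>
        <param-value>simple</param-value>
    </init-param>
</filter>
```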
+### Kerberos Configuration
+
+**IMPORTANT**: A KDC must be configured and running.
+
+To use Kerberos SPNEGO as the authentication mechanism, the authentication filter must be configured with the following init parameters:
+
+*   `[PREFIX.]type`: the keyword `kerberos`.
+
+*   `[PREFIX.]kerberos.principal`: The web-application Kerberos principal
+    name. The Kerberos principal name must start with `HTTP/...`. For
+    example: `HTTP/localhost@LOCALHOST`. There is no default value.
+
+*   `[PREFIX.]kerberos.keytab`: The path to the keytab file containing
+    the credentials for the kerberos principal. For example:
+    `/Users/tucu/tucu.keytab`. There is no default value.
+
+**Example**:
+
+    <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+        ...
+
+        <filter>
+            <filter-name>kerberosFilter</filter-name>
+            <filter-class>org.apache.hadoop.security.authentication.server.AuthenticationFilter</filter-class>
+            <init-param>
+                <param-name>type</param-name>
+                <param-value>kerberos</param-value>
+            </init-param>
+            <init-param>
+                <param-name>token.validity</param-name>
+                <param-value>30</param-value>
+            </init-param>
+            <init-param>
+                <param-name>cookie.domain</param-name>
+                <param-value>.foo.com</param-value>
+            </init-param>
+            <init-param>
+                <param-name>cookie.path</param-name>
+                <param-value>/</param-value>
+            </init-param>
+            <init-param>
+                <param-name>kerberos.principal</param-name>
+                <param-value>HTTP/localhost@LOCALHOST</param-value>
+            </init-param>
+            <init-param>
+                <param-name>kerberos.keytab</param-name>
+                <param-value>/tmp/auth.keytab</param-value>
+            </init-param>
+        </filter>
+
+        <filter-mapping>
+            <filter-name>kerberosFilter</filter-name>
+            <url-pattern>/kerberos/*</url-pattern>
+        </filter-mapping>
+
+        ...
+    </web-app>
+
+### Pseudo/Simple Configuration
+
+To use Pseudo/Simple as the authentication mechanism (trusting the value of the query string parameter 'user.name'), the authentication filter must be configured with the following init parameters:
+
+*   `[PREFIX.]type`: the keyword `simple`.
+
+*   `[PREFIX.]simple.anonymous.allowed`: is a boolean parameter that
+    indicates if anonymous requests are allowed or not. The default value is
+    `false`.
+
+**Example**:
+
+    <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+        ...
+
+        <filter>
+            <filter-name>simpleFilter</filter-name>
+            <filter-class>org.apache.hadoop.security.authentication.server.AuthenticationFilter</filter-class>
+            <init-param>
+                <param-name>type</param-name>
+                <param-value>simple</param-value>
+            </init-param>
+            <init-param>
+                <param-name>token.validity</param-name>
+                <param-value>30</param-value>
+            </init-param>
+            <init-param>
+                <param-name>cookie.domain</param-name>
+                <param-value>.foo.com</param-value>
+            </init-param>
+            <init-param>
+                <param-name>cookie.path</param-name>
+                <param-value>/</param-value>
+            </init-param>
+            <init-param>
+                <param-name>simple.anonymous.allowed</param-name>
+                <param-value>false</param-value>
+            </init-param>
+        </filter>
+
+        <filter-mapping>
+            <filter-name>simpleFilter</filter-name>
+            <url-pattern>/simple/*</url-pattern>
+        </filter-mapping>
+
+        ...
+    </web-app>
+
+### AltKerberos Configuration
+
+**IMPORTANT**: A KDC must be configured and running.
+
+The AltKerberos authentication mechanism is a partially implemented derivative of the Kerberos SPNEGO authentication mechanism which allows a "mixed" form of authentication where Kerberos SPNEGO is used by non-browsers while an alternate form of authentication (to be implemented by the user) is used for browsers. To use AltKerberos as the authentication mechanism (besides providing an implementation), the authentication filter must be configured with the following init parameters, in addition to the previously mentioned Kerberos SPNEGO ones:
+
+*   `[PREFIX.]type`: the full class name of the implementation of
+    AltKerberosAuthenticationHandler to use.
+
+*   `[PREFIX.]alt-kerberos.non-browser.user-agents`: a comma-separated
+    list of which user-agents should be considered non-browsers.
+
+**Example**:
+
+    <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+        ...
+
+        <filter>
+            <filter-name>kerberosFilter</filter-name>
+            <filter-class>org.apache.hadoop.security.authentication.server.AuthenticationFilter</filter-class>
+            <init-param>
+                <param-name>type</param-name>
+                <param-value>org.my.subclass.of.AltKerberosAuthenticationHandler</param-value>
+            </init-param>
+            <init-param>
+                <param-name>alt-kerberos.non-browser.user-agents</param-name>
+                <param-value>java,curl,wget,perl</param-value>
+            </init-param>
+            <init-param>
+                <param-name>token.validity</param-name>
+                <param-value>30</param-value>
+            </init-param>
+            <init-param>
+                <param-name>cookie.domain</param-name>
+                <param-value>.foo.com</param-value>
+            </init-param>
+            <init-param>
+                <param-name>cookie.path</param-name>
+                <param-value>/</param-value>
+            </init-param>
+            <init-param>
+                <param-name>kerberos.principal</param-name>
+                <param-value>HTTP/localhost@LOCALHOST</param-value>
+            </init-param>
+            <init-param>
+                <param-name>kerberos.keytab</param-name>
+                <param-value>/tmp/auth.keytab</param-value>
+            </init-param>
+        </filter>
+
+        <filter-mapping>
+            <filter-name>kerberosFilter</filter-name>
+            <url-pattern>/kerberos/*</url-pattern>
+        </filter-mapping>
+
+        ...
+    </web-app>
+
+### SignerSecretProvider Configuration
+
+The SignerSecretProvider is used to provide more advanced behaviors for the secret used for signing the HTTP Cookies.
+
+These are the relevant configuration properties:
+
+*   `signer.secret.provider`: indicates the name of the
+    SignerSecretProvider class to use. Possible values are: `string`,
+    `random`, `zookeeper`, or a classname. If not specified, the `string`
+    implementation will be used; and failing that, the `random` implementation
+    will be used.
+
+*   `[PREFIX.]signature.secret`: When `signer.secret.provider` is set
+    to `string` or not specified, this is the value for the secret used to
+    sign the HTTP cookie.
+
+*   `[PREFIX.]token.validity`: The validity -in seconds- of the generated
+    authentication token. The default value is `3600` seconds. This is
+    also used for the rollover interval when `signer.secret.provider` is
+    set to `random` or `zookeeper`.
+
+The following configuration properties are specific to the `zookeeper` implementation:
+
+*   `signer.secret.provider.zookeeper.connection.string`: Indicates the
+    ZooKeeper connection string to connect with.
+
+*   `signer.secret.provider.zookeeper.path`: Indicates the ZooKeeper path
+    to use for storing and retrieving the secrets. All servers
+    that need to coordinate their secret should point to the same path.
+
+*   `signer.secret.provider.zookeeper.auth.type`: Indicates the auth type
+    to use. Supported values are `none` and `sasl`. The default
+    value is `none`.
+
+*   `signer.secret.provider.zookeeper.kerberos.keytab`: Set this to the
+    path with the Kerberos keytab file. This is only required if using
+    Kerberos.
+
+*   `signer.secret.provider.zookeeper.kerberos.principal`: Set this to the
+    Kerberos principal to use. This is only required if using Kerberos.
+
+**Example**:
+
+    <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+        ...
+
+        <filter>
+            <!-- AuthenticationHandler configs not shown -->
+            <init-param>
+                <param-name>signer.secret.provider</param-name>
+                <param-value>string</param-value>
+            </init-param>
+            <init-param>
+                <param-name>signature.secret</param-name>
+                <param-value>my_secret</param-value>
+            </init-param>
+        </filter>
+
+        ...
+    </web-app>
+
+**Example**:
+
+    <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+        ...
+
+        <filter>
+            <!-- AuthenticationHandler configs not shown -->
+            <init-param>
+                <param-name>signer.secret.provider</param-name>
+                <param-value>random</param-value>
+            </init-param>
+            <init-param>
+                <param-name>token.validity</param-name>
+                <param-value>30</param-value>
+            </init-param>
+        </filter>
+
+        ...
+    </web-app>
+
+**Example**:
+
+    <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee">
+        ...
+
+        <filter>
+            <!-- AuthenticationHandler configs not shown -->
+            <init-param>
+                <param-name>signer.secret.provider</param-name>
+                <param-value>zookeeper</param-value>
+            </init-param>
+            <init-param>
+                <param-name>token.validity</param-name>
+                <param-value>30</param-value>
+            </init-param>
+            <init-param>
+                <param-name>signer.secret.provider.zookeeper.connection.string</param-name>
+                <param-value>zoo1:2181,zoo2:2181,zoo3:2181</param-value>
+            </init-param>
+            <init-param>
+                <param-name>signer.secret.provider.zookeeper.path</param-name>
+                <param-value>/myapp/secrets</param-value>
+            </init-param>
+            <init-param>
+                <param-name>signer.secret.provider.zookeeper.use.kerberos.acls</param-name>
+                <param-value>true</param-value>
+            </init-param>
+            <init-param>
+                <param-name>signer.secret.provider.zookeeper.kerberos.keytab</param-name>
+                <param-value>/tmp/auth.keytab</param-value>
+            </init-param>
+            <init-param>
+                <param-name>signer.secret.provider.zookeeper.kerberos.principal</param-name>
+                <param-value>HTTP/localhost@LOCALHOST</param-value>
+            </init-param>
+        </filter>
+
+        ...
+    </web-app>

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-auth/src/site/markdown/Examples.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/site/markdown/Examples.md b/hadoop-common-project/hadoop-auth/src/site/markdown/Examples.md
new file mode 100644
index 0000000..7efb642
--- /dev/null
+++ b/hadoop-common-project/hadoop-auth/src/site/markdown/Examples.md
@@ -0,0 +1,109 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop Auth, Java HTTP SPNEGO - Examples
+========================================
+
+Accessing a Hadoop Auth protected URL Using a browser
+-----------------------------------------------------
+
+**IMPORTANT:** The browser must support HTTP Kerberos SPNEGO. For example, Firefox or Internet Explorer.
+
+For Firefox, access the low level configuration page by loading the `about:config` page. Then go to the `network.negotiate-auth.trusted-uris` preference and add the hostname or the domain of the web server that is HTTP Kerberos SPNEGO protected (if using multiple domains and hostnames, use a comma to separate them).
+
+Accessing a Hadoop Auth protected URL Using `curl`
+--------------------------------------------------
+
+**IMPORTANT:** The `curl` version must support GSS; run `curl -V` to verify:
+
+    $ curl -V
+    curl 7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8l zlib/1.2.3
+    Protocols: tftp ftp telnet dict ldap http file https ftps
+    Features: GSS-Negotiate IPv6 Largefile NTLM SSL libz
+
+Log in to the KDC using **kinit** and then use `curl` to fetch the protected URL:
+
+    $ kinit
+    Please enter the password for tucu@LOCALHOST:
+    $ curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt http://localhost:8080/hadoop-auth-examples/kerberos/who
+    Enter host password for user 'tucu':
+
+    Hello Hadoop Auth Examples!
+
+*   The `--negotiate` option enables SPNEGO in `curl`.
+
+*   The `-u foo` option is required but the user is ignored (the principal
+    that has been kinit-ed is used).
+
+*   The `-b` and `-c` options are used to store and send HTTP cookies.
+
+Using the Java Client
+---------------------
+
+Use the `AuthenticatedURL` class to obtain an authenticated HTTP connection:
+
+    ...
+    URL url = new URL("http://localhost:8080/hadoop-auth/kerberos/who");
+    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
+    ...
+    HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
+    ...
+    conn = new AuthenticatedURL().openConnection(url, token);
+    ...
+
+Building and Running the Examples
+---------------------------------
+
+Download Hadoop Auth's source code; the examples are in the `src/main/examples` directory.
+
+### Server Example:
+
+Edit the `hadoop-auth-examples/src/main/webapp/WEB-INF/web.xml` and set the right configuration init parameters for the `AuthenticationFilter` definition configured for Kerberos (the right Kerberos principal and keytab file must be specified). Refer to the [Configuration document](./Configuration.html) for details.
+
+Create the web application WAR file by running the `mvn package` command.
+
+Deploy the WAR file in a servlet container. For example, if using Tomcat, copy the WAR file to Tomcat's `webapps/` directory.
+
+Start the servlet container.
+
+### Accessing the server using `curl`
+
+Try accessing protected resources using `curl`. The protected resources are:
+
+    $ kinit
+    Please enter the password for tucu@LOCALHOST:
+
+    $ curl http://localhost:8080/hadoop-auth-examples/anonymous/who
+
+    $ curl http://localhost:8080/hadoop-auth-examples/simple/who?user.name=foo
+
+    $ curl --negotiate -u foo -b ~/cookiejar.txt -c ~/cookiejar.txt http://localhost:8080/hadoop-auth-examples/kerberos/who
+
+### Accessing the server using the Java client example
+
+    $ kinit
+    Please enter the password for tucu@LOCALHOST:
+
+    $ cd examples
+
+    $ mvn exec:java -Durl=http://localhost:8080/hadoop-auth-examples/kerberos/who
+
+    ....
+
+    Token value: "u=tucu,p=tucu@LOCALHOST,t=kerberos,e=1295305313146,s=sVZ1mpSnC5TKhZQE3QLN5p2DWBo="
+    Status code: 200 OK
+
+    You are: user[tucu] principal[tucu@LOCALHOST]
+
+    ....

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-auth/src/site/markdown/index.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-auth/src/site/markdown/index.md b/hadoop-common-project/hadoop-auth/src/site/markdown/index.md
new file mode 100644
index 0000000..8573b18
--- /dev/null
+++ b/hadoop-common-project/hadoop-auth/src/site/markdown/index.md
@@ -0,0 +1,43 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Hadoop Auth, Java HTTP SPNEGO
+=============================
+
+Hadoop Auth is a Java library consisting of client and server components to enable Kerberos SPNEGO authentication for HTTP.
+
+Hadoop Auth also supports additional authentication mechanisms on the client and the server side via two simple interfaces.
+
+Additionally, it provides a partially implemented derivative of the Kerberos SPNEGO authentication to allow a "mixed" form of authentication where Kerberos SPNEGO is used by non-browsers while an alternate form of authentication (to be implemented by the user) is used for browsers.
+
+License
+-------
+
+Hadoop Auth is distributed under [Apache License 2.0](http://www.apache.org/licenses/).
+
+How Does Auth Work?
+--------------------
+
+Hadoop Auth enforces authentication on protected resources; once authentication has been established, it sets a signed HTTP cookie that contains an authentication token with the user name, user principal, authentication type and expiration time.
+
+Subsequent HTTP client requests presenting the signed HTTP Cookie have access to the protected resources until the HTTP Cookie expires.
+
+The secret used to sign the HTTP Cookie has multiple implementations that provide different behaviors, including a hardcoded secret string, a rolling randomly generated secret, and a rolling randomly generated secret synchronized between multiple servers using ZooKeeper.
+
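The signed-cookie idea can be sketched in a few lines of standalone Java. This is an illustrative HMAC-based sketch only; the `CookieSigningSketch` class, the choice of HMAC-SHA256, and the `&s=` separator are assumptions for the example, not Hadoop Auth's actual signer implementation or wire format:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CookieSigningSketch {

    // Sign a token string with HMAC-SHA256 over a shared secret and
    // append the signature, so tampering with any token field is detectable.
    static String sign(String token, byte[] secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret, "HmacSHA256"));
        String sig = Base64.getEncoder().encodeToString(
                mac.doFinal(token.getBytes(StandardCharsets.UTF_8)));
        return token + "&s=" + sig;
    }

    // Verify by re-computing the signature over the token part and
    // comparing it with the signature carried in the cookie.
    static boolean verify(String signedToken, byte[] secret) throws Exception {
        int idx = signedToken.lastIndexOf("&s=");
        return idx >= 0
                && sign(signedToken.substring(0, idx), secret).equals(signedToken);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "my_secret".getBytes(StandardCharsets.UTF_8);
        String cookie =
                sign("u=tucu,p=tucu@LOCALHOST,t=kerberos,e=1295305313146", secret);
        System.out.println(verify(cookie, secret));        // true
        System.out.println(verify(cookie + "x", secret));  // false: tampered
    }
}
```

Under this sketch, a rolling secret amounts to changing the `secret` bytes over time; servers sharing the same secret (for example, via the ZooKeeper-backed provider) can each verify cookies the others signed.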
+User Documentation
+------------------
+
+* [Examples](./Examples.html)
+* [Configuration](./Configuration.html)
+* [Building It](./BuildingIt.html)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-common/CHANGES.txt
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/CHANGES.txt b/hadoop-common-project/hadoop-common/CHANGES.txt
index 04ce1ed..7e0258a 100644
--- a/hadoop-common-project/hadoop-common/CHANGES.txt
+++ b/hadoop-common-project/hadoop-common/CHANGES.txt
@@ -257,6 +257,9 @@ Release 2.7.0 - UNRELEASED
 
     HADOOP-11183. Memory-based S3AOutputstream. (Thomas Demoor via stevel)
 
+    HADOOP-11633. Convert remaining branch-2 .apt.vm files to markdown.
+    (Masatake Iwasaki via wheat9)
+
   BUG FIXES
 
     HADOOP-11512. Use getTrimmedStrings when reading serialization keys

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm b/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
deleted file mode 100644
index 26678f6..0000000
--- a/hadoop-common-project/hadoop-common/src/site/apt/ClusterSetup.apt.vm
+++ /dev/null
@@ -1,703 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~   http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License. See accompanying LICENSE file.
-
-  ---
-  Hadoop Map Reduce Next Generation-${project.version} - Cluster Setup
-  ---
-  ---
-  ${maven.build.timestamp}
-
-%{toc|section=1|fromDepth=0}
-
-Hadoop MapReduce Next Generation - Cluster Setup
-
-* {Purpose}
-
-  This document describes how to install, configure and manage non-trivial
-  Hadoop clusters ranging from a few nodes to extremely large clusters
-  with thousands of nodes.
-
-  To play with Hadoop, you may first want to install it on a single
-  machine (see {{{./SingleCluster.html}Single Node Setup}}).
-
-* {Prerequisites}
-
-  Download a stable version of Hadoop from Apache mirrors.
-
-* {Installation}
-
-  Installing a Hadoop cluster typically involves unpacking the software on all
-  the machines in the cluster or installing RPMs.
-
-  Typically one machine in the cluster is designated as the NameNode and
-  another machine the as ResourceManager, exclusively. These are the masters.
-
-  The rest of the machines in the cluster act as both DataNode and NodeManager.
-  These are the slaves.
-
-* {Running Hadoop in Non-Secure Mode}
-
-  The following sections describe how to configure a Hadoop cluster.
-
-  {Configuration Files}
-
-    Hadoop configuration is driven by two types of important configuration files:
-
-      * Read-only default configuration - <<<core-default.xml>>>,
-        <<<hdfs-default.xml>>>, <<<yarn-default.xml>>> and
-        <<<mapred-default.xml>>>.
-
-      * Site-specific configuration - <<conf/core-site.xml>>,
-        <<conf/hdfs-site.xml>>, <<conf/yarn-site.xml>> and
-        <<conf/mapred-site.xml>>.
-
-
-    Additionally, you can control the Hadoop scripts found in the bin/
-    directory of the distribution, by setting site-specific values via the
-    <<conf/hadoop-env.sh>> and <<yarn-env.sh>>.
-
-  {Site Configuration}
-
-  To configure the Hadoop cluster you will need to configure the
-  <<<environment>>> in which the Hadoop daemons execute as well as the
-  <<<configuration parameters>>> for the Hadoop daemons.
-
-  The Hadoop daemons are NameNode/DataNode and ResourceManager/NodeManager.
-
-
-** {Configuring Environment of Hadoop Daemons}
-
-  Administrators should use the <<conf/hadoop-env.sh>> and
-  <<conf/yarn-env.sh>> script to do site-specific customization of the
-  Hadoop daemons' process environment.
-
-  At the very least you should specify the <<<JAVA_HOME>>> so that it is
-  correctly defined on each remote node.
-
-  In most cases you should also specify <<<HADOOP_PID_DIR>>> and
-  <<<HADOOP_SECURE_DN_PID_DIR>>> to point to directories that can only be
-  written to by the users that are going to run the hadoop daemons.
-  Otherwise there is the potential for a symlink attack.
-
-  Administrators can configure individual daemons using the configuration
-  options shown below in the table:
-
-*--------------------------------------+--------------------------------------+
-|| Daemon                              || Environment Variable                |
-*--------------------------------------+--------------------------------------+
-| NameNode                             | HADOOP_NAMENODE_OPTS                 |
-*--------------------------------------+--------------------------------------+
-| DataNode                             | HADOOP_DATANODE_OPTS                 |
-*--------------------------------------+--------------------------------------+
-| Secondary NameNode                   | HADOOP_SECONDARYNAMENODE_OPTS        |
-*--------------------------------------+--------------------------------------+
-| ResourceManager                      | YARN_RESOURCEMANAGER_OPTS            |
-*--------------------------------------+--------------------------------------+
-| NodeManager                          | YARN_NODEMANAGER_OPTS                |
-*--------------------------------------+--------------------------------------+
-| WebAppProxy                          | YARN_PROXYSERVER_OPTS                |
-*--------------------------------------+--------------------------------------+
-| Map Reduce Job History Server        | HADOOP_JOB_HISTORYSERVER_OPTS        |
-*--------------------------------------+--------------------------------------+
-
-
-  For example, To configure Namenode to use parallelGC, the following
-  statement should be added in hadoop-env.sh :
-
-----
-  export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC ${HADOOP_NAMENODE_OPTS}"
-----
-
-  Other useful configuration parameters that you can customize include:
-
-    * <<<HADOOP_LOG_DIR>>> / <<<YARN_LOG_DIR>>> - The directory where the
-      daemons' log files are stored. They are automatically created if they
-      don't exist.
-
-    * <<<HADOOP_HEAPSIZE>>> / <<<YARN_HEAPSIZE>>> - The maximum amount of
-      heapsize to use, in MB e.g. if the varibale is set to 1000 the heap
-      will be set to 1000MB.  This is used to configure the heap
-      size for the daemon. By default, the value is 1000.  If you want to
-      configure the values separately for each deamon you can use.
-
-*--------------------------------------+--------------------------------------+
-|| Daemon                              || Environment Variable                |
-*--------------------------------------+--------------------------------------+
-| ResourceManager                      | YARN_RESOURCEMANAGER_HEAPSIZE        |
-*--------------------------------------+--------------------------------------+
-| NodeManager                          | YARN_NODEMANAGER_HEAPSIZE            |
-*--------------------------------------+--------------------------------------+
-| WebAppProxy                          | YARN_PROXYSERVER_HEAPSIZE            |
-*--------------------------------------+--------------------------------------+
-| Map Reduce Job History Server        | HADOOP_JOB_HISTORYSERVER_HEAPSIZE    |
-*--------------------------------------+--------------------------------------+
-
-** {Configuring the Hadoop Daemons in Non-Secure Mode}
-
-    This section deals with important parameters to be specified in
-    the given configuration files:
-
-    * <<<conf/core-site.xml>>>
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<fs.defaultFS>>>      | NameNode URI            | <hdfs://host:port/>    |
-*-------------------------+-------------------------+------------------------+
-| <<<io.file.buffer.size>>> | 131072 |  |
-| | | Size of read/write buffer used in SequenceFiles. |
-*-------------------------+-------------------------+------------------------+
-
-    * <<<conf/hdfs-site.xml>>>
-
-      * Configurations for NameNode:
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<dfs.namenode.name.dir>>> | | |
-| | Path on the local filesystem where the NameNode stores the namespace | |
-| | and transactions logs persistently. | |
-| | | If this is a comma-delimited list of directories then the name table is  |
-| | | replicated in all of the directories, for redundancy. |
-*-------------------------+-------------------------+------------------------+
-| <<<dfs.namenode.hosts>>> / <<<dfs.namenode.hosts.exclude>>> | | |
-| | List of permitted/excluded DataNodes. | |
-| | | If necessary, use these files to control the list of allowable |
-| | | datanodes. |
-*-------------------------+-------------------------+------------------------+
-| <<<dfs.blocksize>>> | 268435456 | |
-| | | HDFS blocksize of 256MB for large file-systems. |
-*-------------------------+-------------------------+------------------------+
-| <<<dfs.namenode.handler.count>>> | 100 | |
-| | | More NameNode server threads to handle RPCs from large number of |
-| | | DataNodes. |
-*-------------------------+-------------------------+------------------------+
-
-      * Configurations for DataNode:
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<dfs.datanode.data.dir>>> | | |
-| | Comma separated list of paths on the local filesystem of a | |
-| | <<<DataNode>>> where it should store its blocks. | |
-| | | If this is a comma-delimited list of directories, then data will be |
-| | | stored in all named directories, typically on different devices. |
-*-------------------------+-------------------------+------------------------+
-
-    * <<<conf/yarn-site.xml>>>
-
-      * Configurations for ResourceManager and NodeManager:
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.acl.enable>>> | | |
-| | <<<true>>> / <<<false>>> | |
-| | | Enable ACLs? Defaults to <false>. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.admin.acl>>> | | |
-| | Admin ACL | |
-| | | ACL to set admins on the cluster. |
-| | | ACLs are of for <comma-separated-users><space><comma-separated-groups>. |
-| | | Defaults to special value of <<*>> which means <anyone>. |
-| | | Special value of just <space> means no one has access. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.log-aggregation-enable>>> | | |
-| | <false> | |
-| | | Configuration to enable or disable log aggregation |
-*-------------------------+-------------------------+------------------------+
-
-
-      * Configurations for ResourceManager:
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.resourcemanager.address>>> | | |
-| | <<<ResourceManager>>> host:port for clients to submit jobs. | |
-| | | <host:port>\ |
-| | | If set, overrides the hostname set in <<<yarn.resourcemanager.hostname>>>. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.resourcemanager.scheduler.address>>> | | |
-| | <<<ResourceManager>>> host:port for ApplicationMasters to talk to | |
-| | Scheduler to obtain resources. | |
-| | | <host:port>\ |
-| | | If set, overrides the hostname set in <<<yarn.resourcemanager.hostname>>>. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.resourcemanager.resource-tracker.address>>> | | |
-| | <<<ResourceManager>>> host:port for NodeManagers. | |
-| | | <host:port>\ |
-| | | If set, overrides the hostname set in <<<yarn.resourcemanager.hostname>>>. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.resourcemanager.admin.address>>> | | |
-| | <<<ResourceManager>>> host:port for administrative commands. | |
-| | | <host:port>\ |
-| | | If set, overrides the hostname set in <<<yarn.resourcemanager.hostname>>>. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.resourcemanager.webapp.address>>> | | |
-| | <<<ResourceManager>>> web-ui host:port. | |
-| | | <host:port>\ |
-| | | If set, overrides the hostname set in <<<yarn.resourcemanager.hostname>>>. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.resourcemanager.hostname>>> | | |
-| | <<<ResourceManager>>> host. | |
-| | | <host>\ |
-| | | Single hostname that can be set in place of setting all <<<yarn.resourcemanager*address>>> properties. Results in default ports for ResourceManager components. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.resourcemanager.scheduler.class>>> | | |
-| | <<<ResourceManager>>> Scheduler class. | |
-| | | <<<CapacityScheduler>>> (recommended), <<<FairScheduler>>> (also recommended), or <<<FifoScheduler>>> |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.scheduler.minimum-allocation-mb>>> | | |
-| | Minimum limit of memory to allocate to each container request at the <<<ResourceManager>>>. | |
-| | | In MBs |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.scheduler.maximum-allocation-mb>>> | | |
-| | Maximum limit of memory to allocate to each container request at the <<<ResourceManager>>>. | |
-| | | In MBs |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.resourcemanager.nodes.include-path>>> / | | |
-| <<<yarn.resourcemanager.nodes.exclude-path>>> | | |
-| | List of permitted/excluded NodeManagers. | |
-| | | If necessary, use these files to control the list of allowable |
-| | | NodeManagers. |
-*-------------------------+-------------------------+------------------------+
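As an illustrative sketch, a minimal <<<conf/yarn-site.xml>>> for the ResourceManager might set just the hostname and the scheduler (the hostname below is a placeholder):

```xml
<!-- conf/yarn-site.xml: illustrative sketch; rm.example.com is a
     placeholder hostname. Setting yarn.resourcemanager.hostname yields
     default ports for all yarn.resourcemanager*address properties. -->
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>rm.example.com</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
</configuration>
```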
-
-      * Configurations for NodeManager:
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.resource.memory-mb>>> | | |
-| | Resource, i.e. available physical memory, in MB, for a given <<<NodeManager>>> | |
-| | | Defines total available resources on the <<<NodeManager>>> to be made |
-| | | available to running containers |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.vmem-pmem-ratio>>> | | |
-| | Maximum ratio by which virtual memory usage of tasks may exceed | |
-| | physical memory | |
-| | | The virtual memory usage of each task may exceed its physical memory |
-| | | limit by this ratio. The total amount of virtual memory used by tasks |
-| | | on the NodeManager may exceed its physical memory usage by this ratio. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.local-dirs>>> | | |
-| | Comma-separated list of paths on the local filesystem where | |
-| | intermediate data is written. | |
-| | | Multiple paths help spread disk i/o. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.log-dirs>>> | | |
-| | Comma-separated list of paths on the local filesystem where logs  | |
-| | are written. | |
-| | | Multiple paths help spread disk i/o. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.log.retain-seconds>>> | | |
-| | <10800> | |
-| | | Default time (in seconds) to retain log files on the NodeManager |
-| | | Only applicable if log-aggregation is disabled. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.remote-app-log-dir>>> | | |
-| | </logs> | |
-| | | HDFS directory where the application logs are moved on application |
-| | | completion. Need to set appropriate permissions. |
-| | | Only applicable if log-aggregation is enabled. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.remote-app-log-dir-suffix>>> | | |
-| | <logs> | |
-| | | Suffix appended to the remote log dir. Logs will be aggregated to  |
-| | | $\{yarn.nodemanager.remote-app-log-dir\}/$\{user\}/$\{thisParam\} |
-| | | Only applicable if log-aggregation is enabled. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.aux-services>>> | | |
-| | mapreduce_shuffle  | |
-| | | Shuffle service that needs to be set for MapReduce applications. |
-*-------------------------+-------------------------+------------------------+
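A minimal NodeManager fragment of <<<conf/yarn-site.xml>>> might look like the following sketch (the memory value is an example, not a recommendation):

```xml
<!-- conf/yarn-site.xml: illustrative NodeManager sketch; 8192 MB is an
     example value sized to the host, not a default or recommendation. -->
<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```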
-
-      * Configurations for History Server (Needs to be moved elsewhere):
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.log-aggregation.retain-seconds>>> | | |
-| | <-1> | |
-| | | How long to keep aggregated logs before deleting them. -1 disables. |
-| | | Be careful: setting this too small will spam the NameNode. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.log-aggregation.retain-check-interval-seconds>>> | | |
-| | <-1> | |
-| | | Time between checks for aggregated log retention. If set to 0 or a |
-| | | negative value then the value is computed as one-tenth of the |
-| | | aggregated log retention time. |
-| | | Be careful: setting this too small will spam the NameNode. |
-*-------------------------+-------------------------+------------------------+
-
-
-
-    * <<<conf/mapred-site.xml>>>
-
-      * Configurations for MapReduce Applications:
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.framework.name>>> | | |
-| | yarn | |
-| | | Execution framework set to Hadoop YARN. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.map.memory.mb>>> | 1536 | |
-| | | Larger resource limit for maps. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.map.java.opts>>> | -Xmx1024M | |
-| | | Larger heap-size for child jvms of maps. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.reduce.memory.mb>>> | 3072 | |
-| | | Larger resource limit for reduces. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.reduce.java.opts>>> | -Xmx2560M | |
-| | | Larger heap-size for child jvms of reduces. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.task.io.sort.mb>>> | 512 | |
-| | | Higher memory-limit while sorting data for efficiency. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.task.io.sort.factor>>> | 100 | |
-| | | More streams merged at once while sorting files. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.reduce.shuffle.parallelcopies>>> | 50 | |
-| | | Higher number of parallel copies run by reduces to fetch outputs |
-| | | from very large number of maps. |
-*-------------------------+-------------------------+------------------------+
-
-      * Configurations for MapReduce JobHistory Server:
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.jobhistory.address>>> | | |
-| | MapReduce JobHistory Server <host:port> | Default port is 10020. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.jobhistory.webapp.address>>> | | |
-| | MapReduce JobHistory Server Web UI <host:port> | Default port is 19888. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.jobhistory.intermediate-done-dir>>> | /mr-history/tmp | |
-|  | | Directory where history files are written by MapReduce jobs. |
-*-------------------------+-------------------------+------------------------+
-| <<<mapreduce.jobhistory.done-dir>>> | /mr-history/done| |
-| | | Directory where history files are managed by the MR JobHistory Server. |
-*-------------------------+-------------------------+------------------------+
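Putting the two tables together, a minimal <<<conf/mapred-site.xml>>> might look like this sketch (the JobHistory hostname is a placeholder):

```xml
<!-- conf/mapred-site.xml: illustrative sketch; jhs.example.com is a
     placeholder hostname. Port 10020 is the JobHistory Server default. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>jhs.example.com:10020</value>
  </property>
</configuration>
```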
-
-* {Hadoop Rack Awareness}
-
-    The HDFS and YARN components are rack-aware.
-
-    The NameNode and the ResourceManager obtain the rack information of the
-    slaves in the cluster by invoking an API <resolve> in an
-    administrator-configured module.
-
-    The API resolves the DNS name (or IP address) of a slave to a rack id.
-
-    The site-specific module to use can be configured via the configuration
-    item <<<topology.node.switch.mapping.impl>>>. The default implementation
-    runs a script/command configured using
-    <<<topology.script.file.name>>>. If <<<topology.script.file.name>>> is
-    not set, the rack id </default-rack> is returned for any passed IP address.
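As an illustrative sketch, a minimal topology script might map IP prefixes to rack ids. The prefixes and rack names below are invented for the example; the contract is that Hadoop invokes the script with one or more DNS names or IP addresses as arguments and expects one rack id per argument on standard output:

```shell
#!/bin/sh
# Hypothetical topology script (a sketch, not shipped with Hadoop).
# The IP prefixes and rack names are example assumptions.
resolve_rack() {
  case "$1" in
    10.1.*) echo "/rack1" ;;
    10.2.*) echo "/rack2" ;;
    *)      echo "/default-rack" ;;
  esac
}

# Hadoop passes a batch of names; emit one rack id per argument, in order.
for node in "$@"; do
  resolve_rack "$node"
done
```

The script must print exactly one rack id per input argument, in the order the arguments were given.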
-
-* {Monitoring Health of NodeManagers}
-
-    Hadoop provides a mechanism by which administrators can configure the
-    NodeManager to run an administrator supplied script periodically to
-    determine if a node is healthy or not.
-
-    Administrators can determine if the node is in a healthy state by
-    performing any checks of their choice in the script. If the script
-    detects the node to be in an unhealthy state, it must print a line to
-    standard output beginning with the string ERROR. The NodeManager spawns
-    the script periodically and checks its output. If the script's output
-    contains the string ERROR, as described above, the node's status is
-    reported as <<<unhealthy>>> and the node is black-listed by the
-    ResourceManager. No further tasks will be assigned to this node.
-    However, the NodeManager continues to run the script, so that if the
-    node becomes healthy again, it will be removed from the blacklisted nodes
-    on the ResourceManager automatically. The node's health along with the
-    output of the script, if it is unhealthy, is available to the
-    administrator in the ResourceManager web interface. The time since the
-    node was healthy is also displayed on the web interface.
-
-    The following parameters can be used to control the node health
-    monitoring script in <<<conf/yarn-site.xml>>>.
-
-*-------------------------+-------------------------+------------------------+
-|| Parameter              || Value                  || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.health-checker.script.path>>> | | |
-| | Node health script  | |
-| | | Script to check for node's health status. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.health-checker.script.opts>>> | | |
-| | Node health script options  | |
-| | | Options for script to check for node's health status. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.health-checker.script.interval-ms>>> | | |
-| | Node health script interval  | |
-| | | Time interval for running health script. |
-*-------------------------+-------------------------+------------------------+
-| <<<yarn.nodemanager.health-checker.script.timeout-ms>>> | | |
-| | Node health script timeout interval  | |
-| | | Timeout for health script execution. |
-*-------------------------+-------------------------+------------------------+
-
-  The health checker script is not supposed to report ERROR if only some of
-  the local disks become bad. The NodeManager has the ability to periodically
-  check the health of the local disks (specifically, nodemanager-local-dirs
-  and nodemanager-log-dirs); once the number of bad directories reaches the
-  threshold set by the config property
-  yarn.nodemanager.disk-health-checker.min-healthy-disks, the whole node is
-  marked unhealthy and this information is also sent to the ResourceManager.
-  The boot disk is either RAIDed, or a failure in the boot disk is identified
-  by the health checker script.
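A minimal sketch of such a health script follows; the probe directory passed to it is an assumed example, not a Hadoop default. The only part of the contract fixed by Hadoop is that an output line beginning with the string ERROR marks the node unhealthy:

```shell
#!/bin/sh
# Hypothetical node health script (a sketch, not shipped with Hadoop).
# Checks that a given directory is writable; the directory argument is an
# example assumption. The NodeManager reacts only to lines starting "ERROR".
check_node_health() {
  probe="$1/.nm-health-probe.$$"
  if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    echo "OK: $1 is writable"
  else
    echo "ERROR: $1 is not writable"
  fi
}

check_node_health /tmp
```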
-
-* {Slaves file}
-
-  Typically you choose one machine in the cluster to act as the NameNode and
-  one machine to act as the ResourceManager, exclusively. The rest of the
-  machines act as both a DataNode and a NodeManager and are referred to as
-  <slaves>.
-
-  List all slave hostnames or IP addresses in your <<<conf/slaves>>> file,
-  one per line.
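For example, a three-slave cluster's <<<conf/slaves>>> file might read as follows (the hostnames are placeholders):

```
slave1.example.com
slave2.example.com
slave3.example.com
```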
-
-* {Logging}
-
-  Hadoop uses Apache log4j via the Apache Commons Logging framework for
-  logging. Edit the <<<conf/log4j.properties>>> file to customize the
-  Hadoop daemons' logging configuration (log-formats and so on).
-
-* {Operating the Hadoop Cluster}
-
-  Once all the necessary configuration is complete, distribute the files to the
-  <<<HADOOP_CONF_DIR>>> directory on all the machines.
-
-** Hadoop Startup
-
-  To start a Hadoop cluster you will need to start both the HDFS and YARN
-  clusters.
-
-  Format a new distributed filesystem:
-
-----
-$ $HADOOP_PREFIX/bin/hdfs namenode -format <cluster_name>
-----
-
-  Start HDFS with the following command, run on the designated NameNode:
-
-----
-$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
-----    	
-
-  Run a script to start DataNodes on all slaves:
-
-----
-$ $HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
-----    	
-
-  Start YARN with the following command, run on the designated
-  ResourceManager:
-
-----
-$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
-----    	
-
-  Run a script to start NodeManagers on all slaves:
-
-----
-$ $HADOOP_YARN_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR start nodemanager
-----    	
-
-  Start a standalone WebAppProxy server. If multiple servers
-  are used with load balancing it should be run on each of them:
-
-----
-$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start proxyserver --config $HADOOP_CONF_DIR
-----
-
-  Start the MapReduce JobHistory Server with the following command, run on the
-  designated server:
-
-----
-$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR
-----    	
-
-** Hadoop Shutdown
-
-  Stop the NameNode with the following command, run on the designated
-  NameNode:
-
-----
-$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
-----    	
-
-  Run a script to stop DataNodes on all slaves:
-
-----
-$ $HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
-----    	
-
-  Stop the ResourceManager with the following command, run on the designated
-  ResourceManager:
-
-----
-$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
-----    	
-
-  Run a script to stop NodeManagers on all slaves:
-
-----
-$ $HADOOP_YARN_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR stop nodemanager
-----    	
-
-  Stop the WebAppProxy server. If multiple servers are used with load
-  balancing it should be run on each of them:
-
-----
-$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh stop proxyserver --config $HADOOP_CONF_DIR
-----
-
-
-  Stop the MapReduce JobHistory Server with the following command, run on the
-  designated server:
-
-----
-$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh stop historyserver --config $HADOOP_CONF_DIR
-----    	
-
-
-* {Operating the Hadoop Cluster}
-
-  Once all the necessary configuration is complete, distribute the files to the
-  <<<HADOOP_CONF_DIR>>> directory on all the machines.
-
-  This section also describes the Unix users who should start the various
-  components, using the same Unix accounts and groups introduced previously:
-
-** Hadoop Startup
-
-    To start a Hadoop cluster you will need to start both the HDFS and YARN
-    clusters.
-
-    Format a new distributed filesystem as <hdfs>:
-
-----
-[hdfs]$ $HADOOP_PREFIX/bin/hdfs namenode -format <cluster_name>
-----
-
-    Start HDFS with the following command, run on the designated NameNode
-    as <hdfs>:
-
-----
-[hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
-----    	
-
-    Run a script to start DataNodes on all slaves as <root> with a special
-    environment variable <<<HADOOP_SECURE_DN_USER>>> set to <hdfs>:
-
-----
-[root]$ HADOOP_SECURE_DN_USER=hdfs $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
-----    	
-
-    Start YARN with the following command, run on the designated
-    ResourceManager as <yarn>:
-
-----
-[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
-----    	
-
-    Run a script to start NodeManagers on all slaves as <yarn>:
-
-----
-[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
-----    	
-
-    Start a standalone WebAppProxy server. Run on the WebAppProxy
-    server as <yarn>. If multiple servers are used with load balancing
-    it should be run on each of them:
-
-----
-[yarn]$ $HADOOP_YARN_HOME/bin/yarn start proxyserver --config $HADOOP_CONF_DIR
-----    	
-
-    Start the MapReduce JobHistory Server with the following command, run on the
-    designated server as <mapred>:
-
-----
-[mapred]$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR
-----    	
-
-** Hadoop Shutdown
-
-  Stop the NameNode with the following command, run on the designated NameNode
-  as <hdfs>:
-
-----
-[hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
-----    	
-
-  Run a script to stop DataNodes on all slaves as <root>:
-
-----
-[root]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
-----    	
-
-  Stop the ResourceManager with the following command, run on the designated
-  ResourceManager as <yarn>:
-
-----
-[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
-----    	
-
-  Run a script to stop NodeManagers on all slaves as <yarn>:
-
-----
-[yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
-----    	
-
-  Stop the WebAppProxy server. Run on the WebAppProxy server as
-  <yarn>. If multiple servers are used with load balancing it
-  should be run on each of them:
-
-----
-[yarn]$ $HADOOP_YARN_HOME/bin/yarn stop proxyserver --config $HADOOP_CONF_DIR
-----
-
-  Stop the MapReduce JobHistory Server with the following command, run on the
-  designated server as <mapred>:
-
-----
-[mapred]$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh stop historyserver --config $HADOOP_CONF_DIR
-----    	
-
-* {Web Interfaces}
-
-  Once the Hadoop cluster is up and running check the web-ui of the
-  components as described below:
-
-*-------------------------+-------------------------+------------------------+
-|| Daemon                 || Web Interface          || Notes                 |
-*-------------------------+-------------------------+------------------------+
-| NameNode | http://<nn_host:port>/ | Default HTTP port is 50070. |
-*-------------------------+-------------------------+------------------------+
-| ResourceManager | http://<rm_host:port>/ | Default HTTP port is 8088. |
-*-------------------------+-------------------------+------------------------+
-| MapReduce JobHistory Server | http://<jhs_host:port>/ | |
-| | | Default HTTP port is 19888. |
-*-------------------------+-------------------------+------------------------+
-
-