Posted to common-commits@hadoop.apache.org by wh...@apache.org on 2015/03/11 22:31:30 UTC

[11/12] hadoop git commit: HADOOP-11633. Convert remaining branch-2 .apt.vm files to markdown. Contributed by Masatake Iwasaki.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm b/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
deleted file mode 100644
index 6d2fd5e..0000000
--- a/hadoop-common-project/hadoop-common/src/site/apt/CommandsManual.apt.vm
+++ /dev/null
@@ -1,283 +0,0 @@
-~~ Licensed to the Apache Software Foundation (ASF) under one or more
-~~ contributor license agreements.  See the NOTICE file distributed with
-~~ this work for additional information regarding copyright ownership.
-~~ The ASF licenses this file to You under the Apache License, Version 2.0
-~~ (the "License"); you may not use this file except in compliance with
-~~ the License.  You may obtain a copy of the License at
-~~
-~~     http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License.
-
-  ---
-  Hadoop Commands Guide
-  ---
-  ---
-  ${maven.build.timestamp}
-
-%{toc}
-
-Overview
-
-   All hadoop commands are invoked by the <<<bin/hadoop>>> script. Running the
-   hadoop script without any arguments prints the description for all
-   commands.
-
-   Usage: <<<hadoop [--config confdir] [--loglevel loglevel] [COMMAND]
-             [GENERIC_OPTIONS] [COMMAND_OPTIONS]>>>
-
-   Hadoop has an option parsing framework that employs parsing generic
-   options as well as running classes.
-
-*-----------------------+---------------+
-|| COMMAND_OPTION       || Description
-*-----------------------+---------------+
-| <<<--config confdir>>>| Overwrites the default Configuration directory.  Default is <<<${HADOOP_HOME}/conf>>>.
-*-----------------------+---------------+
-| <<<--loglevel loglevel>>>| Overwrites the log level. Valid log levels are
-|                       | FATAL, ERROR, WARN, INFO, DEBUG, and TRACE.
-|                       | Default is INFO.
-*-----------------------+---------------+
-| GENERIC_OPTIONS       | The common set of options supported by multiple commands.
-| COMMAND_OPTIONS       | Various commands with their options are described in the following sections. The commands have been grouped into User Commands and Administration Commands.
-*-----------------------+---------------+
-
-Generic Options
-
-   The following options are supported by {{dfsadmin}}, {{fs}}, {{fsck}},
-   {{job}} and {{fetchdt}}. Applications should implement 
-   {{{../../api/org/apache/hadoop/util/Tool.html}Tool}} to support
-   GenericOptions.
-
-*------------------------------------------------+-----------------------------+
-||            GENERIC_OPTION                     ||            Description
-*------------------------------------------------+-----------------------------+
-|<<<-conf \<configuration file\> >>>             | Specify an application
-                                                 | configuration file.
-*------------------------------------------------+-----------------------------+
-|<<<-D \<property\>=\<value\> >>>                | Use value for given property.
-*------------------------------------------------+-----------------------------+
-|<<<-jt \<local\> or \<resourcemanager:port\>>>> | Specify a ResourceManager.
-                                                 | Applies only to job.
-*------------------------------------------------+-----------------------------+
-|<<<-files \<comma separated list of files\> >>> | Specify comma separated files
-                                                 | to be copied to the map
-                                                 | reduce cluster.  Applies only
-                                                 | to job.
-*------------------------------------------------+-----------------------------+
-|<<<-libjars \<comma seperated list of jars\> >>>| Specify comma separated jar
-                                                 | files to include in the
-                                                 | classpath. Applies only to
-                                                 | job.
-*------------------------------------------------+-----------------------------+
-|<<<-archives \<comma separated list of archives\> >>> | Specify comma separated
-                                                 | archives to be unarchived on
-                                                 | the compute machines. Applies
-                                                 | only to job.
-*------------------------------------------------+-----------------------------+
-
-User Commands
-
-   Commands useful for users of a hadoop cluster.
-
-* <<<archive>>>
-
-   Creates a hadoop archive. More information can be found at
-   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopArchives.html}
-   Hadoop Archives Guide}}.
-
-* <<<credential>>>
-
-   Command to manage credentials, passwords and secrets within credential providers.
-
-   The CredentialProvider API in Hadoop allows for the separation of applications
-   and how they store their required passwords/secrets. In order to indicate
-   a particular provider type and location, the user must provide the
-   <hadoop.security.credential.provider.path> configuration element in core-site.xml
-   or use the command line option <<<-provider>>> on each of the following commands.
-   This provider path is a comma-separated list of URLs that indicates the type and
-   location of a list of providers that should be consulted.
-   For example, the following path:
-
-   <<<user:///,jceks://file/tmp/test.jceks,jceks://hdfs@nn1.example.com/my/path/test.jceks>>>
-
-   indicates that the current user's credentials file should be consulted through
-   the User Provider, that the local file located at <<</tmp/test.jceks>>> is a Java Keystore
-   Provider and that the file located within HDFS at <<<nn1.example.com/my/path/test.jceks>>>
-   is also a store for a Java Keystore Provider.
-
-   When utilizing the credential command it will often be for provisioning a password
-   or secret to a particular credential store provider. In order to explicitly
-   indicate which provider store to use the <<<-provider>>> option should be used. Otherwise,
-   given a path of multiple providers, the first non-transient provider will be used.
-   This may or may not be the one that you intended.
-
-   Example: <<<-provider jceks://file/tmp/test.jceks>>>
-
-   Usage: <<<hadoop credential <subcommand> [options]>>>
-
-*-------------------+-------------------------------------------------------+
-||COMMAND_OPTION    ||                   Description
-*-------------------+-------------------------------------------------------+
-| create <alias> [-v <value>][-provider <provider-path>]| Prompts the user for
-                    | a credential to be stored as the given alias when a value
-                    | is not provided via <<<-v>>>. The
-                    | <hadoop.security.credential.provider.path> within the
-                    | core-site.xml file will be used unless a <<<-provider>>> is
-                    | indicated.
-*-------------------+-------------------------------------------------------+
-| delete <alias> [-i][-provider <provider-path>] | Deletes the credential with
-                    | the provided alias and optionally warns the user when
-                    | <<<--interactive>>> is used.
-                    | The <hadoop.security.credential.provider.path> within the
-                    | core-site.xml file will be used unless a <<<-provider>>> is
-                    | indicated.
-*-------------------+-------------------------------------------------------+
-| list [-provider <provider-path>] | Lists all of the credential aliases
-                    | The <hadoop.security.credential.provider.path> within the
-                    | core-site.xml file will be used unless a <<<-provider>>> is
-                    | indicated.
-*-------------------+-------------------------------------------------------+
-
-* <<<distcp>>>
-
-   Copy file or directories recursively. More information can be found at
-   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistCp.html}
-   Hadoop DistCp Guide}}.
-
-* <<<fs>>>
-
-   Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#dfs}<<<hdfs dfs>>>}}
-   instead.
-
-* <<<fsck>>>
-
-   Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#fsck}<<<hdfs fsck>>>}}
-   instead.
-
-* <<<fetchdt>>>
-
-   Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#fetchdt}
-   <<<hdfs fetchdt>>>}} instead.
-
-* <<<jar>>>
-
-   Runs a jar file. Users can bundle their Map Reduce code in a jar file and
-   execute it using this command.
-
-   Usage: <<<hadoop jar <jar> [mainClass] args...>>>
-
-   The streaming jobs are run via this command. Examples can be referred from
-   Streaming examples
-
-   Word count example is also run using jar command. It can be referred from
-   Wordcount example
-
-   Use {{{../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html#jar}<<<yarn jar>>>}}
-   to launch YARN applications instead.
-
-* <<<job>>>
-
-   Deprecated. Use
-   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#job}
-   <<<mapred job>>>}} instead.
-
-* <<<pipes>>>
-
-   Deprecated. Use
-   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#pipes}
-   <<<mapred pipes>>>}} instead.
-
-* <<<queue>>>
-
-   Deprecated. Use
-   {{{../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html#queue}
-   <<<mapred queue>>>}} instead.
-
-* <<<version>>>
-
-   Prints the version.
-
-   Usage: <<<hadoop version>>>
-
-* <<<CLASSNAME>>>
-
-   hadoop script can be used to invoke any class.
-
-   Usage: <<<hadoop CLASSNAME>>>
-
-   Runs the class named <<<CLASSNAME>>>.
-
-* <<<classpath>>>
-
-   Prints the class path needed to get the Hadoop jar and the required
-   libraries.  If called without arguments, then prints the classpath set up by
-   the command scripts, which is likely to contain wildcards in the classpath
-   entries.  Additional options print the classpath after wildcard expansion or
-   write the classpath into the manifest of a jar file.  The latter is useful in
-   environments where wildcards cannot be used and the expanded classpath exceeds
-   the maximum supported command line length.
-
-   Usage: <<<hadoop classpath [--glob|--jar <path>|-h|--help]>>>
-
-*-----------------+-----------------------------------------------------------+
-|| COMMAND_OPTION || Description
-*-----------------+-----------------------------------------------------------+
-| --glob          | expand wildcards
-*-----------------+-----------------------------------------------------------+
-| --jar <path>    | write classpath as manifest in jar named <path>
-*-----------------+-----------------------------------------------------------+
-| -h, --help      | print help
-*-----------------+-----------------------------------------------------------+
-
-Administration Commands
-
-   Commands useful for administrators of a hadoop cluster.
-
-* <<<balancer>>>
-
-   Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#balancer}
-   <<<hdfs balancer>>>}} instead.
-
-* <<<daemonlog>>>
-
-   Get/Set the log level for each daemon.
-
-   Usage: <<<hadoop daemonlog -getlevel <host:port> <name> >>>
-   Usage: <<<hadoop daemonlog -setlevel <host:port> <name> <level> >>>
-
-*------------------------------+-----------------------------------------------------------+
-|| COMMAND_OPTION              || Description
-*------------------------------+-----------------------------------------------------------+
-| -getlevel <host:port> <name> | Prints the log level of the daemon running at
-                               | <host:port>. This command internally connects
-                               | to http://<host:port>/logLevel?log=<name>
-*------------------------------+-----------------------------------------------------------+
-|   -setlevel <host:port> <name> <level> | Sets the log level of the daemon
-                               | running at <host:port>. This command internally
-                               | connects to http://<host:port>/logLevel?log=<name>
-*------------------------------+-----------------------------------------------------------+
-
-* <<<datanode>>>
-
-   Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#datanode}
-   <<<hdfs datanode>>>}} instead.
-
-* <<<dfsadmin>>>
-
-   Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#dfsadmin}
-   <<<hdfs dfsadmin>>>}} instead.
-
-* <<<namenode>>>
-
-   Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#namenode}
-   <<<hdfs namenode>>>}} instead.
-
-* <<<secondarynamenode>>>
-
-   Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#secondarynamenode}
-   <<<hdfs secondarynamenode>>>}} instead.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md b/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
new file mode 100644
index 0000000..2865716
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md
@@ -0,0 +1,337 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+* [Hadoop Cluster Setup](#Hadoop_Cluster_Setup)
+    * [Purpose](#Purpose)
+    * [Prerequisites](#Prerequisites)
+    * [Installation](#Installation)
+    * [Configuring Hadoop in Non-Secure Mode](#Configuring_Hadoop_in_Non-Secure_Mode)
+        * [Configuring Environment of Hadoop Daemons](#Configuring_Environment_of_Hadoop_Daemons)
+        * [Configuring the Hadoop Daemons](#Configuring_the_Hadoop_Daemons)
+    * [Monitoring Health of NodeManagers](#Monitoring_Health_of_NodeManagers)
+    * [Slaves File](#Slaves_File)
+    * [Hadoop Rack Awareness](#Hadoop_Rack_Awareness)
+    * [Logging](#Logging)
+    * [Operating the Hadoop Cluster](#Operating_the_Hadoop_Cluster)
+        * [Hadoop Startup](#Hadoop_Startup)
+        * [Hadoop Shutdown](#Hadoop_Shutdown)
+    * [Web Interfaces](#Web_Interfaces)
+
+Hadoop Cluster Setup
+====================
+
+Purpose
+-------
+
+This document describes how to install and configure Hadoop clusters ranging from a few nodes to extremely large clusters with thousands of nodes. To play with Hadoop, you may first want to install it on a single machine (see [Single Node Setup](./SingleCluster.html)).
+
+This document does not cover advanced topics such as [Security](./SecureMode.html) or High Availability.
+
+Prerequisites
+-------------
+
+* Install Java. See the [Hadoop Wiki](http://wiki.apache.org/hadoop/HadoopJavaVersions) for known good versions.
+* Download a stable version of Hadoop from Apache mirrors.
+
+Installation
+------------
+
+Installing a Hadoop cluster typically involves unpacking the software on all the machines in the cluster or installing it via a packaging system as appropriate for your operating system. It is important to divide up the hardware into functions.
+
+Typically one machine in the cluster is designated as the NameNode and another machine as the ResourceManager, exclusively. These are the masters. Other services (such as the Web App Proxy Server and the MapReduce Job History server) are usually run either on dedicated hardware or on shared infrastructure, depending upon the load.
+
+The rest of the machines in the cluster act as both DataNode and NodeManager. These are the slaves.
+
+Configuring Hadoop in Non-Secure Mode
+-------------------------------------
+
+Hadoop's Java configuration is driven by two types of important configuration files:
+
+* Read-only default configuration - `core-default.xml`, `hdfs-default.xml`, `yarn-default.xml` and `mapred-default.xml`.
+
+* Site-specific configuration - `etc/hadoop/core-site.xml`, `etc/hadoop/hdfs-site.xml`, `etc/hadoop/yarn-site.xml` and `etc/hadoop/mapred-site.xml`.
+
+Additionally, you can control the Hadoop scripts found in the bin/ directory of the distribution by setting site-specific values via the `etc/hadoop/hadoop-env.sh` and `etc/hadoop/yarn-env.sh` scripts.
+
+To configure the Hadoop cluster you will need to configure the `environment` in which the Hadoop daemons execute as well as the `configuration parameters` for the Hadoop daemons.
+
+HDFS daemons are NameNode, SecondaryNameNode, and DataNode. YARN daemons are ResourceManager, NodeManager, and WebAppProxy. If MapReduce is to be used, then the MapReduce Job History Server will also be running. For large installations, these generally run on separate hosts.
+
+### Configuring Environment of Hadoop Daemons
+
+Administrators should use the `etc/hadoop/hadoop-env.sh` and optionally the `etc/hadoop/mapred-env.sh` and `etc/hadoop/yarn-env.sh` scripts to do site-specific customization of the Hadoop daemons' process environment.
+
+At the very least, you must specify the `JAVA_HOME` so that it is correctly defined on each remote node.
+
+Administrators can configure individual daemons using the configuration options shown in the table below:
+
+| Daemon | Environment Variable |
+|:---- |:---- |
+| NameNode | HADOOP\_NAMENODE\_OPTS |
+| DataNode | HADOOP\_DATANODE\_OPTS |
+| Secondary NameNode | HADOOP\_SECONDARYNAMENODE\_OPTS |
+| ResourceManager | YARN\_RESOURCEMANAGER\_OPTS |
+| NodeManager | YARN\_NODEMANAGER\_OPTS |
+| WebAppProxy | YARN\_PROXYSERVER\_OPTS |
+| Map Reduce Job History Server | HADOOP\_JOB\_HISTORYSERVER\_OPTS |
+
+For example, to configure the NameNode to use parallel GC, the following statement should be added to hadoop-env.sh:
+
+      export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC"
+
+See `etc/hadoop/hadoop-env.sh` for other examples.
+
+Other useful configuration parameters that you can customize include:
+
+* `HADOOP_PID_DIR` - The directory where the daemons' process id files are stored.
+* `HADOOP_LOG_DIR` - The directory where the daemons' log files are stored. Log files are automatically created if they don't exist.
+* `HADOOP_HEAPSIZE` / `YARN_HEAPSIZE` - The maximum heap size to use, in MB; e.g. if the variable is set to 1000 the heap will be set to 1000 MB. This is used to configure the heap size for the daemon. By default, the value is 1000. If you want to configure the values separately for each daemon, you can use the per-daemon variables listed in the table below.
+
+In most cases, you should specify the `HADOOP_PID_DIR` and `HADOOP_LOG_DIR` directories such that they can only be written to by the users that are going to run the hadoop daemons. Otherwise there is the potential for a symlink attack.
+
+It is also traditional to configure `HADOOP_PREFIX` in the system-wide shell environment configuration. For example, a simple script inside `/etc/profile.d`:
+
+      HADOOP_PREFIX=/path/to/hadoop
+      export HADOOP_PREFIX
+
+| Daemon | Environment Variable |
+|:---- |:---- |
+| ResourceManager | YARN\_RESOURCEMANAGER\_HEAPSIZE |
+| NodeManager | YARN\_NODEMANAGER\_HEAPSIZE |
+| WebAppProxy | YARN\_PROXYSERVER\_HEAPSIZE |
+| Map Reduce Job History Server | HADOOP\_JOB\_HISTORYSERVER\_HEAPSIZE |
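+
+For example, to give the ResourceManager a larger heap than the 1000 MB default, a line such as the following could be added to `etc/hadoop/yarn-env.sh` (the value shown is only illustrative; size it for your own hardware):
+
+    export YARN_RESOURCEMANAGER_HEAPSIZE=2000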
+
+### Configuring the Hadoop Daemons
+
+This section deals with important parameters to be specified in the given configuration files:
+
+* `etc/hadoop/core-site.xml`
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `fs.defaultFS` | NameNode URI | hdfs://host:port/ |
+| `io.file.buffer.size` | 131072 | Size of read/write buffer used in SequenceFiles. |
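+
+Putting the two parameters above together, a minimal `etc/hadoop/core-site.xml` might look like the following sketch (the NameNode host and port are placeholders):
+
+    <configuration>
+      <property>
+        <name>fs.defaultFS</name>
+        <value>hdfs://nn.example.com:8020</value>
+      </property>
+      <property>
+        <name>io.file.buffer.size</name>
+        <value>131072</value>
+      </property>
+    </configuration>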
+
+* `etc/hadoop/hdfs-site.xml`
+
+  * Configurations for NameNode:
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `dfs.namenode.name.dir` | Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently. | If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy. |
+| `dfs.hosts` / `dfs.hosts.exclude` | List of permitted/excluded DataNodes. | If necessary, use these files to control the list of allowable datanodes. |
+| `dfs.blocksize` | 268435456 | HDFS blocksize of 256MB for large file-systems. |
+| `dfs.namenode.handler.count` | 100 | More NameNode server threads to handle RPCs from large number of DataNodes. |
+
+  * Configurations for DataNode:
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `dfs.datanode.data.dir` | Comma separated list of paths on the local filesystem of a `DataNode` where it should store its blocks. | If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. |
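+
+As a sketch only (the directory paths are placeholders), the NameNode and DataNode settings above could be combined into an `etc/hadoop/hdfs-site.xml` such as:
+
+    <configuration>
+      <property>
+        <name>dfs.namenode.name.dir</name>
+        <value>/data/1/dfs/nn,/data/2/dfs/nn</value>
+      </property>
+      <property>
+        <name>dfs.blocksize</name>
+        <value>268435456</value>
+      </property>
+      <property>
+        <name>dfs.namenode.handler.count</name>
+        <value>100</value>
+      </property>
+      <property>
+        <name>dfs.datanode.data.dir</name>
+        <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
+      </property>
+    </configuration>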
+
+* `etc/hadoop/yarn-site.xml`
+
+  * Configurations for ResourceManager and NodeManager:
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `yarn.acl.enable` | `true` / `false` | Enable ACLs? Defaults to *false*. |
+| `yarn.admin.acl` | Admin ACL | ACL to set admins on the cluster. ACLs take the form *comma-separated-users* *space* *comma-separated-groups* (e.g. `user1,user2 group1,group2`). Defaults to the special value of **\*** which means *anyone*. The special value of just *space* means no one has access. |
+| `yarn.log-aggregation-enable` | *false* | Configuration to enable or disable log aggregation |
+
+  * Configurations for ResourceManager:
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `yarn.resourcemanager.address` | `ResourceManager` host:port for clients to submit jobs. | *host:port* If set, overrides the hostname set in `yarn.resourcemanager.hostname`. |
+| `yarn.resourcemanager.scheduler.address` | `ResourceManager` host:port for ApplicationMasters to talk to Scheduler to obtain resources. | *host:port* If set, overrides the hostname set in `yarn.resourcemanager.hostname`. |
+| `yarn.resourcemanager.resource-tracker.address` | `ResourceManager` host:port for NodeManagers. | *host:port* If set, overrides the hostname set in `yarn.resourcemanager.hostname`. |
+| `yarn.resourcemanager.admin.address` | `ResourceManager` host:port for administrative commands. | *host:port* If set, overrides the hostname set in `yarn.resourcemanager.hostname`. |
+| `yarn.resourcemanager.webapp.address` | `ResourceManager` web-ui host:port. | *host:port* If set, overrides the hostname set in `yarn.resourcemanager.hostname`. |
+| `yarn.resourcemanager.hostname` | `ResourceManager` host. | *host* Single hostname that can be set in place of setting all `yarn.resourcemanager*address` resources. Results in default ports for ResourceManager components. |
+| `yarn.resourcemanager.scheduler.class` | `ResourceManager` Scheduler class. | `CapacityScheduler` (recommended), `FairScheduler` (also recommended), or `FifoScheduler` |
+| `yarn.scheduler.minimum-allocation-mb` | Minimum limit of memory to allocate to each container request at the `Resource Manager`. | In MBs |
+| `yarn.scheduler.maximum-allocation-mb` | Maximum limit of memory to allocate to each container request at the `Resource Manager`. | In MBs |
+| `yarn.resourcemanager.nodes.include-path` / `yarn.resourcemanager.nodes.exclude-path` | List of permitted/excluded NodeManagers. | If necessary, use these files to control the list of allowable NodeManagers. |
+
+  * Configurations for NodeManager:
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `yarn.nodemanager.resource.memory-mb` | Resource, i.e. available physical memory in MB, for the given `NodeManager`. | Defines the total available resources on the `NodeManager` to be made available to running containers. |
+| `yarn.nodemanager.vmem-pmem-ratio` | Maximum ratio by which virtual memory usage of tasks may exceed physical memory | The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio. |
+| `yarn.nodemanager.local-dirs` | Comma-separated list of paths on the local filesystem where intermediate data is written. | Multiple paths help spread disk i/o. |
+| `yarn.nodemanager.log-dirs` | Comma-separated list of paths on the local filesystem where logs are written. | Multiple paths help spread disk i/o. |
+| `yarn.nodemanager.log.retain-seconds` | *10800* | Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled. |
+| `yarn.nodemanager.remote-app-log-dir` | */logs* | HDFS directory where the application logs are moved on application completion. Need to set appropriate permissions. Only applicable if log-aggregation is enabled. |
+| `yarn.nodemanager.remote-app-log-dir-suffix` | *logs* | Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam}. Only applicable if log-aggregation is enabled. |
+| `yarn.nodemanager.aux-services` | mapreduce\_shuffle | Shuffle service that needs to be set for Map Reduce applications. |
+
+  * Configurations for History Server (Needs to be moved elsewhere):
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `yarn.log-aggregation.retain-seconds` | *-1* | How long to keep aggregation logs before deleting them. -1 disables. Be careful, set this too small and you will spam the name node. |
+| `yarn.log-aggregation.retain-check-interval-seconds` | *-1* | Time between checks for aggregated log retention. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful, set this too small and you will spam the name node. |
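+
+Pulling a few of the `yarn-site.xml` properties above into an illustrative fragment (the hostname and memory size are placeholders to adapt to your cluster):
+
+    <configuration>
+      <property>
+        <name>yarn.resourcemanager.hostname</name>
+        <value>rm.example.com</value>
+      </property>
+      <property>
+        <name>yarn.nodemanager.aux-services</name>
+        <value>mapreduce_shuffle</value>
+      </property>
+      <property>
+        <name>yarn.nodemanager.resource.memory-mb</name>
+        <value>8192</value>
+      </property>
+    </configuration>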
+
+* `etc/hadoop/mapred-site.xml`
+
+  * Configurations for MapReduce Applications:
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `mapreduce.framework.name` | yarn | Execution framework set to Hadoop YARN. |
+| `mapreduce.map.memory.mb` | 1536 | Larger resource limit for maps. |
+| `mapreduce.map.java.opts` | -Xmx1024M | Larger heap-size for child jvms of maps. |
+| `mapreduce.reduce.memory.mb` | 3072 | Larger resource limit for reduces. |
+| `mapreduce.reduce.java.opts` | -Xmx2560M | Larger heap-size for child jvms of reduces. |
+| `mapreduce.task.io.sort.mb` | 512 | Higher memory-limit while sorting data for efficiency. |
+| `mapreduce.task.io.sort.factor` | 100 | More streams merged at once while sorting files. |
+| `mapreduce.reduce.shuffle.parallelcopies` | 50 | Higher number of parallel copies run by reduces to fetch outputs from very large number of maps. |
+
+  * Configurations for MapReduce JobHistory Server:
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `mapreduce.jobhistory.address` | MapReduce JobHistory Server *host:port* | Default port is 10020. |
+| `mapreduce.jobhistory.webapp.address` | MapReduce JobHistory Server Web UI *host:port* | Default port is 19888. |
+| `mapreduce.jobhistory.intermediate-done-dir` | /mr-history/tmp | Directory where history files are written by MapReduce jobs. |
+| `mapreduce.jobhistory.done-dir` | /mr-history/done | Directory where history files are managed by the MR JobHistory Server. |
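+
+Similarly, a minimal `etc/hadoop/mapred-site.xml` drawing on the values above might look like this sketch (the JobHistory hostname is a placeholder, and the memory settings are examples rather than recommendations):
+
+    <configuration>
+      <property>
+        <name>mapreduce.framework.name</name>
+        <value>yarn</value>
+      </property>
+      <property>
+        <name>mapreduce.map.memory.mb</name>
+        <value>1536</value>
+      </property>
+      <property>
+        <name>mapreduce.reduce.memory.mb</name>
+        <value>3072</value>
+      </property>
+      <property>
+        <name>mapreduce.jobhistory.address</name>
+        <value>jhs.example.com:10020</value>
+      </property>
+    </configuration>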
+
+Monitoring Health of NodeManagers
+---------------------------------
+
+Hadoop provides a mechanism by which administrators can configure the NodeManager to run an administrator-supplied script periodically to determine whether a node is healthy or not.
+
+Administrators can determine if the node is in a healthy state by performing any checks of their choice in the script. If the script detects the node to be in an unhealthy state, it must print a line to standard output beginning with the string ERROR. The NodeManager spawns the script periodically and checks its output. If the script's output contains the string ERROR, as described above, the node's status is reported as `unhealthy` and the node is black-listed by the ResourceManager. No further tasks will be assigned to this node. However, the NodeManager continues to run the script, so that if the node becomes healthy again, it will be removed from the blacklisted nodes on the ResourceManager automatically. The node's health along with the output of the script, if it is unhealthy, is available to the administrator in the ResourceManager web interface. The time since the node was healthy is also displayed on the web interface.
+
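+A minimal sketch of such a script, assuming a purely hypothetical check of a single local directory (real checks will vary by site), might look like:
+
+    #!/bin/bash
+    # Hypothetical check: report the node unhealthy if /var/tmp is not writable.
+    if ! touch /var/tmp/.nm_health_probe 2>/dev/null; then
+      echo "ERROR: /var/tmp is not writable on $(hostname)"
+    else
+      rm -f /var/tmp/.nm_health_probe
+      echo "OK"
+    fi
+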
+The following parameters can be used to control the node health monitoring script in `etc/hadoop/yarn-site.xml`.
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `yarn.nodemanager.health-checker.script.path` | Node health script | Script to check for node's health status. |
+| `yarn.nodemanager.health-checker.script.opts` | Node health script options | Options for script to check for node's health status. |
+| `yarn.nodemanager.health-checker.script.interval-ms` | Node health script interval | Time interval for running health script. |
+| `yarn.nodemanager.health-checker.script.timeout-ms` | Node health script timeout interval | Timeout for health script execution. |
+
+The health checker script is not supposed to report ERROR just because some of the local disks go bad. The NodeManager has the ability to periodically check the health of the local disks (specifically nodemanager-local-dirs and nodemanager-log-dirs); once the number of bad directories reaches the threshold set by the config property yarn.nodemanager.disk-health-checker.min-healthy-disks, the whole node is marked unhealthy and this information is also sent to the ResourceManager. The boot disk is either RAIDed, or a failure in the boot disk is identified by the health checker script.
+
+Slaves File
+-----------
+
+List all slave hostnames or IP addresses in your `etc/hadoop/slaves` file, one per line. Helper scripts (described below) will use the `etc/hadoop/slaves` file to run commands on many hosts at once. It is not used for any of the Java-based Hadoop configuration. In order to use this functionality, ssh trusts (via either passphraseless ssh or some other means, such as Kerberos) must be established for the accounts used to run Hadoop.
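+
+For example, a hypothetical cluster with three slaves would have an `etc/hadoop/slaves` file containing nothing but the hostnames:
+
+    slave1.example.com
+    slave2.example.com
+    slave3.example.com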
+
+Hadoop Rack Awareness
+---------------------
+
+Many Hadoop components are rack-aware and take advantage of the network topology for performance and safety. Hadoop daemons obtain the rack information of the slaves in the cluster by invoking an administrator configured module. See the [Rack Awareness](./RackAwareness.html) documentation for more specific information.
+
+It is highly recommended to configure rack awareness prior to starting HDFS.
+
+Logging
+-------
+
+Hadoop uses [Apache log4j](http://logging.apache.org/log4j/2.x/) via the Apache Commons Logging framework for logging. Edit the `etc/hadoop/log4j.properties` file to customize the Hadoop daemons' logging configuration (log formats and so on).
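+
+For example, to get more detailed logging from a particular class, a standard log4j override line could be added to `etc/hadoop/log4j.properties` (the class shown here is only an illustration):
+
+    log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem=DEBUG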
+
+Operating the Hadoop Cluster
+----------------------------
+
+Once all the necessary configuration is complete, distribute the files to the `HADOOP_CONF_DIR` directory on all the machines. This should be the same directory on all machines.
+
+In general, it is recommended that HDFS and YARN run as separate users. In the majority of installations, HDFS processes execute as 'hdfs'. YARN processes typically use the 'yarn' account.
+
+### Hadoop Startup
+
+To start a Hadoop cluster you will need to start both the HDFS and YARN clusters.
+
+The first time you bring up HDFS, it must be formatted. Format a new distributed filesystem as *hdfs*:
+
+    [hdfs]$ $HADOOP_PREFIX/bin/hdfs namenode -format <cluster_name>
+
+Start the HDFS NameNode with the following command on the designated node as *hdfs*:
+
+    [hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
+
+Start an HDFS DataNode with the following command on each designated node as *hdfs*:
+
+    [hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
+
+If `etc/hadoop/slaves` and ssh trusted access are configured (see [Single Node Setup](./SingleCluster.html)), all of the HDFS processes can be started with a utility script. As *hdfs*:
+
+    [hdfs]$ $HADOOP_PREFIX/sbin/start-dfs.sh
+
+Start YARN with the following command, run on the designated ResourceManager as *yarn*:
+
+    [yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
+
+Run a script to start a NodeManager on each designated host as *yarn*:
+
+    [yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR start nodemanager
+
+Start a standalone WebAppProxy server. Run on the WebAppProxy server as *yarn*. If multiple servers are used with load balancing it should be run on each of them:
+
+    [yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start proxyserver
+
+If `etc/hadoop/slaves` and ssh trusted access are configured (see [Single Node Setup](./SingleCluster.html)), all of the YARN processes can be started with a utility script. As *yarn*:
+
+    [yarn]$ $HADOOP_PREFIX/sbin/start-yarn.sh
+
+Start the MapReduce JobHistory Server with the following command, run on the designated server as *mapred*:
+
+    [mapred]$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR start historyserver
+
+### Hadoop Shutdown
+
+Stop the NameNode with the following command, run on the designated NameNode as *hdfs*:
+
+    [hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
+
+Run a script to stop a DataNode as *hdfs*:
+
+    [hdfs]$ $HADOOP_PREFIX/sbin/hadoop-daemons.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
+
+If `etc/hadoop/slaves` and ssh trusted access are configured (see [Single Node Setup](./SingleCluster.html)), all of the HDFS processes may be stopped with a utility script. As *hdfs*:
+
+    [hdfs]$ $HADOOP_PREFIX/sbin/stop-dfs.sh
+
+Stop the ResourceManager with the following command, run on the designated ResourceManager as *yarn*:
+
+    [yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
+
+Run a script to stop a NodeManager on a slave as *yarn*:
+
+    [yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemons.sh --config $HADOOP_CONF_DIR stop nodemanager
+
+If `etc/hadoop/slaves` and ssh trusted access are configured (see [Single Node Setup](./SingleCluster.html)), all of the YARN processes can be stopped with a utility script. As *yarn*:
+
+    [yarn]$ $HADOOP_PREFIX/sbin/stop-yarn.sh
+
+Stop the WebAppProxy server. Run on the WebAppProxy server as *yarn*. If multiple servers are used with load balancing it should be run on each of them:
+
+    [yarn]$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop proxyserver
+
+Stop the MapReduce JobHistory Server with the following command, run on the designated server as *mapred*:
+
+    [mapred]$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh --config $HADOOP_CONF_DIR stop historyserver
+
+Web Interfaces
+--------------
+
+Once the Hadoop cluster is up and running, check the web UI of the components as described below:
+
+| Daemon | Web Interface | Notes |
+|:---- |:---- |:---- |
+| NameNode | http://nn_host:port/ | Default HTTP port is 50070. |
+| ResourceManager | http://rm_host:port/ | Default HTTP port is 8088. |
+| MapReduce JobHistory Server | http://jhs_host:port/ | Default HTTP port is 19888. |

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md b/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
new file mode 100644
index 0000000..3a61445
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
@@ -0,0 +1,178 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+* [Hadoop Commands Guide](#Hadoop_Commands_Guide)
+    * [Overview](#Overview)
+        * [Generic Options](#Generic_Options)
+* [Hadoop Common Commands](#Hadoop_Common_Commands)
+    * [User Commands](#User_Commands)
+        * [archive](#archive)
+        * [checknative](#checknative)
+        * [classpath](#classpath)
+        * [credential](#credential)
+        * [distcp](#distcp)
+        * [fs](#fs)
+        * [jar](#jar)
+        * [key](#key)
+        * [trace](#trace)
+        * [version](#version)
+        * [CLASSNAME](#CLASSNAME)
+    * [Administration Commands](#Administration_Commands)
+        * [daemonlog](#daemonlog)
+
+Hadoop Commands Guide
+=====================
+
+Overview
+--------
+
+All hadoop commands are invoked by the `bin/hadoop` script. Running the
+hadoop script without any arguments prints the description for all
+commands.
+
+Usage: `hadoop [--config confdir] [--loglevel loglevel] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]`
+
+| FIELD | Description |
+|:---- |:---- |
+| `--config confdir` | Overwrites the default Configuration directory.  Default is `${HADOOP_HOME}/conf`. |
+| `--loglevel loglevel` | Overwrites the log level. Valid log levels are FATAL, ERROR, WARN, INFO, DEBUG, and TRACE. Default is INFO. |
+| GENERIC\_OPTIONS | The common set of options supported by multiple commands. |
+| COMMAND\_OPTIONS | Various commands with their options are described in this documentation for the Hadoop common sub-project. HDFS and YARN are covered in other documents. |
+
+### Generic Options
+
+Many subcommands honor a common set of configuration options to alter their behavior:
+
+| GENERIC\_OPTION | Description |
+|:---- |:---- |
+| `-archives <comma separated list of archives> ` | Specify comma separated archives to be unarchived on the compute machines. Applies only to job. |
+| `-conf <configuration file> ` | Specify an application configuration file. |
+| `-D <property>=<value> ` | Use value for given property. |
+| `-files <comma separated list of files> ` | Specify comma separated files to be copied to the map reduce cluster. Applies only to job. |
+| `-jt <local> or <resourcemanager:port>` | Specify a ResourceManager. Applies only to job. |
+| `-libjars <comma separated list of jars> ` | Specify comma separated jar files to include in the classpath. Applies only to job. |
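+
+For example, for a command that implements `Tool` (such as `fs`), generic options are placed before the command-specific arguments, so a hypothetical invocation overriding a property might look like:
+
+    hadoop fs -D fs.defaultFS=hdfs://nn.example.com:8020 -ls /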
+
+Hadoop Common Commands
+======================
+
+All of these commands are executed from the `hadoop` shell command. They have been broken up into [User Commands](#User_Commands) and [Administration Commands](#Administration_Commands).
+
+User Commands
+-------------
+
+Commands useful for users of a hadoop cluster.
+
+### `archive`
+
+Creates a hadoop archive. More information can be found at [Hadoop Archives Guide](../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/HadoopArchives.html).
+
+### `checknative`
+
+Usage: `hadoop checknative [-a] [-h] `
+
+| COMMAND\_OPTION | Description |
+|:---- |:---- |
+| `-a` | Check all libraries are available. |
+| `-h` | print help |
+
+This command checks the availability of the Hadoop native code. See the [Native Libraries Guide](./NativeLibraries.html) for more information. By default, this command only checks the availability of libhadoop.
+
+### `classpath`
+
+Usage: `hadoop classpath [--glob |--jar <path> |-h |--help]`
+
+| COMMAND\_OPTION | Description |
+|:---- |:---- |
+| `--glob` | expand wildcards |
+| `--jar` *path* | write classpath as manifest in jar named *path* |
+| `-h`, `--help` | print help |
+
+Prints the class path needed to get the Hadoop jar and the required libraries. If called without arguments, then prints the classpath set up by the command scripts, which is likely to contain wildcards in the classpath entries. Additional options print the classpath after wildcard expansion or write the classpath into the manifest of a jar file. The latter is useful in environments where wildcards cannot be used and the expanded classpath exceeds the maximum supported command line length.
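+
+For example, to materialize the expanded classpath into the manifest of a jar at a hypothetical location:
+
+    hadoop classpath --jar /tmp/hadoop-classpath.jar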
+
+### `credential`
+
+Usage: `hadoop credential <subcommand> [options]`
+
+| COMMAND\_OPTION | Description |
+|:---- |:---- |
+| create *alias* [-v *value*][-provider *provider-path*] | Prompts the user for a credential to be stored as the given alias when a value is not provided via `-v`. The *hadoop.security.credential.provider.path* within the core-site.xml file will be used unless a `-provider` is indicated. |
+| delete *alias* [-i][-provider *provider-path*] | Deletes the credential with the provided alias and optionally warns the user when `--interactive` is used. The *hadoop.security.credential.provider.path* within the core-site.xml file will be used unless a `-provider` is indicated. |
+| list [-provider *provider-path*] | Lists all of the credential aliases. The *hadoop.security.credential.provider.path* within the core-site.xml file will be used unless a `-provider` is indicated. |
+
+Command to manage credentials, passwords and secrets within credential providers.
+
+The CredentialProvider API in Hadoop allows for the separation of applications and how they store their required passwords/secrets. In order to indicate a particular provider type and location, the user must provide the *hadoop.security.credential.provider.path* configuration element in core-site.xml or use the command line option `-provider` on each of the following commands. This provider path is a comma-separated list of URLs that indicates the type and location of a list of providers that should be consulted. For example, the following path: `user:///,jceks://file/tmp/test.jceks,jceks://hdfs@nn1.example.com/my/path/test.jceks`
+
+indicates that the current user's credentials file should be consulted through the User Provider, that the local file located at `/tmp/test.jceks` is a Java Keystore Provider and that the file located within HDFS at `nn1.example.com/my/path/test.jceks` is also a store for a Java Keystore Provider.
+
+The credential command will often be used for provisioning a password or secret to a particular credential store provider. In order to explicitly indicate which provider store to use, the `-provider` option should be used. Otherwise, given a path of multiple providers, the first non-transient provider will be used. This may or may not be the one that you intended.
+
+Example: `hadoop credential list -provider jceks://file/tmp/test.jceks`
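+
+Similarly, a credential could be provisioned under a hypothetical alias against the same provider; the command prompts for the secret when `-v` is not given:
+
+    hadoop credential create my.database.password -provider jceks://file/tmp/test.jceks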
+
+### `distcp`
+
+Copy file or directories recursively. More information can be found at [Hadoop DistCp Guide](../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistCp.html).
+
+### `fs`
+
+This command is documented in the [File System Shell Guide](./FileSystemShell.html). It is a synonym for `hdfs dfs` when HDFS is in use.
+
+### `jar`
+
+Usage: `hadoop jar <jar> [mainClass] args...`
+
+Runs a jar file.
+
+Use [`yarn jar`](../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html#jar) to launch YARN applications instead.
+
+### `key`
+
+Manage keys via the KeyProvider.
+
+### `trace`
+
+View and modify Hadoop tracing settings. See the [Tracing Guide](./Tracing.html).
+
+### `version`
+
+Usage: `hadoop version`
+
+Prints the version.
+
+### `CLASSNAME`
+
+Usage: `hadoop CLASSNAME`
+
+Runs the class named `CLASSNAME`.
+
+Administration Commands
+-----------------------
+
+Commands useful for administrators of a hadoop cluster.
+
+### `daemonlog`
+
+Usage:
+
+    hadoop daemonlog -getlevel <host:httpport> <classname>
+    hadoop daemonlog -setlevel <host:httpport> <classname> <level>
+
+| COMMAND\_OPTION | Description |
+|:---- |:---- |
+| `-getlevel` *host:httpport* *classname* | Prints the log level of the log identified by a qualified *classname*, in the daemon running at *host:httpport*. This command internally connects to `http://<host:httpport>/logLevel?log=<classname>` |
+| `-setlevel` *host:httpport* *classname* *level* | Sets the log level of the log identified by a qualified *classname*, in the daemon running at *host:httpport*. This command internally connects to `http://<host:httpport>/logLevel?log=<classname>&level=<level>` |
+
+Get/Set the log level for a Log identified by a qualified class name in the daemon.
+
+	Example: $ bin/hadoop daemonlog -setlevel 127.0.0.1:50070 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG

http://git-wip-us.apache.org/repos/asf/hadoop/blob/245f7b2a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm b/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
deleted file mode 100644
index b0c5083..0000000
--- a/hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
+++ /dev/null
@@ -1,1022 +0,0 @@
-~~ Licensed under the Apache License, Version 2.0 (the "License");
-~~ you may not use this file except in compliance with the License.
-~~ You may obtain a copy of the License at
-~~
-~~ http://www.apache.org/licenses/LICENSE-2.0
-~~
-~~ Unless required by applicable law or agreed to in writing, software
-~~ distributed under the License is distributed on an "AS IS" BASIS,
-~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-~~ See the License for the specific language governing permissions and
-~~ limitations under the License.
-
-  ---
-  Hadoop KMS - Documentation Sets ${project.version}
-  ---
-  ---
-  ${maven.build.timestamp}
-
-Hadoop Key Management Server (KMS) - Documentation Sets ${project.version}
-
-  Hadoop KMS is a cryptographic key management server based on Hadoop's
-  <<KeyProvider>> API.
-
-  It provides a client and a server components which communicate over
-  HTTP using a REST API.
-
-  The client is a KeyProvider implementation interacts with the KMS
-  using the KMS HTTP REST API.
-
-  KMS and its client have built-in security and they support HTTP SPNEGO
-  Kerberos authentication and HTTPS secure transport.
-
-  KMS is a Java web-application and it runs using a pre-configured Tomcat
-  bundled with the Hadoop distribution.
-
-* KMS Client Configuration
-
-  The KMS client <<<KeyProvider>>> uses the <<kms>> scheme, and the embedded
-  URL must be the URL of the KMS. For example, for a KMS running
-  on <<<http://localhost:16000/kms>>>, the KeyProvider URI is
-  <<<kms://http@localhost:16000/kms>>>. And, for a KMS running on
-  <<<https://localhost:16000/kms>>>, the KeyProvider URI is
-  <<<kms://https@localhost:16000/kms>>>
-
-* KMS
-
-** KMS Configuration
-
-  Configure the KMS backing KeyProvider properties
-  in the <<<etc/hadoop/kms-site.xml>>> configuration file:
-
-+---+
-  <property>
-    <name>hadoop.kms.key.provider.uri</name>
-    <value>jceks://file@/${user.home}/kms.keystore</value>
-  </property>
-
-  <property>
-    <name>hadoop.security.keystore.java-keystore-provider.password-file</name>
-    <value>kms.keystore.password</value>
-  </property>
-+---+
-
-  The password file is looked up in the Hadoop's configuration directory via the
-  classpath.
-
-  NOTE: You need to restart the KMS for the configuration changes to take
-  effect.
-
-** KMS Cache
-
-  KMS caches keys for short period of time to avoid excessive hits to the
-  underlying key provider.
-
-  The Cache is enabled by default (can be dissabled by setting the
-  <<<hadoop.kms.cache.enable>>> boolean property to false)
-
-  The cache is used with the following 3 methods only, <<<getCurrentKey()>>>
-  and <<<getKeyVersion()>>> and <<<getMetadata()>>>.
-
-  For the <<<getCurrentKey()>>> method, cached entries are kept for a maximum
-  of 30000 millisecond regardless the number of times the key is being access
-  (to avoid stale keys to be considered current).
-
-  For the <<<getKeyVersion()>>> method, cached entries are kept with a default
-  inactivity timeout of 600000 milliseconds (10 mins). This time out is
-  configurable via the following property in the <<<etc/hadoop/kms-site.xml>>>
-  configuration file:
-
-+---+
-  <property>
-    <name>hadoop.kms.cache.enable</name>
-    <value>true</value>
-  </property>
-
-  <property>
-    <name>hadoop.kms.cache.timeout.ms</name>
-    <value>600000</value>
-  </property>
-
-  <property>
-    <name>hadoop.kms.current.key.cache.timeout.ms</name>
-    <value>30000</value>
-  </property>
-+---+
-
-** KMS Aggregated Audit logs
-
-  Audit logs are aggregated for API accesses to the GET_KEY_VERSION,
-  GET_CURRENT_KEY, DECRYPT_EEK, GENERATE_EEK operations.
-
-  Entries are grouped by the (user,key,operation) combined key for a
-  configurable aggregation interval after which the number of accesses to the
-  specified end-point by the user for a given key is flushed to the audit log.
-
-  The Aggregation interval is configured via the property :
-
-+---+
-  <property>
-    <name>hadoop.kms.aggregation.delay.ms</name>
-    <value>10000</value>
-  </property>
-+---+
-
-
-** Start/Stop the KMS
-
-  To start/stop KMS use KMS's bin/kms.sh script. For example:
-
-+---+
-hadoop-${project.version} $ sbin/kms.sh start
-+---+
-
-  NOTE: Invoking the script without any parameters list all possible
-  parameters (start, stop, run, etc.). The <<<kms.sh>>> script is a wrapper
-  for Tomcat's <<<catalina.sh>>> script that sets the environment variables
-  and Java System properties required to run KMS.
-
-** Embedded Tomcat Configuration
-
-  To configure the embedded Tomcat go to the <<<share/hadoop/kms/tomcat/conf>>>.
-
-  KMS pre-configures the HTTP and Admin ports in Tomcat's <<<server.xml>>> to
-  16000 and 16001.
-
-  Tomcat logs are also preconfigured to go to Hadoop's <<<logs/>>> directory.
-
-  The following environment variables (which can be set in KMS's
-  <<<etc/hadoop/kms-env.sh>>> script) can be used to alter those values:
-
-  * KMS_HTTP_PORT
-
-  * KMS_ADMIN_PORT
-
-  * KMS_MAX_THREADS
-
-  * KMS_LOG
-
-  NOTE: You need to restart the KMS for the configuration changes to take
-  effect.
-
-** Loading native libraries
-
-  The following environment variable (which can be set in KMS's
-  <<<etc/hadoop/kms-env.sh>>> script) can be used to specify the location
-  of any required native libraries. For eg. Tomact native Apache Portable
-  Runtime (APR) libraries:
-
-  * JAVA_LIBRARY_PATH
-
-** KMS Security Configuration
-
-*** Enabling Kerberos HTTP SPNEGO Authentication
-
-  Configure the Kerberos <<<etc/krb5.conf>>> file with the information of your
-  KDC server.
-
-  Create a service principal and its keytab for the KMS, it must be an
-  <<<HTTP>>> service principal.
-
-  Configure KMS <<<etc/hadoop/kms-site.xml>>> with the correct security values,
-  for example:
-
-+---+
-  <property>
-    <name>hadoop.kms.authentication.type</name>
-    <value>kerberos</value>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.kerberos.keytab</name>
-    <value>${user.home}/kms.keytab</value>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.kerberos.principal</name>
-    <value>HTTP/localhost</value>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.kerberos.name.rules</name>
-    <value>DEFAULT</value>
-  </property>
-+---+
-
-  NOTE: You need to restart the KMS for the configuration changes to take
-  effect.
-
-*** KMS Proxyuser Configuration
-
-  Each proxyuser must be configured in <<<etc/hadoop/kms-site.xml>>> using the
-  following properties:
-
-+---+
-  <property>
-    <name>hadoop.kms.proxyuser.#USER#.users</name>
-    <value>*</value>
-  </property>
-
-  <property>
-    <name>hadoop.kms.proxyuser.#USER#.groups</name>
-    <value>*</value>
-  </property>
-
-  <property>
-    <name>hadoop.kms.proxyuser.#USER#.hosts</name>
-    <value>*</value>
-  </property>
-+---+
-
-  <<<#USER#>>> is the username of the proxyuser to configure.
-
-  The <<<users>>> property indicates the users that can be impersonated.
-
-  The <<<groups>>> property indicates the groups users being impersonated must
-  belong to.
-
-  At least one of the <<<users>>> or <<<groups>>> properties must be defined.
-  If both are specified, then the configured proxyuser will be able to
-  impersonate and user in the <<<users>>> list and any user belonging to one of
-  the groups in the <<<groups>>> list.
-
-  The <<<hosts>>> property indicates from which host the proxyuser can make
-  impersonation requests.
-
-  If <<<users>>>, <<<groups>>> or <<<hosts>>> has a <<<*>>>, it means there are
-  no restrictions for the proxyuser regarding users, groups or hosts.
-
-*** KMS over HTTPS (SSL)
-
-  To configure KMS to work over HTTPS the following 2 properties must be
-  set in the <<<etc/hadoop/kms_env.sh>>> script (shown with default values):
-
-    * KMS_SSL_KEYSTORE_FILE=${HOME}/.keystore
-
-    * KMS_SSL_KEYSTORE_PASS=password
-
-  In the KMS <<<tomcat/conf>>> directory, replace the <<<server.xml>>> file
-  with the provided <<<ssl-server.xml>>> file.
-
-  You need to create an SSL certificate for the KMS. As the
-  <<<kms>>> Unix user, using the Java <<<keytool>>> command to create the
-  SSL certificate:
-
-+---+
-$ keytool -genkey -alias tomcat -keyalg RSA
-+---+
-
-  You will be asked a series of questions in an interactive prompt.  It will
-  create the keystore file, which will be named <<.keystore>> and located in the
-  <<<kms>>> user home directory.
-
-  The password you enter for "keystore password" must match the  value of the
-  <<<KMS_SSL_KEYSTORE_PASS>>> environment variable set in the
-  <<<kms-env.sh>>> script in the configuration directory.
-
-  The answer to "What is your first and last name?" (i.e. "CN") must be the
-  hostname of the machine where the KMS will be running.
-
-  NOTE: You need to restart the KMS for the configuration changes to take
-  effect.
-
-*** KMS Access Control
-
-  KMS ACLs configuration are defined in the KMS <<<etc/hadoop/kms-acls.xml>>>
-  configuration file. This file is hot-reloaded when it changes.
-
-  KMS supports both fine grained access control as well as blacklist for kms
-  operations via a set ACL configuration properties.
-
-  A user accessing KMS is first checked for inclusion in the Access Control
-  List for the requested operation and then checked for exclusion in the
-  Black list for the operation before access is granted.
-
-
-+---+
-  <property>
-    <name>hadoop.kms.acl.CREATE</name>
-    <value>*</value>
-    <description>
-      ACL for create-key operations.
-      If the user is not in the GET ACL, the key material is not returned
-      as part of the response.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.blacklist.CREATE</name>
-    <value>hdfs,foo</value>
-    <description>
-      Blacklist for create-key operations.
-      If the user is in the Blacklist, the key material is not returned
-      as part of the response.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.DELETE</name>
-    <value>*</value>
-    <description>
-      ACL for delete-key operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.blacklist.DELETE</name>
-    <value>hdfs,foo</value>
-    <description>
-      Blacklist for delete-key operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.ROLLOVER</name>
-    <value>*</value>
-    <description>
-      ACL for rollover-key operations.
-      If the user is not in the GET ACL, the key material is not returned
-      as part of the response.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.blacklist.ROLLOVER</name>
-    <value>hdfs,foo</value>
-    <description>
-      Blacklist for rollover-key operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.GET</name>
-    <value>*</value>
-    <description>
-      ACL for get-key-version and get-current-key operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.blacklist.GET</name>
-    <value>hdfs,foo</value>
-    <description>
-      Blacklist for get-key-version and get-current-key operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.GET_KEYS</name>
-    <value>*</value>
-    <description>
-      ACL for get-keys operation.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.blacklist.GET_KEYS</name>
-    <value>hdfs,foo</value>
-    <description>
-      Blacklist for get-keys operation.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.GET_METADATA</name>
-    <value>*</value>
-    <description>
-      ACL for get-key-metadata and get-keys-metadata operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.blacklist.GET_METADATA</name>
-    <value>hdfs,foo</value>
-    <description>
-      Blacklist for get-key-metadata and get-keys-metadata operations.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.SET_KEY_MATERIAL</name>
-    <value>*</value>
-    <description>
-        Complementary ACL for CREATE and ROLLOVER operations to allow the
-        client to provide the key material when creating or rolling a key.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.blacklist.SET_KEY_MATERIAL</name>
-    <value>hdfs,foo</value>
-    <description>
-        Complementary blacklist for CREATE and ROLLOVER operations to allow
-        the client to provide the key material when creating or rolling a key.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.GENERATE_EEK</name>
-    <value>*</value>
-    <description>
-      ACL for generateEncryptedKey
-      CryptoExtension operations
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.blacklist.GENERATE_EEK</name>
-    <value>hdfs,foo</value>
-    <description>
-      Blacklist for generateEncryptedKey
-      CryptoExtension operations
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.acl.DECRYPT_EEK</name>
-    <value>*</value>
-    <description>
-      ACL for decrypt EncryptedKey
-      CryptoExtension operations
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.blacklist.DECRYPT_EEK</name>
-    <value>hdfs,foo</value>
-    <description>
-      Blacklist for decrypt EncryptedKey
-      CryptoExtension operations
-    </description>
-  </property>
-
-+---+
-
-*** Key Access Control
-
-  KMS supports access control for all non-read operations at the Key level.
-  All Key Access operations are classified as:
-
-    * MANAGEMENT - createKey, deleteKey, rolloverNewVersion
-
-    * GENERATE_EEK - generateEncryptedKey, warmUpEncryptedKeys
-
-    * DECRYPT_EEK - decryptEncryptedKey
-
-    * READ - getKeyVersion, getKeyVersions, getMetadata, getKeysMetadata,
-             getCurrentKey
-
-    * ALL - all of the above
-
-  These can be defined in the KMS <<<etc/hadoop/kms-acls.xml>>> as shown in
-  the example below.
-
-  For all keys for which key access has not been explicitly configured, it
-  is possible to configure a default key access control for a subset of the
-  operation types.
-
-  It is also possible to configure a "whitelist" key ACL for a subset of the
-  operation types. The whitelist key ACL grants access in addition to the
-  explicit or default per-key ACL. That is, if no per-key ACL is explicitly
-  set, a user will be granted access if they are present in the default per-key
-  ACL or the whitelist key ACL. If a per-key ACL is explicitly set, a user
-  will be granted access if they are present in the per-key ACL or the
-  whitelist key ACL.
-
-  If no ACL is configured for a specific key AND no default ACL is configured
-  AND no whitelist key ACL is configured for the requested operation,
-  then access will be DENIED.
-
-  <<NOTE:>> The default and whitelist key ACLs do not support the <<<ALL>>>
-            operation qualifier.
-  
-+---+
-  <property>
-    <name>key.acl.testKey1.MANAGEMENT</name>
-    <value>*</value>
-    <description>
-      ACL for createKey, deleteKey and rolloverNewVersion operations.
-    </description>
-  </property>
-
-  <property>
-    <name>key.acl.testKey2.GENERATE_EEK</name>
-    <value>*</value>
-    <description>
-      ACL for generateEncryptedKey operations.
-    </description>
-  </property>
-
-  <property>
-    <name>key.acl.testKey3.DECRYPT_EEK</name>
-    <value>admink3</value>
-    <description>
-      ACL for decryptEncryptedKey operations.
-    </description>
-  </property>
-
-  <property>
-    <name>key.acl.testKey4.READ</name>
-    <value>*</value>
-    <description>
-      ACL for getKeyVersion, getKeyVersions, getMetadata, getKeysMetadata,
-      getCurrentKey operations
-    </description>
-  </property>
-
-  <property>
-    <name>key.acl.testKey5.ALL</name>
-    <value>*</value>
-    <description>
-      ACL for ALL operations.
-    </description>
-  </property>
-
-  <property>
-    <name>whitelist.key.acl.MANAGEMENT</name>
-    <value>admin1</value>
-    <description>
-      whitelist ACL for MANAGEMENT operations for all keys.
-    </description>
-  </property>
-
-  <!--
-  'testKey3' key ACL is defined. Since a 'whitelist'
-  key is also defined for DECRYPT_EEK, in addition to
-  admink3, admin1 can also perform DECRYPT_EEK operations
-  on 'testKey3'
-  -->
-  <property>
-    <name>whitelist.key.acl.DECRYPT_EEK</name>
-    <value>admin1</value>
-    <description>
-      whitelist ACL for DECRYPT_EEK operations for all keys.
-    </description>
-  </property>
-
-  <property>
-    <name>default.key.acl.MANAGEMENT</name>
-    <value>user1,user2</value>
-    <description>
-      default ACL for MANAGEMENT operations for all keys that are not
-      explicitly defined.
-    </description>
-  </property>
-
-  <property>
-    <name>default.key.acl.GENERATE_EEK</name>
-    <value>user1,user2</value>
-    <description>
-      default ACL for GENERATE_EEK operations for all keys that are not
-      explicitly defined.
-    </description>
-  </property>
-
-  <property>
-    <name>default.key.acl.DECRYPT_EEK</name>
-    <value>user1,user2</value>
-    <description>
-      default ACL for DECRYPT_EEK operations for all keys that are not
-      explicitly defined.
-    </description>
-  </property>
-
-  <property>
-    <name>default.key.acl.READ</name>
-    <value>user1,user2</value>
-    <description>
-      default ACL for READ operations for all keys that are not
-      explicitly defined.
-    </description>
-  </property>
-+---+
-
-** KMS Delegation Token Configuration
-
-  KMS delegation token secret manager can be configured with the following
-  properties:
-
-+---+
-  <property>
-    <name>hadoop.kms.authentication.delegation-token.update-interval.sec</name>
-    <value>86400</value>
-    <description>
-      How often the master key is rotated, in seconds. Default value 1 day.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.delegation-token.max-lifetime.sec</name>
-    <value>604800</value>
-    <description>
-      Maximum lifetime of a delegation token, in seconds. Default value 7 days.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.delegation-token.renew-interval.sec</name>
-    <value>86400</value>
-    <description>
-      Renewal interval of a delegation token, in seconds. Default value 1 day.
-    </description>
-  </property>
-
-  <property>
-    <name>hadoop.kms.authentication.delegation-token.removal-scan-interval.sec</name>
-    <value>3600</value>
-    <description>
-      Scan interval to remove expired delegation tokens.
-    </description>
-  </property>
-+---+
-
-
-** Using Multiple Instances of KMS Behind a Load-Balancer or VIP
-
-  KMS supports multiple KMS instances behind a load-balancer or VIP for
-  scalability and for HA purposes.
-
-  When using multiple KMS instances behind a load-balancer or VIP, requests from
-  the same user may be handled by different KMS instances.
-
-  KMS instances behind a load-balancer or VIP must be specially configured to
-  work properly as a single logical service.
-
-*** HTTP Kerberos Principals Configuration
-
-  When KMS instances are behind a load-balancer or VIP, clients will use the
-  hostname of the VIP. For Kerberos SPNEGO authentication, the hostname of the
-  URL is used to construct the Kerberos service name of the server,
-  <<<HTTP/#HOSTNAME#>>>. This means that all KMS instances must have a Kerberos
-  service name with the load-balancer or VIP hostname.
-
-  In order to be able to access a specific KMS instance directly, the KMS
-  instance must also have a Kerberos service name with its own hostname. This
-  is required for monitoring and admin purposes.
-
-  Both Kerberos service principal credentials (for the load-balancer/VIP
-  hostname and for the actual KMS instance hostname) must be in the keytab
-  file configured for authentication, and the principal name specified in the
-  configuration must be '*'. For example:
-
-+---+
-  <property>
-    <name>hadoop.kms.authentication.kerberos.principal</name>
-    <value>*</value>
-  </property>
-+---+
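-
-  As an illustration (principal names and the keytab path are examples, not
-  defaults), both HTTP principals could be added to the shared keytab with
-  <<<kadmin>>>, run on the KDC as <<<kadmin.local>>> in this sketch:
-
-+---+
-$ kadmin.local -q "ktadd -k /etc/hadoop/conf/kms.keytab HTTP/kms-vip.example.com HTTP/kms01.example.com"
-+---+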
-
-  <<NOTE:>> If using HTTPS, the SSL certificate used by the KMS instance must
-  be configured to support multiple hostnames (see Java 7
-  <<<keytool>>> SAN extension support for details on how to do this).
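-
-  For example (the hostnames are illustrative), a certificate covering both
-  the VIP hostname and the instance hostname can be generated with the Java 7+
-  <<<keytool>>> <<<-ext SAN>>> option:
-
-+---+
-$ keytool -genkey -alias tomcat -keyalg RSA \
-    -ext SAN=dns:kms-vip.example.com,dns:kms01.example.com
-+---+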
-
-*** HTTP Authentication Signature
-
-  KMS uses Hadoop Authentication for HTTP authentication. Hadoop Authentication
-  issues a signed HTTP Cookie once the client has authenticated successfully.
-  This HTTP Cookie has an expiration time, after which it will trigger a new
-  authentication sequence. This is done to avoid triggering the authentication
-  on every HTTP request of a client.
-
-  A KMS instance must be able to verify HTTP Cookie signatures signed by other
-  KMS instances. To do this, all KMS instances must share the signing secret.
-
-  This secret sharing can be done using a Zookeeper service, which is
-  configured in KMS with the following properties in <<<kms-site.xml>>>:
-
-+---+
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider</name>
-    <value>zookeeper</value>
-    <description>
-      Indicates how the secret to sign the authentication cookies will be
-      stored. Options are 'random' (default), 'string' and 'zookeeper'.
-      If using a setup with multiple KMS instances, 'zookeeper' should be used.
-    </description>
-  </property>
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.path</name>
-    <value>/hadoop-kms/hadoop-auth-signature-secret</value>
-    <description>
-      The Zookeeper ZNode path where the KMS instances will store and retrieve
-      the secret from.
-    </description>
-  </property>
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.connection.string</name>
-    <value>#HOSTNAME#:#PORT#,...</value>
-    <description>
-      The Zookeeper connection string, a comma-separated list of
-      hostname:port pairs.
-    </description>
-  </property>
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.auth.type</name>
-    <value>kerberos</value>
-    <description>
-      The Zookeeper authentication type, 'none' or 'sasl' (Kerberos).
-    </description>
-  </property>
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.keytab</name>
-    <value>/etc/hadoop/conf/kms.keytab</value>
-    <description>
-      The absolute path for the Kerberos keytab with the credentials to
-      connect to Zookeeper.
-    </description>
-  </property>
-  <property>
-    <name>hadoop.kms.authentication.signer.secret.provider.zookeeper.kerberos.principal</name>
-    <value>kms/#HOSTNAME#</value>
-    <description>
-      The Kerberos service principal used to connect to Zookeeper.
-    </description>
-  </property>
-+---+
-
-*** Delegation Tokens
-
-  TBD
-
-** KMS HTTP REST API
-
-*** Create a Key
-
-  <REQUEST:>
-
-+---+
-POST http://HOST:PORT/kms/v1/keys
-Content-Type: application/json
-
-{
-  "name"        : "<key-name>",
-  "cipher"      : "<cipher>",
-  "length"      : <length>,        //int
-  "material"    : "<material>",    //base64
-  "description" : "<description>"
-}
-+---+
-
-  <RESPONSE:>
-
-+---+
-201 CREATED
-LOCATION: http://HOST:PORT/kms/v1/key/<key-name>
-Content-Type: application/json
-
-{
-  "name"        : "versionName",
-  "material"    : "<material>",    //base64, not present without GET ACL
-}
-+---+
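-
-  As a sketch of how these REST calls can be exercised from the command line
-  (HOST, PORT and the JSON payload values are placeholders, not defaults), a
-  create-key request could be issued with <<<curl>>> using SPNEGO
-  authentication:
-
-+---+
-$ curl --negotiate -u : -X POST -H "Content-Type: application/json" \
-    -d '{"name":"key1","cipher":"AES/CTR/NoPadding","length":128}' \
-    "http://HOST:PORT/kms/v1/keys"
-+---+
-
-  The same pattern (SPNEGO via <<<--negotiate -u :>>>, JSON bodies for POST
-  requests) applies to the other operations below.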
-
-*** Rollover Key
-
-  <REQUEST:>
-
-+---+
-POST http://HOST:PORT/kms/v1/key/<key-name>
-Content-Type: application/json
-
-{
-  "material"    : "<material>",
-}
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-Content-Type: application/json
-
-{
-  "name"        : "versionName",
-  "material"    : "<material>",    //base64, not present without GET ACL
-}
-+---+
-
-*** Delete Key
-
-  <REQUEST:>
-
-+---+
-DELETE http://HOST:PORT/kms/v1/key/<key-name>
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-+---+
-
-*** Get Key Metadata
-
-  <REQUEST:>
-
-+---+
-GET http://HOST:PORT/kms/v1/key/<key-name>/_metadata
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-Content-Type: application/json
-
-{
-  "name"        : "<key-name>",
-  "cipher"      : "<cipher>",
-  "length"      : <length>,        //int
-  "description" : "<description>",
-  "created"     : <millis-epoc>,   //long
-  "versions"    : <versions>       //int
-}
-+---+
-
-*** Get Current Key
-
-  <REQUEST:>
-
-+---+
-GET http://HOST:PORT/kms/v1/key/<key-name>/_currentversion
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-Content-Type: application/json
-
-{
-  "name"        : "versionName",
-  "material"    : "<material>",    //base64
-}
-+---+
-
-
-*** Generate Encrypted Key for Current KeyVersion
-
-  <REQUEST:>
-
-+---+
-GET http://HOST:PORT/kms/v1/key/<key-name>/_eek?eek_op=generate&num_keys=<number-of-keys-to-generate>
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-Content-Type: application/json
-[
-  {
-    "versionName"         : "encryptionVersionName",
-    "iv"                  : "<iv>",          //base64
-    "encryptedKeyVersion" : {
-        "versionName"       : "EEK",
-        "material"          : "<material>",    //base64
-    }
-  },
-  {
-    "versionName"         : "encryptionVersionName",
-    "iv"                  : "<iv>",          //base64
-    "encryptedKeyVersion" : {
-        "versionName"       : "EEK",
-        "material"          : "<material>",    //base64
-    }
-  },
-  ...
-]
-+---+
-
-*** Decrypt Encrypted Key
-
-  <REQUEST:>
-
-+---+
-POST http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt
-Content-Type: application/json
-
-{
-  "name"        : "<key-name>",
-  "iv"          : "<iv>",          //base64
-  "material"    : "<material>",    //base64
-}
-
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-Content-Type: application/json
-
-{
-  "name"        : "EK",
-  "material"    : "<material>",    //base64
-}
-+---+
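-
-  A corresponding <<<curl>>> sketch (the version name, IV and material are
-  placeholders taken from a previously generated encrypted key):
-
-+---+
-$ curl --negotiate -u : -X POST -H "Content-Type: application/json" \
-    -d '{"name":"key1","iv":"<base64-iv>","material":"<base64-material>"}' \
-    "http://HOST:PORT/kms/v1/keyversion/<version-name>/_eek?eek_op=decrypt"
-+---+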
-
-
-*** Get Key Version
-
-  <REQUEST:>
-
-+---+
-GET http://HOST:PORT/kms/v1/keyversion/<version-name>
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-Content-Type: application/json
-
-{
-  "name"        : "versionName",
-  "material"    : "<material>",    //base64
-}
-+---+
-
-*** Get Key Versions
-
-  <REQUEST:>
-
-+---+
-GET http://HOST:PORT/kms/v1/key/<key-name>/_versions
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-Content-Type: application/json
-
-[
-  {
-    "name"        : "versionName",
-    "material"    : "<material>",    //base64
-  },
-  {
-    "name"        : "versionName",
-    "material"    : "<material>",    //base64
-  },
-  ...
-]
-+---+
-
-*** Get Key Names
-
-  <REQUEST:>
-
-+---+
-GET http://HOST:PORT/kms/v1/keys/names
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-Content-Type: application/json
-
-[
-  "<key-name>",
-  "<key-name>",
-  ...
-]
-+---+
-
-*** Get Keys Metadata
-
-  <REQUEST:>
-
-+---+
-GET http://HOST:PORT/kms/v1/keys/metadata?key=<key-name>&key=<key-name>,...
-+---+
-
-  <RESPONSE:>
-
-+---+
-200 OK
-Content-Type: application/json
-
-[
-  {
-    "name"        : "<key-name>",
-    "cipher"      : "<cipher>",
-    "length"      : <length>,        //int
-    "description" : "<description>",
-    "created"     : <millis-epoc>,   //long
-    "versions"    : <versions>       //int
-  },
-  {
-    "name"        : "<key-name>",
-    "cipher"      : "<cipher>",
-    "length"      : <length>,        //int
-    "description" : "<description>",
-    "created"     : <millis-epoc>,   //long
-    "versions"    : <versions>       //int
-  },
-  ...
-]
-+---+
-
-  \[ {{{./index.html}Go Back}} \]