Posted to common-commits@hadoop.apache.org by cm...@apache.org on 2015/02/25 01:03:49 UTC

[2/6] hadoop git commit: HADOOP-11495. Backport "convert site documentation from apt to markdown" to branch-2 (Masatake Iwasaki via Colin P. McCabe)

http://git-wip-us.apache.org/repos/asf/hadoop/blob/343cffb0/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
new file mode 100644
index 0000000..dbcf0d8
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/Metrics.md
@@ -0,0 +1,456 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+* [Overview](#Overview)
+* [jvm context](#jvm_context)
+    * [JvmMetrics](#JvmMetrics)
+* [rpc context](#rpc_context)
+    * [rpc](#rpc)
+    * [RetryCache/NameNodeRetryCache](#RetryCacheNameNodeRetryCache)
+* [rpcdetailed context](#rpcdetailed_context)
+    * [rpcdetailed](#rpcdetailed)
+* [dfs context](#dfs_context)
+    * [namenode](#namenode)
+    * [FSNamesystem](#FSNamesystem)
+    * [JournalNode](#JournalNode)
+    * [datanode](#datanode)
+* [yarn context](#yarn_context)
+    * [ClusterMetrics](#ClusterMetrics)
+    * [QueueMetrics](#QueueMetrics)
+    * [NodeManagerMetrics](#NodeManagerMetrics)
+* [ugi context](#ugi_context)
+    * [UgiMetrics](#UgiMetrics)
+* [metricssystem context](#metricssystem_context)
+    * [MetricsSystem](#MetricsSystem)
+* [default context](#default_context)
+    * [StartupProgress](#StartupProgress)
+
+Overview
+========
+
+Metrics are statistical information exposed by Hadoop daemons, used for monitoring, performance tuning and debugging. Many metrics are available by default and they are very useful for troubleshooting. This page shows the details of the available metrics.
+
+Each section below describes one of the contexts into which metrics are grouped.
+
+The documentation of Metrics 2.0 framework is [here](../../api/org/apache/hadoop/metrics2/package-summary.html).
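+
+Most of the metrics described below can also be browsed without extra tooling through each daemon's HTTP `/jmx` servlet, which returns the records as JSON. The following is a minimal sketch; the hostname, the port (50070 is the default NameNode HTTP port) and the bean name in the `qry` parameter are example values, not requirements.
+
+```bash
+# Sketch: fetch one metrics record as JSON from a daemon's /jmx servlet.
+# "nn.example.com" and port 50070 are placeholders for your NameNode's HTTP address.
+curl -s 'http://nn.example.com:50070/jmx?qry=Hadoop:service=NameNode,name=JvmMetrics'
+```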
+
+jvm context
+===========
+
+JvmMetrics
+----------
+
+Each metrics record contains tags such as ProcessName, SessionID and Hostname as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `MemNonHeapUsedM` | Current non-heap memory used in MB |
+| `MemNonHeapCommittedM` | Current non-heap memory committed in MB |
+| `MemNonHeapMaxM` | Max non-heap memory size in MB |
+| `MemHeapUsedM` | Current heap memory used in MB |
+| `MemHeapCommittedM` | Current heap memory committed in MB |
+| `MemHeapMaxM` | Max heap memory size in MB |
+| `MemMaxM` | Max memory size in MB |
+| `ThreadsNew` | Current number of NEW threads |
+| `ThreadsRunnable` | Current number of RUNNABLE threads |
+| `ThreadsBlocked` | Current number of BLOCKED threads |
+| `ThreadsWaiting` | Current number of WAITING threads |
+| `ThreadsTimedWaiting` | Current number of TIMED\_WAITING threads |
+| `ThreadsTerminated` | Current number of TERMINATED threads |
+| `GcInfo` | Total GC count and GC time in msec, grouped by the kind of GC.  e.g. GcCountPS Scavenge=6, GCTimeMillisPS Scavenge=40, GCCountPS MarkSweep=0, GCTimeMillisPS MarkSweep=0 |
+| `GcCount` | Total GC count |
+| `GcTimeMillis` | Total GC time in msec |
+| `LogFatal` | Total number of FATAL logs |
+| `LogError` | Total number of ERROR logs |
+| `LogWarn` | Total number of WARN logs |
+| `LogInfo` | Total number of INFO logs |
+| `GcNumWarnThresholdExceeded` | Number of times that the GC warn threshold is exceeded |
+| `GcNumInfoThresholdExceeded` | Number of times that the GC info threshold is exceeded |
+| `GcTotalExtraSleepTime` | Total GC extra sleep time in msec |
+
+rpc context
+===========
+
+rpc
+---
+
+Each metrics record contains tags such as Hostname and port (the port number to which the server is bound) as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `ReceivedBytes` | Total number of received bytes |
+| `SentBytes` | Total number of sent bytes |
+| `RpcQueueTimeNumOps` | Total number of RPC calls |
+| `RpcQueueTimeAvgTime` | Average queue time in milliseconds |
+| `RpcProcessingTimeNumOps` | Total number of RPC calls (same as RpcQueueTimeNumOps) |
+| `RpcProcessingAvgTime` | Average RPC processing time in milliseconds |
+| `RpcAuthenticationFailures` | Total number of authentication failures |
+| `RpcAuthenticationSuccesses` | Total number of authentication successes |
+| `RpcAuthorizationFailures` | Total number of authorization failures |
+| `RpcAuthorizationSuccesses` | Total number of authorization successes |
+| `NumOpenConnections` | Current number of open connections |
+| `CallQueueLength` | Current length of the call queue |
+| `rpcQueueTime`*num*`sNumOps` | Shows total number of RPC calls (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s50thPercentileLatency` | Shows the 50th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s75thPercentileLatency` | Shows the 75th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s90thPercentileLatency` | Shows the 90th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s95thPercentileLatency` | Shows the 95th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcQueueTime`*num*`s99thPercentileLatency` | Shows the 99th percentile of RPC queue time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`sNumOps` | Shows total number of RPC calls (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s50thPercentileLatency` | Shows the 50th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s75thPercentileLatency` | Shows the 75th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s90thPercentileLatency` | Shows the 90th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s95thPercentileLatency` | Shows the 95th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+| `rpcProcessingTime`*num*`s99thPercentileLatency` | Shows the 99th percentile of RPC processing time in milliseconds (*num* seconds granularity) if `rpc.metrics.quantile.enable` is set to true. *num* is specified by `rpc.metrics.percentiles.intervals`. |
+
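+The quantile metrics above are emitted only when `rpc.metrics.quantile.enable` is set to true, with one group of metrics per interval listed in `rpc.metrics.percentiles.intervals`. One hedged way to confirm they are being produced is to query the server's RPC metrics bean through the `/jmx` servlet; the host, HTTP port, RPC port (8020) and the 60-second interval below are assumptions for illustration.
+
+```bash
+# Sketch: the RPC metrics bean is named after the RPC listener port (8020 here is an example).
+curl -s 'http://nn.example.com:50070/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort8020' \
+  | grep -i 'rpcQueueTime60s'
+```
+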
+RetryCache/NameNodeRetryCache
+-----------------------------
+
+RetryCache metrics are useful for monitoring NameNode failover. Each metrics record contains the Hostname tag.
+
+| Name | Description |
+|:---- |:---- |
+| `CacheHit` | Total number of RetryCache hits |
+| `CacheCleared` | Total number of RetryCache entries cleared |
+| `CacheUpdated` | Total number of RetryCache updates |
+
+rpcdetailed context
+===================
+
+Metrics of the rpcdetailed context are exposed by the RPC layer in a unified manner. Two metrics are exposed for each RPC, based on its name: the metric named "(RPC method name)NumOps" indicates the total number of calls to the method, and the metric named "(RPC method name)AvgTime" shows the average turnaround time of those calls in milliseconds.
+
+rpcdetailed
+-----------
+
+Each metrics record contains tags such as Hostname and port (the port number to which the server is bound) as additional information along with metrics.
+
+Metrics for RPC methods that have not been called are not included in the metrics record.
+
+| Name | Description |
+|:---- |:---- |
+| *methodname*`NumOps` | Total number of times the method is called |
+| *methodname*`AvgTime` | Average turnaround time of the method in milliseconds |
+
+dfs context
+===========
+
+namenode
+--------
+
+Each metrics record contains tags such as ProcessName, SessionId, and Hostname as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `CreateFileOps` | Total number of files created |
+| `FilesCreated` | Total number of files and directories created by create or mkdir operations |
+| `FilesAppended` | Total number of files appended |
+| `GetBlockLocations` | Total number of getBlockLocations operations |
+| `FilesRenamed` | Total number of rename **operations** (NOT number of files/dirs renamed) |
+| `GetListingOps` | Total number of directory listing operations |
+| `DeleteFileOps` | Total number of delete operations |
+| `FilesDeleted` | Total number of files and directories deleted by delete or rename operations |
+| `FileInfoOps` | Total number of getFileInfo and getLinkFileInfo operations |
+| `AddBlockOps` | Total number of addBlock operations succeeded |
+| `GetAdditionalDatanodeOps` | Total number of getAdditionalDatanode operations |
+| `CreateSymlinkOps` | Total number of createSymlink operations |
+| `GetLinkTargetOps` | Total number of getLinkTarget operations |
+| `FilesInGetListingOps` | Total number of files and directories listed by directory listing operations |
+| `AllowSnapshotOps` | Total number of allowSnapshot operations |
+| `DisallowSnapshotOps` | Total number of disallowSnapshot operations |
+| `CreateSnapshotOps` | Total number of createSnapshot operations |
+| `DeleteSnapshotOps` | Total number of deleteSnapshot operations |
+| `RenameSnapshotOps` | Total number of renameSnapshot operations |
+| `ListSnapshottableDirOps` | Total number of snapshottableDirectoryStatus operations |
+| `SnapshotDiffReportOps` | Total number of getSnapshotDiffReport operations |
+| `TransactionsNumOps` | Total number of Journal transactions |
+| `TransactionsAvgTime` | Average time of Journal transactions in milliseconds |
+| `SyncsNumOps` | Total number of Journal syncs |
+| `SyncsAvgTime` | Average time of Journal syncs in milliseconds |
+| `TransactionsBatchedInSync` | Total number of Journal transactions batched in sync |
+| `BlockReportNumOps` | Total number of block reports processed from DataNodes |
+| `BlockReportAvgTime` | Average time of processing block reports in milliseconds |
+| `CacheReportNumOps` | Total number of cache reports processed from DataNodes |
+| `CacheReportAvgTime` | Average time of processing cache reports in milliseconds |
+| `SafeModeTime` | The interval in milliseconds between FSNamesystem startup and the last time safe mode was left.  (This is sometimes not equal to the time spent in safe mode, see [HDFS-5156](https://issues.apache.org/jira/browse/HDFS-5156)) |
+| `FsImageLoadTime` | Time loading FS Image at startup in milliseconds |
+| `GetEditNumOps` | Total number of edits downloads from SecondaryNameNode |
+| `GetEditAvgTime` | Average edits download time in milliseconds |
+| `GetImageNumOps` | Total number of fsimage downloads from SecondaryNameNode |
+| `GetImageAvgTime` | Average fsimage download time in milliseconds |
+| `PutImageNumOps` | Total number of fsimage uploads to SecondaryNameNode |
+| `PutImageAvgTime` | Average fsimage upload time in milliseconds |
+
+FSNamesystem
+------------
+
+Each metrics record contains tags such as HAState and Hostname as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `MissingBlocks` | Current number of missing blocks |
+| `ExpiredHeartbeats` | Total number of expired heartbeats |
+| `TransactionsSinceLastCheckpoint` | Total number of transactions since last checkpoint |
+| `TransactionsSinceLastLogRoll` | Total number of transactions since last edit log roll |
+| `LastWrittenTransactionId` | Last transaction ID written to the edit log |
+| `LastCheckpointTime` | Time in milliseconds since epoch of last checkpoint |
+| `CapacityTotal` | Current raw capacity of DataNodes in bytes |
+| `CapacityTotalGB` | Current raw capacity of DataNodes in GB |
+| `CapacityUsed` | Current used capacity across all DataNodes in bytes |
+| `CapacityUsedGB` | Current used capacity across all DataNodes in GB |
+| `CapacityRemaining` | Current remaining capacity in bytes |
+| `CapacityRemainingGB` | Current remaining capacity in GB |
+| `CapacityUsedNonDFS` | Current space used by DataNodes for non DFS purposes in bytes |
+| `TotalLoad` | Current number of connections |
+| `SnapshottableDirectories` | Current number of snapshottable directories |
+| `Snapshots` | Current number of snapshots |
+| `BlocksTotal` | Current number of allocated blocks in the system |
+| `FilesTotal` | Current number of files and directories |
+| `PendingReplicationBlocks` | Current number of blocks pending to be replicated |
+| `UnderReplicatedBlocks` | Current number of blocks under replicated |
+| `CorruptBlocks` | Current number of blocks with corrupt replicas. |
+| `ScheduledReplicationBlocks` | Current number of blocks scheduled for replications |
+| `PendingDeletionBlocks` | Current number of blocks pending deletion |
+| `ExcessBlocks` | Current number of excess blocks |
+| `PostponedMisreplicatedBlocks` | (HA-only) Current number of blocks postponed to replicate |
+| `PendingDataNodeMessageCount` | (HA-only) Current number of pending block-related messages for later processing in the standby NameNode |
+| `MillisSinceLastLoadedEdits` | (HA-only) Time in milliseconds since the standby NameNode last loaded the edit log. Set to 0 on the active NameNode |
+| `BlockCapacity` | Current block capacity of the system |
+| `StaleDataNodes` | Current number of DataNodes marked stale due to delayed heartbeat |
+| `TotalFiles` | Current number of files and directories (same as FilesTotal) |
+
+JournalNode
+-----------
+
+The server-side metrics for a journal from the JournalNode's perspective. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `Syncs60sNumOps` | Number of sync operations (1 minute granularity) |
+| `Syncs60s50thPercentileLatencyMicros` | The 50th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs60s75thPercentileLatencyMicros` | The 75th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs60s90thPercentileLatencyMicros` | The 90th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs60s95thPercentileLatencyMicros` | The 95th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs60s99thPercentileLatencyMicros` | The 99th percentile of sync latency in microseconds (1 minute granularity) |
+| `Syncs300sNumOps` | Number of sync operations (5 minutes granularity) |
+| `Syncs300s50thPercentileLatencyMicros` | The 50th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs300s75thPercentileLatencyMicros` | The 75th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs300s90thPercentileLatencyMicros` | The 90th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs300s95thPercentileLatencyMicros` | The 95th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs300s99thPercentileLatencyMicros` | The 99th percentile of sync latency in microseconds (5 minutes granularity) |
+| `Syncs3600sNumOps` | Number of sync operations (1 hour granularity) |
+| `Syncs3600s50thPercentileLatencyMicros` | The 50th percentile of sync latency in microseconds (1 hour granularity) |
+| `Syncs3600s75thPercentileLatencyMicros` | The 75th percentile of sync latency in microseconds (1 hour granularity) |
+| `Syncs3600s90thPercentileLatencyMicros` | The 90th percentile of sync latency in microseconds (1 hour granularity) |
+| `Syncs3600s95thPercentileLatencyMicros` | The 95th percentile of sync latency in microseconds (1 hour granularity) |
+| `Syncs3600s99thPercentileLatencyMicros` | The 99th percentile of sync latency in microseconds (1 hour granularity) |
+| `BatchesWritten` | Total number of batches written since startup |
+| `TxnsWritten` | Total number of transactions written since startup |
+| `BytesWritten` | Total number of bytes written since startup |
+| `BatchesWrittenWhileLagging` | Total number of batches written where this node was lagging |
+| `LastWriterEpoch` | Current writer's epoch number |
+| `CurrentLagTxns` | The number of transactions that this JournalNode is lagging |
+| `LastWrittenTxId` | The highest transaction id stored on this JournalNode |
+| `LastPromisedEpoch` | The last epoch number which this node has promised not to accept any lower epoch, or 0 if no promises have been made |
+
+datanode
+--------
+
+Each metrics record contains tags such as SessionId and Hostname as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `BytesWritten` | Total number of bytes written to DataNode |
+| `BytesRead` | Total number of bytes read from DataNode |
+| `BlocksWritten` | Total number of blocks written to DataNode |
+| `BlocksRead` | Total number of blocks read from DataNode |
+| `BlocksReplicated` | Total number of blocks replicated |
+| `BlocksRemoved` | Total number of blocks removed |
+| `BlocksVerified` | Total number of blocks verified |
+| `BlockVerificationFailures` | Total number of block verification failures |
+| `BlocksCached` | Total number of blocks cached |
+| `BlocksUncached` | Total number of blocks uncached |
+| `ReadsFromLocalClient` | Total number of read operations from local client |
+| `ReadsFromRemoteClient` | Total number of read operations from remote client |
+| `WritesFromLocalClient` | Total number of write operations from local client |
+| `WritesFromRemoteClient` | Total number of write operations from remote client |
+| `BlocksGetLocalPathInfo` | Total number of operations to get local path names of blocks |
+| `FsyncCount` | Total number of fsync operations |
+| `VolumeFailures` | Total number of volume failures that have occurred |
+| `ReadBlockOpNumOps` | Total number of read operations |
+| `ReadBlockOpAvgTime` | Average time of read operations in milliseconds |
+| `WriteBlockOpNumOps` | Total number of write operations |
+| `WriteBlockOpAvgTime` | Average time of write operations in milliseconds |
+| `BlockChecksumOpNumOps` | Total number of blockChecksum operations |
+| `BlockChecksumOpAvgTime` | Average time of blockChecksum operations in milliseconds |
+| `CopyBlockOpNumOps` | Total number of block copy operations |
+| `CopyBlockOpAvgTime` | Average time of block copy operations in milliseconds |
+| `ReplaceBlockOpNumOps` | Total number of block replace operations |
+| `ReplaceBlockOpAvgTime` | Average time of block replace operations in milliseconds |
+| `HeartbeatsNumOps` | Total number of heartbeats |
+| `HeartbeatsAvgTime` | Average heartbeat time in milliseconds |
+| `BlockReportsNumOps` | Total number of block report operations |
+| `BlockReportsAvgTime` | Average time of block report operations in milliseconds |
+| `CacheReportsNumOps` | Total number of cache report operations |
+| `CacheReportsAvgTime` | Average time of cache report operations in milliseconds |
+| `PacketAckRoundTripTimeNanosNumOps` | Total number of ack round trips |
+| `PacketAckRoundTripTimeNanosAvgTime` | Average time from ack send to receive minus the downstream ack time in nanoseconds |
+| `FlushNanosNumOps` | Total number of flushes |
+| `FlushNanosAvgTime` | Average flush time in nanoseconds |
+| `FsyncNanosNumOps` | Total number of fsync operations |
+| `FsyncNanosAvgTime` | Average fsync time in nanoseconds |
+| `SendDataPacketBlockedOnNetworkNanosNumOps` | Total number of packets sent |
+| `SendDataPacketBlockedOnNetworkNanosAvgTime` | Average time in nanoseconds that sending packets was blocked on the network |
+| `SendDataPacketTransferNanosNumOps` | Total number of packets sent |
+| `SendDataPacketTransferNanosAvgTime` | Average transfer time of sent packets in nanoseconds |
+
+yarn context
+============
+
+ClusterMetrics
+--------------
+
+ClusterMetrics shows the metrics of the YARN cluster from the ResourceManager's perspective. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `NumActiveNMs` | Current number of active NodeManagers |
+| `NumDecommissionedNMs` | Current number of decommissioned NodeManagers |
+| `NumLostNMs` | Current number of NodeManagers lost because they did not send heartbeats |
+| `NumUnhealthyNMs` | Current number of unhealthy NodeManagers |
+| `NumRebootedNMs` | Current number of rebooted NodeManagers |
+
+QueueMetrics
+------------
+
+QueueMetrics shows an application queue from the ResourceManager's perspective. Each metrics record shows the statistics of each queue, and contains tags such as queue name and Hostname as additional information along with metrics.
+
+The `running_`*num* metrics such as `running_0` bucket running applications by elapsed time. You can set the property `yarn.resourcemanager.metrics.runtime.buckets` in yarn-site.xml to change the buckets. The default value is `60,300,1440`.
+
+| Name | Description |
+|:---- |:---- |
+| `running_0` | Current number of running applications whose elapsed time is less than 60 minutes |
+| `running_60` | Current number of running applications whose elapsed time is between 60 and 300 minutes |
+| `running_300` | Current number of running applications whose elapsed time is between 300 and 1440 minutes |
+| `running_1440` | Current number of running applications whose elapsed time is more than 1440 minutes |
+| `AppsSubmitted` | Total number of submitted applications |
+| `AppsRunning` | Current number of running applications |
+| `AppsPending` | Current number of applications that have not yet been assigned any containers |
+| `AppsCompleted` | Total number of completed applications |
+| `AppsKilled` | Total number of killed applications |
+| `AppsFailed` | Total number of failed applications |
+| `AllocatedMB` | Current allocated memory in MB |
+| `AllocatedVCores` | Current allocated CPU in virtual cores |
+| `AllocatedContainers` | Current number of allocated containers |
+| `AggregateContainersAllocated` | Total number of allocated containers |
+| `AggregateContainersReleased` | Total number of released containers |
+| `AvailableMB` | Current available memory in MB |
+| `AvailableVCores` | Current available CPU in virtual cores |
+| `PendingMB` | Current pending memory resource requests in MB that are not yet fulfilled by the scheduler |
+| `PendingVCores` | Current pending CPU allocation requests in virtual cores that are not yet fulfilled by the scheduler |
+| `PendingContainers` | Current pending resource requests that are not yet fulfilled by the scheduler |
+| `ReservedMB` | Current reserved memory in MB |
+| `ReservedVCores` | Current reserved CPU in virtual cores |
+| `ReservedContainers` | Current number of reserved containers |
+| `ActiveUsers` | Current number of active users |
+| `ActiveApplications` | Current number of active applications |
+| `FairShareMB` | (FairScheduler only) Current fair share of memory in MB |
+| `FairShareVCores` | (FairScheduler only) Current fair share of CPU in virtual cores |
+| `MinShareMB` | (FairScheduler only) Minimum share of memory in MB |
+| `MinShareVCores` | (FairScheduler only) Minimum share of CPU in virtual cores |
+| `MaxShareMB` | (FairScheduler only) Maximum share of memory in MB |
+| `MaxShareVCores` | (FairScheduler only) Maximum share of CPU in virtual cores |
+
+NodeManagerMetrics
+------------------
+
+NodeManagerMetrics shows the statistics of the containers in the node. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `containersLaunched` | Total number of launched containers |
+| `containersCompleted` | Total number of successfully completed containers |
+| `containersFailed` | Total number of failed containers |
+| `containersKilled` | Total number of killed containers |
+| `containersIniting` | Current number of initializing containers |
+| `containersRunning` | Current number of running containers |
+| `allocatedContainers` | Current number of allocated containers |
+| `allocatedGB` | Current allocated memory in GB |
+| `availableGB` | Current available memory in GB |
+
+ugi context
+===========
+
+UgiMetrics
+----------
+
+UgiMetrics is related to user and group information. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `LoginSuccessNumOps` | Total number of successful Kerberos logins |
+| `LoginSuccessAvgTime` | Average time for successful Kerberos logins in milliseconds |
+| `LoginFailureNumOps` | Total number of failed Kerberos logins |
+| `LoginFailureAvgTime` | Average time for failed Kerberos logins in milliseconds |
+| `getGroupsNumOps` | Total number of group resolutions |
+| `getGroupsAvgTime` | Average time for group resolution in milliseconds |
+| `getGroups`*num*`sNumOps` | Total number of group resolutions (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s50thPercentileLatency` | Shows the 50th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s75thPercentileLatency` | Shows the 75th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s90thPercentileLatency` | Shows the 90th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s95thPercentileLatency` | Shows the 95th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+| `getGroups`*num*`s99thPercentileLatency` | Shows the 99th percentile of group resolution time in milliseconds (*num* seconds granularity). *num* is specified by `hadoop.user.group.metrics.percentiles.intervals`. |
+
+metricssystem context
+=====================
+
+MetricsSystem
+-------------
+
+MetricsSystem shows the statistics for metrics snapshot and publish operations. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `NumActiveSources` | Current number of active metrics sources |
+| `NumAllSources` | Total number of metrics sources |
+| `NumActiveSinks` | Current number of active sinks |
+| `NumAllSinks` | Total number of sinks  (BUT usually less than `NumActiveSinks`, see [HADOOP-9946](https://issues.apache.org/jira/browse/HADOOP-9946)) |
+| `SnapshotNumOps` | Total number of operations to snapshot statistics from a metrics source |
+| `SnapshotAvgTime` | Average time in milliseconds to snapshot statistics from a metrics source |
+| `PublishNumOps` | Total number of operations to publish statistics to a sink |
+| `PublishAvgTime` | Average time in milliseconds to publish statistics to a sink |
+| `DroppedPubAll` | Total number of dropped publishes |
+| `Sink_`*instance*`NumOps` | Total number of sink operations for the *instance* |
+| `Sink_`*instance*`AvgTime` | Average time in milliseconds of sink operations for the *instance* |
+| `Sink_`*instance*`Dropped` | Total number of dropped sink operations for the *instance* |
+| `Sink_`*instance*`Qsize` | Current queue length of sink operations  (BUT always set to 0 because nothing increments this metric, see [HADOOP-9941](https://issues.apache.org/jira/browse/HADOOP-9941)) |
+
+default context
+===============
+
+StartupProgress
+---------------
+
+StartupProgress metrics show the statistics of NameNode startup. Four metrics are exposed for each startup phase, based on its name. The startup *phase*s are `LoadingFsImage`, `LoadingEdits`, `SavingCheckpoint`, and `SafeMode`. Each metrics record contains Hostname tag as additional information along with metrics.
+
+| Name | Description |
+|:---- |:---- |
+| `ElapsedTime` | Total elapsed time in milliseconds |
+| `PercentComplete` | Current fraction of NameNode startup that has completed  (the maximum value is 1.0, not 100) |
+| *phase*`Count` | Total number of steps completed in the phase |
+| *phase*`ElapsedTime` | Total elapsed time in the phase in milliseconds |
+| *phase*`Total` | Total number of steps in the phase |
+| *phase*`PercentComplete` | Current fraction of the phase that has completed  (the maximum value is 1.0, not 100) |
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/343cffb0/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm b/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
new file mode 100644
index 0000000..5a2c70c
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm
@@ -0,0 +1,145 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Native Libraries Guide
+======================
+
+* [Native Libraries Guide](#Native_Libraries_Guide)
+    * [Overview](#Overview)
+    * [Native Hadoop Library](#Native_Hadoop_Library)
+    * [Usage](#Usage)
+    * [Components](#Components)
+    * [Supported Platforms](#Supported_Platforms)
+    * [Download](#Download)
+    * [Build](#Build)
+    * [Runtime](#Runtime)
+    * [Check](#Check)
+    * [Native Shared Libraries](#Native_Shared_Libraries)
+
+Overview
+--------
+
+This guide describes the native hadoop library and includes a small discussion about native shared libraries.
+
+Note: Depending on your environment, the term "native libraries" could refer to all \*.so's you need to compile; and, the term "native compression" could refer to all \*.so's you need to compile that are specifically related to compression. Currently, however, this document only addresses the native hadoop library (`libhadoop.so`). The document for libhdfs library (`libhdfs.so`) is [here](../hadoop-hdfs/LibHdfs.html).
+
+Native Hadoop Library
+---------------------
+
+Hadoop has native implementations of certain components, either for performance reasons or because Java implementations are not available. These components are available in a single, dynamically-linked native library called the native hadoop library. On the \*nix platforms the library is named `libhadoop.so`.
+
+Usage
+-----
+
+It is fairly easy to use the native hadoop library:
+
+1.  Review the components.
+2.  Review the supported platforms.
+3.  Either download a hadoop release, which will include a pre-built version of the native hadoop library, or build your own version of the native hadoop library. Whether you download or build, the name for the library is the same: libhadoop.so
+4.  Install the compression codec development packages (\>zlib-1.2, \>gzip-1.2):
+    * If you download the library, install one or more development packages - whichever compression codecs you want to use with your deployment.
+    * If you build the library, it is mandatory to install both development packages.
+5.  Check the runtime log files.
+
+Components
+----------
+
+The native hadoop library includes various components:
+
+* Compression Codecs (bzip2, lz4, snappy, zlib)
+* Native IO utilities for [HDFS Short-Circuit Local Reads](../hadoop-hdfs/ShortCircuitLocalReads.html) and [Centralized Cache Management in HDFS](../hadoop-hdfs/CentralizedCacheManagement.html)
+* CRC32 checksum implementation
+
+Supported Platforms
+-------------------
+
+The native hadoop library is supported on \*nix platforms only. The library does not work with Cygwin or the Mac OS X platform.
+
+The native hadoop library is mainly used on the GNU/Linux platform and has been tested on these distributions:
+
+* RHEL4/Fedora
+* Ubuntu
+* Gentoo
+
+On all of the above distributions a 32/64 bit native hadoop library will work with the respective 32/64 bit jvm.
+
+Download
+--------
+
+The pre-built 32-bit i386-Linux native hadoop library is available as part of the hadoop distribution and is located in the `lib/native` directory. You can download the hadoop distribution from Hadoop Common Releases.
+
+Be sure to install the zlib and/or gzip development packages - whichever compression codecs you want to use with your deployment.
+
+Build
+-----
+
+The native hadoop library is written in ANSI C and is built using the GNU autotools-chain (autoconf, autoheader, automake, autoscan, libtool). This means it should be straightforward to build the library on any platform with a standards-compliant C compiler and the GNU autotools-chain (see the supported platforms).
+
+The packages you need to install on the target platform are:
+
+* C compiler (e.g. GNU C Compiler)
+* GNU Autotools chain: autoconf, automake, libtool
+* zlib development package (stable version \>= 1.2.0)
+* openssl development package (e.g. libssl-dev)
+
+Once you have installed the prerequisite packages, use the standard hadoop pom.xml file and pass along the native flag to build the native hadoop library:
+
+       $ mvn package -Pdist,native -DskipTests -Dtar
+
+You should see the newly-built library in:
+
+       $ hadoop-dist/target/hadoop-${project.version}/lib/native
+
+Please note the following:
+
+* It is mandatory to install both the zlib and gzip development packages on the target platform in order to build the native hadoop library; however, for deployment it is sufficient to install just one package if you wish to use only one codec.
+* It is necessary to have the correct 32/64 libraries for zlib, depending on the 32/64 bit jvm for the target platform, in order to build and deploy the native hadoop library.
+
+Runtime
+-------
+
+The bin/hadoop script ensures that the native hadoop library is on the library path via the system property: `-Djava.library.path=<path>`
+
+During runtime, check the hadoop log files for your MapReduce tasks.
+
+* If everything is all right, then: `DEBUG util.NativeCodeLoader - Trying to load the custom-built native-hadoop library...` `INFO util.NativeCodeLoader - Loaded the native-hadoop library`
+* If something goes wrong, then: `INFO util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable`
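+
+If YARN log aggregation is enabled, one hedged way to scan the task logs for these messages from the command line is shown below; the application id is a placeholder.
+
+```bash
+# Sketch: assumes log aggregation is enabled; replace the application id with a real one.
+yarn logs -applicationId application_1400000000000_0001 | grep NativeCodeLoader
+```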
+
+Check
+-----
+
+NativeLibraryChecker is a tool to check whether native libraries are loaded correctly. You can launch NativeLibraryChecker as follows:
+
+       $ hadoop checknative -a
+       14/12/06 01:30:45 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
+       14/12/06 01:30:45 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
+       Native library checking:
+       hadoop: true /home/ozawa/hadoop/lib/native/libhadoop.so.1.0.0
+       zlib:   true /lib/x86_64-linux-gnu/libz.so.1
+       snappy: true /usr/lib/libsnappy.so.1
+       lz4:    true revision:99
+       bzip2:  false
+
+Native Shared Libraries
+-----------------------
+
+You can load any native shared library using DistributedCache for distributing and symlinking the library files.
+
+This example shows you how to distribute a shared library, mylib.so, and load it from a MapReduce task.
+
+1.  First copy the library to the HDFS: `bin/hadoop fs -copyFromLocal mylib.so.1 /libraries/mylib.so.1`
+2.  The job launching program should contain the following: `DistributedCache.createSymlink(conf);` `DistributedCache.addCacheFile("hdfs://host:port/libraries/mylib.so.1#mylib.so", conf);`
+3.  The MapReduce task can contain: `System.loadLibrary("mylib.so");`
+
+Note: If you downloaded or built the native hadoop library, you don't need to use DistributedCache to make the library available to your MapReduce tasks.

http://git-wip-us.apache.org/repos/asf/hadoop/blob/343cffb0/hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md b/hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
new file mode 100644
index 0000000..41fcb37
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/RackAwareness.md
@@ -0,0 +1,104 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+* [Rack Awareness](#Rack_Awareness)
+    * [python Example](#python_Example)
+    * [bash Example](#bash_Example)
+
+Rack Awareness
+==============
+
+Hadoop components are rack-aware. For example, HDFS block placement will use rack awareness for fault tolerance by placing one block replica on a different rack. This provides data availability in the event of a network switch failure or partition within the cluster.
+
+Hadoop master daemons obtain the rack id of the cluster slaves by invoking either an external script or a java class, as specified by configuration files. Whether the java class or the external script is used for topology, the output must adhere to the java **org.apache.hadoop.net.DNSToSwitchMapping** interface. The interface expects a one-to-one correspondence to be maintained, with the topology information in the format of '/myrack/myhost', where '/' is the topology delimiter, 'myrack' is the rack identifier, and 'myhost' is the individual host. Assuming a single /24 subnet per rack, one could use the format of '/192.168.100.0/192.168.100.5' as a unique rack-host topology mapping.
+
+To use the java class for topology mapping, the class name is specified by the **topology.node.switch.mapping.impl** parameter in the configuration file. An example, NetworkTopology.java, is included with the hadoop distribution and can be customized by the Hadoop administrator. Using a Java class instead of an external script has a performance benefit in that Hadoop doesn't need to fork an external process when a new slave node registers itself.
+
+If an external script is used, it is specified with the **topology.script.file.name** parameter in the configuration files. Unlike the java class, the external topology script is not included with the Hadoop distribution and must be provided by the administrator. Hadoop sends multiple IP addresses in ARGV when forking the topology script. The number of IP addresses sent to the topology script per invocation is controlled with **net.topology.script.number.args** and defaults to 100. If **net.topology.script.number.args** were changed to 1, a topology script would be forked for each IP submitted by DataNodes and/or NodeManagers.
+
+If **topology.script.file.name** or **topology.node.switch.mapping.impl** is not set, the rack id '/default-rack' is returned for any passed IP address. While this behavior may appear acceptable, it can cause issues with HDFS block replication: the default behavior is to write one replica off rack, which is impossible when there is only a single rack named '/default-rack'.
+
+An additional configuration setting is **mapreduce.jobtracker.taskcache.levels**, which determines the number of levels (in the network topology) of caches MapReduce will use. For example, with the default value of 2, two levels of caches are constructed: one for hosts (host -\> task mapping) and another for racks (rack -\> task mapping), giving us our one-to-one mapping of '/myrack/myhost'.
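+
+Whichever approach is used, it is worth exercising the topology mapping by hand before wiring it into Hadoop. A minimal sketch, assuming an external script installed at the hypothetical path /etc/hadoop/conf/topology.sh and the /24-per-rack convention used in the examples below:
+
+```bash
+# Sketch: the script path and IPs are placeholders; expect exactly one rack line per input IP.
+bash /etc/hadoop/conf/topology.sh 192.168.100.5 192.168.100.6
+# /192.168.100.0
+# /192.168.100.0
+```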
+
+python Example
+--------------
+```python
+#!/usr/bin/python
+# this script makes assumptions about the physical environment.
+#  1) each rack is its own layer 3 network with a /24 subnet, which
+#     could be typical where each rack has its own switch with uplinks
+#     to a central core router.
+#
+#             +-----------+
+#             |core router|
+#             +-----------+
+#            /             \
+#   +-----------+        +-----------+
+#   |rack switch|        |rack switch|
+#   +-----------+        +-----------+
+#   | data node |        | data node |
+#   +-----------+        +-----------+
+#   | data node |        | data node |
+#   +-----------+        +-----------+
+#
+# 2) the topology script gets a list of IP's as input, calculates each network address, and prints '/network_address' for each (Hadoop combines this rack id with the host to form the '/network_address/ip' mapping).
+
+import netaddr
+import sys
+sys.argv.pop(0)                                                  # discard name of topology script from argv list as we just want IP addresses
+
+netmask = '255.255.255.0'                                        # set netmask to what's being used in your environment.  The example uses a /24
+
+for ip in sys.argv:                                              # loop over list of datanode IP's
+    address = '{0}/{1}'.format(ip, netmask)                      # format address string so it looks like 'ip/netmask' to make netaddr work
+    try:
+        network_address = netaddr.IPNetwork(address).network     # calculate and print network address
+        print "/{0}".format(network_address)
+    except:
+        print "/rack-unknown"                                    # print catch-all value if unable to calculate network address
+```
+
+bash Example
+------------
+
+```bash
+#!/bin/bash
+# Here's a bash example to show just how simple these scripts can be
+# Assuming we have flat network with everything on a single switch, we can fake a rack topology.
+# This could occur in a lab environment where we have limited nodes, like 2-8 physical machines on an unmanaged switch.
+# This may also apply to multiple virtual machines running on the same physical hardware.
+# What matters isn't the number of machines, but that we are trying to fake a network topology when there isn't one.
+#
+#       +----------+    +--------+
+#       |jobtracker|    |datanode|
+#       +----------+    +--------+
+#              \        /
+#  +--------+  +--------+  +--------+
+#  |datanode|--| switch |--|datanode|
+#  +--------+  +--------+  +--------+
+#              /        \
+#       +--------+    +--------+
+#       |datanode|    |namenode|
+#       +--------+    +--------+
+#
+# With this network topology, we are treating each host as a rack.  This is being done by taking the last octet
+# in the datanode's IP and prepending it with the word '/rack-'.  The advantage for doing this is so HDFS
+# can create its 'off-rack' block copy.
+# 1) 'echo $@' will echo all ARGV values to xargs.
+# 2) 'xargs' will enforce that we print a single argv value per line
+# 3) 'awk' will split fields on dots and append the last field to the string '/rack-'. If awk
+#    fails to split on four dots, it will still print '/rack-' followed by the last field value.
+
+echo $@ | xargs -n 1 | awk -F '.' '{print "/rack-"$NF}'
+```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/hadoop/blob/343cffb0/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md b/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
new file mode 100644
index 0000000..0004d25
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/SecureMode.md
@@ -0,0 +1,377 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+* [Hadoop in Secure Mode](#Hadoop_in_Secure_Mode)
+    * [Introduction](#Introduction)
+    * [Authentication](#Authentication)
+        * [End User Accounts](#End_User_Accounts)
+        * [User Accounts for Hadoop Daemons](#User_Accounts_for_Hadoop_Daemons)
+        * [Kerberos principals for Hadoop Daemons and Users](#Kerberos_principals_for_Hadoop_Daemons_and_Users)
+        * [Mapping from Kerberos principal to OS user account](#Mapping_from_Kerberos_principal_to_OS_user_account)
+        * [Mapping from user to group](#Mapping_from_user_to_group)
+        * [Proxy user](#Proxy_user)
+        * [Secure DataNode](#Secure_DataNode)
+    * [Data confidentiality](#Data_confidentiality)
+        * [Data Encryption on RPC](#Data_Encryption_on_RPC)
+        * [Data Encryption on Block data transfer.](#Data_Encryption_on_Block_data_transfer.)
+        * [Data Encryption on HTTP](#Data_Encryption_on_HTTP)
+    * [Configuration](#Configuration)
+        * [Permissions for both HDFS and local fileSystem paths](#Permissions_for_both_HDFS_and_local_fileSystem_paths)
+        * [Common Configurations](#Common_Configurations)
+        * [NameNode](#NameNode)
+        * [Secondary NameNode](#Secondary_NameNode)
+        * [DataNode](#DataNode)
+        * [WebHDFS](#WebHDFS)
+        * [ResourceManager](#ResourceManager)
+        * [NodeManager](#NodeManager)
+        * [Configuration for WebAppProxy](#Configuration_for_WebAppProxy)
+        * [LinuxContainerExecutor](#LinuxContainerExecutor)
+        * [MapReduce JobHistory Server](#MapReduce_JobHistory_Server)
+
+Hadoop in Secure Mode
+=====================
+
+Introduction
+------------
+
+This document describes how to configure authentication for Hadoop in secure mode.
+
+By default Hadoop runs in non-secure mode in which no actual authentication is required. By configuring Hadoop to run in secure mode, each user and service needs to be authenticated by Kerberos in order to use Hadoop services.
+
+Security features of Hadoop consist of [authentication](#Authentication), [service level authorization](./ServiceLevelAuth.html), [authentication for Web consoles](./HttpAuthentication.html) and [data confidentiality](#Data_confidentiality).
+
+Authentication
+--------------
+
+### End User Accounts
+
+When service level authentication is turned on, end users using Hadoop in secure mode need to be authenticated by Kerberos. The simplest way to authenticate is to use the `kinit` command of Kerberos.
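+
+For example, an end user might obtain and inspect a ticket as follows; the principal and realm are placeholders for ones issued by your KDC.
+
+```bash
+# Placeholders: substitute a real principal and realm from your Kerberos realm.
+kinit alice@EXAMPLE.COM
+klist
+```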
+
+### User Accounts for Hadoop Daemons
+
+Ensure that HDFS and YARN daemons run as different Unix users, e.g. `hdfs` and `yarn`. Also, ensure that the MapReduce JobHistory server runs as a different user, such as `mapred`.
+
+It's recommended to have them share a Unix group, e.g. `hadoop`. See also "[Mapping from user to group](#Mapping_from_user_to_group)" for group management.
+
+| User:Group | Daemons |
+|:---- |:---- |
+| hdfs:hadoop | NameNode, Secondary NameNode, JournalNode, DataNode |
+| yarn:hadoop | ResourceManager, NodeManager |
+| mapred:hadoop | MapReduce JobHistory Server |
+
+### Kerberos principals for Hadoop Daemons and Users
+
+To run the Hadoop service daemons in secure mode, Kerberos principals are required. Each service reads authentication information saved in a keytab file with appropriate permissions.
+
+HTTP web consoles should be served by a principal different from that used for RPC.
+
+The subsections below show examples of credentials for Hadoop services.
+
+#### HDFS
+
+The NameNode keytab file, on the NameNode host, should look like the following:
+
+    $ klist -e -k -t /etc/security/keytab/nn.service.keytab
+    Keytab name: FILE:/etc/security/keytab/nn.service.keytab
+    KVNO Timestamp         Principal
+       4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+
+The Secondary NameNode keytab file, on that host, should look like the following:
+
+    $ klist -e -k -t /etc/security/keytab/sn.service.keytab
+    Keytab name: FILE:/etc/security/keytab/sn.service.keytab
+    KVNO Timestamp         Principal
+       4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+
+The DataNode keytab file, on each host, should look like the following:
+
+    $ klist -e -k -t /etc/security/keytab/dn.service.keytab
+    Keytab name: FILE:/etc/security/keytab/dn.service.keytab
+    KVNO Timestamp         Principal
+       4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+
+#### YARN
+
+The ResourceManager keytab file, on the ResourceManager host, should look like the following:
+
+    $ klist -e -k -t /etc/security/keytab/rm.service.keytab
+    Keytab name: FILE:/etc/security/keytab/rm.service.keytab
+    KVNO Timestamp         Principal
+       4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+
+The NodeManager keytab file, on each host, should look like the following:
+
+    $ klist -e -k -t /etc/security/keytab/nm.service.keytab
+    Keytab name: FILE:/etc/security/keytab/nm.service.keytab
+    KVNO Timestamp         Principal
+       4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+
+#### MapReduce JobHistory Server
+
+The MapReduce JobHistory Server keytab file, on that host, should look like the following:
+
+    $ klist -e -k -t /etc/security/keytab/jhs.service.keytab
+    Keytab name: FILE:/etc/security/keytab/jhs.service.keytab
+    KVNO Timestamp         Principal
+       4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
+       4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
+
+### Mapping from Kerberos principal to OS user account
+
+Hadoop maps Kerberos principals to OS user accounts using the rules specified by `hadoop.security.auth_to_local`, which works in the same way as the `auth_to_local` in [Kerberos configuration file (krb5.conf)](http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html). In addition, Hadoop `auth_to_local` mapping supports the **/L** flag that lowercases the returned name.
+
+By default, it picks the first component of the principal name as the user name if the realm matches the `default_realm` (usually defined in /etc/krb5.conf). For example, `host/full.qualified.domain.name@REALM.TLD` is mapped to `host` by the default rule.
+
+Custom rules can be tested using the `hadoop kerbname` command.  This command allows one to specify a principal and apply Hadoop's current auth_to_local ruleset.  The output is the identity that Hadoop will use.
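+
+For example, applying the ruleset to the host principal used elsewhere on this page (the exact output format may vary between releases):
+
+```bash
+# Apply the current hadoop.security.auth_to_local rules to a sample principal.
+hadoop kerbname host/full.qualified.domain.name@REALM.TLD
+```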
+
+### Mapping from user to group
+
+Though files on HDFS are associated with an owner and a group, Hadoop itself does not have its own definition of groups. Mapping from users to groups is done by the OS or LDAP.
+
+You can change the mapping by specifying the name of the mapping provider as the value of `hadoop.security.group.mapping`. See [HDFS Permissions Guide](../hadoop-hdfs/HdfsPermissionsGuide.html) for details.
+
+In practice, you need to manage an SSO environment using Kerberos with LDAP for Hadoop in secure mode.
+
+### Proxy user
+
+Some products, such as Apache Oozie, access Hadoop services on behalf of end users and need to be able to impersonate them. See [the documentation of proxy users](./Superusers.html) for details.
+
+### Secure DataNode
+
+Because the DataNode data transfer protocol does not use Hadoop's RPC framework, DataNodes must authenticate themselves by using privileged ports, which are specified by `dfs.datanode.address` and `dfs.datanode.http.address`. This authentication is based on the assumption that the attacker won't be able to get root privileges.
+
+When you execute the `hdfs datanode` command as root, the server process binds the privileged ports first, then drops privileges and runs as the user account specified by `HADOOP_SECURE_DN_USER`. This startup process uses jsvc, installed at `JSVC_HOME`. You must specify `HADOOP_SECURE_DN_USER` and `JSVC_HOME` as environment variables at startup (in hadoop-env.sh).
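+
+For example, the corresponding entries in hadoop-env.sh might look like the following sketch (the account name and the jsvc location are placeholders for your environment):
+
+    export HADOOP_SECURE_DN_USER=hdfs
+    export JSVC_HOME=/path/to/jsvc/home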
+
+As of version 2.6.0, SASL can be used to authenticate the data transfer protocol. In this configuration, it is no longer required for secured clusters to start the DataNode as root using jsvc and bind to privileged ports. To enable SASL on data transfer protocol, set `dfs.data.transfer.protection` in hdfs-site.xml, set a non-privileged port for `dfs.datanode.address`, set `dfs.http.policy` to *HTTPS\_ONLY* and make sure the `HADOOP_SECURE_DN_USER` environment variable is not defined. Note that it is not possible to use SASL on data transfer protocol if `dfs.datanode.address` is set to a privileged port. This is required for backwards-compatibility reasons.
+
+In order to migrate an existing cluster that used root authentication to start using SASL instead, first ensure that version 2.6.0 or later has been deployed to all cluster nodes as well as any external applications that need to connect to the cluster. Only versions 2.6.0 and later of the HDFS client can connect to a DataNode that uses SASL for authentication of data transfer protocol, so it is vital that all callers have the correct version before migrating. After version 2.6.0 or later has been deployed everywhere, update configuration of any external applications to enable SASL. If an HDFS client is enabled for SASL, then it can connect successfully to a DataNode running with either root authentication or SASL authentication. Changing configuration for all clients guarantees that subsequent configuration changes on DataNodes will not disrupt the applications. Finally, each individual DataNode can be migrated by changing its configuration and restarting. It is acceptable to have a mix of some DataNodes running with root authentication and some DataNodes running with SASL authentication temporarily during this migration period, because an HDFS client enabled for SASL can connect to both.
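+
+As a sketch, an hdfs-site.xml fragment enabling SASL on the data transfer protocol could look like the following (the port is a placeholder; any non-privileged port works, and `integrity` is one of the allowed protection values):
+
+    <property>
+      <name>dfs.data.transfer.protection</name>
+      <value>integrity</value>
+    </property>
+    <property>
+      <name>dfs.datanode.address</name>
+      <value>0.0.0.0:10019</value>
+    </property>
+    <property>
+      <name>dfs.http.policy</name>
+      <value>HTTPS_ONLY</value>
+    </property>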
+
+Data confidentiality
+--------------------
+
+### Data Encryption on RPC
+
+The data transferred between Hadoop services and clients over RPC can be encrypted. Setting `hadoop.rpc.protection` to `privacy` in core-site.xml activates data encryption.
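+
+For example, in core-site.xml:
+
+    <property>
+      <name>hadoop.rpc.protection</name>
+      <value>privacy</value>
+    </property>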
+
+### Data Encryption on Block Data Transfer
+
+You need to set `dfs.encrypt.data.transfer` to `true` in hdfs-site.xml in order to activate data encryption for the DataNode's data transfer protocol.
+
+Optionally, you may set `dfs.encrypt.data.transfer.algorithm` to either "3des" or "rc4" to choose the specific encryption algorithm. If unspecified, then the configured JCE default on the system is used, which is usually 3DES.
+
+Setting `dfs.encrypt.data.transfer.cipher.suites` to `AES/CTR/NoPadding` activates AES encryption. By default, this is unspecified, so AES is not used. When AES is used, the algorithm specified in `dfs.encrypt.data.transfer.algorithm` is still used during an initial key exchange. The AES key bit length can be configured by setting `dfs.encrypt.data.transfer.cipher.key.bitlength` to 128, 192 or 256. The default is 128.
+
+AES offers the greatest cryptographic strength and the best performance. At this time, 3DES and RC4 have been used more often in Hadoop clusters.
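+
+Putting the above together, an hdfs-site.xml sketch enabling data transfer encryption with AES might look like the following (256 is one of the allowed key lengths):
+
+    <property>
+      <name>dfs.encrypt.data.transfer</name>
+      <value>true</value>
+    </property>
+    <property>
+      <name>dfs.encrypt.data.transfer.cipher.suites</name>
+      <value>AES/CTR/NoPadding</value>
+    </property>
+    <property>
+      <name>dfs.encrypt.data.transfer.cipher.key.bitlength</name>
+      <value>256</value>
+    </property>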
+
+### Data Encryption on HTTP
+
+Data transfer between the web console and clients is protected by using SSL (HTTPS).
+
+Configuration
+-------------
+
+### Permissions for both HDFS and local filesystem paths
+
+The following table lists various paths on HDFS and local filesystems (on all nodes) and recommended permissions:
+
+| Filesystem | Path | User:Group | Permissions |
+|:---- |:---- |:---- |:---- |
+| local | `dfs.namenode.name.dir` | hdfs:hadoop | `drwx------` |
+| local | `dfs.datanode.data.dir` | hdfs:hadoop | `drwx------` |
+| local | $HADOOP\_LOG\_DIR | hdfs:hadoop | `drwxrwxr-x` |
+| local | $YARN\_LOG\_DIR | yarn:hadoop | `drwxrwxr-x` |
+| local | `yarn.nodemanager.local-dirs` | yarn:hadoop | `drwxr-xr-x` |
+| local | `yarn.nodemanager.log-dirs` | yarn:hadoop | `drwxr-xr-x` |
+| local | container-executor | root:hadoop | `--Sr-s--*` |
+| local | `conf/container-executor.cfg` | root:hadoop | `r-------*` |
+| hdfs | / | hdfs:hadoop | `drwxr-xr-x` |
+| hdfs | /tmp | hdfs:hadoop | `drwxrwxrwxt` |
+| hdfs | /user | hdfs:hadoop | `drwxr-xr-x` |
+| hdfs | `yarn.nodemanager.remote-app-log-dir` | yarn:hadoop | `drwxrwxrwxt` |
+| hdfs | `mapreduce.jobhistory.intermediate-done-dir` | mapred:hadoop | `drwxrwxrwxt` |
+| hdfs | `mapreduce.jobhistory.done-dir` | mapred:hadoop | `drwxr-x---` |
+
+### Common Configurations
+
+In order to turn on RPC authentication in Hadoop, set the value of the `hadoop.security.authentication` property to `kerberos`, and set the security-related settings listed below appropriately.
+
+The following properties should be in the `core-site.xml` of all the nodes in the cluster.
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `hadoop.security.authentication` | *kerberos* | `simple` : No authentication. (default)  `kerberos` : Enable authentication by Kerberos. |
+| `hadoop.security.authorization` | *true* | Enable [RPC service-level authorization](./ServiceLevelAuth.html). |
+| `hadoop.rpc.protection` | *authentication* | *authentication* : authentication only (default)  *integrity* : integrity check in addition to authentication  *privacy* : data encryption in addition to integrity |
+| `hadoop.security.auth_to_local` | `RULE:`*exp1* `RULE:`*exp2* *...* DEFAULT | The value is string containing new line characters. See [Kerberos documentation](http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html) for format for *exp*. |
+| `hadoop.proxyuser.`*superuser*`.hosts` | | Comma-separated list of hosts from which *superuser* is allowed to impersonate other users. `*` means wildcard. |
+| `hadoop.proxyuser.`*superuser*`.groups` | | Comma-separated list of groups to which the users impersonated by *superuser* must belong. `*` means wildcard. |
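+
+As an illustration of how the table above translates into configuration, a core-site.xml fragment might look like:
+
+    <property>
+      <name>hadoop.security.authentication</name>
+      <value>kerberos</value>
+    </property>
+    <property>
+      <name>hadoop.security.authorization</name>
+      <value>true</value>
+    </property>
+    <property>
+      <name>hadoop.rpc.protection</name>
+      <value>authentication</value>
+    </property>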
+
+### NameNode
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `dfs.block.access.token.enable` | *true* | Enable HDFS block access tokens for secure operations. |
+| `dfs.https.enable` | *true* | This value is deprecated. Use dfs.http.policy |
+| `dfs.http.policy` | *HTTP\_ONLY* or *HTTPS\_ONLY* or *HTTP\_AND\_HTTPS* | HTTPS\_ONLY turns off http access. This option takes precedence over the deprecated configuration dfs.https.enable and hadoop.ssl.enabled. If using SASL to authenticate data transfer protocol instead of running DataNode as root and using privileged ports, then this property must be set to *HTTPS\_ONLY* to guarantee authentication of HTTP servers. (See `dfs.data.transfer.protection`.) |
+| `dfs.namenode.https-address` | *nn\_host\_fqdn:50470* | |
+| `dfs.https.port` | *50470* | |
+| `dfs.namenode.keytab.file` | */etc/security/keytab/nn.service.keytab* | Kerberos keytab file for the NameNode. |
+| `dfs.namenode.kerberos.principal` | nn/\_HOST@REALM.TLD | Kerberos principal name for the NameNode. |
+| `dfs.namenode.kerberos.internal.spnego.principal` | HTTP/\_HOST@REALM.TLD | HTTP Kerberos principal name for the NameNode. |
+
+### Secondary NameNode
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `dfs.namenode.secondary.http-address` | *c\_nn\_host\_fqdn:50090* | |
+| `dfs.namenode.secondary.https-port` | *50470* | |
+| `dfs.secondary.namenode.keytab.file` | */etc/security/keytab/sn.service.keytab* | Kerberos keytab file for the Secondary NameNode. |
+| `dfs.secondary.namenode.kerberos.principal` | sn/\_HOST@REALM.TLD | Kerberos principal name for the Secondary NameNode. |
+| `dfs.secondary.namenode.kerberos.internal.spnego.principal` | HTTP/\_HOST@REALM.TLD | HTTP Kerberos principal name for the Secondary NameNode. |
+
+### DataNode
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `dfs.datanode.data.dir.perm` | 700 | |
+| `dfs.datanode.address` | *0.0.0.0:1004* | Secure DataNode must use privileged port in order to assure that the server was started securely. This means that the server must be started via jsvc. Alternatively, this must be set to a non-privileged port if using SASL to authenticate data transfer protocol. (See `dfs.data.transfer.protection`.) |
+| `dfs.datanode.http.address` | *0.0.0.0:1006* | Secure DataNode must use privileged port in order to assure that the server was started securely. This means that the server must be started via jsvc. |
+| `dfs.datanode.https.address` | *0.0.0.0:50470* | |
+| `dfs.datanode.keytab.file` | */etc/security/keytab/dn.service.keytab* | Kerberos keytab file for the DataNode. |
+| `dfs.datanode.kerberos.principal` | dn/\_HOST@REALM.TLD | Kerberos principal name for the DataNode. |
+| `dfs.encrypt.data.transfer` | *false* | set to `true` when using data encryption |
+| `dfs.encrypt.data.transfer.algorithm` | | optionally set to `3des` or `rc4` when using data encryption to control encryption algorithm |
+| `dfs.encrypt.data.transfer.cipher.suites` | | optionally set to `AES/CTR/NoPadding` to activate AES encryption when using data encryption |
+| `dfs.encrypt.data.transfer.cipher.key.bitlength` | | optionally set to `128`, `192` or `256` to control key bit length when using AES with data encryption |
+| `dfs.data.transfer.protection` | | *authentication* : authentication only  *integrity* : integrity check in addition to authentication  *privacy* : data encryption in addition to integrity This property is unspecified by default. Setting this property enables SASL for authentication of data transfer protocol. If this is enabled, then `dfs.datanode.address` must use a non-privileged port, `dfs.http.policy` must be set to *HTTPS\_ONLY* and the `HADOOP_SECURE_DN_USER` environment variable must be undefined when starting the DataNode process. |
+
+### WebHDFS
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `dfs.web.authentication.kerberos.principal` | http/\_HOST@REALM.TLD | Kerberos principal name for WebHDFS. |
+| `dfs.web.authentication.kerberos.keytab` | */etc/security/keytab/http.service.keytab* | Kerberos keytab file for WebHDFS. |
+
+### ResourceManager
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `yarn.resourcemanager.keytab` | */etc/security/keytab/rm.service.keytab* | Kerberos keytab file for the ResourceManager. |
+| `yarn.resourcemanager.principal` | rm/\_HOST@REALM.TLD | Kerberos principal name for the ResourceManager. |
+
+### NodeManager
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `yarn.nodemanager.keytab` | */etc/security/keytab/nm.service.keytab* | Kerberos keytab file for the NodeManager. |
+| `yarn.nodemanager.principal` | nm/\_HOST@REALM.TLD | Kerberos principal name for the NodeManager. |
+| `yarn.nodemanager.container-executor.class` | `org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor` | Use LinuxContainerExecutor. |
+| `yarn.nodemanager.linux-container-executor.group` | *hadoop* | Unix group of the NodeManager. |
+| `yarn.nodemanager.linux-container-executor.path` | */path/to/bin/container-executor* | The path to the executable of Linux container executor. |
+
+### Configuration for WebAppProxy
+
+The `WebAppProxy` provides a proxy between the web applications exported by an application and an end user. If security is enabled it will warn users before accessing a potentially unsafe web application. Authentication and authorization using the proxy is handled just like any other privileged web application.
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `yarn.web-proxy.address` | `WebAppProxy` *host:port* for proxy to AM web apps. | If this is the same as `yarn.resourcemanager.webapp.address` or is not defined, then the `ResourceManager` will run the proxy; otherwise a standalone proxy server will need to be launched. |
+| `yarn.web-proxy.keytab` | */etc/security/keytab/web-app.service.keytab* | Kerberos keytab file for the WebAppProxy. |
+| `yarn.web-proxy.principal` | wap/\_HOST@REALM.TLD | Kerberos principal name for the WebAppProxy. |
+
+### LinuxContainerExecutor
+
+A `ContainerExecutor` is used by the YARN framework to define how a *container* is launched and controlled.
+
+The following container executors are available in Hadoop YARN:
+
+| ContainerExecutor | Description |
+|:---- |:---- |
+| `DefaultContainerExecutor` | The default executor which YARN uses to manage container execution. The container process has the same Unix user as the NodeManager. |
+| `LinuxContainerExecutor` | Supported only on GNU/Linux, this executor runs the containers as either the YARN user who submitted the application (when full security is enabled) or as a dedicated user (defaults to nobody) when full security is not enabled. When full security is enabled, this executor requires all user accounts to be created on the cluster nodes where the containers are launched. It uses a *setuid* executable that is included in the Hadoop distribution. The NodeManager uses this executable to launch and kill containers. The setuid executable switches to the user who has submitted the application and launches or kills the containers. For maximum security, this executor sets up restricted permissions and user/group ownership of local files and directories used by the containers such as the shared objects, jars, intermediate files, log files etc. Particularly note that, because of this, except the application owner and NodeManager, no other user can access any of the local files/directories including those localized as part of the distributed cache. |
+
+To build the LinuxContainerExecutor executable run:
+
+     $ mvn package -Dcontainer-executor.conf.dir=/etc/hadoop/
+
+The path passed in `-Dcontainer-executor.conf.dir` should be the path on the cluster nodes where a configuration file for the setuid executable should be located. The executable should be installed in $HADOOP\_YARN\_HOME/bin.
+
+The executable must have specific permissions: 6050 or `--Sr-s---` permissions user-owned by *root* (super-user) and group-owned by a special group (e.g. `hadoop`) of which the NodeManager Unix user is the group member and no ordinary application user is. If any application user belongs to this special group, security will be compromised. This special group name should be specified for the configuration property `yarn.nodemanager.linux-container-executor.group` in both `conf/yarn-site.xml` and `conf/container-executor.cfg`.
+
+For example, let's say that the NodeManager is run as user *yarn* who is part of the groups *users* and *hadoop*, either of them being the primary group. Assume also that *users* has both *yarn* and another user (application submitter) *alice* as its members, and *alice* does not belong to *hadoop*. Going by the above description, the setuid/setgid executable should be set 6050 or `--Sr-s---` with user-owner as *root* and group-owner as *hadoop*, which has *yarn* as its member (and not *users*, which has *alice* as a member besides *yarn*).
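+
+As a sketch, assuming the executable is installed at the path configured in `yarn.nodemanager.linux-container-executor.path`, the required ownership and permissions could be applied with:
+
+    $ chown root:hadoop /path/to/bin/container-executor
+    $ chmod 6050 /path/to/bin/container-executor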
+
+The LinuxContainerExecutor requires that the paths including and leading up to the directories specified in `yarn.nodemanager.local-dirs` and `yarn.nodemanager.log-dirs` be set to 755 permissions, as described above in the table on directory permissions.
+
+* `conf/container-executor.cfg`
+
+The executable requires a configuration file called `container-executor.cfg` to be present in the configuration directory passed to the mvn target mentioned above.
+
+The configuration file must be owned by the user running the NodeManager (user `yarn` in the above example), group-owned by anyone, and should have the permissions 0400 or `r--------`.
+
+The executable requires the following configuration items to be present in the `conf/container-executor.cfg` file. The items should be specified as simple key=value pairs, one per line; an example file is shown after the table:
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `yarn.nodemanager.linux-container-executor.group` | *hadoop* | Unix group of the NodeManager. The group owner of the *container-executor* binary should be this group. Should be same as the value with which the NodeManager is configured. This configuration is required for validating the secure access of the *container-executor* binary. |
+| `banned.users` | hdfs,yarn,mapred,bin | Banned users. |
+| `allowed.system.users` | foo,bar | Allowed system users. |
+| `min.user.id` | 1000 | Prevent other super-users. |
+
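+For instance, a complete `container-executor.cfg` using the values from the table above (the banned and allowed user lists are just examples) would be:
+
+    yarn.nodemanager.linux-container-executor.group=hadoop
+    banned.users=hdfs,yarn,mapred,bin
+    allowed.system.users=foo,bar
+    min.user.id=1000
+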
+To recap, here are the local file-system permissions required for the various paths related to the `LinuxContainerExecutor`:
+
+| Filesystem | Path | User:Group | Permissions |
+|:---- |:---- |:---- |:---- |
+| local | container-executor | root:hadoop | `--Sr-s--*` |
+| local | `conf/container-executor.cfg` | root:hadoop | `r-------*` |
+| local | `yarn.nodemanager.local-dirs` | yarn:hadoop | `drwxr-xr-x` |
+| local | `yarn.nodemanager.log-dirs` | yarn:hadoop | `drwxr-xr-x` |
+
+### MapReduce JobHistory Server
+
+| Parameter | Value | Notes |
+|:---- |:---- |:---- |
+| `mapreduce.jobhistory.address` | MapReduce JobHistory Server *host:port* | Default port is 10020. |
+| `mapreduce.jobhistory.keytab` | */etc/security/keytab/jhs.service.keytab* | Kerberos keytab file for the MapReduce JobHistory Server. |
+| `mapreduce.jobhistory.principal` | jhs/\_HOST@REALM.TLD | Kerberos principal name for the MapReduce JobHistory Server. |
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/343cffb0/hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md b/hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md
new file mode 100644
index 0000000..8b4a10f
--- /dev/null
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/ServiceLevelAuth.md
@@ -0,0 +1,144 @@
+<!---
+  Licensed under the Apache License, Version 2.0 (the "License");
+  you may not use this file except in compliance with the License.
+  You may obtain a copy of the License at
+
+   http://www.apache.org/licenses/LICENSE-2.0
+
+  Unless required by applicable law or agreed to in writing, software
+  distributed under the License is distributed on an "AS IS" BASIS,
+  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+  See the License for the specific language governing permissions and
+  limitations under the License. See accompanying LICENSE file.
+-->
+
+Service Level Authorization Guide
+=================================
+
+* [Service Level Authorization Guide](#Service_Level_Authorization_Guide)
+    * [Purpose](#Purpose)
+    * [Prerequisites](#Prerequisites)
+    * [Overview](#Overview)
+    * [Configuration](#Configuration)
+        * [Enable Service Level Authorization](#Enable_Service_Level_Authorization)
+        * [Hadoop Services and Configuration Properties](#Hadoop_Services_and_Configuration_Properties)
+        * [Access Control Lists](#Access_Control_Lists)
+        * [Refreshing Service Level Authorization Configuration](#Refreshing_Service_Level_Authorization_Configuration)
+        * [Examples](#Examples)
+
+Purpose
+-------
+
+This document describes how to configure and manage Service Level Authorization for Hadoop.
+
+Prerequisites
+-------------
+
+Make sure Hadoop is installed, configured and setup correctly. For more information see:
+
+* [Single Node Setup](./SingleCluster.html) for first-time users.
+* [Cluster Setup](./ClusterSetup.html) for large, distributed clusters.
+
+Overview
+--------
+
+Service Level Authorization is the initial authorization mechanism that ensures clients connecting to a particular Hadoop service have the necessary, pre-configured permissions and are authorized to access the given service. For example, a MapReduce cluster can use this mechanism to allow a configured list of users/groups to submit jobs.
+
+The `$HADOOP_CONF_DIR/hadoop-policy.xml` configuration file is used to define the access control lists for various Hadoop services.
+
+Service Level Authorization is performed well before other access control checks such as file-permission checks, access control on job queues, etc.
+
+Configuration
+-------------
+
+This section describes how to configure service-level authorization via the configuration file `$HADOOP_CONF_DIR/hadoop-policy.xml`.
+
+### Enable Service Level Authorization
+
+By default, service-level authorization is disabled for Hadoop. To enable it, set the configuration property `hadoop.security.authorization` to `true` in `$HADOOP_CONF_DIR/core-site.xml`.
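+
+For example, in `$HADOOP_CONF_DIR/core-site.xml`:
+
+    <property>
+      <name>hadoop.security.authorization</name>
+      <value>true</value>
+    </property>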
+
+### Hadoop Services and Configuration Properties
+
+This section lists the various Hadoop services and their configuration knobs:
+
+| Property | Service |
+|:---- |:---- |
+| security.client.protocol.acl | ACL for ClientProtocol, which is used by user code via the DistributedFileSystem. |
+| security.client.datanode.protocol.acl | ACL for ClientDatanodeProtocol, the client-to-datanode protocol for block recovery. |
+| security.datanode.protocol.acl | ACL for DatanodeProtocol, which is used by datanodes to communicate with the namenode. |
+| security.inter.datanode.protocol.acl | ACL for InterDatanodeProtocol, the inter-datanode protocol for updating generation timestamp. |
+| security.namenode.protocol.acl | ACL for NamenodeProtocol, the protocol used by the secondary namenode to communicate with the namenode. |
+| security.inter.tracker.protocol.acl | ACL for InterTrackerProtocol, used by the tasktrackers to communicate with the jobtracker. |
+| security.job.submission.protocol.acl | ACL for JobSubmissionProtocol, used by job clients to communicate with the jobtracker for job submission, querying job status etc. |
+| security.task.umbilical.protocol.acl | ACL for TaskUmbilicalProtocol, used by the map and reduce tasks to communicate with the parent tasktracker. |
+| security.refresh.policy.protocol.acl | ACL for RefreshAuthorizationPolicyProtocol, used by the dfsadmin and mradmin commands to refresh the security policy in-effect. |
+| security.ha.service.protocol.acl | ACL for HAService protocol used by HAAdmin to manage the active and stand-by states of namenode. |
+
+### Access Control Lists
+
+`$HADOOP_CONF_DIR/hadoop-policy.xml` defines an access control list for each Hadoop service. Every access control list has a simple format:
+
+The list of users and the list of groups are both comma-separated lists of names. The two lists are separated by a space.
+
+Example: `user1,user2 group1,group2`.
+
+Add a blank at the beginning of the line if only a list of groups is to be provided; equivalently, a comma-separated list of users followed by a space or nothing implies only the given set of users.
+
+A special value of `*` implies that all users are allowed to access the service.
+
+If the access control list is not defined for a service, the value of `security.service.authorization.default.acl` is applied. If `security.service.authorization.default.acl` is not defined, `*` is applied.
+
+* Blocked Access Control Lists
+
+    In some cases, it is required to specify a blocked access control list for a service. This specifies the list of users and groups who are not authorized to access the service. The format of the blocked access control list is the same as that of the access control list. The blocked access control list can be specified via `$HADOOP_CONF_DIR/hadoop-policy.xml`. The property name is derived by suffixing with ".blocked".
+
+    Example: The property name of the blocked access control list for `security.client.protocol.acl` will be `security.client.protocol.acl.blocked`
+
+    For a service, it is possible to specify both an access control list and a blocked access control list. A user is authorized to access the service if the user is in the access control list and not in the blocked access control list.
+
+    If the blocked access control list is not defined for a service, the value of `security.service.authorization.default.acl.blocked` is applied. If `security.service.authorization.default.acl.blocked` is not defined, an empty blocked access control list is applied.
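+
+    For example, blocking a hypothetical user `eve` and the members of a hypothetical group `contractors` from the client protocol could be expressed in `$HADOOP_CONF_DIR/hadoop-policy.xml` as follows (the names are placeholders):
+
+        <property>
+             <name>security.client.protocol.acl.blocked</name>
+             <value>eve contractors</value>
+        </property>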
+
+### Refreshing Service Level Authorization Configuration
+
+The service-level authorization configuration for the NameNode and JobTracker can be changed without restarting either of the Hadoop master daemons. The cluster administrator can change `$HADOOP_CONF_DIR/hadoop-policy.xml` on the master nodes and instruct the NameNode and JobTracker to reload their respective configurations via the `-refreshServiceAcl` switch to `dfsadmin` and `mradmin` commands respectively.
+
+Refresh the service-level authorization configuration for the NameNode:
+
+       $ bin/hadoop dfsadmin -refreshServiceAcl
+
+Refresh the service-level authorization configuration for the JobTracker:
+
+       $ bin/hadoop mradmin -refreshServiceAcl
+
+Of course, one can use the `security.refresh.policy.protocol.acl` property in `$HADOOP_CONF_DIR/hadoop-policy.xml` to restrict access to the ability to refresh the service-level authorization configuration to certain users/groups.
+
+* Access Control using lists of IP addresses, host names and IP ranges
+
+    Access to a service can be controlled based on the IP address of the client accessing the service. It is possible to restrict access to a service from a set of machines by specifying a list of IP addresses, host names and IP ranges. The property name for each service is derived from the corresponding ACL's property name. If the property name of the ACL is `security.client.protocol.acl`, the property name for the hosts list will be `security.client.protocol.hosts`.
+
+    If hosts list is not defined for a service, the value of `security.service.authorization.default.hosts` is applied. If `security.service.authorization.default.hosts` is not defined, `*` is applied.
+
+    It is possible to specify a blocked list of hosts. Only those machines which are in the hosts list, but not in the blocked hosts list will be granted access to the service. The property name is derived by suffixing with ".blocked".
+
+    Example: The property name of the blocked hosts list for `security.client.protocol.hosts` will be `security.client.protocol.hosts.blocked`
+
+    If the blocked hosts list is not defined for a service, the value of `security.service.authorization.default.hosts.blocked` is applied. If `security.service.authorization.default.hosts.blocked` is not defined, an empty blocked hosts list is applied.
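+
+    For example, restricting client protocol access to one gateway host and one subnet might look like the following sketch (the values are placeholders, assuming CIDR notation is accepted for ranges):
+
+        <property>
+             <name>security.client.protocol.hosts</name>
+             <value>gateway.example.com,192.168.1.0/24</value>
+        </property>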
+
+### Examples
+
+Allow only users `alice`, `bob` and users in the `mapreduce` group to submit jobs to the MapReduce cluster:
+
+    <property>
+         <name>security.job.submission.protocol.acl</name>
+         <value>alice,bob mapreduce</value>
+    </property>
+
+Allow only DataNodes running as the users who belong to the group datanodes to communicate with the NameNode:
+
+    <property>
+         <name>security.datanode.protocol.acl</name>
+         <value>datanodes</value>
+    </property>
+
+Allow any user to talk to the HDFS cluster as a DFSClient:
+
+    <property>
+         <name>security.client.protocol.acl</name>
+         <value>*</value>
+    </property>